activemq-dev mailing list archives

From Olivier Langlois <>
Subject RE: [jira] Commented: (AMQ-1739) ActiveMQ 5.1.0 runs out of file descriptors with lots of 'CLOSE_WAIT' sockets
Date Wed, 21 May 2008 21:55:30 GMT

CLOSE_WAIT is not a kernel tuning parameter. A TCP connection enters the CLOSE_WAIT state
when it receives a FIN segment from its peer. From that point on, the connection is half-closed.

i.e.: you will not receive any new data from your peer, but you can still send any amount of
data back to the peer. Hence, the socket will stay there as long as the application does not
call close() explicitly on the socket.

I'm not sure about this, but I think that is exactly what the state name means: the socket is
waiting for the application to call close() before going away.
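The half-closed behaviour described above can be demonstrated with a short sketch, a minimal illustration using Python's standard socket module over loopback (variable names are illustrative, not from ActiveMQ):

```python
import socket

# Set up a connected TCP pair over loopback.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
client = socket.create_connection(listener.getsockname())
server, _ = listener.accept()

# The client sends a FIN by shutting down its write side. The server's
# end of the connection now sits in CLOSE_WAIT until close() is called.
client.shutdown(socket.SHUT_WR)

# The server sees end-of-stream: recv() returns b''...
eof = server.recv(1024)

# ...but the half-closed connection still works in the other direction:
server.sendall(b"still sendable")
reply = client.recv(1024)

# Only an explicit close() takes the server socket out of CLOSE_WAIT.
server.close()
client.close()
listener.close()

print(eof, reply)  # b'' b'still sendable'
```

While the server socket stays open after the peer's FIN, a tool like lsof or netstat will report it in CLOSE_WAIT, which is what piles up when an application leaks accepted sockets.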


> -----Original Message-----
> From: Filip Hanik (JIRA) []
> Sent: May 21, 2008 17:41
> To:
> Subject: [jira] Commented: (AMQ-1739) ActiveMQ 5.1.0 runs out of file
> descriptors with lots of 'CLOSE_WAIT' sockets
>     [ 1739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=42937#action_42937 ]
> Filip Hanik commented on AMQ-1739:
> ----------------------------------
> CLOSE_WAIT is a kernel tuning parameter; how long the connection stays
> there depends on your OS.
> On Linux I don't even know if you can control it, but I know you can on
> Windows and Solaris.
> The problem with it being a kernel parameter is that it then affects the
> whole system, i.e. any program using TCP connections
> > ActiveMQ 5.1.0 runs out of file descriptors with lots of 'CLOSE_WAIT'
> sockets
> > -----------------------------------------------------------------------------
> >
> >                 Key: AMQ-1739
> >                 URL:
> >             Project: ActiveMQ
> >          Issue Type: Bug
> >          Components: Broker
> >    Affects Versions: 5.1.0
> >         Environment: We have a single broker with no special network
> setup. Our broker system has two single-core Opterons, 8GB of memory,
> plenty of I/O and runs a recent 64-bit Debian with a 2.6.21 kernel.
> > Java(TM) SE Runtime Environment (build 1.6.0_06-b02)
> > Java HotSpot(TM) 64-Bit Server VM (build 10.0-b22, mixed mode)
> > We left most of the activemq.xml configuration as-is and adjusted the
> start-up script to run with a 2GB heap and the parallel garbage collector,
> settings which were more or less needed for 5.0 and kept for 5.1 in the
> start-up script.
> >            Reporter: Arjen van der Meijden
> >            Assignee: Rob Davies
> >            Priority: Blocker
> >
> > We have no idea why or when, but within a few days after start-up,
> ActiveMQ suddenly runs out of file descriptors (we've raised the limit to
> 10240). According to lsof it has lots of sockets in CLOSE_WAIT
> when that happens. Once that had happened, it would recur within
> a few hours. This behavior did not happen with ActiveMQ 5.0.
> > We have five queues, all with only one consumer. All consumption and
> production is via the Stomp interface using PHP clients. Three of those
> queues get up to 50-100 messages/second in peak moments, while the
> consumers adjust their own consumption rate to the system's load (normally
> it's maxed at about 50-150/sec). So in high-load moments on the consumers,
> the queues can grow to a few thousand messages; normally the queues are
> emptied as soon as a message arrives. Those five consumers stay connected
> indefinitely.
> > The messages are all quite small (at most 1 KB or so) and come from 5
> web servers. For each web page request (about 2-3M/day) a connection is
> made to ActiveMQ via Stomp and at least one message is sent to ActiveMQ;
> for most requests two are sent, to the two most active queues. For every
> request a new connection is made and at most 4 Stomp messages are sent to
> ActiveMQ (connect, two messages, disconnect), since Apache+PHP does not
> allow useful reuse of sockets and similar resources.
> > So the connection rate is about the same as the highest message rate on
> any of the queues (so 50-100 connects/second).
> > When the high number of sockets in CLOSE_WAIT occurs, we manually
> disable the producers and the sockets disappear gradually. After that the
> number of sockets stays around 180-190 (mostly opened jars), but it seems
> to re-increase more easily than when ActiveMQ is restarted. I have not
> checked if anything special happens on the web servers or databases, since
> their producer and consumer implementations have not changed since 5.0.
> > What I did notice is that the memory consumption increases heavily prior
> to running out of descriptors, and the consumption re-increases way too
> fast compared to before 11:45:
> >
> consumption.png
> --
> This message is automatically generated by JIRA.
> -
> You can reply to this email to add a comment to the issue online.
