activemq-dev mailing list archives

From Olivier Langlois <>
Subject RE: [jira] Commented: (AMQ-1739) ActiveMQ 5.1.0 runs out of file descriptors with lots of 'CLOSE_WAIT' sockets
Date Wed, 21 May 2008 14:14:41 GMT
I do not see how using SO_LINGER would resolve the problem: SO_LINGER modifies the socket's
behavior after close() is called, but in the problem described in this issue, the sockets
are in the CLOSE_WAIT state precisely because close() is not called on them soon enough.
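To make the distinction concrete, here is a minimal sketch (hypothetical demo code, not ActiveMQ's transport implementation): SO_LINGER only governs what happens at the moment close() runs, while CLOSE_WAIT means the peer has sent its FIN and the local application has not yet called close() at all.

```java
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketException;

// Hypothetical demo, not ActiveMQ code: SO_LINGER is a close()-time option.
// A socket sits in CLOSE_WAIT when the *peer* has sent its FIN but the local
// application has not yet called close() -- no close()-time option can help
// a socket whose close() is never reached.
public class LingerDemo {

    // Enable SO_LINGER and read the value back; close() on this socket will
    // now block up to 'seconds' while unsent data drains.
    static int enableLinger(Socket s, int seconds) throws SocketException {
        s.setSoLinger(true, seconds);
        return s.getSoLinger(); // returns the timeout, or -1 if disabled
    }

    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0);
             Socket client = new Socket("127.0.0.1", server.getLocalPort());
             Socket accepted = server.accept()) {
            System.out.println("linger = " + enableLinger(client, 5));
            // If 'accepted' were never closed after the client's FIN,
            // netstat would show it stuck in CLOSE_WAIT regardless of
            // any SO_LINGER setting.
        }
    }
}
```

Setting a positive SO_LINGER may still change observable behavior on the *closing* side (an abortive RST instead of a lingering TIME_WAIT), which is presumably why it helped in the Windows port-exhaustion test, but it does nothing for the broker-side CLOSE_WAIT leak.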

Olivier Langlois
Programmeur C++
||||||||||||||| STREAMTHEWORLD
t. 1 866 448 4037 ext. 675
t. 1 514 448 4037 ext. 675
f. 1 514 807 1861

> -----Original Message-----
> From: Aaron Mulder (JIRA) []
> Sent: May 21, 2008 8:23
> To:
> Subject: [jira] Commented: (AMQ-1739) ActiveMQ 5.1.0 runs out of file
> descriptors with lots of 'CLOSE_WAIT' sockets
>     [
> 1739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-
> tabpanel&focusedCommentId=42918#action_42918 ]
> Aaron Mulder commented on AMQ-1739:
> -----------------------------------
> For what it's worth, we saw this (problems with lots of CLOSE_WAIT
> sockets) only when testing with client with a tight loop that opened and
> closed lots of connections (instead of pooling), and only on Windows
> (where it would fail unable to get an available port after 4000-5000
> connections).
> Another solution seemed to be to alter the Transport to set SO_LINGER to
> any positive value.
> > ActiveMQ 5.1.0 runs out of file descriptors with lots of 'CLOSE_WAIT'
> sockets
> > ------------------------------------------------------------------------
> -----
> >
> >                 Key: AMQ-1739
> >                 URL:
> >             Project: ActiveMQ
> >          Issue Type: Bug
> >          Components: Broker
> >    Affects Versions: 5.1.0
> >         Environment: We have a single broker with no special network-
> stuff. Our broker-system has two single core Opterons, 8GB of memory,
> plenty of I/O and runs a recent 64bit debian with 2.6.21 kernel.
> > Java(TM) SE Runtime Environment (build 1.6.0_06-b02)
> > Java HotSpot(TM) 64-Bit Server VM (build 10.0-b22, mixed mode)
> > We left most of the activemq.xml-configuration as-is and adjusted the
> start-up script to run with 2GB heap size and parallel garbage collector,
> which was more or less needed for 5.0 and left for 5.1 in the start-up
> script.
> >            Reporter: Arjen van der Meijden
> >            Assignee: Rob Davies
> >            Priority: Blocker
> >
> > We have no idea why or when, but within a few days after start-up,
> ActiveMQ suddenly runs out of file descriptors (we've raised the limit to
> 10240). According to lsof it has lots of sockets which are in CLOSE_WAIT
> when that happens. As soon as that happened once, it would re-occur within
> a few hours. This behavior did not happen with ActiveMQ 5.0.
> > We have five queues, all with only one consumer. All consumption and
> production is via the Stomp-interface using PHP-clients. Three of those
> queues get up to 50-100 messages/second in peak moments, while the
> consumers adjust their own consumption rate to the system's load (normally
> it's capped at about 50-150/sec). So in high-load moments on the consumers,
> the queues can grow to a few thousand messages; normally the queues are
> emptied as soon as a message arrives. Those five consumers stay connected
> indefinitely.
> > The messages are all quite small (at most 1 KB or so) and come from 5
> web servers. For each web page-request (about 2-3M/day) a connection is
> made to ActiveMQ via Stomp and at least one message is sent to ActiveMQ,
> for most requests two are sent to the two most active queues. For every
> request a new connection is made and at most 4 stomp-messages are sent to
> ActiveMQ (connect, two messages, disconnect), since apache+php does not
> allow useful reuse of sockets and similar resources.
> > So the connection-rate is about the same as the highest message rate on
> any of the queues (so 50-100connects/second).
> > When the high amount of sockets in CLOSE_WAIT occurs, we manually
> disable the producers and the sockets disappear gradually. After that the
> amount of sockets stays around 180-190 (mostly opened jars), but seems to
> re-increase more easily than when ActiveMQ is restarted. I have not
> checked if anything special happens on the web servers or databases, since
> their producer and consumer implementation has not changed since 5.0.
> > What I did notice is that the memory-consumption increases heavily prior
> to running out of descriptors, and the consumption re-increases way too
> fast compared to before 11:45:
> >
> (attachment: consumption.png)
> --
> This message is automatically generated by JIRA.
> -
> You can reply to this email to add a comment to the issue online.
