activemq-dev mailing list archives

From "Joel Poloney (JIRA)" <j...@apache.org>
Subject [jira] Commented: (AMQ-1739) ActiveMQ 5.1.0 runs out of file descriptors with lots of 'CLOSE_WAIT' sockets
Date Thu, 31 Jul 2008 20:14:00 GMT

    [ https://issues.apache.org/activemq/browse/AMQ-1739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=44657#action_44657 ]

Joel Poloney commented on AMQ-1739:
-----------------------------------

Arjen,

I have also been experiencing the same problems. I am running a near-identical setup to yours
with ActiveMQ 5.0.0. I switched to 5.1.0 temporarily, but it had so many more problems that
I had to revert.

I believe the problem is actually within the Stomp client itself. I've been reading up on
the CLOSE_WAIT socket issue in general, and it appears to be caused by the application code,
not the server. Basically, somewhere in your code the socket never gets closed, and a socket
can theoretically remain in the CLOSE_WAIT state forever if that happens.
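
For reference, the client-side pattern that avoids leaving connections behind is to send a
DISCONNECT frame and close the socket on every code path, including errors. A minimal sketch
of that pattern follows, in Java rather than PHP and with a placeholder broker address and
queue name (neither taken from this issue):

import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

/*
 * Minimal sketch of a per-request Stomp producer that always releases its
 * socket. The broker address (localhost:61613) and queue name (/queue/test)
 * are placeholders, not taken from this issue. The point is the finally
 * block: send DISCONNECT and close() on every code path, instead of relying
 * on process exit to tear the connection down.
 */
public class StompSendOnce {
    public static void main(String[] args) throws Exception {
        Socket socket = new Socket("localhost", 61613);
        try {
            OutputStream out = socket.getOutputStream();
            InputStream in = socket.getInputStream();

            sendFrame(out, "CONNECT\n\n");
            readFrame(in); // expect a CONNECTED frame back

            // One small message, comparable to the ~1 KB messages discussed here.
            sendFrame(out, "SEND\ndestination:/queue/test\n\nhello");

            // Tell the broker we are done before dropping the TCP connection.
            sendFrame(out, "DISCONNECT\n\n");
        } finally {
            socket.close(); // the step a leaking client is most likely skipping
        }
    }

    private static void sendFrame(OutputStream out, String frame) throws Exception {
        out.write(frame.getBytes(StandardCharsets.UTF_8));
        out.write(0); // Stomp frames are terminated by a NUL byte
        out.flush();
    }

    private static String readFrame(InputStream in) throws Exception {
        StringBuilder sb = new StringBuilder();
        int b;
        while ((b = in.read()) > 0) {
            sb.append((char) b);
        }
        return sb.toString();
    }
}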

We monitor our ActiveMQ sockets every minute (using lsof), and about once every few hours
we see the count spike from around 220 open connections to 7,000 or 8,000. It doesn't build
up gradually; it just jumps to extremely high numbers.
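
For anyone who wants to compare numbers, a rough sketch of that kind of periodic count is
below; the Java wrapper and the PID argument are only for illustration, and the lsof flags
used are the standard ones:

import java.io.BufferedReader;
import java.io.InputStreamReader;

/*
 * Rough sketch of a once-a-minute style check: count how many of the broker's
 * network sockets lsof reports as CLOSE_WAIT. The Java wrapper and the PID
 * command-line argument are purely for illustration; the lsof flags are the
 * standard ones (-a ANDs the selections, -p picks the process, -i picks
 * network files).
 */
public class CloseWaitCount {
    public static void main(String[] args) throws Exception {
        String brokerPid = args[0];
        Process lsof = new ProcessBuilder("lsof", "-a", "-p", brokerPid, "-i").start();
        int tcp = 0;
        int closeWait = 0;
        BufferedReader reader =
                new BufferedReader(new InputStreamReader(lsof.getInputStream()));
        for (String line; (line = reader.readLine()) != null; ) {
            if (line.contains("TCP")) {
                tcp++;
                if (line.contains("CLOSE_WAIT")) {
                    closeWait++;
                }
            }
        }
        lsof.waitFor();
        System.out.println(tcp + " TCP sockets, " + closeWait + " in CLOSE_WAIT");
    }
}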

In some cases the queue actually dies (overloaded by too many sockets), and in other cases
it recovers and flushes everything out.

So, I think the problem lies somewhere inside the Stomp client rather than in ActiveMQ itself.
Have you gotten anywhere with this lately?

> ActiveMQ 5.1.0 runs out of file descriptors with lots of 'CLOSE_WAIT' sockets
> -----------------------------------------------------------------------------
>
>                 Key: AMQ-1739
>                 URL: https://issues.apache.org/activemq/browse/AMQ-1739
>             Project: ActiveMQ
>          Issue Type: Bug
>          Components: Broker
>    Affects Versions: 5.1.0
>         Environment: We have a single broker and no special network setup. The broker machine
has two single-core Opterons, 8 GB of memory, plenty of I/O, and runs a recent 64-bit Debian
with a 2.6.21 kernel.
> Java(TM) SE Runtime Environment (build 1.6.0_06-b02)
> Java HotSpot(TM) 64-Bit Server VM (build 10.0-b22, mixed mode)
> We left most of the activemq.xml configuration as-is and adjusted the start-up script to
run with a 2 GB heap and the parallel garbage collector, which was more or less needed for
5.0 and left in place for 5.1.
>            Reporter: Arjen van der Meijden
>            Assignee: Rob Davies
>            Priority: Blocker
>         Attachments: stomp-overload-producer.tgz
>
>
> We have no idea why or when, but within a few days after start-up, ActiveMQ suddenly runs
out of file descriptors (we've raised the limit to 10240). According to lsof it has lots of
sockets in CLOSE_WAIT when that happens. Once it has happened, it recurs within a few hours.
This behavior did not occur with ActiveMQ 5.0.
> We have five queues, each with only one consumer. All consumption and production is via
the Stomp interface using PHP clients. Three of those queues get up to 50-100 messages/second
at peak moments, while the consumers adjust their own consumption rate to the system's load
(normally it is capped at about 50-150/sec). So at high-load moments on the consumers the
queues can grow to a few thousand messages; normally a message is consumed as soon as it
arrives. Those five consumers stay connected indefinitely.
> The messages are all quite small (at most 1 KB or so) and come from 5 web servers. For
each web page request (about 2-3M/day) a connection is made to ActiveMQ via Stomp and at
least one message is sent; for most requests two are sent, to the two most active queues.
Every request opens a new connection and sends at most four Stomp frames (connect, two
messages, disconnect), since Apache+PHP does not allow useful reuse of sockets and similar
resources.
> So the connection rate is about the same as the highest message rate on any of the queues
(50-100 connects/second).
> When the high number of sockets in CLOSE_WAIT occurs, we manually disable the producers
and the sockets disappear gradually. After that the number of open files stays around 180-190
(mostly opened jars), but it seems to climb again more easily than after a fresh ActiveMQ
restart. I have not checked whether anything special happens on the web servers or databases,
since their producer and consumer implementations have not changed since 5.0.
> What I did notice is that memory consumption increases heavily prior to running out of
descriptors, and it climbs again far too fast compared to before 11:45:
> http://achelois.tweakers.net/~acm/tnet/activemq-5.1-memory-consumption.png
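
For completeness, a long-lived consumer of the kind described above would look roughly like
the sketch below, again in Java for illustration (the real consumers are PHP) and with a
placeholder broker address and queue name:

import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

/*
 * Rough sketch of one of the five long-lived consumers described above,
 * written in Java for illustration (the real consumers are PHP). Broker
 * address and queue name are placeholders. The consumer connects once,
 * subscribes, and then blocks reading MESSAGE frames until the broker
 * closes the connection.
 */
public class StompConsumerSketch {
    public static void main(String[] args) throws Exception {
        Socket socket = new Socket("localhost", 61613);
        try {
            OutputStream out = socket.getOutputStream();
            InputStream in = socket.getInputStream();

            sendFrame(out, "CONNECT\n\n");
            readFrame(in); // expect a CONNECTED frame back

            // auto-ack keeps the sketch short; a real consumer might use client ack
            // so it can throttle against system load before acknowledging.
            sendFrame(out, "SUBSCRIBE\ndestination:/queue/test\nack:auto\n\n");

            String frame;
            while ((frame = readFrame(in)) != null) {
                System.out.println("received frame:\n" + frame);
                // ...process the message here...
            }
        } finally {
            socket.close();
        }
    }

    private static void sendFrame(OutputStream out, String frame) throws Exception {
        out.write(frame.getBytes(StandardCharsets.UTF_8));
        out.write(0); // NUL frame terminator
        out.flush();
    }

    private static String readFrame(InputStream in) throws Exception {
        StringBuilder sb = new StringBuilder();
        int b;
        while ((b = in.read()) > 0) {
            sb.append((char) b);
        }
        return (b == -1 && sb.length() == 0) ? null : sb.toString();
    }
}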

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

