activemq-dev mailing list archives

From "Arjen van der Meijden (JIRA)" <j...@apache.org>
Subject [jira] Commented: (AMQ-1739) ActiveMQ 5.1.0 runs out of file descriptors with lots of 'CLOSE_WAIT' sockets
Date Sun, 29 Jun 2008 12:12:02 GMT

    [ https://issues.apache.org/activemq/browse/AMQ-1739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=43773#action_43773 ]

Arjen van der Meijden commented on AMQ-1739:
--------------------------------------------

I figured out why the CLOSE_WAITs recurred.

It is caused by the default memory limit of 5MB per queue, which is configured in the default
activemq.xml. As soon as the limit is hit, ActiveMQ simply makes the producers wait until space
frees up.
But if there is no consumption, or consumption isn't fast enough, ActiveMQ will eventually
have more waiting producers than available file descriptors. In my case I had 930 CLOSE_WAIT
sockets out of 1024 available descriptors in total. I also had 930 thread stacks ending like this:

 - java.lang.Object.wait(long) @bci=0 (Compiled frame; information may be imprecise)
 - org.apache.activemq.usage.MemoryUsage.waitForSpace(long) @bci=44, line=85 (Compiled frame)
 - org.apache.activemq.broker.region.Queue.send(org.apache.activemq.broker.ProducerBrokerExchange, org.apache.activemq.command.Message) @bci=259, line=395 (Interpreted frame)
 - org.apache.activemq.broker.region.AbstractRegion.send(org.apache.activemq.broker.ProducerBrokerExchange, org.apache.activemq.command.Message) @bci=42, line=350 (Compiled frame)
 - org.apache.activemq.broker.region.RegionBroker.send(org.apache.activemq.broker.ProducerBrokerExchange, org.apache.activemq.command.Message) @bci=142, line=437 (Compiled frame)


So each of the 930 threads is waiting for someone to make some room, but since they prevent
any other connection from entering the system, the memory will never be freed again. And even
if a consumer picks up consumption after a while, the fact that the broker ran out of file
descriptors might leave it in an unpredictable and dangerous state.
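
For what it's worth, the blocking call at the top of those stacks can be reproduced in isolation.
Below is a minimal sketch, assuming ActiveMQ 5.1's org.apache.activemq.usage.MemoryUsage API; it
is not taken from the broker code, it just shows the kind of wait the producer threads were parked in:

import org.apache.activemq.usage.MemoryUsage;

public class WaitForSpaceDemo {
    public static void main(String[] args) throws Exception {
        MemoryUsage usage = new MemoryUsage();
        usage.setLimit(5 * 1024 * 1024);        // the 5MB per-queue default from activemq.xml
        usage.start();
        usage.increaseUsage(5 * 1024 * 1024);   // fill the quota, as a backed-up queue would
        System.out.println("full: " + usage.isFull());
        // Blocks until some other thread calls usage.decreaseUsage(...);
        // this is the wait each of the 930 producer threads above was stuck in.
        usage.waitForSpace();
    }
}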

After removing the memory limit for the separate queues, I reran my "overload producer" and was
able to produce several tens of thousands of messages. But the problem wasn't really gone after
that: similar behaviour seems to occur as soon as the systemUsage's memoryUsage runs out, and/or
when the disk or temporary space fills up. In my case the broker had by no means reached the disk
limit yet, but it had possibly hit the temporary-space limit.
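
The limits involved here are the per-destination memoryLimit and the broker-wide systemUsage
limits. As a rough illustration, the same knobs can be set programmatically on an embedded broker.
The sketch below is an assumption on my side (the concrete numbers, the connector URI and the use
of an embedded BrokerService are made up for the example); the shipped activemq.xml sets the same
things declaratively:

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.region.policy.PolicyEntry;
import org.apache.activemq.broker.region.policy.PolicyMap;

public class BrokerLimitsSketch {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();

        // Broker-wide limits, the programmatic counterpart of <systemUsage> in activemq.xml.
        broker.getSystemUsage().getMemoryUsage().setLimit(512L * 1024 * 1024);      // memory for messages
        broker.getSystemUsage().getStoreUsage().setLimit(10L * 1024 * 1024 * 1024); // persistent store
        broker.getSystemUsage().getTempUsage().setLimit(10L * 1024 * 1024 * 1024);  // temporary/spool space

        // Per-destination limit, the counterpart of memoryLimit="5mb" in a policyEntry.
        PolicyEntry perQueue = new PolicyEntry();
        perQueue.setMemoryLimit(64L * 1024 * 1024);   // raise well above the 5MB default
        PolicyMap policyMap = new PolicyMap();
        policyMap.setDefaultEntry(perQueue);
        broker.setDestinationPolicy(policyMap);

        broker.addConnector("stomp://0.0.0.0:61613"); // assumes the Stomp transport is on the classpath
        broker.start();
    }
}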

The problem here is of course that there might not be a complete solution to this. But it would
already help if the logfile contained error messages about hitting memory limits, rather than
leaving the user in the dark.

> ActiveMQ 5.1.0 runs out of file descriptors with lots of 'CLOSE_WAIT' sockets
> -----------------------------------------------------------------------------
>
>                 Key: AMQ-1739
>                 URL: https://issues.apache.org/activemq/browse/AMQ-1739
>             Project: ActiveMQ
>          Issue Type: Bug
>          Components: Broker
>    Affects Versions: 5.1.0
>         Environment: We have a single broker with no special network setup. Our broker system has two single-core Opterons, 8GB of memory, plenty of I/O, and runs a recent 64-bit Debian with a 2.6.21 kernel.
> Java(TM) SE Runtime Environment (build 1.6.0_06-b02)
> Java HotSpot(TM) 64-Bit Server VM (build 10.0-b22, mixed mode)
> We left most of the activemq.xml configuration as-is and adjusted the start-up script to run with a 2GB heap and the parallel garbage collector, which was more or less needed for 5.0 and was kept in the start-up script for 5.1.
>            Reporter: Arjen van der Meijden
>            Assignee: Rob Davies
>            Priority: Blocker
>         Attachments: stomp-overload-producer.tgz
>
>
> We have no idea why or when, but within a few days after start-up, ActiveMQ suddenly runs out of file descriptors (we've raised the limit to 10240). According to lsof it has lots of sockets which are in CLOSE_WAIT when that happens. Once that has happened, it recurs within a few hours. This behavior did not happen with ActiveMQ 5.0.
> We have five queues, all with only one consumer. All consumption and production is via the Stomp interface using PHP clients. Three of those queues get up to 50-100 messages/second at peak moments, while the consumers adjust their own consumption rate to the system's load (normally it is maxed at about 50-150/sec). So in high-load moments for the consumers the queues can grow to a few thousand messages; normally the queues are emptied as soon as a message arrives. Those five consumers stay connected indefinitely.
> The messages are all quite small (at most 1 KB or so) and come from 5 web servers. For each web page request (about 2-3M/day) a connection is made to ActiveMQ via Stomp and at least one message is sent; for most requests two messages are sent, to the two most active queues. For every request a new connection is made and at most 4 Stomp messages are sent to ActiveMQ (connect, two messages, disconnect), since Apache+PHP does not allow useful reuse of sockets and similar resources.
> So the connection rate is about the same as the highest message rate on any of the queues (so 50-100 connects/second).
> When the large number of sockets in CLOSE_WAIT occurs, we manually disable the producers and the sockets disappear gradually. After that the number of sockets stays around 180-190 (mostly opened jars), but it seems to re-increase more easily than when ActiveMQ is restarted. I have not checked whether anything special happens on the web servers or databases, since their producer and consumer implementation has not changed since 5.0.
> What I did notice is that the memory consumption increases heavily prior to running out of descriptors, and the consumption re-increases way too fast compared to before 11:45:
> http://achelois.tweakers.net/~acm/tnet/activemq-5.1-memory-consumption.png
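
For reference, the per-request pattern described above (connect, send one or two messages,
disconnect) boils down to a very small STOMP exchange. The sketch below is in Java rather than
the PHP used by the actual producers, and the queue names and the localhost:61613 endpoint are
assumptions, but it shows the frame sequence that drives the connection rate:

import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;

public class PerRequestStompProducer {
    public static void main(String[] args) throws Exception {
        Socket socket = new Socket("localhost", 61613);  // default Stomp connector port
        OutputStream out = socket.getOutputStream();
        InputStream in = socket.getInputStream();

        send(out, "CONNECT\n\n");                                   // fresh connection per page request
        readFrame(in);                                              // wait for the CONNECTED reply

        send(out, "SEND\ndestination:/queue/pageviews\n\nhello");   // first message
        send(out, "SEND\ndestination:/queue/clicks\n\nworld");      // second message

        send(out, "DISCONNECT\n\n");                                // tear the connection down again
        socket.close();
    }

    // STOMP frames are NUL-terminated.
    private static void send(OutputStream out, String frame) throws Exception {
        out.write(frame.getBytes("UTF-8"));
        out.write(0);
        out.flush();
    }

    // Read one frame (up to the NUL byte) and echo it.
    private static void readFrame(InputStream in) throws Exception {
        int c;
        while ((c = in.read()) != -1 && c != 0) {
            System.out.print((char) c);
        }
        System.out.println();
    }
}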

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

