activemq-users mailing list archives

From Dejan Bosanac <de...@nighttale.net>
Subject Re: "too many open files" error with 5.3 and Stomp
Date Fri, 30 Oct 2009 12:46:31 GMT
Hi Alex,

See the info in this post:

http://www.nabble.com/%22too-many-open-files%22-error-with-5.3-and-Stomp-to25888831.html#a26129409

Basically, try turning off producer flow control, and avoid the stomp+nio
transport for the moment.
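
A minimal sketch of those two changes in the broker's activemq.xml (the queue pattern, memory limit, and port here are placeholders, not a recommendation):

```xml
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <!-- flow control off: producers are no longer blocked when the memory limit is hit -->
      <policyEntry queue=">" producerFlowControl="false" memoryLimit="5mb"/>
    </policyEntries>
  </policyMap>
</destinationPolicy>

<transportConnectors>
  <!-- plain stomp connector instead of stomp+nio -->
  <transportConnector name="stomp" uri="stomp://0.0.0.0:61613"/>
</transportConnectors>
```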

Cheers
--
Dejan Bosanac - http://twitter.com/dejanb

Open Source Integration - http://fusesource.com/
ActiveMQ in Action - http://www.manning.com/snyder/
Blog - http://www.nighttale.net


On Fri, Oct 30, 2009 at 1:40 PM, alex.hollerith <alex@hollerith.net> wrote:

>
> config:
> <policyEntry queue=">" producerFlowControl="true" memoryLimit="5mb">
>
> setup:
> 1 perl stomp producer producing into a queue,
>  connecting and disconnecting on every post,
>  rather low frequency of posts (0.5/minute)
> 0 consumers
>
> behaviour:
> works OK until around 68 messages are in the queue (surely depends on the
> size of the messages)
>
> after that you get this in the log:
> 2009-10-29 20:32:05,189 | INFO  | Usage Manager memory limit reached on queue://test.soccerfeeds.queue. Producers will be throttled to the rate at which messages are removed <...>
>
> And while the activemq service is in that "throttling producers" state you
> will see CLOSE_WAIT sockets building up:
> tcp        0      0 ::ffff:10.60.1.51:61613     ::ffff:10.60.1.206:41519    CLOSE_WAIT
> tcp        0      0 ::ffff:10.60.1.51:61613     ::ffff:10.60.1.206:36141    CLOSE_WAIT
> tcp        0      0 ::ffff:10.60.1.51:61613     ::ffff:10.60.1.206:45840    CLOSE_WAIT
> tcp        0      0 ::ffff:10.60.1.51:61613     ::ffff:10.60.1.206:43793    CLOSE_WAIT
> tcp        0      0 ::ffff:10.60.1.51:61613     ::ffff:10.60.1.206:40212    CLOSE_WAIT
> tcp        0      0 ::ffff:10.60.1.51:61613     ::ffff:10.60.1.206:44060    CLOSE_WAIT
> tcp        0      0 ::ffff:10.60.1.51:61613     ::ffff:10.60.1.206:43776    CLOSE_WAIT
> tcp        0      0 ::ffff:10.60.1.51:61613     ::ffff:10.60.1.206:44032    CLOSE_WAIT
> tcp        0      0 ::ffff:10.60.1.51:61613     ::ffff:10.60.1.206:43781    CLOSE_WAIT
> tcp        0      0 ::ffff:10.60.1.51:61613     ::ffff:10.60.1.206:40200    CLOSE_WAIT
> tcp        0      0 ::ffff:10.60.1.51:61613     ::ffff:10.60.1.206:44045    CLOSE_WAIT
>
> You can watch the numbers grow with:
> watch --interval=5 'netstat -an | grep tcp | grep 61613 | grep CLOSE_WAIT | wc -l'
>
> Every post increases the number of CLOSE_WAIT sockets by 1, and the sockets
> do not go away; the count grew steadily over the roughly 17 hours we
> watched it.
>
> Now consume just one single message (we did this via the admin web
> interface) and the number of sockets in CLOSE_WAIT drops to 0 instantly:
>
> [root@bladedist01 activemq]# netstat -an |grep tcp |grep 61613
> tcp        0      0 :::61613                    :::*                        LISTEN
>
> Our theory is that ActiveMQ somehow accumulates sockets in the CLOSE_WAIT
> state while a queue is in "throttling producers" mode until, eventually,
> the system runs out of resources (file descriptors in this case).
> --
> View this message in context:
> http://old.nabble.com/%22too-many-open-files%22-error-with-5.3-and-Stomp-tp25888831p26129409.html
> Sent from the ActiveMQ - User mailing list archive at Nabble.com.
>
>

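For reference, the connect-on-every-post cycle described above can be reproduced at the STOMP 1.0 frame level. The original producer is Perl; this is an illustrative stdlib-only Python sketch of the same pattern, and the broker address and queue name are placeholders taken from the report, not a tested setup:

```python
import socket

def stomp_frame(command, headers, body=""):
    """Build a STOMP 1.0 frame: command line, header lines, blank line, body, NUL."""
    head = "".join(f"{k}:{v}\n" for k, v in headers.items())
    return f"{command}\n{head}\n{body}\x00".encode("utf-8")

def post_one(host, port, queue, body):
    """One full cycle per post: connect, send a single message, disconnect."""
    with socket.create_connection((host, port)) as s:
        s.sendall(stomp_frame("CONNECT", {}))
        s.recv(4096)  # wait for the broker's CONNECTED frame
        s.sendall(stomp_frame("SEND", {"destination": queue}, body))
        s.sendall(stomp_frame("DISCONNECT", {}))

# Hypothetical usage, matching the reported setup (0.5 posts/minute):
# post_one("10.60.1.51", 61613, "/queue/test.soccerfeeds.queue", "payload")
```

With flow control blocking the SEND once the memory limit is reached, each such short-lived connection can leave a broker-side socket behind, which is consistent with one new CLOSE_WAIT entry per post.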