activemq-users mailing list archives

Subject: Re: Crashing an AMQ producer in ~12 seconds
Date: Mon, 01 Oct 2007 12:37:49 GMT
On Oct 1, 2007, at 8:21 AM, ttmdev wrote:

> The messages are persistent, so shouldn't this give the broker the option
> of sending them to the store, thus freeing up memory? Only if the messages
> are non-persistent does the broker have no choice but to keep them
> in-memory.

That is my understanding.
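For reference, whether the broker can page persistent messages out of memory is governed by its usage limits. A minimal sketch of an ActiveMQ 5.x-style activemq.xml fragment (the specific limit values here are illustrative assumptions, not recommendations):

```xml
<!-- Sketch: a persistent broker with explicit memory/store limits, so
     persistent messages can be spooled to the store instead of being
     held entirely in heap. Limits below are placeholder values. -->
<broker xmlns="http://activemq.apache.org/schema/core" persistent="true">
  <systemUsage>
    <systemUsage>
      <memoryUsage>
        <memoryUsage limit="64 mb"/>   <!-- heap budget for messages -->
      </memoryUsage>
      <storeUsage>
        <storeUsage limit="1 gb"/>     <!-- on-disk persistent store -->
      </storeUsage>
      <tempUsage>
        <tempUsage limit="500 mb"/>    <!-- spill space for non-persistent -->
      </tempUsage>
    </systemUsage>
  </systemUsage>
</broker>
```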

> Saqib is right, you should only have to create one connection. Connections
> are heavy-weight objects; opening and closing one for each send puts a lot
> of strain on the system. Because you're sending to the same topic, you can
> also get away with creating just one session and publisher.

Yep, that's right.  In production, I use a PooledConnectionFactory.  The
reason that I'm not using a pool in this example is precisely because it
reproduces exactly the problem that I see in production, where ~1500
threads pile up and the JVM runs out of memory (max heap is ~2 GB in prod).
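The thread pile-up can be reproduced without ActiveMQ at all. The sketch below is an illustration of the failure mode, not ActiveMQ's actual internals: each hypothetical "connection" spawns one daemon worker thread (mimicking the per-connection worker threads in the dump below), so opening a connection per send without closing it leaks one thread per message. All class and thread names here are made up for the demo.

```java
// Illustration only (no ActiveMQ dependency): a heavyweight "connection"
// that owns a worker thread. Open-per-send without close() leaks threads.
public class ConnectionLeakDemo {

    /** Stand-in for a connection that spawns a background worker thread. */
    static class FakeConnection implements AutoCloseable {
        private final Thread worker;
        private volatile boolean open = true;

        FakeConnection(int id) {
            worker = new Thread(() -> {
                while (open) {
                    try { Thread.sleep(20); } catch (InterruptedException e) { return; }
                }
            }, "Connection Worker-" + id);
            worker.setDaemon(true);
            worker.start();
        }

        @Override public void close() {
            open = false;
            worker.interrupt();
        }
    }

    /** Counts live threads whose names match our fake worker threads. */
    static long liveWorkers() {
        return Thread.getAllStackTraces().keySet().stream()
                .filter(t -> t.getName().startsWith("Connection Worker-"))
                .count();
    }

    public static void main(String[] args) throws Exception {
        // Anti-pattern: a new connection per send, never closed.
        for (int i = 0; i < 100; i++) {
            new FakeConnection(i); // leaked: close() is never called
        }
        Thread.sleep(200);         // let every worker reach its run loop
        System.out.println("leaked workers: " + liveWorkers());

        // Correct pattern: one connection, reused for all sends, then closed.
        try (FakeConnection shared = new FakeConnection(9999)) {
            // ... many sends over the same shared connection ...
        }
    }
}
```

With a real broker, the equivalent fix is the one quoted above: one connection (or a PooledConnectionFactory), one session, one publisher, reused across sends.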

In the example that I sent to the list earlier, if you kill -3 the JVM pid
after ~1300 messages have been sent, you'll see a ton of threads that are
the actual source of the problem.

"AcitveMQ Connection Worker: tcp://localhost/" daemon  
prio=5 tid=0x00511d40 nid=0x1845c00 waiting on condition  

While I do accept that creating a regular connection with each request is
bad practice and not the sort of thing one would do in production, it seems
to me that the behavior ought still to be correct: sends should continue to
work, albeit slowly.  Currently, that does not appear to be the case.


                                    Philip Jacob
