activemq-dev mailing list archives

From "Dominic Tootell (JIRA)" <j...@apache.org>
Subject [jira] Commented: (AMQ-2475) If tmp message store fills up, broker can deadlock while producers wait on disk space and consumers wait on acks
Date Sat, 07 Nov 2009 15:12:53 GMT

    [ https://issues.apache.org/activemq/browse/AMQ-2475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=55210#action_55210 ]

Dominic Tootell commented on AMQ-2475:
--------------------------------------

I investigated this and attempted to patch it locally in activemq-core, on a FuseSource
5.3.0.4 release (Mac OS X 10.6.1).

I've run the following tests on the patch, which I'll attach (the patch diffs, the patched
.java files and the broker XML I used in testing).

The test cases I've run overnight and this morning/afternoon are:

- Virtual Topic (  VirtualTopic.iplayer  -> Consumer.A.VirtualTopic.iplayer)
- 3 x Producer, 4,000,000 messages each onto Virtual Topic (12million in total)
- 1 x Consumer
- 100mb tmp_store limit

- Virtual Topic (  VirtualTopic.iplayer  -> Consumer.A.VirtualTopic.iplayer)
- 6 x Producer, 2,000,000 messages each onto Virtual Topic (12million in total)
- 1 x Consumer
- 512mb tmp_store limit
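
The 512mb tmp_store limit above is set through the broker's systemUsage configuration. A minimal sketch of what that section looks like (the surrounding broker element is abbreviated; the actual attached broker XML may differ):

```xml
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="test-broker">
  <systemUsage>
    <systemUsage>
      <!-- cap the temp store used for non-persistent messages -->
      <tempUsage>
        <tempUsage limit="512 mb"/>
      </tempUsage>
    </systemUsage>
  </systemUsage>
</broker>
```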

The tmp_storage limit was definitely being enforced, and neither the broker, the producers nor the consumer blocked:

du -sh of the tmp_storage area:
{code}
dominic-tootells-macbook-pro:data dominict$ du -sh *
 96M	journal
 48K	kr-store
  0B	lock
512M	tmp-test-broker

dominic-tootells-macbook-pro:data dominict$ du -sh *
 96M	journal
 48K	kr-store
  0B	lock
483M	tmp-test-broker

dominic-tootells-macbook-pro:data dominict$ du -sh *
 96M	journal
 48K	kr-store
  0B	lock
490M	tmp-test-broker

dominic-tootells-macbook-pro:data dominict$ du -sh *
 64M	journal
 48K	kr-store
  0B	lock
 38M	tmp-test-broker

dominic-tootells-macbook-pro:data dominict$ 

{code}


I've also run the JUnit test provided by Martin; this ran OK too, with no blockage.

I shall attach the potential patches.  I haven't yet run any other tests against the patches
to see whether they cause any other unforeseen issues (i.e. on a normal persistent queue);
I will do this later on.
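
For reference, the core of the deadlock (as described in the quoted issue below) can be reduced to a standalone sketch. This is a hypothetical reduction, not broker code: the class, `matchedListMutex` and `spaceFreed` names are invented for illustration, and a bounded `tryLock` stands in for the broker's unbounded waits so the demo terminates:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

/**
 * Hypothetical reduction of the AMQ-2475 deadlock (not broker code):
 * a "producer" thread holds the matched-list mutex while waiting for
 * tmp-store space that only the consumer can free, but the consumer
 * needs the same mutex to process its acks.
 */
public class MatchedListDeadlockSketch {
    static final ReentrantLock matchedListMutex = new ReentrantLock();
    static final CountDownLatch spaceFreed = new CountDownLatch(1);

    /** Returns true if the "consumer" could take the mutex to ack. */
    static boolean consumerCouldAck() throws InterruptedException {
        Thread producer = new Thread(() -> {
            matchedListMutex.lock();      // Topic.add(): takes the mutex...
            try {
                // ...then blocks waiting for tmp-store space. Bounded here
                // so the sketch terminates; the real broker waits forever.
                spaceFreed.await(2, TimeUnit.SECONDS);
            } catch (InterruptedException ignored) {
            } finally {
                matchedListMutex.unlock();
            }
        });
        producer.start();
        Thread.sleep(100);                // let the producer take the lock

        // dispatchMatched(): the ack path needs the same mutex. tryLock
        // lets us observe the stall instead of hanging the JVM.
        boolean acquired = matchedListMutex.tryLock(500, TimeUnit.MILLISECONDS);
        if (acquired) matchedListMutex.unlock();
        spaceFreed.countDown();           // unblock the producer; demo exits
        producer.join();
        return acquired;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("consumer could ack: " + consumerCouldAck());
    }
}
```

Running it prints `consumer could ack: false`: the consumer cannot take the mutex while the producer holds it waiting for space, which is exactly the circular wait in the report.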

cheers
/dom





> If tmp message store fills up, broker can deadlock while producers wait on disk space and consumers wait on acks
> -----------------------------------------------------------------------------------------------------------------------
>
>                 Key: AMQ-2475
>                 URL: https://issues.apache.org/activemq/browse/AMQ-2475
>             Project: ActiveMQ
>          Issue Type: Bug
>          Components: Broker, Message Store, Transport
>    Affects Versions: 5.3.0
>         Environment: Tested on Windows XP with JDK 1.6.0_13, but fairly sure it will be
> an issue on all platforms
>            Reporter: Martin Murphy
>            Assignee: Rob Davies
>         Attachments: hangtest.zip
>
>
> I will attach a simple project that shows this. In the test the tmp space is set to 32
> MB and two threads are created. One thread constantly produces 1KB messages and the other
> consumes them, but sleeps for 100ms between messages; note that producer flow control is
> turned off as well. The goal here is to ensure that the producers block while the consumers
> read the rest of the messages from the broker and catch up; this in turn frees up the disk
> space and allows the producer to send more messages. This config means that you can bound
> the broker based on disk space rather than memory usage.
> Unfortunately in this test, using topics, while the broker is reading in the message from
> the producer it has to lock the matched list it is adding to. This is abstracted away from
> the Topic's point of view, so the Topic doesn't realize that the add may block on the file
> system.

> {code}
>     public void add(MessageReference node) throws Exception { //... snip ...
>             if (maximumPendingMessages != 0) {
>                 synchronized (matchedListMutex) {   // We have this mutex
>                     matched.addMessageLast(node); // ends up waiting for space
>                     // NOTE - be careful about the slaveBroker!
>                     if (maximumPendingMessages > 0) {
> {code}
> Meanwhile the consumer is sending acknowledgements for the 10 messages it just read in
> (the configured prefetch) from the same topic; but since these also modify the same list in
> the topic, this waits as well on the mutex held to service the producer:
> {code}
>     private void dispatchMatched() throws IOException {       
>         synchronized (matchedListMutex) {  // never gets past here.
>             if (!matched.isEmpty() && !isFull()) {
> {code}
> This is a fairly classic deadlock. The trick now is how to resolve this, given that the
> topic isn't aware that its list may need to wait for the file system to clean up.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

