qpid-users mailing list archives

From "Stephen Lau" <stephen....@xml-asia.org>
Subject Re: Stress C++ broker without flow control
Date Thu, 05 Mar 2009 10:43:00 GMT
I am checking the difference between the spring-jms integration and the pure 
qpid-perftests. The queues only accumulate when using spring-jms; I am still 
checking the behaviour of the default qpid-perftests.

Stephen

----- Original Message ----- 
From: "Stephen" <register@inode.serveftp.com>
To: <users@qpid.apache.org>
Sent: Thursday, March 05, 2009 3:03 PM
Subject: Re: Stress C++ broker without flow control


> Hi,
>
> I still have a question about messages stored in the C++ broker that are
> never released.
>
> I am sending ~200k messages of 512 bytes each from a Java client to the
> C++ broker. I found that the C++ broker has a default limit of 100MB per
> queue; when the limit is reached:
>
> 2009-mar-05 11:22:31 notice Journal "TplStore": Created
> 2009-mar-05 11:22:31 notice Store module initialized; 
> dir=/ebina/fno/qpid_c++_m4/qpid_store
> 2009-mar-05 11:22:31 notice SASL disabled: No Authentication Performed
> 2009-mar-05 11:22:31 notice Listening on TCP port 5678
> 2009-mar-05 11:22:31 notice Broker running
> 2009-mar-05 11:24:48 warning Message 204801 on 
> _@development_983f81f0-272a-48f9-aecd-0f203ebd3848 cannot be released from 
> memory as the queue is not durable
> 2009-mar-05 11:24:48 error Execution exception: resource-limit-exceeded: 
> Policy exceeded on _@development_983f81f0-272a-48f9-aecd-0f203ebd3848 by 
> message 204801 of size 512 , policy: size: max=104857600, 
> current=104857094; count: unlimited; type=flow_to_disk 
> (qpid/broker/QueuePolicy.cpp:90)
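>
> (For scale: 204,800 x 512 bytes = 104,857,600 bytes, exactly the 100MB
> default, which is why message 204801 is the first one to trip the policy.)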
>
> Then I shut down the producer and consumer, and used qpid-tool to check
> the exact queues that exceeded the limit:
>
>    Type       Element              113                 116                  119                 124              126
>    ==================================================================================================================
>    property   vhostRef             103                 103                  103                 103              103
>    property   name                 _@development_b9e6  _@development_6d47b  _@development_9e79  mgmt-HXXXXXP0XX  repl-HXX
>    property   durable              False               False                False               False            False
>    property   autoDelete           False               False                False               True             True
>    property   exclusive            True                True                 True                True             True
>    property   arguments            {}                  {}                   {}                  {}               {}
>    statistic  msgTotalEnqueues     204801 messages     204800               204797              653              45
>    statistic  msgTotalDequeues     204801              0                    0                   653              45
>    statistic  msgTxnEnqueues       0                   0                    0                   0                0
>    statistic  msgTxnDequeues       0                   0                    0                   0                0
>    statistic  msgPersistEnqueues   204801              204800               204797              0                0
>    statistic  msgPersistDequeues   204801              0                    0                   0                0
>    statistic  msgDepth             0                   204800               204797              0                0
>    statistic  byteDepth            0 octets            104857094            104856064           0                0
>    statistic  byteTotalEnqueues    104857606           104857094            104856064           86850            22238
>    statistic  byteTotalDequeues    104857606           0                    0                   86850            22238
>    statistic  byteTxnEnqueues      0                   0                    0                   0                0
>    statistic  byteTxnDequeues      0                   0                    0                   0                0
>    statistic  bytePersistEnqueues  104857606           104857094            104856064           0                0
>    statistic  bytePersistDequeues  104857606           0                    0                   0                0
>
> It seems that the messages are enqueued on those "queues" and never get
> deleted, and (apparently) these queues exist forever unless removed
> manually.
>
> Any ideas or suggestions about the messages stored in memory / on the
> queues? I should already have subscribed to and consumed them.
>
> Here are my settings:
> The Java producer sends messages using the URL
> topic://development/admin_in?durable='false', with auto acknowledgement.
> The Java consumer receives messages from the URL
> topic://development/*.*?durable='false', with client acknowledgement.
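>
> For reference, the two clients boil down to something like the sketch
> below (a minimal sketch, not my exact code; the JNDI names
> "qpidConnectionFactory", "adminInTopic" and "allTopics" are placeholders
> that map to the connection URL and the two destination URLs above):
>
> import javax.jms.*;
> import javax.naming.InitialContext;
>
> public class StressClients {
>     public static void main(String[] args) throws Exception {
>         InitialContext ctx = new InitialContext();
>         ConnectionFactory cf =
>             (ConnectionFactory) ctx.lookup("qpidConnectionFactory");
>         Connection conn = cf.createConnection();
>         conn.start();
>
>         // Producer side: auto acknowledgement
>         Session prodSess = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
>         MessageProducer producer =
>             prodSess.createProducer((Destination) ctx.lookup("adminInTopic"));
>
>         // Consumer side: client acknowledgement
>         Session consSess = conn.createSession(false, Session.CLIENT_ACKNOWLEDGE);
>         MessageConsumer consumer =
>             consSess.createConsumer((Destination) ctx.lookup("allTopics"));
>
>         byte[] payload = new byte[512];             // 512-byte messages
>         for (int i = 0; i < 204800; i++) {
>             BytesMessage out = prodSess.createBytesMessage();
>             out.writeBytes(payload);
>             producer.send(out);
>
>             Message in = consumer.receive(1000);
>             if (in != null)
>                 in.acknowledge();   // without this, client-ack messages
>         }                           // stay on the subscription queue
>         conn.close();
>     }
> }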
>
> The binding and queue are created by the C++ broker automatically.
> I have tried to increase the queue limit on the C++ broker like this:
>
> ./qpid-config -a guest/guest@127.0.0.1:5678 add exchange topic development
> ./qpid-config -a guest/guest@127.0.0.1:5678 add queue linux.news --max-queue-size 209715200
> ./qpid-config -a guest/guest@127.0.0.1:5678 bind development linux.news
>
> but it was no use, as the messages are enqueued on other (auto-created)
> queues and are never dequeued.
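>
> Since the limit is being hit on the auto-created subscription queues
> rather than on the queue I declared myself, I suspect the broker-wide
> default has to be raised at startup instead. If I read the qpidd options
> correctly (treat the flag name as an assumption and check qpidd --help),
> something like:
>
> ./qpidd --default-queue-limit 209715200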
>
> Regards,
> Stephen
>
>
> ----- Original Message ----- 
> From: "Rafael Schloming" <rafaels@redhat.com>
> To: <users@qpid.apache.org>
> Sent: Wednesday, March 04, 2009 4:46 AM
> Subject: Re: Stress C++ broker without flow control
>
>
>> Gordon Sim wrote:
>>> Stephen wrote:
>>>> Hi ,
>>>>
>>>> I just got the first few lines from console:
>>>>
>>>> 2009-mar-03 18:22:43 warning Journal "admin_in": Enqueue capacity
>>>> threshold exceeded on queue "admin_in".
>>>> 2009-mar-03 18:22:43 error Unexpected exception: Enqueue capacity
>>>> threshold exceeded on queue "admin_in". (JournalImpl.cpp:501)
>>>> 2009-mar-03 18:22:43 error Connection 172.16.25.95:3777 closed by 
>>>> error:
>>>> Enqueue capacity threshold exceeded on queue "admin_in".
>>>> (JournalImpl.cpp:501)
>>>
>>> Great, that's what I was looking for. This is the root cause of the
>>> problem, though clearly the handling of this on the Java side is also
>>> an issue.
>>
>> The Java exception handling should now be fixed on trunk.
>>
>> --Rafael


---------------------------------------------------------------------
Apache Qpid - AMQP Messaging Implementation
Project:      http://qpid.apache.org
Use/Interact: mailto:users-subscribe@qpid.apache.org

