qpid-users mailing list archives

From Gordon Sim <g...@redhat.com>
Subject Re: Error on recovery (MessageStoreImpl.cpp:701): Dbc::get: Cannot allocate memory
Date Tue, 16 Sep 2014 11:02:47 GMT
On 09/16/2014 10:55 AM, Radek Smigielski wrote:
> Hi,
>      I am using qpid-cpp-server-0.18-20.el6.x86_64 on RHEL 6.5 x64. All was working
> perfectly fine and suddenly qpidd started misbehaving. The bottom line is that when I
> try to start it now I get the error below:
>
> 2014-09-16 04:42:15 [Model] trace Mgmt delete exchange. id:reply_26985676554641559482ff2612d0a398 Statistics: {bindingCount:0, bindingCountHigh:0, bindingCountLow:0, byteDrops:0, byteReceives:0, byteRoutes:0, msgDrops:0, msgReceives:0, msgRoutes:0, producerCount:0, producerCountHigh:0, producerCountLow:0}
> 2014-09-16 04:42:15 [Model] trace Mgmt delete exchange. id:reply_753ac950c81d4182854f2ec501b5ef1b Statistics: {bindingCount:0, bindingCountHigh:0, bindingCountLow:0, byteDrops:0, byteReceives:0, byteRoutes:0, msgDrops:0, msgReceives:0, msgRoutes:0, producerCount:0, producerCountHigh:0, producerCountLow:0}
> 2014-09-16 04:42:15 [Model] trace Mgmt delete exchange. id:reply_cb8a3cb6414f4e0d8c32c8d81eeec8bc Statistics: {bindingCount:0, bindingCountHigh:0, bindingCountLow:0, byteDrops:0, byteReceives:0, byteRoutes:0, msgDrops:0, msgReceives:0, msgRoutes:0, producerCount:0, producerCountHigh:0, producerCountLow:0}
> 2014-09-16 04:42:15 [Broker] critical Unexpected error: Error on recovery (MessageStoreImpl.cpp:701): Dbc::get: Cannot allocate memory

[...]

> When I start qpidd from the command line as a regular user it runs fine, but not as
> the qpidd user. Also, after I removed all the content from /var/lib/qpidd, I could
> start qpidd again.
> So obviously it's some kind of memory limit issue, but I am trying to understand what
> exactly the limit is. And why did this happen now? What triggers this?

My first guess is that there are too many exchanges and this is causing 
qpidd to use up available memory. The openstack driver for qpid 
originally used a new auto-delete exchange - a feature that wasn't 
actually implemented until quite recently - for every request. This led 
to lots of abandoned but undeleted exchanges. The 
amqp_rpc_single_reply_queue option was added as a workaround for that; 
if set to true, a single queue and exchange is used for all replies to a 
given client. This is fairly easy to test simply by monitoring the 
number of exchanges while openstack is running (e.g. with qpid-stat -e). 
If the number keeps going up and up, then check your config for the 
amqp_rpc_single_reply_queue option.
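
A quick-and-dirty way to watch that is just to poll qpid-stat. Purely as 
a sketch (it assumes qpid-tools is installed and the broker is on the 
default localhost:5672; the header-line handling is approximate, so 
treat the count as a trend rather than an exact figure):

    # Sketch: report the local broker's exchange count once a minute by
    # shelling out to 'qpid-stat -e' and counting the non-empty lines
    # after the title/header/separator rows.
    import subprocess
    import time

    def exchange_count():
        proc = subprocess.Popen(["qpid-stat", "-e"], stdout=subprocess.PIPE)
        out, _ = proc.communicate()
        lines = [l for l in out.splitlines() if l.strip()]
        return max(len(lines) - 3, 0)

    while True:
        print("%s  exchanges: %d" % (time.ctime(), exchange_count()))
        time.sleep(60)

If that number climbs steadily while openstack is running, that points 
at the abandoned reply exchanges described above.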

(Another possibility is that it's not actually a memory issue per se, 
but some form of corruption in the db4 part of the store files, such 
that a read ends up trying to read an invalid value. Running db_recover 
with the -c option may (or may not!) help if that is the case.)
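
If you do try that, a minimal sketch (the path is an assumption - point 
it at whichever directory under /var/lib/qpidd actually contains the 
db4 .db files, and stop qpidd before running it):

    # Sketch: run catastrophic recovery (-c) on the store's db4
    # environment. Assumes qpidd is stopped and store_dir points at the
    # directory holding the .db files (adjust for your layout).
    import subprocess

    store_dir = "/var/lib/qpidd"  # assumption: replace with the real db4 directory
    subprocess.check_call(["db_recover", "-c", "-v", "-h", store_dir])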

>
> I am using qpid for OpenStack Havana.

My advice, fwiw, would be not to use persistence at all. The way the 
impl_qpid driver is written for openstack, message delivery is not 
acknowledged anyway, so persistence doesn't really buy you anything (in 
my opinion).


---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
For additional commands, e-mail: users-help@qpid.apache.org

