qpid-proton mailing list archives

From Dominic Evans <dominic.ev...@uk.ibm.com>
Subject Messenger: pn_delivery leaking pn_disposition memory?
Date Fri, 25 Apr 2014 17:39:47 GMT
In one of our automated client stress tests, we've noticed that we appear to be
leaking memory. We were previously seeing this on qpid-proton 0.6, and I've
retested on 0.7 RC3; it is still occurring.

==16195== 45,326,848 bytes in 25,294 blocks are possibly lost in loss record
1,865 of 1,867
==16195==    at 0x4C274A0: malloc (in
/usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==16195==    by 0x86CC7AC: pn_data (codec.c:363)
==16195==    by 0x86D7372: pn_disposition_init (engine.c:1066)
==16195==    by 0x86D756B: pn_delivery (engine.c:1102)
==16195==    by 0x86DB93E: pn_do_transfer (transport.c:738)
==16195==    by 0x86D3A21: pn_dispatch_frame (dispatcher.c:146)
==16195==    by 0x86D3B28: pn_dispatcher_input (dispatcher.c:169)
==16195==    by 0x86DCB4C: pn_input_read_amqp (transport.c:1117)
==16195==    by 0x86DF4FD: pn_io_layer_input_passthru (transport.c:1964)
==16195==    by 0x86DF4FD: pn_io_layer_input_passthru (transport.c:1964)
==16195==    by 0x86DC74E: transport_consume (transport.c:1037)
==16195==    by 0x86DF89B: pn_transport_process (transport.c:2052)
==16195== 
==16195== 45,326,848 bytes in 25,294 blocks are possibly lost in loss record
1,866 of 1,867
==16195==    at 0x4C274A0: malloc (in
/usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==16195==    by 0x86CC7AC: pn_data (codec.c:363)
==16195==    by 0x86D4E58: pn_condition_init (engine.c:203)
==16195==    by 0x86D738A: pn_disposition_init (engine.c:1067)
==16195==    by 0x86D756B: pn_delivery (engine.c:1102)
==16195==    by 0x86DB93E: pn_do_transfer (transport.c:738)
==16195==    by 0x86D3A21: pn_dispatch_frame (dispatcher.c:146)
==16195==    by 0x86D3B28: pn_dispatcher_input (dispatcher.c:169)
==16195==    by 0x86DCB4C: pn_input_read_amqp (transport.c:1117)
==16195==    by 0x86DF4FD: pn_io_layer_input_passthru (transport.c:1964)
==16195==    by 0x86DF4FD: pn_io_layer_input_passthru (transport.c:1964)
==16195==    by 0x86DC74E: transport_consume (transport.c:1037)
==16195== 
==16195== 45,328,640 bytes in 25,295 blocks are possibly lost in loss record
1,867 of 1,867
==16195==    at 0x4C274A0: malloc (in
/usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==16195==    by 0x86CC7AC: pn_data (codec.c:363)
==16195==    by 0x86D7360: pn_disposition_init (engine.c:1065)
==16195==    by 0x86D756B: pn_delivery (engine.c:1102)
==16195==    by 0x86DB93E: pn_do_transfer (transport.c:738)
==16195==    by 0x86D3A21: pn_dispatch_frame (dispatcher.c:146)
==16195==    by 0x86D3B28: pn_dispatcher_input (dispatcher.c:169)
==16195==    by 0x86DCB4C: pn_input_read_amqp (transport.c:1117)
==16195==    by 0x86DF4FD: pn_io_layer_input_passthru (transport.c:1964)
==16195==    by 0x86DF4FD: pn_io_layer_input_passthru (transport.c:1964)
==16195==    by 0x86DC74E: transport_consume (transport.c:1037)
==16195==    by 0x86DF89B: pn_transport_process (transport.c:2052)


Looking at the code, I can see this memory should be freed in
pn_disposition_finalize once pn_decref(delivery) is called, but I haven't yet
had a chance to determine why that isn't happening. Has anyone else seen this
before, and is there anything obvious we could be doing wrong?




