qpid-proton mailing list archives

From: Rafael Schloming <...@alum.mit.edu>
Subject: Re: Messenger: pn_delivery leaking pn_disposition memory?
Date: Fri, 25 Apr 2014 18:32:22 GMT
On Fri, Apr 25, 2014 at 1:39 PM, Dominic Evans <dominic.evans@uk.ibm.com> wrote:

> In one of our automated client stress tests, we've noticed that we seem
> to be leaking memory. We were previously seeing this on qpid-proton 0.6,
> and I've retested on 0.7 RC3; it is still occurring.
>
> ==16195== 45,326,848 bytes in 25,294 blocks are possibly lost in loss record 1,865 of 1,867
> ==16195==    at 0x4C274A0: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
> ==16195==    by 0x86CC7AC: pn_data (codec.c:363)
> ==16195==    by 0x86D7372: pn_disposition_init (engine.c:1066)
> ==16195==    by 0x86D756B: pn_delivery (engine.c:1102)
> ==16195==    by 0x86DB93E: pn_do_transfer (transport.c:738)
> ==16195==    by 0x86D3A21: pn_dispatch_frame (dispatcher.c:146)
> ==16195==    by 0x86D3B28: pn_dispatcher_input (dispatcher.c:169)
> ==16195==    by 0x86DCB4C: pn_input_read_amqp (transport.c:1117)
> ==16195==    by 0x86DF4FD: pn_io_layer_input_passthru (transport.c:1964)
> ==16195==    by 0x86DF4FD: pn_io_layer_input_passthru (transport.c:1964)
> ==16195==    by 0x86DC74E: transport_consume (transport.c:1037)
> ==16195==    by 0x86DF89B: pn_transport_process (transport.c:2052)
> ==16195==
> ==16195== 45,326,848 bytes in 25,294 blocks are possibly lost in loss record 1,866 of 1,867
> ==16195==    at 0x4C274A0: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
> ==16195==    by 0x86CC7AC: pn_data (codec.c:363)
> ==16195==    by 0x86D4E58: pn_condition_init (engine.c:203)
> ==16195==    by 0x86D738A: pn_disposition_init (engine.c:1067)
> ==16195==    by 0x86D756B: pn_delivery (engine.c:1102)
> ==16195==    by 0x86DB93E: pn_do_transfer (transport.c:738)
> ==16195==    by 0x86D3A21: pn_dispatch_frame (dispatcher.c:146)
> ==16195==    by 0x86D3B28: pn_dispatcher_input (dispatcher.c:169)
> ==16195==    by 0x86DCB4C: pn_input_read_amqp (transport.c:1117)
> ==16195==    by 0x86DF4FD: pn_io_layer_input_passthru (transport.c:1964)
> ==16195==    by 0x86DF4FD: pn_io_layer_input_passthru (transport.c:1964)
> ==16195==    by 0x86DC74E: transport_consume (transport.c:1037)
> ==16195==
> ==16195== 45,328,640 bytes in 25,295 blocks are possibly lost in loss record 1,867 of 1,867
> ==16195==    at 0x4C274A0: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
> ==16195==    by 0x86CC7AC: pn_data (codec.c:363)
> ==16195==    by 0x86D7360: pn_disposition_init (engine.c:1065)
> ==16195==    by 0x86D756B: pn_delivery (engine.c:1102)
> ==16195==    by 0x86DB93E: pn_do_transfer (transport.c:738)
> ==16195==    by 0x86D3A21: pn_dispatch_frame (dispatcher.c:146)
> ==16195==    by 0x86D3B28: pn_dispatcher_input (dispatcher.c:169)
> ==16195==    by 0x86DCB4C: pn_input_read_amqp (transport.c:1117)
> ==16195==    by 0x86DF4FD: pn_io_layer_input_passthru (transport.c:1964)
> ==16195==    by 0x86DF4FD: pn_io_layer_input_passthru (transport.c:1964)
> ==16195==    by 0x86DC74E: transport_consume (transport.c:1037)
> ==16195==    by 0x86DF89B: pn_transport_process (transport.c:2052)
>
>
> Looking at the code, I can see this should get freed in
> pn_disposition_finalize once pn_decref(delivery) is called, but I haven't
> yet had a chance to determine why this isn't occurring. Has anyone else
> seen this before, and is there anything obvious we could be doing wrong?
>
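
For context, the lifecycle being described here is the engine-level one: the
disposition data allocated in pn_disposition_init is freed by
pn_disposition_finalize once the last reference to the delivery is released,
and settling the delivery is what releases the application side of that.
Below is a minimal sketch of a receiver that settles its deliveries, assuming
the 0.7 engine API; the function name drain_link, the link recv_link, and the
buffer handling are illustrative placeholders, not taken from the stress test
in question:

    #include <proton/engine.h>

    /* Drain and settle every complete incoming delivery on a link.
     * Settling releases the reference held on the delivery, which is
     * what lets pn_disposition_finalize free its pn_data_t members. */
    static void drain_link(pn_link_t *recv_link)
    {
        char buf[1024];
        pn_delivery_t *d;
        while ((d = pn_link_current(recv_link)) != NULL) {
            if (!pn_delivery_readable(d) || pn_delivery_partial(d)) break;
            while (pn_link_recv(recv_link, buf, sizeof(buf)) > 0)
                ;                               /* consume the message body */
            pn_link_advance(recv_link);         /* move past this delivery  */
            pn_delivery_update(d, PN_ACCEPTED); /* record local disposition */
            pn_delivery_settle(d);              /* frees d eventually; do   */
                                                /* not touch it after this  */
        }
    }

A receiver that never settles keeps every delivery, and its disposition data,
alive for the lifetime of the link, which would produce growth of exactly
this shape.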

Are these the only kinds of valgrind records you are seeing? I can't see
offhand how it would be possible to leak the nodes inside a pn_data_t
without also leaking a whole bunch of other stuff. I ran the simple
messenger send/recv examples under valgrind and they were clean for me.
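
The receive pattern those examples follow is roughly the one below. This is a
sketch assuming the 0.7 Messenger API; the subscribe address, window size, and
batch count are illustrative placeholders. If a stress test receives without
ever accepting (or settling) its incoming trackers, the underlying deliveries
and their disposition data accumulate, which would match the traces above:

    #include <proton/message.h>
    #include <proton/messenger.h>

    int main(void)
    {
        pn_messenger_t *m = pn_messenger(NULL);
        pn_message_t *msg = pn_message();
        /* an incoming window bounds unsettled deliveries: accepted
         * messages that fall outside it are settled automatically */
        pn_messenger_set_incoming_window(m, 1024);
        pn_messenger_start(m);
        pn_messenger_subscribe(m, "amqp://~0.0.0.0"); /* placeholder address */

        for (int batch = 0; batch < 100; batch++) {   /* bounded for the sketch */
            pn_messenger_recv(m, 128);
            while (pn_messenger_incoming(m) > 0) {
                pn_messenger_get(m, msg);
                pn_tracker_t t = pn_messenger_incoming_tracker(m);
                pn_messenger_accept(m, t, 0); /* releases the delivery once
                                                 it drops out of the window */
            }
        }

        pn_message_free(msg);
        pn_messenger_stop(m);
        pn_messenger_free(m);
        return 0;
    }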

--Rafael
