directory-dev mailing list archives

From "Mark Atwell (JIRA)" <>
Subject [jira] Closed: (DIRMINA-62) Memory 'Leak' when causing auto-expandable ByteBuffer to expand and change buffer-pool (stack).
Date Tue, 05 Jul 2005 16:47:10 GMT
Mark Atwell closed DIRMINA-62:

> Memory 'Leak' when causing auto-expandable ByteBuffer to expand and change buffer-pool
> -----------------------------------------------------------------------------------------------
>          Key: DIRMINA-62
>          URL:
>      Project: Directory MINA
>         Type: Bug
>     Versions: 0.7.2
>  Environment: Non-specific.
>     Reporter: Mark Atwell
>     Assignee: Trustin Lee
>      Fix For: 0.7.3

> We have been using the excellent MINA library - BTW how do you pronounce this: Minner?
or Minor? or...?
> Anyway, we had an apparent memory leak when using the MINA code with auto-expandable
> ByteBuffers. I've tracked it down to the allocate/de-allocate algorithm and the buffer pooling.
> The problem is that we originally requested a small initial buffer and then putXXX()
tons of things into it, causing it to grow. However, when the buffer is released (implicitly
by calling ...write), the now-large buffer gets released to a different pool (stack). Since
these are unbounded pools, the large-buffer pool just accumulates the big buffers - which
practically never get reused.
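The leak pattern described above can be sketched as follows. The pool class here is a hypothetical stand-in, not MINA's actual ByteBufferPool: one unbounded free-list per capacity, with released buffers returned to the pool matching their *current* (post-expansion) capacity.

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;

// Minimal sketch (hypothetical names) of a per-capacity buffer pool
// like the one described above.
public class PoolSketch {
    // one unbounded stack of free buffers per capacity
    static final Map<Integer, ArrayDeque<ByteBuffer>> pools = new HashMap<>();

    static ByteBuffer allocate(int capacity) {
        ArrayDeque<ByteBuffer> stack =
                pools.computeIfAbsent(capacity, c -> new ArrayDeque<>());
        ByteBuffer pooled = stack.poll();
        return pooled != null ? pooled : ByteBuffer.allocate(capacity);
    }

    static void release(ByteBuffer buf) {
        // the buffer goes back to the pool matching its CURRENT
        // capacity -- after auto-expansion that is the large-buffer pool
        pools.computeIfAbsent(buf.capacity(), c -> new ArrayDeque<>())
             .push(buf);
    }

    public static void main(String[] args) {
        ByteBuffer buf = allocate(16);            // small initial request
        // simulate auto-expansion: the small buffer is swapped for a
        // freshly allocated 1 MiB one during a series of put calls
        ByteBuffer grown = ByteBuffer.allocate(1 << 20);
        release(grown);                           // lands in the 1 MiB pool
        // the big buffer now sits in an unbounded pool, never reused by
        // callers who keep requesting 16-byte buffers
        System.out.println(pools.get(1 << 20).size()); // prints 1
    }
}
```

Since callers keep asking for small buffers, the large pool only ever grows, which is the "memory leak" in the summary.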
> I originally thought that I could just release the underlying ByteBuffer when the pool
reaches some maximum size, but no joy. It looks like I may need to rely on garbage collection
kicking in, but this is far from effective (for JavaSoft's lame 'solution', see:
). Why can't the NIO classes have a deallocate/release call?! Grrrr! :o(
> I believe a better/more elegant solution may be to modify ByteBuffer.ensureCapacity()
to also draw from the stacks/pools rather than allocating fresh native ByteBuffers... and I
guess this would be faster too? I've tested this and it seems to work fine:
> The change is in ByteBuffer.ensureCapacity. Change:
>     java.nio.ByteBuffer newBuf = isDirect() ? java.nio.ByteBuffer.allocateDirect(newCapacity)
: java.nio.ByteBuffer.allocate(newCapacity);
> To:
>     java.nio.ByteBuffer newBuf = allocate0(newCapacity,isDirect());
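A sketch of the proposed direction, reusing the same hypothetical per-capacity pools from above: allocate0 here is a stand-in for MINA's internal pooled allocator, and the capacity-doubling growth policy is an assumption for illustration. The key point is that the outgrown buffer is recycled into its pool rather than orphaned.

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;

// Sketch of the proposed fix (hypothetical names): ensureCapacity
// obtains its replacement from the pools instead of calling
// java.nio.ByteBuffer.allocate directly.
public class PooledExpand {
    static final Map<Integer, ArrayDeque<ByteBuffer>> pools = new HashMap<>();

    // stand-in for MINA's internal allocate0(newCapacity, isDirect())
    static ByteBuffer allocate0(int capacity, boolean direct) {
        ByteBuffer pooled =
                pools.computeIfAbsent(capacity, c -> new ArrayDeque<>()).poll();
        if (pooled != null) return pooled;
        return direct ? ByteBuffer.allocateDirect(capacity)
                      : ByteBuffer.allocate(capacity);
    }

    static ByteBuffer ensureCapacity(ByteBuffer buf, int requested) {
        if (requested <= buf.capacity()) return buf;
        int newCapacity = buf.capacity();
        while (newCapacity < requested) newCapacity <<= 1; // assumed doubling policy
        ByteBuffer newBuf = allocate0(newCapacity, buf.isDirect());
        buf.flip();
        newBuf.put(buf);                          // copy existing content across
        buf.clear();
        // recycle the outgrown buffer instead of dropping it on the floor
        pools.computeIfAbsent(buf.capacity(), c -> new ArrayDeque<>()).push(buf);
        return newBuf;
    }

    public static void main(String[] args) {
        ByteBuffer small = ByteBuffer.allocate(16);
        small.put(new byte[16]);                  // fill it so there is content to copy
        ByteBuffer grown = ensureCapacity(small, 100);
        System.out.println(grown.capacity());     // prints 128 (doubled until >= 100)
        System.out.println(pools.get(16).size()); // prints 1: old buffer recycled
    }
}
```

With this shape, both the replacement buffer and the outgrown one stay inside the pools, so expansion no longer bypasses the pooling that MINA does everywhere else.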
> Obviously one interim workaround is to work out (or approximate/over-estimate) the
maximum buffer size up front, but to do this with any degree of accuracy we would need to
encode our data first, which rather defeats the purpose/benefit of the auto-expanding ByteBuffer.

This message is automatically generated by JIRA.
If you think it was sent incorrectly, contact one of the administrators:
For more information on JIRA, see:
