directory-dev mailing list archives

From Alex Karasulu <aok...@bellsouth.net>
Subject Re: Re: [eve] BufferPool Implementation
Date Fri, 12 Dec 2003 15:54:19 GMT
Harmeet,

Thanks for the comments; I'm glad someone got back to me with
their thoughts on it.

> A) Would it be better to hide the pooling (or not pooling) semantics inside the implementation?

I wanted the fact that these buffers were pooled to be known 
explicitly.  I did not want any question about it.

> Would it be better to rename BufferPool to BufferManager
> and have something like this: 
> 
> interface BufferManager
> {
>    // get a buffer. Hides pooling, reuse etc.
>    Buffer getBuffer(long size, BufferUsage intendedUsageHint);
> 
>    // get a buffer of default size. Default is implementation 
>    // dependent and may be set during configuration.
>    Buffer getBuffer(BufferUsage intendedUsage);
> 
>    // a buffer obtained by BufferManager may be returned for future reuse.
>    void release(Buffer buffer);
> }
> 
> Where BufferUsage could be an enum-like object to indicate the intendedUsage hint.  This may
> cause BufferManager to give out a memory mapped, direct or byte buffer.

I disagree with respect to this BufferUsage enumeration.  I
don't see any reason to use it since we would only want to pool
direct buffers.  The primary reason for pooling any object
is that it costs too much to create and destroy it.  Direct
buffers fit into this category because they are allocated
outside of the Java heap.  The backing stores of non-direct
buffers, on the other hand, are not allocated outside of the
heap; they use byte arrays and the like.  So it would not
make sense for me to create a pool of non-direct buffers: the
synchronization costs would outweigh the cost of creating the
byte[]s backing them.  In the end I would just create the
backing stores of non-direct buffers as I need them without
using a buffer pool.
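
Just to illustrate the distinction (plain java.nio calls, nothing
specific to Eve):

// Allocated outside the Java heap; expensive enough to create and
// destroy that pooling pays off.
ByteBuffer l_direct = ByteBuffer.allocateDirect( 4096 ) ;

// Backed by an ordinary byte[] on the heap; cheap to create, so the
// synchronization cost of a pool would outweigh any savings.
ByteBuffer l_heap = ByteBuffer.allocate( 4096 ) ;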

> B) Is it a good idea to expose BufferPoolConfig?  Why would a user of BufferPool want
> to know about its internal configuration.

Good question.  I primarily exposed the configuration bean so
callers can ask the pool what size buffers it stores before getting
one.  It is also useful for listing the characteristics of
the pool, like how many buffers it grows by when it needs
more room.  This can be used for providing JMX information about the
component.
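
For example, something like this read-only view is what I mean by
exposing the config; the accessor names here are only illustrative:

public interface BufferPoolConfig
{
    // the size in bytes of the direct buffers this pool hands out
    int getBufferSize() ;

    // how many buffers the pool adds when it needs to grow
    int getIncrement() ;

    // an upper bound on the number of buffers the pool will hold
    int getMaximumSize() ;
}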

> C) Why is it a good idea to pass a_party at claim, release time?  A buffer could be obtained
> by one object but be operated on by another object.  The object 'done with' a buffer may not
> know about the initial requestor of the Buffer.

Here's the problem:  We have multiple stages that will be interested
in the buffer.  Basically the buffer is handed off to other stages
using SEDA events that carry the buffer as their payload.  Stages have
events enqueued onto their queues synchronously by Subscribers for
their events of interest.  The hitch is that even though the event
delivery and enqueue operations are synchronous, the stage event
processing is asynchronous and happens within another thread.  So
here's an example sequence of events where stage A writes to a direct
buffer and stage B reads from it.

1). Stage A claims a buffer using getBuffer(this) where this is
    the interested party claiming interest.
2). Stage A creates an instance of an InputEvent which it
    subclasses to allow access to the buffer pool interfaces via
    the event.  Here is the InputEvent class (comments
    removed):

public abstract class InputEvent extends EventObject
{
    protected final ByteBuffer m_buffer ;
    
    public InputEvent( ClientKey a_client, ByteBuffer a_buffer )
    {
        super( a_client ) ;
        m_buffer = a_buffer ;
    }
    
    public abstract ByteBuffer claimInterest( Object a_party ) ;
    public abstract void releaseInterest( Object a_party ) ;
}

    Now here's the code in stage A that claims the buffer and 
    fires the event with the buffer in it.

try
{
    l_buf = m_bp.getBuffer( this ) ;
    l_channel.read( l_buf ) ;
}
catch ( ResourceException e )
{
    m_monitor.bufferUnavailable( m_bp, e ) ;
    continue ;
}
catch ( IOException e )
{
    m_monitor.readFailed( l_client, e ) ;
    m_bp.releaseClaim( l_buf, this ) ;
    continue ;
}
					
// report to monitor, create the event, and publish it
m_monitor.inputRecieved( l_client ) ;
InputEvent l_event = new ConcreteInputEvent( l_client, l_buf ) ;
m_router.publish( l_event ) ;
m_bp.releaseClaim( l_buf, this ) ;

    The ConcreteInputEvent has a handle to the BufferPool and
    can call the respective interfaces when calls are made to
    the following methods on the event:

    public ByteBuffer claimInterest( Object a_party ) ;
    public void releaseInterest( Object a_party ) ;
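
    Roughly, the delegation looks something like this.  It's only a
    sketch: here I make ConcreteInputEvent an inner class of Stage A
    so m_bp is in scope (it could just as well take the pool in its
    constructor), and I'm assuming the pool exposes a claim call along
    the lines of claimInterest( ByteBuffer, Object ); the real method
    name may differ.  releaseClaim( ByteBuffer, Object ) is the call
    used above.

// inner class of Stage A, so m_bp is in scope
class ConcreteInputEvent extends InputEvent
{
    ConcreteInputEvent( ClientKey a_client, ByteBuffer a_buffer )
    {
        super( a_client, a_buffer ) ;
    }

    public ByteBuffer claimInterest( Object a_party )
    {
        // assumed pool-side claim call; adds a_party to the interest list
        m_bp.claimInterest( m_buffer, a_party ) ;
        return m_buffer ;
    }

    public void releaseInterest( Object a_party )
    {
        // drops a_party from the interest list; the buffer is freed at zero
        m_bp.releaseClaim( m_buffer, a_party ) ;
    }
}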

3). Once Stage A fires this event with the buffer in it, a Subscriber
    registered with the event notifier/router has its
    inform(EventObject) method called synchronously.  This Subscriber
    may be Stage B or an inner class of Stage B.

4). Stage B's Subscriber, within the context of Stage A's worker
    thread, performs a claimInterest operation on the event using the
    reference to Stage B as the argument.  So the Subscriber claims
    interest in the buffer on behalf of Stage B (see the sketch after
    this list).

5). After the event is fired, Stage A releases its claim to the
    buffer.  At this point the buffer still has a
    positive reference count due to the claim made by the
    subscriber of Stage B.

6). Stage B's driver dequeues the event and invokes a
    worker to process it.  The worker gets access to the
    event's buffer by laying claim to it with Stage B a
    second time, which has no effect since Stage B already
    claimed interest via its Subscriber.  Once the read is
    complete, releaseClaim can be called to remove Stage B
    from the interested-party list.

7). Once the buffer pool detects that the reference count has
    dropped to zero, the buffer is removed from the in-use list
    and added to the free list of the buffer pool to be reused.
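
To make steps 4 and 6 concrete, here is roughly what the Stage B side
looks like.  This is only a sketch: names like InputSubscriber,
m_queue and StageB are illustrative, not the actual classes.

// An inner class of Stage B, so StageB.this is in scope.  Note that
// inform() runs synchronously in Stage A's worker thread.
class InputSubscriber implements Subscriber
{
    public void inform( EventObject an_event )
    {
        InputEvent l_event = ( InputEvent ) an_event ;

        // pin the buffer for Stage B before Stage A releases its claim
        l_event.claimInterest( StageB.this ) ;

        // hand the event off to Stage B's queue for asynchronous processing
        m_queue.enqueue( l_event ) ;
    }
}

// Later, in Stage B's worker thread, where m_stageB is the Stage B reference:
ByteBuffer l_buf = l_event.claimInterest( m_stageB ) ; // no effect, already claimed
// ... read from l_buf ...
l_event.releaseInterest( m_stageB ) ; // count drops to zero, pool reclaims the buffer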

http://jmule.sourceforge.net/doc/javadoc/org/jmule/core/util/ByteBufferFactory.html

I'm looking at this now for more ideas.  BTW I don't like what 
I have here.  It's in my opinion a poor attempt at solving the
problem of sharing a pooled buffer resource across Stages and 
their threads.  I know it will work and I will have to document
the heck out of it to make sure people can understand why it is
made the way it is.  I'm hoping someone will offer a better solution
so I can replace it.

Thanks for your response Harmeet,
Alex




