directory-dev mailing list archives

From Alex Karasulu <>
Subject RE: RE: [eve] A Buffer Pool (of direct mapped buffers)
Date Wed, 03 Dec 2003 15:50:32 GMT
It's amazing how a few hours of sleep can help clear your thoughts.

> My big question is how per-thread allocation of direct
> buffers works with the SEDA model.  Let's see: a stage has an event
> queue, a pool of worker threads, and one handler thread.  The
> handler thread dequeues events and gives each to a worker to process.
> If we create a direct memory buffer for each worker in a ThreadLocal,
> then, for example, in the input module where we read from the client,
> the reading thread can read into its own allocated buffer.  Now
> this buffer has to be handed off to the next stage (the decoder)
> using an event.  This event is then processed in another thread,
> which drives the read from the buffer to decode it.  So it does
> not work that well; meaning synchronization issues will occur and

I agree with myself here this fresh new day.  The fact that
the buffer must be made available to threads other than the one
populating it makes TLS moot.  It would have been nice to do, but
the idea conflicts with the SEDA design.

> So actually pooling direct memory buffers using a central pool
> is looking like a great idea.  Synchronization will be required,
> however.  BTW what I see happening here is: the worker thread
> asks for a buffer from the direct buffer pool.  The pool gives
> exclusive access to this buffer to the requesting worker.  The
> worker then uses the buffer to read data into it from a channel.
> When the non-blocking read completes, the buffer is packaged with
> an InputEvent and enqueued by the worker thread onto the decoder
> stage's event queue.
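The checkout flow described above could be sketched roughly as follows.  This is a minimal, hypothetical pool, not Eve's actual API; all names (`BufferPool`, `acquire`, `release`) and the fixed-size/blocking-acquire policy are illustrative assumptions:

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Deque;

/** A minimal sketch of a central pool of direct buffers (illustrative only). */
public class BufferPool {
    private final Deque<ByteBuffer> free = new ArrayDeque<>();

    public BufferPool(int count, int bufferSize) {
        for (int i = 0; i < count; i++) {
            free.push(ByteBuffer.allocateDirect(bufferSize));
        }
    }

    /** Hands exclusive access to one buffer to the requesting worker. */
    public synchronized ByteBuffer acquire() throws InterruptedException {
        while (free.isEmpty()) {
            wait();              // block until some buffer is released
        }
        return free.pop();
    }

    /** Returns a buffer to the pool so it can be reclaimed for reuse. */
    public synchronized void release(ByteBuffer buf) {
        buf.clear();             // reset position/limit for the next reader
        free.push(buf);
        notify();                // wake one waiting worker
    }
}
```

The `synchronized` acquire/release is exactly the synchronization cost mentioned above; it is paid only at checkout and return, not while the worker reads from the channel into its exclusively held buffer.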

The key here is to package a read-only view buffer into
the event so we don't need to synchronize.  The read-only view
buffer's backing store is the backing store of the allocated
direct buffer.  Now the only problem is: when do we give
the original direct buffer back to the buffer pool to be reclaimed
for use once again?
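For what it's worth, java.nio supports this view directly: `ByteBuffer.asReadOnlyBuffer()` returns a buffer that shares the original's backing store but rejects all mutating operations.  A small demonstration:

```java
import java.nio.ByteBuffer;

public class ReadOnlyViewDemo {
    public static void main(String[] args) {
        ByteBuffer direct = ByteBuffer.allocateDirect(8);
        direct.put((byte) 42);
        direct.flip();                        // prepare for reading

        // The view shares the direct buffer's backing store but cannot
        // mutate it, so a downstream stage can read without clobbering it.
        ByteBuffer view = direct.asReadOnlyBuffer();

        System.out.println(view.isReadOnly()); // true
        System.out.println(view.get(0));       // 42
    }
}
```

Note that each call to `asReadOnlyBuffer()` yields a view with its own independent position and limit, so each listener can be handed its own view and they will not disturb each other's read cursors.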

I thought it could be given back after the event has been delivered
to the last listener.  The problem here, however, is that stage
event processing is asynchronous.  The listener enqueues the event
and returns immediately, so even after delivering the event to all
subscribers/listeners, the event may not have been processed.

So how do we reclaim buffers?  Or more importantly, how do we know
when the last listener is done processing the event?

One approach comes to mind, though I don't like it very much:
flag SEDA events, or events that are processed asynchronously.
When stages or listeners process such events, they are required to
add themselves to the event as a concerned party.  Once the event
processing is complete, they remove themselves from the event as
a concerned party.  The last party removed from the event automatically
requests the reclaiming of the buffer carried in the event.  How
would this be enforced - meaning, how do we make listeners use these
methods so the concern can be noted and removed?
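The concerned-party idea amounts to reference counting on the event itself.  A hypothetical sketch, purely illustrative (none of these names exist in Eve, and `AtomicInteger` is just one convenient way to keep the count thread-safe):

```java
import java.nio.ByteBuffer;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Sketch of an event carrying a pooled direct buffer.  Parties claim
 * interest before handing the event off and release it when done; the
 * last release triggers reclamation of the buffer.
 */
public class PooledBufferEvent {
    private final ByteBuffer buffer;
    private final Runnable reclaim;       // e.g. a callback into the pool
    private final AtomicInteger interested = new AtomicInteger(0);

    public PooledBufferEvent(ByteBuffer buffer, Runnable reclaim) {
        this.buffer = buffer;
        this.reclaim = reclaim;
    }

    /** Hands out a read-only view so parties cannot mutate the data. */
    public ByteBuffer getBuffer() {
        return buffer.asReadOnlyBuffer();
    }

    /** A party registers its concern before the event is enqueued. */
    public void claimInterest(Object party) {
        interested.incrementAndGet();     // a real version might record who
    }

    /** The last party to release triggers buffer reclamation. */
    public void releaseInterest(Object party) {
        if (interested.decrementAndGet() == 0) {
            reclaim.run();
        }
    }
}
```

One way to answer the enforcement question would be to keep the claim/release calls out of listener code entirely: the stage machinery claims interest when it enqueues the event and releases it after the handler returns, so individual listeners never have to remember to do it.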

Any better, simpler ideas out there?  Anyone have a good pattern
or two?  I guess this sounds more like an offshoot of
the Observer pattern used to detect event processing state.

> Ohh I'm already finding problems here too.

The problems are going away.

> Looks like a synchronization nightmare.  Let me think of this
> some more on the ride home.  I'll try to get back to you tomorrow.



