directory-dev mailing list archives

From "Mark Imel" <locr...@imelshire.com>
Subject RE: [eve] A Buffer Pool (of direct mapped buffers)
Date Sun, 07 Dec 2003 17:16:35 GMT
(Sorry for the delayed response, but my home network was having some
'issues'...)

Wow, I'm glad I didn't respond too quickly... You've definitely given me
something to think about.  It's a fascinating problem... The idea of
pooling a resource between asynchronous listeners, and then reclaiming
the resource when everyone is done with it.

I've used ref counting for this in the past, with the idea that once the
ref count of the resource goes to 0, it can add itself back into
the pool.

Ref counting is, of course, fairly simple to implement, as long as the
consumers (client code) are disciplined enough to 'do the right thing'.
When you pass a reference to the buffer off to a listener, you can
automatically bump its ref count... Then the listener must call
release() on the buffer when it's done.

Frankly, that doesn't sound too difficult... Maybe I'm missing some
important factors?
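
A rough sketch of what I mean, just to be concrete -- all the class and
method names here (BufferPool, PooledBuffer, retain, release) are made
up for illustration, not from any existing API:

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical ref-counted buffer pool sketch.  The last release()
// automatically returns the buffer to the pool.
class BufferPool {
    private final Deque<PooledBuffer> free = new ArrayDeque<>();

    synchronized PooledBuffer acquire() {
        PooledBuffer buf = free.poll();
        if (buf == null) {
            buf = new PooledBuffer(this, ByteBuffer.allocateDirect(1024));
        }
        buf.retain(); // caller starts with one reference
        return buf;
    }

    synchronized void reclaim(PooledBuffer buf) {
        free.push(buf);
    }
}

class PooledBuffer {
    private final BufferPool pool;
    final ByteBuffer buffer;
    private int refCount;

    PooledBuffer(BufferPool pool, ByteBuffer buffer) {
        this.pool = pool;
        this.buffer = buffer;
    }

    // Bump the count when handing the buffer off to another listener.
    synchronized void retain() {
        refCount++;
    }

    // Each listener calls release() when done; the last one
    // puts the buffer back into the pool.
    synchronized void release() {
        if (--refCount == 0) {
            buffer.clear();
            pool.reclaim(this);
        }
    }
}

public class RefCountDemo {
    public static void main(String[] args) {
        BufferPool pool = new BufferPool();
        PooledBuffer buf = pool.acquire();          // refCount == 1
        buf.retain();                               // handed to a second listener
        buf.release();                              // first listener done
        buf.release();                              // last listener: back to the pool
        System.out.println(pool.acquire() == buf);  // true: same buffer is reused
    }
}
```

The discipline burden is exactly what you said: every hand-off must be
paired with a release(), or the buffer leaks out of the pool forever.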

-----Original Message-----
From: Alex Karasulu [mailto:aok123@bellsouth.net] 
Sent: Wednesday, December 03, 2003 7:51 AM
To: Apache Directory Developers List
Subject: RE: RE: [eve] A Buffer Pool (of direct mapped buffers)


It's amazing how a few hours of sleep can help clear your thoughts.

> My big question is how the per thread allocation of direct buffers
> works with the SEDA model.  Let's see a stage has an event queue and a
> pool of worker threads and one handler thread.  The handler thread
> dequeues events and gives them to a worker to process. If we create a
> direct memory buffer for each worker in a ThreadLocal then for example
> in the input module where we read from the client the reading thread
> can read into its own allocated buffer.  Now this buffer has to be
> handed off to the next stage (the decoder) using an event.  This event
> is then processed in another thread which drives the read from the
> buffer to decode it.  So it does not work that well; meaning
> synchronization issues will occur and

+1
I agree with myself here this fresh new day.  The fact that
the buffer must be made available to more than the thread populating it
makes TLS moot.  It would have been nice to do but the idea conflicts 
with the SEDA design.

> So actually pooling direct memory buffers using a central pool is
> looking like a great idea.  Synchronization will be required however.
> BTW what I see happening here is the worker thread asks for a buffer
> from the direct buffer pool.  The pool gives exclusive access to this
> buffer to the requesting worker.  The worker then uses the buffer to
> read data into it from a channel. When the non-blocking read completes
> the buffer is packaged with an InputEvent and enqueued by the worker
> thread onto the decoder stage's event queue.

The key here is to package a read-only view buffer into the
event so we don't need to synchronize.  The read-only view
buffer's backing store is the backing store of the allocated direct
buffer.  Now the only problem is: when do we give the
original direct buffer back to the buffer pool to be reclaimed
for use once again?
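
In NIO terms this is just asReadOnlyBuffer(): the view shares the
direct buffer's backing store (no copy), but writes through the view
are impossible, so handing one to each listener is safe without locks:

```java
import java.nio.ByteBuffer;

// Sketch of the read-only view idea: the view shares the pooled
// direct buffer's backing store, so no copy is made, and listeners
// cannot modify the pooled buffer through it.
public class ViewBufferDemo {
    public static void main(String[] args) {
        ByteBuffer direct = ByteBuffer.allocateDirect(64);
        direct.put((byte) 42);  // simulate a read from the channel
        direct.flip();          // make the data readable

        // Each listener gets its own view with independent position/limit.
        ByteBuffer view = direct.asReadOnlyBuffer();
        System.out.println(view.get(0));       // 42, through the same backing store
        System.out.println(view.isReadOnly()); // true
    }
}
```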

I thought it could be given back after the last listener has had the
event delivered to it.  The problem here, however, is that stage event
processing is asynchronous.  The listener enqueues the event and returns
immediately, so even after delivering the event
to all subscribers/listeners the event may not have been processed.

So how do we reclaim buffers?  Or more importantly, how do we know when
the last listener is done processing the event?

One approach that comes to mind, though I don't like it very much, is to
flag SEDA events or events that are processed asynchronously. When
stages or listeners process such events they are required to add
themselves to the event as a concerned party.  Once their event
processing is complete they remove themselves from the event as
a concerned party.  The last party removed from the event automatically
requests the reclaiming of the buffer carried in the event.  The open
question is how this is enforced - that is, how do we make listeners use
these methods so the concern can be noted and removed?
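
Something along these lines is what I'm picturing -- again, every name
here (InputEvent, addInterest, removeInterest) is hypothetical:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical "concerned party" sketch: each stage registers interest
// before enqueueing the event and deregisters when its processing is
// done; the last party to deregister triggers the buffer reclaim.
class InputEvent {
    private final AtomicInteger interested = new AtomicInteger();
    private final Runnable reclaim; // e.g. returns the buffer to the pool

    InputEvent(Runnable reclaim) {
        this.reclaim = reclaim;
    }

    void addInterest() {
        interested.incrementAndGet();
    }

    void removeInterest() {
        if (interested.decrementAndGet() == 0) {
            reclaim.run(); // last concerned party reclaims the buffer
        }
    }
}

public class ConcernedPartyDemo {
    public static void main(String[] args) {
        InputEvent event = new InputEvent(
                () -> System.out.println("buffer reclaimed"));
        event.addInterest();    // decoder stage registers
        event.addInterest();    // second listener registers
        event.removeInterest(); // decoder done, buffer still live
        event.removeInterest(); // last party: prints "buffer reclaimed"
    }
}
```

Which of course still leaves the enforcement problem: nothing stops a
listener from forgetting to call removeInterest().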

Any better, simpler ideas out there?  Anyone have a good pattern or
two?  I guess this sounds like an offshoot of
the Observer pattern used to detect event processing state.

> Ohh I'm already finding problems here too.

The problems are going away.

> Looks like a synchronization nightmare.  Let me think of this some
> more on the ride home.  I'll try to get back to you tomorrow.

-1 

Thoughts?  

Alex


