directory-dev mailing list archives

From Alex Karasulu <>
Subject RE: RE: [eve] A Buffer Pool (of direct mapped buffers)
Date Thu, 01 Jan 1970 00:00:00 GMT

One last thing: I may be wrong about the statement concerning
direct memory buffers not being on the Java heap.  Perhaps
someone else has the answer to this.  Anyway, I do know that
the GC is told not to mess with direct buffers, so the burden
on the GC should not (hopefully) exist with direct buffers.
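For reference, the distinction being discussed is visible directly in the NIO API: `ByteBuffer.allocate` returns a heap buffer backed by a `byte[]`, while `ByteBuffer.allocateDirect` returns a buffer backed by native memory. A minimal sketch (class name `BufferDemo` is just for illustration):

```java
import java.nio.ByteBuffer;

public class BufferDemo {
    public static void main(String[] args) {
        // Heap buffer: backed by a byte[] on the Java heap, managed by the GC.
        ByteBuffer heap = ByteBuffer.allocate(1024);

        // Direct buffer: backed by native memory outside the Java heap.
        // Allocation and deallocation are more expensive, but I/O can
        // avoid an extra copy between Java and native memory.
        ByteBuffer direct = ByteBuffer.allocateDirect(1024);

        System.out.println(heap.isDirect());   // false
        System.out.println(direct.isDirect()); // true
        System.out.println(heap.hasArray());   // true: exposes its backing array
        System.out.println(direct.hasArray()); // false on typical JVMs
    }
}
```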


> From: Alex Karasulu <>
> Date: 2003/12/03 Wed AM 01:58:05 EST
> To: "Apache Directory Developers List" <>
> Subject: RE: RE: [eve] A Buffer Pool (of direct mapped buffers)
> > There are a couple of main reasons why we use a per-thread pool.
> > One is that we have a finite set of threads in our server, so if we use
> > ThreadLocalStorage, there's a good chance that whatever we store on that
> > thread we'll get to use again.
> Yes, that makes sense, especially in the context of Eve.
> > Secondly, having a per-thread pool removes any synchronization
> > overhead.  In fact, if you own the thread, you can even pre-create slots
> > on the Thread to prevent HashMap lookups for your TLS.
> Could you clarify the "pre-create slots" part?  I get the synchronization
> overhead, I think.  Basically, if it's stored in a thread local and guaranteed
> to be used by only one thread, you never need synchronization on that resource.
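The no-synchronization point can be sketched with the standard `ThreadLocal` class; each thread lazily gets its own direct buffer, and since only the owning thread ever touches it, no locking is needed (this uses `ThreadLocal.withInitial` from modern Java; the buffer size and class name are illustrative):

```java
import java.nio.ByteBuffer;

public class PerThreadBuffer {
    // Each thread lazily allocates its own direct buffer on first access.
    // Only the owning thread ever touches it, so no synchronization is needed.
    private static final ThreadLocal<ByteBuffer> BUFFER =
        ThreadLocal.withInitial(() -> ByteBuffer.allocateDirect(8192));

    public static ByteBuffer get() {
        ByteBuffer buf = BUFFER.get();
        buf.clear(); // reset position and limit before reuse
        return buf;
    }
}
```

Repeated calls from the same thread return the same buffer instance, which is exactly the reuse the per-thread pool is after.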
> > Now, I'll be the first to admit that the newer VM's have greatly
> > improved the performance regarding synchronization, so if you want to be
> > sure that perThread is faster, or slower, you'll need to run your own
> > tests.
> Oh, I'm sure it would be faster.  There is no doubt about that.
> It's a question, I think, of how much.
> > Also, during long discussions regarding pooling in general, I have to
> > admit that you can be putting a burden on the garbage collector.  In my
> I don't think direct buffers in NIO are even in the Java heap.  They
> cost a lot to create and destroy because the OS is asked to intervene
> at the native level, but the GC is not even aware of this memory.  It
> does not show up on the Java heap.
> My big question is how the per-thread allocation of direct
> buffers works with the SEDA model.  Let's see: a stage has an event
> queue, a pool of worker threads, and one handler thread.  The
> handler thread dequeues events and gives them to a worker to process.
> If we create a direct memory buffer for each worker in a ThreadLocal,
> then, for example, in the input module where we read from the client,
> the reading thread can read into its own allocated buffer.  Now
> this buffer has to be handed off to the next stage (the decoder)
> using an event.  This event is then processed in another thread,
> which drives the read from the buffer to decode it.  So it does
> not work that well; synchronization issues will occur, and
> multiple threads will be needed for each request and held until
> the request processing is completed with a response flush back
> to the client.  Yeah, this will not work that well with SEDA,
> I think, but what are your thoughts?
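The handoff described above can be sketched to show where per-thread ownership breaks down. Once the reader enqueues the event, the buffer is effectively owned by whichever decoder thread dequeues it, so a buffer kept in the reader's ThreadLocal cannot be safely reused. (The `InputEvent` and queue names here are hypothetical, not Eve's actual classes.)

```java
import java.nio.ByteBuffer;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class StageHandoff {
    // Event carrying a filled buffer from the reader stage to the decoder stage.
    static class InputEvent {
        final ByteBuffer buffer; // filled by the reader, drained by the decoder
        InputEvent(ByteBuffer buffer) { this.buffer = buffer; }
    }

    // The decoder stage's event queue, shared between stages.
    static final BlockingQueue<InputEvent> decoderQueue = new LinkedBlockingQueue<>();

    public static void main(String[] args) throws InterruptedException {
        ByteBuffer buf = ByteBuffer.allocateDirect(64);
        buf.put((byte) 0x30);                  // pretend we read one byte from a channel
        buf.flip();                            // prepare the buffer for reading
        decoderQueue.put(new InputEvent(buf)); // hand off: the reader must not touch buf again

        InputEvent ev = decoderQueue.take();   // this runs on a decoder thread in SEDA
        System.out.println(ev.buffer.remaining()); // prints 1
    }
}
```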
> So actually, pooling direct memory buffers using a central pool
> is looking like a great idea.  Synchronization will be required,
> however.  BTW, what I see happening here is the worker thread
> asks for a buffer from the direct buffer pool.  The pool gives
> exclusive access to this buffer to the requesting worker.  The
> worker then uses the buffer to read data into it from a channel.
> When the non-blocking read completes, the buffer is packaged with
> an InputEvent and enqueued by the worker thread onto the decoder
> stage's event queue.  Oh, I'm already finding problems here too.
> Looks like a synchronization nightmare.  Let me think about this
> some more on the ride home.  I'll try to get back to you tomorrow.
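A minimal sketch of the central pool idea, assuming a fixed number of pre-allocated direct buffers and plain `wait`/`notify` for the required synchronization (the class name and sizes are illustrative, not an actual Eve component):

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Deque;

public class DirectBufferPool {
    private final Deque<ByteBuffer> free = new ArrayDeque<>();

    // Pre-allocate the direct buffers once; creation is the expensive part.
    public DirectBufferPool(int count, int capacity) {
        for (int i = 0; i < count; i++) {
            free.push(ByteBuffer.allocateDirect(capacity));
        }
    }

    // Grants exclusive access to one buffer; blocks while the pool is empty.
    public synchronized ByteBuffer acquire() {
        while (free.isEmpty()) {
            try {
                wait(); // wait until some stage releases a buffer
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new IllegalStateException("interrupted waiting for a buffer", e);
            }
        }
        ByteBuffer buf = free.pop();
        buf.clear(); // reset position and limit for the new owner
        return buf;
    }

    // Called by whichever stage finishes with the buffer (e.g. the decoder).
    public synchronized void release(ByteBuffer buf) {
        free.push(buf);
        notify(); // wake one waiting acquirer
    }
}
```

The synchronization cost lands on `acquire` and `release` only; between those calls the owning worker uses the buffer without any locking, which is what makes the exclusive-access handoff to the decoder stage workable.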
> > experience, it's been well worth it, and the stability as well as memory
> > management of our server have improved as a result.
> Alex
