directory-dev mailing list archives

From Alex Karasulu <>
Subject [snickers] Work on non-blocking ASN.1 BER API
Date Fri, 12 Dec 2003 19:29:19 GMT

As I was working on the frontend I hit a point where I had to
make a decision.  After implementing a DefaultInputManager,
I realized that the Decoder implementation would need to be
non-blocking, or completely different from the way it exists today.

Let me give some background on the matter.  The Decoder (like
the Encoder) must decode messages from the input stream into
objects in memory using the Basic Encoding Rules (BER), based on
the ASN.1 definitions for the LDAPv3 protocol.  PDUs (Protocol
Data Units) are marshalled and demarshalled by BER Encoders and
Decoders respectively.  The Decoder and Encoder modules within
the server are responsible for these aspects of the protocol.
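For concreteness, here is a minimal, hypothetical sketch of BER's
tag-length-value (TLV) layout on the wire.  This is not the server's
actual decoder; it assumes a complete TLV is already in the buffer
and only handles definite-length encodings:

```java
import java.nio.ByteBuffer;

// Toy illustration of one BER tag-length-value (TLV) triple.
// The real decoder must cope with partial buffers; this only
// shows the wire format itself.
public class BerTlv {
    // Read a BER length field (short or long definite form).
    public static int readLength(ByteBuffer buf) {
        int first = buf.get() & 0xFF;
        if ((first & 0x80) == 0) {
            return first;              // short form: length fits in one octet
        }
        int numOctets = first & 0x7F;  // long form: count of length octets
        int length = 0;
        for (int i = 0; i < numOctets; i++) {
            length = (length << 8) | (buf.get() & 0xFF);
        }
        return length;
    }

    public static void main(String[] args) {
        // 0x30 = SEQUENCE tag, 0x03 = short-form length, then 3 value octets
        ByteBuffer buf = ByteBuffer.wrap(new byte[] { 0x30, 0x03, 0x01, 0x02, 0x03 });
        int tag = buf.get() & 0xFF;
        int length = readLength(buf);
        System.out.println("tag=" + tag + " length=" + length);
    }
}
```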

Now the input managing stage reads input from clients into a direct
buffer and fires an InputEvent with the buffer as the payload.
Subscribers for this InputEvent type are informed of it.  The
Decoder stage, which follows the input stage, is one such
Subscriber for these events.

Before the use of non-blocking IO, the Decoder would perform a
blocking read, waiting until the entire PDU was delivered to the
server from the client.  At that time we used the Snacc runtime
libraries, which supplied streams to encode and decode the data
to and from compiled ASN.1 stubs generated by the Snacc compiler.

Now with non-blocking IO, an InputEvent may carry only a small
portion of the PDU.  There really is no way to tell.  We could
still use the same Snacc decoding streams (that is, if the licensing
issues were not present) if we collected the content into a
temporary buffer for that PDU and read from it.  The call to the
decoder's read would return when the buffer was filled with a
complete PDU.  This approach is not good for a few reasons.

1). The entire contents of the PDU must be in memory at one
    time, along with the marshalled message object representing
    the PDU.  This means two copies of the same data are in memory
    at the same time.  Plus, a direct memory buffer will hold a
    piece of the PDU as well at some points in time.

2). Two threads will be needed to decode a PDU.  First, a worker
    from the decoder stage must read and decode from the
    temporary PDU buffer.  Meanwhile, other workers must copy the
    buffers of incoming InputEvents into this temporary PDU buffer.
    So whenever decoding is actively occurring, two threads are in
    use: one to produce and another to consume.

These two problems consume both space and time.  If we could find
a way to build a non-blocking decoder, the problem could be reduced
to using one direct buffer and a single thread to drive the
decoding process.
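To make the cost of the rejected approach concrete, here is a toy
illustration of it using a pipe: one thread plays the stage workers
copying fragments into the temporary buffer, while a second thread
performs the blocking read until a whole (here, 5-byte) "PDU" has
arrived.  The names and the 5-byte framing are made up for the sketch:

```java
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;

// Two threads and an extra copy of the data: the cost the
// non-blocking design avoids.
public class BlockingPduRead {
    // Assemble a 5-byte "PDU" from two fragments; returns bytes read.
    static int assemble() throws Exception {
        PipedOutputStream producerEnd = new PipedOutputStream();
        PipedInputStream consumerEnd = new PipedInputStream(producerEnd);

        // Producer: plays the workers copying InputEvent payloads.
        Thread producer = new Thread(() -> {
            try {
                producerEnd.write(new byte[] { 1, 2, 3 });  // first fragment
                producerEnd.write(new byte[] { 4, 5 });     // second fragment
                producerEnd.close();
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        });
        producer.start();

        // Consumer: plays the decoder worker blocking for a full PDU.
        byte[] pdu = new byte[5];
        int read = 0;
        while (read < pdu.length) {
            int n = consumerEnd.read(pdu, read, pdu.length - read);  // blocks
            if (n < 0) break;
            read += n;
        }
        producer.join();
        return read;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("read " + assemble() + " bytes");
    }
}
```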

Here's what I have in mind.

The InputEvent processing stage should really be a place where
Decoders are managed, one dedicated to each client.  A decoder is
a simple object that takes a ByteBuffer and reads it, partially
populating a PDU Message.  The decoder must also track where it
left off so it can continue to populate the Message envelope when
new buffers with pieces of the PDU arrive.
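A minimal sketch of such a stateful decoder, with made-up names and
a toy framing (a 2-octet header with a short-form length only) rather
than the real snickers API:

```java
import java.nio.ByteBuffer;

// Hypothetical per-client stateful decoder: each call to decode()
// consumes whatever bytes are available and returns true only once
// a complete PDU has been accumulated.  Assumes the first fragment
// carries at least the 2-octet tag + length header.
public class StatefulDecoder {
    private int expectedLength = -1;   // -1 until the PDU length is known
    private int bytesSeen = 0;

    // Consume the readable bytes of buf; true when the PDU is complete.
    public boolean decode(ByteBuffer buf) {
        if (expectedLength < 0 && buf.remaining() >= 2) {
            buf.get();                               // skip the tag octet
            expectedLength = 2 + (buf.get() & 0x7F); // header + value length
            bytesSeen = 2;
        }
        bytesSeen += buf.remaining();
        buf.position(buf.limit());                   // mark everything consumed
        return expectedLength > 0 && bytesSeen >= expectedLength;
    }

    public static void main(String[] args) {
        StatefulDecoder d = new StatefulDecoder();
        // 0x30 = tag, 0x03 = value length, so the full PDU is 5 bytes
        boolean first = d.decode(ByteBuffer.wrap(new byte[] { 0x30, 0x03, 0x01 }));
        boolean second = d.decode(ByteBuffer.wrap(new byte[] { 0x02, 0x03 }));
        System.out.println(first + " " + second);
    }
}
```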

The DecoderManager can then be implemented as a Stage where
workers processing InputEvents use the payload buffer of the
event to call decode( ByteBuffer ).  The worker adding the
last InputEvent to complete a PDU also triggers the creation
and publishing of a RequestEvent, which carries the marshalled
Message envelope object representing the PDU.
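That stage might be shaped roughly like the sketch below.  The
decoder here is a toy that completes after a fixed 5 bytes, and the
published-request list stands in for the event bus; none of these
names are the real snickers classes:

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical DecoderManager stage: one stateful decoder per client,
// keyed by client id.  The stage worker that feeds in the final
// fragment of a PDU is the one that "publishes" the RequestEvent.
public class DecoderManager {
    interface Decoder { boolean decode(ByteBuffer buf); }

    private final Map<String, Decoder> decoders = new HashMap<>();
    final List<String> publishedRequests = new ArrayList<>();

    // Toy decoder: a PDU is complete after exactly 5 bytes arrive.
    private Decoder newDecoder() {
        return new Decoder() {
            private int seen = 0;
            public boolean decode(ByteBuffer buf) {
                seen += buf.remaining();
                buf.position(buf.limit());
                return seen >= 5;
            }
        };
    }

    // Called by a stage worker for each InputEvent payload.
    public void inputReceived(String clientKey, ByteBuffer payload) {
        Decoder d = decoders.computeIfAbsent(clientKey, k -> newDecoder());
        if (d.decode(payload)) {
            publishedRequests.add(clientKey);  // stands in for a RequestEvent
            decoders.remove(clientKey);        // fresh decoder for the next PDU
        }
    }
}
```

The per-client map is the key design point: decoder state lives with
the client, so any worker thread can drive any client's decoding.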

I spoke to Wes about this stuff on IRC and he volunteered to
work on it.  I will try to help as well but I'm spread thin.
If others are interested please feel free to join the effort.
Basically, what will become snickers for Java will be based
on this code.  Also, in the next few days let's try to discuss
this and the design.
