directory-dev mailing list archives

From Alex Karasulu <aok...@bellsouth.net>
Subject Re: [asn.1] BER codec design
Date Sat, 24 Jan 2004 18:12:27 GMT
Wes,

I'm a little confused about your approach; could you gimme some mock 
interfaces so I can understand? 

I wrote some more inline ...

> 
> From: "Wes McKean" <wmckean@logictrends.com>
> I am a very very strong proponent of the KISS principle.  The easier 
> it is to understand, the less likelihood there is that something will 
> break, and the easier it is to get more or other people to work on it.  

I agree 100%!

> With that in mind, I believe the encoding and decoding should continue 
> to use an event based architecture.

Why? I presume you're talking in terms of SEDA events.  Why would modeling
your API specifically to suit the needs of the server make it more generic
and reusable?  This approach actually seems to contradict the second
sentence in this email about making it usable by as many people as possible.

You could make the codec event driven if you wanted to, but not tie it down
to our SEDA design in Eve.  You don't want other users to have to know or 
import Eve code.  Knowing you, you'd probably create your own event types
for it if you chose this approach, so the import stuff would be a non-issue.

I'm not trying to make this complex or sit here writing emails forever, but 
I think this sort of discussion is good - we need to go through it at 
least once.  

OK, I wrote this in an earlier email, but perhaps I was not clear.
There is no need for you to taint what you're doing to make it fit into 
Eve's design.  Stay as general as possible.  You're writing a generic API
for encoding and decoding ASN.1 messages.  Your TLVTree codec will be
usable for any BER based communications, not just those based on LDAP.
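
To make this concrete, here's roughly the shape I have in mind - all of
these names (BERDecoder, DecoderCallback, decodeOccurred) are made up,
just to illustrate a contract that carries no Eve or SEDA imports at all:

    import java.nio.ByteBuffer;

    /**
     * A generic BER decoder contract: no Eve types, just the
     * codec's own callback.  (Hypothetical sketch.)
     */
    public interface BERDecoder
    {
        /** Registers the callback that receives decoded units. */
        void setCallback( DecoderCallback cb );

        /** Feeds the decoder a chunk; call it as many times as it
         *  takes for a complete unit to accumulate. */
        void decode( ByteBuffer chunk );
    }

    /** Fired by the decoder each time it completes a unit. */
    interface DecoderCallback
    {
        void decodeOccurred( BERDecoder decoder, Object decoded );
    }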

Now, this BER handling phase that produces the TLVTree has a couple of
requirements, because we want to make it work efficiently with both 
blocking and non-blocking servers:

1. Don't presume the substrate (the data to be encoded or decoded) is 
   provided all at one time; this is good for more than just non-blocking 
   servers - it also helps any server that needs to stream out large 
   amounts of data in chunks.

2. Design the interfaces so they can be used in both a blocking and 
   non-blocking fashion (see the sketch after this list).
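
Here's how a caller might drive such a decoder, whether the reads block
or not - again just a sketch built on the hypothetical interface above:

    import java.nio.ByteBuffer;
    import java.nio.channels.ReadableByteChannel;
    import java.util.List;

    // The same decoder serves a blocking read loop or a selector
    // handing over chunks: partial data is fine, state is kept.
    public class DecodeLoop
    {
        public static void pump( ReadableByteChannel channel,
                                 BERDecoder decoder,
                                 final List output ) throws Exception
        {
            decoder.setCallback( new DecoderCallback()
            {
                public void decodeOccurred( BERDecoder d, Object decoded )
                {
                    output.add( decoded ); // a complete unit popped out
                }
            } );

            ByteBuffer buf = ByteBuffer.allocate( 4096 );

            while ( channel.read( buf ) > 0 )
            {
                buf.flip();
                decoder.decode( buf ); // may or may not finish a PDU
                buf.clear();
            }
        }
    }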

Think of yourself as an API writer w/ an implementation and not a SEDA 
stage writer.  I'm your client.  I think you should make the API as 
general as possible, so that even clients can use it and so it can fit 
into a framework that can swap out your implementation whenever it would 
like to.

> Given that it is highly unlikely that the thread passing objects to the 
> encoder needs to care about what's done with them after they are encoded, 
> we can simply register a sink with the encoder, and every time it fills up 
> a ByteBuffer, it simply fires an event off to the sink which processes it.

You could do that.  You're just mimicking a session for that PDU decode or
encode operation.  You're still going to need a key to associate pieces of
work with a particular request: remember, the sink could be processing 
multiple requests at a time.  So you need to multiplex the incoming chunks.
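
One simple way to handle that - hypothetical sketch again - is to keep
one decoder per connection or request key, so chunks from different
requests never get fed into the same state machine:

    import java.nio.ByteBuffer;
    import java.util.HashMap;
    import java.util.Map;

    // Demultiplexes incoming chunks: one decoder per key.
    public class DecoderRegistry
    {
        private final Map decoders = new HashMap(); // key -> BERDecoder

        public void onChunk( Object requestKey, ByteBuffer chunk )
        {
            BERDecoder decoder = ( BERDecoder ) decoders.get( requestKey );

            if ( decoder == null )
            {
                decoder = newDecoder( requestKey );
                decoders.put( requestKey, decoder );
            }

            decoder.decode( chunk ); // resumes that request's PDU
        }

        private BERDecoder newDecoder( Object key )
        {
            // Wire up a decoder whose callback tags decoded units
            // with the key, so downstream stages know which request
            // they belong to.  Elided in this sketch.
            throw new UnsupportedOperationException( "sketch only" );
        }
    }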

> I assume this sink would place the ByteBuffer on the output queue for the
> socket, given that the object writing to the socket should be in its own 
> thread.  (Hell, since the sink is just an interface, it could be implemented 
> by the same object utilizing the encoder, so it would *know*).  Using this 
> method, the code feeding the encoder has only one thing to worry about.

You're still thinking in terms of a server.  Forget about how Eve will
handle this for now.  We will have to use your BER provider in the clients
as well.  We use the message provider framework in clients as well as the 
server.

> Now, as far as decoding goes...  I see the thread reading the ByteBuffers,
> and the thread doing the decoding as two separate entities.  The reader 
> thread reads a ByteBuffer, puts it in the queue of the decoder.  Now, here's 
> the trick.  The decoder may or may not be currently decoding.  If a thread 
> is running the decoder, then the buffer will be picked up by this thread.  
> If the decoder is not currently running, then a thread is picked from the 
> pool, and started on the decoder.  Once the decoder has a complete PDU, ASN 
> stub, or whatever, it fires an event on its own sink for a process to handle 
> the TLV tree.

OK, I have an idea for how we can just start coding without getting off
track.  Why don't you build a ByteBuffer to TLVTree generator?  At this
point, just presume the entire tree's contents are in the ByteBuffer.  
We can worry about freezing the state when the buffer is incomplete and 
starting it back up again with another buffer later.
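
To get you started, the whole-buffer version could be as small as the
sketch below - my own simplification, assuming single-byte tags and
definite lengths only (no multi-byte tags, no indefinite form):

    import java.nio.ByteBuffer;
    import java.util.ArrayList;
    import java.util.List;

    // One-shot TLV parser: assumes the complete encoding is in the
    // buffer.  Simplified: single-byte tags, definite lengths only.
    public class TLV
    {
        public final int tag;
        public final byte[] value;           // null for constructed nodes
        public final List children = new ArrayList();

        private TLV( int tag, byte[] value )
        {
            this.tag = tag;
            this.value = value;
        }

        public static TLV parse( ByteBuffer buf )
        {
            int tag = buf.get() & 0xFF;
            int length = buf.get() & 0xFF;

            if ( length > 0x80 )             // long form length
            {
                int octets = length & 0x7F;
                length = 0;

                for ( int i = 0; i < octets; i++ )
                {
                    length = ( length << 8 ) | ( buf.get() & 0xFF );
                }
            }

            if ( ( tag & 0x20 ) != 0 )       // constructed: recurse
            {
                TLV node = new TLV( tag, null );
                int end = buf.position() + length;

                while ( buf.position() < end )
                {
                    node.children.add( parse( buf ) );
                }

                return node;
            }

            byte[] value = new byte[length]; // primitive: copy value out
            buf.get( value );
            return new TLV( tag, value );
        }
    }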

When designing around your TLV tree, keep in mind you might have to
break it up into pieces, where you release, say, 12 TLV nodes from the
decoder from one ByteBuffer and 8 nodes from another, and so on.  So
think in terms of returning TLV nodes as arrays.  These nodes naturally
come out of the socket in depth-first order, right?  So your TLV tree
is "streaming" out of the decoder itself in chunks.  And conversely, 
they can be streamed into an encoder in chunks.  This way you never 
need the entire tree in memory at one time.
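
When we later make the parser resumable, each decode call would just
release whatever nodes completed inside that chunk - something like this
hypothetical callback:

    // Each chunk releases the nodes it completed, in the depth-first
    // order they came off the wire - e.g. 12 nodes from one
    // ByteBuffer, 8 from the next.
    interface TLVCallback
    {
        void nodesDecoded( TLV[] nodes );
    }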

Alex


