directory-dev mailing list archives

From Alex Karasulu <aok...@bellsouth.net>
Subject [asn.1] BER codec design
Date Sat, 24 Jan 2004 04:30:13 GMT
Hi all,

It seems to me like Wes is breaking down the BER encoding/decoding into two
steps with an intermediate structure.  This intermediate structure is the
(BER specific) TLV tree Wes mentioned before.

1). The first is to go from the binary encoding to a binary-encoded TLV
    tree and vice versa.  The TLV tree is just a structure reflecting the
    containment tree of the ASN.1 datatype.  Each leaf in the tree is a
    simple (non-composite) ASN.1 type with a label and value.  I'm very
    interested in your ideas for representing this TLV tree, BTW.

2). The second takes you from a TLV tree to a populated instance of a
    data structure stub (an Object of the stub class) and vice versa.
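Since the thread asks for ideas on representing the TLV tree, here is one
minimal sketch (all names here are hypothetical, not from the thread): each
node carries a tag and either a primitive value (for a leaf) or child TLVs
(for a constructed type).

```java
import java.util.ArrayList ;
import java.util.List ;

// Hypothetical TLV tree node: a leaf holds a primitive value, a
// constructed node holds children.  Tag/length octet encoding is
// deliberately simplified here.
class Tlv
{
    private final int tag ;                 // identifier octets, simplified to an int
    private final byte[] value ;            // primitive value; null for constructed types
    private final List<Tlv> children = new ArrayList<Tlv>() ;

    Tlv( int a_tag, byte[] a_value )
    {
        tag = a_tag ;
        value = a_value ;
    }

    boolean isPrimitive()
    {
        return value != null ;
    }

    void addChild( Tlv a_child )
    {
        children.add( a_child ) ;
    }

    int getTag() { return tag ; }
    byte[] getValue() { return value ; }
    List<Tlv> getChildren() { return children ; }

    // Content length: the value length for a leaf, the sum of the
    // children's content lengths for a constructed node (tag and
    // length octets are omitted for brevity).
    int getLength()
    {
        if ( isPrimitive() )
        {
            return value.length ;
        }

        int l_total = 0 ;
        for ( Tlv l_child : children )
        {
            l_total += l_child.getLength() ;
        }
        return l_total ;
    }
}
```

A constructed node's length falls out recursively from its children, which
is handy when writing length octets during encoding.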

The question is: do we design this process into the server with non-blocking
staged processing in mind, or do we generate stuff conforming to what
works best for this BER codec?  I think we design for the BER codec
but keep in mind that we perform chunk-wise operations.  So when encoding
we ask for the first chunk and keep going back for more until we're
told all has been transferred for that encoding.  Likewise, for decoding
we feed chunks one at a time until we're told we have enough for
a single decoding session.
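The decode side of that loop might look like the toy below (entirely
hypothetical: a real BER decoder would parse tag and length octets, but
here the first octet simply declares how many value octets follow, so the
chunk-feeding protocol is visible on its own):

```java
import java.io.ByteArrayOutputStream ;
import java.nio.ByteBuffer ;

// Toy chunk-wise decoder: the first octet of the stream declares the
// total value length; decode() returns true once that many value
// octets have been accumulated across however many chunks arrive.
class ChunkedDecoder
{
    private int expected = -1 ;
    private final ByteArrayOutputStream accumulated =
        new ByteArrayOutputStream() ;

    /** Feeds one chunk; returns true once a full unit has arrived. */
    boolean decode( ByteBuffer a_chunk )
    {
        while ( a_chunk.hasRemaining() )
        {
            byte l_octet = a_chunk.get() ;

            if ( expected < 0 )
            {
                expected = l_octet & 0xFF ;   // first octet: declared length
            }
            else
            {
                accumulated.write( l_octet ) ;
            }
        }

        return expected >= 0 && accumulated.size() >= expected ;
    }

    /** The decoded value octets; meaningful once decode() returns true. */
    byte[] getDecoded()
    {
        return accumulated.toByteArray() ;
    }
}
```

The caller just keeps handing buffers to decode() until it answers true,
which is exactly the "give chunks until we're told we have enough" shape
described above.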

Also I think we might have to define multiple sub-codecs composing the
stages of a larger ASN.1 BER codec like so:

          codec A           codec B
              
--------- decoder --------- decoder ---------
-       - ------> -       - ------> -       -
-Encoded-         -TLVTree-         -Decoded-
-       - <------ -       - <------ -       -
--------- encoder --------- encoder ---------
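In code, chaining the two stages of the diagram could be as simple as the
sketch below (interface and class names are hypothetical, and the Object
types from the diagram are replaced by generics for clarity):

```java
// Hypothetical one-method decoder stage; codec A's decoder would map
// encoded bytes to a TLV tree, codec B's the TLV tree to stub objects.
interface StageDecoder<I, O>
{
    O decode( I a_input ) ;
}

// Chains two stages so the pair presents itself as one decoder, the
// way the full ASN.1 BER codec would wrap codecs A and B.
class ComposedDecoder<I, M, O> implements StageDecoder<I, O>
{
    private final StageDecoder<I, M> first ;
    private final StageDecoder<M, O> second ;

    ComposedDecoder( StageDecoder<I, M> a_first, StageDecoder<M, O> a_second )
    {
        first = a_first ;
        second = a_second ;
    }

    public O decode( I a_input )
    {
        return second.decode( first.decode( a_input ) ) ;
    }
}
```

The encoder direction composes the same way, just with the stages applied
in the opposite order.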

Let's just look at codec A.  Here we of course need to know what our
tree classes are to build the right argument types, but for now presume
we are working with Objects.

interface EncoderA
{
    // Returns null if the whole encoded image fit into the buffer,
    // otherwise a key to the session holding the saved encoder state.
    SessionKey encode( ByteBuffer a_buf, Object a_inTree ) 
        throws EncodingException ;

    // Deposits the next chunk into the buffer, resuming from the
    // state saved under the key; see below for the return value.
    boolean encode( ByteBuffer a_buf, SessionKey a_key ) ;
}

The encoder deposits encoded binary data into an NIO byte buffer.  Both
methods reflect this.  However, notice that the first returns a
SessionKey and takes the TLVTree as an Object type.
The first method returns null if it can fit the entire encoded
image of the TLVTree into the buffer.  If it cannot, it saves its
state in the encoding process, initializing a session for the
encoding, and returns a key to access that state again to
continue the process.  So for the most part, small response PDUs
should only require one call to the first overloaded encode
method, without incurring any session and key object creation
overhead.

If, however, the encoded image cannot fit into the buffer, then
the session objects are created and a non-null key is returned.
The buffer, BTW, should be completely full at that point.  Calls to
the second method get the next chunk of encoded data, which is again
deposited into the provided ByteBuffer.  This method, unlike the first,
requires the session key to continue the process.
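To make the two-method protocol concrete, here is a stub that follows it
(everything here is a hypothetical sketch: the "encoded image" is just a
byte[], the SessionKey simply records the resume offset, and the boolean
is assumed to mean "more chunks remain"):

```java
import java.nio.ByteBuffer ;

// Hypothetical saved encoder state: the image being written plus the
// offset of the next unwritten octet.
class SessionKey
{
    byte[] image ;
    int offset ;
}

// Stub following the EncoderA protocol: slice a byte[] image into
// buffer-sized chunks, creating a session only when the image does
// not fit in one call.
class StubEncoder
{
    /** Returns null if the whole image fit, else a key to saved state. */
    SessionKey encode( ByteBuffer a_buf, byte[] a_image )
    {
        int l_count = Math.min( a_buf.remaining(), a_image.length ) ;
        a_buf.put( a_image, 0, l_count ) ;

        if ( l_count == a_image.length )
        {
            return null ;        // everything fit: no session created
        }

        SessionKey l_key = new SessionKey() ;
        l_key.image = a_image ;
        l_key.offset = l_count ;
        return l_key ;
    }

    /** Writes the next chunk; returns true while more octets remain. */
    boolean encode( ByteBuffer a_buf, SessionKey a_key )
    {
        int l_count = Math.min( a_buf.remaining(),
            a_key.image.length - a_key.offset ) ;
        a_buf.put( a_key.image, a_key.offset, l_count ) ;
        a_key.offset += l_count ;
        return a_key.offset < a_key.image.length ;
    }
}
```

A caller would invoke the first encode once, flush the buffer, and then
loop on the second encode with the returned key until it answers false.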

I'll come back to the rest later, but this is how we get an encoder
to return chunk-wise data that is conducive to the SEDA architecture.
I think we could use this interface as the interface for the entire
BER encoder and not just for this phase of the encoding within codec
A.

I'll come back to this again later.

Alex

