directory-dev mailing list archives

From Alex Karasulu <>
Subject Re: Questions about ASN.1 codec
Date Mon, 31 Jan 2005 17:08:40 GMT
Emmanuel Lecharny wrote:

>Thanks Alex.
>Some remarks:
>1) Since most of the Ls will be one or two bytes long, isn't it a good idea
>to create three subclasses to handle those cases?
>- a ByteConstructedTuple for every tuple whose length is <= 127 bytes
>- a ShortConstructedTuple for every tuple whose length is 128 <= L <= 255
>- a LongConstructedTuple for every tuple whose length is > 255 bytes
NICE! I like that.  This is more efficient, the interface does not 
change, and the logic gets faster too.  Good call.
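The three cases map straight onto BER's short and long length forms. For illustration, a rough sketch of the length encoding that those three classes would each specialize (class and method names are hypothetical, not the real ApacheDS code):

```java
import java.util.Arrays;

/** Illustrative sketch of BER definite-length encoding, split the
 *  way Emmanuel suggests (names are made up for this example). */
public class BerLength {
    /** Encodes a definite length in the minimal BER form. */
    public static byte[] encode(int length) {
        if (length < 0 || length > 0xFFFF) {
            // longer forms exist in BER but are omitted in this sketch
            throw new IllegalArgumentException("length out of sketch range");
        }
        if (length <= 127) {
            // short form: one octet, high bit clear ("ByteConstructedTuple" case)
            return new byte[] { (byte) length };
        } else if (length <= 255) {
            // long form, one length octet: 0x81 then the length ("Short..." case)
            return new byte[] { (byte) 0x81, (byte) length };
        } else {
            // long form, two length octets: 0x82 then big-endian length ("Long..." case)
            return new byte[] { (byte) 0x82,
                                (byte) (length >>> 8),
                                (byte) (length & 0xFF) };
        }
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(encode(5)));     // one octet
        System.out.println(Arrays.toString(encode(200)));   // 0x81 + one octet
        System.out.println(Arrays.toString(encode(1000)));  // 0x82 + two octets
    }
}
```
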

>2) Do we really need an ASN.1 compiler? LDAP protocol is not that
>big ...
:) We can't stop at LDAP, can we?  Actually, no, we do not need the ASN.1 
compiler.  That decision dates back a while, to when we decided to do the 
stub compiler, and it has stayed pretty much the same for some time now.  
It's a nice-to-have, but I'm not going to put too much time into it at this 
stage.  It's not high priority, but it's a very useful thing to have if we 
augment services with other ASN.1-based protocols.

I'd like to design a few myself one day.

Right now, though, there's no need to put focus there.  The runtime is the 
most important piece in terms of performance and preventing DoS attacks.

>3) Encoder could be something like a pipeline :
>Java classes (representing LDAP operation and responses) + a toNode()
>method -> serialization to PDU (special NodeToPDU utility class) ->
>dumping the result to a Channel
That's how it works today.  We actually generate the TLV tree (not good, 
really, because of memory usage) and then use a visitor to transform that 
tree.  For example, the encoders that generate TLVs for parts of the LDAP 
message can generate them in any configuration, meaning they may use 
indeterminate (indefinite-length) forms and so on.  LDAP does not allow 
this, but it makes for simpler code.  The visitor then descends the tree, 
altering it into the determinate form.  So there are stages, a pipeline of 
processing.  However, I do admit the formality around this pipelined 
approach needs to improve.
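To make the two-pass idea concrete, here's a toy sketch (illustrative names only, not the real ApacheDS visitor) of a post-order pass that fills in definite lengths bottom-up, turning an indeterminate tree into the determinate form LDAP requires:

```java
import java.util.ArrayList;
import java.util.List;

/** Toy TLV tree node (names are made up for this example). */
class Tlv {
    byte[] value = new byte[0];             // primitive payload
    List<Tlv> children = new ArrayList<>(); // constructed children
    int length = -1;                        // -1 == indeterminate

    /** Total encoded size once the length is known; assumes one-octet
     *  tags and short-form lengths for brevity. */
    int encodedSize() {
        return 1 + 1 + length;
    }
}

/** Post-order walk: visit children first, then compute this node's length. */
class DeterminateVisitor {
    void visit(Tlv node) {
        if (node.children.isEmpty()) {
            node.length = node.value.length;
        } else {
            int total = 0;
            for (Tlv child : node.children) {
                visit(child);
                total += child.encodedSize();
            }
            node.length = total;
        }
    }

    public static void main(String[] args) {
        Tlv leaf = new Tlv();
        leaf.value = new byte[] { 1, 2, 3 };
        Tlv seq = new Tlv();
        seq.children.add(leaf);
        new DeterminateVisitor().visit(seq);
        System.out.println(seq.length); // leaf's 1 tag + 1 length + 3 value octets
    }
}
```
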

>3) Decoder is a much more complex issue, because we have to deal with
>badly formatted PDUs (specially crafted PDUs written to break the
>decoder). 
Right, and this problem is amplified further by the fact that decoders 
are stateful. 

>Is a callback system robust enough to handle a
>TLTLTLTLTLTLTLTLTL..(many recursive TLVs)... TLV? (Could that happen?) I
>buy the idea of removing the Digester concept. This is not a generic
>ASN.1 decoder, is it?
It's designed as a mechanism that sits on top of a TLV stream (both BER 
and DER work with it), so it is not a generic ASN.1 decoder; it is 
specific to BER and DER streams.
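To illustrate the idea (with purely hypothetical names, nothing from the actual codebase), a toy callback-driven walk over a BER-style TLV stream might look like this. The real decoder additionally has to be stateful, since a PDU can arrive split across several network reads; this sketch also handles only one-octet tags and short-form lengths:

```java
import java.util.ArrayList;
import java.util.List;

/** Fired for each complete TLV recognized in the stream (sketch only). */
interface TlvCallback {
    void onTlv(int tag, byte[] value);
}

/** Minimal illustrative callback decoder over a BER-style TLV stream. */
class SimpleTlvDecoder {
    static void decode(byte[] stream, TlvCallback cb) {
        int pos = 0;
        while (pos < stream.length) {
            int tag = stream[pos++] & 0xFF;   // one-octet tag
            int len = stream[pos++] & 0x7F;   // short-form length only
            byte[] value = new byte[len];
            System.arraycopy(stream, pos, value, 0, len);
            pos += len;
            cb.onTlv(tag, value);             // fire callback; nothing is buffered
        }
    }

    public static void main(String[] args) {
        // two OCTET STRING TLVs back to back: "hi" and "yo"
        byte[] stream = { 0x04, 2, 'h', 'i', 0x04, 2, 'y', 'o' };
        List<String> seen = new ArrayList<>();
        decode(stream, (tag, value) -> seen.add(new String(value)));
        System.out.println(seen);
    }
}
```
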

>4) Channels could be a plus. They can deal with ByteBuffer[], so we don't
>have to build a byte array before sending it through the socket. 
Yeah, I have to clean up the way we manage buffers altogether.  It's 
really sloppy in there right now.  It works, but it's inefficient.  I can 
probably squeeze 10x more performance out of it.
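A minimal sketch of what the gathering write buys us, assuming the encoder leaves each TLV piece in its own ByteBuffer (a temp file stands in for the socket channel here; in real code it would be the socket's channel):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class GatherDemo {
    public static void main(String[] args) throws IOException {
        // each piece of the TLV stays in its own buffer; no byte[] is assembled
        ByteBuffer tag = ByteBuffer.wrap(new byte[] { 0x30 });    // SEQUENCE tag
        ByteBuffer len = ByteBuffer.wrap(new byte[] { 0x03 });    // length octet
        ByteBuffer val = ByteBuffer.wrap(new byte[] { 1, 2, 3 }); // value octets

        Path tmp = Files.createTempFile("pdu", ".ber");
        try (FileChannel ch = FileChannel.open(tmp, StandardOpenOption.WRITE)) {
            // one gathering write instead of copying three buffers into one array
            ch.write(new ByteBuffer[] { tag, len, val });
        }
        System.out.println(Files.size(tmp));
        Files.delete(tmp);
    }
}
```
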

>have a MemoryMapped mech., so it may be used for huge PDUs (not sure
>actually, I'm checking)
No, I do not.  I delayed streaming large TLV values to disk.  BTW, another 
mechanism could be breaking a large TLV into several smaller TLVs, but you 
must then reassemble them again, so there is no point to it.  The whole 
goal here is to avoid swallowing a variable-sized chunk of memory to hold 
the value.  Imagine the fun a DoS attack could have with the server in 
that case.

BTW, I don't know if we need to use memory mapping here.  Perhaps we can, 
but I recommend building into the subsystem a temporary disk store that 
can manage a file of raw bytes, clean up holes, and manage the 
allocation/deallocation of values.  We can take that approach, or we can 
use flat files on the disk subsystem for each large PDU, then have a 
referral with a file URL pointing to the file containing the large value 
(which might be something like a big image, for example).

I don't know what the right approach is at the moment.  I figure this is 
going to take a couple of tries before we get it right.
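As a sketch of the flat-file option (the threshold, names, and return convention are all made up for illustration; the real thing would stream value octets chunk by chunk as they are decoded):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;

/** Illustrative large-value spooling: values past a threshold go to
 *  disk, and the caller keeps only a reference (e.g. a file URL). */
class LargeValueStore {
    static final int THRESHOLD = 8192; // spool values bigger than this

    /** Returns the in-memory bytes for small values, or a Path to the
     *  spooled file for large ones. */
    static Object store(byte[] value) throws IOException {
        if (value.length <= THRESHOLD) {
            return value; // small value: safe to keep in memory
        }
        Path file = Files.createTempFile("large-value", ".bin");
        try (OutputStream out = Files.newOutputStream(file)) {
            out.write(value); // real code would stream, never buffering it all
        }
        return file;
    }

    public static void main(String[] args) throws IOException {
        Object small = store(new byte[16]);
        Object big = store(new byte[THRESHOLD + 1]);
        System.out.println(small instanceof byte[]); // stays in memory
        System.out.println(big instanceof Path);     // spooled to disk
        Files.delete((Path) big);
    }
}
```

The point of the threshold is exactly the DoS concern above: an attacker who declares a huge length in a crafted PDU costs us disk, not heap.
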

Great comments and questions, by the way!  #1 is something we can use.


>On Monday, 31 January 2005 at 09:35 -0500, Alex Karasulu wrote: 
>>Emmanuel Lecharny wrote:
>>>I read something about the ASN.1 compiler that is on its way, but is
>>>actually hand crafted, but I don't remember where. Could somebody give me
>>>the pointer?
>>Yeah, everything in the ASN.1 runtime for LDAP has been hand crafted.  The 
>>stub compiler is not fully operational just yet.  This is something Alan 
>>Cabrera is working on.  All LDAP stubs were hand woven within the LDAP 
>>common jar as interfaces with classes implementing them.  They are 
>>filled up by this atrocious piece of code that works like the XML 
>>digester in commons.  A bad move on my part.  I will redo this garbage 
>>when I have the chance to use a better mechanism than firing rules to 
>>populate LDAP stubs based on tag sequences encountered within the ASN.1 
>>tag stream.  What works for XML SAX processing does not necessarily lead 
>>to a fast ASN.1 runtime, heh.
>>I have a page where I describe what needs to be done to make the ASN.1 
>>RT really fast.  Here's that doc...
>>I should get to this around next month if someone does not beat me to it.
