directory-dev mailing list archives

From "Alex Karasulu" <>
Subject RE: [snickers] Should we use rules like digester?
Date Tue, 30 Mar 2004 07:35:42 GMT

Just wanted to add that I found a way to add super fast pattern matching
to the TLV digester.

Basically the pattern in digester is a '/'-separated list of element names.
In our digester, however, the element stack is replaced with the TLV stack,
and the element tag corresponds to the TLV's tag field.  Because of this and
our 'int' encoding for tags, the pattern really is an int[] rather than a
String.
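To make that concrete, here's a small illustration.  The class name is made up, but the tag values are the standard BER tags an LDAP BindRequest nesting would produce (SEQUENCE, [APPLICATION 0], INTEGER):

```java
/**
 * Illustration only: hypothetical constants showing how a Digester-style
 * String pattern maps onto an int[] of BER tag ints.
 */
public class PatternExample {
    static final int SEQUENCE = 0x30;     // universal, constructed SEQUENCE
    static final int BIND_REQUEST = 0x60; // [APPLICATION 0], constructed
    static final int INTEGER = 0x02;      // universal, primitive INTEGER

    /**
     * Digester would match something like "LDAPMessage/bindRequest/version";
     * the TLV digester matches the same nesting as an array of tag ints.
     */
    public static int[] pattern() {
        return new int[] { SEQUENCE, BIND_REQUEST, INTEGER };
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(pattern()));
    }
}
```

Matching an int[] against another int[] is just integer comparisons, which is where the speedup over String pattern matching comes from.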
For fast pattern matching I'm building a TagTree composed of Tag nodes.
When a rule is added like so:

void addRule( int[] pattern, Rule rule ) ;

a path is traversed, built, or both in this tree to add the rule.
As the nesting of TLVs changes while decoding the BER stream, a pointer
walking the tree determines whether rules are available to fire.  If the
path does not exist in the tree then there are no rules to fire, and so on.

Here's some starter code that will eventually do this sort of thing:


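A minimal sketch of such a TagTree, assuming a bare-bones Rule interface (all names here are illustrative, not the actual Snickers API, and it uses modern Java collections for brevity):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Sketch of a TagTree: each node has one child per tag int, and rules
 * hang off the node at the end of a registered pattern's path.
 */
public class TagTree {
    /** Assumed placeholder for the rule callback interface. */
    public interface Rule { void tag(int id); }

    private static class TagNode {
        final Map<Integer, TagNode> children = new HashMap<>();
        final List<Rule> rules = new ArrayList<>();
    }

    private final TagNode root = new TagNode();

    /** Traverses or builds (or both) the path for the pattern, then adds the rule there. */
    public void addRule(int[] pattern, Rule rule) {
        TagNode node = root;
        for (int tag : pattern) {
            node = node.children.computeIfAbsent(tag, t -> new TagNode());
        }
        node.rules.add(rule);
    }

    /**
     * Walks the tree along the current TLV nesting and returns the rules
     * to fire; an empty list means the path does not exist in the tree,
     * so there are no rules to fire.
     */
    public List<Rule> match(int[] nesting) {
        TagNode node = root;
        for (int tag : nesting) {
            node = node.children.get(tag);
            if (node == null) {
                return List.of();
            }
        }
        return node.rules;
    }
}
```

The decoder would advance the walk pointer one node on each constructed-TLV push and retreat on each pop, so matching stays O(1) per TLV event rather than rescanning every registered pattern.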
> -----Original Message-----
> From: Alex Karasulu []
> Sent: Monday, March 29, 2004 5:39 PM
> To: 'Apache Directory Developers List'
> Subject: [snickers] Should we use rules like digester?
>
> Hi,
>
> Ok, I'm looking closely at the way digester in commons works and thinking
> wait a minute, we could do the same thing for building stub containment
> trees.  Basically digester is a fancy pattern matcher that triggers some
> rules which push and pop objects onto and off of a stack.  Objects are tied
> together by rules and other means to build containment trees.
>
> We can build a similar BER event digester which fires off registered rules
> and exposes a stack where stub objects are pushed and popped while building
> a PDU envelope stub.
>
> Now if we go with this route then the runtime and build time integration
> becomes trivial.  The stub compiler simply generates POJO stubs and
> processing instructions as Rules, a RuleSet in digester parlance, for the
> stubs.  The rules are registered with a digester to understand incoming
> encoded ASN.1 types.  This way rules can be registered dynamically to
> process any incoming ASN.1 type on the fly if they need to.
>
> This approach also makes building the compiler's backend that much simpler.
> Generating POJOs from the ASN.1 module's set of types is not that
> difficult; however, generating the encoding and decoding peers will not be
> as easy.  So why bother?  I think the use of rules that are registered to
> match certain tags makes life that much easier.  The low-level BER TLV
> encoding is handled by the TLV decoder.  It does not even decode the value,
> leaving the decoding decisions up to the rules.  This makes sense since the
> data type aspects may shift across languages and contexts.  For example,
> ASN.1 integers in the general case should use BigIntegers; however, some
> protocols do not need a BigInteger object's precision, so their rules may
> decide to use an int or even a long.  And so much is saved by not creating
> tons of objects when you don't have to.
>
> The only drawback to this approach is that there is no equivalent encoding
> model for it, but I think we can devise one if that's what we decide.  If
> in the end we find that the encoding and decoding approaches need to be a
> bit different, then so be it.
>
> Alex
