From "Alex Karasulu" <aok...@bellsouth.net>
Subject [eve] Backend subsystem's desigm
Date Fri, 05 Dec 2003 21:51:55 GMT
Wes,

In response to your earlier email concerning the Interceptors and such, I thought it might be a good time to break down the way the backend subsystem is designed.  I hope this makes sense.  Let me know what you think.

Please, everyone is welcome to comment.  I figure later on we can use this trail for documentation purposes.

Alex


Introduction

 

The backend subsystem is composed of server components used to manage entries across partitions that act as entry databases.

The backend subsystem's façade is the JNDI.  What does this mean?  Basically, JNDI is used to create, access, modify, delete and search for entries in these partitions.
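
To make this concrete, here is a minimal sketch of what a caller sees, using only standard JNDI classes.  The provider factory, URL, DNs and attributes are illustrative, not Eve-specific:

    import javax.naming.*;
    import javax.naming.directory.*;
    import java.util.Hashtable;

    public class JndiCrudSketch {
        public static void main(String[] args) throws NamingException {
            Hashtable<String, String> env = new Hashtable<String, String>();
            // Any JNDI LDAP provider can be plugged in here.
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
            env.put(Context.PROVIDER_URL, "ldap://localhost:389/dc=example,dc=com");
            DirContext ctx = new InitialDirContext(env);

            // create: add an entry under the context
            Attributes attrs = new BasicAttributes("objectClass", "person", true);
            attrs.put("cn", "John Doe");
            attrs.put("sn", "Doe");
            ctx.createSubcontext("cn=John Doe,ou=users", attrs);

            // search: find entries by filter
            NamingEnumeration<SearchResult> results =
                ctx.search("ou=users", "(sn=Doe)", new SearchControls());
            while (results.hasMore()) {
                System.out.println(results.next().getNameInNamespace());
            }

            // modify: replace an attribute's value
            ctx.modifyAttributes("cn=John Doe,ou=users", DirContext.REPLACE_ATTRIBUTE,
                new BasicAttributes("sn", "Doe-Smith", true));

            // delete: remove the entry
            ctx.destroySubcontext("cn=John Doe,ou=users");
            ctx.close();
        }
    }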

 

 

Partitions, Naming Contexts and Backends

 

These database partitions attach to the directory server's namespace at points called suffixes, and they make the namespace a disconnected tree.  Here are some examples of suffix names:

dc=ford,dc=com
dc=example,dc=com
dc=apache,dc=org
ou=finance,o=Smith Barney,l=nyc,c=us

 

These can all be suffix names.  While we're here, one no-no is to have a backend suffix within another, like so:

dc=apache,dc=org
dc=Jakarta,dc=apache,dc=org

 

These partitions/databases do not all need to be implemented the same way; they just need to support the AtomicBackend interface, which extends the Backend interface.

 

These indivisible backends hang off of another very special backend which is a singleton within the backend subsystem.  This special backend is the nexus.  It consolidates all these partitions and makes them appear as one backend; hence it is divisible, not atomic.  This backend routes AtomicBackend interface methods to the attached AtomicBackends based on the naming context in which the call occurs.  So a search with a base of dc=sales,dc=ford,dc=com would be routed to the AtomicBackend whose suffix is dc=ford,dc=com.  Now it's apparent why we cannot nest backends: doing so makes naming context resolution impossible.
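
As a hedged sketch of the routing idea (interface and class shapes here are hypothetical; the real Backend/AtomicBackend interfaces live in the sandbox code and are richer), the nexus picks the attached backend whose suffix the operation's normalized DN ends with:

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical shape for illustration only.
    interface AtomicBackend {
        String getSuffix();   // normalized suffix DN, e.g. "dc=ford,dc=com"
        // ... add, delete, modify, lookup and search operations
    }

    class BackendNexus {
        private final Map<String, AtomicBackend> backends =
            new HashMap<String, AtomicBackend>();

        void register(AtomicBackend backend) {
            backends.put(backend.getSuffix(), backend);
        }

        // Route a normalized DN to the backend holding its naming context:
        // the backend whose suffix the DN equals or ends with.  Because
        // suffixes may not nest, at most one backend can match.
        AtomicBackend route(String dn) {
            for (AtomicBackend b : backends.values()) {
                String suffix = b.getSuffix();
                if (dn.equals(suffix) || dn.endsWith("," + suffix)) {
                    return b;
                }
            }
            throw new IllegalArgumentException("no naming context for " + dn);
        }
    }

A search with base dc=sales,dc=ford,dc=com ends with ",dc=ford,dc=com" and so routes to that partition.  Had dc=Jakarta,dc=apache,dc=org been allowed as a suffix alongside dc=apache,dc=org, a DN under the former would match both, which is exactly why nesting is disallowed.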

 

 

Using the JNDI

 

 

The Backend interface is nice, but it is at a low level.  It is not easily used as a user API, and there are usage semantics that must be observed, making it complex to use.

 

Since we intend to support Java (and Groovy script) based stored procedures, we need an easy-to-use access API and model that people are already familiar with.  The JNDI is that API, and that means we need to implement a server-side JNDI LDAP provider that translates JNDI calls into calls against the backend nexus.

 

This has several advantages.  Users don't need to learn a new API to start writing stored procedures that have access to the data in the server.  Let's face it: there's no point to a stored procedure if it cannot access the backend data.  Stored procedures can be POJOs (plain old Java objects).  The routines for the procs should work within or outside of the server just by switching the JNDI provider used.  The SP code does not even need to be recompiled, so long as the appropriate environment properties are provided for the alternative JNDI provider (e.g. the Sun LDAP provider) and perhaps other server-specific properties the SP may depend on.  This makes SP writing and testing a cakewalk!  Anybody who has used the JNDI can write a useful stored procedure, test it and deploy it to the server.  The only difference to the user is in the speed of operation.  When within the server, the SP should execute much faster than when run remotely using a JNDI LDAP provider that accesses the server through the line protocol.
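
To illustrate this portability, here is a hedged sketch of a stored procedure as a POJO.  The class name, the accountStatus attribute and the deployment details are hypothetical; the point is that only the environment decides whether it runs in-process or over the wire:

    import javax.naming.Context;
    import javax.naming.NamingException;
    import javax.naming.directory.*;
    import java.util.Hashtable;

    // A stored procedure as a plain old Java object: it only knows the JNDI.
    public class DisableAccountProcedure {
        // The same code runs inside the server or remotely; the caller hands
        // it a DirContext built from different environment properties.
        public void execute(DirContext ctx, String userDn) throws NamingException {
            ctx.modifyAttributes(userDn, DirContext.REPLACE_ATTRIBUTE,
                new BasicAttributes("accountStatus", "disabled", true));
        }

        public static void main(String[] args) throws NamingException {
            Hashtable<String, String> env = new Hashtable<String, String>();
            // Remote testing: the Sun LDAP provider over the line protocol.
            // In the server, the environment would instead name Eve's
            // server-side provider factory; this class is unchanged.
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
            env.put(Context.PROVIDER_URL, "ldap://localhost:389");
            new DisableAccountProcedure().execute(new InitialDirContext(env),
                "uid=jdoe,ou=users,dc=example,dc=com");
        }
    }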

 

Interceptor Framework

 

We noticed early on that several services orthogonal to backend operation could be implemented in one place, so that no backend needs to replicate this functionality regardless of its implementation.

 

Some of the services in mind which could be applied across the board are:

* authorization (access controls on operations)
* replication
* transaction management
* entry level locking
* name normalization
* entry caching
* trigger firing
* event notification
* dynamic attribute management
* operational attribute management
* collective attribute management
* …

 

To enable these services, they must operate before the nexus has routed a backend operation to one of the AtomicBackends hanging off of it.  These services must intervene between the call to the JNDI and the call to the target backend operation.  An interceptor framework has been built to do just that.  Services are injected by interceptors within the framework, between calls to the JNDI on the provider's JNDI context and calls to the nexus.

 

The framework has three interceptor pipelines that chain interceptors so one can operate after another to add the needed service to the method call.  The three pipelines are listed below along with their behavioral characteristics:

* before (fail fast)
* after (fail fast)
* on error (NOT fail fast)

 

While processing a fail-fast pipeline, any interceptor error short-circuits the interceptors downstream.  The only pipeline that does not behave in this fashion is the "on error" pipeline, which guarantees the intervention of every interceptor in the chain.
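
A hedged sketch of these semantics follows; the interface and class names are hypothetical and the real framework in the sandbox differs, but it shows the fail-fast versus not-fail-fast distinction:

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical interceptor shape: each interceptor sees the invocation
    // and may massage parameters, transform the return value, or veto the call.
    interface Interceptor {
        void intercept(Invocation invocation) throws Exception;
    }

    // Hypothetical carrier for one nexus method call.
    class Invocation {
        String method;        // nexus method being called, e.g. "search"
        String principal;     // authenticated caller, drawn from the JNDI context
        Object[] parameters;  // parameters the before pipeline may massage
        Object returnValue;   // value the after pipeline may transform
        Exception failure;    // cause handed to the on error pipeline
    }

    class InterceptorPipeline {
        private final boolean failFast;
        private final List<Interceptor> chain = new ArrayList<Interceptor>();

        InterceptorPipeline(boolean failFast) { this.failFast = failFast; }

        void add(Interceptor interceptor) { chain.add(interceptor); }

        void invoke(Invocation invocation) throws Exception {
            for (Interceptor interceptor : chain) {
                try {
                    interceptor.intercept(invocation);
                } catch (Exception e) {
                    // Fail fast: an error short-circuits everything downstream.
                    if (failFast) throw e;
                    // Not fail fast: every interceptor still gets its turn.
                }
            }
        }
    }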

 

This may at first seem complex, and we admit that to a degree it is.  However, the framework is designed to deal with the inherent complexity of calls and error handling mechanisms within Java.  Let's explore the reason for being (RFB) of each interceptor pipeline.

 

The before pipeline is invoked before the target method on the backend nexus is called.  Any services that need to massage parameters to the target, or even stop it in its tracks, can do so here.  For example, an access control interceptor can be added to the before pipeline to reject operations that are not allowed based on some authorization policy.  Other services like caching and even name normalization can have interceptors here.  By the way, a service may need to have an interceptor in more than one pipeline: the best example of this is a transaction service, but we can discuss this later.
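
Sketched against the hypothetical shapes above, such an access control interceptor might look like this (the AccessPolicy abstraction is an assumption):

    // Hypothetical policy abstraction, stubbed for illustration.
    interface AccessPolicy {
        boolean allows(String principal, String operation, Object[] parameters);
    }

    // A before pipeline interceptor: rejects disallowed operations before
    // they ever reach the nexus.
    class AuthorizationInterceptor implements Interceptor {
        private final AccessPolicy policy;

        AuthorizationInterceptor(AccessPolicy policy) { this.policy = policy; }

        public void intercept(Invocation invocation) throws Exception {
            if (!policy.allows(invocation.principal, invocation.method,
                               invocation.parameters)) {
                // Throwing short-circuits the fail-fast before pipeline, so
                // the target nexus method is never invoked.
                throw new SecurityException(invocation.principal
                    + " may not perform " + invocation.method);
            }
        }
    }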

 

The after pipeline is ideal for adding interceptors which need to operate on return values before they are given back to the user.  As you may have guessed, this pipeline is called into action right after the call is made against the target nexus method.  An ideal service for this pipeline would be the dynamic attribute service.  There's an RFC that is specifically geared towards transient or dynamically calculated attributes within entries.  Implementing such a service is easy with an after pipeline interceptor that calculates and injects new attributes into entries returned by backend methods.  Another ideal example is the management of collective attributes.  Collective attributes are attributes that have constant values across a set of entries in a directory.  For example, I can set the building name attribute to be the same for all entries of people under a specific node.  This can be implemented as a specialized version of the dynamic attribute service which adds the attribute and value on the way out, instead of replicating the collective value by storing it for each entry.
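
Here is a hedged sketch of that collective attribute idea as an after pipeline interceptor.  The Entry shape and the assumption that search results travel through the invocation's return value are both illustrative:

    import javax.naming.directory.Attributes;
    import javax.naming.directory.BasicAttribute;
    import java.util.List;

    // Hypothetical minimal entry shape for the sketch.
    class Entry {
        String dn;
        Attributes attributes;
    }

    // An after pipeline interceptor: injects a collective attribute into
    // every returned entry under a node, instead of storing it per entry.
    class CollectiveAttributeInterceptor implements Interceptor {
        private final String scopeDn;     // e.g. "ou=people,dc=example,dc=com"
        private final String attributeId; // e.g. "buildingName"
        private final String value;       // e.g. "Building 42"

        CollectiveAttributeInterceptor(String scopeDn, String attributeId, String value) {
            this.scopeDn = scopeDn;
            this.attributeId = attributeId;
            this.value = value;
        }

        @SuppressWarnings("unchecked")
        public void intercept(Invocation invocation) throws Exception {
            if (!(invocation.returnValue instanceof List)) return;
            for (Entry entry : (List<Entry>) invocation.returnValue) {
                if (entry.dn.endsWith(scopeDn)) {
                    // Added on the way out; never written to the backend.
                    entry.attributes.put(new BasicAttribute(attributeId, value));
                }
            }
        }
    }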

 

The on error pipeline may never be needed.  As the name implies, it is invoked when there is an error in normal processing.  It is basically there for handling exceptions and errors.  If there is a failure in the before pipeline, while making the target method call on the backend, or while going through the after pipeline, the on error pipeline is invoked.  There are several reasons for this.  First, there will be times when we need to clean up after a failure.  Some services will need to roll back partial work on errors.  This takes us back to a transaction service.  A transaction service can have an interceptor in all three pipelines.  The before interceptor can begin a transaction or extend an existing one, the after interceptor can perform the final commit if the transaction ends, and the on error pipeline interceptor can roll back if there are any failures.  Other examples of cleanup services injected in the on error pipeline are ones that clean up stale locks when errors abruptly abort operations.  The uses for this special pipeline are limitless.
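
The transaction example might sketch out as follows, with a hypothetical TransactionManager and one interceptor installed per pipeline:

    // Hypothetical transaction manager, stubbed for illustration.
    interface TransactionManager {
        void beginOrJoin();  // begin a transaction or extend an existing one
        void commit();       // final commit when the transaction ends
        void rollback();     // undo partial work after a failure
    }

    class TransactionService {
        private final TransactionManager txm;

        TransactionService(TransactionManager txm) { this.txm = txm; }

        // One service, three interceptors: one in each pipeline.
        void install(InterceptorPipeline before, InterceptorPipeline after,
                     InterceptorPipeline onError) {
            before.add(invocation -> txm.beginOrJoin());
            after.add(invocation -> txm.commit());
            onError.add(invocation -> txm.rollback());
        }
    }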

 

How is the framework implemented?  

 

Well, nothing is better than looking at the code, which presently resides within the sandbox and needs to be moved.  Eve's interceptor framework is implemented using a combination of woven compile-time aspects and a dynamic proxy for the nexus: only one proxy is created and used, and since the nexus is a singleton this makes the proxy a tolerable solution.  Of course there are performance tradeoffs here; however, they can be remedied using other, better techniques if they become intolerable.
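
The dynamic proxy half rests on the standard java.lang.reflect.Proxy mechanism.  A hedged sketch, with a hypothetical Nexus interface (the handler that drives the pipelines is sketched further below):

    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.Proxy;

    // Hypothetical nexus interface; the real one mirrors AtomicBackend.
    interface Nexus {
        Object search(String baseDn, String filter);
        // ... add, delete, modify and lookup methods
    }

    class NexusProxyFactory {
        // Wrap the nexus singleton in a dynamic proxy; only one proxy is
        // ever created because only one nexus exists.
        static Nexus wrap(InvocationHandler handler) {
            return (Nexus) Proxy.newProxyInstance(
                Nexus.class.getClassLoader(),
                new Class<?>[] { Nexus.class },
                handler);
        }
    }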

 

Basically, every provider JNDI context created has a handle on a proxy object to the nexus.  Each JNDI Context contains its own environment parameters and is aware of its name within the directory namespace.  Relative JNDI operations are transformed by JNDI method implementations into absolute canonical names (a.k.a. distinguished names), and calls are then made against the appropriate nexus methods to do the work.  When a call is made within the provider's JNDI context on the nexus methods, aspects push the JNDI Context onto a Stack.  The Stack of JNDI contexts is dedicated to a Thread using a ThreadLocal.  So looking at a Thread's Stack, we can see which JNDI Contexts made calls against the nexus and know their nesting order.  Right before a nexus method invoked on the nexus proxy from within a JNDI context method returns, aspects pop the current Thread's ThreadLocal Stack.  This removes the JNDI Context that made the backend nexus call.  If a backend call raises a trigger which executes a stored procedure that calls the JNDI yet again, then another context is pushed onto the Stack, and so on.  This way we can track resources and operations that may recurse, detect infinitely recursive chains, and manage environments while interceptors are called.  In fact, the interceptor's operating context is the current executing Thread's ThreadLocal Stack of JNDI contexts and the environment variables stored within them.  Yeah, that's a mouthful and a lot of hard stuff.  However, some good news: this is probably the most complex part of the server.
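
The ThreadLocal stack itself is simple; it is the aspect weaving around it that carries the complexity.  A hedged sketch of the stack (class name hypothetical, aspects omitted):

    import javax.naming.Context;
    import java.util.ArrayDeque;
    import java.util.Deque;

    // One stack of calling JNDI contexts per thread.  Aspects woven around
    // the provider's context methods push before calling the nexus proxy
    // and pop right before returning.
    class InvocationStack {
        private static final ThreadLocal<Deque<Context>> STACK =
            ThreadLocal.withInitial(ArrayDeque::new);

        static void push(Context callingContext) { STACK.get().push(callingContext); }

        static Context pop() { return STACK.get().pop(); }

        // Interceptors consult the top of the stack for their operating
        // environment: the caller's name and environment properties.
        static Context current() { return STACK.get().peek(); }

        // Depth reveals nesting: a trigger firing a stored procedure that
        // calls the JNDI again pushes another context, so a runaway
        // recursive chain shows up as unbounded depth.
        static int depth() { return STACK.get().size(); }
    }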

 

So what does the proxy do?  The proxy is the hook into the interceptor framework.  The proxy basically drives the invocation of the before interceptors, then the call to the target nexus method on the actual nexus singleton, then the invocation of the after pipeline.  If all is successful, the proxy call returns; if not, the on error pipeline is invoked to clean up, and all cleanup interceptors are invoked regardless of whether one fails or not.  The proxy is basically the trap used to kick off interceptors, and the aspect is there so we can inject code to set up the JNDI context call stack, which is the environment in which interceptors operate.
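
Pulling the pieces together, the proxy's invocation handler might sketch out like this, reusing the hypothetical Invocation and InterceptorPipeline shapes from above:

    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.Method;

    // Drives before -> target -> after, falling back to on error on failure.
    class NexusInvocationHandler implements InvocationHandler {
        private final Object nexus;                 // the actual nexus singleton
        private final InterceptorPipeline before;   // fail fast
        private final InterceptorPipeline after;    // fail fast
        private final InterceptorPipeline onError;  // NOT fail fast

        NexusInvocationHandler(Object nexus, InterceptorPipeline before,
                               InterceptorPipeline after, InterceptorPipeline onError) {
            this.nexus = nexus;
            this.before = before;
            this.after = after;
            this.onError = onError;
        }

        public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
            Invocation invocation = new Invocation();
            invocation.method = method.getName();
            invocation.parameters = args;
            try {
                before.invoke(invocation);          // may massage args or veto
                invocation.returnValue = method.invoke(nexus, invocation.parameters);
                after.invoke(invocation);           // may transform the results
                return invocation.returnValue;
            } catch (Exception cause) {
                invocation.failure = cause;
                onError.invoke(invocation);         // every cleanup interceptor runs
                throw cause;
            }
        }
    }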

 

For example, the authorization interceptor may use the context to determine the identity of the authenticated principal.  Using this information and access control policy information, it can determine whether the operation should be allowed or denied.

 

The framework makes it possible to stack these orthogonal services on top of ignorant backends.  The backend implementer can focus on building a backend to store entries rather than worrying about every aspect of the server.

 

A Detachable Embeddable Backend Subsystem

 

Another neat advantage to using the JNDI as we do comes into play when we consider embedding the server or just its backend.  If we wanted to embed the server in an application, all it would take is a couple of extra initial environment parameters when asking for the first context of the server-side JNDI provider.

 

Basically, when you use a JNDI provider, the service provider's ContextFactory must be specified as a parameter so its class can be loaded by the NamingManager.  ContextFactory is an interface that creates Contexts.  Every JNDI provider implementation must implement this interface; it's the way the InitialContext class gets a handle on the provider's Context implementation objects.

 

So the ContextFactory implementation for the server-side LDAP JNDI provider first checks to see whether the backend has started up.  If it has not, it fires up the backend services using the container kernel of choice and then satisfies the initial request for a context.
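
A hedged sketch of that bootstrapping factory: javax.naming.spi.InitialContextFactory is the real JNDI SPI hook, while the class name, the started flag and the startup calls below are assumptions:

    import javax.naming.Context;
    import javax.naming.NamingException;
    import javax.naming.spi.InitialContextFactory;
    import java.util.Hashtable;

    // Hypothetical server-side provider factory.  The NamingManager loads
    // this class when the environment names it as the initial context factory.
    public class ServerContextFactory implements InitialContextFactory {
        private static boolean started = false;

        public Context getInitialContext(Hashtable<?, ?> environment)
                throws NamingException {
            synchronized (ServerContextFactory.class) {
                if (!started) {
                    // Fire up the backend services in the container kernel of
                    // choice before satisfying the very first context request.
                    startBackendSubsystem(environment);
                    started = true;
                }
            }
            return createServerSideContext(environment);
        }

        private void startBackendSubsystem(Hashtable<?, ?> environment) {
            // assumption: boot the partitions, nexus and interceptors from
            // the supplied environment or from configuration files
        }

        private Context createServerSideContext(Hashtable<?, ?> environment) {
            // assumption: a Context implementation wired to the nexus proxy
            throw new UnsupportedOperationException("sketch only");
        }
    }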

 

The calling application that is embedding Eve's backend uses the server-side JNDI provider like it would use any other provider, by supplying the environment properties needed.  There may be some server-side provider specific properties for configuring and setting up the backend outside of using configuration files.

 

The end result is the use of the JNDI as the bootstrapping and embedding API for the entire backend apparatus.  You use the provider, add a little water, and voilà: you have an LDAP server you can access remotely.

 

With the JNDI as the façade around the backend apparatus, the front end simplifies down to a client of the server-side JNDI provider.  The front end and the backend are now detachable.  If you wanted to, you could just use the front end to proxy for an X.500 server without having its own backend: just use a different JNDI provider and point it at another server.  If you want, keep the front end and backend combined and have a full-blown LDAP server.

 

For those who want to embed both the front end and the backend together to enable remote access through the protocol, an optional environment parameter can be used to ask the server-side JNDI provider to fire up a front end as well as the backend subsystem.

These are future plans in the works.

 

Conclusion

 

 

A detachable, pluggable backend subsystem makes Eve highly versatile and attractive to those who want to experiment with new protocol extensions, embed the server, or toy with new directory concepts.

 

As we saw, this detachable facet makes the backend embeddable without exposing remote access unless necessary.  It also makes it easy to implement virtual directories using the front end as a proxy to other servers, and it enables the easy implementation of meta-directories along with the trigger and stored procedure support.  Since JNDI is the front end to backend coupling interface, and is used for embedding as well as for writing stored procedures, there is very little overhead for the developer, experimenter, user and stored procedure writer.  Every subsystem of the server should be relatively palatable if one knows and understands the JNDI.

 

 

Alex

