directory-dev mailing list archives

From Alex Karasulu <aok...@bellsouth.net>
Subject Frontend redesign
Date Wed, 03 Dec 2003 00:44:49 GMT
Guys,

Here are the topics of discussion:


1). Service decoupling using a centralized event manager service
2). The SEDA stages, events and processing pathway
3). Session handling, identity and synchronization
4). Interactions with server side JNDI provider


1). Service Decoupling Using an EventManager
============================================

First of all, it makes sense to use a central event
manager as the hub for delivering events from one
stage, or just one service, to another.  What do I
mean by that?

Within the server, SEDA events must be delivered from sources
to sinks.  Each stage is modeled as a service in the server.
Obviously we're going to have the following basic stage flow
for handling a stateful client's request/response sequence:

 input --> decode --> process 
                         |
                         |
output <-- encode <------/

These points represent the request processing stages.  Now
an event in one stage upstream is enqueued onto the queue of
the stage downstream, creating dependencies between these
services, which are SEDA stages.  Hence the request processing
pathway looks like one of the service dependency chains within
the server, as the sketch below illustrates.
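To make the coupling concrete, here is roughly what direct
stage to stage wiring looks like.  The names below are
hypothetical, just to show the shape of the dependency:

    // Hypothetical sketch of direct coupling: the upstream stage
    // holds a reference to the downstream stage and enqueues onto
    // its queue, so the input service depends on the decoder.
    interface Stage
    {
        void enqueue( Object an_event ) ;
    }

    class InputStage
    {
        private final Stage m_decoder ;   // hard service dependency

        InputStage( Stage a_decoder )
        {
            m_decoder = a_decoder ;
        }

        void onRead( Object an_event )
        {
            m_decoder.enqueue( an_event ) ; // couples input to decoder
        }
    }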

Now this would not be a problem if we did not have the other
services like the listener and the session manager in the
picture, which introduce cycles in the dependency graph.  These
services are not stages in the SEDA sense but they do emit
events that are listened to by other stages.

If we use a central event manager to enable communication
between all the services through events, SEDA based or not,
then we can turn the dependency graph into a simple tree with
the event manager at the root and one level of children: every
service depends only on the event manager.  I realized that
this was the direction I wanted to go in past designs but I
got mixed up in SEDA versus non-SEDA events.  At the end of
the day it does not matter what the nature of the event is.  A
central event router or manager can be used to decouple stages
and non-stage services.  This way we can do away with methods
on the service interfaces which really do not need to be
exposed.  The event handler interfaces are enough for most
generic services, and for services that are SEDA stages the
enqueue method is all you need.  However I think it's best to
hide this and not expose it as a service interface method.  It
is best to use explicit listener interfaces to flag a stage as
receiving certain events rather than exposing a generic stage
enqueue() method.
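Here's a minimal sketch of what such a hub could look like.
The names (EventManager, Subscriber, register, fire) are my
own for illustration, not settled interfaces:

    import java.util.ArrayList ;
    import java.util.EventObject ;
    import java.util.Iterator ;
    import java.util.List ;

    /**
     * Central event hub: services register typed listeners and
     * fire events through the hub, so no service ever holds a
     * direct reference to another service.
     */
    public class EventManager
    {
        /** (event type, listener) pairs registered with the hub */
        private final List m_subscriptions = new ArrayList() ;

        /** subscribes a_listener to events assignable to a_type */
        public synchronized void register( Class a_type,
                                           Subscriber a_listener )
        {
            m_subscriptions.add( new Object[] { a_type, a_listener } ) ;
        }

        /** synchronously delivers an_event to every matching listener */
        public synchronized void fire( EventObject an_event )
        {
            Iterator l_it = m_subscriptions.iterator() ;
            while ( l_it.hasNext() )
            {
                Object[] l_pair = ( Object[] ) l_it.next() ;
                if ( ( ( Class ) l_pair[0] ).isInstance( an_event ) )
                {
                    ( ( Subscriber ) l_pair[1] ).deliver( an_event ) ;
                }
            }
        }
    }

    /** generic callback; real stages would use typed listener
        interfaces like InputListener instead */
    interface Subscriber
    {
        void deliver( EventObject an_event ) ;
    }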

So I think we should have one central event manager that is
used as a hub to route SEDA and non-SEDA events from service
to service, thereby decoupling services and reducing the number
of exposed service interface methods per service.  Now let me
give an example using a particular stage to stage coupling.
Take the input-decoder stage coupling for example.  Here the
input stage may be the source of InputEvents which carry some
chunk of data read in a non-blocking fashion from the client's
PDU.  The decoder is an InputListener which explicitly processes
an InputEvent using the inputReceived() listener interface
method.

public void inputReceived( InputEvent an_event ) ;

Now the decoder stage, or an InputListener for the decoder,
would just enqueue this event onto the decoder's queue and
return.  But now let's look at the benefits of using the event
manager.  First the input stage fires the InputEvent using a
service method on the EventManager.  The event manager
synchronously delivers the event to the target, which in this
case is the InputListener.  There may be more than one
InputListener registered with the EventManager.  So the event
is delivered synchronously to the listeners, however the stage
listener processes the event asynchronously: the enqueue
operation returns immediately in the listener for the decoder
stage.  So synchronous event delivery is really asynchronous
for stages.  The input stage service does not depend on the
decoder or vice versa.  Both however depend on the EventManager
service, which is the middleman.
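A hypothetical decoder listener makes this concrete: delivery
is synchronous, but all the listener does is enqueue, so the
decoder's actual processing is asynchronous.  Again the type
names here are illustrative:

    import java.util.EventObject ;
    import java.util.LinkedList ;

    class InputEvent extends EventObject
    {
        InputEvent( Object a_source ) { super( a_source ) ; }
    }

    interface InputListener
    {
        void inputReceived( InputEvent an_event ) ;
    }

    /** decoder stage's listener: enqueue and return immediately */
    class DecoderListener implements InputListener
    {
        private final LinkedList m_queue ;  // the decoder stage queue

        DecoderListener( LinkedList a_queue )
        {
            m_queue = a_queue ;
        }

        /** called synchronously by the EventManager's fire() */
        public void inputReceived( InputEvent an_event )
        {
            synchronized ( m_queue )
            {
                m_queue.addLast( an_event ) ;  // O(1), never blocks long
                m_queue.notify() ;             // wake a stage worker
            }
        }
    }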

This is all very simple and I probably bored you guys with
the big explanation and the example but I wanted to make sure I 
communicated this simple idea completely.


2). The SEDA stages, events and processing pathway
==================================================

This is now really simple.  A SEDA event is just like any other
plain vanilla event.  The difference is that the nature of the
sink makes the event handling semantics asynchronous, yet the
order of event processing is preserved by the stage's queue.
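For instance, a stage worker thread draining the queue might
look like the sketch below (my own illustration, not existing
code); events are handled one at a time in arrival order while
the code enqueuing them never waits on the handling:

    import java.util.EventObject ;
    import java.util.LinkedList ;

    /** drains a stage's queue on its own thread */
    public class StageWorker implements Runnable
    {
        private final LinkedList m_queue ;

        public StageWorker( LinkedList a_queue )
        {
            m_queue = a_queue ;
        }

        public void run()
        {
            while ( true )
            {
                EventObject l_event ;

                synchronized ( m_queue )
                {
                    while ( m_queue.isEmpty() )
                    {
                        try { m_queue.wait() ; }
                        catch ( InterruptedException e ) { return ; }
                    }
                    l_event = ( EventObject ) m_queue.removeFirst() ;
                }

                handle( l_event ) ;  // stage specific processing
            }
        }

        private void handle( EventObject an_event )
        {
            // decode, process, encode ... whatever this stage does
        }
    }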

The processing pathway was already covered above so there is not
much that's left to be said here.


3). Session handling, identity and synchronization
==================================================

For the time being presume the server only does a simple bind
to establish the client's identity.

Basically the socket connection almost represents the session.
I say this because another bind operation on an already bound
or anonymous user session results in session destruction and
replacement without dropping the socket connection.  Not a big
deal.  And all the session really is, is a hash table with some
extra session-specific parameters like the time the session was
established, et cetera.
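In other words the session could be as small as this sketch
(hypothetical names again):

    import java.util.Hashtable ;

    /** little more than a hash table plus fixed parameters */
    public class Session
    {
        private final Hashtable m_attributes = new Hashtable() ;
        private final long m_creationTime = System.currentTimeMillis() ;

        public Object getAttribute( Object a_key )
        {
            return m_attributes.get( a_key ) ;
        }

        public void setAttribute( Object a_key, Object a_value )
        {
            m_attributes.put( a_key, a_value ) ;
        }

        public long getCreationTime()
        {
            return m_creationTime ;
        }
    }

A rebind would simply replace this object while the socket
connection stays up.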

Now socket connections and IO streams previously were managed
using a unique client key which contained various pieces of
info specific to the socket connection.  In fact the actual key
is constructed from parts of the connection parameters like
client host and port to server host and port, et cetera.  Note
that this key carries with it an input synchronization object
and an output synchronization object.  This enables two stages
to synchronize on a socket channel if they need to.  Also
client keys can expire to represent the fact that the
connection was dropped.
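The shape of the key is roughly this (a sketch from memory of
the old implementation, so treat the details as approximate):

    import java.net.InetAddress ;

    /** unique per-connection key with per-direction locks */
    public class ClientKey
    {
        private final String m_id ;  // client host:port -> server host:port
        private final Object m_inputLock = new Object() ;
        private final Object m_outputLock = new Object() ;
        private boolean m_expired = false ;

        public ClientKey( InetAddress a_clientHost, int a_clientPort,
                          InetAddress a_serverHost, int a_serverPort )
        {
            m_id = a_clientHost.getHostAddress() + ":" + a_clientPort
                + "->" + a_serverHost.getHostAddress() + ":" + a_serverPort ;
        }

        /** stages lock on these to coordinate reads and writes */
        public Object getInputLock()  { return m_inputLock ; }
        public Object getOutputLock() { return m_outputLock ; }

        /** marks the key dead once the connection drops */
        public synchronized void expire() { m_expired = true ; }
        public synchronized boolean isExpired() { return m_expired ; }

        public int hashCode() { return m_id.hashCode() ; }

        public boolean equals( Object an_obj )
        {
            return an_obj instanceof ClientKey
                && m_id.equals( ( ( ClientKey ) an_obj ).m_id ) ;
        }
    }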

The session object returns the ClientKey.  Previously I tried
not to expose too much in the ClientKey.  Namely I tried to
avoid carrying the socket with the key or exposing access to
the socket the key represents.  Now I just gave in temporarily
and allowed for access to the socket.  This might be a bad move
but it allowed me to just package the client's Session or the
key in events to pass things around.  At first I said I should
pass around the socket in the event rather than keep it in the
key.  This way I can give the key to untrusted code without
compromising the socket, and I think this must be the case.  We
should discuss this detail.

Now keep in mind that for non-blocking IO the getChannel() call
on a socket returns a non-null channel.  For blocking IO, or
sockets created using means other than channel construction,
the returned channel is null.  So we can just give a stage that
needs to work on a socket channel the socket itself.  This
means stages can detect the nature of events and determine
whether they will handle them or not.
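That detection is a simple null check.  The routing class here
is made up, but the getChannel() behavior is standard:

    import java.net.Socket ;
    import java.nio.channels.SocketChannel ;

    public class InputRouter
    {
        public void route( Socket a_socket )
        {
            SocketChannel l_channel = a_socket.getChannel() ;

            if ( l_channel != null )
            {
                // created via SocketChannel: non-blocking stage
                handleNonBlocking( l_channel ) ;
            }
            else
            {
                // plain blocking socket (e.g. SSL): blocking stage
                handleBlocking( a_socket ) ;
            }
        }

        private void handleNonBlocking( SocketChannel a_channel )
        {
            // hand off to the NIO based input stage
        }

        private void handleBlocking( Socket a_socket )
        {
            // hand off to the stream based input stage
        }
    }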

So a NonBlockingInputManager (input stage) is an implementation
that handles channel based non-blocking IO.  Other
implementations of the InputManager can just handle blocking
IO.  We can have both residing within the same server.  For the
time being SSL connections can leverage the blocking IO stages
rather than the NIO based ones until Sun adds SSL support to
NIO.

So to summarize, we need to determine whether we keep the
ClientKey concept and how we manage protecting the socket when
the code is not trusted.  Should we hide the socket or just
carry it in an event?  Or is it best to have a service that
enables access to the socket (as in the old implementation) and
protect access to the service rather than the key?


4). Interactions with server side JNDI provider
===============================================

Now this is the cool stuff.  We will be replacing the old 
code that directly accessed the backend nexus: the big
mama jama backend that all backends hung off of.  In its
place we shall have request processors/handlers that use
JNDI and the server side LDAP JNDI provider to access the
nexus.  

Just as a heads up, there is an interceptor framework in the
backend between the JNDI provider and the nexus which injects
several services to spare backend developers from having to
implement them.

Continuing on, the JNDI provider is used to make calls against
the backend subsystem, which is completely detachable from the
frontend and can run in isolation without one.  In the
beginning there is a bind operation that establishes the
session.  This operation may need to access the backend to
authenticate.  A separate service is designed to abstract away
this access.  I say "may need" above because later, when SASL
is enabled, access to certificate stores or Kerberos tickets
may occur outside of the server.  But presume that this stuff
is built into Eve or the authentication is simple and the
authentication profile resides on a backend.  To get at this
information, the credentials of the user on a simple bind
attempt must be looked up using JNDI and it must be determined
whether the user is authorized or not.

If the backend operates in standalone mode, do we support
authentication there and simply have the frontend use the
standard JNDI based means to pass credential information to the
provider?  Or should the provider not care so long as a
principal is present within the environment passed to the
initial context?
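For reference, the standard JNDI means would look like this.
The factory class name is illustrative, not final, but the
Context environment properties are the standard ones:

    import java.util.Hashtable ;
    import javax.naming.Context ;
    import javax.naming.InitialContext ;
    import javax.naming.NamingException ;

    public class BindExample
    {
        /** simple bind: credentials ride in the environment */
        public static Context bind( String a_dn, String a_password )
            throws NamingException
        {
            Hashtable l_env = new Hashtable() ;

            // hypothetical server side provider factory
            l_env.put( Context.INITIAL_CONTEXT_FACTORY,
                "org.apache.eve.jndi.ServerContextFactory" ) ;
            l_env.put( Context.SECURITY_AUTHENTICATION, "simple" ) ;
            l_env.put( Context.SECURITY_PRINCIPAL, a_dn ) ;
            l_env.put( Context.SECURITY_CREDENTIALS, a_password ) ;

            return new InitialContext( l_env ) ;
        }
    }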

Note that if authentication is to occur within the provider we
want to avoid having to do it for each request.  Basically this
is a matter of associating the frontend session with the
provider.  The JNDI provider tracks the session via the
Context.  If authentication succeeds on the InitialContext,
then the user's identity is known and all contexts derived
thereafter inherit that identity.

Now if authorization is not conducted within the backend
subsystem then there has to be a way to pass identity down into
it, and this could be a matter of passing a principal within
the environment.

However keep in mind that with stored procedures and triggers
the developer will have access to these contexts.  They need
that access to determine what the subject of an operation is.
To preserve security, access to these security objects must be
protected.  Basically identity theft (impersonation) would
otherwise be possible.

So lots of things here to think about before the front end 
redesign can occur.  

Alex

