directory-dev mailing list archives

From Richard Wallace <>
Subject Re: [seda] A preview on redesigned SEDA (or Netty?)
Date Wed, 08 Dec 2004 23:52:27 GMT
Alex Karasulu wrote:
> Richard Wallace wrote:
>> Alex Karasulu wrote:
> <snip/>
>>> Yep, that's my primary concern with any API and the heart of the 
>>> framework.  Basically you have the common plumbing in place 
>>> (input, decode, process, encode, output).  Using the same stage 
>>> component for the stages in this prefab pipeline, you should be able 
>>> to build your own pipelines inside the ProtoPro.
>> Alright, that's kind of what I was thinking.  Because servers also 
>> typically need to access some kind of backing store at some point.  
>> For some things, like a DNS server, this could easily be done in one 
>> step, but for something like a web server or email server this could 
>> be a multi-step process, or need to be asynchronous itself, such as 
>> when doing file IO.  I would think that even an LDAP server could 
>> benefit from more than one stage in the protocol handling because 
>> you've got to parse the incoming request and turn it into something 
>> that you can easily pass to your backend.
> Actually the parse part is the "decode" stage.  The stateful (chunking) 
> encoder/decoder pair will do this for you.  This is why the 
> ProtocolProvider has a getEncoderFactory() and a getDecoderFactory() 
> method defined.  Regulating these operations with a stage makes sense 
> under load.  Sometimes you'll have encodings that are CPU intensive, and 
> so you need a higher ratio of threads to load.  SEDA is excellent for 
> that.  Sometimes the encoding is a joke.  In that case I'd like to make 
> it so the provider could somehow tell SEDA to skip the encoding or 
> decoding stages, to avoid synchronization penalties and latency costs.

Oh I see. I hadn't seen the getEncoderFactory() and getDecoderFactory() 
before.  I agree that in the decoding stage you could do the parsing 
part and in most scenarios that is probably what you want to do.  But I 
can see cases where you may want to have more stages for doing things 
like (en|de)cryption, (un)compression, etc.  But then, for those that 
want/need something like this, they can have their encoder/decoder 
factories build little sub-pipelines that do that.
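For what it's worth, the sub-pipeline idea could look something like this rough sketch; the Decoder interface and CompositeDecoder class here are my own invention for illustration, not the actual seda API:

```java
import java.util.List;

// Hypothetical sketch of a decoder factory building a little
// sub-pipeline: each step (decrypt, decompress, parse, ...) is a
// Decoder, and a CompositeDecoder chains them in order.  The
// interface is an assumption, not the real framework API.
interface Decoder {
    byte[] decode(byte[] input);
}

class CompositeDecoder implements Decoder {
    private final List<Decoder> steps;

    CompositeDecoder(List<Decoder> steps) {
        this.steps = steps;
    }

    public byte[] decode(byte[] input) {
        byte[] current = input;
        for (Decoder step : steps) {
            current = step.decode(current);  // each sub-stage transforms the buffer
        }
        return current;
    }
}
```

That way the frontend still sees one decoder, and only providers that need the extra steps pay for them.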

>> Then I would think that whatever backend you're using, you would want 
>> that to be an asynchronous request (especially in the case where your 
>> backend is another ldap server or something else that could have a 
>> high latency).
> Good call on an application of this.  Yes, this would be a good example for it.

I forgot file IO.  That's a biggie in web and mail servers.
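The shape of that non-blocking backend call might look roughly like this sketch, using java.util.concurrent types; the AsyncBackend class and its lookup() method are hypothetical names just to show the idea:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch: a backend lookup that runs on its own IO pool
// so a protocol stage never blocks on a high-latency store (another
// LDAP server, disk, etc.).  The caller gets a future immediately and
// keeps servicing other requests.
class AsyncBackend {
    private final ExecutorService ioPool = Executors.newFixedThreadPool(4);

    CompletableFuture<String> lookup(String key) {
        return CompletableFuture.supplyAsync(
            () -> "value-for-" + key,  // stand-in for the real (blocking) store access
            ioPool);
    }

    void shutdown() {
        ioPool.shutdown();
    }
}
```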

>> I see your point that you want to make it as simple as possible and 
>> I'm all for that.  I've looked at the DefaultFrontend and 
>> DefaultFrontendFactory in the seda trunk and it does seem like it would 
>> be easy to build new and varied pipelines with it.  I just wanted to 
>> ask because while looking through Mina I got the feeling that it is 
>> mostly just concerned with abstracting the details of asynchronous 
>> network IO away from the developer and not necessarily processing 
>> requests in a pipeline.
> Funny you say this; Trustin and I were talking last night about this.  I 
> think he writes some bulletproof code, and his abstractions for handling 
> non-blocking IO are really, really nice.  But this is not SEDA.  Berin 
> touched on this too and talked about the possibility of a hybrid.  
> Perhaps we just need more than one choice, or a hybrid that 
> uses one over the other until certain thresholds are reached.

Now that really would be awesome!  The tough thing is figuring out those 
thresholds.  But that's a bit OT right now.

> Now MINA will blow away any SEDA implementation up until, I think, 100s 
> of clients.  Then SEDA is your savior.
> Now MINA is more ACE/Netty-like, which is great.  And SEDA, well, is 
> SEDA, which is great too.  In the end I want these frameworks to be sooo 
> easy to use that protocol implementors do not need to swim in SEDA or 
> ACE or NIO.  I want to think about requests and responses.  This ease of 
> use and performance in all scenarios are my two big requirements.

Sounds good to me. =)

>> To me that's the core of what SEDA is all about: not just asynchronous 
>> IO, but asynchronous event processing, whether those events are 
>> network operations, operations to perform on an image, or some other 
>> form of user input.
> Exactly!  Well said.  Take a look at the CoR stuff and the 
> commons-pipeline effort over at Jakarta.  They have been doing similar 
> things, perhaps better, for network-specific and general applications 
> that process anything, like the images in the example you gave above.  
> Please let us know what you think.  Your grasp of SEDA and its merits is 
> pretty solid, and well, I have been trying to understand the SEDA / 
> commons-pipeline connection to no avail.  Looking forward to your 
> impressions and comments...

I hadn't seen the commons-pipeline project before but I'm always a bit 
wary of stuff in the commons.  Here's a good example of why (found in 
the DigesterPipelineFactory.init() method):

         ruleSets.add(new PipelineRuleSet(ruleSets));

Aside from that hesitation to use anything commons-related (which is 
nearly unavoidable these days, it seems), I'm not sure what you mean by a 
connection between commons-pipeline and SEDA.  The concepts are 
fundamentally the same.  There are pipelines, and within pipelines are 
stages, and the stages' processing is controlled by what in c-p is a 
"StageDriver" and in SEDA is a Controller with a thread pool.  Right now 
it looks like c-p only has a SingleThreadStageDriver, but it wouldn't be 
difficult to add a StageDriver that uses a thread pool, I don't think.
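To make that concrete, a pooled driver might look roughly like this; the Stage interface and the class name here are guesses at the shape, not c-p's actual API:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical sketch of a thread-pooled StageDriver: events queue up
// for the stage, and a fixed pool of workers drains the queue, so one
// slow event doesn't stall the whole pipeline.  Names are illustrative.
interface Stage {
    void process(Object event);
}

class ThreadPoolStageDriver {
    private final BlockingQueue<Object> queue = new LinkedBlockingQueue<>();
    private final ExecutorService pool;

    ThreadPoolStageDriver(Stage stage, int threads) {
        pool = Executors.newFixedThreadPool(threads);
        for (int i = 0; i < threads; i++) {
            pool.submit(() -> {
                try {
                    while (true) {
                        stage.process(queue.take());  // block until an event arrives
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();  // shutdownNow() ends the worker
                }
            });
        }
    }

    void enqueue(Object event) {
        queue.add(event);
    }

    void shutdown() {
        pool.shutdownNow();
    }
}
```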

The biggest feature they are currently missing, which is in the 
directory seda implementation, is event routing.  From what I've seen in 
the directory seda code, an Event can be routed to one or more different 
stages (it looks like right now DefaultEventRouter throws an 
UnsupportedOperationException, on line 148, if this situation occurs, 
but the framework is there to support such a thing). 
Right now the c-p pipeline just passes events from one stage to the next 
in the order that the stages were added.  There is support for branching 
to another pipeline, though it's not clear when that branch is taken.
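For comparison, the routing idea boils down to something like this little sketch; EventRouter and the Consumer-as-stage shorthand are mine, not directory's actual classes:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Hypothetical sketch of event routing: an event type may map to one
// *or more* downstream stages, unlike a strictly linear pipeline where
// each stage has exactly one successor.  Names are illustrative only.
class EventRouter {
    private final Map<Class<?>, List<Consumer<Object>>> routes = new HashMap<>();

    // Register a stage (modeled here as a plain Consumer) for an event type.
    void subscribe(Class<?> eventType, Consumer<Object> stage) {
        routes.computeIfAbsent(eventType, k -> new ArrayList<>()).add(stage);
    }

    // Deliver the event to every stage subscribed to its type.
    void publish(Object event) {
        for (Consumer<Object> stage : routes.getOrDefault(event.getClass(), List.of())) {
            stage.accept(event);
        }
    }
}
```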

I think the framework in directory is much more complete and usable.  If 
it turns out that I can't use it for the sub-pipelines I need that we 
talked about before, then I'll probably just use Berin's d-haven event 
package to build my own pipeline for processing.  It might not be pretty, 
but I think I could make something that would work for my purposes.  I 
don't think I'll need to do that though, not from what I've seen so far.

