From: Berin Loritsch
Date: Mon, 29 Nov 2004 09:32:44 -0500
To: Apache Directory Developers List
Subject: Re: [RT] SEDA Package: Rework Proposal (long, sorry)

Alex Karasulu wrote:
> Let me also inject that I would like to see some statistics and user
> feedback too. The good thing about doing this release with the
> current SEDA framework is that we get things out there for people to
> complain about. That gives us feedback and requirements. I would
> also like to bang requests against both servers to see where the
> performance bottlenecks are so we can design with these in mind. I
> don't want to design the next best internet protocol server framework
> without these metrics. It makes me feel like I'm spinning my wheels.
> I think we all agree with this.

Agreed. Just something we can work with.

> Say Berin, do you have documentation on the event package at D-Haven
> that is at 50K feet with some drill-down? I'd love to look at it.
> Likewise I'd like to look at documentation on Netty2 and the Geronimo
> Networking code. I think we should take the best of all the worlds
> here. Plus I have some serious ACE research to do as well. I think
> Trustin has been doing the same till now.

I have some documentation here:

http://projects.d-haven.org/modules/sections/index.php?op=listarticles&secid=4

I have not yet gotten the build to create the Xdocs.

Essentially, you assemble the "big" pipeline by wiring together several
"small" pipelines and registering each with the ThreadManager. The
ThreadManager uses whatever thread policy you decide on to push the
events through the pipeline. Assembling the pipeline is really not too
hard. The DefaultPipeline has an array of Source objects (usually
Queues) and one EventHandler, and the EventHandler is an object that
does something with those events.
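To make that concrete, here is a rough sketch of the shape of the thing
in plain Java. Fair warning: this is not the real D-Haven Event API;
the Stage and SimpleThreadManager types below are stand-ins I made up
to show the wiring (one source queue per stage, one handler, and a
manager that owns the threading policy):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;

    interface EventHandler {
        void handleEvent(Object event);
    }

    // One "small" pipeline: a source of events plus the handler that
    // does something with them.
    class Stage {
        final Queue<Object> source = new ConcurrentLinkedQueue<Object>();
        private final EventHandler handler;
        Stage(EventHandler handler) { this.handler = handler; }

        // The thread manager drains the source under its own policy.
        void pump() {
            Object event = source.poll();
            while (event != null) {
                handler.handleEvent(event);
                event = source.poll();
            }
        }
    }

    // Stand-in for the ThreadManager: one daemon thread polls every
    // registered stage. The real thing would let you pick the policy.
    class SimpleThreadManager {
        private final List<Stage> stages = new ArrayList<Stage>();

        synchronized void register(Stage stage) { stages.add(stage); }

        void start() {
            Thread t = new Thread(new Runnable() {
                public void run() {
                    while (!Thread.currentThread().isInterrupted()) {
                        synchronized (SimpleThreadManager.this) {
                            for (Stage s : stages) s.pump();
                        }
                        try { Thread.sleep(10); }
                        catch (InterruptedException ex) { return; }
                    }
                }
            });
            t.setDaemon(true);
            t.start();
        }
    }

Assembling the "big" pipeline is then just handing each handler the
source queue of the next stage, so events flow reader -> decoder:

    final Stage decoder = new Stage(new EventHandler() {
        public void handleEvent(Object e) {
            System.out.println("decoded: " + e);
        }
    });
    Stage reader = new Stage(new EventHandler() {
        public void handleEvent(Object e) {
            decoder.source.offer(e);   // forward to the next stage
        }
    });
    SimpleThreadManager tm = new SimpleThreadManager();
    tm.register(reader);
    tm.register(decoder);
    tm.start();
    reader.source.offer("raw bytes would go here");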
The easiest solution to "hardwire" something together for what we have
here is to create EventHandlers that are handed the Sink objects for
the next stage, much as in the sketch above. That keeps the simple case
simple, and we can add variations to handle more complex event routing
as needed. Adding the command subsystem as part of the pipeline, plus a
routing stage (something that routes events based on the type of
event), would give us a very flexible and easily tuned system.

>> Now, all this is just the skeleton of what makes a SEDA system go.
>> The real power is in what the stages do and how the pipelines and
>> stages are configured. That part is not done in the D-Haven Event
>> library. The core set of stages that I see we need are as follows:
>>
>> 1. ConnectionManager (this includes firewalling by dropping
>>    unallowable connections)
>> 2. Reader (start reading bytes from the stream)
>>    * Router (1 pipe per protocol)
>> 3. Decoder (use the decoder from the protocol handler)
>> 4. RequestHandler (use the request handler from the protocol handler)
>> 5. Encoder (use the encoder from the protocol handler)
>> 6. Writer (start writing bytes to the stream)
>
> These are the exact same components in SEDA btw.

Right, but they are a bit too strongly typed IMO.

Keep in mind that, where necessary, we can deal with non-reentrant
protocol handler stages by providing a load-balancing
multiplexer/demultiplexer. IOW, we can handle multiple requests at a
time by providing a separate pipeline per unit of concurrency we need.
That ensures only one thread is operating on the sensitive area at a
time--but there are multiple instances of the setup, which makes it
easier to deal with.

> I think the best way to proceed is to start up the dialog as you have
> recommended and have done. This is excellent. Now I think we should
> all get familiar with SEDA, Netty2, Geronimo Networking, D-Haven
> Event and the ACE architecture and incorporate them into our
> conversations. Let's start a branch or several branches where we can
> play with these ideas and these constructs. Meanwhile let's get this
> release out the door and see what's good and bad about SEDA.
>
> I really want the best of all the worlds and couldn't care less what
> we have at the end of the day so long as some very basic fundamentals
> are met:
>
> 1). I don't want users having to know SEDA theory to write a protocol
> server that snaps into the framework. So details can be hidden, and
> administrators deploying servers can be concerned with SEDA settings
> and dynamics. SEDA or ACE is just a model and we should not get
> carried away with it. We are in the business of writing protocol
> servers, not extending Matt Welsh's dissertation.

Right, and part of that is being able to parallelize non-reentrant
code--which is currently not possible.

> 2). Make sure we have a simple, clean and intuitive ProtocolProvider
> interface with helper interfaces, whatever they may be.

I think we have this, and I don't think it needs to be altered--unless
we come up with a need for it.

> 3). Make sure the framework leverages encoder/decoder pairs that can
> chunk data and maintain state between chunks - this way we actually
> utilize non-blocking facilities to the fullest extent.

I think this is best done by maintaining the state in the event itself,
which makes it easier to keep the stages reentrant.
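For example, here is a sketch of what I mean, using a made-up
length-prefixed protocol (none of these class names come from the real
codebase). All of the mutable decode state rides along on the event,
so the decoder itself has no fields and any thread can run it safely:

    import java.io.ByteArrayOutputStream;

    // Hypothetical per-request decode state. It travels with the
    // event through the pipeline instead of living in the handler.
    class DecodeState {
        final ByteArrayOutputStream buffered = new ByteArrayOutputStream();
        int expectedLength = -1;   // unknown until the 4-byte header arrives
    }

    // One chunk of bytes off the wire, tagged with its request's state.
    class Chunk {
        final DecodeState state;
        final byte[] bytes;
        Chunk(DecodeState state, byte[] bytes) {
            this.state = state;
            this.bytes = bytes;
        }
    }

    // The decoder keeps no state of its own, so it is trivially
    // reentrant: one instance can serve any number of pipelines.
    class StatelessDecoder {
        // Returns the complete message body, or null if more chunks
        // are still needed for this request.
        byte[] decode(Chunk chunk) {
            DecodeState s = chunk.state;
            s.buffered.write(chunk.bytes, 0, chunk.bytes.length);
            byte[] b = s.buffered.toByteArray();
            if (s.expectedLength < 0 && b.length >= 4) {
                // Big-endian 4-byte length prefix.
                s.expectedLength = ((b[0] & 0xff) << 24) | ((b[1] & 0xff) << 16)
                                 | ((b[2] & 0xff) << 8)  |  (b[3] & 0xff);
            }
            if (s.expectedLength >= 0 && b.length >= 4 + s.expectedLength) {
                byte[] body = new byte[s.expectedLength];
                System.arraycopy(b, 4, body, 0, s.expectedLength);
                return body;
            }
            return null;   // wait for the next chunk
        }
    }

Two requests in flight at once each carry their own DecodeState, so
nothing in the stage itself has to be locked or duplicated.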
> 4). Make sure the framework is fast and optimized for rapidly
> implementing internet protocol servers, and in this regard I would
> like design decisions to be driven by some statistics and consensus.

Right, and if we set up some callbacks for events and errors, we can
monitor a running system.

> 5). Avoid generic framework-itis: we want a specific framework for
> writing internet protocol servers that behaves sort of like inetd in
> a single process. It's all about leveraging simplicity in design.

I'm not trying to create Avalon over here.

> Lastly, although least important in the decision-making process, I
> would like the internals to be easy to maintain and grasp for those
> developing and maintaining the framework. However, this is less
> important than the points above.

If we work with a small set of principles, it makes the whole thing
easier to grasp. I have a feeling that the current SEDA system asks you
to absorb too many principles, which makes it more difficult than it
needs to be.

-- 
"Programming today is a race between software engineers striving to
build bigger and better idiot-proof programs, and the Universe trying
to produce bigger and better idiots. So far, the Universe is winning."
- Rich Cook