httpd-dev mailing list archives

From Andrew Finkenstadt <>
Subject Apache 2.0 ideas
Date Tue, 03 Nov 1998 05:52:53 GMT

Many of you have no idea who I am, so I thought I'd introduce myself and give
out a half-baked idea for Apache 2.0 development.

In my day job I'm a senior developer for an online game company whose
simultaneous interactive connection load runs in the thousands for a single
product, and in the tens of thousands in aggregate.  While I haven't directly
written any code that talks at a low level to our user base, I have done quite
a bit of work building stateful HTML applications on both IIS and Apache, with
or without mod_perl to help.

I've been reading new-httpd for several months and have been enlightened
several times about what the OSS process can be like.

I was doing some serious cogitating on how to distribute the load for a
generic application whose various parts communicate via a well-defined
messaging interface across potentially thousands of processors or processes,
and came across the LISTSERV method of disconnected virtual machines. (cf: )
That said, a strong message-passing architecture (similar to the Apache
request_rec, but designed to minimize expensive memory-to-memory copies) would
probably suffice, avoiding both the multiple independent processes that could
be used elsewhere and the monolithic single-threaded (unix) process of
LISTSERV.

In a fit of fancy I started sketching out on the back of a napkin at dinner
this evening just how you'd go about dividing the various portions of Apache
into DVMs, and basically came up with this model:

   USER sends 
     one or more (perhaps empty, perhaps lengthy) TRANSACTIONS to 
       a SERVER who eventually GENERATES 
         a (perhaps empty, perhaps lengthy, perhaps delayed) REPLY
   expected by the USER.

At heart, this describes just about any send-expect protocol, of which
HTTP/1.0 is one.  

HTTP/1.1 adds complexity by attempting to multiplex requests across one
expensive-to-establish connection, along with various add-ons for content
language negotiation and ways to signal third-party waystations (cache
servers, proxy servers, etc.) about the contents.  One must deal with this
complexity while still keeping each transaction stateless.  There's no
guarantee that a proxy server implementing /1.1 WOULDN'T intermingle MULTIPLE
users' requests across the same connection, and so one cannot assume that
kept-open connections have anything to do with each other.

Thus we deal with HTTP/1.1 conceptually by generating multiple transactions
and combining output back to the requesting user.

Thus, we end up with layers that:

  Read in and gather an entire transaction (POST/PUT data, etc.).
  Submit the transaction-message to the server black box.
  Magically deal with 1.1 multiple transactions, output chunking, etc.

The server black box:

  receives a message (the digested transaction, which doesn't necessarily have
to come from an HTTP processor, it could just as easily come from a Gopher
processor, or command line exerciser),

  has various ways of knowing how to service the reply through the various
phases (authorization, authentication, etc),

  has various back-end methods of retrieving data (file, process, CGI,
mod_perl transaction handler, etc.),

  and passes the result back to the originator as yet another message.

If at any point it becomes important to transfer the message to another
processor instead of something in local memory, then it is passed across
transparently with its accompanying environment necessary for processing
(magic, but we do it here), and the result re-inserted into the output chain
when the message has been processed.

I would think this sort of processing would work on multiple-processor boxes,
or in a single shared-memory multi-threaded program where the time-cost of
copying memory to pass messages around can be minimized.

I'm not sure how well this could be implemented using the current NSPR

