httpd-dev mailing list archives

From "Jeffrey W. Baker" <>
Subject Re: HTTP + XML + SCP = HTTP/ng
Date Fri, 11 Feb 2000 23:22:47 GMT
Dean Gaudet wrote:
> On Mon, 7 Feb 2000, Martin Pool wrote:
> > Given the state of Java and HTTP today I wonder whether it would be
> > better to build a toolkit of Java classes for building HTTP servers,
> > rather than using AJP to plug into Apache.
> that's what i was advocating.
> > Where's Apache going to go in the future?  Is it going to be a single
> > big do-everything program, or a cluster of complementary projects?
> i hope not, but unfortunately there's a little bit of momentum... i've
> made a couple attempts to suggest ways we could split it up.  but i don't
> have the energy to evangelise the topic.
> so one of the points i brought up earlier in the thread hasn't been
> addressed:  what problems are people trying to solve?
> the web applications of which i'm aware are essentially composed of:
> - static content
> - dynamic content which is cacheable (i.e. almost static)
> - dynamic content which is uncacheable
> the best url design includes all these components under the same hostname.
> so either all the services end up on one box, or some sort of front-end is
> required.
> both apache and squid perform as front-ends today.
> if we had a front-end server with the following feature set:
> - full HTTP/1.1 proxy cache
> - HTTP/1.1 server capable of serving static files
> - uses async i/o at least for serving static files (which includes
>   files in the proxy cache)
> - capable of dividing the urlspace amongst different pools of
>   backend servers
> - capable of load balancing amongst several backends within a pool
> - handles persistent/pipelined connections with clients and with
>   backends
> - SSL
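The urlspace-division and load-balancing bullets above could be sketched roughly as follows (a hedged Python sketch; the pool names, ports, and path prefixes are all invented for illustration):

```python
import itertools

# Hypothetical backend pools keyed by URL prefix; a real front end would
# read these from configuration.
POOLS = {
    "/static/": ["static1:8080", "static2:8080"],
    "/app/":    ["app1:9000", "app2:9000", "app3:9000"],
}
DEFAULT_POOL = ["app1:9000"]

# One round-robin cursor per pool, so load rotates within each pool
# independently.
_cursors = {prefix: itertools.cycle(backends)
            for prefix, backends in POOLS.items()}
_default_cursor = itertools.cycle(DEFAULT_POOL)

def pick_backend(path):
    """Map a request path onto a pool, then rotate through its members."""
    for prefix, cursor in _cursors.items():
        if path.startswith(prefix):
            return next(cursor)
    return next(_default_cursor)
```

The point is only that the routing decision (which pool) and the balancing decision (which member) are separable pieces of logic that belong in the front end.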

If #3 above includes buffering, then I'm with you.  If not, I would add
buffering.  It is very important for the front end to decouple the back
end from the client's pipe, so as to allow the two layers to scale
independently.  I realize that one can just as easily tweak the kernel's
socket buffer size, but I believe there are smarter ways to go about it.
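By buffering I mean something like this sketch: the front end drains the backend at full speed into local memory, so the backend connection is free again even while a slow client dribbles the response out (names here are illustrative, not any real API):

```python
import io

def slurp_response(backend_reader, chunk_size=8192):
    """Drain the backend completely into a local buffer so its connection
    can be released immediately, no matter how slowly the client reads.
    A real server would spool very large responses to disk instead of
    holding them all in memory."""
    buf = io.BytesIO()
    while True:
        chunk = backend_reader.read(chunk_size)
        if not chunk:
            break
        buf.write(chunk)
    buf.seek(0)
    return buf  # the client now drains this at its own pace
```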

The current choices of Apache or Squid for a load-balancing, failing-over,
buffering, caching HTTP accelerator are unsatisfactory.  In high
performance situations I find myself rolling my own.  The most daunting
thing about that is of course handling the HTTP protocol itself, where I
rely on the philosophy of passing everything verbatim to the backend
Apache and doing whatever it does, which also doubles as free keep-alive
handling.
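The verbatim pass-through approach can be sketched as a dumb byte splice: because the front end parses nothing, keep-alive and everything else is negotiated end to end between client and backend.  Threads stand in here for the async i/o a real front end would use:

```python
import socket
import threading

def splice(a, b, bufsize=8192):
    """Shuttle bytes verbatim between two sockets until either side
    closes.  Nothing is parsed or rewritten, so the client and the
    backend Apache speak HTTP directly to each other."""
    def pump(src, dst):
        while True:
            data = src.recv(bufsize)
            if not data:
                try:
                    dst.shutdown(socket.SHUT_WR)
                except OSError:
                    pass
                return
            dst.sendall(data)
    t1 = threading.Thread(target=pump, args=(a, b))
    t2 = threading.Thread(target=pump, args=(b, a))
    t1.start(); t2.start()
    t1.join(); t2.join()
```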

I would be very pleased to see some kind of libapachehttp for the
protocol, libapacheio for async Fu, and something very like squid's
redirector interface for implementing load balancing, failover, and
urlspace logic.  SSL is a bonus that can already be lifted from openssl,
and I can personally do without caching altogether.
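A redirector-style hook in the squid spirit might look like the following: URL in, rewritten URL out, with dead backends skipped for failover.  The health table and pool are invented; a real implementation would probe backends or mark them down on connect errors:

```python
# Hypothetical pool and liveness table, purely for illustration.
POOL = ["app1:9000", "app2:9000", "app3:9000"]
ALIVE = {"app1:9000": True, "app2:9000": False, "app3:9000": True}

def redirect(url, pool=POOL, alive=ALIVE, _cursor=[0]):
    """Take the request URL, return it rewritten against the next live
    backend.  The mutable-default _cursor is a quick round-robin hack."""
    parts = url.split("/", 3)
    path = parts[3] if len(parts) > 3 else ""
    for _ in range(len(pool)):
        backend = pool[_cursor[0] % len(pool)]
        _cursor[0] += 1
        if alive.get(backend):
            return "http://%s/%s" % (backend, path)
    raise RuntimeError("no live backends")
```

The attraction of this interface is that the load balancing, failover, and urlspace logic all collapse into one small, easily replaced function.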

Context: I am much more interested in using HTTP as an application
protocol between remote clients and my application server than I am in
delivering HTML documents.  The two have very different requirements in
that a pure application server generally doesn't generate anything
cacheable and doesn't serve static files of any kind.

