httpd-dev mailing list archives

From Brian Behlendorf <br...@organic.com>
Subject Re: random question of the hour
Date Mon, 17 Jun 1996 20:57:58 GMT
> > Things would be different if sites ran a separate Web server for their
> > large file downloads, set appropriate timeouts, never restart the server

Hmm, interesting thought here.  What would be the problems with the
following design: a very, very small NPH program which does two things
(rough sketch below):

1) dumped the file from the file system to the socket
2) at the end, or upon closure of the socket, logged the transaction using
syslog in some simple format
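
Something like this, maybe - an untested sketch, where PATH_TRANSLATED as
the handoff from the main server and the Content-Type are my assumptions,
not a worked-out interface:

/* mini-nph.c -- dump one file to the client, then log it. */
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <syslog.h>
#include <sys/types.h>

int main(void)
{
    const char *path = getenv("PATH_TRANSLATED");
    const char *hdr =
        "HTTP/1.0 200 OK\r\nContent-Type: application/octet-stream\r\n\r\n";
    char buf[8192];
    ssize_t n;
    long total = 0;
    int fd;

    if (path == NULL || (fd = open(path, O_RDONLY)) < 0)
        return 1;

    /* NPH: we emit the full HTTP response ourselves. */
    write(STDOUT_FILENO, hdr, strlen(hdr));

    /* 1) dump the file from the file system to the socket */
    while ((n = read(fd, buf, sizeof(buf))) > 0) {
        if (write(STDOUT_FILENO, buf, (size_t)n) != n)
            break;              /* socket closed by the client */
        total += n;
    }
    close(fd);

    /* 2) log the transaction via syslog in a simple format */
    openlog("mini-nph", LOG_PID, LOG_DAEMON);
    syslog(LOG_INFO, "%s %ld bytes", path, total);
    closelog();
    return 0;
}

Statically linked that should still be only a few K of text; the real
per-instance cost ought to be little more than the stack and the 8K buffer.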

Conceivably a compiled C program which did this could be very, very small -
it might be smaller than an extra thread in a multithreaded Apache.  One
can use the main server for access control, mapping the URL to the file,
etc. - but leave the actual delivery up to the NPH program.  The reason to
use NPH is not that you get to write the HTTP headers yourself, but that
the script "detaches" from the main cluster of servers, leaving them free
to do other things, in theory.  And the NPH scripts would (?) be shielded
from server software restarts.

The problem with CGI is that you get, as a bonus prize in every box, at
least one shell invocation.  Yes?  So if you had 100 files being
downloaded, with a 5Kbyte C program which did I/O, you would take up far
more than 500K...

But we don't really need the shell, do we?  Or do we?  Could we use
something other than NPH/CGI that gets the same effect?
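
One answer: the server itself could fork() and execve() the helper binary
directly, so no /bin/sh ever enters the picture.  A rough sketch - the
helper path and the CGI-style environment handoff here are made up for
illustration:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>

/* Spawn the helper with fork() + execve(), shell-free. */
pid_t spawn_downloader(const char *file)
{
    static const char *helper = "/usr/local/etc/httpd/bin/mini-nph";
    char env0[1024];
    char *envp[2];
    char *argv[2];
    pid_t pid;

    if (strlen(file) > sizeof(env0) - 20)
        return -1;
    sprintf(env0, "PATH_TRANSLATED=%s", file);
    envp[0] = env0;    envp[1] = NULL;
    argv[0] = "mini-nph";  argv[1] = NULL;

    pid = fork();
    if (pid == 0) {
        execve(helper, argv, envp);
        _exit(127);             /* exec failed */
    }
    return pid;                 /* -1 if fork() failed */
}

That keeps the per-download cost to one fork() plus the helper's own
footprint, with no shell invocation anywhere.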

> > and have some way of limiting the number of simultaneous
> > connections/downloads.

Is there some way the server could be told to only fork this mini-server
if there was free memory available? 
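
There's no portable "how much memory is free" call that I know of, but on
systems where something like sysconf(_SC_AVPHYS_PAGES) exists, the gate
might look like this - the footprint figure is a made-up estimate:

#include <unistd.h>

#define HELPER_FOOTPRINT (64L * 1024L)   /* bytes per helper -- a guess */

int room_for_another_helper(void)
{
#ifdef _SC_AVPHYS_PAGES
    long pages  = sysconf(_SC_AVPHYS_PAGES);   /* free physical pages */
    long pagesz = sysconf(_SC_PAGESIZE);
    if (pages > 0 && pagesz > 0)
        return pages * pagesz > 4 * HELPER_FOOTPRINT;  /* keep headroom */
#endif
    return 1;   /* can't tell -- fall back to a hard cap on children */
}

Failing that, a plain hard cap on the number of live helper children would
at least bound the damage.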

	Brian

--=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=--
brian@organic.com  www.apache.org  hyperreal.com  http://www.organic.com/JOBS

