httpd-dev mailing list archives

From: Marc Slemko <>
Subject: Re: non-buffered CGIs suck
Date: Fri, 06 Mar 1998 06:11:32 GMT
On Thu, 5 Mar 1998, Dean Gaudet wrote:

> On Thu, 5 Mar 1998, Marc Slemko wrote:
> > (actually, it could be).  If the OS modified tv to indicate time left it
> > is easy, but otherwise there is no nice way to do that. 
> i.e. Linux.  The time value is modified to indicate the remaining time.


> Linus tried to revert it during 2.1.x because Linux is the only unix that
> supports this and so nobody could use it.  But I showed that the C library
> depended on this functionality and he left it in. 
> > Yes.  It was just there to force a context switch.
> > 
> > It is an inaccurate representation of unbuffered CGIs sending static
> > content, but I would suggest it may be very accurate for a CGI sending
> > short bits of information that each require a disk read, etc.  A
> > well-designed app won't do that because of buffering on reading that
> > input data.  I'm not worried about well-designed apps though, since
> > they will watch their output too.
> If it's not a well-designed app it can do far worse than spit small
> packets on the net.  But if you feel this is a fun challenge to solve,
> go for it :)

The problem is that Apache is making this possible by disabling Nagle, so
we should either deal with all the consequences of disabling Nagle or not
do it at all.
> Maybe you just want to solve the "I don't want a buffer to age more than N
> seconds" problem in general.  It affects more than just mod_cgi you
> know... for example if you're in a pipelined connection a bunch of small
> short responses can be in the buffer, unsent, waiting for a long running
> request to generate enough output to flush the buffer. 
> It's probably as easy as making a second timeout notation in the
> scoreboard and sending a different signal when that timeout expires.  This
> works for all OPTIMIZE_TIMEOUTS configurations... which uh... are all I
> care about -- i.e. it covers probably 95% of our installations.  (And
> probably covers more except we don't have detailed info on the systems so
> we don't use shmget or mmap... see autoconf.)

Naw, I'll just wait for you to abstract timeouts and then use that.  <g>

On a related note, I want to look into how the various buffer sizes
interact with each other, and whether there is any reason at all why it
makes sense to use such small buffers for reading and writing.

> You notate only when you put data in an empty buffer, and you remove the
> notation when you flush the buffer.  The accuracy is +0 to +1s from when
> you want it, and you never make a syscall to do it. 

I am not convinced that sort of accuracy is really enough for this.  I
would almost always rather put an extra segment on the network than wait
an extra second.

> Critical section?  Easy.  It's just like SIGALRM handling.  You need to
> have a nesting counter, and sometimes you have to defer the flush until
> the nesting goes to 0.
> Dean
