httpd-dev mailing list archives

From Marc Slemko <ma...@worldgate.com>
Subject Re: non-buffered CGIs suck
Date Fri, 06 Mar 1998 04:42:01 GMT
On Thu, 5 Mar 1998, Dean Gaudet wrote:

> 
> 
> On Thu, 5 Mar 1998, Marc Slemko wrote:
> 
> > Why should it have any significant impact at all on them?  Heck, you have
> > less overhead when there is a delay of less than the select timeout
> > because you avoid pointless flushes.  When it does timeout and go to
> > block, you have one extra syscall overhead.
> > 
> > What other overhead is there?
> 
> 4k chunks never get buffered.  So waiting 100ms for each of them hurts
> overall throughput. 

I'm not sure I follow.  If they don't get buffered, where is the problem?
You do a 4k write.  It doesn't get buffered, so it goes out without a
flush.  You then wait for either 100ms or the next write, whichever comes
first.  If the next write comes right away, there is no difference.  This
code only comes into play if we need to block for the next read.  If you
do 2k writes, for example, then that 2k could end up being delayed an
extra 100 ms.
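The flush logic being discussed could be sketched roughly like this (a hypothetical illustration, not Apache's actual code; the helper name and the 100 ms figure are mine):

```c
#include <sys/select.h>
#include <sys/time.h>
#include <unistd.h>

/* Wait up to timeout_ms for more data on fd.  Returns 1 if data is
 * already pending (keep buffering), 0 on timeout (flush what we have). */
int more_data_soon(int fd, int timeout_ms)
{
    fd_set rfds;
    struct timeval tv;

    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);
    tv.tv_sec = timeout_ms / 1000;
    tv.tv_usec = (timeout_ms % 1000) * 1000;

    /* select() returns 0 on timeout, >0 if fd became readable */
    return select(fd + 1, &rfds, NULL, NULL, &tv) > 0;
}
```

If the CGI keeps writing, select() returns immediately and nothing extra happens; only when the CGI pauses for the full timeout do we pay the one extra syscall and flush.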

If you did a 4k write and it didn't get sent until the flush or until more data was written, that could add delay.  Not necessarily much, though: remember that the TCP stack still has its own send buffer, so in a bulk data flow I can see no delays, since the CGI should be able to write far faster than the network can send.
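The send-buffer point is easy to check directly; the kernel exposes the size through SO_SNDBUF (the helper name is mine):

```c
#include <sys/socket.h>
#include <netinet/in.h>

/* Return the kernel's send-buffer size for a socket, or -1 on error. */
int send_buffer_size(int sock)
{
    int size = 0;
    socklen_t len = sizeof(size);

    if (getsockopt(sock, SOL_SOCKET, SO_SNDBUF, &size, &len) < 0)
        return -1;
    return size;
}
```

As long as the CGI's writes fit in that buffer, write() returns immediately and any per-write delay on the Apache side is hidden by the kernel draining the buffer onto the wire.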

What really should be done here is to avoid sending anything smaller than a full segment, but we have no way to do that.

> 
> > Remember prior to 1.1?  We had Nagle enabled.
> 
> Doesn't help in all cases though.  But point taken.  How do things look if
> you re-enable Nagle?

Things are fine from the packet size perspective if you re-enable Nagle,
but it may cause performance problems in some cases.  The original reason
for disabling it was that we sent the headers in a separate segment.

We should only run into trouble with Nagle if we have two short segments
in a row.  Before, that could be the end of one response body and the
headers of the next response.  Now we don't flush after the headers are
sent, so that (common) case doesn't happen.  It could happen with just the
right sequence of cache validation stuff; not when we have a whole bunch
of requests pipelined at once, but when a new one comes in after we sent
the last segment of the previous response but before we have the ACK back
for it.  I plan to look at whether we can enable Nagle without causing
problems.  Nagle can be a lot smarter about this than Apache, because of
the layer it operates at.  I am also looking at how many systems have the
sucky segment-size problem; I am told that most don't, and I don't even
see it on all FreeBSD systems.  Not sure why yet.
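For reference, toggling Nagle is just a TCP_NODELAY setsockopt: setting it to 1 disables Nagle (what Apache does today), setting it to 0 re-enables it.  A sketch, with helper names of my own:

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* nodelay = 1 disables Nagle, nodelay = 0 re-enables it.
 * Returns 0 on success, -1 on error. */
int set_nodelay(int sock, int nodelay)
{
    return setsockopt(sock, IPPROTO_TCP, TCP_NODELAY,
                      &nodelay, sizeof(nodelay));
}

/* Returns 1 if Nagle is disabled, 0 if enabled, -1 on error. */
int get_nodelay(int sock)
{
    int val = 0;
    socklen_t len = sizeof(val);

    if (getsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &val, &len) < 0)
        return -1;
    return val != 0;
}
```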

> 
> And maybe I should check your script on Linux to see if it's another
> freebsd feature ;)  (couldn't resist ;) 
> 
> > > And I still disagree with every single CGI FAQ that says "set $| =1; in
> > > your perl scripts".  I've never understood why that is there.  I never
> > > seem to require it.  At least our FAQ explains that you should turn
> > > buffering back on. 
> > 
> > If you do anything that mixes non-buffered and buffered IO you need it or
> > something similar.  If you do:
> > 
> > print "Content-type: text/plain\n\n";
> > system("/bin/hostname");
> > 
> > you need it.
> 
> Yeah you're right, I guess I don't write these sort of lame CGIs so I
> never run into it. 
> 
> Dean
> 
> 

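The mix-up in that example is the classic stdio-buffering one: when stdout is a pipe it is fully buffered, so the script's print sits in the stdio buffer while the child process writes straight to the underlying file descriptor, and the body beats the header onto the wire.  A minimal C analogue of the same effect (the path and function name are illustrative only):

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Write a "header" through a fully buffered stdio stream, then write a
 * "body" directly to the same fd (as a child like /bin/hostname would).
 * Returns 1 if the direct write landed in the file first. */
int direct_write_beats_stdio(const char *path)
{
    FILE *f = fopen(path, "w");
    if (!f)
        return -1;
    setvbuf(f, NULL, _IOFBF, 8192);   /* fully buffered, like a piped stdout */

    fprintf(f, "Content-type: text/plain\n\n");  /* sits in the stdio buffer */
    write(fileno(f), "hostname-output\n", 16);   /* goes straight to the fd  */
    fclose(f);                        /* stdio buffer flushed only now       */

    char buf[128] = {0};
    FILE *r = fopen(path, "r");
    if (!r)
        return -1;
    fread(buf, 1, sizeof(buf) - 1, r);
    fclose(r);
    return strncmp(buf, "hostname-output", 15) == 0;
}
```

Setting $| = 1 (or flushing before the system() call) avoids this by pushing the header out before the child runs.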

