Date: Thu, 5 Mar 1998 21:49:12 -0800 (PST)
From: Dean Gaudet
To: TLOSAP
Subject: Re: non-buffered CGIs suck
X-Comment: Visit http://www.arctic.org/~dgaudet/legal for information regarding copyright and disclaimer.
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: new-httpd-owner@apache.org
Precedence: bulk
Reply-To: new-httpd@apache.org

On Thu, 5 Mar 1998, Marc Slemko wrote:

> (actually, it could be).  If the OS modified tv to indicate time left it
> is easy, but otherwise there is no nice way to do that.

i.e. Linux.  The timeval is modified to indicate the remaining time.
Linus tried to revert this during 2.1.x, because Linux is the only Unix
that supports it and so nobody could rely on it.  But I showed that the C
library depended on this behaviour, and he left it in.

> Yes.  It was just there to force a context switch.
>
> It is an inaccurate representation of unbuffered CGIs sending static
> content, but I would suggest it may be very accurate for a CGI sending
> short bits of information that each require a disk read, etc.  A well
> designed app won't do that because of buffering on reading that input
> data.  I'm not worried about well designed apps though, since they will
> watch their output too.

If it's not a well-designed app, it can do far worse than spit small
packets onto the net.  But if you feel this is a fun challenge to solve,
go for it :)

Maybe you just want to solve the "I don't want a buffer to age more than
N seconds" problem in general.  It affects more than just mod_cgi, you
know... for example, on a pipelined connection a bunch of small, short
responses can sit in the buffer, unsent, waiting for a long-running
request to generate enough output to flush the buffer.

It's probably as easy as making a second timeout notation in the
scoreboard and sending a different signal when that timeout expires.
This works for all OPTIMIZE_TIMEOUTS configurations... which, uh, are all
I care about -- i.e. it covers probably 95% of our installations.  (It
probably covers more, except that we don't have detailed info on some
systems, so we don't use shmget or mmap there... see autoconf.)  You make
the notation only when you put data into an empty buffer, and you remove
it when you flush the buffer.  The accuracy is +0 to +1s from when you
want it, and you never make an extra syscall to do it.

Critical section?  Easy.  It's just like SIGALRM handling.  You need a
nesting counter, and sometimes you have to defer the flush until the
nesting count goes back to 0.

Dean
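
[The following sketches are not part of the original mail.]

A minimal sketch of the Linux select() behaviour Dean refers to: Linux
writes the time remaining back into the timeval, so a wait loop that is
interrupted by signals still honours the original total timeout without
recomputing it.  Portable code cannot assume this; most other Unixes
leave the timeval untouched.

    /* Sketch only: relies on Linux select() updating *tv to the time
     * remaining, so the retry after EINTR continues the same deadline. */
    #include <sys/select.h>
    #include <sys/time.h>
    #include <errno.h>

    int wait_readable(int fd, long seconds)
    {
        struct timeval tv;
        fd_set rfds;
        int rv;

        tv.tv_sec = seconds;
        tv.tv_usec = 0;
        for (;;) {
            FD_ZERO(&rfds);
            FD_SET(fd, &rfds);
            rv = select(fd + 1, &rfds, NULL, NULL, &tv);
            if (rv >= 0)
                return rv;          /* 0 = timed out, >0 = fd is readable */
            if (errno != EINTR)
                return -1;          /* real error */
            /* EINTR: on Linux, tv now holds the remaining time; retry. */
        }
    }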
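A minimal sketch of the nesting-counter idea from the last paragraph,
using invented names rather than Apache's actual BUFF or scoreboard code:
the handler for the buffer-age signal must not flush while the mainline
code is in the middle of touching the buffer, so, as with SIGALRM
handling, it defers the flush and the unnest path performs it once the
count returns to 0.

    /* Sketch only: hypothetical names, not Apache internals. */
    #include <signal.h>

    static volatile sig_atomic_t nesting = 0;       /* >0 while the buffer is being modified */
    static volatile sig_atomic_t flush_pending = 0; /* set when a flush had to be deferred */

    static void do_flush(void)
    {
        /* stub: would write out whatever is sitting in the buffer */
    }

    static void age_timeout_handler(int sig)
    {
        (void)sig;
        if (nesting > 0)
            flush_pending = 1;      /* unsafe to flush now; defer it */
        else
            do_flush();
    }

    static void buffer_enter(void)
    {
        ++nesting;
    }

    static void buffer_leave(void)
    {
        if (--nesting == 0 && flush_pending) {
            flush_pending = 0;
            do_flush();             /* perform the deferred flush at nesting 0 */
        }
    }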