From: Brian Pane
Date: Mon, 24 Jun 2002 00:02:07 -0700
To: dev@httpd.apache.org
Subject: Re: core_output_filter buffering for keepalives? Re: Apache 2.0 Numbers
Message-id: <1024902128.1576.82.camel@localhost>

On Sun, 2002-06-23 at 23:12, Cliff Woolley wrote:
> On Mon, 24 Jun 2002, Bill Stoddard wrote:
>
> > Yack... just noticed this too. This renders the fd cache (in
> > mod_mem_cache) virtually useless. Not sure why we cannot setaside
> > an fd.
>
> You can. The buckets code is smart enough to (a) take no action if
> the apr_file_t is already in an ancestor pool of the one you're
> asking to setaside into, and (b) just use apr_file_dup() to get it
> into the requested pool otherwise, to handle the pool
> cleanup/lifetime issues.
>
> It's the core_output_filter that's doing an apr_bucket_read() /
> apr_brigade_write() here, presumably to minimize the number of
> buffers that will have to be passed to writev() later.
>
> That could be changed pretty easily, and the mmap/memcpy/munmap/
> writev would go away. Note, however, that since you can only pass
> one fd to sendfile at a time anyway,

I suppose we could take advantage of sendfilev(), which accepts
multiple file descriptors, on platforms where it's available.
However, I'd rather not set aside file descriptors here anyway,
because doing so would leave us vulnerable to running out of file
descriptors in the multithreaded MPMs.

> delaying the sending of a FILE bucket is pretty pointless if you're
> going to send it out with sendfile later anyway. What would be
> better is to mmap the file and hang onto the mmap, so you can pass
> a bunch of mmap'ed regions to writev() all at once. For cache
> purposes, that just means that all you have to do is consider the
> size of the files you're dealing with, and if they're small, use
> MMapFile instead of CacheFile. If we then got rid of the
> apr_bucket_read()/apr_brigade_write() in the core_output_filter and
> just saved up a brigade instead, you'd be set.

I have just one consideration to add here: if we add code to do an
mmap, we need to make sure it does an open+read instead of an mmap
when "EnableMMAP off" has been set for the directory containing the
file.

The more I think about it, though, the more I like the idea of just
writing the brigade out to the client immediately when we see EOS in
core_output_filter(), even if c->keepalive is true. If we do this,
the only bad thing that will happen is that when a client opens a
keepalive connection and sends a stream of requests for 1-byte
files, each file will be sent back in a separate small packet. But
that's still an improvement over the non-keepalive case (and
equivalent to the packetization that we get from 1.3).

--Brian
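
A minimal sketch of the setaside behavior Cliff describes, using the
standard APR buckets API; the helper name and its arguments are
illustrative, not actual httpd code. For a FILE bucket,
apr_bucket_setaside() is a no-op when the apr_file_t already lives in
an ancestor of the target pool, and does an apr_file_dup() otherwise:

    #include "apr_buckets.h"

    /* Try to set every bucket in the brigade aside into a
     * longer-lived pool instead of reading and copying it. */
    static apr_status_t setaside_brigade(apr_bucket_brigade *bb,
                                         apr_pool_t *pool)
    {
        apr_bucket *e;
        for (e = APR_BRIGADE_FIRST(bb);
             e != APR_BRIGADE_SENTINEL(bb);
             e = APR_BUCKET_NEXT(e)) {
            apr_status_t rv = apr_bucket_setaside(e, pool);
            if (rv != APR_SUCCESS && rv != APR_ENOTIMPL) {
                return rv;  /* caller should flush instead */
            }
        }
        return APR_SUCCESS;
    }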
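
For reference, the Solaris sendfilev() interface mentioned above
takes a vector of descriptors; a hedged sketch follows, with fds[],
lens[], n, and sock as assumed inputs rather than httpd structures,
and 16 as an arbitrary batch size:

    #include <sys/sendfile.h>  /* Solaris */

    /* Batch up to 16 cached files into one sendfilev() call. */
    static int send_cached_files(int sock, const int *fds,
                                 const size_t *lens, int n)
    {
        struct sendfilevec vec[16];
        size_t written;
        int i;

        for (i = 0; i < n && i < 16; i++) {
            vec[i].sfv_fd   = fds[i];   /* one open fd per file */
            vec[i].sfv_flag = 0;
            vec[i].sfv_off  = 0;        /* send from the start */
            vec[i].sfv_len  = lens[i];  /* whole file length */
        }
        return sendfilev(sock, vec, i, &written) == -1 ? -1 : 0;
    }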
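
The EnableMMAP check would look roughly like this; the field and flag
names below are recalled from the 2.0 core_dir_config and should be
verified against http_core.h:

    #include "http_core.h"
    #include "http_config.h"

    /* Decide whether mmap is permitted for this request's dir. */
    static int mmap_allowed(request_rec *r)
    {
        core_dir_config *d =
            ap_get_module_config(r->per_dir_config, &core_module);
        return d->enable_mmap != ENABLE_MMAP_OFF;
    }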
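
And the flush-at-EOS idea, sketched with a hypothetical
pass_to_network() standing in for whatever the filter's real write
path turns out to be:

    #include "httpd.h"
    #include "apr_buckets.h"

    /* If the brigade ends in EOS, write it to the client now rather
     * than setting it aside for the next request on the conn. */
    static apr_status_t flush_if_eos(conn_rec *c,
                                     apr_bucket_brigade *b)
    {
        if (!APR_BRIGADE_EMPTY(b)
            && APR_BUCKET_IS_EOS(APR_BRIGADE_LAST(b))) {
            /* deliberately ignore c->keepalive: worst case is one
             * small packet per tiny response, matching 1.3 */
            return pass_to_network(c, b);  /* hypothetical helper */
        }
        return APR_SUCCESS;  /* otherwise keep buffering */
    }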