httpd-dev mailing list archives

From Brian Pane <>
Subject [request for comments] limiting file bucket setaside
Date Mon, 05 Aug 2002 18:56:36 GMT
Currently, several output filters use ap_save_brigade() to set
aside a brigade until more data is available.  The content-length
filter, for example, sets aside the brigade in many cases where
it hasn't yet seen an EOS bucket.  And the core output filter sets aside
small files on keepalive requests in hopes of coalescing multiple
small files into a single write.

The setting aside of file buckets is one of the larger performance
problems remaining in the httpd.  The setaside function for file
buckets currently does an mmap+memcpy+munmap.  There are some common
cases, like keepalive requests for small files and SSI requests, in
which a file that could otherwise be sendfile'd is instead mmap'ed
and copied into the heap.

Here's a proposal for fixing this (thanks to Cliff and Ryan for
brainstorming about this on IRC and providing some key insights).

* Change the file bucket setaside code to use apr_file_setaside(),
  which simply copies the file data structure and doesn't try to
  mmap or read the file.  But if we set aside too many file descriptors
  in this manner, we might run out of descriptors in a multithreaded
  server, so...

* When setting aside a brigade in any filter, limit the number of
  file buckets that can be set aside.  If the brigade exceeds this
  limit, turn some of the file buckets (preferably the smaller ones)
  into mmap or heap buckets so that no more than 'limit' file
  buckets remain.  The limit should ideally be user-configurable.

* We'd need to identify a place in the server to put this logic.
  Cliff suggested ap_save_brigade(), and I think that's a good
  choice: the need to conserve file descriptors is really specific
  to the httpd architecture, so I think it makes sense to have it
  in an ap_ function rather than an apr_ one.

I'm eager to try this (post 2.0.40), as it could significantly help
2.0's performance on keepalive requests.  But first, does anyone else
have feedback on the design approach?

