On Fri, Oct 05, 2007 at 03:37:57PM +1000, Bojan Smojver wrote:
> Now imagine someone (like yours truly :-) writing a handler/filter that
> sends many, many buckets inside a brigade down the filter chain. This
> causes the httpd process to start consuming many, many megabytes (in
> some instances I measured almost 500 MB in my tests), which are never
> returned. Then imagine multiple httpd processes doing the same thing and
> not releasing any of that memory back. The machine goes into DoS
> quickly, due to excessive swapping.
It sounds like that is the root cause. If you create a brigade with N
buckets in it, for arbitrary values of N, expect maximum memory
consumption to be O(N). The output filtering guide touches on this:
http://httpd.apache.org/docs/trunk/developer/output-filters.html
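
To make the failure mode concrete, here's a hedged sketch of the kind
of generator being described (the chunk size, count and content are
assumptions on my part): every bucket lands in one brigade before
anything is passed down, so the entire response is resident at once.

#include <stdlib.h>
#include <string.h>
#include "httpd.h"
#include "util_filter.h"
#include "apr_buckets.h"

#define CHUNK (1024 * 1024)  /* 1MB per bucket; an assumption */

static int buffering_handler(request_rec *r)
{
    apr_bucket_brigade *bb =
        apr_brigade_create(r->pool, r->connection->bucket_alloc);
    int i;

    for (i = 0; i < 100; i++) {
        char *buf = malloc(CHUNK);
        memset(buf, 'x', CHUNK);  /* stand-in for real content */
        /* heap bucket takes ownership of buf; freed on destroy */
        APR_BRIGADE_INSERT_TAIL(bb,
            apr_bucket_heap_create(buf, CHUNK, free,
                                   r->connection->bucket_alloc));
    }
    APR_BRIGADE_INSERT_TAIL(bb,
        apr_bucket_eos_create(r->connection->bucket_alloc));

    /* only now does anything go down the chain: ~100MB peak */
    return ap_pass_brigade(r->output_filters, bb) == APR_SUCCESS
           ? OK : HTTP_INTERNAL_SERVER_ERROR;
}
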
Filters need to be written to pass processed buckets down the filter
chain ASAP, rather than buffering them up into big brigades. Likewise
for a content generator: buffering up a hundred 1MB HEAP buckets in a
brigade will obviously give you a maximum heap usage of ~100MB; instead,
pass each HEAP bucket down the filter chain as it's generated and you
get a maximum of ~1MB.
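
A minimal sketch of the incremental version of the same generator,
again with assumed sizes: the brigade is passed down and cleaned up
after each bucket, so only one chunk is ever resident at a time.

#include <stdlib.h>
#include <string.h>
#include "httpd.h"
#include "util_filter.h"
#include "apr_buckets.h"

#define CHUNK (1024 * 1024)  /* 1MB per bucket; an assumption */

static int incremental_handler(request_rec *r)
{
    apr_bucket_brigade *bb =
        apr_brigade_create(r->pool, r->connection->bucket_alloc);
    int i;

    for (i = 0; i < 100; i++) {
        char *buf = malloc(CHUNK);
        memset(buf, 'x', CHUNK);  /* stand-in for real content */
        APR_BRIGADE_INSERT_TAIL(bb,
            apr_bucket_heap_create(buf, CHUNK, free,
                                   r->connection->bucket_alloc));

        /* pass each bucket down as it is generated: ~1MB peak */
        if (ap_pass_brigade(r->output_filters, bb) != APR_SUCCESS) {
            return HTTP_INTERNAL_SERVER_ERROR;
        }
        apr_brigade_cleanup(bb);  /* empty the brigade and reuse it */
    }

    APR_BRIGADE_INSERT_TAIL(bb,
        apr_bucket_eos_create(r->connection->bucket_alloc));
    return ap_pass_brigade(r->output_filters, bb) == APR_SUCCESS
           ? OK : HTTP_INTERNAL_SERVER_ERROR;
}
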
Most of the shipped filters do behave correctly in this respect (though
there are problems with the handling of FLUSH buckets); see e.g. the
AP_MIN_BYTES_TO_WRITE handling in mod_include.
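
For reference, AP_MIN_BYTES_TO_WRITE (defined in httpd.h) is the
threshold used to decide when accumulated output is worth passing down.
Here's a hedged sketch of that pattern in a generic output filter; this
is my own illustration of the idea, not mod_include's exact logic.

#include "httpd.h"
#include "util_filter.h"
#include "apr_buckets.h"

static apr_status_t throttle_filter(ap_filter_t *f, apr_bucket_brigade *bb)
{
    /* real filters keep 'out' in f->ctx so state survives between
     * invocations; created per-call here to keep the sketch short */
    apr_bucket_brigade *out =
        apr_brigade_create(f->r->pool, f->c->bucket_alloc);
    apr_off_t bytes;
    apr_status_t rv;

    while (!APR_BRIGADE_EMPTY(bb)) {
        apr_bucket *e = APR_BRIGADE_FIRST(bb);

        /* (process/transform e here) */
        APR_BUCKET_REMOVE(e);
        APR_BRIGADE_INSERT_TAIL(out, e);

        /* pass output down once enough has accumulated to be worth
         * a write, and on FLUSH/EOS; -1 means indeterminate length
         * (e.g. a pipe bucket), so pass that on too rather than
         * buffering blindly */
        apr_brigade_length(out, 0, &bytes);
        if (bytes == -1 || bytes >= AP_MIN_BYTES_TO_WRITE
            || APR_BUCKET_IS_FLUSH(e) || APR_BUCKET_IS_EOS(e)) {
            rv = ap_pass_brigade(f->next, out);
            if (rv != APR_SUCCESS) {
                return rv;
            }
            apr_brigade_cleanup(out);
        }
    }

    /* pass any short remainder rather than holding it across calls */
    if (!APR_BRIGADE_EMPTY(out)) {
        return ap_pass_brigade(f->next, out);
    }
    return APR_SUCCESS;
}
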
joe