httpd-dev mailing list archives

From Aleksey Midenkov <>
Subject Re: Cleanup/destruction of connection pool and associated bucket_alloc
Date Fri, 05 Oct 2007 09:28:27 GMT
And what if a large file is downloaded and processed by filters? Will the 
buckets allocated by the filters not be deallocated until the connection 
ends? This could be a cause of DoS. The buckets should be freed once they 
have been flushed out of ap_core_output_filter.

On Friday 05 October 2007 09:37:57 Bojan Smojver wrote:
> I noticed that if a large number of buckets in a brigade are sent out,
> the resident memory footprint of the httpd process (I have been testing
> with 2.2.6 for now) will go up significantly.
> For instance, one can replicate this behaviour by having the INCLUDES
> filter process a file that contains a lot of references (say a
> thousand) like this:
> <!--#include virtual='somefile.html' -->
> The size of the somefile.html does not matter (it can actually be zero).
> In this particular example, resident size of httpd jumped from 3 to 11
> MB. The served file in question was about 40 kB in size (i.e. the SHTML
> file containing the virtual directive). Quite a bit for such a small
> chunk of HTML being pushed out.
> What appears to be happening is that conn->pool and conn->bucket_alloc
> do not get destroyed (but rather just cleaned), which then causes the
> footprint of the process to go up, given that a lot of buckets were
> allocated. In fact, even destroying conn->pool does not help, because it
> would appear that conn->pool is not the owner of its allocator.
> Destroying conn->pool->parent brings the memory footprint of httpd back
> in check.
> Now imagine someone (like yours truly :-) writing a handler/filter that
> sends many, many buckets inside a brigade down the filter chain. This
> causes the httpd process to start consuming many, many megabytes (in
> some instances I measured almost 500 MB in my tests), which are never
> returned. Then imagine multiple httpd processes doing the same thing and
> not releasing any of that memory back. The machine quickly goes into
> DoS, due to excessive swapping.
> Sure, I could fix my code to slam buckets together to reduce the number
> of them, but that would not fix any other handler/filter (e.g.
> mod_include). So, I'm guessing the correct fix would be to:
> - make conn->pool have/own its own allocator
> - destroy, rather than clear, conn->pool on connection close
> Thoughts?
