httpd-dev mailing list archives

From Peter Van Biesen <>
Subject Re: Segmentation fault when downloading large files
Date Mon, 02 Sep 2002 07:58:19 GMT
I've continued to investigate the problem; maybe you know what could
cause it.

I'm using a proxy chain: a proxy running internally forwards all
requests to another proxy in the DMZ. Both proxies are identical. It is
always the internal proxy that crashes; the external proxy has no
problem downloading large files (I haven't tested its memory usage
yet). So when the proxy connects directly to the site, the memory is
freed, but when it forwards the request to another proxy, it is not.

Anyhow, I'll wait until 2.0.41 is released; maybe that will solve the
problem. Does anybody know when this will be?


Graham Leggett wrote:
> Brian Pane wrote:
> > But the memory involved here ought to be in buckets (which can
> > be freed long before the entire request is done).
> >
> > In 2.0.39 and 2.0.40, the content-length filter's habit of
> > buffering the entire response would keep the httpd from freeing
> > buckets incrementally during the request.  That particular
> > problem is gone in the latest 2.0.41-dev CVS head.  If the
> > segfault problem still exists in 2.0.41-dev, we need to take
> > a look at whether there's any buffering in the proxy code that
> > can be similarly fixed.
> The proxy code doesn't buffer anything, it basically goes "get a bucket
> from backend stack, put the bucket to frontend stack, cleanup bucket,
> repeat".
> There are some filters (like include, I think) that "put away" buckets as
> the response is handled; it is possible one of these filters is also
> causing a "leak".
> Regards,
> Graham
> --
> -----------------------------------------
>         "There's a moon
>                                         over Bourbon Street
>                                                 tonight..."
