httpd-dev mailing list archives

From Rob Hartill <hart...@ooo.lanl.gov>
Subject Re: any ideas on this 0.8.16 problem?
Date Mon, 27 Nov 1995 09:19:56 GMT
 
> +1 on releasing 1.0.0.  It has been running on our server for the
> weekend with no problems. 
> 
> However, I was having problems with 0.8.16, which crashed our server
> machine on Wednesday and drove the root partition to an early grave. :(
> I am hoping that this is not a general problem, but here it is in case
> anyone has seen something similar or can find the cause.  Maybe one of
> you folks with a serious test setup can reproduce it.

It might be the same problem I was seeing.

I originally dismissed lack of memory as the cause, but there doesn't seem to be
anything else that could explain it.

It's possible that when Cardiff runs out of memory, Apache fails to check
for the allocation failure - or, more likely, fails to react to it
correctly - and sends itself into a downward spiral to certain death
(a massive writing spree to error_log).
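
To make that concrete, here's roughly the kind of loop I imagine (a made-up
sketch, not the real Apache source - the buffer size and log message are
invented): the child logs the failed malloc but carries straight on, fails
again at once, and error_log grows without bound.

    #include <stdio.h>
    #include <stdlib.h>

    #define REQUEST_BUFFER_SIZE 8192   /* invented figure, for illustration */

    void serve_requests(FILE *error_log)
    {
        for (;;) {
            char *buf = malloc(REQUEST_BUFFER_SIZE);
            if (buf == NULL) {
                /* logs the failure but never gives up, so the log fills fast */
                fprintf(error_log, "httpd: malloc failed, retrying\n");
                continue;
            }
            /* ... handle one request with buf, then release it ... */
            free(buf);
        }
    }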

> Since I have Multiviews on for all
> directories, each failure also invokes the multiview code. My httpd.conf:
> 
>      MinSpareServers 2
>      MaxSpareServers 4
>      MaxClients 40
>      MaxRequestsPerChild 60
> 
> My current theory is that the high request rate, combined with a severe
> memory leak somewhere (multiviews?),

Unless Multiviews is switched on by default, I don't use it at Cardiff, so
that should eliminate that part of the code.

> is causing a server memory blowout.
> Interestingly, the first "Unable to fork new process" occurs immediately
> after the 60th request.  I am going to reduce MaxRequestsPerChild to 40,
> just in case.

I think mine's down to 15.
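
For the record, that would make the block quoted above look something like
this (the other three figures are just the ones quoted - I'm not claiming
they match Cardiff's config):

     MinSpareServers 2
     MaxSpareServers 4
     MaxClients 40
     MaxRequestsPerChild 15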

I still get one of these "blowouts" every day or two.


Are there any cases where an Apache child process decides to continue
after a failed request for memory? If there are, shouldn't the action
be an immediate log and die?
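
Something along these lines is what I have in mind (just a sketch of the
idea, not a patch - the wrapper name and log message are invented): wrap
the allocation, and on failure write one line to error_log and exit,
leaving the parent to replace the child.

    #include <stdio.h>
    #include <stdlib.h>

    /* Allocate or die: log once and exit instead of limping on. */
    void *must_malloc(size_t size, FILE *error_log)
    {
        void *p = malloc(size);
        if (p == NULL) {
            fprintf(error_log, "httpd: out of memory, child exiting\n");
            fflush(error_log);
            exit(1);
        }
        return p;
    }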



rob
