httpd-dev mailing list archives

From Dean Gaudet <dgau...@arctic.org>
Subject Re: worth fixing "read headers forever" issue?
Date Fri, 02 Jan 1998 22:13:33 GMT
There are two problems here.  One is setting artificial limits on the size
of requests we'll process, and the other is getting rid of children which
have allocated more RAM than we want them to (because children never free
their RAM).  The latter is solved by what I said... you could also make
the malloc_block() code abort gracelessly when it determines that too much
RAM is in use.  It'd be about the same as setting an rlimit(), except we'd
know what the cause was.
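
As a rough illustration of what that accounting could look like (a
sketch only, not Apache's actual alloc.c; the bytes_in_use counter
standing in for a scoreboard slot and the MAX_CHILD_BYTES cap are made
up for the example):

#include <stdio.h>
#include <stdlib.h>

#define MAX_CHILD_BYTES (8 * 1024 * 1024)   /* hard cap before giving up */

static size_t bytes_in_use;                 /* stands in for a scoreboard slot
                                             * the parent could read */

static void *malloc_block(size_t size)
{
    void *blk;

    if (bytes_in_use + size > MAX_CHILD_BYTES) {
        /* the "graceless" abort: log why and exit, instead of letting
         * an rlimit kill us with no explanation */
        fprintf(stderr, "child exceeded %d bytes of pool memory\n",
                MAX_CHILD_BYTES);
        exit(1);
    }
    blk = malloc(size);
    if (blk == NULL) {
        fprintf(stderr, "out of memory\n");
        exit(1);
    }
    bytes_in_use += size;   /* parent prefers to kill the child with
                             * the largest count */
    return blk;
}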

You can abort somewhat gracefully in a lot of cases; whenever alarms
aren't blocked, for example, you can just longjmp() out.  But it's still a
pain in the ass trying to deliver an error message to the client.  I
suppose we can test to find out whether the response has started, and if
not, give the client a 500 response.  Otherwise we have to stop the
response early; there's nothing else we can do.
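
Something along these lines, as a sketch; the response_started flag and
the handler wiring are assumptions for illustration, not the real server
code:

#include <setjmp.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static sigjmp_buf abort_buf;
static volatile sig_atomic_t response_started;  /* set once headers go out */

static void alarm_handler(int sig)
{
    (void)sig;
    siglongjmp(abort_buf, 1);   /* only safe when alarms aren't blocked
                                 * and nothing non-reentrant is running */
}

int main(void)
{
    signal(SIGALRM, alarm_handler);

    if (sigsetjmp(abort_buf, 1)) {
        if (!response_started) {
            /* nothing sent yet, so we can still tell the client */
            puts("HTTP/1.0 500 Internal Server Error\r\n\r\n");
        }
        /* otherwise the response just stops short */
        return 1;
    }

    alarm(1);                   /* pretend the memory check tripped */
    for (;;)
        pause();                /* stand-in for request processing */
}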

Dean

On Fri, 2 Jan 1998, Brian Behlendorf wrote:

> At 01:22 PM 1/2/98 -0800, Dean Gaudet wrote:
> >When it malloc()s a block it increments a
> >scoreboard entry showing how much memory it has allocated.  The parent
> >prefers to kill off the child with the most memory allocated.  It's not at
> >all expensive actually. 
> 
> But will the parent check "often enough" to kill a child when it's reading
> an infinite number of headers, for example?  It won't kill a child reading
> a request anyway.  If we want a /general/ solution to the "reading input
> from a network" problem, that checking should also be done while input is
> being read.
> 
> 	Brian
> 
> 
> --=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=--
> specialization is for insects				  brian@organic.com
> 
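
For Brian's point about checking while input is being read: a minimal
sketch of a cap enforced inside the header-reading loop itself, with
made-up limits (MAX_HEADER_LINES, MAX_HEADER_BYTES):

#include <stdio.h>
#include <string.h>

#define MAX_HEADER_LINES 100
#define MAX_HEADER_BYTES (16 * 1024)

/* returns 0 on success, -1 if the client sent too many or too-large
 * header lines */
int read_headers(FILE *conn)
{
    char line[8192];
    int nlines = 0;
    size_t nbytes = 0;

    while (fgets(line, sizeof line, conn) != NULL) {
        if (line[0] == '\r' || line[0] == '\n')
            return 0;           /* blank line ends the headers */

        nbytes += strlen(line);
        if (++nlines > MAX_HEADER_LINES || nbytes > MAX_HEADER_BYTES)
            return -1;          /* fail the request here, rather than
                                 * letting the child grow forever */

        /* ... parse and store the header line ... */
    }
    return -1;                  /* EOF before end of headers */
}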

