httpd-dev mailing list archives

From Ben Laurie <>
Subject Re: YA Apache DoS attack
Date Sat, 08 Aug 1998 09:45:13 GMT
Dean Gaudet wrote:
> You lose all the speed benefits of pools if you try any of this... (not to
> mention the simplicity, and the possibility of introducing memory leaks).
> Remember, you're talking about deformed requests.  You don't want to put
> things into the path of well-formed requests.  (I don't care that HTTP
> allows these things to happen, I personally think that's a mistake.)  So
> my suggestion is to make a linked list of 4K buffers, and chop them up
> into headers as you can, or use them to figure out how long a header is
> and allocate the storage for the header in one step.
> In the normal case the request fits in the first 4K, and there are no
> continuation headers, so you can build the headers_in table just by
> walking through the buffer and putting \0 here and there.  You don't need
> to allocate any extra memory.
> In the abnormal cases, such as a header spanning a buffer boundary, or
> continuation headers, you have to allocate extra memory... but that's no
> problem... you just calculate the size of memory you need, allocate, and
> assemble the bits and pieces.
> One difficulty is what to do with the "extra" data which is not part of
> the headers but which follows the headers... you essentially want to be
> able to push that back into the BUFF.  You only need one level of this
> sort of pushing, so support it directly -- it's a little magic with inptr.
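
To make the in-place idea concrete, here is a rough sketch of the common
case (made-up names, not the real BUFF/headers_in code, and ignoring
continuation lines and the buffer-spanning case): walk the buffer, drop
'\0' terminators, and record pointers, with no per-header allocation.

    #include <stdio.h>
    #include <string.h>

    #define MAX_HDRS 64

    struct hdr { char *name; char *value; };

    /* Parse "Name: value\r\n" lines in place by writing '\0'
     * terminators into the buffer itself; no per-header allocation. */
    static int parse_headers(char *buf, struct hdr *tab, int max)
    {
        int n = 0;
        char *line = buf;

        while (n < max && *line && strncmp(line, "\r\n", 2) != 0) {
            char *eol = strstr(line, "\r\n");
            char *colon = memchr(line, ':',
                                 eol ? (size_t)(eol - line) : strlen(line));

            if (!eol || !colon)
                return -1;               /* deformed request */

            *eol = '\0';                 /* terminate the line */
            *colon = '\0';               /* split name from value */
            tab[n].name  = line;
            tab[n].value = colon + 1;
            while (*tab[n].value == ' ') /* skip leading whitespace */
                tab[n].value++;
            n++;
            line = eol + 2;              /* step past "\r\n" */
        }
        return n;
    }

    int main(void)
    {
        char req[] = "Host: example.com\r\nAccept: */*\r\n\r\n";
        struct hdr tab[MAX_HDRS];
        int n = parse_headers(req, tab, MAX_HDRS);

        for (int i = 0; i < n; i++)
            printf("%s = %s\n", tab[i].name, tab[i].value);
        return 0;
    }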

My first thought was to store the headers unmerged until the end, then
merge them in a single step (calculating the correct size in advance, of
course). It occurred to me that this is still O(n^2) in time, though
only O(n) in memory.
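
The single merge step might look something like this (again, made-up
structures rather than the real table code); the nested scans over the
table are where the O(n^2) comes from, even though each merged value
needs only one allocation:

    #include <stdlib.h>
    #include <string.h>
    #include <strings.h>

    struct hdr { const char *name; const char *value; };

    /* Merge all values for tab[i].name into one ", "-joined string.
     * Call once per distinct name: first pass adds up the exact size,
     * second pass copies. */
    static char *merge_one(const struct hdr *tab, int n, int i)
    {
        size_t len = 0;
        int j;

        for (j = 0; j < n; j++)              /* first pass: exact size */
            if (strcasecmp(tab[j].name, tab[i].name) == 0)
                len += strlen(tab[j].value) + 2;   /* ", " or '\0' */

        char *out = malloc(len), *p = out;
        if (!out)
            return NULL;

        for (j = 0; j < n; j++) {            /* second pass: copy */
            if (strcasecmp(tab[j].name, tab[i].name) != 0)
                continue;
            if (p != out) { *p++ = ','; *p++ = ' '; }
            size_t vl = strlen(tab[j].value);
            memcpy(p, tab[j].value, vl);
            p += vl;
        }
        *p = '\0';
        return out;
    }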

Fixing the O(n^2) time problem is presumably a matter of keeping a
hash/tree of headers and chaining them, making it O(n*log(n)), which my
gut tells me is as good as you are going to get. Oh, you could do a
qsort at the end instead, with the same effect.
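
The qsort variant is about the simplest to sketch: sort the unmerged
table by name so duplicates end up adjacent, then concatenate each run
in a single linear pass. One wrinkle is that qsort isn't stable, so you
would want a secondary key (e.g. arrival order) to keep same-named
values in the order they came in:

    #include <stdlib.h>
    #include <strings.h>

    /* Same illustrative structure as above, plus the arrival order so
     * the sort can be made stable by hand (qsort itself is not). */
    struct hdr { const char *name; const char *value; int order; };

    static int by_name_then_order(const void *a, const void *b)
    {
        const struct hdr *x = a, *y = b;
        int c = strcasecmp(x->name, y->name);
        return c ? c : (x->order - y->order);
    }

    /* After sorting, a single walk over tab[] can merge each run of
     * equal names with one sized allocation per run, as in merge_one(). */
    static void sort_headers(struct hdr *tab, int n)
    {
        qsort(tab, n, sizeof(*tab), by_name_then_order);
    }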

Another solution, I thought, would be to allocate twice as much memory
as you need for a merged header and do subsequent merges in place until
you run out, then double the allocation again. This limits total memory
consumption to twice (ish) the size of the headers. Still O(n^2) in
time, though.
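
Roughly, the doubling version of a merge might look like this (field
names made up for illustration): each merged header keeps some spare
room, later values are appended in place, and the buffer doubles only
when it fills. The copying then becomes amortised-linear, so the O(n^2)
that remains is presumably the scan for the matching header name on
each merge:

    #include <stdlib.h>
    #include <string.h>

    /* Illustrative only: a merged header that keeps spare capacity so
     * later values can be appended in place. */
    struct mhdr {
        char  *value;   /* merged value so far, '\0'-terminated */
        size_t len;     /* bytes used, excluding the '\0' */
        size_t cap;     /* bytes allocated */
    };

    /* Append ", <more>" to h->value, doubling the allocation only when
     * the spare room runs out. */
    static int append_value(struct mhdr *h, const char *more)
    {
        size_t need = strlen(more) + 2 + 1;     /* ", " + text + '\0' */

        while (h->len + need > h->cap) {
            size_t ncap = h->cap ? h->cap * 2 : 64;
            char *nv = realloc(h->value, ncap);
            if (nv == NULL)
                return -1;
            h->value = nv;
            h->cap = ncap;
        }
        if (h->len) {                           /* separator between values */
            h->value[h->len++] = ',';
            h->value[h->len++] = ' ';
        }
        memcpy(h->value + h->len, more, strlen(more));
        h->len += strlen(more);
        h->value[h->len] = '\0';
        return 0;
    }

Starting from struct mhdr h = {0}, appending "foo" and then "bar"
leaves h.value as "foo, bar".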

Of course, this should apply to anything that uses merging, just in
case.



Ben Laurie            |Phone: +44 (181) 735 0686| Apache Group member
Freelance Consultant  |Fax:   +44 (181) 735 0689|
and Technical Director|Email: |
A.L. Digital Ltd,     |Apache-SSL author
London, England.      |"Apache: TDG"

