httpd-dev mailing list archives

From Marc Slemko <>
Subject argh... bugs in request processing...
Date Fri, 02 Jan 1998 08:24:02 GMT
All I'm trying to do is have Apache die properly if a client tries to send
too many headers.  Easy, I know what to do and how to do it.  Doing it,
however, is another question.

from http_request.c:

    if (r->status != HTTP_OK) {
        recursive_error = type;

        while (r->prev && (r->prev->status != HTTP_OK))
            r = r->prev;        /* Get back to original error */

        type = r->status;
        custom_response = NULL; /* Do NOT retry the custom thing! */

Well, that explains why my error code is magically changing to a 408 in
the middle of processing.  Take the simplest case of no subrequests:
r->prev == NULL, but we changed the default r->status to 408 to properly
log timed-out requests.  I was a bit apprehensive at the time, but only
because I was afraid it would mess something up somewhere I didn't
expect.
It did.  In this case it loses the status passed to die() and replaces it
with 408.  A quick fix is to change:

    if (r->status != HTTP_OK) {

to:

    if (r->status != HTTP_OK && r->prev) {

which fixes it for me (again, in the simple case of no subreqs), but this
is not the correct fix.  We either need to stop setting it to 408 at the
start of the request and find some other way to log the timeout properly
or we need to change all the code that checks this to accept either
HTTP_OK or 408.  This is not the only bit of code that needs to be fixed.
Haven't looked at this too deeply; I wouldn't doubt that there are other
ways to fix this.

The second issue is that some errors (e.g. URI too large) never get output
to the client.  The reason is that the buffer isn't flushed before the
connection is closed.  Oh, and for some reason a too-large URI doesn't
call die() anyway but just sets r->status, so it doesn't even try to
send a response.  Why!?!

Having send_error_response flush would be one way to fix the flush
problem, but it could have negative implications.  Oh, fsck it.  The
problem is that a NULL return from read_request is used to indicate an
error, yet when that happens r isn't set, so lingering_close is never
called and we just set B_EOUT and toss the junk.

Hmm.  Are we leaking stuff here, since the pool isn't destroyed if
read_request bails out?

So we need to change read_request to clean up after itself before
returning NULL, and to flush the connection.  Hmm... it looks like this
messes up the post-read-request stuff too.

Don't worry if some of the above makes no sense; it is past my bedtime.  I
was just trying to do a quick patch for limiting the reading of request
headers.  Honest.
