httpd-dev mailing list archives

From Greg Ames <>
Subject [PATCH] daedalus coredump from 2.0.35c
Date Wed, 10 Apr 2002 13:35:25 GMT
Cliff Woolley wrote:
> Is this the version with the patched mmap.c, I guess?

yes, sorry, I left out the big picture.

> Bucket #6 is quite likely the sentinel of another brigade that some stupid
> filter copied over on us.  I'd'a sworn we fixed the byterange filter, but
> it *is* seeming like a common denominator here.  Which corefile did you
> say this was?  I'll try to track it down.


/usr/local/apache is a symlink which always points to the running server's
install directory.  So if I screw up like this in the future,
/usr/local/apache/corefiles/httpd.core.* is a good bet, unless you see posts
about a change in the production server code level.

I can get this to seg fault on my ThinkPad pretty easily by running a byterange
request like the one I posted.  It happens when the byterange filter thinks it
should generate an error bucket with HTTP_RANGE_NOT_SATISFIABLE.  But sometimes
it generates one for a range that is actually satisfiable.

The byterange filter appears to be in the wrong place in the filter chain.  It
sits above the C_L filter, which creates a major problem: it can't tell what the
full content length of the response will be, so it can't do its job of
determining whether a Range: header applies.  Yeah, if you serve only unparsed
static pages, it happens to work.  But run it with mod_include or any other
filter that can expand or shrink the output, and all bets are off.

mod_include sends down an MMAP bucket containing the few bytes of data before
the first <!--#include virtual="..." --> tag.  When the b_r filter first runs,
it has no context yet, so it calls ap_set_byterange, which extracts the data
from the Range: header.  It then calls parse_byterange, passing r->clength,
which at this point holds only the length of the file named by r->filename.
parse_byterange decides the range is invalid based entirely on the length of the
first file.  It unwinds to the b_r filter, which removes itself from the filter
chain and then creates an error bucket with HTTP_RANGE_NOT_SATISFIABLE.

The following patch makes the seg faults go away for me, even with a truly
invalid byte range. It also allows the b_r filter to do a much better job of
determining if the range is valid:

Index: modules/http/http_core.c
RCS file: /home/cvs/httpd-2.0/modules/http/http_core.c,v
retrieving revision 1.301
diff -u -d -b -r1.301 http_core.c
--- modules/http/http_core.c    29 Mar 2002 08:17:22 -0000      1.301
+++ modules/http/http_core.c    10 Apr 2002 03:16:13 -0000
@@ -309,9 +309,9 @@
 static int http_create_request(request_rec *r)
 {
     if (!r->main && !r->prev) {
-        ap_add_output_filter_handle(ap_byterange_filter_handle,
-                                    NULL, r, r->connection);
         ap_add_output_filter_handle(ap_content_length_filter_handle,
                                     NULL, r, r->connection);
+        ap_add_output_filter_handle(ap_byterange_filter_handle,
+                                    NULL, r, r->connection);
         ap_add_output_filter_handle(ap_http_header_filter_handle,
                                     NULL, r, r->connection);

There might still be a bug somewhere in the error bucket path that I just can't
hit at the moment.  I wonder if mod_include gets confused and does bad things if
the filter chain decides to bail prematurely.  I've seen something like that in
the past.

I'd like to take a look at RFC 2616 after some sleep to verify the headers I see
with this patch, to beat it up some more, and to hear if there are valid reasons
why b_r should come before C_L.

