httpd-dev mailing list archives

From "Brian Havard" <>
Subject Re: cvs commit: apr-util/buckets apr_buckets_file.c (fwd)
Date Sun, 18 Feb 2001 10:04:01 GMT
On Sat, 17 Feb 2001 07:53:12 -0800 (PST), wrote:

>> >This was just committed to apr_util's bucket code.  This should fix the
>> >mod_include problem (at least it did for me).  What was happening was
>> >that we read from the file and converted it into a heap bucket.  Then we
>> >destroyed the file bucket (freeing *s at the same time), and then we used
>> >s->start to create the second file bucket.  Obviously, this means we were
>> >using garbage to create the second file_bucket.  This should allow us to
>> >read from the files cleanly.
>> >
>> >The second problem is one I haven't fixed however.  Brian, if you increase
>> >the size of your file to 10Meg, then mod_include will read all 10Meg into
>> >memory before sending it down the stack.  That's BAD!  mod_include needs
>> >to be taught how to stream data when there are no SSI tags in the file.
>> >
>> >I will be adding the second issue to STATUS.  Brian, please test the
>> >latest code on OS/2.
>> Well, that fixes the 50k case being scrambled, but a 100k file still gives no
>> output at all. The failure point is header+body > 64k.
>Is the file in the same location?

Yep (when the server's running anyway), though the contents appear
irrelevant; what matters is just the size & being parsed by mod_include. I'm
just testing with the same 1k per line with the line number space-padded. I'd
look at debugging it myself if I had a bit more spare time.

 |  Brian Havard                 |  "He is not the messiah!                   |
 |  |  He's a very naughty boy!" - Life of Brian |
