httpd-dev mailing list archives

From "Paul J. Reder" <>
Subject Re: cvs commit: httpd-2.0/modules/filters mod_include.c
Date Sat, 18 Aug 2001 17:27:43 GMT
OK, I have stepped through this in the debugger over and over. Jeff's
fixes are fine for now, maybe forever.

The concern I have is that Jeff's patch plays into the way the code currently
recovers from running over BYTE_COUNT_THRESHOLD. Once it has processed this
many bytes it calls SPLIT_AND_PASS_PRETAG_BUCKETS, then resets parsing to 
start at the beginning of the tag (i.e. in the tag "<!--#[directive]..." it
would restart at the < if it was looking for the header when it ran over
BYTE_COUNT_THRESHOLD, or at [directive] otherwise).

The reason for my concern is twofold. First, and least important, is the time
wasted reparsing the same bytes again. The second, and more important, is that if
the tag itself is over BYTE_COUNT_THRESHOLD bytes long, this code will loop forever.

It will parse 8192 bytes, then call SPLIT_AND_PASS_PRETAG_BUCKETS. The first time
this is called it splits the bucket right before the tag begins. If a subsequent
pass spans another 8192 bytes, all within this single tag, then the next call to
SPLIT_AND_PASS_PRETAG_BUCKETS does nothing. From that point on the code parses the
same 8192 bytes of the tag, resets to the tag start, parses them again, and so on...

The question is: should we care about the very remote possibility that someone might
write a tag longer than BYTE_COUNT_THRESHOLD bytes? If not, then Jeff's
code can stand, forever.

If we do care, then the code needs to be changed to handle tags longer than
BYTE_COUNT_THRESHOLD bytes, and to avoid reparsing bytes it has already consumed.


Paul J. Reder
"The strength of the Constitution lies entirely in the determination of each
citizen to defend it.  Only if every single citizen feels duty bound to do
his share in this defense are the constitutional rights secure."
-- Albert Einstein
