httpd-users mailing list archives

From Luke Meyer <>
Subject [users@httpd] script to reproduce gzip+chunked bug
Date Fri, 20 Jan 2012 20:10:28 GMT
I've been tracking down an elusive problem that affects POST requests sent to httpd with the
body gzipped and chunked, decompressed by mod_deflate, and passed on to a backend. I found
that it worked most of the time, but occasionally (sending exactly the same data repeatedly)
the contents wouldn't arrive at the backend intact. I suspect it has to do with the chunking,
which varies with network conditions.
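For reference, the chunk framing such a script has to produce is just the standard HTTP/1.1 chunked transfer-coding: a hex length line, the chunk bytes, a CRLF, and a terminating zero-length chunk. A minimal Python sketch (chunk_body is my own name, not anything from the attached script):

```python
import gzip

def chunk_body(data: bytes, size: int) -> bytes:
    # Frame `data` as an HTTP/1.1 chunked transfer-coding stream,
    # forcing every chunk (except possibly the last) to `size` bytes.
    out = b""
    for i in range(0, len(data), size):
        piece = data[i:i + size]
        out += b"%x\r\n" % len(piece) + piece + b"\r\n"
    return out + b"0\r\n\r\n"  # zero-length chunk terminates the body

# e.g. gzip a payload first, then frame it in 40-byte chunks
framed = chunk_body(gzip.compress(b"some test data"), 40)
```

Varying `size` is what lets you control exactly how large the final chunk ends up on the wire.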

A bit of an edge case, obviously, but I think I have a tool to reproduce it. I've written the
attached script, which gzips a test file and sends it with whatever chunk sizes I specify.
Can I get some feedback on whether my implementation is correct? I ask because I seem to have
uncovered a completely different bug along the way: if the last chunk is 8 bytes or smaller,
httpd returns 400 (Bad Request). Wireshark de-chunks and un-zips the captured packets just
fine, and so does httpd as long as the final chunk isn't too small. So I believe the script
is correct, but I'd like to know whether I'm doing something stupid before I file this as
a bug.

Script operation is pretty simple; once you have Perl with the IO::Socket::INET and
Compress::Zlib modules in place, just run it against a server like so:

./ -server=localhost -port=80 -url=/ -file=/usr/share/dict/words -size=40
(or just let the script use the defaults you see at the top)
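The script itself is Perl and attached, but as a sketch of the same operation, here is a hypothetical Python equivalent (the function name and the exact header set are my assumptions, not taken from the script):

```python
import gzip
import socket

def send_chunked_gzip_post(host: str, port: int, url: str,
                           payload: bytes, size: int) -> bytes:
    # Gzip the payload, then POST it with Transfer-Encoding: chunked,
    # cutting the body into fixed `size`-byte chunks. Returns the raw
    # response bytes.
    data = gzip.compress(payload)
    with socket.create_connection((host, port)) as s:
        s.sendall(
            b"POST %s HTTP/1.1\r\n"
            b"Host: %s\r\n"
            b"Content-Encoding: gzip\r\n"
            b"Transfer-Encoding: chunked\r\n"
            b"Connection: close\r\n\r\n" % (url.encode(), host.encode())
        )
        for i in range(0, len(data), size):
            piece = data[i:i + size]
            s.sendall(b"%x\r\n%s\r\n" % (len(piece), piece))
        s.sendall(b"0\r\n\r\n")  # terminating zero-length chunk
        return s.makefile("rb").read()
```

With a small `size`, the last chunk of the gzipped body will frequently land under 8 bytes, which is the case that seems to trigger the 400.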

I've only tested it on Linux so far. Could you have a look and let me know?
