httpd-dev mailing list archives

From Brandon Fosdick <>
Subject Re: Large file support in 2.0.56?
Date Sat, 22 Apr 2006 21:25:24 GMT
Brandon Fosdick wrote:
> If my theory is correct, then I think the solution is to find a way to 
> stream data to the storage provider earlier in the request process. I 
> don't know if that's a core issue, or just some config bits in mod_dav, 
> or my provider, that need to be fiddled. It's odd that httpd buffers the 
> whole thing and then mod_dav streams it in 2K chunks, so I've got a 
> feeling there's something in mod_dav that needs tweaking.

More notes...

I found the part in mod_dav that streams the request body to the storage provider (see the
"Buckets and brigades" thread). It reads a fixed 2K block from the input brigade and then
passes a pointer to that block to the provider. Rinse and repeat until reaching EOS.
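The shape of that loop is roughly the following. This is a self-contained sketch, not the
actual mod_dav code: the real loop pulls buckets from an APR brigade via ap_get_brigade,
and the constant is DAV_READ_BLOCKSIZE (2048 in mod_dav.h); the provider_write name and
the plain-buffer "body" here are stand-ins for the provider's write_stream hook.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Stand-in for mod_dav's fixed read size (DAV_READ_BLOCKSIZE is 2048). */
#define READ_BLOCKSIZE 2048

/* Stub "storage provider": just counts the calls and bytes it receives. */
static long provider_calls = 0;
static long provider_bytes = 0;

static void provider_write(const char *block, size_t len)
{
    (void)block;
    provider_calls++;
    provider_bytes += (long)len;
}

/* Stream `total` bytes of request body to the provider in fixed-size
 * blocks, the same shape as mod_dav's loop: read up to READ_BLOCKSIZE,
 * hand the block to the provider, repeat until the body is exhausted
 * (i.e. until EOS in the real brigade-based code). */
static void stream_body(const char *body, size_t total)
{
    size_t off = 0;
    while (off < total) {
        size_t n = total - off;
        if (n > READ_BLOCKSIZE)
            n = READ_BLOCKSIZE;
        provider_write(body + off, n);
        off += n;
    }
}
```

Bumping READ_BLOCKSIZE only changes how much data each provider call carries; the total
amount moved per unit time is still bounded by how fast the provider writes it out.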

On a whim I tried changing 2K to 64K, just to see what would happen. Using mod_dav_fs with
2K blocks, the client times out after ~75MB have been written to disk. With 64K blocks,
~90MB are written.

Not a big difference, but it furthers my suspicion that this problem has more to do with timing
than with file size. The amount of data written to disk appears to depend on the write speed
as well as the patience of the client.
