apr-dev mailing list archives

From Cliff Woolley <jwool...@virginia.edu>
Subject Re: File buckets and downloading files larger than 4gig...
Date Wed, 17 Dec 2003 17:09:25 GMT
On Wed, 17 Dec 2003, Brad Nicholes wrote:

> incompatibilities in the bucket code.  There are a number of places
> where file lengths are defined as apr_size_t rather than apr_off_t.
> What is the downside of redefining these variables as apr_off_t (ie.
> off64_t rather than off_t)?

We went back and forth on that a lot when writing it.  (Greg Stein and
Bill Rowe did most of the back-and-forth I think, but whatever.  ;)  The
way it's written now is that a brigade can contain as much as an
apr_off_t's worth of data, but a single bucket can contain only an
apr_size_t's worth.  If you have a large file, you're supposed to split it
into multiple file buckets and chain them together.  One stated reason for
that is that funcs like sendfile() only take a size_t's worth of data at a
time anyway.  AFAIK the main reason was that some buckets just can't hold
more than a size_t of data no matter what you do, e.g. a heap bucket on a
32-bit machine, so to be consistent they're all capped at that size.

> I guess the other question would be, is this even an issue? Do users
> expect to be able to download a file larger than 4gig or even 2gig?

I should certainly think so.  It's certainly been a very big issue for me
at times, since graphics people like myself tend to toss around huge
files.  :)
