apr-dev mailing list archives

From "Brad Nicholes" <BNICHO...@novell.com>
Subject Re: File buckets and downloading files larger than 4gig...
Date Wed, 17 Dec 2003 17:30:03 GMT
   Buckets being restricted to a size_t is kind of what I expected. So
here is what I am seeing; maybe you can help me work through it. In
ap_content_length_filter() the code adds up the lengths of all of the
buckets, puts that value into r->bytes_sent, and then sets the
Content-Length header. The problem is that there appears to be only one
bucket, and the length of that bucket is (actual_filesize - 4gig) for
any file greater than 4gig. Where should the dividing up of the whole
file into smaller buckets happen?

Brad

Brad Nicholes
Senior Software Engineer
Novell, Inc., the leading provider of Net business solutions
http://www.novell.com 

>>> Cliff Woolley <jwoolley@virginia.edu> Wednesday, December 17, 2003
10:09:25 AM >>>
On Wed, 17 Dec 2003, Brad Nicholes wrote:

> incompatibilities in the bucket code.  There are a number of places
> where file lengths are defined as apr_size_t rather than apr_off_t.
> What is the downside of redefining these variables as apr_off_t (ie.
> off64_t rather than off_t)?

We went back and forth on that a lot when writing it.  (Greg Stein and
Bill Rowe did most of the back-and-forth I think, but whatever.  ;)
The way it's supposedly written now is that a brigade can contain as
much as an apr_off_t's worth of data, but a single bucket can contain
only an apr_size_t's worth.  If you have a large file, you're supposed
to split it into multiple file buckets and chain them together.  One
stated reason for that is that funcs like sendfile() only take a
size_t's worth of data at a time anyway.  AFAIK the main reason was
that some buckets just can't hold more than a size_t of data no matter
what you do, eg a heap bucket on a 32-bit machine, so to be consistent
they're all capped at that size.

> I guess the other question would be, is this even an issue? Do users
> expect to be able to download a file larger than 4gig or even 2gig?

I should certainly think so.  It's been a very big issue for me at
times, since graphics people like myself tend to toss around huge
files.  :)

--Cliff
