apr-dev mailing list archives

From Cliff Woolley <jwool...@virginia.edu>
Subject Re: File buckets and downloading files larger than 4gig...
Date Wed, 17 Dec 2003 18:58:11 GMT
On Wed, 17 Dec 2003, Brad Nicholes wrote:

> before setting the content-length header.  The problem is that there
> appears to be only one bucket and the length of that bucket is
> (actual_filesize - 4gig) for any file greater than 4gig.

Weird, but I can believe it.
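
For what it's worth, the arithmetic there smells like a 32-bit
truncation: if the bucket length is stored in a 32-bit apr_size_t while
the file size is a 64-bit apr_off_t, the assignment wraps modulo 2^32,
which comes out to exactly (actual_filesize - 4gig) for any file between
4GB and 8GB.  A minimal standalone sketch of that suspected failure mode
(the variable names are made up for illustration):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* a 5GB file on a platform where apr_off_t is 64 bits (LFS) */
        int64_t  actual_filesize = (int64_t)5 * 1024 * 1024 * 1024;

        /* ...assigned to a 32-bit length field: silently wraps mod 2^32 */
        uint32_t bucket_length   = (uint32_t)actual_filesize;

        printf("file size:     %lld\n", (long long)actual_filesize);
        printf("bucket length: %lu\n",  (unsigned long)bucket_length);
        /* prints 1GB, i.e. actual_filesize - 4gig */
        return 0;
    }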

> Where should the dividing up of the whole file into smaller buckets
> happen?

Right now it's supposed to be happening in the handler.  I've always hated
that.  I think it'd be much cooler if this could be handled inside the
buckets code, but I don't know offhand where the right place for that
would be.
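
For reference, the handler-side splitting amounts to something like the
sketch below.  This is a rough illustration only -- the helper name and
the chunk-size constant are mine, and it assumes bucket lengths are
limited to what fits in a 32-bit apr_size_t:

    #include "apr_buckets.h"

    /* largest length that fits in a 32-bit apr_size_t (assumption) */
    #define MAX_BUCKET_SIZE ((apr_size_t)0xffffffff)

    static void append_file_buckets(apr_bucket_brigade *bb, apr_file_t *fd,
                                    apr_off_t total_len, apr_pool_t *p)
    {
        apr_off_t offset = 0;

        /* chain one file bucket per <=4GB slice of the file */
        while (total_len > 0) {
            apr_size_t chunk = (total_len > (apr_off_t)MAX_BUCKET_SIZE)
                               ? MAX_BUCKET_SIZE : (apr_size_t)total_len;
            apr_bucket *b = apr_bucket_file_create(fd, offset, chunk, p,
                                                   bb->bucket_alloc);
            APR_BRIGADE_INSERT_TAIL(bb, b);
            offset    += chunk;
            total_len -= chunk;
        }
    }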

Can't do it in apr_bucket_file_create() by having it create multiple
buckets and chaining them, because the bucket creation operation is
semantically restricted to creating only /one/ bucket, and violating that
would cause all kinds of macro ops to be broken.
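
To make the macro problem concrete: the standard idiom everywhere is

    apr_bucket *b = apr_bucket_file_create(fd, 0, len, p,
                                           bb->bucket_alloc);
    APR_BRIGADE_INSERT_TAIL(bb, b);

and the ring insertion macros link in exactly one element, so if the
create function handed back the head of a chain, everything after the
first bucket would be silently dropped from the brigade.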

Can't do it in file_bucket_read() by having a single file bucket contain
the whole large file and just pretend it has only 4GB of data, or by
setting e->length == -1 (size unknown), because either way
apr_brigade_length() would do the wrong thing [return an incorrect brigade
length, or have to actually read the file into memory (or mmap it),
respectively, both of which are Badness].
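
To spell out the apr_brigade_length() half of that: with its read_all
flag set, it has to read() every bucket of indeterminate length just to
find out how big it is, so a file bucket with e->length == -1 would get
morphed into heap/mmap buckets -- the whole file pulled through memory
before a single byte goes out:

    apr_off_t     len;
    apr_status_t  rv;

    /* read_all == 1: any bucket whose length is unknown gets read */
    /* just so the brigade's total size can be computed            */
    rv = apr_brigade_length(bb, 1, &len);

And with read_all == 0 it would just report the length as unknown, which
doesn't help a handler that needs to set Content-Length.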

I'll try to think of an alternative, but suggestions are welcome.

--Cliff
