apr-dev mailing list archives

From "William A. Rowe, Jr." <wr...@apache.org>
Subject Re: File buckets and downloading files larger than 4gig...
Date Wed, 17 Dec 2003 18:06:06 GMT
At 11:30 AM 12/17/2003, Brad Nicholes wrote:
>   Buckets being restricted to a size_t is kind of what I expected.  So
>here is what I am seeing and maybe you can help me work through this. 
>In  ap_content_length_filter() the code attempts to add up all the
>lengths of all of the buckets and put that value into  r->bytes_sent
>before setting the content-length header.  The problem is that there
>appears to be only one bucket and the length of that bucket is
>(actual_filesize - 4gig) for any file greater than 4gig.  Where should
>the dividing up of the whole file into smaller buckets happen?

The code is in core.c default_handler():

        bb = apr_brigade_create(r->pool, c->bucket_alloc);
#if APR_HAS_SENDFILE && APR_HAS_LARGE_FILES
        if ((d->enable_sendfile != ENABLE_SENDFILE_OFF) &&
            (r->finfo.size > AP_MAX_SENDFILE)) {
            /* APR_HAS_LARGE_FILES issue; must split into multiple buckets,
             * no greater than MAX(apr_size_t), and more granular than that
             * in case the brigade code/filters attempt to read it directly.
             */
            apr_off_t fsize = r->finfo.size;
            e = apr_bucket_file_create(fd, 0, AP_MAX_SENDFILE, r->pool,
                                       c->bucket_alloc);
            while (fsize > AP_MAX_SENDFILE) {
                apr_bucket *ce;
                apr_bucket_copy(e, &ce);
                APR_BRIGADE_INSERT_TAIL(bb, ce);
                e->start += AP_MAX_SENDFILE;
                fsize -= AP_MAX_SENDFILE;
            }
            e->length = (apr_size_t)fsize; /* Resize just the last bucket */
        }
        else
#endif
            e = apr_bucket_file_create(fd, 0, (apr_size_t)r->finfo.size,
                                       r->pool, c->bucket_alloc);

Now the expectation was that only the sendfile API needed the file chunked
apart into multiple sendfile steps.  It's pretty obvious that isn't enough of
a solution...

So perhaps the next question is: should some bucket types allow more than a
size_t's worth of content (as sendfile can handle), and should this splitting
behavior move either into the sendfile function, or into apr_brigade_read()?

Wide open to ideas; it seems both you and Cliff have real-life applications
to stress this :)

Bill 

