apr-dev mailing list archives

From "Brad Nicholes" <BNICHO...@novell.com>
Subject Re: File buckets and downloading files larger than 4gig...
Date Wed, 17 Dec 2003 19:40:42 GMT
    Thanks.  After doing some more digging, I also ran across this
chunk of code myself.  Casting the filesize to apr_size_t in the
#else part of the code was a dead giveaway.  I guess NetWare hit the
odd combination here in that we don't have sendfile but we do have
large files, and that is what was causing the problem.  I have
tweaked the #if statement just a little so that if large file support
is enabled, the file is still broken into smaller chunks.  I am
testing it now, but I don't think it is a complete solution yet.
This should still work even on a platform that has large files but
has enable_sendfile turned off.
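
Roughly, the tweak looks like this (just a sketch of what I am
testing, not the final patch) -- the guard drops the sendfile
requirement, so the split happens whenever large file support is
compiled in:

        bb = apr_brigade_create(r->pool, c->bucket_alloc);
#if APR_HAS_LARGE_FILES
        if (r->finfo.size > AP_MAX_SENDFILE) {
            /* Split into AP_MAX_SENDFILE-sized buckets exactly as
             * the code quoted below already does; only the #if/if
             * guard changes.
             */
            ...
        }
        else
#endif
            e = apr_bucket_file_create(fd, 0, (apr_size_t)r->finfo.size,
                                       r->pool, c->bucket_alloc);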

Brad

Brad Nicholes
Senior Software Engineer
Novell, Inc., the leading provider of Net business solutions
http://www.novell.com 

>>> "William A. Rowe, Jr." <wrowe@apache.org> Wednesday, December 17,
2003 11:06:06 AM >>>
At 11:30 AM 12/17/2003, Brad Nicholes wrote:
>   Buckets being restricted to a size_t is kind of what I expected.
>So here is what I am seeing and maybe you can help me work through
>this.
>
>In ap_content_length_filter() the code attempts to add up all the
>lengths of all of the buckets and put that value into r->bytes_sent
>before setting the content-length header.  The problem is that there
>appears to be only one bucket and the length of that bucket is
>(actual_filesize - 4gig) for any file greater than 4gig.  Where
>should the dividing up of the whole file into smaller buckets happen?

The code is in core.c default_handler():

        bb = apr_brigade_create(r->pool, c->bucket_alloc);
#if APR_HAS_SENDFILE && APR_HAS_LARGE_FILES
        if ((d->enable_sendfile != ENABLE_SENDFILE_OFF) &&
            (r->finfo.size > AP_MAX_SENDFILE)) {
            /* APR_HAS_LARGE_FILES issue; must split into multiple
             * buckets, no greater than MAX(apr_size_t), and more
             * granular than that in case the brigade code/filters
             * attempt to read it directly.
             */
            apr_off_t fsize = r->finfo.size;
            e = apr_bucket_file_create(fd, 0, AP_MAX_SENDFILE, r->pool,
                                       c->bucket_alloc);
            while (fsize > AP_MAX_SENDFILE) {
                apr_bucket *ce;
                apr_bucket_copy(e, &ce);
                APR_BRIGADE_INSERT_TAIL(bb, ce);
                e->start += AP_MAX_SENDFILE;
                fsize -= AP_MAX_SENDFILE;
            }
            e->length = (apr_size_t)fsize; /* Resize just the last bucket */
        }
        else
#endif
            e = apr_bucket_file_create(fd, 0, (apr_size_t)r->finfo.size,
                                       r->pool, c->bucket_alloc);
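
To make the split concrete, here is a tiny standalone illustration of
what that loop produces for a ~5 GB file (assuming AP_MAX_SENDFILE is
16 MiB, i.e. 2^24 -- check core.c for the real value):

#include <stdio.h>

#define AP_MAX_SENDFILE 16777216    /* assumed value: 16 MiB (2^24) */

int main(void)
{
    long long fsize = 5000000000LL;            /* a ~5 GB file */
    long long full  = fsize / AP_MAX_SENDFILE; /* full-size buckets */
    long long last  = fsize % AP_MAX_SENDFILE; /* the resized last one */
    /* Prints: 298 buckets of 16777216 bytes + 1 of 389632 bytes,
     * so no bucket length ever exceeds what an apr_size_t can hold. */
    printf("%lld buckets of %d bytes + 1 of %lld bytes\n",
           full, AP_MAX_SENDFILE, last);
    return 0;
}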

Now the expectation was that only the sendfile API needed a file
chunked apart into multiple sendfile steps.  Pretty obvious that
isn't enough of a solution...

So perhaps the next question is: should some bucket types allow more
than ssize_t worth of content (like sendfile), and should we move
this splitting behavior either into the sendfile function, or into
apr_brigade_read()?
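
For instance, the splitting above could be pulled out of
default_handler() into a brigade-level helper along these lines
(hypothetical name, untested sketch -- not an existing APR function):

#include "apr_buckets.h"

/* Append fd to bb as a chain of file buckets, none longer than
 * 'chunk', so no single bucket ever carries more than an apr_size_t
 * worth of length.  Returns the last bucket inserted.
 */
static apr_bucket *brigade_insert_file_split(apr_bucket_brigade *bb,
                                             apr_file_t *fd,
                                             apr_off_t length,
                                             apr_size_t chunk,
                                             apr_pool_t *p)
{
    apr_bucket *e = apr_bucket_file_create(fd, 0, chunk, p,
                                           bb->bucket_alloc);
    while (length > (apr_off_t)chunk) {
        apr_bucket *ce;
        apr_bucket_copy(e, &ce);
        APR_BRIGADE_INSERT_TAIL(bb, ce);
        e->start += chunk;
        length -= chunk;
    }
    e->length = (apr_size_t)length;  /* resize just the last bucket */
    APR_BRIGADE_INSERT_TAIL(bb, e);
    return e;
}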

Wide open to ideas; it seems like both you and Cliff have real-life
applications to stress this :)

Bill 

