apr-dev mailing list archives

From "William A. Rowe, Jr." <wr...@rowe-clan.net>
Subject Re: lengths - brigades v.s. buckets.
Date Mon, 16 Jul 2001 17:07:29 GMT
From: "Bill Stoddard" <bill@wstoddard.com>
Sent: Friday, July 13, 2001 7:10 AM


> > Ok, I'm back to fixing all the 64 bit off_t discrepancies in APR/Apache.
> >
> > Can we basically agree that a "Bucket" can never be bigger than apr_ssize_t?
>
> Is the bucket backed by RAM?  If so, then I agree.  File buckets that can be sent down
> the chain for use by sendfile should not have this restriction.  If you need, for
> whatever reason, to MMAP or read in the file, then sure, apr_ssize_t is a reasonable
> upper limit (we'll set the actual limit much lower in practice).

The bigger issue is converting buckets from one type to another.  A brigade operation can
always insert extra buckets if necessary.  A bucket is a singleton, so _if_ a bucket must
be convertible to another type of bucket, the two types can't have disparate size limits.
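To illustrate (a rough sketch only -- the apr_brigade_create()/apr_bucket_file_create()
argument lists are from memory, and CHUNK_MAX, fd, file_len and p are placeholders):
a brigade can describe more than 2^31 bytes simply by holding several buckets, each of
which stays within apr_size_t.

    /* Sketch: cover a huge file with several file buckets, each no
     * larger than a chunk limit that comfortably fits in apr_size_t.
     */
    #define CHUNK_MAX (1024 * 1024 * 1024)    /* 1GB, well under 2^31 */

    apr_bucket_brigade *bb = apr_brigade_create(p);
    apr_off_t remaining = file_len;           /* may exceed 2^31 */
    apr_off_t offset = 0;

    while (remaining > 0) {
        apr_size_t chunk = (remaining > CHUNK_MAX)
                               ? CHUNK_MAX : (apr_size_t)remaining;
        apr_bucket *e = apr_bucket_file_create(fd, offset, chunk, p);
        APR_BRIGADE_INSERT_TAIL(bb, e);
        offset    += chunk;
        remaining -= chunk;
    }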

> > I've no problems with using apr_off_t for the length of a full Brigade itself.
> > That means we can split a brigade on any apr_off_t, but would only need to
> > split a bucket on an apr_ssize_t.  It implies a 'Pipe' bucket can't generate
> > more than 2^31 bytes without breaking the code.
> 
> I don't follow the comment about a pipe bucket.  Sure, if you attempt to buffer the
> entire pipe, there is a limit, and 2^31 is not an unreasonable limit.  In practice, we
> would never attempt to buffer this much.

Ack.  It goes to the size argument.  If you are doing a _brigade_ read, then the size remains
undefined.  If you convert it to a bucket, it's trapped into the 2^31 restriction.
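Sketching that distinction (treat the signatures as illustrative -- bb and e stand for an
existing brigade and bucket, and I'm assuming a helper along the lines of
apr_brigade_length()):

    /* The brigade's total length can be reported as an apr_off_t ... */
    apr_off_t total;
    apr_brigade_length(bb, 1, &total);       /* may exceed 2^31 */

    /* ... but reading a single bucket hands back its data with an
     * apr_size_t length, so anything forced through one bucket is
     * capped at 2^31 on a 32-bit box.
     */
    const char *data;
    apr_size_t len;
    apr_bucket_read(e, &data, &len, APR_BLOCK_READ);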

> > This means a huge file would need to be split by the caller into multiple file
> > buckets, no longer than ssize_t.  Is this reasonable?
>
> Yes, provided this in no way implies that you cannot have a file_bucket that references
> an open fd to a file of arbitrary size.

Well, if you leave the size undefined (-1), then you are fine.  If you attempt to convert
it or determine its length, then we are messed up.
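In other words (sketch only, with e standing for some existing bucket; I'm leaning on the
convention of (apr_size_t)(-1) meaning "length unknown"):

    if (e->length == (apr_size_t)(-1)) {
        /* Length unknown: nothing has promised this data fits in an
         * apr_ssize_t, so the fd can pass straight through to
         * sendfile-style handling.
         */
    }
    else {
        /* The moment we convert the bucket or pin down its length, that
         * length lives in an apr_size_t, and the 2^31 limit applies on
         * 32-bit platforms.
         */
    }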


From: "Roy T. Fielding" <fielding@ebuilt.com>
Sent: Thursday, July 12, 2001 11:54 PM


> > This means a huge file would need to be split by the caller into multiple file
> > buckets, no longer than ssize_t.  Is this reasonable?
> 
> Wouldn't that make it difficult to call sendfile on a file bucket that
> points to a huge file?


I question whether sendfile() called on a file that size would even succeed, rather than
crash, on most largefile/sendfile-compatible systems :-)




