trafficserver-users mailing list archives

From John Plevyak <jplev...@gmail.com>
Subject Re: any sort of 32bit limitation preventing large file caches?
Date Tue, 24 Apr 2012 03:42:54 GMT
Hmm... it would be easy to enlarge the aggregation buffer or to disable the
fragment offsets in the header. The former is probably the more powerful
solution: if you are serving huge documents, you probably want the ability
to restart/seek, and you likely also have significant memory on a per-disk
basis. We could make it configurable.

john
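
For concreteness, here is a minimal sketch of the arithmetic behind the
limit discussed in the quoted message below. The 4M aggregation buffer, 1M
fragment size, and 8-byte fragment-offset entries come from the thread; the
combined sizeof(Doc) + hdr_len term is a rough placeholder, not the exact
value from the ATS source.

    // Back-of-the-envelope estimate of the largest cacheable object,
    // following the formula from the thread. All constants here are
    // illustrative assumptions, not exact ATS source values.
    #include <cstdint>
    #include <cstdio>

    int main() {
        const std::uint64_t MB = 1024 * 1024;

        const std::uint64_t agg_buf_size = 4 * MB; // aggregation buffer: first fragment must fit here
        const std::uint64_t frag_size    = 1 * MB; // assumed data size of each non-first fragment
        const std::uint64_t frag_entry   = 8;      // sizeof(Frag): one 8-byte offset per fragment
        const std::uint64_t doc_plus_hdr = 4096;   // placeholder for sizeof(Doc) + hdr_len

        // Each fragment after the first needs an offset entry in the first
        // fragment's header, so the whole offset table must fit in agg_buf.
        const std::uint64_t max_frags  = (agg_buf_size - doc_plus_hdr) / frag_entry;
        const std::uint64_t max_object = max_frags * frag_size;

        std::printf("max fragments: %llu, max object: ~%llu MB\n",
                    (unsigned long long)max_frags,
                    (unsigned long long)(max_object / MB));
        return 0;
    }

Under these assumptions, the fragment-offset table, which must fit in the
first fragment alongside the Doc header, is what caps the object size;
enlarging the aggregation buffer raises the fragment budget proportionally,
which is why it is the more general fix.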

On Mon, Apr 23, 2012 at 7:04 PM, taorui <weilogster@126.com> wrote:

> Hmmm, the maximum size of an object that can be cached is not spelled
> out explicitly, but we can estimate it from the source (assume every
> fragment except the first is 1M).
>
> agg_buf is 4M, which means the first fragment cannot exceed 4M. The
> maximum number of fragments an object can have is then
> (4M - sizeof(Doc) - hdr_len) / sizeof(Frag), so the largest object that
> can be cached is ((4M - sizeof(Doc) - hdr_len) / 8) * 1M, which cannot
> exceed 5G.
>
> On Mon, 2012-04-23 at 15:55 -0700, Bruce Lysik wrote:
> > Hi folks,
> >
> > Is there any sort of internal limitation on the size of an object
> > that can be cached? We are seeing 1.5 GB objects cached fine, but a
> > 5.5 GB object doesn't seem to be cached.
> >
> > Thanks in advance.
> >
> > --
> > Bruce Z. Lysik <blysik@yahoo.com>
>
