trafficserver-users mailing list archives

From Leif Hedstrom <zw...@apache.org>
Subject Re: Optimizing ATS For Forward-Only Proxy Caching
Date Mon, 25 Nov 2013 11:06:57 GMT
On Nov 25, 2013, at 3:22 AM, Adam W. Dace <colonelforbin74@gmail.com> wrote:

> Once more, thanks for your response.  I was planning on getting ahold of someone after
> the next release of ATS but hearing from you early is great.

Great! Yeah, we need these types of “Quick Starts” from real users, so keep ‘em coming.

> 
> 
> I wanted to mention this setting in particular first, since it's really so vital to the
> cache structure from what I understand.
> 
> Leif wrote:
> "10. Your setting for proxy.config.cache.min_average_object_size seems wrong. If your
average object size is 32KB, you should set this to, hem, 32KB :). However, to give some headroom,
my personal recommendation is to 2x the number of directory entries, so set the configuration
to 16KB.
> 
> The math is mostly correct, except the calculation for "Disk Cache Object Capacity”
> is in fact the max number of directory entries the cache can hold. Each object on disk consumes
> *at least* one directory entry, but can consume more (amc, what’s our current guideline
> here?). "
> 
> Just verifying here...the setting really is that simple?  Don't get me wrong, I'll start
> using it immediately, but are there any "gotchas"?  I appreciate your idea of having headroom
> but I'm really trusting the cache itself to simply expire old objects and perhaps be a bit
> suboptimal if my setting is wrong.

The headroom is on the directory entries; you have to have headroom here. If you run out of
directory entries, bad things happen :). And yes, the cache always expires “old” objects
as necessary, by definition. It’s a cyclone (or cyclical) cache: you simply write
until you wrap around, and then start writing over older objects. The simplicity is what makes
it both efficient (super low RAM overhead) and fast (quick disk access and no LRU management
overhead).
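To illustrate the cyclone idea, here is a toy sketch in Python. This is not the actual ATS implementation (which works on raw disk with a fixed-size directory), just a minimal model of the wrap-around write behavior: when the write cursor hits the end of the storage, it resets to zero and newer objects silently overwrite older ones, so no LRU bookkeeping is needed.

```python
# Toy model of a cyclone (cyclical) cache write path. Hypothetical code,
# NOT ATS internals -- it only demonstrates the wrap-around eviction idea.
class CycloneCache:
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.write_cursor = 0      # next byte offset to write at
        self.directory = {}        # key -> (offset, length)

    def write(self, key, data):
        size = len(data)
        if self.write_cursor + size > self.capacity:
            self.write_cursor = 0  # wrap around: start overwriting old data
        offset = self.write_cursor
        # Drop directory entries whose extents we are about to overwrite;
        # this is the "eviction" -- it falls out of the write path for free.
        self.directory = {
            k: (o, l) for k, (o, l) in self.directory.items()
            if o + l <= offset or o >= offset + size
        }
        self.directory[key] = (offset, size)
        self.write_cursor += size
        return offset

cache = CycloneCache(capacity_bytes=10)
cache.write("a", b"xxxx")
cache.write("b", b"yyyy")
cache.write("c", b"zzzz")   # wraps around, overwriting "a"
```

After the third write the cursor has wrapped, so "a" is gone from the directory while "b" and "c" remain: the oldest data is overwritten purely as a side effect of the circular write.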

> 
> What I don't understand is the internal structure of the cache, unfortunately it's just
> a big blob from my point of view.  :-)

The gotcha is that a URL can consume more than one directory entry. The calculation you are
doing gives the number of directory entries you have available. If the system runs out
of direntries, you basically can’t use all available storage. In a sense, it’s similar
to running out of I-nodes on a file system.

Our recommendation (someone, amc?, correct me) has been to allow for at least 2x the number
of directory entries you expect to store. So, if your average object size really is 32KB,
set the config to 16KB (it’s an inverse). The configuration is a clumsy way of specifying
how many directory entries you have available, nothing else. So with your math, the number
of directory entries is

	dir_ents = Total Disk Size / average_object_size

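Plugging in some numbers makes the inverse relationship concrete. The disk size below is a made-up example; the point is just that halving min_average_object_size doubles the direntry count, which is where the 2x headroom comes from.

```python
# Illustrative arithmetic only -- the 500 GB disk is a hypothetical example.
def directory_entries(total_disk_bytes, min_average_object_size):
    """Approximate number of directory entries the cache allocates."""
    return total_disk_bytes // min_average_object_size

disk = 500 * 1024**3        # hypothetical 500 GB cache disk
avg_object = 32 * 1024      # measured average object size: 32KB

# Setting the config to the true average gives one direntry per
# expected object -- no headroom if objects need extra entries.
exact = directory_entries(disk, avg_object)        # 16,384,000 entries

# Setting it to 16KB (half the real average) doubles the direntry
# count, giving the recommended 2x headroom.
padded = directory_entries(disk, avg_object // 2)  # 32,768,000 entries

assert padded == 2 * exact
```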

Alan, it’d be great to get a better understanding of how many directory entries someone
*really* needs to allocate. Since you know everything in the cache now, can you maybe write
something up, or explain it to us mere mortals?

Thanks,

— leif

