trafficserver-dev mailing list archives

From Leif Hedstrom <>
Subject Re: partitions
Date Thu, 05 Nov 2009 01:42:16 GMT
On Nov 4, 2009, at 4:24 PM, Belmon, Stephane wrote:

> Hello YTS folks,
> Yesterday (or was that on IRC?) I think it was stated that once
> around the ring, a partition gets dropped (rather than compacted, I
> assume). Could you elaborate a bit on how the cache actually reclaims
> space in the current version? (I'll take my answer off the air ;-),
> especially if you want to compare and contrast with how it used to
> work, but I thought more people could be interested.)

The disk cache is divided into "blocks" of 8GB each. On top of each
8GB chunk, the allocated RAM cache gets split up, with an LRU on top
of that (so frequently accessed objects in an 8GB chunk get served
out of the RAM cache). In addition, there's an in-memory index for all
the "slots" in the disk cache.
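The per-chunk RAM cache behavior described above can be sketched as a plain LRU. This is a toy illustration of the idea, not ATS code; the class name and interface are invented:

```python
from collections import OrderedDict

class RamCacheLRU:
    """Toy LRU: hot objects from one disk chunk served out of RAM."""
    def __init__(self, capacity):
        self.capacity = capacity      # max number of cached objects
        self.entries = OrderedDict()  # key -> object body

    def get(self, key):
        if key not in self.entries:
            return None               # miss: would be read from disk
        self.entries.move_to_end(key) # mark as most recently used
        return self.entries[key]

    def put(self, key, body):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = body
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used
```

Frequently fetched keys stay near the "recent" end and survive; cold keys fall off and get re-read from the disk chunk on the next request.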

As you write to the disk cache, you consume space from the first (or
current) 8GB block. Once it fills up, we move on to the next one.
Once the last 8GB block fills up, we start over from the beginning
again, freeing the first 8GB chunk in one action. This means cache
eviction is done 8GB at a time.
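The cycling write cursor can be modeled in a few lines. This is a hedged sketch of the scheme as described above (fixed-size blocks, whole-block eviction on wraparound), with made-up names and sizes, not the actual implementation:

```python
class CyclicDiskCache:
    """Toy model: a write cursor cycles over fixed-size blocks,
    evicting an entire block at once when it wraps around."""
    def __init__(self, num_blocks, block_size):
        self.block_size = block_size
        self.blocks = [[] for _ in range(num_blocks)]  # keys per block
        self.current = 0   # index of the active write block
        self.used = 0      # bytes consumed in the current block

    def write(self, key, size):
        if self.used + size > self.block_size:
            # current block is full: advance, wrapping to the start
            self.current = (self.current + 1) % len(self.blocks)
            # free the whole block in one action (no compaction)
            self.blocks[self.current] = []
            self.used = 0
        self.blocks[self.current].append(key)
        self.used += size
```

Note there is no per-object eviction decision anywhere: reclaiming space is just resetting one block, which is what keeps the design simple and fast.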

The reason for this design was simplicity and speed. There is no
metadata involved in managing the disk cache. The in-memory footprint
for the disk cache is (if I recall) 8 bytes per object, and this is
preallocated based on the configured average object size.


This is 8000 (bytes) by default. You can reduce the amount of memory
consumed by the in-memory index by increasing this value, but that
also means you can store fewer objects. If all your objects are, say,
32k or larger, you are definitely better off setting it higher.
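The trade-off works out to simple arithmetic: the number of preallocated index slots is roughly cache size divided by the average object size, at 8 bytes per slot. The 1TB cache size below is a hypothetical example, not anything from the message:

```python
def index_memory_bytes(cache_bytes, avg_object_size, bytes_per_entry=8):
    """Rough index footprint: one fixed-size entry per preallocated slot."""
    slots = cache_bytes // avg_object_size
    return slots * bytes_per_entry

# Hypothetical 1TB cache, default 8000-byte average object size:
one_tb = 1_000_000_000_000
default = index_memory_bytes(one_tb, 8000)    # 125M slots -> 1 GB of RAM
larger  = index_memory_bytes(one_tb, 32_000)  # 4x fewer slots -> 250 MB
```

Raising the average from 8000 to 32000 cuts the index memory by 4x, but also caps the cache at a quarter as many objects, which is only a win when your objects really are that large.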

I hope I haven't got any of this wrong, and hopefully it makes sense. :)

-- leif
