jakarta-jcs-dev mailing list archives

From "Schwarz, Peter" <peter.schw...@objectfx.com>
Subject RE: High Data Volume Issues
Date Wed, 19 Jul 2006 19:15:44 GMT
> How many items do you expect?  

We're expecting, on the high end, millions of items. 

> 100GB is a tremendous amount of data to cache.  I'm
> caching millions of items using the MySQL disk cache,
> but the items are mostly under 10k and I don't
> typically go over 2 gb.  

This depends on how your test is written.  One thing we noticed is that the
cache drops writes (while items are still in purgatory) if you don't wait
long enough for the flush to disk.  For 2 million items of 16k each, we
ended up with a cache size of 33GB, which was in line with the computed size
for the cache.

I'm not sure what MySQL would be doing to keep that size down. 
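For reference, the computed size mentioned above can be sanity-checked with a quick back-of-the-envelope calculation.  This is just a sketch of the arithmetic (it assumes no per-item overhead in the data file, which is why the real file came out slightly larger at 33GB):

```java
// Sanity check of the expected IndexedDiskCache data file size once
// purgatory has fully flushed. Figures are from the test described above;
// assumes zero per-item overhead on disk.
public class CacheSizeEstimate {
    public static void main(String[] args) {
        long items = 2000000L;
        long itemBytes = 16 * 1024L;           // 16k per serialized item
        long totalBytes = items * itemBytes;   // raw payload on disk
        double gb = totalBytes / 1e9;          // decimal gigabytes
        System.out.printf("expected data file size: %.1f GB%n", gb);
        // expected data file size: 32.8 GB
    }
}
```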

> It keeps the keys and the file offset in memory, 
> so it is not suitable for lots of items, but 
> it can handle fewer large items.  

As for the memory limitations, the key set is actually not too bad, since
we're using pretty small keys (Integers for the tests).  With a 512M VM
heap, this wasn't an issue.

> The optimization routine is fairly crude.   

Using a database isn't an option for us (I know, it sounds strange, but hey,
sometimes the powers that be make certain calls...), so we're planning to
make changes to the defrag/recycle portions of the IndexedDiskCache.  We'd
like to submit the changes as a contribution.  What would the process be for
that?
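To make the recycle idea concrete, here is an illustrative sketch (not JCS's actual code) of one way freed disk slots could be tracked and reused: removals park their slot in a bin keyed by size, and a new write first tries the smallest freed slot that fits, appending at end-of-file only when none does.  A periodic defrag pass would then rewrite live items contiguously and clear the bin.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Map;
import java.util.TreeMap;

// Illustrative recycle-bin sketch for freed disk slots, keyed by slot size.
// This is a hypothetical design, not the IndexedDiskCache implementation.
public class RecycleBin {
    // slot size in bytes -> offsets of freed slots of exactly that size
    private final TreeMap<Integer, Deque<Long>> free = new TreeMap<>();

    /** Records a freed slot so a later write can reuse it. */
    public void release(long offset, int size) {
        free.computeIfAbsent(size, s -> new ArrayDeque<>()).push(offset);
    }

    /** Returns a reusable offset for 'size' bytes, or -1 to append at EOF. */
    public long acquire(int size) {
        // smallest freed slot large enough for the new item
        Map.Entry<Integer, Deque<Long>> e = free.ceilingEntry(size);
        if (e == null) {
            return -1L;
        }
        long offset = e.getValue().pop();
        if (e.getValue().isEmpty()) {
            free.remove(e.getKey());
        }
        return offset;
    }
}
```

The best-fit lookup via `TreeMap.ceilingEntry` keeps reuse O(log n) per write; the trade-off is internal fragmentation when a small item lands in a larger freed slot, which is what the defrag pass would clean up.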

Cheers, 

Peter

---------------------------------------------------------------------
To unsubscribe, e-mail: jcs-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: jcs-dev-help@jakarta.apache.org

