cassandra-user mailing list archives

From Ran Tavory <ran...@gmail.com>
Subject Re: Nodes getting slowed down after a few days of smooth operation
Date Mon, 11 Oct 2010 22:09:06 GMT
Thanks Peter, Robert and Brandon.
So it seems the only suspect so far is my excessive caching ;)
I'll take a closer look at the GC activity next time things start to go wrong,
but in the meantime, regarding the cache size (Cassandra's internal cache):
the row cache capacity is set to 10,000,000. I actually wanted to set it to 100%,
but at the time there was a bug that interpreted 100% as just 1 row, so I used
10M instead.
My motivation was that since I don't have much data (10 GB per node), why not
cache the hell out of it? So I started with a cache size of 100% and a much
larger heap (initially 12 GB out of the 16 GB of RAM). Over time I've learned
that too much heap for the JVM is like a kid in a candy shop: it'll eat as much
as it can and then throw up (the throwing up being GC storms), so I kept
lowering the max heap until I reached 6 GB (with 4 GB I ran OOM, BTW).
So now I have a row cache capacity of effectively 100%, a heap size of 6 GB and
10 GB of data, and I wonder why the heap doesn't explode.
Well, as it turns out, although I have 10 GB of data on each node, the effective
size of the row cache is only about 681 * 2377203 = 1.6 GB:

                Key cache: disabled
                Row cache capacity: 10000000
                Row cache size: 2377203
                Row cache hit rate: 0.7017551635100059
                Compacted row minimum size: 392
                Compacted row maximum size: 102961
                Compacted row mean size: 681
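
For reference, here is that back-of-the-envelope calculation spelled out (a
throwaway snippet, not anything Cassandra provides; it ignores per-entry object
and bookkeeping overhead, so the real heap cost is higher):

    // Throwaway calculation of the row cache's approximate heap footprint,
    // using the two cfstats numbers above. Not a Cassandra API, just arithmetic.
    public class RowCacheFootprint {
        public static void main(String[] args) {
            long cachedRows   = 2_377_203L; // "Row cache size" (rows currently cached)
            long meanRowBytes = 681L;       // "Compacted row mean size" (bytes)

            double approxGB = cachedRows * meanRowBytes / 1e9;
            System.out.printf("~%.2f GB of row data in the cache%n", approxGB);
            // Prints ~1.62 GB. Actual heap usage is higher once you add Java object
            // headers, the keys and the cache's own bookkeeping structures.
        }
    }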

This strengthens what both Peter and Brandon suggested: the row cache is
generating too much GC because entries get invalidated too frequently.
That's certainly possible, so I'll try a 50% row cache size on one of the nodes
(and wait about a week...) and see what happens. If this proves to be the
answer, then my dream of "I have so little data and so much RAM, why not cache
the hell out of it" isn't going to come true, because too much of the row cache
gets invalidated and hence GCed, which creates too much overhead for the JVM.
(Well, at least I was getting nice read performance while it lasted ;)
If this is true, then how would you recommend sizing the row cache for maximum
utility and minimum GC overhead?
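
One possible approach (just my own sketch, nothing official; the helper name
and the 2x overhead fudge factor are made up for illustration) would be to
derive the capacity from a heap budget and the mean row size, rather than from
the total row count:

    // Hypothetical sizing helper: pick a row cache capacity (number of rows) from
    // a heap budget, rather than trying to cache everything. The 2.0 overhead
    // factor is a guess to cover object headers, keys and cache bookkeeping.
    public class RowCacheSizer {

        static long suggestedCapacity(long heapBudgetBytes, long meanRowBytes, double overheadFactor) {
            return (long) (heapBudgetBytes / (meanRowBytes * overheadFactor));
        }

        public static void main(String[] args) {
            long budget = 2L * 1024 * 1024 * 1024;             // say, ~2 GB of a 6 GB heap
            long rows   = suggestedCapacity(budget, 681, 2.0); // 681 = mean compacted row size
            System.out.println("row cache capacity ~ " + rows); // roughly 1.6M rows
        }
    }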

I've pasted a log snippet from one of the servers while it was at high CPU and
GCing: http://pastebin.com/U1cszFKv
You can see a large number of pending reads as well as other pending tasks
(response stage and consistency manager).
GC runs every 20-40 seconds, and each run takes up almost that entire 20-40
second window. I'm not sure what to make of the other numbers, such as: GC
for ConcurrentMarkSweep: 22742 ms, 181335192 reclaimed leaving 6254994856
used; max is 6552551424
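
Doing the arithmetic on that GCInspector line (again just a throwaway snippet):
CMS reclaimed only ~181 MB and left ~6.25 GB of a ~6.55 GB heap in use, so the
live set is around 95% of the heap, which would explain why the collector runs
almost back to back.

    // Plugging the GCInspector numbers above into some simple arithmetic.
    public class GcHeadroom {
        public static void main(String[] args) {
            long reclaimed = 181_335_192L;   // bytes reclaimed by ConcurrentMarkSweep
            long usedAfter = 6_254_994_856L; // bytes still in use after the collection
            long maxHeap   = 6_552_551_424L; // max heap (~6.1 GiB)

            System.out.printf("reclaimed ~%.0f MB, live set ~%.1f%% of max heap%n",
                    reclaimed / 1e6, 100.0 * usedAfter / maxHeap);
            // Prints: reclaimed ~181 MB, live set ~95.5% of max heap
        }
    }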

Thanks!

On Mon, Oct 11, 2010 at 7:42 PM, Peter Schuller <peter.schuller@infidyne.com> wrote:

> > 170141183460469231731687303715884105727
> > 192.168.252.88    Up    10.07 GB
>
> Firstly, I second the point raised about the row cache size (very
> frequent concurrent GCs are definitely an indicator that the JVM heap
> size is too small, and the row cache seems like a likely contender -
> especially given that you say it builds up over days). Note though
> that you have to look at the GCInspector's output with respect to the
> concurrent mark/sweep GC phases to judge the live set in your heap,
> rather than system memory. Attaching with jconsole or visualvm to the
> JVM will also give you a pretty good view of what's going on. Look for
> the heap usage as it appears after one of the "major" dips in the
> graph (not the regular sawtooth dips, which are young generation
> collections and won't help indicate actual live set).
>
> That said, with respect to caching effects: Your total data size seems
> to be about in the same ballpark as memory. Your maximum heap size is
> 6 gig; on a 16 gig machine, taking into account various overhead, maybe
> you've got something like 8 GB for buffer cache? It doesn't sound
> strange at all that there would be a significant difference between a
> 32 GB machine and a 16 GB machine given your ~ 10 GB data size given
> that buffer cache size goes from "slightly below data size" to "almost
> three times data size". Especially when major or almost-major
> compactions are triggered; on the small machine you would expect to
> evict everything from cache during a compaction (except that touched
> *during* the compaction) while on the larger machine the newly written
> sstables effectively fit the cache too.
>
> But note that these are two pretty different conditions; the first is
> about making sure your JVM heap size is appropriate. The second can be
> tested for by observing I/O load (iostat -x -k 1) and correlating with
> compactions. So e.g., what's the average utilization and queue size in
> iostat just before a compaction vs. just after it? That difference
> should be due to cache eviction (assuming you're not servicing a
> built-up backlog). There is also the impact of compaction itself, as
> it is happening, and the I/O it generates. In general, the higher your
> disk load is prior to compaction, the less margin there is to deal
> with compaction happening concurrently.
>
> In general, whether or not you are willing to make the assumption that
> actively used data fits in RAM will severely affect the hardware
> requirements for serving your load.
>
> --
> / Peter Schuller
>



-- 
/Ran
