cassandra-user mailing list archives

From Benjamin Black <...@b3k.us>
Subject Re: 0.7 memory usage problem
Date Sun, 26 Sep 2010 03:01:17 GMT
Looking further, I would expect your 36000 writes/sec to trigger a
memtable flush every 8-9 seconds (which is already crazy), but you are
actually flushing them every ~1.7 seconds, leading me to believe you
are writing a _lot_ faster than you think you are.
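A quick back-of-the-envelope check of that estimate (a sketch in Python; the 300,000-operation threshold is the assumed 0.7 default, which lines up with the 314640-operation flush in the quoted log below):

```python
# Rough arithmetic behind the flush-interval estimate.
# Assumption: the default 0.7 memtable operations threshold of
# ~0.3 million ops (the quoted flush below happened at 314640 ops).
OPS_THRESHOLD = 300_000
claimed_rate = 600 * 60          # 600 rows/sec * 60 columns/row = 36000 ops/sec

expected_interval = OPS_THRESHOLD / claimed_rate   # ~8.3 s: the "8-9 seconds"
observed_interval = 1.7                            # seconds, from the log timestamps
implied_rate = OPS_THRESHOLD / observed_interval   # ops/sec actually arriving

print(f"expected flush every {expected_interval:.1f}s, "
      f"implied rate ~{implied_rate:,.0f} ops/sec")
```

At the observed ~1.7 s flush interval, the implied incoming rate is roughly 176,000 ops/sec, about five times the claimed 36,000.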

 INFO [ROW-MUTATION-STAGE:21] 2010-09-24 13:13:23,203
ColumnFamilyStore.java (line 422) switching in a fresh Memtable for
HiFreq at CommitLogContext(file='C:\Cassandra\Cass07\commitlog\CommitLog-1285358848765.log',
position=13796967)
 INFO [ROW-MUTATION-STAGE:4] 2010-09-24 13:13:25,171
ColumnFamilyStore.java (line 422) switching in a fresh Memtable for
HiFreq at CommitLogContext(file='C:\Cassandra\Cass07\commitlog\CommitLog-1285358848765.log',
position=29372124)
 INFO [ROW-MUTATION-STAGE:8] 2010-09-24 13:13:26,937
ColumnFamilyStore.java (line 422) switching in a fresh Memtable for
HiFreq at CommitLogContext(file='C:\Cassandra\Cass07\commitlog\CommitLog-1285358848765.log',
position=44950820)
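The commitlog positions in those three lines also let you derive the sustained byte rate directly (a small sketch over the timestamps and positions copied from the log above):

```python
from datetime import datetime

# Timestamps and commitlog positions copied from the three log lines above.
samples = [
    ("13:13:23.203", 13796967),
    ("13:13:25.171", 29372124),
    ("13:13:26.937", 44950820),
]

ts = [datetime.strptime(t, "%H:%M:%S.%f") for t, _ in samples]
rates = []
for i in range(1, len(samples)):
    secs = (ts[i] - ts[i - 1]).total_seconds()
    rates.append((samples[i][1] - samples[i - 1][1]) / secs)
    print(f"{rates[-1] / 1e6:.1f} MB/s over {secs:.2f}s")
# Roughly 8 MB/s of mutations hitting the commitlog, sustained.
```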


b

On Sat, Sep 25, 2010 at 7:53 PM, Benjamin Black <b@b3k.us> wrote:
> The log you posted shows _10_ pending in the MEMTABLE-POST-FLUSHER stage, and the errors show
> repeated failures trying to flush memtables at all:
>
>  INFO [GC inspection] 2010-09-24 13:16:11,281 GCInspector.java (line
> 156) MEMTABLE-POST-FLUSHER             1        10
>
> You are also flushing _really_ small memtables to disk (looks to be
> triggered by the default ops threshold):
>
>  INFO [FLUSH-WRITER-POOL:1] 2010-09-24 12:55:27,296 Memtable.java
> (line 150) Writing Memtable-HiFreq@741540175(15105576 bytes, 314640
> operations)
>
> Based on what you said initially:
>
> "600 row (60 columns per row) per second, ~3K size rows"
>
> If that is so, you are writing 36000 columns per second to a single
> machine (why are you not distributing the client load across the
> cluster, as is best practice?).  If your RF is 3 on your 3 node
> cluster, every node is taking every write, so you are trying to
> maintain 36000 writes per second per node.  Even with a dedicated
> (spinning media) commitlog drive, you can't possibly keep up with
> that.
>
> What is your disk setup?
>
> What CL are you using for these writes?
>
> Can you post your client code for doing the writes?
>
> It is odd that you are able to do 36000/sec _at all_ unless you are
> using CL.ZERO, which would quickly lead to OOM.
>
>
> b
>
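To make the replication arithmetic in the quoted message concrete (a sketch using the figures stated above):

```python
# With RF equal to the node count, distributing clients across the
# cluster does not reduce per-node write volume: every node owns a
# replica of every write.
client_writes_per_sec = 600 * 60   # 600 rows/sec * 60 columns/row
replication_factor = 3
nodes = 3

per_node_rate = client_writes_per_sec * replication_factor / nodes
print(per_node_rate)   # every node sustains the full 36000 writes/sec
```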
