cassandra-user mailing list archives

From "B. Todd Burruss" <bburr...@real.com>
Subject Re: Persistently increasing read latency
Date Thu, 03 Dec 2009 19:32:06 GMT
i do not have any pending tasks in the compaction pool, but i have 1164
files in my data directory.  one thing to note about my situation is
that i did run out of disk space during my test.  cassandra _seemed_ to
recover nicely.
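
fwiw, here is a quick sketch of how i count just the sstable data files
(each sstable is several files on disk).  the path below is an assumption
-- use whatever your <DataFileDirectory> points at:

  import java.io.File;

  // counts the *-Data.db files (one per live sstable) in a keyspace's
  // data directory
  public class CountDataFiles {
      public static void main(String[] args) {
          // assumption: pass the real data directory path as args[0] if it differs
          File dataDir = new File(args.length > 0 ? args[0]
                                                  : "/var/lib/cassandra/data/uds");
          File[] files = dataDir.listFiles();
          if (files == null) {
              System.err.println("not a directory: " + dataDir);
              return;
          }
          int count = 0;
          for (File f : files) {
              if (f.getName().endsWith("-Data.db")) {
                  count++;
              }
          }
          System.out.println(count + " sstable data files in " + dataDir);
      }
  }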

tim, is yours recovering?  i plan to rerun the test tonight with a
slightly smaller data set; however, the 'get' performance was dwindling
before the node ran out of disk space.

does compaction only happen when idle?

and yes, i'm running with 0.5-beta1, but not trunk.



On Thu, 2009-12-03 at 11:03 -0800, B. Todd Burruss wrote:
> i am seeing this as well.  i did a test with just 1 cassandra node,
> ReplicationFactor=1, 'get' ConsistencyLevel.ONE, 'put'
> ConsistencyLevel.QUORUM.  The first test was writing and reading random
> values starting from a fresh database.  The put performance is staying
> reasonable, but the read performance falls off dramatically as the data
> grows.  The get performance fell from approx 6500 get/sec to 150 get/sec
> (as reported by my client stats).  The database has grown to approx
> 500gig.  i have the stats recorded on 5 second intervals and i see a
> very linear drop off as the data grows.
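> 
> (for reference, a minimal sketch of how those per-interval numbers are
> computed on the client side -- doGet() below is just a placeholder for
> the real thrift get() call, not my actual test code:)
> 
>   import java.util.Random;
> 
>   public class ReadRateProbe {
>       private static final Random RAND = new Random();
> 
>       // placeholder: swap in the real cassandra read here
>       static void doGet(String key) throws Exception {
>           Thread.sleep(1);  // simulate some read latency
>       }
> 
>       public static void main(String[] args) throws Exception {
>           long windowStart = System.currentTimeMillis();
>           long opsInWindow = 0;
>           while (true) {
>               doGet("key" + RAND.nextInt(1000000));
>               opsInWindow++;
>               long now = System.currentTimeMillis();
>               // report get/sec once per 5-second window
>               if (now - windowStart >= 5000) {
>                   System.out.printf("%.1f get/sec%n",
>                           opsInWindow * 1000.0 / (now - windowStart));
>                   opsInWindow = 0;
>                   windowStart = now;
>               }
>           }
>       }
>   }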
> 
> i stopped the server and restarted it, let it do its thing during
> restart and then reran a read-only test using the exact same data.  i am
> still at about 150 get/sec.  via JMX i can see the read latency at about
> 60, but this varies as the app runs.
> 
> my keyspace is simple:
> 
>   <Keyspaces>
>     <Keyspace Name="uds">
>       <KeysCachedFraction>0.01</KeysCachedFraction>
>       <ColumnFamily CompareWith="BytesType" Name="bucket" />
>     </Keyspace>
>   </Keyspaces>
> 
> all values are exactly the same and are 2k in length.
> 
> i've tried to do some tuning to make things faster but don't necessarily
> understand the options.  here are some of the params i've changed in the
> config file:
> 
> <CommitLogRotationThresholdInMB>256</CommitLogRotationThresholdInMB>
> <MemtableSizeInMB>1024</MemtableSizeInMB>
> <MemtableObjectCountInMillions>0.6</MemtableObjectCountInMillions>
> <CommitLogSyncPeriodInMS>1000</CommitLogSyncPeriodInMS>
> <MemtableFlushAfterMinutes>1440</MemtableFlushAfterMinutes>
> 
> hope this data helps, and any help you can provide is much appreciated.
> 
> 
> On Tue, 2009-12-01 at 20:18 -0600, Jonathan Ellis wrote:
> > On Tue, Dec 1, 2009 at 7:31 PM, Freeman, Tim <tim.freeman@hp.com> wrote:
> > > Looking at the Cassandra MBeans, the attributes of ROW-MUTATION-STAGE,
> > > ROW-READ-STAGE, and RESPONSE-STAGE are all less than 10.
> > > MINOR-COMPACTION-POOL reports 1218 pending tasks.
> > 
> > That's probably the culprit right there.  Something is wrong if you
> > have 1200 pending compactions.
> > 
> > This is something that upgrading to trunk will help with right away
> > since we parallelize compactions there.
> > 
> > Another thing you can do is increase the memtable limits so you are
> > not flushing + compacting so often with your insert traffic.
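> > 
> > For example, something along these lines in storage-conf.xml (the values
> > are only illustrative -- how far you can go depends on your heap size):
> > 
> >   <MemtableSizeInMB>2048</MemtableSizeInMB>
> >   <MemtableObjectCountInMillions>1.2</MemtableObjectCountInMillions>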
> > 
> > -Jonathan
> 
> 


