cassandra-user mailing list archives

From Weijun Li <weiju...@gmail.com>
Subject Re: Cassandra benchmark shows OK throughput but high read latency (> 100ms)?
Date Tue, 16 Feb 2010 18:16:35 GMT
Thanks for the DataFileDirectory trick; I'll give it a try.
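
For reference, I assume this means multiple <DataFileDirectory> entries under
the <DataFileDirectories> element in storage-conf.xml, something like this
(the mount points below are just placeholders):

  <DataFileDirectories>
      <DataFileDirectory>/disk1/cassandra/data</DataFileDirectory>
      <DataFileDirectory>/disk2/cassandra/data</DataFileDirectory>
  </DataFileDirectories>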

Just noticed the impact of the number of data files: node A has 13 data files
with a read latency of 20ms, while node B has 27 files with a read latency of
60ms. After I ran "nodeprobe compact" on node B, its read latency went up to
150ms. Meanwhile, node A's read latency dropped to as low as 10ms. Is this
normal behavior? I'm using the random partitioner, and the hardware/JVM
settings are exactly the same for these two nodes.

Another problem: Java heap usage always stays at around 900MB out of the 6GB
heap. Is there any way to utilize all of the heap space to decrease the read
latency?
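
To be clear about what I mean by heap settings, I'm assuming the 6GB ceiling
comes from the -Xms/-Xmx flags in bin/cassandra.in.sh, roughly like this
(sizes are placeholder examples, not my exact file):

  # bin/cassandra.in.sh -- JVM heap sizing (example values only)
  JVM_OPTS=" \
          -Xms6G \
          -Xmx6G"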

-Weijun

On Tue, Feb 16, 2010 at 10:01 AM, Brandon Williams <driftx@gmail.com> wrote:

> On Tue, Feb 16, 2010 at 11:56 AM, Weijun Li <weijunli@gmail.com> wrote:
>
> >> One more thought about Martin's suggestion: is it possible to put the
> >> data files into multiple directories that are located on different physical
> >> disks? This should help relieve the I/O bottleneck.
>>
>>
> Yes, you can already do this, just add more <DataFileDirectory> directives
> pointed at multiple drives.
>
>
>> Has anybody tested the row-caching feature in trunk (shoot for 0.6?)?
>
>
> Row cache and key cache both help tremendously if your read pattern has a
> decent repeat rate.  Completely random io can only be so fast, however.
>
> -Brandon
>
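
For reference, I assume the row/key caches mentioned above are the
per-ColumnFamily attributes in storage-conf.xml, e.g. (attribute values are
placeholders, not recommendations):

  <ColumnFamily Name="Standard1"
                CompareWith="BytesType"
                KeysCached="200000"
                RowsCached="10000"/>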
