incubator-cassandra-user mailing list archives

From Edward Capriolo <edlinuxg...@gmail.com>
Subject Re: SSTable size versus read performance
Date Thu, 16 May 2013 20:54:51 GMT
lz4 is supposed to achieve similar compression while using fewer resources
than snappy. It is easy to test: just change the compression setting and then
rewrite the sstables with 'nodetool upgradesstables'. Not sure when lz4 was
introduced, but since it is new to cassandra there may not be many large
deployments running it yet.
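
For example (untested; the table name comes from the schema quoted below,
the keyspace is a placeholder, and -a forces rewriting sstables that are
already on the current version so the new compressor actually gets applied):

  ALTER TABLE global_user WITH
    compression = {'sstable_compression': 'SnappyCompressor',
                   'chunk_length_kb': '8'};

  nodetool upgradesstables -a <keyspace> global_user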


On Thu, May 16, 2013 at 4:40 PM, Keith Wright <kwright@nanigans.com> wrote:

> Thank you for that.  I did not have trickle_fsync enabled and will give it
> a try.  I just noticed that when running a describe on my table, I do not
> see the sstable size parameter (compaction_strategy_options =
> {'sstable_size_in_mb':5}) included.  Is that expected?  Does it mean it's
> using the defaults?
>
> Assuming none of the tuning here makes a noticeable difference, my next
> step is to try switching from LZ4 to Snappy.  Any opinions on that?
>
> Thanks!
>
> CREATE TABLE global_user (
>   user_id bigint,
>   app_id int,
>   type text,
>   name text,
>   extra_param map<text, text>,
>   last timestamp,
>   paid boolean,
>   sku_time map<text, timestamp>,
>   values map<timestamp, float>,
>   PRIMARY KEY (user_id, app_id, type, name)
> ) WITH
>   bloom_filter_fp_chance=0.100000 AND
>   caching='KEYS_ONLY' AND
>   comment='' AND
>   dclocal_read_repair_chance=0.000000 AND
>   gc_grace_seconds=86400 AND
>   read_repair_chance=0.100000 AND
>   replicate_on_write='true' AND
>   populate_io_cache_on_flush='false' AND
>   compaction={'class': 'LeveledCompactionStrategy'} AND
>   compression={'chunk_length_kb': '8', 'crc_check_chance': '0.1',
> 'sstable_compression': 'LZ4Compressor'};
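>
> Maybe it needs to go inside the compaction map in CQL3 rather than as a
> separate compaction_strategy_options property?  Something like this,
> untested:
>
>   ALTER TABLE global_user WITH
>     compaction = {'class': 'LeveledCompactionStrategy',
>                   'sstable_size_in_mb': '5'};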
>
> From: Igor <igor@4friends.od.ua>
> Reply-To: "user@cassandra.apache.org" <user@cassandra.apache.org>
> Date: Thursday, May 16, 2013 4:27 PM
> To: "user@cassandra.apache.org" <user@cassandra.apache.org>
> Subject: Re: SSTable size versus read performance
>
> just in case it will be useful to somebody - here is my checklist for
> better read performance from SSD; example commands follow the list:
>
> 1. limit read-ahead to 16 or 32
> 2. enable 'trickle_fsync' (available starting from cassandra 1.1.x)
> 3. use the 'deadline' io-scheduler (much more important for rotational
> drives than for SSDs)
> 4. create the data partition starting on a 2048-sector boundary
> 5. use ext4 with noatime,nodiratime,discard mount options
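>
> Example commands (from memory, untested - adjust device and partition
> names for your system):
>
>   # 1. read-ahead (value is in 512-byte sectors)
>   blockdev --setra 16 /dev/sda
>   # 2. in cassandra.yaml:
>   #    trickle_fsync: true
>   #    trickle_fsync_interval_in_kb: 10240
>   # 3. io scheduler
>   echo deadline > /sys/block/sda/queue/scheduler
>   # 4. partition start aligned to sector 2048, e.g. with parted:
>   #    parted -s /dev/sda mkpart primary 2048s 100%
>   # 5. ext4 mount options
>   mount -o noatime,nodiratime,discard /dev/sda1 /var/lib/cassandra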
>
> On 05/16/2013 10:48 PM, Edward Capriolo wrote:
>
> I was going to say something similar. I feel like the SSD drives read much
> "more" than the standard drives. Read-ahead/large sectors could explain it,
> and probably do.
>
>
> On Thu, May 16, 2013 at 3:43 PM, Bryan Talbot <btalbot@aeriagames.com> wrote:
>
>> 512 sectors for read-ahead.  Are your new fancy SSD drives using large
>> sectors?  If your read-ahead is really reading 512 x 4KB per random IO,
>> then that 2 MB per read seems like a lot of extra overhead.
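>>
>> (For the math: blockdev read-ahead values are in 512-byte sectors, so a
>> setting of 512 normally means 512 x 512 B = 256 KB per read-ahead; the
>> 2 MB figure assumes 4 KB units, i.e. 512 x 4 KB.)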
>>
>> -Bryan
>>
>>
>>
>>
>> On Thu, May 16, 2013 at 12:35 PM, Keith Wright <kwright@nanigans.com> wrote:
>>
>>> We actually have it set to 512.  I have tried decreasing my SSTable size
>>> to 5 MB and changing the chunk size to 8 KB.
>>>
>>> From: Igor <igor@4friends.od.ua>
>>> Reply-To: "user@cassandra.apache.org" <user@cassandra.apache.org>
>>> Date: Thursday, May 16, 2013 1:55 PM
>>>
>>> To: "user@cassandra.apache.org" <user@cassandra.apache.org>
>>> Subject: Re: SSTable size versus read performance
>>>
>>> My 5 cents: I'd check 'blockdev --getra' for the data drives - too high a
>>> value for read-ahead (it defaults to 256 on Debian) can hurt read
>>> performance.
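>>>
>>> e.g.:
>>>
>>>   blockdev --getra /dev/sda   # reported in 512-byte sectors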
>>>
>>>
>
>
