cassandra-user mailing list archives

From DuyHai Doan <doanduy...@gmail.com>
Subject Re: LCS Increasing the sstable size
Date Wed, 31 Aug 2016 11:28:56 GMT
Some random thoughts:

1) Are they using SSDs?

2) If they are on SSDs, I remember that one recommendation is not to exceed
~3 TB/node, unless they're using DateTieredCompactionStrategy or, better,
TimeWindowCompactionStrategy.

3) LCS is very disk-intensive, and its write amplification usually worsens
as the amount of data grows.
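As a rough illustration of why more data means more rewrites under LCS, here is a back-of-the-envelope sketch (assumptions: the default fanout of 10, L1 sized at about 10 SSTables, and every byte rewritten roughly once for each level it descends through, plus overlap rewrites on each promotion):

```python
# Sketch: how many LCS levels a given data volume needs. Each extra level
# means roughly one more full rewrite of the data over its lifetime.
def levels_needed(total_mb, sstable_mb, fanout=10):
    """Smallest number of levels whose cumulative capacity holds total_mb."""
    level, capacity = 1, fanout * sstable_mb  # L1 ~ 10 SSTables (assumption)
    while capacity < total_mb:
        level += 1
        capacity *= fanout  # each level is ~10x the previous one
    return level

# 6 TB of data with 160 MB SSTables:
print(levels_needed(6 * 1024 * 1024, 160))  # prints 5
```

With five levels, each byte ends up rewritten several times on its way down, which is why write amplification grows with the data volume per node.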

4) The huge number of SSTables makes me suspect that compaction is not
keeping up. Can you post the output of "nodetool tablestats" and
"nodetool compactionstats" here? Are there many pending compactions?

5) Last but not least, what does "dstat" show? Do you see frequent CPU
I/O wait?
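Concretely, something along these lines on each node (the keyspace and table names are placeholders):

```shell
# Pending compactions and per-table SSTable counts (placeholder names).
nodetool compactionstats
nodetool tablestats my_keyspace.my_table

# Sample CPU, disk and load every 5 seconds for a minute; a persistently
# high 'wai' column suggests the disks cannot keep up with compaction.
dstat -cdl 5 12
```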

On Wed, Aug 31, 2016 at 12:34 PM, Jérôme Mainaud <jerome@mainaud.com> wrote:

> Hello,
>
> My cluster uses LeveledCompactionStrategy on rather big nodes (9 TB of disk
> per node, with a target of 6 TB of data; the remaining 3 TB are reserved
> for compaction and snapshots). There is only one table for this application.
>
> With the default sstable_size_in_mb of 160 MB, we have a huge number of
> sstables (25,000+ for the 4 TB already loaded), which leads to IO errors
> due to the open files limit (set at 100,000).
>
> Increasing the open files limit could be a solution, but at this level I
> would rather increase sstable_size_in_mb to 500 MB, which would keep the
> file count around 100,000.
>
> Could increasing the sstable size lead to any problem I don't see?
> Do you have any advice about this?
>
> Thank you.
>
> --
> Jérôme Mainaud
> jerome@mainaud.com
>
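For what it's worth, the quoted numbers line up if each SSTable is counted as roughly eight on-disk component files; that per-SSTable count is an assumption (the exact number depends on the SSTable format version):

```python
# Rough arithmetic behind the quoted figures. Assumption: ~8 component
# files per SSTable (Data, Index, Filter, Summary, Statistics,
# CompressionInfo, Digest, TOC).
TB = 1024 ** 2  # MB per TB

def sstable_count(data_mb, sstable_size_mb):
    """Approximate number of SSTables for a given data volume."""
    return data_mb // sstable_size_mb

loaded = sstable_count(4 * TB, 160)   # ~26,000 SSTables: matches "25,000+"
target = sstable_count(6 * TB, 500)   # ~12,600 SSTables at 500 MB

print(loaded, loaded * 8)   # current SSTables and approximate file count
print(target, target * 8)   # ~100,000 files, matching the quoted estimate
```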
