cassandra-user mailing list archives

From Brice Dutheil <brice.duth...@gmail.com>
Subject Re: LCS Strategy, compaction pending tasks keep increasing
Date Tue, 21 Apr 2015 13:06:40 GMT
I’m not sure I get everything about the storm stuff, but my understanding of
LCS is that the compaction count may increase the more one updates data (that’s
why I was wondering about duplicate primary keys).

Another option is that the code is sending too many write requests per second to
the cassandra cluster. I don’t know how many nodes you have, but the fewer nodes
there are, the more compaction work each node has to absorb.
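As a rough back-of-the-envelope sketch of why node count matters (assuming RF=3 on the 3-node cluster mentioned below; the per-thread insert rate here is purely illustrative):

```python
# Rough estimate of the write load each node actually absorbs.
# With RF equal to the node count, every node receives a replica of
# every write, so per-node write rate equals the full client rate.

def per_node_writes(client_writes_per_sec: float,
                    replication_factor: int,
                    num_nodes: int) -> float:
    """Total replica writes divided evenly across the cluster."""
    return client_writes_per_sec * replication_factor / num_nodes

# Example: 75 writer threads each doing ~1000 inserts/s (illustrative),
# RF=3, 3 nodes -> each node absorbs the entire 75k writes/s.
rate = per_node_writes(75 * 1000, replication_factor=3, num_nodes=3)
print(rate)  # 75000.0
```

Every flush those writes trigger lands in L0, which is exactly where the compaction backlog shows up.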
Also I’d look at the CPU / load; maybe the config is too *restrictive*. Look at
the following properties in cassandra.yaml:

   - compaction_throughput_mb_per_sec: the default is 16. You may want to
   increase it, but be careful on mechanical drives; if you’re already on SSDs,
   IO is rarely the issue. We use 64 (with SSDs).
   - multithreaded_compaction: false by default; we enabled it.
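For reference, the corresponding cassandra.yaml fragment might look like this (the values are from our SSD setup, not a recommendation for yours):

```yaml
# cassandra.yaml -- compaction tuning (illustrative values)
# Default is 16; we run 64 on SSDs. Keep it lower on mechanical drives.
compaction_throughput_mb_per_sec: 64
# Default is false; we enabled it.
multithreaded_compaction: true
```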

Compaction threads are niced, so they shouldn’t be much of an issue for serving
production r/w requests. But you never know; always keep an eye on IO and
CPU.
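To make the “keep an eye on it” part concrete, here is a small sketch for reading the bracketed “SSTables in each level” line from nodetool cfstats (assuming L0’s healthy target is 4 sstables, as the “154/4” in the thread below suggests):

```python
# Interpret the "SSTables in each level" array from `nodetool cfstats`.
# For LCS, "154/4" in L0 means 154 sstables present versus a target of 4:
# flushes are arriving faster than L0 -> L1 compaction can drain them.

def l0_backlog(levels: list, l0_target: int = 4) -> int:
    """Number of L0 sstables above the healthy target (0 if none)."""
    return max(0, levels[0] - l0_target)

levels = [154, 8, 0, 0, 0, 0, 0, 0, 0]  # the numbers from the thread below
print(l0_backlog(levels))  # 150 -> a large L0 backlog, pending tasks pile up
```

A steadily growing L0 count like that is the signal that either the write rate or the compaction throttle needs adjusting.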

— Brice

On Tue, Apr 21, 2015 at 2:48 PM, Anishek Agarwal <anishek@gmail.com> wrote:

> sorry, I take that back: we will modify different keys across threads, not the
> same key; our storm topology is going to use field grouping to route updates
> for the same keys to the same set of bolts.
>
> On Tue, Apr 21, 2015 at 6:17 PM, Anishek Agarwal <anishek@gmail.com>
> wrote:
>
>> @Brice: I don't think so, as I am giving each thread a specific key range
>> with no overlaps, so this does not seem to be the case now. However, we will
>> have to test where we have to modify the same key across threads -- do you
>> think that will cause a problem? As far as I have read, LCS is recommended
>> for such cases. Should I just switch back to SizeTieredCompactionStrategy?
>>
>>
>> On Tue, Apr 21, 2015 at 6:13 PM, Brice Dutheil <brice.dutheil@gmail.com>
>> wrote:
>>
>>> Could it be that the app is inserting _duplicate_ keys ?
>>>
>>> -- Brice
>>>
>>> On Tue, Apr 21, 2015 at 1:52 PM, Marcus Eriksson <krummas@gmail.com>
>>> wrote:
>>>
>>>> nope, but you can correlate I guess, tools/bin/sstablemetadata gives
>>>> you sstable level information
>>>>
>>>> and, it is also likely that since you get so many L0 sstables, you will
>>>> be doing size tiered compaction in L0 for a while.
>>>>
>>>> On Tue, Apr 21, 2015 at 1:40 PM, Anishek Agarwal <anishek@gmail.com>
>>>> wrote:
>>>>
>>>>> @Marcus I did look, and that is where I got the above, but it doesn't
>>>>> show any detail about moving from L0 -> L1. Any specific arguments I
>>>>> should try with?
>>>>>
>>>>> On Tue, Apr 21, 2015 at 4:52 PM, Marcus Eriksson <krummas@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> you need to look at nodetool compactionstats - there is probably a
>>>>>> big L0 -> L1 compaction going on that blocks other compactions from
>>>>>> starting
>>>>>>
>>>>>> On Tue, Apr 21, 2015 at 1:06 PM, Anishek Agarwal <anishek@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> the "some_bits" column has about 14-15 bytes of data per key.
>>>>>>>
>>>>>>> On Tue, Apr 21, 2015 at 4:34 PM, Anishek Agarwal <anishek@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Hello,
>>>>>>>>
>>>>>>>> I am inserting about 100 million entries via the datastax-java driver
>>>>>>>> to a cassandra cluster of 3 nodes.
>>>>>>>>
>>>>>>>> Table structure is as
>>>>>>>>
>>>>>>>> create keyspace test with replication = {'class':
>>>>>>>> 'NetworkTopologyStrategy', 'DC' : 3};
>>>>>>>>
>>>>>>>> CREATE TABLE test_bits(id bigint primary key, some_bits text) with
>>>>>>>> gc_grace_seconds=0 and compaction = {'class': 'LeveledCompactionStrategy'}
>>>>>>>> and compression={'sstable_compression' : ''};
>>>>>>>>
>>>>>>>> I have 75 threads that are inserting data into the above table, with
>>>>>>>> each thread having non-overlapping keys.
>>>>>>>>
>>>>>>>> I see that the number of pending tasks via "nodetool compactionstats"
>>>>>>>> keeps increasing, and "nodetool cfstats test.test_bits" shows the
>>>>>>>> SSTable levels as [154/4, 8, 0, 0, 0, 0, 0, 0, 0].
>>>>>>>>
>>>>>>>> Why is compaction not kicking in?
>>>>>>>>
>>>>>>>> thanks
>>>>>>>> anishek
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
