incubator-cassandra-user mailing list archives

From rash aroskar <rashmi.aros...@gmail.com>
Subject Re: 1.2 leveled compactions can affect big bunch of writes? how to stop/restart them?
Date Thu, 19 Sep 2013 19:16:13 GMT
Thanks for responses.
Nate - I haven't tried changing compaction_throughput_mb_per_sec; in my
cassandra.yaml I had set it to 32 to begin with. Do you think 32 could be
too high when Cassandra receives writes only occasionally, but each burst
arrives as one big chunk?
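
For reference, the two knobs discussed in this thread live in cassandra.yaml. The snippet below is only an illustrative sketch (the values shown are examples, not recommendations):

```yaml
# Throttles total compaction I/O across the node; 16 MB/s is the 1.2 default.
# Lowering it reduces compaction's impact on concurrent writes at the cost
# of letting pending compactions pile up longer.
compaction_throughput_mb_per_sec: 16

# Number of compactions that may run simultaneously; by default this is
# tied to the number of cores. Pinning it low limits compaction parallelism.
concurrent_compactors: 2
```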


On Thu, Sep 19, 2013 at 12:33 PM, sankalp kohli <kohlisankalp@gmail.com> wrote:

> You cannot start leveled compaction manually. It runs based on the data in
> each level.
>
>
> On Thu, Sep 19, 2013 at 9:19 AM, Nate McCall <nate@thelastpickle.com> wrote:
>
>> As opposed to stopping compaction altogether, have you experimented with
>> turning down compaction_throughput_mb_per_sec (16 MB/s default) and/or
>> explicitly setting concurrent_compactors (which defaults to the number of
>> cores, iirc)?
>>
>>
>> On Thu, Sep 19, 2013 at 10:58 AM, rash aroskar <rashmi.aroskar@gmail.com> wrote:
>>
>>> Hi,
>>> In general, leveled compactions are I/O-heavy, so when a big bunch of
>>> writes arrives, do we need to stop leveled compactions at all?
>>> I found nodetool stop COMPACTION, which is documented to stop compactions
>>> in progress. Does this work for every type of compaction? The docs also
>>> state that 'eventually Cassandra restarts the compaction' - isn't there a
>>> way to control when the compaction starts again manually?
>>> If this is not applicable to leveled compactions in 1.2, then what can
>>> be used for stopping/restarting those?
>>>
>>>
>>>
>>> Thanks,
>>> Rashmi
>>>
>>
>>
>
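
To summarize the thread: compaction is usually throttled at runtime rather than stopped outright. A rough sketch of the relevant nodetool invocations follows (these require a running node, and exact behavior should be checked against the 1.2 nodetool you have installed):

```shell
# Show which compactions are currently running and how many are pending.
nodetool compactionstats

# Throttle compaction I/O at runtime, in MB/s (0 disables throttling).
# This overrides compaction_throughput_mb_per_sec until the node restarts.
nodetool setcompactionthroughput 8

# Ask Cassandra to abort in-progress compactions of the given type.
# There is no manual restart: the compaction strategy reschedules work
# on its own as new SSTables accumulate, which is the "eventually
# Cassandra restarts the compaction" behavior the docs describe.
nodetool stop COMPACTION
```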
