cassandra-user mailing list archives

From Thomas Borg Salling <tbsall...@tbsalling.dk>
Subject Re: Huge number of sstables after adding server to existing cluster
Date Fri, 03 Apr 2015 20:04:56 GMT
I agree with Pranay. I have experienced exactly the same on C* 2.1.2.
/Thomas.
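
For anyone who hits this later, the workaround Pranay describes can be sketched roughly as below. The keyspace and table names are placeholders (not from this thread), and the commands assume you run them on a node of the affected cluster:

```shell
# Sketch of the workaround described in this thread; "mykeyspace" and
# "mytable" are placeholder names. Run on a node of the affected cluster.
if command -v nodetool >/dev/null 2>&1; then
    # Re-enable autocompaction on the affected table. Even if compaction
    # was never explicitly disabled, this has been reported to kick it off.
    nodetool enableautocompaction mykeyspace mytable
    # Watch the backlog drain; the sstable count should come back down.
    nodetool compactionstats
else
    echo "nodetool not found; run this on a Cassandra node"
fi
```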

2015-04-03 19:33 GMT+02:00 Pranay Agarwal <agarwalpranaya@gmail.com>:

> I remember this happening to me once. The number of SSTables was way beyond
> the limit (32 by default) but compaction still wasn't starting. All I did
> was run "nodetool enableautocompaction keyspace table" and compaction
> started immediately; the SSTable count came back down to a normal level. It
> was a little surprising to me as well, because I had never disabled
> compaction in the first place.
>
> -Pranay
>
> On Fri, Apr 3, 2015 at 10:18 AM, Robert Coli <rcoli@eventbrite.com> wrote:
>
>> On Fri, Apr 3, 2015 at 4:57 AM, Mantas Klasavičius <
>> mantas.klasavicius@gmail.com> wrote:
>>
>>> Q1:is that what we should expect to happen?
>>>
>> A known problem with the current streaming paradigm when combined with
>> vnodes is that newly bootstrapped nodes do a bunch of compaction.
>>
>>
>>> Q2:what could be the reason of not reducing number of sstables?
>>>
>> nodetool setcompactionthroughput 0 # note, if you don't have spare i/o,
>> this could negatively affect service time
>>
>>
>>> Q3:what we need to do to reduce number of sstables per server?
>>>
>> Make sure you're compacting faster than you're writing and wait.
>>
>> =Rob
>> http://twitter.com/rcolidba
>>
>
>

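Rob's two suggestions can be sketched together as below. The data-directory path, keyspace, and table names are placeholders for the default Cassandra layout; and note his caveat that unthrottling compaction can hurt service latency if you lack spare i/o:

```shell
# Sketch combining the advice above; keyspace name and data path are
# placeholder assumptions (default Cassandra data directory).
if command -v nodetool >/dev/null 2>&1; then
    # Remove the compaction throughput cap (0 = unthrottled). Per the
    # caveat above, this can hurt read/write latency without spare i/o.
    nodetool setcompactionthroughput 0
    # Check that compactions are actually running and draining.
    nodetool compactionstats
fi
# Track progress by counting live sstables for the keyspace
# (adjust the path for your install):
find /var/lib/cassandra/data/mykeyspace -name '*-Data.db' 2>/dev/null | wc -l
```

Once the pending-compaction count trends to zero and the `*-Data.db` count levels off, you can restore a throughput cap with `nodetool setcompactionthroughput <MB/s>`.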