cassandra-user mailing list archives

From Jean Carlo <jean.jeancar...@gmail.com>
Subject Re: Incremental repairs in 3.0
Date Tue, 13 Sep 2016 13:02:54 GMT
Hi Paulo!

Sorry, there was something I was doing wrong.
Now I can see that the value of Repaired At changes even if there is no
streaming. I am using Cassandra 2.1.14 and the command was nodetool repair
-inc -par.
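
In case it helps anyone reproducing this: I was checking the repaired state
with the sstablemetadata tool shipped in the Cassandra tools directory (the
sstable path below is just a placeholder):

    sstablemetadata <sstable>-Data.db | grep "Repaired at"

It prints "Repaired at: 0" for an sstable in the unrepaired set, and a
millisecond timestamp once the sstable has been marked by incremental repair.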

Anyway, good to know this:

> If you're using subrange repair, please note that this has only partial
> support for incremental repair due to CASSANDRA-10422 and should not mark
> sstables as repaired, so you could be hitting CASSANDRA-12489.
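
And for anyone else landing on this thread, the distinction is simply whether
the repair is restricted to a token range with -st/-et. Roughly (keyspace and
table names are only placeholders):

    # full-range incremental repair (what I ran)
    nodetool repair -inc -par my_keyspace my_table

    # subrange incremental repair, only partially supported (CASSANDRA-10422)
    nodetool repair -inc -par -st <start_token> -et <end_token> my_keyspace my_table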

Thx Paulo :)


Saludos

Jean Carlo

"The best way to predict the future is to invent it" Alan Kay

On Mon, Sep 12, 2016 at 2:06 PM, Paulo Motta <pauloricardomg@gmail.com>
wrote:

> > I truncated an LCS table, then I inserted one row and used nodetool flush
> > to get the sstables on disk. Using RF 3, I ran a repair -inc directly and I
> > observed that the value of Repaired At was equal to 0.
>
> Were you able to troubleshoot this? The value of repairedAt should be
> mutated even when there is no streaming; otherwise there might be
> something going on. What version are you using and what command did you use
> to trigger incremental repair?
>
> If you're using subrange repair, please note that this has only partial
> support for incremental repair due to CASSANDRA-10422 and should not mark
> sstables as repaired, so you could be hitting CASSANDRA-12489.
>
> 2016-09-07 8:15 GMT-03:00 Jean Carlo <jean.jeancarl48@gmail.com>:
>
>> Well, I did a small test on my cluster and I didn't get the results I was
>> expecting.
>>
>> I truncated an LCS table, then I inserted one row and used nodetool flush
>> to get the sstables on disk. Using RF 3, I ran a repair -inc directly and I
>> observed that the value of Repaired At was equal to 0.
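>>
>> For reference, the sequence was roughly the following (the column names in
>> the INSERT are just an example, not my real schema):
>>
>>     cqlsh> TRUNCATE my_keyspace.my_lcs_table;
>>     cqlsh> INSERT INTO my_keyspace.my_lcs_table (id, value) VALUES (1, 'x');
>>     nodetool flush my_keyspace my_lcs_table
>>     nodetool repair -inc -par my_keyspace my_lcs_table
>>     sstablemetadata <sstable>-Data.db | grep "Repaired at"    # still shows 0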
>>
>> So I started to think that if there are no changes (no diff in the Merkle
>> trees), the repair will not reach the streaming phase, and it is there that
>> the sstables are marked as repaired.
>>
>> I did another test to confirm my assumptions, and this time I saw the
>> sstables marked as repaired ("Repaired at" value isn't 0), but only the
>> sstables that were out of sync.
>>
>> So my question is: if we migrate to incremental repair in prod and we don't
>> use the migration procedure, will sstables that are never mutated stay in an
>> unrepaired state?
>>
>> Probably there is something I am not able to see
>>
>>
>>
>>
>>
>> Saludos
>>
>> Jean Carlo
>>
>> "The best way to predict the future is to invent it" Alan Kay
>>
>> On Tue, Sep 6, 2016 at 8:19 PM, Bryan Cheng <bryan@blockcypher.com>
>> wrote:
>>
>>> HI Jean,
>>>
>>> This blog post is a pretty good resource:
>>> http://www.datastax.com/dev/blog/anticompaction-in-cassandra-2-1
>>>
>>> I believe in 2.1.x you don't need to do the manual migration procedure,
>>> but if you run regular repairs and the data set under LCS is fairly large
>>> (what this means will probably depend on your data model and
>>> hardware/cluster makeup) you can take advantage of a full repair to make
>>> anticompaction a bit easier. What we observed was the anticompaction
>>> procedure taking longer than a standard full repair and with a higher load
>>> on the cluster while running.
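>>>
>>> For completeness, the manual migration is roughly: disable autocompaction,
>>> run a full repair, and then, with the node stopped, mark the existing
>>> sstables with the sstablerepairedset tool before restarting (the sstable
>>> path is a placeholder; the DataStax migration procedure linked further down
>>> the thread has the exact steps):
>>>
>>>     sstablerepairedset --really-set --is-repaired <sstable>-Data.db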
>>>
>>> On Tue, Sep 6, 2016 at 2:00 AM, Jean Carlo <jean.jeancarl48@gmail.com>
>>> wrote:
>>>
>>>> Hi @Bryan
>>>>
>>>> When you said "sizable amount of data", you meant a huge amount of data,
>>>> right? Our big table is on LCS, and if we use the migration process we will
>>>> need to run a sequential repair over this table for a long time.
>>>>
>>>> We are planning to move to incremental repairs using version 2.1.14.
>>>>
>>>>
>>>> Saludos
>>>>
>>>> Jean Carlo
>>>>
>>>> "The best way to predict the future is to invent it" Alan Kay
>>>>
>>>> On Tue, Jun 21, 2016 at 4:34 PM, Vlad <qa23d-vvd@yahoo.com> wrote:
>>>>
>>>>> Thanks for answer!
>>>>>
>>>>> >It may still be a good idea to manually migrate if you have a
>>>>> sizable amount of data
>>>>> No, it would be a brand new ;-) 3.0 cluster
>>>>>
>>>>>
>>>>>
>>>>> On Tuesday, June 21, 2016 1:21 AM, Bryan Cheng <bryan@blockcypher.com>
>>>>> wrote:
>>>>>
>>>>>
>>>>> Sorry, meant to say "therefore manual migration procedure should be
>>>>> UNnecessary"
>>>>>
>>>>> On Mon, Jun 20, 2016 at 3:21 PM, Bryan Cheng <bryan@blockcypher.com>
>>>>> wrote:
>>>>>
>>>>> I don't use 3.x so hopefully someone with operational experience can
>>>>> chime in, however my understanding is: 1) Incremental repairs should be
>>>>> the default in the 3.x release branch and 2) sstable repairedAt is now
>>>>> properly set in all sstables as of 2.2.x for standard repairs and therefore
>>>>> manual migration procedure should be necessary. It may still be a good idea
>>>>> to manually migrate if you have a sizable amount of data and are using LCS
>>>>> as anticompaction is rather painful.
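>>>>>
>>>>> In terms of commands, that works out roughly as follows (the keyspace name
>>>>> is just a placeholder):
>>>>>
>>>>>     # 2.1.x: incremental repair has to be requested explicitly
>>>>>     nodetool repair -inc -par my_keyspace
>>>>>
>>>>>     # 2.2+ / 3.x: incremental is the default; -full forces a full repair
>>>>>     nodetool repair my_keyspace
>>>>>     nodetool repair -full my_keyspace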
>>>>>
>>>>> On Sun, Jun 19, 2016 at 6:37 AM, Vlad <qa23d-vvd@yahoo.com> wrote:
>>>>>
>>>>> Hi,
>>>>>
>>>>> assuming I have a new, empty Cassandra cluster, how should I start using
>>>>> incremental repairs? Is incremental repair the default now (as I don't see
>>>>> an *-inc* option in nodetool) and nothing is needed to use it, or should
>>>>> we perform the migration procedure
>>>>> <http://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsRepairNodesMigration.html>
>>>>> anyway? And what happens to new column families?
>>>>>
>>>>> Regards.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>
