cassandra-commits mailing list archives

From "Stefan Podkowinski (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (CASSANDRA-12888) Incremental repairs broken for MVs and CDC
Date Wed, 09 Nov 2016 14:55:58 GMT

     [ https://issues.apache.org/jira/browse/CASSANDRA-12888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Stefan Podkowinski updated CASSANDRA-12888:
-------------------------------------------
    Description: 
SSTables streamed during the repair process are first written locally and afterwards either
simply added to the pool of existing sstables or, in the case of existing MVs or active CDC,
replayed on a per-mutation basis.

As described in {{StreamReceiveTask.OnCompletionRunnable}}:

{quote}
We have a special path for views and for CDC.

For views, since the view requires cleaning up any pre-existing state, we must put all partitions
through the same write path as normal mutations. This also ensures any 2is are also updated.

For CDC-enabled tables, we want to ensure that the mutations are run through the CommitLog
so they can be archived by the CDC process on discard.
{quote}
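
For illustration, a heavily simplified sketch of that branching could look as follows. This is paraphrased rather than copied from the source: {{hasViews()}} and {{isCdcEnabled()}} are hypothetical stand-ins for the actual checks, and the signatures may differ between versions.

{code:java}
import java.util.Collection;

import org.apache.cassandra.db.ColumnFamilyStore;
import org.apache.cassandra.db.Mutation;
import org.apache.cassandra.db.partitions.PartitionUpdate;
import org.apache.cassandra.db.rows.UnfilteredRowIterator;
import org.apache.cassandra.io.sstable.ISSTableScanner;
import org.apache.cassandra.io.sstable.format.SSTableReader;

// Paraphrased sketch of the stream receive completion logic (not the actual source).
void onStreamReceiveCompleted(ColumnFamilyStore cfs, Collection<SSTableReader> readers)
{
    // hasViews()/isCdcEnabled() are hypothetical helpers standing in for the real checks.
    if (hasViews(cfs) || isCdcEnabled(cfs))
    {
        // MV/CDC path: replay the streamed data as ordinary mutations, so view
        // updates are generated and CDC sees the writes via the commit log.
        for (SSTableReader reader : readers)
        {
            try (ISSTableScanner scanner = reader.getScanner())
            {
                while (scanner.hasNext())
                {
                    try (UnfilteredRowIterator partition = scanner.next())
                    {
                        new Mutation(PartitionUpdate.fromIterator(partition)).apply();
                    }
                }
            }
        }
        // Note: the repairedAt value carried by the streamed sstables is not
        // propagated to the sstables that this write path eventually flushes.
    }
    else
    {
        // Regular path: add the received sstables directly to the live set,
        // keeping their metadata (including repairedAt) as streamed.
        cfs.addSSTables(readers);
    }
}
{code}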

Using the regular write path turns out to be an issue for incremental repairs, as we lose
the {{repaired_at}} state in the process. The streamed rows will eventually end up in the
unrepaired set, while the corresponding rows on the sender side have been moved to the
repaired set. The next repair run will therefore stream the same data back again, causing
rows to bounce back and forth between nodes on every repair.
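
To make the effect more tangible, here is a hypothetical check (not part of the code base) illustrating the mismatch. The field and constant names ({{getSSTableMetadata().repairedAt}}, {{ActiveRepairService.UNREPAIRED_SSTABLE}}) are assumed from the 3.x code base and may differ.

{code:java}
import org.apache.cassandra.io.sstable.format.SSTableReader;
import org.apache.cassandra.service.ActiveRepairService;

// Hypothetical illustration: compares the repairedAt of an sstable streamed from the
// sender with an sstable flushed on the receiver after the mutation-based replay.
static boolean repairedAtWasLost(SSTableReader streamed, SSTableReader flushedOnReceiver)
{
    long sent = streamed.getSSTableMetadata().repairedAt;               // set by the repair on the sender
    long received = flushedOnReceiver.getSSTableMetadata().repairedAt;  // fresh flush => unrepaired (0)

    return sent != ActiveRepairService.UNREPAIRED_SSTABLE
        && received == ActiveRepairService.UNREPAIRED_SSTABLE;
}
{code}

With the plain {{addSSTables}} path the streamed {{repaired_at}} value survives; on the mutation path the data is flushed into brand new sstables that start out unrepaired, which is what causes the same data to be streamed again on the next incremental repair.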

See the linked dtest for steps to reproduce. An example for reproducing this manually using ccm
can be found [here|https://gist.github.com/spodkowinski/2d8e0408516609c7ae701f2bf1e515e8].

  was:
SSTables streamed during the repair process are first written locally and afterwards either
simply added to the pool of existing sstables or, in the case of existing MVs or active CDC,
replayed on a per-mutation basis:

{quote}
We have a special path for views and for CDC.

For views, since the view requires cleaning up any pre-existing state, we must put all partitions
through the same write path as normal mutations. This also ensures any 2is are also updated.

For CDC-enabled tables, we want to ensure that the mutations are run through the CommitLog
so they can be archived by the CDC process on discard.
{quote}

Using the regular write path turns out to be an issue for incremental repairs, as we lose
the {{repaired_at}} state in the process. The streamed rows will eventually end up in the
unrepaired set, while the corresponding rows on the sender side have been moved to the
repaired set. The next repair run will therefore stream the same data back again, causing
rows to bounce back and forth between nodes on every repair.

See the linked dtest for steps to reproduce. An example for reproducing this manually using ccm
can be found [here|https://gist.github.com/spodkowinski/2d8e0408516609c7ae701f2bf1e515e8].


> Incremental repairs broken for MVs and CDC
> ------------------------------------------
>
>                 Key: CASSANDRA-12888
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-12888
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Streaming and Messaging
>            Reporter: Stefan Podkowinski
>            Priority: Critical
>
> SSTables streamed during the repair process are first written locally and afterwards either simply added to the pool of existing sstables or, in the case of existing MVs or active CDC, replayed on a per-mutation basis:
> As described in {{StreamReceiveTask.OnCompletionRunnable}}:
> {quote}
> We have a special path for views and for CDC.
> For views, since the view requires cleaning up any pre-existing state, we must put all partitions through the same write path as normal mutations. This also ensures any 2is are also updated.
> For CDC-enabled tables, we want to ensure that the mutations are run through the CommitLog so they can be archived by the CDC process on discard.
> {quote}
> Using the regular write path turns out to be an issue for incremental repairs, as we lose the {{repaired_at}} state in the process. The streamed rows will eventually end up in the unrepaired set, while the corresponding rows on the sender side have been moved to the repaired set. The next repair run will therefore stream the same data back again, causing rows to bounce back and forth between nodes on every repair.
> See the linked dtest for steps to reproduce. An example for reproducing this manually using ccm can be found [here|https://gist.github.com/spodkowinski/2d8e0408516609c7ae701f2bf1e515e8].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
