cassandra-commits mailing list archives

From "prmg (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CASSANDRA-7168) Add repair aware consistency levels
Date Wed, 01 Apr 2015 04:09:54 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-7168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14389957#comment-14389957 ]

prmg commented on CASSANDRA-7168:
---------------------------------

[~tjake] I'm taking a stab at this ticket for learning purposes. So far I have calculated
maxPartitionRepairTime on the coordinator, sent it to the replicas via the MessagingService
on the ReadCommand, and skipped sstables with repairedAt <= maxPartitionRepairTime in the
CollationController.
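For reference, here is a minimal self-contained sketch of the filtering I have in mind (the
SSTable record and field names below are stand-ins for the real SSTableReader metadata, not
the actual API, and maxPartitionRepairTime is assumed to arrive on the ReadCommand):

{code:java}
import java.util.ArrayList;
import java.util.List;

public class RepairAwareFilteringSketch
{
    static final long UNREPAIRED = 0L; // stand-in for the "never repaired" marker

    // Minimal stand-in for the repairedAt metadata carried by an sstable.
    record SSTable(String name, long repairedAt) {}

    // Keep only the sstables this replica still has to read for the un-repaired phase:
    // unrepaired sstables, plus sstables repaired after the coordinator's
    // maxPartitionRepairTime (their data is not covered by the repaired read).
    static List<SSTable> filterForUnrepairedRead(List<SSTable> candidates, long maxPartitionRepairTime)
    {
        List<SSTable> toRead = new ArrayList<>();
        for (SSTable sstable : candidates)
        {
            boolean coveredByRepairedRead = sstable.repairedAt() != UNREPAIRED
                                            && sstable.repairedAt() <= maxPartitionRepairTime;
            if (!coveredByRepairedRead)
                toRead.add(sstable);
        }
        return toRead;
    }
}
{code}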
One part of your description that was not clear to me:
bq. We will also need to include tombstones in the results of the non-repaired column family result since they need to be merged with the repaired result.
Is that tombstone inclusion already handled by the normal flow of the CollationController, or
is some post-processing needed after the sstables with repairedAt <= maxPartitionRepairTime
are skipped? It would be great if you could clarify that a bit for me. Thanks!
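
To make the question more concrete, my understanding of why those tombstones matter is roughly
the following toy reconciliation (purely illustrative, not Cassandra's actual cell merge code):
a newer tombstone that exists only in un-repaired sstables has to be able to shadow an older
live value returned by the repaired read, otherwise the deleted value would be resurrected.

{code:java}
public class TombstoneMergeSketch
{
    // value == null represents a tombstone.
    record Cell(String value, long timestamp) {}

    // Last-write-wins reconciliation of the repaired and un-repaired results for one column.
    static Cell reconcile(Cell repaired, Cell unrepaired)
    {
        if (repaired == null) return unrepaired;
        if (unrepaired == null) return repaired;
        return unrepaired.timestamp() >= repaired.timestamp() ? unrepaired : repaired;
    }

    public static void main(String[] args)
    {
        Cell repairedLive = new Cell("v1", 10);        // live cell from the repaired read
        Cell unrepairedTombstone = new Cell(null, 20); // newer delete, only in un-repaired sstables

        // If the tombstone were dropped from the un-repaired result, the stale "v1" would win.
        Cell merged = reconcile(repairedLive, unrepairedTombstone);
        System.out.println(merged.value() == null ? "deleted (correct)" : "resurrected: " + merged.value());
    }
}
{code}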

> Add repair aware consistency levels
> -----------------------------------
>
>                 Key: CASSANDRA-7168
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-7168
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core
>            Reporter: T Jake Luciani
>              Labels: performance
>             Fix For: 3.1
>
>
> With CASSANDRA-5351 and CASSANDRA-2424 I think there is an opportunity to avoid a lot of extra disk I/O when running queries with higher consistency levels.
> Since repaired data is by definition consistent and we know which sstables are repaired, we can optimize the read path by having a REPAIRED_QUORUM which breaks reads into two phases:
>  
>   1) Read from one replica the result from the repaired sstables. 
>   2) Read from a quorum only the un-repaired data.
> For the node performing 1) we can pipeline the call so it's a single hop.
> In the long run (assuming data is repaired regularly) we will end up with much closer to CL.ONE performance while maintaining consistency.
> Some things to figure out:
>   - If repairs fail on some nodes we can have a situation where we don't have a consistent
repaired state across the replicas.  
>   
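
For illustration, a rough coordinator-side sketch of the two-phase REPAIRED_QUORUM read
described above (the Replica interface and all method names are hypothetical, not the actual
Cassandra API):

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Hypothetical coordinator-side flow for REPAIRED_QUORUM; all names are illustrative only.
public class RepairedQuorumReadSketch
{
    record ReadCommand(String key, long maxPartitionRepairTime) {}
    record Result(String payload) {}

    interface Replica
    {
        CompletableFuture<Result> readRepaired(ReadCommand cmd);   // phase 1: repaired sstables only
        CompletableFuture<Result> readUnrepaired(ReadCommand cmd); // phase 2: un-repaired data only
    }

    static Result read(ReadCommand cmd, List<Replica> replicas)
    {
        // Phase 1: one replica answers from repaired sstables (pipelined with its
        // un-repaired read below, so it stays a single hop for that node).
        CompletableFuture<Result> repaired = replicas.get(0).readRepaired(cmd);

        // Phase 2: a quorum of replicas answers from un-repaired data, tombstones included.
        int quorum = replicas.size() / 2 + 1;
        List<CompletableFuture<Result>> unrepaired = new ArrayList<>();
        for (Replica replica : replicas.subList(0, quorum))
            unrepaired.add(replica.readUnrepaired(cmd));

        // Merge all responses; a real merge reconciles cells and tombstones by timestamp.
        Result merged = repaired.join();
        for (CompletableFuture<Result> response : unrepaired)
            merged = merge(merged, response.join());
        return merged;
    }

    static Result merge(Result a, Result b)
    {
        // Placeholder merge; see the tombstone reconciliation question above.
        return new Result(a.payload() + "+" + b.payload());
    }
}
{code}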



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
