cassandra-commits mailing list archives

From "T Jake Luciani (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CASSANDRA-7168) Add repair aware consistency levels
Date Tue, 11 Nov 2014 23:03:34 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-7168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207287#comment-14207287 ]

T Jake Luciani commented on CASSANDRA-7168:
-------------------------------------------

To re-summarize this ticket: the goal is to improve the performance of queries that require
consistency by using repaired data to cut the amount of remote data that must be checked at
quorum.  Initially, let's only attempt this optimization when the coordinator is a replica
for the partition.

I think the following would be a good way to start (a rough sketch follows the list):

  * Add a REPAIRED_QUORUM consistency level
  * Change StorageProxy.read to allow a special code path for REPAIRED_QUORUM that will:
  ** Identify the max repairedAt time for the sstables that cover the partition
  ** Pass the max repairedAt time to the ReadCommand and MessagingService
  ** Execute the repaired-only read locally
  ** Merge the results
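To make the coordinator side concrete, here is a minimal sketch, assuming hypothetical helpers
(coversKey, getRepairedAt) rather than whatever the real SSTableReader/StorageProxy surface
ends up being:

{code:java}
public enum ConsistencyLevel
{
    // ... existing levels ...
    ONE, QUORUM, ALL,
    REPAIRED_QUORUM  // new: read repaired data once locally, the rest at quorum
}

// Inside the REPAIRED_QUORUM branch of StorageProxy.read (illustrative only):
// find the newest repairedAt among local sstables that may cover the key.
long maxRepairedAt = 0;  // 0 == unrepaired sentinel (assumption)
for (SSTableReader sstable : cfs.getSSTables())
{
    if (sstable.isRepaired() && sstable.coversKey(key))  // coversKey is hypothetical
        maxRepairedAt = Math.max(maxRepairedAt, sstable.getRepairedAt());
}
// maxRepairedAt then rides along on the ReadCommand so every replica
// filters its sstables against the same cut-off.
{code}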
  
For the actual reads we will need to change the collation controller to take the max repairedAt
time and ignore repaired sstables with repairedAt > the passed value.  We will also need to
include tombstones in the non-repaired column family result, since they need to be merged
with the repaired result.
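A rough sketch of that collation-controller filter, again with hypothetical plumbing (the
real iterator wiring differs):

{code:java}
// Repaired-only read: keep repaired sstables at or below the coordinator's
// cut-off; anything unrepaired or repaired later is left to the quorum phase.
for (SSTableReader sstable : view.sstables)
{
    long repairedAt = sstable.getRepairedAt();  // hypothetical accessor
    if (repairedAt == 0 || repairedAt > maxRepairedAt)
        continue;  // unrepaired or too new: covered by the quorum read
    iterators.add(sstable.getScanner(filter));  // hypothetical wiring
}
// The quorum read keeps exactly the sstables skipped here and must retain
// tombstones so they can shadow repaired cells when the results are merged.
{code}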

> Add repair aware consistency levels
> -----------------------------------
>
>                 Key: CASSANDRA-7168
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-7168
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core
>            Reporter: T Jake Luciani
>              Labels: performance
>             Fix For: 3.0
>
>
> With CASSANDRA-5351 and CASSANDRA-2424 I think there is an opportunity to avoid a lot
> of extra disk I/O when running queries with higher consistency levels.
> Since repaired data is by definition consistent and we know which sstables are repaired,
> we can optimize the read path by having a REPAIRED_QUORUM which breaks reads into two phases:
>
>   1) Read from one replica the result from the repaired sstables.
>   2) Read from a quorum only the un-repaired data.
> For the node performing 1) we can pipeline the call so it's a single hop.
> In the long run (assuming data is repaired regularly) we will end up with much closer
> to CL.ONE performance while maintaining consistency.
> Some things to figure out:
>   - If repairs fail on some nodes we can have a situation where we don't have a consistent
>     repaired state across the replicas.
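For reference, the two-phase flow described in the ticket might look roughly like this end
to end (hypothetical helper names, not the actual StorageProxy code):

{code:java}
long maxRepairedAt = findMaxRepairedAt(cfs, key);  // hypothetical, local only

// Phase 1: repaired data is consistent by definition, so one local read
// over repaired sstables (repairedAt <= maxRepairedAt) is enough.
Row repaired = readLocalRepaired(command, maxRepairedAt);

// Phase 2: only the unrepaired remainder needs quorum resolution.
Row unrepaired = readQuorumUnrepaired(command, maxRepairedAt, endpoints);

// Merge, letting unrepaired tombstones shadow repaired cells.
Row result = merge(repaired, unrepaired);  // hypothetical merge
{code}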



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
