cassandra-commits mailing list archives

From "T Jake Luciani (JIRA)" <>
Subject [jira] [Commented] (CASSANDRA-7168) Add repair aware consistency levels
Date Tue, 11 Nov 2014 23:03:34 GMT


T Jake Luciani commented on CASSANDRA-7168:

To re-summarize this ticket: the goal is to improve the performance of queries that require consistency
by using repaired data to cut the amount of remote data that must be checked at quorum.  Initially, let's
only attempt this optimization when the coordinator is a replica for the partition.

I think the following would be a good way to start:

  * Add a special code path for REPAIRED_QUORUM that will:
  ** Identify the max repairedAt time for the sstables that cover the partition
  ** Pass that max repairedAt time to the ReadCommand and MessagingService
  ** Execute the repaired-only read locally
  ** Merge the results
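The steps above can be sketched roughly as follows. This is an illustrative sketch only; the class and method names (RepairedQuorumFlow, maxRepairedAt, merge, the SSTable stand-in) are hypothetical and not the actual Cassandra internals:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the proposed REPAIRED_QUORUM coordinator steps.
class RepairedQuorumFlow {
    // Stand-in for an sstable's repair metadata; 0 means unrepaired.
    static class SSTable {
        final long repairedAt;
        SSTable(long repairedAt) { this.repairedAt = repairedAt; }
    }

    // Step 1: identify the max repairedAt time for the sstables
    // that cover the partition.
    static long maxRepairedAt(List<SSTable> covering) {
        long max = 0;
        for (SSTable t : covering)
            max = Math.max(max, t.repairedAt);
        return max;
    }

    // Step 4: merge the locally executed repaired-only result with the
    // quorum result over un-repaired data (rows modeled as strings here;
    // the real merge would resolve cells by timestamp and tombstones).
    static List<String> merge(List<String> repairedRows, List<String> unrepairedRows) {
        List<String> merged = new ArrayList<>(repairedRows);
        merged.addAll(unrepairedRows);
        return merged;
    }
}
```

The max repairedAt would then travel with the ReadCommand so every replica filters against the same cutoff.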
For the actual reads we will need to change the collation controller to take the max repairedAt
time and ignore repaired sstables with repairedAt > the passed value.  We will
also need to include tombstones in the result of the un-repaired column family read, since
they need to be merged with the repaired result.
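The collation-controller filtering could look something like the sketch below. All names here (RepairedFilterSketch, repairedUpTo, unrepaired) are hypothetical, not the real Cassandra code:

```java
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch of splitting sstables by the passed max repairedAt.
class RepairedFilterSketch {
    // Stand-in for an sstable's repair metadata; 0 means unrepaired.
    static class SSTable {
        final long repairedAt;
        SSTable(long repairedAt) { this.repairedAt = repairedAt; }
        boolean isRepaired() { return repairedAt > 0; }
    }

    // Repaired-only read: keep sstables repaired at or before the cutoff;
    // skip anything repaired after it, since other replicas may not have
    // that repair session's data marked repaired yet.
    static List<SSTable> repairedUpTo(List<SSTable> all, long maxRepairedAt) {
        return all.stream()
                  .filter(t -> t.isRepaired() && t.repairedAt <= maxRepairedAt)
                  .collect(Collectors.toList());
    }

    // Un-repaired read: everything else. Its result must keep tombstones
    // so they can be merged with the repaired result.
    static List<SSTable> unrepaired(List<SSTable> all, long maxRepairedAt) {
        return all.stream()
                  .filter(t -> !t.isRepaired() || t.repairedAt > maxRepairedAt)
                  .collect(Collectors.toList());
    }
}
```

Every sstable lands in exactly one of the two sets, so merging the two reads covers all data without double-counting.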

> Add repair aware consistency levels
> -----------------------------------
>                 Key: CASSANDRA-7168
>                 URL:
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core
>            Reporter: T Jake Luciani
>              Labels: performance
>             Fix For: 3.0
> With CASSANDRA-5351 and CASSANDRA-2424 I think there is an opportunity to avoid a lot
> of extra disk I/O when running queries with higher consistency levels.
> Since repaired data is by definition consistent and we know which sstables are repaired,
> we can optimize the read path by having a REPAIRED_QUORUM which breaks reads into two phases:
>   1) Read from one replica the result from the repaired sstables.
>   2) Read from a quorum only the un-repaired data.
> For the node performing 1) we can pipeline the call so it's a single hop.
> In the long run (assuming data is repaired regularly) we will end up with much closer
> to CL.ONE performance while maintaining consistency.
> Some things to figure out:
>   - If repairs fail on some nodes we can have a situation where we don't have a consistent
> repaired state across the replicas.

This message was sent by Atlassian JIRA
