cassandra-commits mailing list archives

From "Rustam Aliyev (JIRA)" <>
Subject [jira] [Commented] (CASSANDRA-7642) Adaptive Consistency
Date Thu, 31 Jul 2014 04:11:39 GMT


Rustam Aliyev commented on CASSANDRA-7642:

Here's a better version of the use case which highlights the goal. The DC concept in C* is essentially
a Replication Group (RG), and I will use this term to draw a parallel between an RG and a simple
C* replica:

In the context of RGs, today C* allows only ONE (LOCAL_QUORUM) and ALL (EACH_QUORUM). This means that if
I want to achieve strong consistency across RGs, I can only do R+W as ONE+ALL or ALL+ONE.
This is consistent with the well-known R + W > N formula, but it isn't fault tolerant.

Today we don't support R+W as QUORUM+QUORUM in the context of RGs, which would guarantee strong
consistency and at the same time be fault tolerant. *This is important for applications
which are completely stateless across RGs.*

We could have some sort of QUORUM_QUORUM for that, where a QUORUM of RGs would be required to
respond. However, that requires at least 3 RGs, which can be expensive.

The main goal of AC is to solve this problem with 2 RGs.
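The R + W > N arithmetic above, applied at the RG level rather than the replica level, can be sketched as follows (a minimal illustration, not Cassandra code; the function names are mine):

```python
# Sketch: strong consistency and fault tolerance at the Replication
# Group level. N is the number of RGs; R and W count how many RGs
# must answer a read or write, respectively.

def is_strongly_consistent(r: int, w: int, n: int) -> bool:
    """Strong consistency requires read and write sets to overlap: R + W > N."""
    return r + w > n

def tolerates_rg_failure(r: int, w: int, n: int) -> bool:
    """Fault tolerant only if neither reads nor writes need every RG."""
    return r < n and w < n

# Today's only cross-RG options with 2 RGs: ONE+ALL or ALL+ONE.
# Strongly consistent, but not fault tolerant.
assert is_strongly_consistent(1, 2, 2) and not tolerates_rg_failure(1, 2, 2)

# QUORUM of RGs with 3 RGs: quorum is 2, so QUORUM+QUORUM is both
# consistent and fault tolerant -- but needs the third (expensive) RG.
q3 = 3 // 2 + 1  # quorum of 3 = 2
assert is_strongly_consistent(q3, q3, 3) and tolerates_rg_failure(q3, q3, 3)

# With only 2 RGs, quorum equals ALL, so fault tolerance is lost.
# This is the gap Adaptive Consistency aims to fill.
q2 = 2 // 2 + 1  # quorum of 2 = 2
assert is_strongly_consistent(q2, q2, 2) and not tolerates_rg_failure(q2, q2, 2)
```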

> Adaptive Consistency
> --------------------
>                 Key: CASSANDRA-7642
>                 URL:
>             Project: Cassandra
>          Issue Type: New Feature
>          Components: Core
>            Reporter: Rustam Aliyev
>             Fix For: 3.0
> h4. Problem
> At minimum, the application requires a consistency level of X, which must be a fault-tolerant
CL. However, when there is no failure it would be advantageous to use a stronger consistency
Y (Y>X).
> h4. Suggestion
> The application defines minimum (X) and maximum (Y) consistency levels. C* can apply adaptive
consistency logic to use Y whenever possible and downgrade to X when a failure occurs.
> The implementation should not negatively impact performance. Therefore, state has to be maintained
globally (not per request).
> h4. Example
> h4. Use Case
> Consider a case where a user wants to maximize both uptime and consistency. They design
a system using C* where transactions are read/written with LOCAL_QUORUM and distributed across
2 DCs. Occasional inconsistencies between DCs can be tolerated, and R/W with LOCAL_QUORUM is satisfactory
in most cases.
> The application requires new transactions to be readable right after they were generated.
Writes and reads may go through different DCs (no stickiness). When a user
writes into DC1 and immediately reads from DC2, replication delay may cause problems: the transaction
won't show up on the read in DC2, so the user will retry and create a duplicate transaction. Occasional
duplicates are fine, and the goal is to minimize the number of dups.
> Therefore, we want to perform writes with stronger consistency (EACH_QUORUM) whenever
possible, without compromising on availability. Using adaptive consistency, they should be able
to define:
>    {{Write CL = ADAPTIVE (MIN:LOCAL_QUORUM, MAX:EACH_QUORUM)}}
>    {{Read CL = LOCAL_QUORUM}}
> A similar scenario can be described for the {{Write CL = ADAPTIVE (MIN:QUORUM, MAX:ALL)}} case.
> h4. Criticism
> # This functionality can/should be implemented by users themselves.
> bq. It will be hard for an average user to implement topology monitoring and a state machine.
Moreover, this is a pattern which repeats.
> # Transparent downgrading violates the CL contract, and that contract is considered to be just
about the most important element of Cassandra's runtime behavior.
> bq. Fully transparent downgrading without any contract is dangerous. However, would it
be a problem if we specify explicitly only two discrete CL levels - MIN_CL and MAX_CL?
> # If you have split-brain DCs (partitioned in CAP terms), you have to sacrifice either consistency
or availability, and auto-downgrading sacrifices consistency in dangerous ways if the
application isn't designed to handle it. And if the application is designed to handle it,
then it should be able to handle it in normal circumstances, not just degraded/extraordinary ones.
> bq. Agreed. The application should be designed for MIN_CL. In that case, MAX_CL will not
cause much harm, only add flexibility.
> # It might be a better idea to downgrade loudly instead of silently, meaning
that the client code does an explicit retry with lower consistency on failure and takes some
other kind of action to attempt to inform either users or operators of the problem. It is the silent
part of the downgrading which could be dangerous.
> bq. There are certainly cases where the user should be informed when consistency changes
in order to perform a custom action. For this purpose we could allow/require the user to register a
callback function which will be triggered when the consistency level changes. Best practices could
be enforced by requiring the callback.
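The proposed behavior — globally maintained state, downgrade to MIN_CL on failure, recovery to MAX_CL when healthy, and a required callback to keep the downgrade "loud" — can be sketched as follows. This is a hypothetical illustration, not an existing C* or driver API; all names are mine:

```python
# Sketch of the proposed adaptive CL state machine (hypothetical API):
# starts optimistically at MAX_CL, downgrades globally (not per request)
# on failure, restores MAX_CL when the topology monitor reports healthy,
# and fires the registered callback on every change.

class AdaptiveConsistency:
    def __init__(self, min_cl: str, max_cl: str, on_change):
        self.min_cl = min_cl
        self.max_cl = max_cl
        self.current = max_cl          # optimistic: begin at MAX_CL
        self.on_change = on_change     # required callback -> "loud" changes

    def _set(self, cl: str, reason: str) -> None:
        # Only notify on an actual transition, so repeated failures
        # don't spam the callback.
        if cl != self.current:
            old, self.current = self.current, cl
            self.on_change(old, cl, reason)

    def record_failure(self) -> None:
        """A request at MAX_CL failed: downgrade the global state."""
        self._set(self.min_cl, "failure at MAX_CL")

    def record_recovery(self) -> None:
        """Topology monitor reports all RGs healthy: restore MAX_CL."""
        self._set(self.max_cl, "topology healthy")

events = []
ac = AdaptiveConsistency(
    min_cl="LOCAL_QUORUM",
    max_cl="EACH_QUORUM",
    on_change=lambda old, new, why: events.append((old, new, why)),
)

ac.record_failure()    # DC link goes down -> downgrade, callback fires
ac.record_failure()    # still down -> no duplicate notification
ac.record_recovery()   # link restored -> upgrade, callback fires
assert events == [
    ("EACH_QUORUM", "LOCAL_QUORUM", "failure at MAX_CL"),
    ("LOCAL_QUORUM", "EACH_QUORUM", "topology healthy"),
]
```

Keeping the state in one object shared by all requests matches the suggestion that state be maintained globally rather than per request, and the mandatory callback addresses criticism #4.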
