cassandra-commits mailing list archives

From "Aleksey Yeschenko (JIRA)" <>
Subject [jira] [Commented] (CASSANDRA-7642) Adaptive Consistency
Date Wed, 30 Jul 2014 16:22:42 GMT


Aleksey Yeschenko commented on CASSANDRA-7642:

For Unavailable, yeah, you have to retry. But it's not too much extra pressure - C* short-cuts
unavailable reqs pretty early in the process, so not much work gets double-done by C* itself.

Also, true that WTE does not tell you which replicas accepted the write. It still lets you
do forms of 'adaptive CL' (like min: ONE, max: <any CL>, or min: QUORUM, max: ALL),
and pairs that don't cross global/local-DC CLs.

It's not perfect, but should be good enough for most uses of 'adaptive CL'.
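The client-side 'adaptive CL' described above can be sketched as a simple retry wrapper: attempt the write at the maximum CL, and on Unavailable/WriteTimeout retry at the minimum. This is a minimal illustration, not driver code - the exception classes, `session` object, and `adaptive_write` function here are hypothetical stand-ins, not the real driver API.

```python
class Unavailable(Exception):
    """Raised when too few replicas are alive to meet the requested CL."""

class WriteTimeout(Exception):
    """Raised when replicas did not acknowledge the write in time."""

def adaptive_write(session, statement, max_cl, min_cl):
    """Attempt the write at max_cl; fall back to min_cl on failure.

    Returns the consistency level the write actually succeeded at.
    """
    try:
        session.execute(statement, consistency_level=max_cl)
        return max_cl
    except (Unavailable, WriteTimeout):
        # Unavailable is rejected early by the coordinator, so the retry adds
        # little extra load; WriteTimeout may mean the write already landed on
        # some replicas, so min_cl should be a level the application tolerates.
        session.execute(statement, consistency_level=min_cl)
        return min_cl
```

Note the asymmetry Aleksey points out: on WTE the write may already be partially applied, so the pair (min, max) only makes sense if the application is designed to live with min.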

Now, this only matters if we assume that the feature itself is not stupid AND is not too niche
for inclusion in C*. I claim that it's both stupid and too niche. Besides, to some users our
existing CLs are already complicated enough - we don't need to complicate them further
without a good reason. This is not a good reason.

> Adaptive Consistency
> --------------------
>                 Key: CASSANDRA-7642
>                 URL:
>             Project: Cassandra
>          Issue Type: New Feature
>          Components: Core
>            Reporter: Rustam Aliyev
>             Fix For: 3.0
> h4. Problem
> At minimum, the application requires a consistency level of X, which must be a fault-tolerant CL. However, when there is no failure it would be advantageous to use a stronger consistency level Y (Y>X).
> h4. Suggestion
> Application defines minimum (X) and maximum (Y) consistency levels. C* can apply adaptive
consistency logic to use Y whenever possible and downgrade to X when failure occurs.
> Implementation should not negatively impact performance. Therefore, state has to be maintained
globally (not per request).
> h4. Example
> h4. Use Case
> Consider a case where a user wants to maximize uptime and consistency. They are designing a system using C* where transactions are read/written with LOCAL_QUORUM and distributed across 2 DCs. Occasional inconsistencies between DCs can be tolerated; R/W with LOCAL_QUORUM is satisfactory in most cases.
> The application requires new transactions to be readable right after they are generated. Writes and reads may go through different DCs (no stickiness). When a user writes to DC1 and immediately reads from DC2, replication delay may cause problems: the transaction won't show up on the read in DC2, so the user will retry and create a duplicate transaction. Occasional duplicates are fine; the goal is to minimize the number of dups.
> Therefore, we want to perform writes with stronger consistency (EACH_QUORUM) whenever
possible without compromising on availability. Using adaptive consistency they should be able
to define:
>    {{Write CL = ADAPTIVE (MIN:LOCAL_QUORUM, MAX:EACH_QUORUM)}}
>    {{Read CL = LOCAL_QUORUM}}
> Similar scenario can be described for {{Write CL = ADAPTIVE (MIN:QUORUM, MAX:ALL)}} case.
> h4. Criticism
> # This functionality can/should be implemented by users themselves.
> bq. It will be hard for an average user to implement topology monitoring and a state machine. Moreover, this is a pattern which repeats.
> # Transparent downgrading violates the CL contract, and that contract is considered just about the most important element of Cassandra's runtime behavior.
> bq. Fully transparent downgrading without any contract is dangerous. However, would it be a problem if we specify explicitly only two discrete CL levels - MIN_CL and MAX_CL?
> # If you have split-brain DCs (partitioned in the CAP sense), you have to sacrifice either consistency or availability, and auto-downgrading sacrifices consistency in dangerous ways if the application isn't designed to handle it. And if the application is designed to handle it, then it should be able to handle it in normal circumstances, not just degraded/extraordinary ones.
> bq. Agreed. The application should be designed for MIN_CL. In that case, MAX_CL will not cause much harm, only add flexibility.
> # It might be a better idea to downgrade loudly instead of silently, meaning that the client code does an explicit retry with lower consistency on failure and takes some other kind of action to attempt to inform either users or operators of the problem. It's the silent part of the downgrading that could be dangerous.
> bq. There are certainly cases where the user should be informed when consistency changes in order to perform a custom action. For this purpose we could allow/require the user to register a callback function which will be triggered when the consistency level changes. Best practices could be enforced by requiring the callback.
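The ticket's two mechanisms - globally maintained downgrade state (not per request) and a required callback on level change - can be sketched together. This is a hypothetical illustration of the proposal, not anything in C*; the class, cool-down probe, and callback signature are all assumptions made for the sketch.

```python
import time

class AdaptiveConsistency:
    """Global downgrade state as the ticket suggests: serve MAX_CL while
    healthy, drop to MIN_CL after a failure, probe MAX_CL again after a
    cool-down, and fire a required callback on every level change."""

    def __init__(self, min_cl, max_cl, on_change, cooldown_s=30.0,
                 clock=time.monotonic):
        if not callable(on_change):
            raise TypeError("a change callback is required")
        self.min_cl = min_cl
        self.max_cl = max_cl
        self.on_change = on_change
        self.cooldown_s = cooldown_s
        self.clock = clock
        self._level = max_cl
        self._downgraded_at = None   # None => currently at max_cl

    def _set(self, new_level):
        if new_level != self._level:
            old, self._level = self._level, new_level
            self.on_change(old, new_level)   # e.g. log or alert operators

    def current(self):
        """CL to use for the next request; probes max_cl after the cool-down."""
        if (self._downgraded_at is not None
                and self.clock() - self._downgraded_at >= self.cooldown_s):
            self._downgraded_at = None
            self._set(self.max_cl)
        return self._level

    def record_failure(self):
        """Call on Unavailable/WriteTimeout at max_cl to downgrade globally."""
        self._downgraded_at = self.clock()
        self._set(self.min_cl)
```

Because the state is shared rather than per request, a single failure downgrades all subsequent requests at once, which is what keeps the happy path free of extra work.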

This message was sent by Atlassian JIRA
