From "Rustam Aliyev (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CASSANDRA-7642) Adaptive Consistency
Date Wed, 30 Jul 2014 21:56:39 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-7642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14080042#comment-14080042 ]

Rustam Aliyev commented on CASSANDRA-7642:
------------------------------------------

Many interesting points and ideas, but it sounds like the discussion is getting a bit too broad.
Just to reinforce the main point:

bq. Writes - "how many nodes do I wait to hear back from?"

If possible, EACH_Q (99.9% of the time), but the app is designed to work with LOCAL_Q (the
remaining 0.1%). For details, see the use case above (it's a simplified version of the real use case).

It is in fact a niche use case and, from what I can see, it would mostly be useful in multi-DC setups.
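
To make the main point concrete, this is roughly the fallback logic that the proposal would
automate. A minimal client-side sketch only, assuming the DataStax Java driver; the method name
{{writeAdaptively}} is illustrative and not part of any existing API:

{code:java}
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;
import com.datastax.driver.core.exceptions.UnavailableException;

public class AdaptiveWriteSketch {
    // Try MAX_CL first, fall back to MIN_CL when not enough replicas are reachable.
    static void writeAdaptively(Session session, String cql) {
        Statement stmt = new SimpleStatement(cql);
        try {
            // MAX_CL = EACH_QUORUM: preferred while both DCs are healthy (the 99.9% case).
            stmt.setConsistencyLevel(ConsistencyLevel.EACH_QUORUM);
            session.execute(stmt);
        } catch (UnavailableException e) {
            // MIN_CL = LOCAL_QUORUM: the level the app is designed to tolerate (the 0.1% case).
            stmt.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
            session.execute(stmt);
        }
    }
}
{code}

The point of the ticket is that C* would keep this MAX_CL/MIN_CL decision as global state (driven
by topology) rather than every client paying for a failed attempt per request.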

> Adaptive Consistency
> --------------------
>
>                 Key: CASSANDRA-7642
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-7642
>             Project: Cassandra
>          Issue Type: New Feature
>          Components: Core
>            Reporter: Rustam Aliyev
>             Fix For: 3.0
>
>
> h4. Problem
> At minimum, the application requires a consistency level of X, which must be a fault-tolerant
> CL. However, when there is no failure, it would be advantageous to use a stronger consistency
> level Y (Y>X).
> h4. Suggestion
> The application defines minimum (X) and maximum (Y) consistency levels. C* can apply adaptive
> consistency logic to use Y whenever possible and downgrade to X when a failure occurs.
> The implementation should not negatively impact performance; therefore, state has to be
> maintained globally (not per request).
> h4. Example
> {{MIN_CL=LOCAL_QUORUM}}
> {{MAX_CL=EACH_QUORUM}}
> h4. Use Case
> Consider a case where a user wants to maximize both uptime and consistency. They are designing
> a system using C* where transactions are read/written with LOCAL_QUORUM and distributed across
> 2 DCs. Occasional inconsistencies between DCs can be tolerated, and R/W with LOCAL_QUORUM is
> satisfactory in most cases.
> The application requires new transactions to be readable right after they are generated.
> Writes and reads may go through different DCs (no stickiness). In some cases, when a user
> writes into DC1 and reads immediately from DC2, replication delay may cause problems: the
> transaction won't show up on the read in DC2, so the user will retry and create a duplicate
> transaction. Occasional duplicates are fine, and the goal is to minimize the number of dups.
> Therefore, we want to perform writes with stronger consistency (EACH_QUORUM) whenever possible,
> without compromising availability. Using adaptive consistency, they should be able to define:
>    {{Read CL = LOCAL_QUORUM}}
>    {{Write CL = ADAPTIVE (MIN:LOCAL_QUORUM, MAX:EACH_QUORUM)}}
> A similar scenario can be described for the {{Write CL = ADAPTIVE (MIN:QUORUM, MAX:ALL)}} case.
> h4. Criticism
> # This functionality can/should be implemented by the user themselves.
> bq. It will be hard for an average user to implement topology monitoring and a state machine.
> Moreover, this is a pattern that repeats.
> # Transparent downgrading violates the CL contract, and that contract is considered to be just
> about the most important element of Cassandra's runtime behavior.
> bq. Fully transparent downgrading without any contract is dangerous. However, would it be a
> problem if we explicitly specify only two discrete CL levels - MIN_CL and MAX_CL?
> # If you have split-brain DCs (partitioned in the CAP sense), you have to sacrifice either
> consistency or availability, and auto-downgrading sacrifices consistency in dangerous ways if
> the application isn't designed to handle it. And if the application is designed to handle it,
> then it should be able to handle it in normal circumstances, not just degraded/extraordinary ones.
> bq. Agreed. The application should be designed for MIN_CL. In that case, MAX_CL will not cause
> much harm, only add flexibility.
> # It might be a better idea to downgrade loudly instead of silently, meaning that the client
> code does an explicit retry with lower consistency on failure and takes some other kind of
> action to attempt to inform either users or operators of the problem. It is the silent part of
> the downgrading that could be dangerous.
> bq. There are certainly cases where the user should be informed when consistency changes in
> order to perform a custom action. For this purpose, we could allow/require the user to register
> a callback function which will be triggered when the consistency level changes (a rough sketch
> follows below). Best practices could be enforced by requiring the callback.
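
Regarding the callback idea in criticism #4: below is a purely hypothetical sketch of what such a
registration point could look like. {{ConsistencyChangeListener}} and {{onConsistencyChange}} do
not exist in Cassandra or any driver today; the shape is only meant to frame the discussion.

{code:java}
import com.datastax.driver.core.ConsistencyLevel;

// Hypothetical interface only - nothing like this exists yet.
public interface ConsistencyChangeListener {
    // Would be invoked whenever the effective level moves between MAX_CL and MIN_CL,
    // so that downgrades are loud (logged/alerted) rather than silent.
    void onConsistencyChange(ConsistencyLevel previous, ConsistencyLevel current);
}
{code}

Requiring such a listener wherever an ADAPTIVE level is configured would address the "silent
downgrade" concern while keeping the MIN_CL/MAX_CL contract explicit.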



--
This message was sent by Atlassian JIRA
(v6.2#6252)
