cassandra-commits mailing list archives

From "Kurt Greaves (JIRA)" <>
Subject [jira] [Commented] (CASSANDRA-13442) Support a means of strongly consistent highly available replication with tunable storage requirements
Date Tue, 10 Oct 2017 02:54:02 GMT


Kurt Greaves commented on CASSANDRA-13442:

Yeah OK, I'm convinced (if it can be proven, obviously); however, let's not make the claims
around it incredibly misleading.

bq. 10-20x on transient replicas. Not at full replicas or overall.
Saying 10-20x is really misleading. No one is actually going to see a 10-20x improvement
in overall disk usage. Even a reduction of 1/3 would be optimistic, I'm sure.
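
To put rough numbers on it (my own illustrative figures, assuming a transient replica only keeps the unrepaired fraction of the ranges it covers):

{code:java}
// Back-of-the-envelope estimate, assuming (hypothetically) that a transient
// replica only retains the unrepaired fraction of the data it covers.
// The fractions below are illustrative, not measurements.
public final class TransientStorageEstimate
{
    // Disk usage relative to keeping (fullReplicas + transientReplicas) full copies.
    static double relativeUsage(int fullReplicas, int transientReplicas, double unrepairedFraction)
    {
        double copies = fullReplicas + transientReplicas * unrepairedFraction;
        return copies / (fullReplicas + transientReplicas);
    }

    public static void main(String[] args)
    {
        // RF=3 with one transient replica and ~10% of data unrepaired at any time:
        // (2 + 0.1) / 3 ~= 0.70
        System.out.println(relativeUsage(2, 1, 0.10));
    }
}
{code}

Even in that fairly generous scenario the overall saving is roughly 30%, nothing like 10-20x.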

bq. With vnodes data would be spread out over several nodes so the additional utilization
at each node could be substantially less.
Let's not pretend people running vnodes can actually run repairs.

bq. Some of it might end up being part of overlapping functionality. I can hope.
Not sure if there is a ticket for it but I've been meaning to create one which would probably
benefit from this change. Need a way to change RF without downtime and without costing a fortune
(DC migration). I can see ways in which transient replicas would give this functionality,
as will need some way to change RF on the fly and not cause nodes to be responsible for data
they don't yet have.

If you could add a replica as transient at any time, this would almost solve the RF change
problem, assuming you had some way to transition between transient and full replicas.

> Support a means of strongly consistent highly available replication with tunable storage requirements
> -----------------------------------------------------------------------------------------------------
>                 Key: CASSANDRA-13442
>                 URL:
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Compaction, Coordination, Distributed Metadata, Local Write-Read
>            Reporter: Ariel Weisberg
> Replication factors like RF=2 can't provide strong consistency and availability because
if a single node is lost it's impossible to reach a quorum of replicas. Stepping up to RF=3
will allow you to lose a node and still achieve quorum for reads and writes, but requires
committing additional storage.
> The requirement of a quorum for writes/reads doesn't seem to be something that can be
relaxed without additional constraints on queries, but it seems like it should be possible
to relax the requirement that 3 full copies of the entire data set are kept. What is actually
required is a covering data set for the range and we should be able to achieve a covering
data set and high availability without having three full copies. 
> After a repair we know that some subset of the data set is fully replicated. At that
point we don't have to read from a quorum of nodes for the repaired data. It is sufficient
to read from a single node for the repaired data and a quorum of nodes for the unrepaired data.
> One way to exploit this would be to have N replicas, say the last N replicas (where N
varies with RF) in the preference list, delete all repaired data after a repair completes.
Subsequent quorum reads will be able to retrieve the repaired data from either of the two full
replicas and the unrepaired data from a quorum read of any replica, including the "transient" replicas.
> Configuration for something like this in NTS might be something similar to { DC1="3-1",
DC2="3-2" } where the first value is the replication factor used for consistency and the second
value is the number of transient replicas. If you specify { DC1=3, DC2=3 } then the number
of transient replicas defaults to 0 and you get the same behavior you have today.
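
For reference, a minimal sketch of the quorum arithmetic behind the RF=2 vs RF=3 point at the top of the description (class and method names are mine, not from the code base):

{code:java}
// Quorum size and availability after losing replicas.
public final class QuorumMath
{
    static int quorum(int replicationFactor)
    {
        return replicationFactor / 2 + 1;
    }

    static boolean quorumReachable(int replicationFactor, int downReplicas)
    {
        return replicationFactor - downReplicas >= quorum(replicationFactor);
    }

    public static void main(String[] args)
    {
        System.out.println(quorumReachable(2, 1)); // false: RF=2 cannot lose a node
        System.out.println(quorumReachable(3, 1)); // true:  RF=3 tolerates one loss
    }
}
{code}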

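And a sketch of how the proposed "3-1" style setting and the repaired/unrepaired read rule could fit together. The parsing and the read-target selection here are purely hypothetical illustrations of the proposal, not an existing API:

{code:java}
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration of the proposal: parse a "full-transient" RF spec and
// pick read targets depending on whether the requested data is known to be repaired.
public final class TransientReplicationSketch
{
    final int fullReplicas;
    final int transientReplicas;

    TransientReplicationSketch(String rfSpec)
    {
        // "3-1" -> 3 replicas counted for consistency, 1 of them transient, 2 full.
        String[] parts = rfSpec.split("-");
        int total = Integer.parseInt(parts[0]);
        this.transientReplicas = parts.length > 1 ? Integer.parseInt(parts[1]) : 0;
        this.fullReplicas = total - transientReplicas;
    }

    // Replicas that must be contacted for a strongly consistent read.
    List<String> readTargets(List<String> preferenceList, boolean dataIsRepaired)
    {
        List<String> targets = new ArrayList<>();
        if (dataIsRepaired)
        {
            // Repaired data is fully replicated on the full replicas; one of them is enough.
            targets.add(preferenceList.get(0));
        }
        else
        {
            // Unrepaired data still needs a quorum, and transient replicas count here.
            int quorum = (fullReplicas + transientReplicas) / 2 + 1;
            targets.addAll(preferenceList.subList(0, quorum));
        }
        return targets;
    }

    public static void main(String[] args)
    {
        TransientReplicationSketch dc1 = new TransientReplicationSketch("3-1");
        List<String> prefs = List.of("full1", "full2", "transient1");
        System.out.println(dc1.readTargets(prefs, true));  // [full1]
        System.out.println(dc1.readTargets(prefs, false)); // [full1, full2]
    }
}
{code}

The second branch is the interesting part: transient replicas only ever need to serve unrepaired data, which is what lets them drop repaired data entirely.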