Date: Tue, 18 Apr 2017 03:13:41 +0000 (UTC)
From: "sankalp kohli (JIRA)"
To: commits@cassandra.apache.org
Reply-To: dev@cassandra.apache.org
Subject: [jira] [Commented] (CASSANDRA-13442) Support a means of strongly consistent highly available replication with storage requirements approximating RF=2

[ https://issues.apache.org/jira/browse/CASSANDRA-13442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15972055#comment-15972055 ]

sankalp kohli commented on CASSANDRA-13442:
-------------------------------------------

[~tjake] We need to make changes to bootstrap and other operations so that they are aware of this. It will also involve changes in the read resolver so that it knows which nodes are full replicas and which are partial replicas, and can make the best use of that distinction.
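The resolver change kohli alludes to could be sketched roughly as below. This is a hypothetical illustration only, not Cassandra's actual read path: the `Replica` type, the `full` flag, and `replicas_for_read` are invented names, assuming the scheme from the ticket in which repaired data needs only one full replica while unrepaired data still needs a quorum of any replicas.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Replica:
    host: str
    # Full replicas hold repaired and unrepaired data;
    # transient ("partial") replicas hold only unrepaired data.
    full: bool

def replicas_for_read(replicas, quorum):
    """Pick read sources: repaired data can come from any single full
    replica, while unrepaired data must still be read from a quorum
    of replicas (full or transient)."""
    full_replicas = [r for r in replicas if r.full]
    if not full_replicas:
        raise ValueError("repaired data requires at least one full replica")
    repaired_source = full_replicas[0]
    if len(replicas) < quorum:
        raise ValueError("not enough live replicas for a quorum")
    unrepaired_sources = replicas[:quorum]
    return repaired_source, unrepaired_sources

# Two full replicas plus one transient replica, quorum of 2.
nodes = [Replica("a", True), Replica("b", True), Replica("c", False)]
repaired, unrepaired = replicas_for_read(nodes, quorum=2)
```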
> Support a means of strongly consistent highly available replication with storage requirements approximating RF=2
> ----------------------------------------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-13442
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-13442
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Compaction, Coordination, Distributed Metadata, Local Write-Read Paths
>            Reporter: Ariel Weisberg
>
> Replication factors like RF=2 can't provide both strong consistency and availability, because if a single node is lost it is impossible to reach a quorum of replicas. Stepping up to RF=3 allows you to lose a node and still achieve quorum for reads and writes, but requires committing additional storage.
>
> The requirement of a quorum for writes and reads doesn't seem to be something that can be relaxed without additional constraints on queries, but it should be possible to relax the requirement that three full copies of the entire data set are kept. What is actually required is a covering data set for the range, and we should be able to achieve a covering data set and high availability without keeping three full copies.
>
> After a repair we know that some subset of the data set is fully replicated. At that point we no longer have to read from a quorum of nodes for the repaired data: it is sufficient to read from a single node for the repaired data and from a quorum of nodes for the unrepaired data.
>
> One way to exploit this would be to have N replicas, say the last N in the preference list (where N varies with RF), delete all of their repaired data after a repair completes. Subsequent quorum reads would then retrieve the repaired data from either of the two full replicas and the unrepaired data from a quorum read of any replicas, including the "transient" ones.

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
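The availability argument in the description reduces to simple quorum arithmetic: a quorum is a strict majority of replicas, so losing one node breaks quorum at RF=2 but not at RF=3. A minimal sketch of that reasoning (the helper names here are illustrative, not from Cassandra):

```python
def quorum(rf: int) -> int:
    # A quorum is a strict majority of the replication factor.
    return rf // 2 + 1

def tolerates_one_failure(rf: int) -> bool:
    # After one node is lost, rf - 1 replicas remain;
    # availability requires that a quorum still fits among them.
    return rf - 1 >= quorum(rf)

# RF=2: quorum is 2, so losing one node leaves 1 replica -> unavailable.
# RF=3: quorum is 2, so losing one node leaves 2 replicas -> still available,
# which is why the ticket wants RF=3 availability at roughly RF=2 storage.
```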