From: Aaron Morton
Subject: Re: Handling quorum writes fails
Date: Tue, 13 Aug 2013 09:22:02 +1200
To: Cassandra User <user@cassandra.apache.org>

> So when I perform a QUORUM write on a cluster with RF=3 and one node fails, I will get a write error status and one successful write on another node.

If you lose one node during or before a write at QUORUM with RF 3, the write will succeed without any error to the client: QUORUM only needs floor(RF/2) + 1 = 2 of the 3 replicas to acknowledge the write.
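A minimal sketch of a QUORUM write from cqlsh (the keyspace and table names are hypothetical; a table sketch is further down):

    cqlsh> CONSISTENCY QUORUM;
    cqlsh> INSERT INTO transfer_log (account, tx_id, amount) VALUES ('acct_a', 'tx_0001', 100.00);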

> 1. The write will be propagated to other nodes when they come back online;

Yes, this is hinted handoff.
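As a sketch, the relevant knobs in cassandra.yaml (values shown are the 1.2 defaults; check your own config). Note that hints are only kept for max_hint_window_in_ms, so a node that is down longer than that window needs a repair to catch up:

    hinted_handoff_enabled: true
    # hints are only stored for this long (3 hours); longer outages need repair
    max_hint_window_in_ms: 10800000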

> 2. The write can be completely lost if the node that accepted it breaks completely before propagating it.
Yes, *if* atomic batches are not used: http://www.datastax.com/dev/blog/atomic-batches-in-cassandra-1-2

They are used by default for CQL 3 batches and require a separate call over the Thrift API.
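For example, a CQL 3 batch is logged (atomic) by default; a hypothetical transfer written as one atomic unit might look like:

    BEGIN BATCH
      INSERT INTO transfer_log (account, tx_id, amount) VALUES ('acct_a', 'tx_0001', -100.00);
      INSERT INTO transfer_log (account, tx_id, amount) VALUES ('acct_b', 'tx_0001', 100.00);
    APPLY BATCH;

(BEGIN UNLOGGED BATCH opts out of the atomicity guarantee.)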

> What is the best way to deal with this kind of failure in, let's say, hypothetical funds transfer logging?

You should watch / read the talk from Matt Dennis on data modelling here: http://www.datastax.com/company/news-and-events/events/cassandrasummit2012/presentations
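As a rough sketch of one common pattern (my example, not necessarily Matt's): model the transfer log as append-only rows keyed by a client-generated transaction id, so a timed-out QUORUM write can be retried with the same id and the retry stays idempotent:

    CREATE TABLE transfer_log (
        account text,
        tx_id   text,     -- client-generated id, so retries are idempotent
        amount  decimal,
        PRIMARY KEY (account, tx_id)
    );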

Cheers


-----------------
Aaron Morton
Cassandra = Consultant
New = Zealand

@aaronmorton
http://www.thelastpickle.com

On 12/08/2013, at 12:27 AM, Arthur Zubarev <arthur.zubarev@aol.com> wrote:

> Hello Mikhail,
>
> Bullet 1 implies consistency, but at a later time. And you don't lose the transaction. By the way, RF 3 to support financials is too low.
>
> #2: if the entire disk (that had no parity) fails you lose this write, but the 3rd node would have the write.
>
> Again, having a greater RF is the key.
>
> Regards,
>
> Arthur



> -------- Original message --------
> From: Mikhail Tsaplin <tsmisher@gmail.com>
> Date: 08/10/2013 12:13 PM (GMT-05:00)
> To: user@cassandra.apache.org
> Subject: Handling quorum writes fails


> Hi.

> According to the Datastax documentation on atomicity in Cassandra, a QUORUM write that succeeded on only one node will not be rolled back (see the Atomicity chapter here: http://www.datastax.com/documentation/cassandra/1.2/webhelp/index.html#cassandra/dml/dml_about_transactions_c.html). So when I perform a QUORUM write on a cluster with RF=3 and one node fails, I will get a write error status and one successful write on another node. This produces two cases:

> 1. the write will be propagated to other nodes when they come back online;
> 2. the write can be completely lost if the node that accepted it breaks completely before propagating it.

> What is the best way to deal with this kind of failure in, let's say, hypothetical funds transfer logging?

