cassandra-user mailing list archives

From Jacques-Henri Berthemet <>
Subject RE: How many nodes do we require
Date Thu, 31 Mar 2016 16:04:26 GMT
You’re right. I meant data integrity, and I understand it’s not everybody’s priority!

Jacques-Henri Berthemet

From: Jonathan Haddad []
Sent: jeudi 31 mars 2016 17:48
Subject: Re: How many nodes do we require

Losing a write is very different from having a fragile cluster.  A fragile cluster implies
that the whole thing will fall apart, that it breaks easily.  Writing at CL=ONE gives you a pretty
damn stable cluster at the potential risk of losing a write that hasn't replicated (but has
been ack'ed) which for a lot of people is preferable to downtime.  CL=ONE gives you the *most
stable* cluster you can have.
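The trade-off Jonathan describes can be reduced to arithmetic. A minimal sketch in plain Python (not driver code; the function names are mine, and only the standard ONE/QUORUM/ALL levels are modeled):

```python
# Illustrative sketch of Cassandra write availability per consistency level.
# replicas_required() follows the standard definitions: ONE needs 1 ack,
# QUORUM needs floor(RF/2)+1, ALL needs every replica.
from math import floor

def replicas_required(cl: str, rf: int) -> int:
    """Live replicas the coordinator must hear from to ack a write."""
    if cl == "ONE":
        return 1
    if cl == "QUORUM":
        return floor(rf / 2) + 1
    if cl == "ALL":
        return rf
    raise ValueError(f"unknown consistency level: {cl}")

def write_available(cl: str, rf: int, live_replicas: int) -> bool:
    return live_replicas >= replicas_required(cl, rf)

# With RF=3, CL=ONE still accepts writes with a single replica up...
assert write_available("ONE", 3, 1)
# ...while CL=QUORUM needs 2 of 3, so two node losses block writes:
assert not write_available("QUORUM", 3, 1)
```

The durability risk is the flip side of the same arithmetic: a write acked by one replica exists on only that replica until replication catches up, so losing that node loses the write.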
On Tue, Mar 29, 2016 at 12:57 AM Jacques-Henri Berthemet <<>>
wrote:
Because if you lose a node, you risk losing some data forever if it was not yet replicated.

Jacques-Henri Berthemet

From: Jonathan Haddad [<>]
Sent: vendredi 25 mars 2016 19:37

Subject: Re: How many nodes do we require

Why would using CL=ONE make your cluster fragile? This isn't obvious to me. It's the most
practical setting for high availability, which very much says "not fragile".
On Fri, Mar 25, 2016 at 10:44 AM Jacques-Henri Berthemet <<>>
wrote:
I found this calculator very convenient:

Regardless of your other DCs you need RF=3 if you write at LOCAL_QUORUM, RF=2 if you write/read
at ONE.
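The RF figures above follow from quorum arithmetic. A quick sketch (plain Python, helper names are mine) of why RF=3 is the floor for LOCAL_QUORUM and RF=2 the floor for ONE, if you want to tolerate one node down:

```python
# A local quorum is a majority of the replicas in one data center.
from math import floor

def local_quorum(rf_local: int) -> int:
    return floor(rf_local / 2) + 1

def tolerated_failures(rf: int, required_acks: int) -> int:
    """Replicas that can be down while writes still succeed."""
    return rf - required_acks

# RF=3 at LOCAL_QUORUM: quorum is 2, so one node per DC may be down.
assert local_quorum(3) == 2
assert tolerated_failures(3, local_quorum(3)) == 1
# RF=2 at ONE: a single ack suffices, so one node may be down.
assert tolerated_failures(2, 1) == 1
```

Note that RF=2 at LOCAL_QUORUM tolerates zero failures (quorum of 2 is 2), which is why the jump to RF=3 matters for quorum reads/writes.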

Obviously using ONE as CL makes your cluster very fragile.
Jacques-Henri Berthemet

-----Original Message-----
From: Rakesh Kumar [<>]
Sent: vendredi 25 mars 2016 18:14
Subject: Re: How many nodes do we require

On Fri, Mar 25, 2016 at 11:45 AM, Jack Krupansky
<<>> wrote:
> It depends on how much data you have. A single node can store a lot of data,
> but the more data you have the longer a repair or node replacement will
> take. How long can you tolerate for a full repair or node replacement?

At this time, and for the foreseeable future, the size of our data will not be
significant, so we can safely disregard the above as a decision factor.

> Generally, RF=3 is both sufficient and recommended.

Are you referring to SimpleStrategy with RF=3
or NetworkTopologyStrategy with RF=3?

taken from:

"Three replicas in each data center: This configuration tolerates
either the failure of one node per replication group at a strong
consistency level of LOCAL_QUORUM or multiple node failures per data
center using consistency level ONE."

In our case, with only 3 nodes in each DC, wouldn't RF=3 effectively mean ALL?

I will state our requirement clearly:

If we go with six nodes (3 in each DC), we should be able to
write even with the loss of one DC plus the loss of one node in the
surviving DC. I am open to hearing what compromise we would have to make
on reads while a DC is down. For us, writes are critical, more than reads.

Maybe this is not possible with 6 nodes and requires more. Please advise.
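Taking the requirement literally, standard quorum arithmetic suggests 6 nodes can satisfy it for writes at LOCAL_QUORUM. A sketch (plain Python, variable names are mine; this checks the arithmetic only, not a real cluster):

```python
# Scenario from the thread: two DCs, RF=3 per DC, DC1 entirely down,
# plus one node lost in the surviving DC ("local" below).
from math import floor

def quorum(n: int) -> int:
    return floor(n / 2) + 1

rf_per_dc = 3
live_local = rf_per_dc - 1      # one node lost in the surviving DC -> 2
live_total = 0 + live_local     # the other DC contributes nothing

# LOCAL_QUORUM in the surviving DC needs quorum(3) == 2 live replicas:
assert live_local >= quorum(rf_per_dc)       # writes still succeed
# A cluster-wide QUORUM over total RF=6 would need 4, which fails:
assert live_total < quorum(2 * rf_per_dc)
```

So under these assumptions the write requirement holds at LOCAL_QUORUM (or ONE), while cluster-wide QUORUM and EACH_QUORUM would not survive the DC outage; reads at LOCAL_QUORUM in the surviving DC would likewise still work, at the cost of not seeing unreplicated writes from the lost DC.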