incubator-cassandra-user mailing list archives

From "Robert Hellmans" <robert.hellm...@aastra.com>
Subject Cluster temporarily split into segments
Date Fri, 24 Aug 2012 11:00:36 GMT
Hi !
 
I'm preparing the test below. I've found a lot of information about
dead-node replacement and adding extra nodes to increase capacity, but
didn't find anything about this segmentation issue. Can anyone
share experience/ideas?
 
 
Setup:
Cluster with 6 nodes {A,B,C,D,E,F}, RF=6, using CL=ONE (read) and
CL=ALL (write).
 
 
Suppose that connectivity breaks down (for whatever reason) causing two
isolated segments:
S1 = {A,B,C,D} and S2 = {E,F}.
 
Cluster connectivity anomalies will be detected by all nodes in this
setup, so clients in S1 and S2 can be advised
to change their CL strategy. It is extremely important that reads
continue to operate in both S1 and S2,
and I don't see any reason why they shouldn't. It is almost as
important that writes in each segment can continue, but
to be able to write at all, the CL strategy definitely needs to
change:
In S1, for instance, change to CL=QUORUM for both reads and writes.
In S2, change CL (write) to TWO/ONE/ANY; CL (read) may be changed to TWO.
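To make the reasoning behind those CL choices concrete, here is a minimal sketch (not driver code, just the replica arithmetic the argument relies on) checking which consistency levels each segment can still satisfy with RF=6:

```python
# Replica arithmetic for a partitioned cluster: with RF=6, a request at a
# given consistency level can only succeed if enough replicas are reachable
# from the coordinator's side of the partition.
RF = 6

def required_replicas(cl, rf=RF):
    """Number of replicas that must respond for a given consistency level."""
    levels = {
        "ANY": 1,               # writes only; even a stored hint counts
        "ONE": 1,
        "TWO": 2,
        "QUORUM": rf // 2 + 1,  # 4 when RF=6
        "ALL": rf,
    }
    return levels[cl]

def can_satisfy(cl, live_replicas):
    """Can a segment with `live_replicas` reachable nodes serve at `cl`?"""
    return live_replicas >= required_replicas(cl)

# S1 = {A,B,C,D} has 4 live replicas; S2 = {E,F} has 2.
for segment, live in (("S1", 4), ("S2", 2)):
    for cl in ("ONE", "TWO", "QUORUM", "ALL"):
        print(segment, cl, can_satisfy(cl, live))
```

This shows why the proposals above line up: QUORUM needs 4 of 6 replicas, so it still works in S1 but not in S2, which is why the S2 write CL has to drop to TWO/ONE/ANY.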
 
During the connectivity breakdown, clients in both S1 and S2
simultaneously change/add/delete data. 
 
 
 
So now to the interesting question: what happens when S1 and S2
reestablish full connectivity?
Again, the re-connectivity event will be detected, so should I trigger
some special repair sequence?
Or should I have taken some action already when the connectivity
broke?
What about the connectivity dropout time being longer/shorter than
max_hint_window?
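On that last point, a rough decision rule can be sketched as follows. This is an assumption-laden sketch: it takes the default max_hint_window_in_ms of 3 hours, and assumes the outage duration is measured by the application; the helper name is hypothetical.

```python
# If the partition lasted longer than max_hint_window_in_ms, nodes stop
# accumulating hints for the unreachable replicas, so hinted handoff alone
# cannot reconcile the two segments and an explicit anti-entropy repair
# (nodetool repair) is needed after reconnection. Since writes happened on
# both sides, a repair is the safe choice either way.
MAX_HINT_WINDOW_MS = 3 * 60 * 60 * 1000  # Cassandra default: 3 hours

def reconciliation_action(outage_ms, max_hint_window_ms=MAX_HINT_WINDOW_MS):
    if outage_ms <= max_hint_window_ms:
        # Hints were stored for the whole outage; handoff should replay
        # the missed writes once connectivity returns.
        return "hinted-handoff"
    # Hint window exceeded: some writes were never hinted on either side.
    return "full-repair"

print(reconciliation_action(10 * 60 * 1000))      # 10-minute outage -> hinted-handoff
print(reconciliation_action(5 * 60 * 60 * 1000))  # 5-hour outage -> full-repair
```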
 
 
 
 
Rgds /Robert
 
 
 
