cassandra-commits mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Cassandra Wiki] Update of "Operations" by JonathanEllis
Date Wed, 16 Jun 2010 15:09:43 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for change notification.

The "Operations" page has been changed by JonathanEllis.
The comment on this change is: r/m confusing paragraph about manually changing RS.
http://wiki.apache.org/cassandra/Operations?action=diff&rev1=52&rev2=53

--------------------------------------------------

  
   * The corollary to this is: if you want to start with a single DC and add another later, when you add the second DC you should add as many nodes as the first already has, rather than adding a node or two at a time.
  
- Replication strategy is not intended to be changed once live, but if you are sufficiently motivated it can be done with some manual effort:
+ Replication factor is not really intended to be changed in a live cluster either, but increasing it may be done if you (a) read at ConsistencyLevel.QUORUM or ALL (depending on your existing replication factor) to make sure that a replica that actually has the data is consulted, (b) are willing to accept downtime while anti-entropy repair runs (see below), or (c) are willing to live with some clients potentially being told no data exists if they read from the new replica location(s) until repair is done.
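The choice between QUORUM and ALL in (a) follows from simple counting: after raising the replication factor, up to `new_rf - old_rf` replicas may not yet hold the data, so a read must consult more replicas than that to be sure it touches one that does. A minimal Python sketch of that arithmetic (the function name is illustrative, not a Cassandra API):

```python
def read_level_for_rf_increase(old_rf, new_rf):
    """Return the weakest read ConsistencyLevel that still guarantees a
    replica holding the data is consulted while repair is pending.
    Illustrative helper only -- not part of Cassandra itself."""
    if not 1 <= old_rf < new_rf:
        raise ValueError("expected 1 <= old_rf < new_rf")
    quorum = new_rf // 2 + 1         # replicas consulted by a QUORUM read
    maybe_empty = new_rf - old_rf    # new replicas that may lack the data
    # QUORUM suffices only if it must touch at least one old replica.
    return "QUORUM" if quorum > maybe_empty else "ALL"

print(read_level_for_rf_increase(3, 5))  # QUORUM: 3 reads, only 2 possibly-empty replicas
print(read_level_for_rf_increase(1, 3))  # ALL: a 2-replica QUORUM could hit both new replicas
```

For example, going from RF 3 to 5, a QUORUM read touches 3 replicas while at most 2 are new, so QUORUM is enough; going from RF 1 to 3, a QUORUM read touches only 2 of 3 replicas and could miss the single old one, so ALL is required.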
  
+ The same options apply to changing replication strategy.
-  1. anticompact each node's primary Range, yielding sstables containing only that Range data
-  1. copy those sstables to the nodes responsible for extra replicas under the new strategy
-  1. change the strategy and restart
- 
- Replication factor is not really intended to be changed in a live cluster either, but increasing it may be done if you (a) use ConsistencyLevel.QUORUM or ALL (depending on your existing replication factor) to make sure that a replica that actually has the data is consulted, (b) are willing to accept downtime while anti-entropy repair runs (see below), or (c) are willing to live with some clients potentially being told no data exists if they read from the new replica location(s) until repair is done.
  
  Reducing replication factor is easily done and only requires running cleanup afterwards to remove extra replicas.
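Assuming the standard `nodetool` utility, the cleanup step after lowering the replication factor might look like this (host is a placeholder; this must run against a live node, so it is shown only as an operational fragment):

```shell
# Run on every node after reducing the replication factor; cleanup
# discards rows the node is no longer a replica for.
nodetool --host 127.0.0.1 cleanup
```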
  
