Same results.  I also restarted the node to see if it just wasn’t picking up the changes, and it still shows Simple.


When I specify the DC for strategy_options, I should be using the DC names from the PropertyFileSnitch, right?  Ours are “Fisher” and “TierPoint”, so that’s what I used.
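
For reference, here’s roughly what our topology file looks like — the DC name is the first field after each IP (IPs below are placeholders, not our real ones):

```properties
# cassandra-topology.properties (placeholder IPs)
# <node IP>=<DC name>:<rack>
192.168.1.10=Fisher:RAC1
192.168.1.11=Fisher:RAC1
10.20.1.10=TierPoint:RAC1
10.20.1.11=TierPoint:RAC1
# any node not listed above falls back to:
default=Fisher:RAC1
```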


From: Mohit Anchlia []
Sent: Monday, August 27, 2012 1:21 PM
Subject: Re: Expanding cluster to include a new DR datacenter


In your update command, is it possible to specify the RF for both DCs? You could just do DC1:2, DC2:0.
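
Something like this in cassandra-cli (a sketch using the DC names from this thread; untested):

```
[default@unknown] update keyspace EBonding
...     with placement_strategy = 'org.apache.cassandra.locator.NetworkTopologyStrategy'
...     and strategy_options = {Fisher : 2, TierPoint : 0};
```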

On Mon, Aug 27, 2012 at 11:16 AM, Bryce Godfrey <> wrote:

The show schema output still shows SimpleStrategy:

[default@unknown] show schema EBonding;

create keyspace EBonding

  with placement_strategy = 'SimpleStrategy'

  and strategy_options = {replication_factor : 2}

  and durable_writes = true;


This is the only thing I see in the system log at the time on all the nodes:


INFO [MigrationStage:1] 2012-08-27 10:54:18,608 (line 659) Enqueuing flush of Memtable-schema_keyspaces@1157216346(183/228 serialized/live bytes, 4 ops)

INFO [FlushWriter:765] 2012-08-27 10:54:18,612 (line 264) Writing Memtable-schema_keyspaces@1157216346(183/228 serialized/live bytes, 4 ops)

INFO [FlushWriter:765] 2012-08-27 10:54:18,627 (line 305) Completed flushing /opt/cassandra/data/system/schema_keyspaces/system-schema_keyspaces-he-34817-Data.db (241 bytes) for commitlog p$



Should I turn up the logging level on something to see more info, maybe?
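
If it helps, I believe the levels in 1.1 live in conf/log4j-server.properties; e.g. turning the whole package up to DEBUG would be something like (a sketch, not verified against this cluster):

```properties
# conf/log4j-server.properties (Cassandra 1.1-era log4j config)
# Raise Cassandra's log level to DEBUG; revert after diagnosing,
# since this is very verbose.
log4j.logger.org.apache.cassandra=DEBUG
```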


From: aaron morton []
Sent: Monday, August 27, 2012 1:35 AM

Subject: Re: Expanding cluster to include a new DR datacenter


I did a quick test on a clean 1.1.4 and it worked.


Can you check the logs for errors? Can you see your schema change in there?


Also, what is the output from show schema; in the cli?





Aaron Morton

Freelance Developer



On 25/08/2012, at 6:53 PM, Bryce Godfrey <> wrote:




[default@unknown] describe cluster;

Cluster Information:

   Snitch: org.apache.cassandra.locator.PropertyFileSnitch

   Partitioner: org.apache.cassandra.dht.RandomPartitioner

   Schema versions:

        9511e292-f1b6-3f78-b781-4c90aeb6b0f6: [,,,,]


From: Mohit Anchlia []
Sent: Friday, August 24, 2012 1:55 PM
Subject: Re: Expanding cluster to include a new DR datacenter


That's interesting. Can you do describe cluster?

On Fri, Aug 24, 2012 at 12:11 PM, Bryce Godfrey <> wrote:

So I’m at the point of updating the keyspaces from Simple to NetworkTopology and I’m not sure if the changes are being accepted using Cassandra-cli.


I issue the change:


[default@EBonding] update keyspace EBonding

...     with placement_strategy = 'org.apache.cassandra.locator.NetworkTopologyStrategy'

...     and strategy_options={Fisher:2};


Waiting for schema agreement...

... schemas agree across the cluster


Then I do a describe and it still shows the old strategy.  Is there something else that I need to do?  I’ve exited and restarted Cassandra-cli and it still shows the SimpleStrategy for that keyspace.  Other nodes show the same information.


[default@EBonding] describe EBonding;

Keyspace: EBonding:

  Replication Strategy: org.apache.cassandra.locator.SimpleStrategy

  Durable Writes: true

    Options: [replication_factor:2]



From: Bryce Godfrey []
Sent: Thursday, August 23, 2012 11:06 AM
Subject: RE: Expanding cluster to include a new DR datacenter


Thanks for the information!  That answers my questions.


From: Tyler Hobbs []
Sent: Wednesday, August 22, 2012 7:10 PM
Subject: Re: Expanding cluster to include a new DR datacenter


If you didn't see this particular section, you may find it useful:

Some comments inline:

On Wed, Aug 22, 2012 at 3:43 PM, Bryce Godfrey <> wrote:

We are in the process of building out a new DR system in another data center, and we want to mirror our Cassandra environment to that DR.  I have a couple of questions on the best way to do this after reading the documentation on the Datastax website.  We didn’t initially plan for this to be a DR setup when we first deployed a while ago, due to budgeting, but now we need to.  So I’m just trying to nail down the order of doing this, as well as any potential issues.


For the nodes, we don’t plan on querying the servers in this DR until we fail over to this data center.   We are going to have 5 similar nodes in the DR; should I join them into the ring at token+1?

Join them at token+10 just to leave a little space.  Make sure you're using LOCAL_QUORUM for your queries instead of regular QUORUM.
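
The token+10 arithmetic can be sketched as follows, assuming RandomPartitioner (token range 0 to 2**127) and 5 nodes per DC as described in this thread:

```python
# Sketch: initial_token values for a second datacenter under RandomPartitioner.
# The primary DC gets evenly spaced tokens; the DR nodes take the same
# tokens offset by 10, so the two rings interleave without colliding.

NODES_PER_DC = 5
RING_SIZE = 2 ** 127   # RandomPartitioner token range
OFFSET = 10            # small gap, per the advice above

primary_tokens = [i * RING_SIZE // NODES_PER_DC for i in range(NODES_PER_DC)]
dr_tokens = [t + OFFSET for t in primary_tokens]

for p, d in zip(primary_tokens, dr_tokens):
    print(f"primary: {p}  dr: {d}")
```

Each DR node's initial_token in cassandra.yaml would then be its counterpart's token plus 10.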


All keyspaces are set to the replication strategy of SimpleStrategy.  Can I change the replication strategy to NetworkTopologyStrategy, with the updated replication factor for each DC, after joining the new nodes in the DR?

Switch your keyspaces over to NetworkTopologyStrategy before adding the new nodes.  For the strategy options, just list the first DC until the second is up (e.g. {main_dc: 3}).
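
A possible two-step sequence in cassandra-cli, sketched with this thread's DC names (Fisher as the existing DC, TierPoint as the DR; untested):

```
-- Step 1: before the TierPoint nodes join, list only the existing DC
update keyspace EBonding
  with placement_strategy = 'org.apache.cassandra.locator.NetworkTopologyStrategy'
  and strategy_options = {Fisher : 2};

-- Step 2: after the TierPoint nodes have joined, add the DR replicas
update keyspace EBonding
  with placement_strategy = 'org.apache.cassandra.locator.NetworkTopologyStrategy'
  and strategy_options = {Fisher : 2, TierPoint : 2};
```

Step 2 is typically followed by nodetool rebuild on each new node to stream the existing data over.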


Lastly, is changing the snitch from the default of SimpleSnitch to RackInferringSnitch going to cause any issues?  Since it’s in the cassandra.yaml file, I assume a rolling restart to pick up the value would be ok?

This is the first thing you'll want to do.  Unless your node IPs would naturally put all nodes in a DC in the same rack, I recommend using PropertyFileSnitch and explicitly putting them in the same rack.  (I tend to prefer PFSnitch regardless; it's harder to accidentally mess up.)  A rolling restart is required to pick up the change.  Make sure to fill out the topology properties file first if using PFSnitch.
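
The snitch change itself is one line in cassandra.yaml on each node, picked up on restart:

```yaml
# cassandra.yaml
endpoint_snitch: PropertyFileSnitch
```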


This is all on Cassandra 1.1.4.  Thanks for any help!



Tyler Hobbs