Not sure I follow here.
> As soon as it came back up, due to some human error, rack1 goes down. Now for some rows it is possible that Quorum cannot be established.
If the first rack has come back up, I assume all nodes are available. If you then lose a different rack, you have 2/3 of the nodes available and should still be able to achieve QUORUM.
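As a quick sanity check on the arithmetic, here is a minimal sketch assuming RF=3 with one replica placed per rack (the helper names are illustrative, not Cassandra APIs):

```python
# QUORUM in Cassandra requires floor(RF / 2) + 1 replicas to respond.
def quorum(replication_factor):
    return replication_factor // 2 + 1

rf = 3                                   # assumed replication factor
racks_up = 2                             # one of three racks is down
replicas_available = rf * racks_up // 3  # one replica per rack -> 2 available

print(quorum(rf))                        # 2 replicas needed for QUORUM
print(replicas_available >= quorum(rf))  # True: QUORUM still achievable
```

With a second rack down only one replica remains, which is below the two required, so QUORUM operations for those rows would fail.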
> Just to minimize the issues, we are thinking of running read repair manually every night.
If you are reading and writing at QUORUM and the cluster does not have a QUORUM of replicas available, writes will not be processed. During reads, any mismatch between the data returned from the nodes will be detected and resolved before returning to the client.
Read Repair is an automatic process that reads from more nodes than necessary and resolves the differences in the background.
I would run nodetool repair / Anti Entropy as normal: once on every machine, every gc_grace_seconds. If you have a whole rack fail, run repair on the nodes in that rack if you want to get it back to consistency quickly. The need to do that depends on the configuration for Hinted Handoff, read_repair_chance, the Consistency Level, the write load, and (to some degree) the number of nodes. If you want to be extra safe, just run it.
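A minimal sketch of that schedule as a cron entry, assuming the default gc_grace_seconds of 10 days (864000 seconds); the log path is an assumption, and you would stagger the day/hour per node so repairs do not all run at once:

```shell
# Hypothetical crontab entry on each node: run a primary-range repair
# (nodetool repair -pr) weekly, comfortably inside the default
# gc_grace_seconds of 10 days. Vary the day/hour per node to stagger load.
0 2 * * 0  nodetool repair -pr > /var/log/cassandra/repair.log 2>&1
```

Using -pr (primary range only) on every node avoids repairing each token range RF times, which keeps the total repair load down.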
Co-Founder & Principal Consultant
Apache Cassandra Consulting
We are thinking through the deployment architecture for our Cassandra cluster. Let us say that we choose to deploy data across three racks.
Let us say that one rack's power went down for 10 minutes and then it came back. As soon as it came back up, due to some human error, rack1 goes down. Now for some rows it is possible that Quorum cannot be established. Just to minimize the issues, we are thinking of running read repair manually every night.
Is this a good idea? How often do you perform read repair on your cluster?