Having a well-known node configuration that is trivial (one step) to create is your best maintenance bet. We are using 4-disk nodes in the following configuration:
disk1: boot_raid1 os_raid1 cassandra_commit_log
disk2: boot_raid1 os_raid1 cassandra_data_dir_raid0
This gives us a solid, stable foundation for the OS and the recommended configuration for the Cassandra commit log and data directories. Every node in the ring can be replaced with a single command via cobbler, which provisions a replacement from bare metal to take over for a failed node. We never bother repairing a node in place: on failure it is replaced entirely from bare metal, while the failed node is taken out of service, has its issue resolved, and goes back into a pool of spares.
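For a layout like the one above, the Cassandra side of the configuration comes down to two directory settings in cassandra.yaml. This is only a sketch; the mount points are illustrative assumptions, not something stated in the original post:

```yaml
# cassandra.yaml fragment -- directories matched to the two-array layout
# (the /commitlog and /data mount points are assumptions for illustration)
commitlog_directory: /commitlog   # on the RAID1 array shared with the OS
data_file_directories:
    - /data                       # on the RAID0 array dedicated to sstables
```

Keeping the commit log on a different spindle set than the data directories is what the "recommended configuration" above refers to: sequential commit-log writes don't then compete with sstable reads and compaction I/O.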
On May 4, 2011, at 9:52 AM, Anthony Ikeda wrote:
I'm not so much concerned about the performance of this configuration; I'm looking at it more from a maintenance perspective. I have to draft maintenance procedures for our infrastructure team, who are used to a standard NAS storage setup, which Cassandra obviously breaks.
Ultimately, would keeping the Cassandra service separate from the data and/or commit logs benefit us from a recovery perspective? If we lose the primary partition, could we restore it from the data that is still on the secondary?
What is considered best practice?
What routine health checks should we run daily? Monthly? Annually?
Basically, how do you up-skill a technical infrastructure team to be able to maintain a Cassandra node ring?
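One concrete daily check is simply "is every node in the ring Up?", which you can automate around `nodetool ring`. The sketch below parses ring output and flags down nodes; the sample output format is an assumption based on 0.7-era Cassandra, so adjust the column positions to whatever your version actually prints:

```python
# Sketch: flag Cassandra nodes that `nodetool ring` reports as Down.
# The sample output below is an illustrative assumption; real output
# varies by Cassandra version, so verify the columns against yours.
import subprocess

def down_nodes(ring_output):
    """Return addresses of nodes whose status column is not 'Up'."""
    down = []
    for line in ring_output.splitlines():
        fields = line.split()
        # Expect lines like: "10.0.0.1  dc1  rack1  Up  Normal  12.3 GB ..."
        if len(fields) >= 4 and fields[3] in ("Up", "Down"):
            if fields[3] != "Up":
                down.append(fields[0])
    return down

if __name__ == "__main__":
    # Against a live cluster you would capture real output instead:
    #   out = subprocess.check_output(["nodetool", "ring"]).decode()
    sample = (
        "10.0.0.1  datacenter1  rack1  Up    Normal  12.3 GB  33.3%\n"
        "10.0.0.2  datacenter1  rack1  Down  Normal  11.9 GB  33.3%\n"
    )
    print(down_nodes(sample))  # any entry here should trigger an alert
```

The same pattern extends to `nodetool info` (heap and load) and `nodetool tpstats` (pending/blocked thread-pool tasks), which are the usual next things to graph once the team is comfortable with the tooling.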
On Wed, May 4, 2011 at 9:39 AM, Eric tamme <email@example.com> wrote:
> I just want to ask, when setting up nodes in a Node ring is it worthwhile
> using a 2 partition setup? i.e. Cassandra on the Primary, data directories
> etc on the second partition or does it really not make a difference?

I don't think it makes much difference from a performance perspective
at all. You might want to create a separate LVM for your data, or