If you're using Cassandra 1.2 then you have a choice, specified in cassandra.yaml:


# policy for data disk failures:
# stop: shut down gossip and Thrift, leaving the node effectively dead, but
#       can still be inspected via JMX.
# best_effort: stop using the failed disk and respond to requests based on
#              remaining available sstables.  This means you WILL see obsolete
#              data at CL.ONE!
# ignore: ignore fatal errors and let requests fail, as in pre-1.2 Cassandra
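The corresponding setting in cassandra.yaml is `disk_failure_policy` (the default in 1.2 is `stop`; `best_effort` is shown here as an example):

```yaml
# One of: stop, best_effort, ignore
disk_failure_policy: best_effort
```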


-Bryan



On Wed, Jun 5, 2013 at 6:11 AM, Christopher Wirt <chris.wirt@struq.com> wrote:

I would hope so. Just trying to get some confirmation from someone with production experience.

 

Thanks for your reply

 

From: Shahab Yunus [mailto:shahab.yunus@gmail.com]
Sent: 05 June 2013 13:31
To: user@cassandra.apache.org
Subject: Re: Multiple JBOD data directory

 

Though I am a newbie, I just had a thought regarding your question 'How will it handle requests for data which is unavailable?': wouldn't the data in that case be served from the other nodes where it has been replicated?

 

Regards,

Shahab

 

On Wed, Jun 5, 2013 at 5:32 AM, Christopher Wirt <chris.wirt@struq.com> wrote:

Hello,

 

We’re thinking about using multiple data directories, each with its own disk, and are currently testing this against a RAID0 config.

 

I’ve seen that there is failure handling with multiple JBOD data directories.

 

e.g.

We have two data directories mounted to separate drives

/disk1

/disk2

One of the drives fails
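For reference, a two-directory setup like the one above would look something like this in cassandra.yaml (the paths are just placeholders for however the drives are mounted):

```yaml
# Each entry should point at a directory on a separate physical drive
data_file_directories:
    - /disk1/cassandra/data
    - /disk2/cassandra/data
```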

 

Will Cassandra continue to work?

How will it handle requests for data which is unavailable?

If I want to add an additional drive what is the best way to go about redistributing the data?

 

Thanks,

Chris