cassandra-user mailing list archives

From Jeff Jirsa <jji...@gmail.com>
Subject Re: Self-healing data integrity?
Date Sat, 09 Sep 2017 17:50:39 GMT
Cassandra doesn't do that automatically. It can guarantee consistency on read or write via the
ConsistencyLevel set on each query, and it can run active (anti-entropy) repairs. But active repairs
must be scheduled (by a human, by cron, or by a third-party tool like http://cassandra-reaper.io/),
and, to be pedantic, repair only fixes consistency issues; there is still work to be done to properly
detect and fix corrupted replicas (for example, repair COULD propagate a bit flip from
one node to all of the others).
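The per-query consistency guarantee mentioned above comes down to the replica-overlap rule: with replication factor RF, a write acknowledged by W replicas and a read answered by R replicas are guaranteed to intersect in at least one up-to-date replica whenever R + W > RF. A minimal sketch of that arithmetic in plain Python (the function names here are illustrative, not part of any Cassandra driver API):

```python
# Toy illustration (not Cassandra code): the quorum overlap rule.
# With replication factor RF, a read of R replicas and a write to W
# replicas must share at least one replica whenever R + W > RF.

RF = 3  # replication factor assumed for this example


def quorum(rf):
    """Smallest majority of rf replicas, e.g. 2 of 3."""
    return rf // 2 + 1


def overlap_guaranteed(r, w, rf):
    """True if every read set of size r must intersect every write set of size w."""
    return r + w > rf


# QUORUM reads combined with QUORUM writes always see the latest write:
print(overlap_guaranteed(quorum(RF), quorum(RF), RF))  # True

# ConsistencyLevel ONE for both reads and writes gives no such guarantee;
# a read may land entirely on replicas the write never reached:
print(overlap_guaranteed(1, 1, RF))  # False
```

This is why QUORUM/QUORUM (or ONE/ALL, ALL/ONE, etc.) gives strong consistency per query, while weaker levels rely on repair to converge eventually.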



-- 
Jeff Jirsa


> On Sep 9, 2017, at 1:07 AM, Ralph Soika <ralph.soika@imixs.com> wrote:
> 
> Hi,
> 
> I am searching for a big data storage solution for the Imixs-Workflow project. I started
> with Hadoop until I became aware of the 'small-file problem', so I am now considering Cassandra.
> But Hadoop has one important feature for me: the replicator continuously checks whether
> data blocks are consistent across all datanodes. This detects disk errors and automatically
> moves data from defective blocks to working blocks. I think this is called a 'self-healing mechanism'.
> Is there a similar feature in Cassandra too?
> 
> Thanks for help
> 
> Ralph
> 
> 
> 
> -- 
> 
