incubator-cassandra-user mailing list archives

From Adam Fisk <...@littleshoot.org>
Subject Re: cassandra over hbase
Date Thu, 26 Nov 2009 03:23:33 GMT
Thanks for all the helpful responses, everyone. I've honestly been
going back and forth a lot on this decision, but it's surprising how
much of a difference Cassandra's usability, from install to interface,
really makes, even for techies like us. The HBase command line throws
all sorts of scary exceptions even when it's actually working fine.

Cassandra's quick setup makes a surprising difference for a company on
a tight deadline. That's not at all to imply Cassandra can't go toe to
toe with HBase on the merits of the internals; it's more to say that the
extra effort put into ease of setup is well worth it in terms of
building the Cassandra community.

Nice work, and thanks again!

-Adam


On Tue, Nov 24, 2009 at 10:56 AM, Stu Hood <stuart.hood@rackspace.com> wrote:
>> JR> After chatting with some Facebook guys, we realized that one potential
>> JR> benefit from using HDFS is that the recovery from losing partial data in a
>> JR> node is more efficient. Suppose that one lost a single disk at a node. HDFS
>> JR> can quickly rebuild the blocks on the failed disk in parallel.
>
> HDFS replicates eagerly, which means that having a node down for longer than the failure
> timeout causes you to do more work than you actually needed to. Cassandra replicates (very)
> lazily, and I prefer laziness for the sake of efficiency.
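To make that trade-off concrete, here is a rough, purely illustrative Python
sketch. It is not Cassandra or HDFS code, and every name in it is made up:
the eager path re-copies everything a timed-out node held whether or not the
node comes back, while the lazy path only pushes data to a replica once a
read or an explicit repair notices it is stale.

import time

FAILURE_TIMEOUT = 600  # hypothetical "node is presumed dead" timeout, seconds

def eager_rereplicate(blocks_on_node, last_heartbeat, cluster, now=None):
    """HDFS-style (toy version): once a node has been silent past the timeout,
    immediately re-copy every block it held, even if the node returns a
    minute later and the copies turn out to be unnecessary work."""
    now = time.time() if now is None else now
    if now - last_heartbeat > FAILURE_TIMEOUT:
        for block_id, data in blocks_on_node.items():
            cluster.setdefault(block_id, []).append(data)  # extra replica

def lazy_repair(key, replicas):
    """Cassandra-style (toy version): do nothing proactively; when a read or
    repair touches a key, the newest (timestamp, value) wins and is pushed
    only to replicas that are missing it or hold a stale version."""
    newest = max((r.get(key, (0, None)) for r in replicas), key=lambda tv: tv[0])
    for r in replicas:
        if r.get(key, (0, None))[0] < newest[0]:
            r[key] = newest  # repair only the stale replica, only now

# Tiny demo: replica c missed a write; a lazy repair brings it up to date.
a = {"row1": (2, "new value")}
b = {"row1": (2, "new value")}
c = {"row1": (1, "old value")}
lazy_repair("row1", [a, b, c])
assert c["row1"] == (2, "new value")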
>
>> JR> So, when this happens, the whole node probably has to be taken out
>> JR> and bootstrapped. The same problem exists when a single sstable file
>> JR> is corrupted.
>> I think recovering a single sstable is a useful thing, and it seems like
>> a better problem to solve.
>
> This is why we need to get #193 in. Going to the filesystem and deleting/fuzzing an SSTable
> on a node and then running a repair will cause a new SSTable to be created that overlays
> and repairs the first, based on the data from the other nodes.
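Here is a hedged, hypothetical Python sketch of the idea behind that kind of
repair, not the actual #193 implementation: the damaged node compares per-row
digests against a peer and writes the rows it is missing or holds incorrectly
into a brand-new SSTable that overlays the surviving data. All names and data
structures below are made up for illustration.

import hashlib

def digest(value):
    """Cheap stand-in for a real per-row digest."""
    return hashlib.md5(repr(value).encode()).hexdigest()

def repair_from_peers(local_sstables, peer_rows):
    """Return a new 'SSTable' (here just a dict) holding every row whose
    digest on this node differs from, or is absent relative to, a peer."""
    local_rows = {}
    for sstable in local_sstables:  # later SSTables in the list overwrite earlier ones
        local_rows.update(sstable)
    overlay = {
        key: value
        for key, value in peer_rows.items()
        if key not in local_rows or digest(local_rows[key]) != digest(value)
    }
    return overlay  # would be flushed to disk as a fresh SSTable

# Demo: one local SSTable was deleted/corrupted, so row2 and row3 are gone locally.
surviving_sstable = {"row1": "aaa"}
peer = {"row1": "aaa", "row2": "bbb", "row3": "ccc"}
new_sstable = repair_from_peers([surviving_sstable], peer)
assert new_sstable == {"row2": "bbb", "row3": "ccc"}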
>
> Thanks,
> Stu
>
> -----Original Message-----
> From: "Ted Zlatanov" <tzz@lifelogs.com>
> Sent: Tuesday, November 24, 2009 8:40am
> To: cassandra-user@incubator.apache.org
> Subject: Re: cassandra over hbase
>
> On Mon, 23 Nov 2009 11:58:08 -0800 Jun Rao <junrao@almaden.ibm.com> wrote:
>
> JR> After chatting with some Facebook guys, we realized that one potential
> JR> benefit from using HDFS is that the recovery from losing partial data in a
> JR> node is more efficient. Suppose that one lost a single disk at a node. HDFS
> JR> can quickly rebuild the blocks on the failed disk in parallel. This is a
> JR> bit hard to do in cassandra, since we can't easily find the data on the
> JR> failed disk from another node.
>
> This is an architectural issue, right?  IIUC Cassandra simply doesn't
> care about disks.  I think that's a plus, actually, because it
> simplifies the code, and filesystems, in my experience, are better left
> up to the OS.  For instance, we're evaluating Lustre, and for many
> specific reasons it's significantly better for our needs than HDFS, so
> HDFS would be a tough sell.
>
> JR> So, when this happens, the whole node probably has to be taken out
> JR> and bootstrapped. The same problem exists when a single sstable file
> JR> is corrupted.
>
> I think recovering a single sstable is a useful thing, and it seems like
> a better problem to solve.
>
> Ted
>
>
>
>



-- 
Adam Fisk
http://www.littleshoot.org | http://adamfisk.wordpress.com |
http://twitter.com/adamfisk
