cassandra-user mailing list archives

From Jonathan Ellis <>
Subject Re: [Q] MapReduce behavior and Cassandra's scalability for petabytes of data
Date Mon, 25 Oct 2010 16:37:39 GMT
On Sun, Oct 24, 2010 at 9:09 PM, Takayuki Tsunakawa
<> wrote:
> From: "Jonathan Ellis" <>
>> (b) Cassandra generates input splits from the sampling of keys each
>> node has in memory.  So if a node does end up with no data for a
>> keyspace (because of bad OPP balancing, for instance) it will have no
>> splits generated or mapped.
> I understood you are referring to StorageService.getSplits(). This
> seems to filter out the Cassandra nodes which have no data for the
> target (keyspace, column family) pair.


> [Q1]
> I understood that ColumnFamilyInputFormat sends the above node (or
> split) filtering request to all nodes in the cluster. Is this correct?


> [Q2]
> If Q1 is yes, do more nodes result in a higher MapReduce job startup
> cost (for executing InputFormat.getSplits())?

Not really:
 1) Each node has a sample of its keys in memory all the time for the
SSTable row indexes.  So getSplits() is a purely in-JVM-memory
operation, with no disk I/O.
 2) Each node computes its own splits.  Adding more nodes does not change this.
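To illustrate points 1 and 2, here is a hypothetical sketch of per-node split generation from an in-memory key sample, in the spirit of StorageService.getSplits(). The function name, the sampling interval, and the key format are illustrative assumptions, not Cassandra's actual code:

```python
# Hypothetical sketch: derive input splits from a node's sorted in-memory
# key sample (the same sample kept for SSTable row indexes). Everything
# here runs against in-heap data -- no disk I/O, matching point 1 above.

def get_splits(sampled_keys, split_size, index_interval=128):
    """Partition this node's data into splits of ~split_size rows.

    Each sampled key stands in for ~index_interval real rows, so a
    split of ~split_size rows spans split_size / index_interval
    consecutive sample entries.
    """
    if not sampled_keys:
        return []  # node owns no data for this column family: no splits
    step = max(1, split_size // index_interval)
    boundaries = sampled_keys[::step]
    if boundaries[-1] != sampled_keys[-1]:
        boundaries.append(sampled_keys[-1])
    # Pair consecutive boundaries into (start, end] split ranges.
    return list(zip(boundaries, boundaries[1:]))

# One sample entry per 128 rows over ~10,000 rows on this node:
sample = [f"key{i:05d}" for i in range(0, 10000, 128)]
splits = get_splits(sample, split_size=1024)
```

Since every node runs this against only its own sample, the per-node cost is independent of cluster size, which is point 2 above.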

> [Q3-1]
> How much data is the 400-node cluster that Riptano is planning
> intended to hold?
> If each node has 4 TB of disks and the replication factor is 3, the
> simple calculation shows 4 TB * 400 / 3 = 533 TB (ignoring commit log,
> OS areas, etc).

We do not yet have permission to talk about details of this cluster, sorry.
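The back-of-envelope capacity math in Q3-1 is easy to check: raw disk across the cluster divided by the replication factor gives the usable logical capacity. A quick sketch (the 4 TB, 400-node, RF=3 figures come from the question, not from any confirmed cluster details):

```python
# Usable capacity = total raw disk / replication factor, ignoring
# commit log, OS overhead, and compaction headroom as the question does.
disk_per_node_tb = 4
nodes = 400
replication_factor = 3

usable_tb = disk_per_node_tb * nodes / replication_factor
print(round(usable_tb))  # ~533 TB, matching the estimate in the question
```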

> [Q3-2]
> Based on the current architecture, what is the practical limit on the
> number of nodes, and approximately how much data can be stored?

There is no reason Cassandra cannot scale to 1000s or more nodes with
the current architecture.

Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of Riptano, the source for professional Cassandra support
