incubator-cassandra-dev mailing list archives

From Jeff Hodges <j...@somethingsimilar.com>
Subject Re: hadoop tasks reading from cassandra
Date Wed, 29 Jul 2009 06:37:48 GMT
Comments inline.

On Fri, Jul 24, 2009 at 10:00 AM, Jonathan Ellis<jbellis@gmail.com> wrote:
> On Fri, Jul 24, 2009 at 11:08 AM, Jun Rao<junrao@almaden.ibm.com> wrote:
>> 1. In addition to OrderPreservingPartitioner, it would be useful to support
>> MapReduce on RandomPartitioned Cassandra as well. We had a rough prototype
>> that sort-of works at this moment. The difficulty with random partitioner
>> is that it's a bit hard to generate the splits. In our prototype, we simply
>> map each row to a split. This is ok for fat rows (e.g., a row includes all
>> info for a user), but may be too fine-grained for other cases. Another
>> possibility is to generate a split that corresponds to a set of rows in a
>> hash-range (instead of key range). This requires some new apis in
>> cassandra.
>
> -1 on adding new apis to pound a square peg into a round hole.
>
> like range queries, hadoop splits only really make sense on OPP.
>

Why would it only make sense on OPP? If it weren't an externally
exposed part of the API, what other concerns do you have about a hash
range query? I can't think of any beyond the usual increased code
complexity argument (i.e. development, testing and maintenance costs
for it).
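For what it's worth, generating hash-range splits doesn't look conceptually hard. Here's a rough sketch (class and method names are mine, not real Cassandra APIs) of carving the RandomPartitioner's MD5 token space into contiguous ranges:

```java
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch, not actual Cassandra code: divide the
// RandomPartitioner token space (MD5 tokens in [0, 2^127)) into
// numSplits contiguous hash ranges.
public class HashRangeSplitter {
    static final BigInteger MIN = BigInteger.ZERO;
    static final BigInteger MAX = BigInteger.ONE.shiftLeft(127); // exclusive upper bound

    /** Returns [start, end) token ranges covering the full hash space. */
    public static List<BigInteger[]> splits(int numSplits) {
        List<BigInteger[]> ranges = new ArrayList<>();
        BigInteger width = MAX.subtract(MIN).divide(BigInteger.valueOf(numSplits));
        BigInteger start = MIN;
        for (int i = 0; i < numSplits; i++) {
            // Last range absorbs any rounding remainder so the space is fully covered.
            BigInteger end = (i == numSplits - 1) ? MAX : start.add(width);
            ranges.add(new BigInteger[] { start, end });
            start = end;
        }
        return ranges;
    }
}
```

Each range would then become one map task's split, scanned server-side by token rather than by key.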

>> 2. For better performance, in the future, it would be useful to expose and
>> exploit data locality in cassandra so that a map task is executed on a
>> cassandra node that owns the data locally. A related issue is
>> https://issues.apache.org/jira/browse/CASSANDRA-197. It breaks
>> encapsulation, but it's worth thinking about. Google's DFS and Bigtable
>> both expose certain locality info for better performance.
>
> That's why I'd like to ship hadoop integration out of the box, instead
> of adding apis that should really be internal-use only for an external
> hadoop layer.
>

Hadoop has something called NetworkTopology that addresses part of
the data locality problem. It's used to provide data locality for
CombineFileInputFormat (among, I'm sure, other things).

Combining this with the knowledge we would have of which node owns
each key range, there is a chance Hadoop could do some of the
locality work for us. Looking at the code for CombineFileInputFormat,
it doesn't seem to be a particularly straightforward bit of work to
translate to Cassandra, but I'm sure with a little time and maybe a
little guidance from some Hadoop folks, we could make it happen.
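To make the idea concrete, here's a toy sketch (all names hypothetical, no real Hadoop or Cassandra classes) of splits that carry the replica endpoints owning their key range. In real Hadoop, that host list is what an InputSplit's getLocations() hands the scheduler so it can place the map task on or near a node holding the data:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Illustrative sketch only: pair each key-range split with the replica
// hosts that own it. In actual Hadoop integration, preferredHosts is
// what InputSplit.getLocations() would return.
public class LocalitySketch {
    static class RangeSplit {
        final String startKey, endKey;
        final String[] preferredHosts; // replicas owning this range
        RangeSplit(String startKey, String endKey, String[] hosts) {
            this.startKey = startKey;
            this.endKey = endKey;
            this.preferredHosts = hosts;
        }
    }

    /** Build splits from a ring map of {startKey, endKey} -> owning endpoints. */
    public static List<RangeSplit> buildSplits(Map<String[], String[]> ring) {
        List<RangeSplit> splits = new ArrayList<>();
        for (Map.Entry<String[], String[]> e : ring.entrySet()) {
            splits.add(new RangeSplit(e.getKey()[0], e.getKey()[1], e.getValue()));
        }
        return splits;
    }
}
```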

In any case, this seems to be evidence that locality can be added on
later. It won't be a simple drop-in deal, but it wouldn't seem to
require us to completely overhaul how we think about the input
splitting.

I'm going to get this thing rolling. I'm still a little foggy on how
data flows inside the cassandra codebase, so forgive me if the start
is a little slow.

(Oh, and has anyone got a mnemonic or anything to remember which of
org.apache.hadoop.mapred and org.apache.hadoop.mapreduce is the new
one? I'll be jiggered if I can keep it straight.)
--
Jeff
