cassandra-commits mailing list archives

From "Mck SembWever (JIRA)" <>
Subject [jira] Updated: (CASSANDRA-1927) Hadoop Integration doesn't work when one node is down
Date Mon, 03 Jan 2011 10:46:50 GMT


Mck SembWever updated CASSANDRA-1927:

    Attachment: CASSANDRA-1927.patch

Utku: are you able to test this patch?

It does not work for me because I'm using ByteOrderedPartitioner, which doesn't return multiple
endpoints for each TokenRange returned by client.describe_ring(..).
(Or maybe endpoints are not supposed to reference available replicas. Stu?)
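For illustration only, here is a minimal Python sketch of the idea being discussed: when building input splits from the token ranges returned by describe_ring(..), try each replica endpoint in turn and skip unreachable ones rather than failing the whole job. This is not Cassandra's actual code; the names (TokenRange, pick_live_endpoint, assign_splits) are hypothetical stand-ins.

```python
# Illustrative sketch only: models how an InputFormat could pick a live
# replica per token range, assuming describe_ring(..) lists several
# endpoints per range. All names here are hypothetical, not Cassandra APIs.

from dataclasses import dataclass, field


@dataclass
class TokenRange:
    start_token: str
    end_token: str
    endpoints: list = field(default_factory=list)  # replica nodes for this range


def pick_live_endpoint(token_range, is_up):
    """Return the first reachable replica for this range, or None if all are down."""
    for endpoint in token_range.endpoints:
        if is_up(endpoint):
            return endpoint
    return None


def assign_splits(ring, is_up):
    """Map each token range to a live endpoint, skipping ranges with no live
    replica instead of aborting (the failure mode reported in CASSANDRA-1927)."""
    splits = {}
    for tr in ring:
        endpoint = pick_live_endpoint(tr, is_up)
        if endpoint is not None:
            splits[(tr.start_token, tr.end_token)] = endpoint
    return splits


# With RF=3 each range has three replicas, so one dead node still leaves a
# live endpoint per range. If the ring description yields only one endpoint
# per range (the ByteOrderedPartitioner behaviour noted above), a dead node
# drops that range entirely.
ring = [
    TokenRange("0", "42", ["10.0.0.1", "10.0.0.2", "10.0.0.3"]),
    TokenRange("42", "85", ["10.0.0.2", "10.0.0.3", "10.0.0.4"]),
]
down = {"10.0.0.2"}
splits = assign_splits(ring, lambda e: e not in down)
```

Under these assumptions, every range still maps to a live node when one replica is down; with a single endpoint per range, the same failure would leave gaps instead.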

> Hadoop Integration doesn't work when one node is down
> -----------------------------------------------------
>                 Key: CASSANDRA-1927
>                 URL:
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Hadoop
>    Affects Versions: 0.7.0 rc 2
>            Reporter: Utku Can Topcu
>            Assignee: Mck SembWever
>             Fix For: 0.7.1
>         Attachments: CASSANDRA-1927.patch
> When I start the CFInputFormat, using the same directives as in the sample code, to read a CF
> in a keyspace of RF=3 on a 4-node cluster:
> - If all the nodes are up, everything works fine and I don't have any problems walking
> through all the data in the CF; however
> - If there's a node down, the hadoop job does not even start, it just dies without any errors
> or exceptions.
> So I'm really sorry for not being able to post any errors or exceptions, though it's
> really easy to reproduce. Just start up a cluster, take one node down, and you're there :)

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
