incubator-cassandra-user mailing list archives

From Jonathan Ellis <>
Subject Re: get_key_range (CASSANDRA-169)
Date Tue, 08 Sep 2009 22:26:07 GMT
getting temporary errors when a node goes down, until the other nodes'
failure detectors realize it's down, is normal.  (this should only
take a dozen seconds or so.)

but after that it should route requests to other nodes, and it should
also realize when you restart #5 that it is alive again.  those are
two separate issues.
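
A minimal sketch of that catch-and-retry behavior on the client side
(the class and helper names here are illustrative assumptions, not
Cassandra's actual client API):

    import java.util.List;

    public class KeyRangeClient {
        private final List<String> hosts;  // cluster nodes to try in turn

        public KeyRangeClient(List<String> hosts) {
            this.hosts = hosts;
        }

        // Try each host in turn: a timeout from one node during the
        // failure-detection window (~a dozen seconds) need not fail
        // the whole query.
        public List<String> getKeyRangeWithFailover(String start, String stop,
                int max) throws Exception {
            Exception last = null;
            for (String host : hosts) {
                try {
                    return fetchKeyRange(host, start, stop, max);
                } catch (Exception e) {
                    last = e;  // transient timeouts are expected here
                }
            }
            // every node failed: a real outage, not a transient blip
            throw last != null ? last : new Exception("no hosts configured");
        }

        // Hypothetical helper: open a Thrift connection to `host` and
        // call get_key_range there.  Wire up the real client here.
        private List<String> fetchKeyRange(String host, String start,
                String stop, int max) throws Exception {
            throw new UnsupportedOperationException("not wired up");
        }
    }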

can you verify that "bin/nodeprobe cluster" on node 1 eventually shows
#5 as dead, and then as alive again once you restart it?
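
For reference, that check is just run on node 1 while killing and
restarting #5 (the comments describe the expected behavior, per the
failure-detector timing above):

    # on node 1: run once right after killing #5, again ~15 seconds
    # later, and again after restarting #5.  #5 should drop out of
    # the live set and then rejoin it.
    bin/nodeprobe cluster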


On Tue, Sep 8, 2009 at 5:05 PM, Simon Smith<> wrote:
> I'm seeing an issue similar to CASSANDRA-169.  Here is when I see it.
> I'm running Cassandra on 5 nodes using the OrderPreservingPartitioner,
> and have populated Cassandra with 78 records, and get_key_range via
> Thrift works just fine.  Then, if I manually kill one of the nodes
> (node #5), the node I've been using to call get_key_range (node #1)
> times out and returns the error:
>  Thrift: Internal error processing get_key_range
> And the Cassandra output shows the same trace as in 169:
> ERROR - Encountered IOException on connection:
> java.nio.channels.SocketChannel[closed]
> Connection refused
>        at Method)
>        at
>        at
>        at
>        at
> WARN - Closing down connection java.nio.channels.SocketChannel[closed]
> ERROR - Internal error processing get_key_range
> java.lang.RuntimeException: java.util.concurrent.TimeoutException:
> Operation timed out.
>        at org.apache.cassandra.service.StorageProxy.getKeyRange(
>        at org.apache.cassandra.service.CassandraServer.get_key_range(
>        at org.apache.cassandra.service.Cassandra$Processor$get_key_range.process(
>        at org.apache.cassandra.service.Cassandra$Processor.process(
>        at org.apache.thrift.server.TThreadPoolServer$
>        at java.util.concurrent.ThreadPoolExecutor.runWorker(
>        at java.util.concurrent.ThreadPoolExecutor$
>        at
> Caused by: java.util.concurrent.TimeoutException: Operation timed out.
>        at
>        at org.apache.cassandra.service.StorageProxy.getKeyRange(
>        ... 7 more
> If it were giving an error just once, I could simply catch the error
> and try again.  But get_key_range calls to the node I was already
> querying (node #1) never work again (that node is still up, and it
> responds fine to multiget Thrift calls), sometimes not even after I
> restart the down node (node #5).  I end up having to restart node #1
> in addition to node #5.  The behavior of the other 3 nodes varies:
> some of them are also unable to respond to get_key_range calls, but
> some of them do respond.
> My question is, what path should I go down to reproduce this problem?
> I'm using Aug 27 trunk code - should I update my Cassandra install
> before gathering more information for this issue, and if so, to which
> version (0.4 or trunk)?  If anyone is familiar with this issue, could
> you let me know what I might be doing wrong, or what my next
> info-gathering step should be?
> Thank you,
> Simon Smith
> Arcode Corporation
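
Simon's catch-and-retry idea can be made concrete enough to separate
the two cases Jonathan describes: a transient timeout while the
failure detectors converge, versus node #1 staying stuck indefinitely.
A sketch, reusing the hypothetical fetchKeyRange helper from the
earlier snippet:

    // Add to the KeyRangeClient sketch above.  Retry the same node
    // with a pause between attempts; the total wait (~30s here)
    // deliberately exceeds the ~dozen-second detection window.
    public List<String> retrySameNode(String host, String start, String stop,
            int max) throws Exception {
        Exception last = null;
        for (int i = 0; i < 6; i++) {
            try {
                return fetchKeyRange(host, start, stop, max);
            } catch (Exception e) {
                last = e;
                Thread.sleep(5000);  // 6 attempts x 5s = 30s total
            }
        }
        // still failing well past the detection window: the node is
        // genuinely stuck, matching the behavior reported above
        throw last;
    }

That the calls keep failing past this window, until node #1 itself is
restarted, is what distinguishes this report from the normal transient
errors described at the top of the thread.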
