cassandra-commits mailing list archives

From "Paulo Motta (JIRA)" <>
Subject [jira] [Commented] (CASSANDRA-12008) Make decommission operations resumable
Date Mon, 25 Jul 2016 16:30:20 GMT


Paulo Motta commented on CASSANDRA-12008:

bq. it seems StreamStateStore is not properly recording transferred ranges (nothing is recorded).
I guess everything is set up correctly, would you mind taking a look?

It seems {{SessionCompleteEvent}} currently only exposes requested ranges, which will be
empty since decommission does not request any ranges but instead transfers its ranges to
other nodes.

But it seems adding transferred ranges to {{SessionCompleteEvent}} will not be sufficient,
as it is possible for a leaving node to transfer a range to multiple nodes (if there are 2
nodes leaving the ring at the same time, for example), so we cannot mark a range as fully
transferred when a session completes with a particular peer. While this seems highly unlikely,
it is a possible scenario, so we should probably protect against it. WDYT [~yukim]?
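To illustrate the multi-peer case (a minimal, self-contained sketch; {{TransferTracker}}, the range strings, and the peer addresses are made up for this example and are not Cassandra code): a range can only be considered fully transferred once every planned peer session for it has completed, not when the first one does.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch, not Cassandra code: a leaving node may stream the same
// range to several peers, so a range is only "done" when all planned sessions
// for it have completed.
public class TransferTracker {
    // range -> peers that still need to receive it
    private final Map<String, Set<String>> pending = new HashMap<>();

    public void planTransfer(String range, String peer) {
        pending.computeIfAbsent(range, r -> new HashSet<>()).add(peer);
    }

    // called when the stream session with one peer completes
    public void sessionComplete(String range, String peer) {
        Set<String> peers = pending.get(range);
        if (peers != null) {
            peers.remove(peer);
            if (peers.isEmpty())
                pending.remove(range);
        }
    }

    public boolean fullyTransferred(String range) {
        return !pending.containsKey(range);
    }

    public static void main(String[] args) {
        TransferTracker tracker = new TransferTracker();
        // the same range goes to two peers, e.g. two nodes leaving at once
        tracker.planTransfer("(0,100]", "");
        tracker.planTransfer("(0,100]", "");
        tracker.sessionComplete("(0,100]", "");
        System.out.println(tracker.fullyTransferred("(0,100]")); // false: still pending on .2
        tracker.sessionComplete("(0,100]", "");
        System.out.println(tracker.fullyTransferred("(0,100]")); // true
    }
}
```

Marking the range done after the first {{sessionComplete}} would silently drop the transfer to the second peer, which is exactly the case a per-peer table would protect against.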

My suggestion is to create a new system table {{streamed_ranges}} with the following schema:
{code}
CREATE TABLE streamed_ranges (
    operation text,
    peer inet,
    keyspace_name text,
    ranges set<blob>,
    PRIMARY KEY ((operation, keyspace_name), peer)
)
{code}

In this table we can store received or transferred ranges from any operation (rebuild, bootstrap,
stream) per peer, and deprecate the {{available_ranges}} table in favor of this new table.
With this we will be able to know whether we can skip streaming a particular range to/from a
specific peer, and account for the case where we stream a range to multiple peers, such as
in decommission.
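For illustration, a resumed operation could update and consult the table roughly like this (the {{system}} keyspace placement, the literal values, and the set-append update are assumptions, not a settled design):
{code}
-- after the session with a peer completes, record what was sent to it
UPDATE system.streamed_ranges
SET ranges = ranges + {0x0001}
WHERE operation = 'decommission' AND keyspace_name = 'ks1' AND peer = '';

-- on resume, look up what was already streamed to that peer and skip it
SELECT ranges FROM system.streamed_ranges
WHERE operation = 'decommission' AND keyspace_name = 'ks1' AND peer = '';
{code}
Since {{peer}} is a clustering column, the same range can appear under several peers, which covers the multi-peer decommission case above.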

After that, we will probably need to add transferred ranges to {{StreamTransferTask}} so they
can be included in {{SessionCompleteEvent}} and marked in the {{streamed_ranges}} table.

> Make decommission operations resumable
> --------------------------------------
>                 Key: CASSANDRA-12008
>                 URL:
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Streaming and Messaging
>            Reporter: Tom van der Woerdt
>            Assignee: Kaide Mu
>            Priority: Minor
> We're dealing with large data sets (multiple terabytes per node) and sometimes we need
> to add or remove nodes. These operations are very dependent on the entire cluster being up,
> so while we're joining a new node (which sometimes takes 6 hours or longer) a lot can go
> wrong, and in a lot of cases something does.
> It would be great if the ability to retry streams was implemented.
> Example to illustrate the problem :
> {code}
> 03:18 PM   ~ $ nodetool decommission
> error: Stream failed
> -- StackTrace --
> org.apache.cassandra.streaming.StreamException: Stream failed
>         at
>         at$
>         at$DirectExecutor.execute(
>         at
>         at
>         at
>         at org.apache.cassandra.streaming.StreamResultFuture.maybeComplete(
>         at org.apache.cassandra.streaming.StreamResultFuture.handleSessionComplete(
>         at org.apache.cassandra.streaming.StreamSession.closeSession(
>         at org.apache.cassandra.streaming.StreamSession.complete(
>         at org.apache.cassandra.streaming.StreamSession.messageReceived(
>         at org.apache.cassandra.streaming.ConnectionHandler$
>         at
> 08:04 PM   ~ $ nodetool decommission
> nodetool: Unsupported operation: Node in LEAVING state; wait for status to become normal or restart
> See 'nodetool help' or 'nodetool help <command>'.
> {code}
> Streaming failed, probably due to load:
> {code}
> ERROR [STREAM-IN-/<ipaddr>] 2016-06-14 18:05:47,275 - [Stream #<streamid>] Streaming error occurred
> null
>         at$ ~[na:1.8.0_77]
>         at ~[na:1.8.0_77]
>         at java.nio.channels.Channels$
>         at org.apache.cassandra.streaming.messages.StreamMessage.deserialize(
>         at org.apache.cassandra.streaming.ConnectionHandler$
>         at [na:1.8.0_77]
> {code}
> If implementing retries is not possible, can we have a 'nodetool decommission resume'?

This message was sent by Atlassian JIRA
