cassandra-commits mailing list archives

From "Kaide Mu (JIRA)" <>
Subject [jira] [Commented] (CASSANDRA-12008) Make decommission operations resumable
Date Thu, 21 Jul 2016 21:20:20 GMT


Kaide Mu commented on CASSANDRA-12008:

I'm working on this ticket. Currently, decommission is not resumable after a failure because:

- The node state is changed to {{LEAVING}} once decommission starts, and the current source code prevents any node whose state is not {{NORMAL}} from restarting a decommission operation.
- The decommissioning node does not know which ranges were already streamed, so even if we could resume decommission, the operation would stream all ranges again.
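To illustrate the first point, here is a minimal, self-contained sketch of the guard behavior (the class and method names here are hypothetical simplifications, not Cassandra's actual code): any mode other than {{NORMAL}} is rejected, so a node left in {{LEAVING}} after a failed run cannot retry.

```java
// Hypothetical, simplified sketch of the state check that blocks a second
// decommission attempt: a failure mid-stream leaves the node in LEAVING,
// and LEAVING is not NORMAL, so a retry is refused.
public class DecommissionGuard {
    enum Mode { NORMAL, LEAVING, LEFT, MOVING }

    private Mode operationMode = Mode.NORMAL;

    void decommission() {
        if (operationMode != Mode.NORMAL)
            throw new UnsupportedOperationException(
                "Node in " + operationMode + " state; wait for status to become normal or restart");
        operationMode = Mode.LEAVING;
        // ... streaming would happen here; a failure leaves the node in LEAVING
    }

    Mode mode() { return operationMode; }
}
```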

To solve these, I propose the following initial approach:
# Add a new {{isDecommissionMode}} flag.
# Add a new {{SystemKeyspace.streamedRanges}} for storing transferred ranges.
# Add a new {{SystemKeyspace.updateStreamedRanges}}.
# Modify {{StorageService.streamRanges}}.
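A rough in-memory sketch of what items 2-4 could look like (the class {{StreamedRangeStore}} and its string-based range descriptors are hypothetical stand-ins for illustration, not existing Cassandra API):

```java
import java.util.*;

// Hypothetical stand-in for the proposed SystemKeyspace.streamedRanges table:
// it records which token ranges were already transferred per keyspace, so a
// resumed decommission could skip them instead of re-streaming everything.
public class StreamedRangeStore {
    // keyspace -> set of "start,end" token range descriptors
    private final Map<String, Set<String>> streamed = new HashMap<>();

    // Analogue of the proposed SystemKeyspace.updateStreamedRanges (item 3):
    // persist ranges as they complete.
    public void updateStreamedRanges(String keyspace, Collection<String> ranges) {
        streamed.computeIfAbsent(keyspace, k -> new HashSet<>()).addAll(ranges);
    }

    // What a modified StorageService.streamRanges (item 4) would consult on
    // resume: only ranges not yet recorded need to be streamed again.
    public Set<String> remainingRanges(String keyspace, Collection<String> allRanges) {
        Set<String> done = streamed.getOrDefault(keyspace, Collections.emptySet());
        Set<String> remaining = new LinkedHashSet<>(allRanges);
        remaining.removeAll(done);
        return remaining;
    }
}
```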

Steps 2 and 3 may not be necessary, because {{StreamStateStore.handleStreamEvent}} always updates {{SystemKeyspace.availableRanges}} via {{SystemKeyspace.updateAvailableRanges}}, regardless of whether the session transferred or received the ranges (received ranges are stored, but the current source code does not use this functionality). If we want to keep a separate table for transferred ranges, we would still need a {{SystemKeyspace.updateStreamedRanges}} that is essentially identical to {{SystemKeyspace.updateAvailableRanges}}. So perhaps we should instead adapt {{StorageService.streamRanges}} to use {{RangeStreamer}}, which already implements all of this. WDYT [~pauloricardomg] and [~yukim]?

Here's the source code of [StreamStateStore.handleStreamEvent|]
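As a simplified stand-in for the behavior described above (not the actual {{StreamStateStore}} class, whose link is missing here): on session completion the handler records ranges in a single store whether the session sent or received them, which is why the existing mechanism could cover transferred ranges too.

```java
import java.util.*;

// Simplified stand-in for the described behavior: the handler persists
// completed ranges into one "available ranges" map regardless of direction,
// mirroring StreamStateStore.handleStreamEvent always calling
// SystemKeyspace.updateAvailableRanges.
public class StreamEventHandlerSketch {
    public enum Direction { SENT, RECEIVED }

    private final Map<String, Set<String>> availableRanges = new HashMap<>();

    public void handleSessionComplete(String keyspace, Set<String> ranges, Direction dir) {
        // Direction is deliberately ignored: both sent and received ranges
        // end up in the same store, as described above.
        availableRanges.computeIfAbsent(keyspace, k -> new HashSet<>()).addAll(ranges);
    }

    public Set<String> available(String keyspace) {
        return availableRanges.getOrDefault(keyspace, Collections.emptySet());
    }
}
```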

> Make decommission operations resumable
> --------------------------------------
>                 Key: CASSANDRA-12008
>                 URL:
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Streaming and Messaging
>            Reporter: Tom van der Woerdt
>            Assignee: Kaide Mu
>            Priority: Minor
> We're dealing with large data sets (multiple terabytes per node) and sometimes we need to add or remove nodes. These operations are very dependent on the entire cluster being up, so while we're joining a new node (which sometimes takes 6 hours or longer) a lot can go wrong, and in a lot of cases something does.
> It would be great if the ability to retry streams was implemented.
> Example to illustrate the problem:
> {code}
> 03:18 PM   ~ $ nodetool decommission
> error: Stream failed
> -- StackTrace --
> org.apache.cassandra.streaming.StreamException: Stream failed
>         at
>         at$
>         at$DirectExecutor.execute(
>         at
>         at
>         at
>         at org.apache.cassandra.streaming.StreamResultFuture.maybeComplete(
>         at org.apache.cassandra.streaming.StreamResultFuture.handleSessionComplete(
>         at org.apache.cassandra.streaming.StreamSession.closeSession(
>         at org.apache.cassandra.streaming.StreamSession.complete(
>         at org.apache.cassandra.streaming.StreamSession.messageReceived(
>         at org.apache.cassandra.streaming.ConnectionHandler$
>         at
> 08:04 PM   ~ $ nodetool decommission
> nodetool: Unsupported operation: Node in LEAVING state; wait for status to become normal or restart
> See 'nodetool help' or 'nodetool help <command>'.
> {code}
> Streaming failed, probably due to load:
> {code}
> ERROR [STREAM-IN-/<ipaddr>] 2016-06-14 18:05:47,275 - [Stream #<streamid>] Streaming error occurred
> null
>         at$ ~[na:1.8.0_77]
>         at ~[na:1.8.0_77]
>         at java.nio.channels.Channels$
>         at org.apache.cassandra.streaming.messages.StreamMessage.deserialize(
>         at org.apache.cassandra.streaming.ConnectionHandler$
>         at [na:1.8.0_77]
> {code}
> If implementing retries is not possible, can we have a 'nodetool decommission resume'?

This message was sent by Atlassian JIRA
