cassandra-commits mailing list archives

From "Alain RODRIGUEZ (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (CASSANDRA-9509) Streams throughput control
Date Fri, 20 May 2016 15:40:12 GMT

     [ https://issues.apache.org/jira/browse/CASSANDRA-9509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]

Alain RODRIGUEZ updated CASSANDRA-9509:
---------------------------------------
    Description: 
Currently, I have to keep tuning the stream throughput manually (through nodetool
setstreamthroughput), since the same value applies, for example, to both a decommission and a
removenode. The point is that in the first case data goes from 1 node to N nodes (and is obviously
limited by the sending node), while in the second it goes from ALL nodes to N nodes (N being the
number of nodes - 1). While removing a node with 'nodetool removenode', the throughput limit will
not be reached in most cases, and all the nodes will be under heavy load. So with the same stream
throughput value, the nodes receiving the data are streamed to N times faster on a removenode
than on a decommission.
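To make the asymmetry concrete, here is a rough sizing sketch. The nodetool setstreamthroughput command is real (it takes a per-node cap in Mbps); the cluster size and throughput figures below are made-up assumptions for illustration:

```shell
# Illustrative sizing only: the Mbps figures are made-up assumptions.
# The stream throughput setting caps each *sender*, so with N-1 senders
# during a removenode, a receiver can see up to N-1 times that cap.
CLUSTER_SIZE=20
TARGET_AGGREGATE_MBPS=200   # total inbound rate one receiver can tolerate

# Decommission: a single sender, so the per-node cap equals the aggregate cap.
#   nodetool setstreamthroughput "$TARGET_AGGREGATE_MBPS"

# Removenode: up to N-1 simultaneous senders, so divide the cap among them.
SENDERS=$((CLUSTER_SIZE - 1))
PER_SENDER_MBPS=$((TARGET_AGGREGATE_MBPS / SENDERS))
echo "removenode: cap each sender at ${PER_SENDER_MBPS} Mbps"
#   nodetool setstreamthroughput "$PER_SENDER_MBPS"
```

This is exactly the manual recalculation that has to be redone today before each operation, which is the pain point of this ticket.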

Another example is running repair. We have 20 nodes, each taking 2+ days to repair, and the
repair cycle has to complete within 10 days, so nodes can't be repaired one at a time, and the
stream throughput needs to be adjusted accordingly.
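The arithmetic behind "can't be one at a time" is simple; with the numbers from this ticket:

```shell
# How many nodes must repair concurrently to fit the window?
# (numbers taken from the ticket: 20 nodes, 2+ days each, 10-day window)
NODES=20
DAYS_PER_NODE=2
WINDOW_DAYS=10
# ceil(NODES * DAYS_PER_NODE / WINDOW_DAYS) via integer arithmetic
CONCURRENT=$(( (NODES * DAYS_PER_NODE + WINDOW_DAYS - 1) / WINDOW_DAYS ))
echo "need ${CONCURRENT} nodes repairing concurrently"
```

So at least 4 nodes must stream repair data at once, multiplying the inbound load on their neighbours, which again forces a manual throughput adjustment.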

Is there a way to:

- limit incoming streaming throughput on a node?
- limit outgoing streaming speed, making sure no node ever sends more than x Mbps to any
other node?
- make streaming processes a background task (using only remaining resources, with priority
handling)?

If none of those ideas are doable, could we dissociate stream throughputs depending on the
operation type, '1 to many' / 'many to 1' (decommission, rebuild, bootstrap) versus 'N to N'
(repair, removenode), and configure them individually in cassandra.yaml?
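A hypothetical sketch of what such per-operation settings could look like in cassandra.yaml. Today only the single stream_throughput_outbound_megabits_per_sec key exists; the two keys below do not exist and merely illustrate the proposal:

```yaml
# HYPOTHETICAL settings -- illustration of the proposal only,
# neither of these keys exists in Cassandra today.
# '1 to many' / 'many to 1' operations: decommission, rebuild, bootstrap
point_stream_throughput_outbound_megabits_per_sec: 200
# 'N to N' operations: repair, removenode (per-sender cap, applied N-1 times)
distributed_stream_throughput_outbound_megabits_per_sec: 25
```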

  was:
Currently, I have to keep tuning stream throughput manually (through nodetool
setstreamthroughput), since the same value applies, for example, to both a decommission and a
removenode. The point is that in the first case network traffic goes from 1 --> N nodes (and is
obviously limited by the sending node), while in the second it is N --> N nodes (N being the
number of remaining nodes). When removing a node, the throughput limit will not be reached in
most cases, and all the nodes will be under heavy load. So with the same stream throughput
value, we send N times faster on a removenode than on a decommission.

Another example: repair is also faster as more nodes start repairing (we have 20 nodes, each
taking 2+ days to repair, and repair has to run within 10 days, so it can't be one node at a
time, and stream throughput needs to be adjusted accordingly).

Is there a way to:

- limit incoming network throughput on a node?
- limit cluster-wide outgoing network throughput?
- make streaming processes a background task (using only remaining resources)? This looks
harder to me, since the bottleneck depends on the node hardware and the workload. It can be
the CPU, the network, the disk throughput, or even the memory...

If none of those ideas are doable, could we dissociate stream throughputs depending on the
operation, and configure them individually in cassandra.yaml?


> Streams throughput control
> --------------------------
>
>                 Key: CASSANDRA-9509
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-9509
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Configuration
>            Reporter: Alain RODRIGUEZ
>            Priority: Minor
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
