cassandra-user mailing list archives

From Sebastian Estevez <>
Subject Re: compact/repair shouldn't compete for normal compaction resources.
Date Mon, 19 Oct 2015 15:30:51 GMT
The validation compaction phase of repair is subject to the compaction
throttling knob (`nodetool getcompactionthroughput`
/ `nodetool setcompactionthroughput`), so you can use it to dial down the
resources repair consumes.
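A minimal throttling sequence might look like the following (the specific
MB/s values are illustrative assumptions, not recommendations -- pick
numbers that fit your hardware and workload):

```shell
# Check the current compaction throughput cap in MB/s (0 means unthrottled)
nodetool getcompactionthroughput

# Lower the cap while the repair runs; validation compactions honor it too
# (16 MB/s is just an example value)
nodetool setcompactionthroughput 16

# ... let the repair finish ...

# Restore your normal setting afterwards (64 MB/s is the historical default)
nodetool setcompactionthroughput 64
```

Note this is a cluster-wide knob per node, so it also slows regular
compactions on that node while it is lowered.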

Check out this post by driftx on advanced repair techniques.

Given your other question, I agree with Raj that it might be a good idea to
decommission the new nodes rather than repairing, depending on how much data
has made it to them and how tight you were on resources before adding the
nodes.
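If you do decide to back the new nodes out instead of repairing, the usual
sequence is roughly the following, run on each new node one at a time (a
sketch, assuming the nodes joined normally and own data to stream back):

```shell
# On the node being removed: stream its ranges back to the remaining
# replicas and leave the ring cleanly
nodetool decommission

# From another node, watch the streaming progress
nodetool netstats

# Confirm the node is gone from the ring before starting the next one
nodetool status
```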

All the best,


Sebastián Estévez

Solutions Architect | 954 905 8615 |


DataStax is the fastest, most scalable distributed database technology,
delivering Apache Cassandra to the world's most innovative enterprises.
DataStax is built to be agile, always-on, and predictably scalable to any
size. With more than 500 customers in 45 countries, DataStax is the
database technology and transactional backbone of choice for the world's
most innovative companies such as Netflix, Adobe, Intuit, and eBay.

On Sun, Oct 18, 2015 at 8:18 PM, Kevin Burton <> wrote:

> I'm doing a big nodetool repair right now and I'm pretty sure the added
> overhead is impacting our performance.
> Shouldn't you be able to throttle repair so that normal compactions can
> use most of the resources?
> --
> We’re hiring if you know of any awesome Java Devops or Linux Operations
> Engineers!
> Founder/CEO
> Location: *San Francisco, CA*
> blog:
> … or check out my Google+ profile
> <>
