cassandra-user mailing list archives

From "Jens Rantil" <jens.ran...@tink.se>
Subject RE: nodetool repair
Date Sat, 20 Jun 2015 01:20:00 GMT
Hi,


For the record, I've successfully used https://github.com/BrianGallew/cassandra_range_repair
to make repairs run smoothly. It might be of interest as well.




Cheers,

Jens





–
Sent from Mailbox

On Fri, Jun 19, 2015 at 8:36 PM, Sean Durity <SEAN_R_DURITY@homedepot.com> wrote:

> It seems to me that running repair on any given node may also induce repairs on related
replica nodes. For example, if I run repair on node A and node B has some replicas, data might
stream from A to B (assuming A has newer/more data). Now, that does NOT mean that node B will
be fully repaired. You still need to run repair -pr on all nodes within gc_grace_seconds.
> You can run repairs on multiple nodes at the same time. However, you might end up with
a large amount of streaming, if many repairs are needed. So, you should be aware of a performance
impact.
> I run weekly repairs on one node at a time, if possible. On larger rings, though, I
run repairs on multiple nodes staggered by a few hours. Once your routine maintenance is established,
repairs will not run for very long. But if you have a large ring that hasn’t been repaired,
those first repairs may take days (though they should get faster as you get further through the ring).
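The staggered approach described above can be sketched as a small scheduling helper (a minimal sketch only; the function name, node names, and three-hour stagger are illustrative, not part of any Cassandra tooling):

```python
from datetime import datetime, timedelta

def staggered_schedule(nodes, start, stagger_hours=3):
    """Assign each node a repair start time, offset by a fixed
    number of hours from the previous node's start."""
    return [(node, start + timedelta(hours=i * stagger_hours))
            for i, node in enumerate(nodes)]

# Example: three nodes, with the first repair kicking off Sunday 02:00.
for node, when in staggered_schedule(["node-a", "node-b", "node-c"],
                                     datetime(2015, 6, 21, 2, 0)):
    print(node, when.isoformat())
```

The offsets just need to be wide enough that one node's validation compactions and streaming have largely finished before the next node starts.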
> Sean Durity
> From: Alain RODRIGUEZ [mailto:arodrime@gmail.com]
> Sent: Friday, June 19, 2015 3:56 AM
> To: user@cassandra.apache.org
> Subject: Re: nodetool repair
> Hi,
> This is not necessarily true. Repair will induce compactions only if there is entropy
in your cluster. If not, it will just read your data to compare all the replicas of each piece
of data (which does use CPU and disk I/O).
> If some data is missing, it will "repair" it. However, due to Merkle tree granularity, you
will generally stream more data than is strictly needed. To limit this downside and the amount
of compactions, use range repairs --> http://www.datastax.com/dev/blog/advanced-repair-techniques.
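The subrange technique from that post boils down to repairing many small token ranges, one at a time, via `nodetool repair -st <start> -et <end>`. A minimal sketch of how the boundaries could be computed, assuming the default Murmur3 partitioner (the helper name is hypothetical):

```python
def split_token_range(start, end, parts):
    """Split one token range into contiguous subranges, each of which
    can then be repaired with: nodetool repair -st <start> -et <end>."""
    step = (end - start) // parts
    bounds = [start + i * step for i in range(parts)] + [end]
    return list(zip(bounds[:-1], bounds[1:]))

# Murmur3Partitioner tokens span -2**63 .. 2**63 - 1.
subranges = split_token_range(-2**63, 2**63 - 1, 8)
```

Smaller subranges mean smaller Merkle tree leaves, so a mismatching leaf streams less unneeded data.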
> About tombstones: they will be evicted only after gc_grace_seconds, and only if all the
parts of the row are part of the compaction.
> C*heers,
> Alain
> 2015-06-19 9:08 GMT+02:00 arun sirimalla <arunsirik@gmail.com>:
> Yes, compactions will remove tombstones
> On Thu, Jun 18, 2015 at 11:46 PM, Jean Tremblay <jean.tremblay@zen-innovations.com> wrote:
> Perfect thank you.
> So making a weekly “nodetool repair -pr” on all nodes, one after the other, will repair
my cluster. That is great.
> If it does a compaction, does it mean that it would also clean up my tombstone from my
LeveledCompactionStrategy tables at the same time?
> Thanks for your help.
> On 19 Jun 2015, at 07:56, arun sirimalla <arunsirik@gmail.com> wrote:
> Hi Jean,
> Running nodetool repair on a node will repair only that node in the cluster. It is recommended
to run nodetool repair on one node at a time.
> A few things to keep in mind while running repair:
>    1. Running repair will trigger compactions.
>    2. Repair increases CPU utilization.
> Run nodetool repair with the -pr option, so that it repairs only the range that the node
is responsible for.
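The sequential per-node `-pr` run described above could be driven by a small wrapper script (a sketch only; the host addresses and keyspace name are placeholders, and the blocking call is what ensures each repair finishes before the next node starts):

```python
import subprocess

def repair_commands(nodes, keyspace):
    """Build one 'nodetool repair -pr' invocation per node."""
    return [["nodetool", "-h", node, "repair", "-pr", keyspace]
            for node in nodes]

def run_sequential(commands, dry_run=True):
    """Run the repairs one at a time; check_call blocks until each
    repair completes before the next one is started."""
    for cmd in commands:
        print(" ".join(cmd))
        if not dry_run:
            subprocess.check_call(cmd)

run_sequential(repair_commands(["10.0.0.1", "10.0.0.2"], "my_keyspace"))
```

With dry_run=True the script only prints the commands, which is a cheap way to sanity-check the rotation before pointing it at a live ring.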
> On Thu, Jun 18, 2015 at 10:50 PM, Jean Tremblay <jean.tremblay@zen-innovations.com> wrote:
> Thanks Jonathan.
> But I need to know the following:
> If you issue a “nodetool repair” on one node will it repair all the nodes in the
cluster or only the one on which we issue the command?
> If it repairs only one node, do I have to wait that the nodetool repair ends, and only
then issue another “nodetool repair” on the next node?
> Kind regards
> On 18 Jun 2015, at 19:19, Jonathan Haddad <jon@jonhaddad.com> wrote:
> If you're using DSE, you can schedule it automatically using the repair service. If
you're on open source, check out Spotify's Cassandra Reaper; it'll manage it for you.
> https://github.com/spotify/cassandra-reaper
> On Thu, Jun 18, 2015 at 12:36 PM, Jean Tremblay <jean.tremblay@zen-innovations.com> wrote:
> Hi,
> I want to run repairs on my cluster on a regular basis, as suggested by the documentation.
> I want to do this in a way that the cluster is still responding to read requests.
> So I understand that I should not use the -par switch for that, as it will do the repair
in parallel and consume all available resources.
> If you issue a “nodetool repair” on one node will it repair all the nodes in the
cluster or only the one on which we issue the command?
> If it repairs only one node, do I have to wait that the nodetool repair ends, and only
then issue another “nodetool repair” on the next node?
> If we had downtime periods I would issue a nodetool repair -par, but we don’t have downtime
periods.
> Sorry for the stupid questions.
> Thanks for your help.
> --
> Arun
> Senior Hadoop/Cassandra Engineer
> Cloudwick
> 2014 Data Impact Award Winner (Cloudera)
> http://www.cloudera.com/content/cloudera/en/campaign/data-impact-awards.html