cassandra-commits mailing list archives

From "Hudson (JIRA)" <>
Subject [jira] Commented: (CASSANDRA-483) clean up bootstrap code, 2
Date Wed, 28 Oct 2009 12:36:59 GMT


Hudson commented on CASSANDRA-483:

Integrated in Cassandra #241 (See [])
    refactor bootstrap to only concern itself with bootstrapping the local node, which greatly
simplifies things
patch by jbellis; reviewed by goffinet for 
rename getRangeMap -> getRangeAddresses; add inverse getAddressRanges
patch by jbellis; reviewed by goffinet for 
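The rename above pairs a range-to-endpoints view with its inverse. A minimal sketch of the idea (simplified Strings for ranges and endpoints, not the actual Cassandra code; the class and method bodies here are illustrative assumptions):

```java
import java.util.*;

// Hedged sketch: token metadata as a multimap from token range to replica
// endpoints, plus the inverse view added by the commit above.
public class RangeMaps {
    // range -> endpoints holding replicas of that range
    private final Map<String, Set<String>> rangeToEndpoints = new HashMap<>();

    void addReplica(String range, String endpoint) {
        rangeToEndpoints.computeIfAbsent(range, k -> new HashSet<>()).add(endpoint);
    }

    // the renamed accessor: which endpoints serve each range
    Map<String, Set<String>> getRangeAddresses() {
        return rangeToEndpoints;
    }

    // the new inverse: which ranges each endpoint is responsible for
    Map<String, Set<String>> getAddressRanges() {
        Map<String, Set<String>> inverse = new HashMap<>();
        for (Map.Entry<String, Set<String>> e : rangeToEndpoints.entrySet())
            for (String endpoint : e.getValue())
                inverse.computeIfAbsent(endpoint, k -> new HashSet<>()).add(e.getKey());
        return inverse;
    }
}
```

The inverse is recomputed on demand rather than maintained incrementally, which keeps the sketch obviously correct at the cost of a full pass over the map.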
fix the bootstrap interaction with gossip; there were two main problems:
1) token and bootstrap state are not guaranteed to be gossiped together; since we only updated
TMD.bootstrapNodes on an update of the token, we could actually miss the bootstrap.
2) deletions of state are not actually supported by Gossiper; there is no concept of that
at the protocol level, so if we delete state locally it will never get gossiped. Instead,
we have a MODE that is either MOVING or NORMAL, corresponding to bootstrap and normal operation.

patch by jbellis; reviewed by goffinet for 
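The second problem above can be sketched as follows: since Gossiper can only overwrite state, never delete it, the node always advertises a MODE value and simply changes it when bootstrap finishes. This is a minimal illustration of that idea, not the actual Cassandra implementation; the class, field, and method names are assumptions:

```java
import java.util.*;

// Hedged sketch: rather than deleting a "bootstrapping" entry (which the
// gossip protocol cannot propagate), the node always gossips a MODE that
// is either MOVING (bootstrapping) or NORMAL, and only ever overwrites it.
public class GossipMode {
    enum Mode { MOVING, NORMAL }

    // stands in for the node's gossiped application state
    private final Map<String, String> applicationState = new HashMap<>();

    void startBootstrap() {
        applicationState.put("MODE", Mode.MOVING.name());  // overwrite, never delete
    }

    void finishBootstrap() {
        applicationState.put("MODE", Mode.NORMAL.name());  // transition, not removal
    }

    boolean isBootstrapping() {
        return Mode.MOVING.name().equals(applicationState.get("MODE"));
    }
}
```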
rename away underscores
patch by jbellis; reviewed by goffinet for 
r/m single-use executor in favor of a Thread
patch by jbellis; reviewed by goffinet for 

> clean up bootstrap code, 2
> --------------------------
>                 Key: CASSANDRA-483
>                 URL:
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Tools
>            Reporter: Jonathan Ellis
>            Assignee: Jonathan Ellis
>             Fix For: 0.5
>         Attachments: 0001-CASSANDRA-483-r-m-single-use-executor-in-favor-of-a-Th.txt,
0002-rename-away-underscores.txt, 0003-fix-the-bootstrap-interaction-with-gossip-there-were.txt,
0004-rename-getRangeMap-getRangeAddresses-add-inverse-g.txt, 0005-refactor-bootstrap-to-only-concern-itself-with-bootstr.txt
> existing bootstrap code overengineers things a bit by allowing multiple nodes to bootstrap
into the same span of the ring simultaneously.  but this doesn't handle the case where one
of them doesn't complete the bootstrap.  one possible response would be to transfer that node's
span to one of the other new nodes, but then you're no longer evenly dividing the ring.  starting
over with recomputed tokens for the remaining nodes is significantly more complicated.
> in short I think the right solution is to handle each node independently.   if only one
node bootstraps into a ring segment at a time, nothing changes.  but if another node bootstraps
in before the first finishes, we just say "okay" and send them each the data they would get
_if it were the only node bootstrapping_.  So if one fails, we don't have to do any extra
work.  If all succeed, the penalty is we transferred too much to some nodes but that will
be taken care of by the existing cleanup compaction code.
> (this does mean that we can't automatically pick tokens while a bootstrap is in progress,
though, or it will pick the same one for both, which is undesirable.  but saying "if you
want to bootstrap multiple nodes into the same ring span at once, you have to manually specify
the tokens" seems reasonable to me.  (especially since that was already the case under the
old system, if you didn't want just random tokens.))
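The independent-bootstrap idea described above can be sketched with a toy token ring: each joining node computes its pending range against the current ring only, ignoring any other node bootstrapping at the same time. This is an illustration under simplifying assumptions (integer tokens, a hypothetical `pendingRange` helper), not the actual Cassandra code:

```java
import java.util.*;

// Hedged sketch: a new token claims the range (predecessor(newToken), newToken]
// on the ring. Because the computation looks only at existing ring members,
// two simultaneous bootstraps into the same segment get overlapping ranges;
// the surplus data is later reclaimed by cleanup compaction.
public class IndependentBootstrap {
    static int[] pendingRange(TreeSet<Integer> ringTokens, int newToken) {
        Integer predecessor = ringTokens.lower(newToken);
        if (predecessor == null)
            predecessor = ringTokens.last();  // wrap around the ring
        return new int[] { predecessor, newToken };
    }
}
```

For example, with existing tokens {10, 50}, nodes joining at tokens 25 and 30 independently compute (10, 25] and (10, 30]: the overlap (10, 25] is streamed to both, so neither bootstrap depends on the other completing.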

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
