cassandra-commits mailing list archives

From "Chris Burroughs (JIRA)" <>
Subject [jira] [Commented] (CASSANDRA-4288) prevent thrift server from starting before gossip has settled
Date Fri, 08 Nov 2013 02:30:21 GMT


Chris Burroughs commented on CASSANDRA-4288:

I'd like to expand this to the native protocol and bootstrap. [~driftx], did you have an objection
to the "are there pending tasks?" heuristic? [~jbellis] mentioned "I have enough gossip data
now?" in CASSANDRA-6127, but on first pass it's not clear to me how to easily define "enough".
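The "are there pending tasks?" heuristic could be sketched roughly as follows. This is a hypothetical illustration, not Cassandra's actual code: the class name, constants, and the pending-task supplier are all invented for the example. The idea is to poll the gossip stage's pending-task count and treat gossip as settled only once the count has been zero for several consecutive polls.

```java
import java.util.function.IntSupplier;

// Hypothetical sketch of a "pending tasks" settle check. Names and
// constants are illustrative, not Cassandra's API: poll the gossip
// stage's pending-task count and declare gossip settled once it has
// been zero for several consecutive polls.
public class GossipSettleCheck {
    static final int REQUIRED_QUIET_POLLS = 3; // consecutive zero-pending polls needed
    static final int MAX_POLLS = 100;          // give up eventually

    /** Returns the number of polls taken to settle, or -1 on timeout. */
    public static int waitForSettle(IntSupplier pendingGossipTasks) {
        int quietPolls = 0;
        for (int poll = 1; poll <= MAX_POLLS; poll++) {
            if (pendingGossipTasks.getAsInt() == 0) {
                quietPolls++;
                if (quietPolls >= REQUIRED_QUIET_POLLS)
                    return poll;   // quiet long enough: settled
            } else {
                quietPolls = 0;    // any activity resets the quiet streak
            }
            // a real implementation would sleep between polls
        }
        return -1; // gossip never settled within the poll budget
    }

    public static void main(String[] args) {
        // Simulated pending-task counts: busy for 4 polls, then quiet.
        int[] pending = {5, 3, 2, 1, 0, 0, 0, 0};
        int[] idx = {0};
        int polls = waitForSettle(() -> pending[Math.min(idx[0]++, pending.length - 1)]);
        System.out.println("settled after " + polls + " polls"); // settled after 7 polls
    }
}
```

The consecutive-polls requirement is what makes this a heuristic: a momentarily empty queue on a busy cluster does not mean gossip is done, so one zero reading alone is not trusted.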

> prevent thrift server from starting before gossip has settled
> -------------------------------------------------------------
>                 Key: CASSANDRA-4288
>                 URL:
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core
>            Reporter: Peter Schuller
>            Assignee: Chris Burroughs
>         Attachments: CASSANDRA-4288-trunk.txt
> A serious problem is that there is no co-ordination whatsoever between gossip and the
consumers of gossip. In particular, on a large cluster with hundreds of nodes, it takes several
seconds for gossip to settle because the gossip stage is CPU bound. This leads to a node starting
up and accepting thrift traffic long before it has any clue of what is up and down, which causes
client-visible timeouts (for nodes that are down but not yet identified as such) and UnavailableException
(for nodes that are up but not yet identified as such). This is really bad in general, but
in particular for clients doing non-idempotent writes (counter increments).
> I was going to fix this as part of more significant re-writing in other tickets having
to do with gossip/topology/etc, but that's not going to happen. So, the attached patch is
roughly what we're running with in production now to make restarts bearable. The minimum wait
time serves two purposes: it ensures gossip has time to become CPU bound if it is going to be,
and it is large enough that down nodes can be identified as such in most typical cases with
the default phi conviction threshold (untested; we actually ran with a smaller minimum of
5 seconds, but from past experience I believe 15 seconds is enough).
> The patch is tested on our 1.1 branch. It applies on trunk, and the diff is against trunk,
but I have not tested it against trunk.
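The minimum-wait idea described above could be sketched like this. This is a minimal illustration, not the attached patch: the class and method names are invented, and only the 15-second default and the 5-second production value come from the ticket. Thrift startup is simply held back until a fixed settle window has elapsed since gossip started, giving the failure detector time to convict down nodes under the default phi conviction threshold.

```java
// Minimal sketch of the minimum-wait approach (illustrative only; the
// names here are invented, not from the actual patch): delay thrift
// startup until a fixed settle window has passed since gossip started.
public class ThriftStartupDelay {
    // 15 seconds per the ticket; 5 seconds was used in production.
    static final long MIN_SETTLE_MILLIS = 15_000;

    /** Milliseconds still to wait, given how long gossip has been running. */
    public static long remainingWait(long minSettleMillis, long elapsedMillis) {
        return Math.max(0, minSettleMillis - elapsedMillis);
    }

    public static void startWhenSettled(long gossipStartMillis, Runnable startThrift)
            throws InterruptedException {
        long elapsed = System.currentTimeMillis() - gossipStartMillis;
        long wait = remainingWait(MIN_SETTLE_MILLIS, elapsed);
        if (wait > 0)
            Thread.sleep(wait); // block thrift startup until the window elapses
        startThrift.run();      // only now start accepting client traffic
    }

    public static void main(String[] args) {
        // Gossip has been running 4s; 11s of the 15s window remain.
        System.out.println(remainingWait(15_000, 4_000)); // 11000
    }
}
```

Unlike the pending-tasks heuristic, this fixed delay pays its full cost on every restart even when gossip settles quickly, which is presumably why the ticket discusses a settle check rather than only a timer.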

This message was sent by Atlassian JIRA
