cassandra-commits mailing list archives

From "Quentin Conner (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CASSANDRA-6127) vnodes don't scale to hundreds of nodes
Date Thu, 24 Oct 2013 16:28:03 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-6127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13804361#comment-13804361
] 

Quentin Conner commented on CASSANDRA-6127:
-------------------------------------------

*Background and Reproduction*

The symptom is evident in the presence of "is now DOWN" messages in the Cassandra system.log
file.  The recording of a node DOWN is often followed by a node UP a few seconds later.  Users
have coined this phenomenon "gossip flap", and the occurrence of gossip flaps has both a machine
and a human consequence.
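
Since the flaps are recorded in the log, they can be tallied per endpoint (a minimal
sketch, assuming the stock Log4J format in which the Gossiper logs lines like
"InetAddress /10.0.0.5 is now DOWN"):

```shell
# count_flaps: read system.log lines on stdin and print the number of
# "is now DOWN" events recorded per endpoint, busiest flapper first
count_flaps() {
  grep "is now DOWN" \
    | grep -oE '/[0-9]+(\.[0-9]+){3}' \
    | sort | uniq -c | sort -rn
}
```

Usage: count_flaps < /var/log/cassandra/system.log (log path varies by install).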

Humans react strongly to the (temporary) marking of a node as down.  Automated monitoring may
trigger SNMP traps, etc.  A "busy" node that doesn't transmit heartbeat gossip messages on
time will be marked "down" even though it may still be performing useful work.

Machine reactions include other C* nodes buffering row mutations and storing hints on
disk when a node is marked down.  I have not explored the machine reactions in depth, but I
imagine the endpointSnitch could also be affected from the client's frame of reference.

One piece of good news is that I was able to reproduce two different use cases that elicit
the "is now DOWN" message in Log4J log files.

Use Case #1 is as follows:
  provision 256 or 512 nodes in EC2
  install Cassandra 1.2.9
  take defaults except specify num_tokens=256 in c*.yaml
  start one node at a time
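
The per-node setup above can be scripted; the only non-default setting is the token
count (a sketch, assuming a stock 1.2.9 tarball install whose cassandra.yaml ships
with num_tokens commented out):

```shell
# set_num_tokens FILE: uncomment and set num_tokens to 256 in a
# cassandra.yaml (handles both "# num_tokens: ..." and an existing value)
set_num_tokens() {
  sed -i 's/^#\{0,1\} *num_tokens:.*/num_tokens: 256/' "$1"
}
```

Usage: set_num_tokens conf/cassandra.yaml on each node, then start the nodes one
at a time with bin/cassandra, waiting for each to join before starting the next.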

Use Case #2 is as follows:
  provision 32 nodes in EC2
  install Cassandra 1.2.9
  take defaults in c*.yaml
  configure rack
  start one node at a time
  when all nodes are up create about 1GB of data
    e.g. "tools/bin/cassandra-stress -c 20 -l 3 -n 1000000"
  provision a 33rd (extra) node in EC2
  install Cassandra 1.2.9
  take defaults except specify num_tokens=256
  start the node (auto_bootstrap=true)
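
Once the 33rd node begins bootstrapping, each existing node's view of the ring shows
whether peers are being marked down (a sketch; in 1.2 the `nodetool ring` output
includes a status column reading Up or Down):

```shell
# count_down: count the peers reported Down in `nodetool ring`
# output supplied on stdin
count_down() {
  awk '/ Down /{n++} END{print n+0}'
}
```

Usage: bin/nodetool ring | count_down, polled periodically while the new node joins.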




> vnodes don't scale to hundreds of nodes
> ---------------------------------------
>
>                 Key: CASSANDRA-6127
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6127
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>         Environment: Any cluster that has vnodes and consists of hundreds of physical
nodes.
>            Reporter: Tupshin Harper
>            Assignee: Jonathan Ellis
>
> There are a lot of gossip-related issues related to very wide clusters that also have
vnodes enabled. Let's use this ticket as a master in case there are sub-tickets.
> The most obvious symptom I've seen is with 1000 nodes in EC2 with m1.xlarge instances.
Each node configured with 32 vnodes.
> Without vnodes, cluster spins up fine and is ready to handle requests within 30 minutes
or less. 
> With vnodes, nodes are reporting constant up/down flapping messages with no external
load on the cluster. After a couple of hours, they were still flapping, had very high cpu
load, and the cluster never looked like it was going to stabilize or be useful for traffic.



--
This message was sent by Atlassian JIRA
(v6.1#6144)
