cassandra-commits mailing list archives

From "Daniel Klopp (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CASSANDRA-12557) Cassandra 3.0.6 New Node Perpetually in UJ State and Streams More Data Than Any Node
Date Fri, 26 Aug 2016 19:07:21 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-12557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15439599#comment-15439599 ]

Daniel Klopp commented on CASSANDRA-12557:
------------------------------------------

Adding cassandra.yaml file
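
For anyone comparing against their own configuration: the report below mentions tweaking cassandra.yaml to keep the bootstrap streams from failing. The streaming-related knobs most commonly adjusted for that are sketched here with typical values; this is only an illustration and may not match the attached file.

    # Hypothetical streaming-related settings; the attached cassandra.yaml may differ.
    stream_throughput_outbound_megabits_per_sec: 200   # throttle outbound streaming bandwidth
    streaming_socket_timeout_ms: 86400000               # allow long-running stream sessions (24 h)
    phi_convict_threshold: 12                            # make the failure detector less aggressive during bootstrap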

> Cassandra 3.0.6 New Node Perpetually in UJ State and Streams More Data Than Any Node
> ------------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-12557
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-12557
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Streaming and Messaging
>          Environment: Ubuntu 14.04, AWS EC2, m4.2xlarge, 2TB dedicated data disks per node (except node 5, with 2x2TB dedicated data disks), Cassandra 3.0.6
>            Reporter: Daniel Klopp
>             Fix For: 3.x
>
>         Attachments: cassandra.yaml
>
>
> Hello,
> We are using Cassandra 3.0.6 and have added a fifth Cassandra node to our four-node cluster.
> Earlier on, the streams kept failing. I tweaked some cassandra.yaml settings and got them to
> stop failing. However, we have noticed strange behavior in the sync. Please see the output
> of nodetool:
> ubuntu@ip-172.16.1.5:~$ nodetool status
> Datacenter: datacenter1
> =======================
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address     Load       Tokens  Owns    Host ID                               Rack
> UJ  172.16.1.5  1.48 TB    256     ?       a797ed18-1d50-4b19-924a-f6b37b8859af  rack1
> UN  172.16.1.1  988.83 GB  256     ?       9eec70ec-5d7a-4ba8-bba8-f7d229d00358  rack1
> UN  172.16.1.2  891.9 GB   256     ?       1d429d87-ec4a-4e14-92d7-df2aa129041e  rack1
> UN  172.16.1.3  985.48 GB  256     ?       677c7585-ed31-4afc-b17c-288a3a1e3666  rack1
> UN  172.16.1.4  760.38 GB  256     ?       13ab7037-ec9b-4031-8d6c-4db95b91fa21  rack1
> Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless
> ubuntu@ip-172.16.1.5:~$
> The fifth node is 172.16.1.5. Why is its load 1.48 TB when all of the original four
> nodes are under 1 TB? I can also see this in disk usage. The original four nodes are
> using 900 GB to 1100 GB on the data volume; the fifth node, however, has ballooned to
> 2380 GB. I had to stop the sync and add a second disk to accommodate it.
> I've attached our cassandra.yaml file. What could be causing this?
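
One way to investigate the size gap while the node is still in UJ is to watch the streaming sessions and the compaction backlog on the joining node; streamed SSTables are not compacted down until compaction catches up, so a bootstrapping node can temporarily use noticeably more disk than its eventual share. The keyspace name below is only a placeholder.

    nodetool netstats            # streaming progress per session on the joining node
    nodetool compactionstats     # pending compactions still working through the streamed SSTables
    nodetool status my_keyspace  # ownership figures that are meaningful for one keyspace's replication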



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
