cassandra-user mailing list archives

From Fernando Neves <fernando1ne...@gmail.com>
Subject Re: Phantom growth resulting automatically node shutdown
Date Mon, 23 Apr 2018 08:17:31 GMT
Thank you all guys!
We plan to upgrade our cluster to the latest 3.11.x version.

2018-04-20 7:09 GMT+08:00 kurt greaves <kurt@instaclustr.com>:

> This was fixed (again) in 3.0.15:
> https://issues.apache.org/jira/browse/CASSANDRA-13738
>
> On Fri., 20 Apr. 2018, 00:53 Jeff Jirsa, <jjirsa@gmail.com> wrote:
>
>> There have also been a few SSTable ref-counting bugs that would
>> over-report load in nodetool ring/status due to overlapping normal and
>> incremental repairs (which you should probably avoid doing anyway)
>>
>> --
>> Jeff Jirsa
>>
>>
>> On Apr 19, 2018, at 9:27 AM, Rahul Singh <rahul.xavier.singh@gmail.com>
>> wrote:
>>
>> I’ve seen something similar in 2.1. Our issue was that file
>> permissions were flipped by an automation, so C* stopped seeing its
>> SSTables and started creating new data via read repair or repair
>> processes.
>>
>> In your case, if nodetool is reporting growth, it may be genuine data
>> growth. What does your cfstats / tablestats say? Are you monitoring your
>> key tables via cfstats metrics like SpaceUsedLive or SpaceUsedTotal?
>> What is your snapshotting / backup process doing?
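The suggestion above to watch the live/total space metrics can be scripted. A minimal sketch, which parses lines in the shape that `nodetool tablestats` prints (the table name and byte values below are hypothetical stand-ins, used so the parsing can be exercised without a live cluster):

```shell
# Sample lines in the shape nodetool tablestats emits (values hypothetical).
sample='	Table: my_table
		Space used (live): 123456
		Space used (total): 234567'

# Extract the "Space used" lines; in practice you would pipe
# `nodetool tablestats my_keyspace.my_table` into this instead.
extract_space() {
  awk -F': ' '/Space used \((live|total)\)/ {
    gsub(/^[ \t]+/, "", $1)   # strip the leading indentation
    print $1 "=" $2
  }'
}

printf '%s\n' "$sample" | extract_space
```

Logging these two numbers from cron makes it easy to see whether reported load diverges from the physical disk usage over time.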
>>
>> --
>> Rahul Singh
>> rahul.singh@anant.us
>>
>> Anant Corporation
>>
>> On Apr 19, 2018, 7:01 AM -0500, horschi <horschi@gmail.com>, wrote:
>>
>> Did you check the number of files in your data folder before & after the
>> restart?
>>
>> I have seen cases where Cassandra would keep creating SSTables, which
>> disappeared on restart.
>>
>> regards,
>> Christian
>>
>>
>> On Thu, Apr 19, 2018 at 12:18 PM, Fernando Neves <
>> fernando1neves@gmail.com> wrote:
>>
>>> I am facing one issue with our Cassandra cluster.
>>>
>>> Details: Cassandra 3.0.14, 12 nodes, 7.4TB (JBOD) disk size in each node,
>>> ~3.5TB used physical data in each node, ~42TB across the whole cluster, and
>>> the default compaction setup. The size stays roughly constant because some
>>> tables are dropped after their retention period.
>>>
>>> Issue: Nodetool status is not showing the correct used size in the
>>> output. The reported used size keeps increasing without limit until the
>>> node shuts down automatically, or until our sequential scheduled restart
>>> (a workaround we run 3 times a week). After a restart, nodetool shows the
>>> correct used space, but only for a few days.
>>> Did anybody have a similar problem? Is it a bug?
>>>
>>> Stackoverflow: https://stackoverflow.com/questions/49668692/cassandra-nodetool-status-is-not-showing-correct-used-space
>>>
>>>
>>
