cassandra-user mailing list archives

From Paulo Motta <pauloricard...@gmail.com>
Subject Re: C* memory leak during compaction
Date Tue, 15 Mar 2016 13:09:14 GMT
Did you check bloom filter sizes with nodetool tablestats to see if you're
hitting CASSANDRA-11344? If that's the case, there's a patch available, along
with instructions on how to apply it, in another recent thread.
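As a minimal sketch of the check Paulo suggests (assumptions: the keyspace name `mykeyspace` is a placeholder, and `nodetool tablestats` prints a "Bloom filter space used:" line per table whose last field is a byte count; the sample values below are made up so the snippet is self-contained):

```shell
# Made-up sample of the relevant tablestats lines; in practice pipe the
# real output instead:  nodetool tablestats mykeyspace | awk ...
sample='Bloom filter space used: 1048576
Bloom filter space used: 2097152'

# Sum the bloom filter space across all tables (bytes).
total=$(printf '%s\n' "$sample" \
  | awk '/Bloom filter space used/ {s += $NF} END {print s}')
echo "$total bytes"
```

An unexpectedly large total here (hundreds of MB or more per node) would be consistent with the bloom filter issue Paulo mentions.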

2016-03-15 6:49 GMT-03:00 ssivikt@gmail.com <ssivikt@gmail.com>:

> Duplicating the answer from Russell Hatch:
>
> On 03/14/2016 07:32 PM, Russell Hatch wrote:
>
> Of course, no problem.
>
> On Sat, Mar 12, 2016 at 3:35 PM, ssivikt@gmail.com <ssivikt@gmail.com>
> wrote:
>
>> Hi,
>>
>> Thank you for your reply!
>> The thing is that I've only inserted the data and am just waiting for
>> compaction to finish. The C* process allocates all available memory during
>> compaction... I added ~700GB of swap and C* has occupied that too.
>>
>> Will it be "ok" if I duplicate your answer to user@cassandra ?
>>
>> On 03/12/2016 02:46 AM, Russell Hatch wrote:
>>
>> Hi there -- not sure if anyone got back to you on this question. I think
>> I saw your question on irc the other day -- I'm not aware of any memory
>> specific issues with 2.2.5.
>>
>> It might be worthwhile to see if you have any very large partitions in
>> your database, and any potential code that could be trying to retrieve
>> those very large partitions -- I think that could be one source for a
>> problem such as this.
>>
>> You might get some more traction on your question using the regular
>> cassandra mailing list (this list is for development of cassandra itself,
>> not development with cassandra).
>>
>> Cheers,
>>
>> Russ
>>
>> On Fri, Mar 11, 2016 at 5:38 AM, ssivikt@gmail.com <ssivikt@gmail.com>
>> wrote:
>>
>>> I have 7 nodes of C* v2.2.5 running on CentOS 7, using jemalloc for
>>> dynamic storage allocation.
>>> We use only one keyspace and one table, with the Leveled compaction
>>> strategy. I've loaded ~500 GB of data into the cluster with a replication
>>> factor of 3 and am waiting for compaction to finish. But during compaction
>>> each C* node allocates all the available memory (~128GB) and its process
>>> simply stops.
>>>
>>> Is this a known bug?
>>>
>>> --
>>> Thanks,
>>> Serj
>>>
>>>
>>
>> --
>> Thanks,
>> Serj
>>
>>
>
> --
> Thanks,
> Serj
>
>
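For Russell's earlier suggestion about very large partitions, one rough way to spot them is the "Compacted partition maximum bytes" line in `nodetool cfstats` (the 2.2-era name; later renamed `tablestats`). A sketch with made-up sample output, assuming that value is the last whitespace-separated field and using an arbitrary 100MB threshold:

```shell
# Made-up sample of the relevant cfstats lines; in practice pipe the
# real output instead:  nodetool cfstats mykeyspace | awk ...
sample='Compacted partition maximum bytes: 209715200
Compacted partition maximum bytes: 10485760'

# Flag any table whose largest compacted partition exceeds ~100MB.
printf '%s\n' "$sample" \
  | awk '/Compacted partition maximum bytes/ && $NF > 100*1024*1024 \
         {print "partition over 100MB:", $NF}'
```

Partitions in the hundreds of MB or larger are a common cause of memory pressure during compaction and reads.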
