cassandra-user mailing list archives

From Hossein Ghiyasi Mehr <ghiyasim...@gmail.com>
Subject Re: "Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB"
Date Wed, 04 Dec 2019 10:30:05 GMT
"3. Though DataStax does not recommend it and instead recommends horizontal
scaling, depending on your requirements an old-fashioned alternative is to
add swap space."
Hi Shishir,
Swap isn't recommended by DataStax!

*-------------------------------------------------------*
*VafaTech.com - A Total Solution for Data Gathering & Analysis*
*-------------------------------------------------------*


On Tue, Dec 3, 2019 at 5:53 PM Shishir Kumar <shishirroy2000@gmail.com>
wrote:

> Options, assuming the data model and configuration are good and the data
> size per node is less than 1 TB (though there is no hard benchmark for that):
>
> 1. Infra scale for memory
> 2. Try changing disk_access_mode to mmap_index_only.
> In that case only index files, not data files, will be memory-mapped.
> 3. Though DataStax does not recommend it and instead recommends horizontal
> scaling, depending on your requirements an old-fashioned alternative is to add swap space.
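
For concreteness: the setting mentioned in option 2 lives in cassandra.yaml. It is usually absent from the shipped file, so the line has to be added by hand; the value names below reflect Cassandra 3.x and should be verified against the documentation for your version. A sketch:

```yaml
# cassandra.yaml -- disk_access_mode is not listed in the default file;
# add it manually. Valid values in Cassandra 3.x include:
#   auto (default), mmap, mmap_index_only, standard
# With mmap_index_only, only index files are memory-mapped, not data files.
disk_access_mode: mmap_index_only
```

A restart is needed for the change to take effect.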
>
> -Shishir
>
> On Tue, 3 Dec 2019, 15:52 John Belliveau, <belliveau.john@gmail.com>
> wrote:
>
>> Reid,
>>
>> I've only been working with Cassandra for 2 years, and this echoes my
>> experience as well.
>>
>> Regarding the cache use, I know every use case is different, but have you
>> experimented and found any performance benefit to increasing its size?
>>
>> Thanks,
>> John Belliveau
>>
>>
>> On Mon, Dec 2, 2019, 11:07 AM Reid Pinchback <rpinchback@tripadvisor.com>
>> wrote:
>>
>>> Rahul, if my memory of this is correct, that particular logging message
>>> is noisy; the cache is pretty much always used to its limit (and why not,
>>> it’s a cache, no point in using less than you have).
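
If the goal is simply to quiet this particular message, one option, assuming Cassandra 3.x where it is emitted (rate-limited through NoSpamLogger) by the BufferPool class, is to raise that logger's level in conf/logback.xml. Verify the class name against your version before relying on this:

```xml
<!-- conf/logback.xml: raise the chunk-cache logger above INFO so the
     "Maximum memory usage reached" messages are suppressed.
     The class name assumes Cassandra 3.x; check your version. -->
<logger name="org.apache.cassandra.utils.memory.BufferPool" level="WARN"/>
```

This only hides the message; it does not change cache behavior.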
>>>
>>>
>>>
>>> No matter what value you set, you’ll just change the “reached (….)” part
>>> of it.  I think what would help you more is to work with the team(s) that
>>> have apps depending upon C* and decide what your performance SLA is with
>>> them.  If you are meeting your SLA, you don’t care about noisy messages.
>>> If you aren’t meeting your SLA, then the noisy messages become sources of
>>> ideas to look at.
>>>
>>>
>>>
>>> One thing you’ll find out pretty quickly.  There are a lot of knobs you
>>> can turn with C*, too many to allow for easy answers on what you should
>>> do.  Figure out what your throughput and latency SLAs are, and you’ll know
>>> when to stop tuning.  Otherwise you’ll discover that it’s a rabbit hole you
>>> can dive into and not come out of for weeks.
>>>
>>>
>>>
>>>
>>>
>>> *From: *Hossein Ghiyasi Mehr <ghiyasimehr@gmail.com>
>>> *Reply-To: *"user@cassandra.apache.org" <user@cassandra.apache.org>
>>> *Date: *Monday, December 2, 2019 at 10:35 AM
>>> *To: *"user@cassandra.apache.org" <user@cassandra.apache.org>
>>> *Subject: *Re: "Maximum memory usage reached (512.000MiB), cannot
>>> allocate chunk of 1.000MiB"
>>>
>>>
>>>
>>>
>>> It may be helpful:
>>> https://thelastpickle.com/blog/2018/08/08/compression_performance.html
>>>
>>> It's complex. The simple explanation: Cassandra keeps sstable chunks in
>>> memory based on the chunk size and which parts of the sstables are read.
>>> It correctly manages loading new sstables into memory based on the
>>> requests hitting different sstables. You shouldn't worry about which
>>> sstables are loaded in memory.
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Mon, Dec 2, 2019 at 6:18 PM Rahul Reddy <rahulreddy1234@gmail.com>
>>> wrote:
>>>
>>> Thanks Hossein,
>>>
>>>
>>>
>>> How are chunks moved out of memory (LRU?) when it needs to make room
>>> for new chunk requests? If there is a mechanism to clear chunks from the
>>> cache, what causes the "cannot allocate chunk" message? Can you point me
>>> to any documentation?
>>>
>>>
>>>
>>> On Sun, Dec 1, 2019, 12:03 PM Hossein Ghiyasi Mehr <
>>> ghiyasimehr@gmail.com> wrote:
>>>
>>> Chunks are parts of sstables. When there is enough space in memory to
>>> cache them, read performance will increase if the application requests
>>> them again.
>>>
>>>
>>>
>>> The real answer is application dependent. For example, write-heavy
>>> applications are different from read-heavy or mixed read-write ones, and
>>> real-time applications are different from time-series environments.
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Sun, Dec 1, 2019 at 7:09 PM Rahul Reddy <rahulreddy1234@gmail.com>
>>> wrote:
>>>
>>> Hello,
>>>
>>>
>>>
>>> We are seeing "Maximum memory usage reached (512.000MiB), cannot
>>> allocate chunk of 1.000MiB".  I see this because file_cache_size_mb is
>>> set to 512 MB by default.
>>>
>>>
>>>
>>> The DataStax documentation recommends increasing file_cache_size_mb.
>>>
>>>
>>>
>>> We have 32 GB of memory overall, with 16 GB allocated to Cassandra. What
>>> is the recommended value in my case? Also, this memory gets filled up
>>> frequently; does nodetool flush help avoid these INFO messages?
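
For reference, the setting being discussed is file_cache_size_mb in cassandra.yaml. The value below is only an illustration, not a recommendation; note that this cache is allocated off-heap, so it competes with the OS page cache and other off-heap consumers rather than with the 16 GB heap:

```yaml
# cassandra.yaml -- chunk cache sizing (illustrative value, not a recommendation)
file_cache_size_mb: 1024    # default is 512; the memory is allocated off-heap

# Cassandra 3.x also has a fallback switch: when the pool is exhausted,
# allocate the buffer on heap instead of failing (the exhaustion is what
# triggers the "Maximum memory usage reached" INFO message).
buffer_pool_use_heap_if_exhausted: true
```

Since nodetool flush flushes memtables (heap/off-heap write buffers), not the chunk cache, it would not be expected to affect these messages.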
>>>
>>>
