cassandra-user mailing list archives

From Hannes Schmidt <han...@eyealike.com>
Subject Re: Native heap leaks?
Date Mon, 09 May 2011 17:17:12 GMT
On Thu, May 5, 2011 at 4:16 PM, aaron morton <aaron@thelastpickle.com> wrote:
> Hannes,
>        To get a baseline of behaviour, set DiskAccessMode to standard. You will probably
> want to keep it like that if you want better control over the memory on the box.

I'll do a test with standard and report back.
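For the baseline itself, I'll just compare the OS view against the JVM view side by side, something along these lines (the pid is made up):

    # OS view: resident set size in kB
    ps -o rss= -p 12345

    # JVM view: heap and PermGen capacity/usage (PC/PU columns), every 10s
    jstat -gc 12345 10s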

>
>        Also connect to the box with JConsole and look at the PermGen space used; it
> is not included in the max heap space setting. You can also check the heap usage there; running
> inside of 1G is very tricky.

PermGen is at 25M, which doesn't explain the 700-1000M of RSS overhead.
Nevertheless, I wasn't aware that PermGen isn't capped by -Xmx, so
thank you for pointing it out.
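For anyone else tripped up by this: PermGen (and, for that matter, direct buffer memory) can be capped explicitly with standard HotSpot flags, e.g. (values here are arbitrary, not a recommendation):

    -Xms1G -Xmx1G -XX:MaxPermSize=64M -XX:MaxDirectMemorySize=256M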

>
>        If you want to keep it inside of 2GB, try setting the heap max to 1.5G,
> use standard IO, disable caches, and use a low memtable threshold (it depends on how many
> CFs you have; try 32MB).

I'm not sure I follow. Besides the slowly increasing RSS, Cassandra
works great for us with a 1G heap. Don't the caches and memtables
live in the heap? I am not seeing any GC pressure at all, so 1G should
be OK. Or do the caches and memtables have native components attached
to them, like JNA-allocated memory or direct byte buffers?
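To illustrate what I mean by native components: a direct byte buffer's backing memory is malloc'ed outside the Java heap, so RSS can grow while -Xmx never notices. A contrived sketch (sizes arbitrary, not Cassandra code; run with a large -XX:MaxDirectMemorySize so the allocations succeed):

    import java.nio.ByteBuffer;
    import java.util.ArrayList;
    import java.util.List;

    public class DirectBufferDemo {
        public static void main(String[] args) throws InterruptedException {
            List<ByteBuffer> retained = new ArrayList<ByteBuffer>();
            for (int i = 0; i < 16; i++) {
                // Each buffer's 64 MB live outside the Java heap: RSS grows,
                // but heap usage (totalMemory - freeMemory) barely moves.
                retained.add(ByteBuffer.allocateDirect(64 * 1024 * 1024));
                long heapUsed = (Runtime.getRuntime().totalMemory()
                        - Runtime.getRuntime().freeMemory()) / (1024 * 1024);
                System.out.printf("heap used: %d MB, direct buffers held: %d MB%n",
                        heapUsed, (i + 1) * 64L);
            }
            Thread.sleep(60000); // leave time to inspect RSS with ps/top
        }
    }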

>
> Hope that helps.
>
> -----------------
> Aaron Morton
> Freelance Cassandra Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 5 May 2011, at 22:30, Hannes Schmidt wrote:
>
>> This was my first thought, too. We switched to mmap_index_only and
>> didn't see any change in behavior. Looking at the smaps file attached
>> to my original post, one can see that the mmapped index files take up
>> only a minuscule part of RSS.
>>
>> On Wed, May 4, 2011 at 11:37 PM, Oleg Anastasyev <oleganas@gmail.com> wrote:
>>> This is probably because of the mmapped I/O access mode, which is enabled by default
>>> in 64-bit VMs - RAM is occupied by the data files.
>>> If you have such tight memory requirements, you can turn on standard access mode in
>>> storage-conf.xml, but don't expect it to be as fast:
>>> <!--
>>>
>>>
>>>  ~ Access mode.  mmapped i/o is substantially faster, but only practical on
>>>
>>>
>>>  ~ a 64bit machine (which notably does not include EC2 "small" instances)
>>>
>>>
>>>  ~ or relatively small datasets.  "auto", the safe choice, will enable
>>>
>>>
>>>  ~ mmapping on a 64bit JVM.  Other values are "mmap", "mmap_index_only"
>>>
>>>
>>>  ~ (which may allow you to get part of the benefits of mmap on a 32bit
>>>
>>>
>>>  ~ machine by mmapping only index files) and "standard".
>>>
>>>
>>>  ~ (The buffer size settings that follow only apply to standard,
>>>
>>>
>>>  ~ non-mmapped i/o.)
>>>
>>>
>>>  -->
>>>
>>>
>>>  <DiskAccessMode>standard</DiskAccessMode>
>>>
>>>
>
>

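P.S. For the smaps observation quoted above: here is a rough sketch of how one can total the resident size of just the mmapped data files on Linux (the pattern and class name are made up, not Cassandra tooling):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    // Sums the Rss of every /proc/<pid>/smaps mapping whose header line
    // matches a pattern, e.g. "Index.db" for the mmapped index files.
    public class SmapsRss {
        public static void main(String[] args) throws IOException {
            String pid = args[0];
            String pattern = args.length > 1 ? args[1] : "Index.db";
            long rssKb = 0;
            boolean matching = false;
            BufferedReader in = new BufferedReader(
                    new FileReader("/proc/" + pid + "/smaps"));
            try {
                String line;
                while ((line = in.readLine()) != null) {
                    // Mapping header lines start with a hex address range.
                    if (line.matches("^[0-9a-f]+-[0-9a-f]+\\s.*")) {
                        matching = line.contains(pattern);
                    } else if (matching && line.startsWith("Rss:")) {
                        // e.g. "Rss:        1024 kB" -> 1024
                        rssKb += Long.parseLong(line.replaceAll("\\D", ""));
                    }
                }
            } finally {
                in.close();
            }
            System.out.println("Rss of mappings matching '" + pattern
                    + "': " + rssKb + " kB");
        }
    }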