Think of the GB reserved for the OS as primarily supporting file caching, so the right amount is whatever suits your usage. If your use is almost exclusively reads, OS file cache matters less when your storage is the NVMe SSDs that the i3s come with. There is already a chunk cache in C* that you should be tuning instead, and feeding it fast from the OS file cache, assuming compressed SSTables, may turn out to be less of a concern.
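The C* chunk cache mentioned here is sized in cassandra.yaml; as an illustration only (the value below is hypothetical, not a recommendation):

```yaml
# cassandra.yaml -- size of the chunk cache that serves decompressed
# SSTable chunks. The value is illustrative; tune it against your own
# hit-rate measurements rather than copying it.
file_cache_size_in_mb: 2048
```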
If you have moderate write activity the situation changes, because that same file cache is where dirty background pages accumulate before they eventually flush to disk, so you have to watch for read stalls when the I/O queue fills with write requests. You might not see this as obviously on NVMe drives, but that can depend a lot on the distro, the kernel, and how the filesystem is mounted.
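The dirty-page flushing behavior described above is governed by the kernel's writeback sysctls; a sketch of the relevant knobs (the values are illustrative only, not recommendations):

```
# /etc/sysctl.conf -- kernel writeback knobs controlling when dirty
# file-cache pages are flushed. Values are illustrative; measure the
# impact on read latency before changing anything.
vm.dirty_background_ratio = 5      # % of RAM dirty before background writeback starts
vm.dirty_ratio = 10                # % of RAM dirty before writers block
vm.dirty_expire_centisecs = 3000   # age at which dirty pages become flush-eligible
```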
My super strong advice on issues like this is to not cargo-cult other people’s tunings. Look at them for ideas, sure. But learn how to do your own investigations, and budget the time for it into your project. Budget a LOT of time for it if your measure of “good performance” is based on latency; when “good” is defined in terms of throughput your life is easier. Also, everything is always a little different in virtualization, and lord knows you can have screwball things appear in AWS. The good news is you don’t need a perfect configuration out of the gate; you need a configuration you understand and can refine; understanding comes from knowing how to do your own performance monitoring.
I just copied and pasted what I found on our test machines, but I can confirm that we have the same settings in production, except for 8 GB.
I didn't choose these settings, and I need to find out why they are there.
If any of you want to share your flags for a read-heavy workload, it would be appreciated; I would swap in those flags and test them with tlp-stress.
I am considering different approaches (G1GC vs. ParNew + CMS).
How many GB of RAM do you dedicate to the OS, as a percentage or as an exact number?
Can you share the flags for ParNew + CMS so I can play with them and run a test?
On Mon, Oct 21, 2019 at 09:27, Reid Pinchback <email@example.com> wrote:
Since the instance size is < 32 GB, hopefully swap isn’t being used, so it should be moot.
Sergio, also be aware that -XX:+CMSClassUnloadingEnabled probably doesn’t do anything for you. I believe that only applies to CMS, not G1GC. I also wouldn’t take it as gospel truth that -XX:+UseNUMA is a good thing on AWS (or anything virtualized), you’d have to run your own tests and find out.
One thing to note: if you're going to use a big heap, cap it at 31GB, not 32. Once you go to 32GB you don't get to use compressed pointers, so you actually get less addressable space than at 31GB.
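The 32 GB cliff comes from compressed object pointers being 32-bit references scaled by the JVM's default 8-byte object alignment; a quick back-of-envelope check (my arithmetic, not from the thread):

```python
# Compressed oops: a 32-bit reference is scaled by the JVM's default
# 8-byte object alignment, so the largest heap it can address is
# 2^32 * 8 bytes = 32 GiB. Crossing that line forces 64-bit references.
ref_bits = 32
alignment_bytes = 8  # default HotSpot object alignment

max_addressable_bytes = (2 ** ref_bits) * alignment_bytes
print(max_addressable_bytes // 2 ** 30)  # → 32 (GiB addressable with compressed oops)
```

At a 32 GB heap every reference widens to 64 bits, so the effective capacity drops below what a 31 GB heap with compressed references gives you.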
On Mon, Oct 21, 2019 at 11:39 AM Durity, Sean R <SEAN_R_DURITY@homedepot.com> wrote:
I don’t disagree with Jon, who has all kinds of performance tuning experience. But for ease of operation, we only use G1GC (on Java 8), because the tuning of ParNew+CMS requires a high degree of knowledge and very repeatable testing harnesses. It isn’t worth our time. As a previous writer mentioned, there is usually better return on our time tuning the schema (aka helping developers understand Cassandra’s strengths).
We use 16-32 GB heaps, nothing smaller than that.
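As an illustration only (these are not the poster's actual settings), a minimal G1 starting point in jvm.options for a heap in that range might look like:

```
# Hypothetical G1 starting point for a 16 GB heap -- a sketch, not the
# poster's configuration. Pin min and max heap to avoid resizing pauses.
-Xms16G
-Xmx16G
-XX:+UseG1GC
-XX:MaxGCPauseMillis=300
-XX:+ParallelRefProcEnabled
```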
From: Jon Haddad <firstname.lastname@example.org>
Sent: Monday, October 21, 2019 10:43 AM
Subject: [EXTERNAL] Re: GC Tuning https://thelastpickle.com/blog/2018/04/11/gc-tuning.html
I still use ParNew + CMS over G1GC with Java 8. I haven't done a comparison with JDK 11 yet, so I'm not sure if it's any better. I've heard it is, but I like to verify first. The pause times with ParNew + CMS are generally lower than G1 when tuned right, but as Chris said it can be tricky. If you aren't willing to spend the time understanding how it works and why each setting matters, G1 is a better option.
I wouldn't run Cassandra in production on less than 8GB of heap - I consider it the absolute minimum. For G1 I'd use 16GB, and never 4GB with Cassandra unless you're rarely querying it.
I typically use the following as a starting point now:
ParNew + CMS
10GB new gen
2GB memtable cap, otherwise you'll spend a bunch of time copying around memtables (cassandra.yaml)
Max tenuring threshold: 2
survivor ratio 6
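Spelled out as jvm.options flags, the starting point above might look like the following sketch (the 16 GB total heap is my assumption; only the new gen, tenuring threshold, and survivor ratio come from the list):

```
# Sketch of the ParNew + CMS starting point above.
# Total heap size (16 GB) is an assumed value, not from the thread.
-Xms16G
-Xmx16G
-Xmn10G                        # 10 GB new gen
-XX:+UseParNewGC
-XX:+UseConcMarkSweepGC
-XX:MaxTenuringThreshold=2
-XX:SurvivorRatio=6
```

The 2 GB memtable cap lives in cassandra.yaml rather than jvm.options, e.g. `memtable_heap_space_in_mb: 2048`.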
I've also done some tests with a 30GB heap, 24 GB of which was new gen. This worked surprisingly well in my tests since it essentially keeps everything out of the old gen. New gen allocations are just a pointer bump and are pretty fast, so in my (limited) tests of this I was seeing really good p99 times. I was seeing a 200-400 ms pause roughly once a minute running a workload that deliberately wasn't hitting a resource limit (testing real world looking stress vs overwhelming the cluster).
We built tlp-cluster and tlp-stress to help figure these things out.
On Mon, Oct 21, 2019 at 10:24 AM Reid Pinchback <email@example.com> wrote:
An i3.xlarge has 30.5 GB of RAM, but you’re using less than 4 GB for C*. So, minus room for other uses of JVM memory and for kernel activity, that’s about 25 GB for file cache. You’ll have to see whether you want a bigger heap to allow for less frequent GC cycles, or whether you could save money on the instance size. C* generates a lot of medium-lifetime objects that can easily end up in old gen; a larger heap will reduce the burn of more old-gen collections. There are no magic numbers to just hand out, because it’ll depend on your usage patterns.
Thanks for the answer.
This is the JVM version that I have right now.
openjdk version "1.8.0_161"
OpenJDK Runtime Environment (build 1.8.0_161-b14)
OpenJDK 64-Bit Server VM (build 25.161-b14, mixed mode)
These are the current flags. Would you change anything on an i3.xlarge AWS node?
On Sat, Oct 19, 2019 at 14:30, Chris Lohfink <firstname.lastname@example.org> wrote:
"It depends" on your version and heap size but G1 is easier to get right so probably wanna stick with that unless you are using small heaps or really interested in tuning it (likely for massively smaller gains then tuning your data model). There is no GC algo that is strictly better than others in all scenarios unfortunately. If your JVM supports it, ZGC or Shenandoah are likely going to give you the best latencies.
On Fri, Oct 18, 2019 at 8:41 PM Sergio Bilello <email@example.com> wrote:
Is ParNew + CMS still better than G1GC these days?
Any recommendations for a read-heavy workload on i3.xlarge nodes?