cassandra-commits mailing list archives

From "Benjamin Roth (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (CASSANDRA-13241) Lower default chunk_length_in_kb from 64kb to 4kb
Date Mon, 27 Feb 2017 22:35:45 GMT

     [ https://issues.apache.org/jira/browse/CASSANDRA-13241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Benjamin Roth updated CASSANDRA-13241:
--------------------------------------

Hm. I have read recommendations that a single node should not carry a load of
more than 1-2 TB, and recommendations to have at least 128 GB of RAM. If I pay
2 GB of RAM at that recommended max load to get MUCH better performance on
uncached IO, which covers more than 80% of the data at those recommended
sizings (assuming equally hot data), that seems a quite fair price to me.
If there is much less hot data it probably still works, as you only trade
page cache for faster IO. The less hot data there is, the less page cache is
required.
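
(To sanity-check the 2 GB figure: a rough sketch, assuming Cassandra keeps one
8-byte compressed-chunk offset in memory per chunk:

    1 TB / 4 KB per chunk        = ~268 million chunks
    268 million chunks * 8 bytes = ~2 GB of offset metadata

With the current 64kb default the same terabyte needs only ~128 MB, so the
price of 4kb chunks is roughly 2 GB of extra chunk metadata per TB of data.)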

Did I miss a point?

Btw, 4kb worked perfectly for me with 460GB load / 128GB RAM. 64kb did not
work well. Really.
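
In case someone wants to try this per table before any default changes: the
chunk size is set through the table's compression options (ks.tbl below is a
placeholder), e.g.

    ALTER TABLE ks.tbl
      WITH compression = {'class': 'LZ4Compressor', 'chunk_length_in_kb': 4};

and since the option only applies to newly written SSTables, existing ones
have to be rewritten, e.g. with

    nodetool upgradesstables -a ks tbl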




> Lower default chunk_length_in_kb from 64kb to 4kb
> -------------------------------------------------
>
>                 Key: CASSANDRA-13241
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-13241
>             Project: Cassandra
>          Issue Type: Wish
>          Components: Core
>            Reporter: Benjamin Roth
>
> Having too low a chunk size may result in some wasted disk space. Too high a chunk size
> may lead to massive overreads and can have a critical impact on overall system performance.
> In my case, the default chunk size led to peak read IO of up to 1GB/s and average reads
> of 200MB/s. After lowering the chunk size (aligned with read-ahead, of course), the average
> read IO went below 20 MB/s, more like 10-15MB/s.
> The risk of (physical) overreads increases as the (page cache size) / (total data size)
> ratio gets lower.
> High chunk sizes are mostly appropriate for bigger payloads per request, but if the model
> consists mostly of small rows or small result sets, the read overhead with a 64kb chunk
> size is insanely high. This applies, for example, to (small) skinny rows.
> Please also see here:
> https://groups.google.com/forum/#!topic/scylladb-dev/j_qXSP-6-gY
> To give you some insight into what a difference it can make (460GB data, 128GB RAM):
> - Latency of a quite large CF: https://cl.ly/1r3e0W0S393L
> - Disk throughput: https://cl.ly/2a0Z250S1M3c
> - This shows that the request distribution remained the same, so no "dynamic snitch"
> magic: https://cl.ly/3E0t1T1z2c0J



