cassandra-user mailing list archives

From Romain Hardouin <romainh...@yahoo.fr>
Subject Re: Optimal value for concurrent_reads for a single NVMe Disk
Date Tue, 20 Sep 2016 12:24:04 GMT
Hi,
You should run a benchmark with cassandra-stress to find the sweet spot. With NVMe I guess
you can start with a high value, e.g. 128?
Please let us know the results of your findings; it would be interesting to know whether we
can go crazy with such pieces of hardware :-)
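As a starting point, here is a minimal sketch of the kind of sweep cassandra-stress can do. This is not from the original thread: the thread counts, operation count, and node address are placeholders, and the loop only prints the commands (dry run) — drop the `echo` to actually execute them against a test cluster.

```shell
# Sweep candidate concurrency levels with the standard "read" workload.
# Dry run: the commands are printed, not executed.
levels="32 64 128 256"
for n in $levels; do
  # n=1000000 operations per run; -node points at a test node (placeholder)
  echo "cassandra-stress read n=1000000 -rate threads=$n -node 127.0.0.1"
done
```

Compare the reported op/s and latency percentiles across runs, and set concurrent_reads near the level where throughput stops improving.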
Best,
Romain 

    On Tuesday, 20 September 2016 at 12:11, Thomas Julian <thomasjulian@zoho.com> wrote:

Hello,

We are using Cassandra 2.1.13, with each node having an NVMe disk (total capacity 1.2 TB,
allotted capacity 880 GB). We would like to increase the default value of 32 for the
parameter concurrent_reads, but the documentation says:

"(Default: 32) For workloads with more data than can fit in memory, the bottleneck is
reads fetching data from disk. Setting to (16 × number_of_drives) allows operations to queue
low enough in the stack so that the OS and drives can reorder them. The default setting applies
to both logical volume managed (LVM) and RAID drives."

https://docs.datastax.com/en/cassandra/2.1/cassandra/configuration/configCassandra_yaml_r.html#reference_ds_qfg_n1r_1k__concurrent_reads

Given this hardware specification, what would be the optimal value to set for
concurrent_reads?
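For reference, the setting lives in cassandra.yaml. A hedged illustration follows — the value shown is a placeholder to be validated by benchmarking, not a recommendation. Note that the documented formula gives 16 × 1 drive = 16 here, but NVMe devices support much deeper queues, which is why a higher value may be worth testing:

```yaml
# cassandra.yaml (excerpt) — illustrative value only, validate with cassandra-stress.
# Docs formula: 16 × number_of_drives = 16 × 1 = 16 for a single disk,
# but NVMe queue depths typically justify going well above that.
concurrent_reads: 128
```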

Best Regards,
Julian.