Does tlp-stress allow us to define the size of rows? I will only see the benefit of compression in request rates if the compression ratio is significant, i.e. if it saves network round trips.
Could this be done by generating bigger partitions with the -n and -p parameters, i.e. by decreasing -p?
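For what it's worth, a sketch of what I have in mind (the flag names and the field-generator syntax are taken from tlp-stress's CLI as I understand it; the workload name and value sizes are just assumptions to illustrate):

```shell
# Sketch: 1M operations spread over 1k partitions (-> ~1000 rows each),
# with large random text payloads so compression has something to work with.
# Verify flag names against your tlp-stress version before running.
tlp-stress run KeyValue \
  -n 1000000 \
  -p 1000 \
  --field.keyvalue.value='random(4096,8192)'
```

Shrinking -p while holding -n fixed should fatten each partition, which is the case where I'd expect compression to matter most.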

Also, don't you think the driver should allow configuring compression per query? One table with wide rows could benefit from compression, while another with a smaller payload might not.
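As far as I can tell, in the current DataStax Java driver compression is negotiated per connection when a Cluster is built, so it applies to every statement on that Cluster. A minimal sketch, assuming driver 3.x (the contact point is a placeholder):

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ProtocolOptions;

public class CompressionSketch {
    public static void main(String[] args) {
        // Compression is set once per Cluster and applies to all queries
        // on its connections; there is no per-statement toggle today.
        Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")  // placeholder contact point
                .withCompression(ProtocolOptions.Compression.LZ4)
                .build();
    }
}
```

The only workaround I can see would be two Cluster instances, one compressed and one not, routing the wide-row tables through the compressed one.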

Thanks for your help Jon.

On Mon, Apr 8, 2019 at 7:13 PM, Jon Haddad (<>) wrote:
If it were me, I'd look at raw request rates (in terms of requests /
second as well as request latency), network throughput and then some
flame graphs of both the server and your application:

I've created an issue in tlp-stress to add compression options for the
driver:  If
you're interested in contributing the feature I think tlp-stress will
more or less solve the remainder of the problem for you (the load
part, not the OS numbers).


On Mon, Apr 8, 2019 at 7:26 AM Gabriel Giussi <> wrote:
> Hi, I'm trying to test if adding driver compression will bring me any benefit.
> I understand that the trade-off is less bandwidth but increased CPU usage on both the Cassandra nodes (compression) and the client nodes (decompression), but I want to know which key metrics to monitor to prove that compression is giving good results.
> I guess I should look at the latency percentiles reported by com.datastax.driver.core.Metrics and at CPU usage, but what about bandwidth usage and the compression ratio?
> Should I use tcpdump to capture the length of packets coming from the Cassandra nodes? Would something like tcpdump -n "src port 9042 and tcp[13] & 8 != 0" | sed -n "s/^.*length \(.*\).*$/\1/p" be enough?
> Thanks
