cassandra-user mailing list archives

From Marco Gasparini <marco.gaspar...@competitoor.com>
Subject read failures and high read latency
Date Mon, 26 Aug 2019 09:09:50 GMT
Hi everybody,

I'm experiencing some read failures and high read latency (see the
attached picture for more details).

- I have a cluster of 6 nodes, each with 1.5TB of occupied disk space,
running Cassandra 3.11.4.

4 nodes have 32GB of RAM; the Cassandra heap is set to Xms8G/Xmx8G.
2 nodes have 16GB of RAM; the Cassandra heap is set to Xms4G/Xmx4G.

Each node has spinning disks.

- Some relevant fields from my cassandra.yaml configuration:

concurrent_reads: 64
concurrent_writes: 64
concurrent_counter_writes: 64

file_cache_size_in_mb: 2048

memtable_cleanup_threshold: 0.2
memtable_flush_writers: 4
memtable_allocation_type: offheap_objects

- CQL schema and RF:

CREATE KEYSPACE myks WITH replication = {'class':
'NetworkTopologyStrategy', 'DC1': '3'}  AND durable_writes = false;
CREATE TABLE myks.mytable (
    id bigint,
    type text,
    page int,
    event_datetime timestamp,
    agent text,
    portion text,
    raw text,
    status int,
    status_code_pass int,
    dom bigint,
    reached text,
    tt text,
    PRIMARY KEY ((id, type), page, event_datetime)
) WITH CLUSTERING ORDER BY (page DESC, event_datetime DESC)
    AND bloom_filter_fp_chance = 0.01
    AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
    AND comment = ''
    AND compaction = {'class':
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy',
'max_threshold': '32', 'min_threshold': '4'}
    AND compression = {'chunk_length_in_kb': '64', 'class':
'org.apache.cassandra.io.compress.LZ4Compressor'}
    AND crc_check_chance = 1.0
    AND dclocal_read_repair_chance = 0.1
    AND default_time_to_live = 0
    AND gc_grace_seconds = 90000
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 0
    AND min_index_interval = 128
    AND read_repair_chance = 0.0
    AND speculative_retry = '99PERCENTILE';


- I run queries that read 3 rows at a time, where the total data size is
between 5MB and 20MB.
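
For clarity, the reads look roughly like this (a sketch only, based on the
schema above; the bind markers stand in for real values and the exact
predicates may differ):

```
SELECT * FROM myks.mytable
WHERE id = ? AND type = ?
LIMIT 3;
```

Each such query hits a single partition of (id, type) and returns the 3
newest rows per the clustering order (page DESC, event_datetime DESC).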


How can I improve read performance?
I could accept losing some write speed in order to improve read speed.
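
As one illustration of that trade-off: the table currently uses
SizeTieredCompactionStrategy, and LeveledCompactionStrategy is commonly
described as read-oriented, since it bounds the number of SSTables a read
may touch at the cost of extra compaction I/O on writes. A hypothetical
sketch (not something I have applied; 160MB is the LCS default target
SSTable size):

```
ALTER TABLE myks.mytable
WITH compaction = {'class': 'LeveledCompactionStrategy',
                   'sstable_size_in_mb': '160'};
```

On spinning disks the extra compaction load of LCS can itself be costly,
so this may or may not be a good fit here.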

If you need more information, please ask.


Thanks
Marco
[image: grafana_cassandra.png]
