cassandra-user mailing list archives

From "Durity, Sean R" <>
Subject RE: READ Queries timing out.
Date Fri, 07 Jul 2017 16:51:43 GMT
A 1 GB heap is very small. Why not try increasing it to 50% of RAM and see if that helps you
track down the real issue? It is hard to tune around a bad data model, if that is indeed the
issue. Seeing your tables and queries would help.
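For reference, the heap is sized in conf/cassandra-env.sh (or conf/jvm.options on newer versions). A sketch, with illustrative values only (assuming a 16 GB node, not a recommendation for your cluster):

```shell
# conf/cassandra-env.sh -- example values only, assuming a 16 GB RAM node.
# MAX_HEAP_SIZE is the total JVM heap; ~50% of RAM, capped around 8 GB,
# is a common starting point. Leave it unset to let Cassandra auto-size.
MAX_HEAP_SIZE="8G"
```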

Sean Durity

From: Pranay akula []
Sent: Friday, July 07, 2017 11:47 AM
Subject: Re: READ Queries timing out.

Thanks ZAIDI,

The C++ driver doesn't support tracing, so I am executing those queries from cqlsh. When I
trace, I get the error below, even after increasing --request-timeout to 3600 in cqlsh.

ReadTimeout: code=1200 [Coordinator node timed out waiting for replica nodes' responses]
message="Operation timed out - received only 0 responses."
info={'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
Statement trace did not complete within 10 seconds
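For what it's worth, the 10-second limit in that last line is cqlsh's trace-wait setting, which is separate from --request-timeout. If I remember the cqlshrc section correctly, it looks like this (values illustrative):

```shell
# Start cqlsh with a longer request timeout (in seconds):
cqlsh --request-timeout=3600 <host>

# The trace wait is configured separately, in ~/.cassandra/cqlshrc:
# [tracing]
# max_trace_wait = 120.0
```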

Below are the cfstats and cfhistograms output. I can see that read latency, cell count, and
Maximum live cells per slice (last five minutes) are high. Is there any way to get around
this without changing the data model?

Percentile  SSTables     Write Latency      Read Latency            Partition Size      Cell Count
                              (micros)          (micros)                   (bytes)
50%             1.00             20.00               NaN                      1331                20
75%             2.00             29.00               NaN                      6866                86
95%             8.00             60.00               NaN                    126934
98%            10.00            103.00               NaN                    315852
99%            12.00            149.00               NaN                    545791              8239
Min             0.00              0.00              0.00                       104                 0
Max            20.00       12730764.00  9773372036884776000.00           74975550
        Read Count: 44514407
        Read Latency: 82.92876612928933 ms.
        Write Count: 3007585812
        Write Latency: 0.07094456590853208 ms.
        Pending Flushes: 0
                SSTable count: 9
                    Space used (live): 66946214374
                    Space used (total): 66946214374
                    Space used by snapshots (total): 0
                    Off heap memory used (total): 33706492
                    SSTable Compression Ratio: 0.5598380206656697
                    Number of keys (estimate): 2483819
                    Memtable cell count: 15008
                    Memtable data size: 330597
                    Memtable off heap memory used: 518502
                    Memtable switch count: 39915
                    Local read count: 44514407
                    Local read latency: 82.929 ms
                    Local write count: 3007585849
                    Local write latency: 0.071 ms
                    Pending flushes: 0
                    Bloom filter false positives: 0
                    Bloom filter false ratio: 0.00000
                    Bloom filter space used: 12623632
                    Bloom filter off heap memory used: 12623560
                    Index summary off heap memory used: 3285614
                    Compression metadata off heap memory used: 17278816
                    Compacted partition minimum bytes: 104
                    Compacted partition maximum bytes: 74975550
                    Compacted partition mean bytes: 27111
                    Average live cells per slice (last five minutes): 388.7486606077893
                    Maximum live cells per slice (last five minutes): 28983.0
                    Average tombstones per slice (last five minutes): 0.0
                    Maximum tombstones per slice (last five minutes): 0.0
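As an aside, numbers like these can be pulled out of the cfstats text programmatically when watching a cluster over time. A small hypothetical Python sketch (the parse_cfstats/wide_partition_warnings helpers and the thresholds are mine, not part of any Cassandra tooling) that flags the wide-partition symptoms visible above:

```python
import re

def parse_cfstats(text):
    """Extract a few numeric metrics from `nodetool cfstats` text output."""
    patterns = {
        "read_latency_ms": r"Local read latency:\s*([\d.]+)\s*ms",
        "max_partition_bytes": r"Compacted partition maximum bytes:\s*(\d+)",
        "max_live_cells_per_slice":
            r"Maximum live cells per slice \(last five minutes\):\s*([\d.]+)",
    }
    metrics = {}
    for name, pat in patterns.items():
        m = re.search(pat, text)
        if m:
            metrics[name] = float(m.group(1))
    return metrics

def wide_partition_warnings(metrics, max_partition_mb=100, max_cells=1000):
    """Flag symptoms of wide partitions / large slices; thresholds are arbitrary."""
    warnings = []
    if metrics.get("max_partition_bytes", 0) > max_partition_mb * 1024 * 1024:
        warnings.append("partition larger than %d MB" % max_partition_mb)
    if metrics.get("max_live_cells_per_slice", 0) > max_cells:
        warnings.append("slices reading more than %d live cells" % max_cells)
    return warnings
```

Running it over the output above would flag the 28983 live cells per slice, which lines up with the high read latency.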


On Fri, Jul 7, 2017 at 11:16 AM, Thakrar, Jayesh <<>> wrote:
Can you provide more details?
E.g. the table structure, the app used for the query, the query itself, and the error message.

Also get the output of the following commands from your cluster nodes (note that one command
uses "." and the other a space between keyspace and table name):

nodetool -h <hostname> tablestats <keyspace>.<tablename>
nodetool -h <hostname> tablehistograms <keyspace> <tablename>

Timeouts can happen at the client/application level (which can be tuned) and at the coordinator
node level (which can also be tuned).
But again, those timeouts are a symptom of something else.
They can happen on the client side because the connection pool queue is too full (which is
usually a consequence of slow response times from the cluster/coordinator nodes).
And issues on the cluster side can have several causes, e.g. your query has to scan through
too many tombstones, causing the delay, or your query relies on filtering.

From: "ZAIDI, ASAD A" <<>>
Date: Friday, July 7, 2017 at 9:45 AM
To: "<>" <<>>
Subject: RE: READ Queries timing out.

>> I analysed the GC logs not having any issues with major GC's
            If you don’t have issues with GC, then why do you want to [tune] GC parameters?
Instead, focus on why the select queries are taking time.. maybe take a look at their trace?

From: Pranay akula [<>]
Sent: Friday, July 07, 2017 9:27 AM
Subject: READ Queries timing out.

Lately I am seeing some select queries timing out. The data modelling is likely to blame,
but I am not in a situation to redo it.

Will increasing the heap help??

I am currently using a 1GB new_heap. I analysed the GC logs and am not seeing any issues with major GCs.

We are using G1GC; will increasing new_heap help??

We are currently using JVM_OPTS="$JVM_OPTS -XX:MaxGCPauseMillis=500". Even if I increase the
heap to, let's say, 2GB, is that effective? Because young GCs will kick in more frequently to
complete within 500ms, right??
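For context, the usual guidance with G1GC is to set only the total heap and the pause target and let the collector size the young generation itself. A sketch of what that looks like in cassandra-env.sh (illustrative values, not a recommendation for this cluster):

```shell
# cassandra-env.sh -- illustrative G1 settings.
MAX_HEAP_SIZE="8G"   # total heap; G1 generally behaves better with a larger heap
# HEAP_NEWSIZE deliberately left unset: fixing the young-gen size (-Xmn)
# prevents G1 from adapting it to meet the pause target.
JVM_OPTS="$JVM_OPTS -XX:+UseG1GC"
JVM_OPTS="$JVM_OPTS -XX:MaxGCPauseMillis=500"
```

Under a fixed pause target, a larger young generation mostly trades GC frequency for GC duration, so the pause target, not new_heap, is the knob G1 expects you to turn.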


