cassandra-commits mailing list archives

From "Ivan Burmistrov (JIRA)" <>
Subject [jira] [Commented] (CASSANDRA-10585) SSTablesPerReadHistogram seems wrong when row cache hit happened
Date Fri, 13 Nov 2015 07:00:16 GMT


Ivan Burmistrov commented on CASSANDRA-10585:

I have prepared patches for versions 2.1, 2.2, and 3.0 (and it would not be hard to prepare a patch
for trunk).

An important note for the 2.2 and 3.0 versions:
in these versions SSTablePerReadHistogram is now an EstimatedHistogram,
and in that implementation zero values have almost no effect.
This seems wrong, because it is important to know whether, for example, we read 0.1 SSTables per
read on average.
For example, we may want to know whether optimizations are working for some table.
EstimatedHistogram returns only integer values and makes this scenario impossible, while it
was possible in versions 2.1 and below.
So in the patches for 2.2 and 3.0 I switched SSTablesPerReadHistogram to ExponentiallyDecayingHistogram.
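The effect described above can be illustrated with a small, self-contained sketch (plain Java, not Cassandra or metrics-library code; the class name and sample counts are invented for illustration): if a histogram drops zero samples, nine row-cache hits and one single-SSTable read yield a mean of 1.0 SSTables per read, whereas counting the cache hits as zeros gives the informative 0.1.

```java
import java.util.ArrayList;
import java.util.List;

// Illustration only: why a histogram that ignores zero samples
// inflates the mean SSTables-per-read figure.
public class SSTablesPerReadMean {
    // Arithmetic mean of the recorded samples (0.0 for an empty list).
    static double mean(List<Integer> samples) {
        if (samples.isEmpty()) return 0.0;
        long sum = 0;
        for (int s : samples) sum += s;
        return (double) sum / samples.size();
    }

    public static void main(String[] args) {
        // 9 row-cache hits (0 SSTables touched) and 1 miss reading 1 SSTable.
        List<Integer> all = new ArrayList<>();
        for (int i = 0; i < 9; i++) all.add(0);
        all.add(1);

        // The same stream of reads, but with the zero samples dropped,
        // as happens when zeros have almost no effect on the histogram.
        List<Integer> nonZero = new ArrayList<>(all);
        nonZero.removeIf(s -> s == 0);

        System.out.println(mean(all));     // 0.1 -- zeros counted
        System.out.println(mean(nonZero)); // 1.0 -- zeros dropped
    }
}
```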

> SSTablesPerReadHistogram seems wrong when row cache hit happened
> ---------------------------------------------------------------
>                 Key: CASSANDRA-10585
>                 URL:
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: Ivan Burmistrov
>            Priority: Minor
>             Fix For: 2.1.x, 2.2.x, 3.0.x
>         Attachments: SSTablePerReadHistogram_RowCache-cassandra-2_1.patch, SSTablePerReadHistogram_RowCache-cassandra-2_2.patch,
> SSTablePerReadHistogram metric currently does not consider the case when a row has been read from the row
> cache. And so, this metric will show big values even when almost all requests are processed by the row cache
> (and thus without touching SSTables, of course).
> So, it seems the correct behavior is to consider that if we read a row from the row cache,
> then we read zero SSTables for this request.
> The patch is in the attachment.

This message was sent by Atlassian JIRA
