cassandra-commits mailing list archives

From "Tyler Hobbs (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (CASSANDRA-11920) bloom_filter_fp_chance needs to be validated up front
Date Tue, 21 Jun 2016 16:17:57 GMT

     [ https://issues.apache.org/jira/browse/CASSANDRA-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tyler Hobbs updated CASSANDRA-11920:
------------------------------------
       Resolution: Fixed
    Fix Version/s: 3.0.8
                   3.8
                   2.2.7
           Status: Resolved  (was: Patch Available)

The tests look good, so +1, committed as {{9e85e85bf259cc7839226a7c93475505d262946a}} to 2.2
and merged up to 3.0 and trunk.

I don't think any documentation change is required here.  There has always been a minimum
supported bloom filter FP ratio; we just failed to enforce it at the right point.
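
For illustration, a minimal sketch of that kind of up-front check, assuming hypothetical names
({{BloomFpChanceValidation}}, {{validateBloomFilterFpChance}}) and a placeholder floor value;
the committed change lives in Cassandra's schema validation and differs in detail:

// Minimal sketch, not the committed patch: reject an unsatisfiable
// bloom_filter_fp_chance when the table is created or altered, instead of
// failing later at flush time. The class/method names and the exact floor
// value are illustrative assumptions; Cassandra derives the real minimum
// from its BloomCalculations probability table.
public class BloomFpChanceValidation
{
    // Placeholder floor (~ the smallest fp chance reachable with 20 buckets per element).
    static final double MIN_SUPPORTED_FP_CHANCE = 6.71e-5;

    static void validateBloomFilterFpChance(double fpChance)
    {
        if (fpChance < MIN_SUPPORTED_FP_CHANCE || fpChance > 1.0)
            throw new IllegalArgumentException(String.format(
                "bloom_filter_fp_chance must be between %s and 1.0 (got %s)",
                MIN_SUPPORTED_FP_CHANCE, fpChance));
    }

    public static void main(String[] args)
    {
        validateBloomFilterFpChance(0.01);    // fine: the STCS default
        validateBloomFilterFpChance(0.00001); // rejected up front, not at flush time
    }
}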

Thanks again for the patch!

> bloom_filter_fp_chance needs to be validated up front
> -----------------------------------------------------
>
>                 Key: CASSANDRA-11920
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11920
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Lifecycle, Local Write-Read Paths
>            Reporter: ADARSH KUMAR
>            Assignee: Arindam Gupta
>            Priority: Minor
>              Labels: lhf
>             Fix For: 2.2.7, 3.8, 3.0.8
>
>         Attachments: 11920-3.0.txt
>
>
> Hi,
> I was doing some benchmarking on bloom_filter_fp_chance values. Everything worked fine
> for values .01 (default for STCS), .001, and .0001. But when I set
> bloom_filter_fp_chance = .00001, I observed the following behaviour:
> 1). Reads and writes looked normal from cqlsh.
> 2). SSTables are never created.
> 3). It just creates two files (*-Data.db and *-index.db) of size 0kb.
> 4). nodetool flush does not work and produces the following exception:
> java.lang.UnsupportedOperationException: Unable to satisfy 1.0E-5 with 20 buckets per element
>         at org.apache.cassandra.utils.BloomCalculations.computeBloomSpec(BloomCalculations.java:150)
.....
> I checked the BloomCalculations class, and the following lines are responsible for this exception:
> if (maxFalsePosProb < probs[maxBucketsPerElement][maxK]) {
>     throw new UnsupportedOperationException(String.format("Unable to satisfy %s with %s buckets per element",
>                                                           maxFalsePosProb, maxBucketsPerElement));
> }
> From the code it looks like a hard-coded validation (unless we can change the number of
> buckets).
> So, if this validation is hard-coded, why is it even allowed to set a value of
> bloom_filter_fp_chance that can prevent SSTable generation?
> Please correct this issue.
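
A quick sanity check on the exception quoted above: under the standard Bloom filter
approximation p = (1 - e^(-k / bucketsPerElement))^k (Cassandra itself uses a precomputed
probability table in BloomCalculations, but the conclusion is the same), the smallest
false-positive chance reachable with at most 20 buckets per element is roughly 6.7e-5, so a
target of 1.0E-5 cannot be satisfied. A standalone sketch:

// Minimal sketch: scan all hash counts k up to the bucket cap and report the
// best false-positive chance the standard approximation allows. With 20
// buckets per element the optimum is about 6.71e-5, above the requested 1.0e-5.
public class BloomFpFloorCheck
{
    public static void main(String[] args)
    {
        int maxBucketsPerElement = 20; // the cap mentioned in the exception
        double best = 1.0;
        int bestK = 0;
        for (int k = 1; k <= maxBucketsPerElement; k++)
        {
            double p = Math.pow(1 - Math.exp(-(double) k / maxBucketsPerElement), k);
            if (p < best)
            {
                best = p;
                bestK = k;
            }
        }
        // Prints something like: best fp chance ~ 6.71e-05 at k = 14
        System.out.printf("best fp chance ~ %.2e at k = %d%n", best, bestK);
    }
}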



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
