cassandra-commits mailing list archives

From "Jonathan Ellis (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (CASSANDRA-11834) Don't compute expensive MaxPurgeableTimestamp until we've verified there's an expired tombstone
Date Wed, 18 May 2016 17:33:13 GMT

     [ https://issues.apache.org/jira/browse/CASSANDRA-11834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Ellis updated CASSANDRA-11834:
---------------------------------------
       Resolution: Fixed
    Fix Version/s: 2.2.7
           Status: Resolved  (was: Patch Available)

committed to 2.1 and 2.2

> Don't compute expensive MaxPurgeableTimestamp until we've verified there's an expired tombstone
> -----------------------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-11834
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11834
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Compaction
>            Reporter: Jonathan Ellis
>            Assignee: Jonathan Ellis
>            Priority: Minor
>             Fix For: 2.1.15, 2.2.7
>
>         Attachments: 11834.txt
>
>
> In LazilyCompactedRow's getReduced, we currently do this:
> {code}
>                 if (t.timestamp() < getMaxPurgeableTimestamp() && t.data.isGcAble(controller.gcBefore))
> {code}
> We should call the expensive getMaxPurgeableTimestamp only after we have called the cheap isGcAble.
> Marking this as a bug since it can cause pathological performance problems (CASSANDRA-11831).
> Have verified that this is not a problem in 3.0 (CompactionIterator does the check in the correct order).
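The fix relies on Java's short-circuit evaluation of {{&&}}: putting the cheap expiry check first means the expensive per-partition timestamp computation is skipped whenever the tombstone is not yet gcable. A minimal standalone sketch of the before/after ordering (stand-in method names and counters are hypothetical, not Cassandra's actual code):

```java
// Sketch of the CASSANDRA-11834 reordering: evaluate the cheap
// tombstone-expiry check before the expensive max-purgeable-timestamp
// computation, so && short-circuits past the costly call.
public class PurgeCheck {
    // Counts invocations of the expensive path, for illustration only.
    static int expensiveCalls = 0;

    // Stand-in for the expensive scan over overlapping sstables.
    static long getMaxPurgeableTimestamp() {
        expensiveCalls++;
        return 100L;
    }

    // Stand-in for the cheap local-deletion-time comparison.
    static boolean isGcAble(long localDeletionTime, long gcBefore) {
        return localDeletionTime < gcBefore;
    }

    // Old order: the expensive call runs for every cell examined.
    static boolean purgeableOldOrder(long ts, long ldt, long gcBefore) {
        return ts < getMaxPurgeableTimestamp() && isGcAble(ldt, gcBefore);
    }

    // Patched order: the expensive call runs only once the cheap check passes.
    static boolean purgeableNewOrder(long ts, long ldt, long gcBefore) {
        return isGcAble(ldt, gcBefore) && ts < getMaxPurgeableTimestamp();
    }

    public static void main(String[] args) {
        // Tombstone not yet gcable (deletion time 90 >= gcBefore 80):
        // the old order still pays the expensive call, the new one does not.
        purgeableOldOrder(50L, 90L, 80L);
        System.out.println("expensive calls after old order: " + expensiveCalls);
        purgeableNewOrder(50L, 90L, 80L);
        System.out.println("expensive calls after new order: " + expensiveCalls);
    }
}
```

With many unexpired tombstones per partition, skipping the expensive call on every one of them is what avoids the pathological compaction slowdowns reported in CASSANDRA-11831.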



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
