cassandra-user mailing list archives

From Aiman Parvaiz <ai...@flipagram.com>
Subject Re: Reading too many tombstones
Date Thu, 04 Jun 2015 18:31:06 GMT
Yeah, we don't update old data. One thing I am curious about is why we are
running into so many tombstones with compaction happening normally. Is
compaction not removing tombstones?

On Thu, Jun 4, 2015 at 11:25 AM, Jonathan Haddad <jon@jonhaddad.com> wrote:

> DateTiered is fantastic if you've got time series, TTLed data.  That means
> no updates to old data.
>
> On Thu, Jun 4, 2015 at 10:58 AM Aiman Parvaiz <aiman@flipagram.com> wrote:
>
>> Hi everyone,
>> We are running a 10-node Cassandra 2.0.9 cluster without vnodes. We are
>> running into an issue where we are reading too many tombstones and hence
>> getting tons of WARN messages and some ERROR query-aborted messages.
>>
>> cass-prod4 2015-06-04 14:38:34,307 WARN ReadStage:1998
>> SliceQueryFilter.collectReducedColumns - Read 46 live and 1560 tombstoned
>> cells in ABC.home_feed (see tombstone_warn_threshold). 100 columns was
>> requested, slices=[-], delInfo={deletedAt=-9223372036854775808,
>> localDeletion=2147483647}
>>
>> cass-prod2 2015-05-31 12:55:55,331 ERROR ReadStage:1953
>> SliceQueryFilter.collectReducedColumns - Scanned over 100000 tombstones in
>> ABC.home_feed; query aborted (see tombstone_fail_threshold)
>>
>> As you can see, all of this is happening for the CF home_feed. This CF
>> basically maintains a feed, with a TTL of 2592000 seconds (30 days).
>> gc_grace_seconds for this CF is 864000 (10 days), and it uses
>> SizeTieredCompactionStrategy.
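A rough sanity check on the timeline implied by these settings (a sketch only; the write date below is illustrative): a TTLed cell expires 30 days after the write, but the resulting tombstone only becomes purgeable once a further gc_grace_seconds has elapsed, i.e. 2592000 + 864000 = 3456000 seconds, or 40 days after the write:

```python
from datetime import datetime, timedelta, timezone

TTL_SECONDS = 2_592_000       # 30-day TTL on home_feed
GC_GRACE_SECONDS = 864_000    # 10-day gc_grace_seconds on home_feed

def purgeable_at(written_at, ttl=TTL_SECONDS, gc_grace=GC_GRACE_SECONDS):
    """Earliest time compaction may drop a TTLed cell for good:
    the cell expires at written_at + ttl, and the resulting tombstone
    must then survive a further gc_grace seconds."""
    return written_at + timedelta(seconds=ttl + gc_grace)

written = datetime(2015, 5, 1, tzinfo=timezone.utc)  # illustrative write time
print(purgeable_at(written))  # 2015-06-10 00:00:00+00:00 (40 days later)
```

So cells written between 30 and 40 days ago are expired but not yet purgeable, and a slice over that part of the feed can still scan all of them as tombstones.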
>>
>> Repairs have been running regularly and automatic compactions are
>> occurring normally too.
>>
>> I could definitely use some help with how to tackle this issue.
>>
>> Up till now I have the following ideas:
>>
>> 1) I can set gc_grace_seconds to 0, run a manual compaction on this CF,
>> and then bump gc_grace back up.
>>
>> 2) Set gc_grace to 0, run a manual compaction on this CF, and leave
>> gc_grace at zero. In this case we have to be careful when running repairs.
>>
>> 3) I am also considering moving to DateTieredCompactionStrategy.
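For reference, a rough sketch of what the options above would look like operationally (assumptions: cqlsh and nodetool are on the path, and the keyspace/CF names are taken from the logs above; also, I believe DateTieredCompactionStrategy only shipped in 2.0.11, so 2.0.9 would need an upgrade first):

```shell
# Options 1/2: make expired tombstones immediately purgeable, then force
# a major compaction on the CF.
cqlsh -e "ALTER TABLE ABC.home_feed WITH gc_grace_seconds = 0;"
nodetool compact ABC home_feed

# Option 1 only: restore gc_grace afterwards.
cqlsh -e "ALTER TABLE ABC.home_feed WITH gc_grace_seconds = 864000;"

# Option 3: switch the compaction strategy.
cqlsh -e "ALTER TABLE ABC.home_feed WITH compaction = {'class': 'DateTieredCompactionStrategy'};"
```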
>>
>> What would be the best approach here for my feed use case? Any help is
>> appreciated.
>>
>> Thanks
>>
>>
