cassandra-commits mailing list archives

From "Brandon Williams (JIRA)" <j...@apache.org>
Subject [jira] Commented: (CASSANDRA-1839) Keep a tombstone cache
Date Mon, 13 Dec 2010 21:08:02 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-1839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12971036#action_12971036 ]

Brandon Williams commented on CASSANDRA-1839:
---------------------------------------------

Point taken; the only way to avoid that is to proactively populate it the way the row cache does,
which gives you the same problem in the other direction.  I think you're right that this isn't
worth doing, but it's a shame to have to use the row cache (and turn it over quickly with
non-tombstone reads, and possibly inherit other row cache barbs) to get around this.

> Keep a tombstone cache
> ----------------------
>
>                 Key: CASSANDRA-1839
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-1839
>             Project: Cassandra
>          Issue Type: New Feature
>          Components: Core
>    Affects Versions: 0.3
>            Reporter: Brandon Williams
>            Priority: Minor
>
> There is a use case in production where the pattern is read-then-delete, where most of the
> keys read will not exist but will be attempted many times.  If the key has never existed,
> the bloom filter makes this operation cheap; however, if the key has existed, and especially
> if it has been overwritten many times and thus spans multiple SSTables, the merge-on-read
> just to end up with a tombstone can be expensive.  This can currently be mitigated with the
> key cache and some row cache, but it could be further optimized by storing a sentinel value
> in the key cache indicating that the row is a tombstone, which we can invalidate on new
> writes to the row.
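
To make the proposed sentinel mechanism concrete, here is a minimal, hypothetical sketch of the
invalidate-on-write contract described above.  This is not Cassandra's actual key cache code; the
class and method names (TombstoneAwareKeyCache, markTombstone, isKnownTombstone, invalidateOnWrite)
are illustrative only, and the backing map is assumed to be a plain ConcurrentHashMap.

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    // Hypothetical illustration of the idea above: the key cache holds a sentinel
    // marking "this row resolves to a tombstone", so repeated reads of deleted keys
    // can skip the SSTable merge entirely.  Not Cassandra's key cache implementation.
    public class TombstoneAwareKeyCache {
        // Sentinel stored in place of a real cache entry for rows known to be deleted.
        private static final Object TOMBSTONE_SENTINEL = new Object();

        private final ConcurrentMap<String, Object> cache = new ConcurrentHashMap<>();

        // Record that a read resolved to nothing but a tombstone for this key.
        public void markTombstone(String key) {
            cache.put(key, TOMBSTONE_SENTINEL);
        }

        // True if a previous read already established that the key is deleted.
        public boolean isKnownTombstone(String key) {
            return cache.get(key) == TOMBSTONE_SENTINEL;
        }

        // Any new write to the row must remove the sentinel so it never masks newer data.
        public void invalidateOnWrite(String key) {
            cache.remove(key);
        }
    }

On the read path, isKnownTombstone would be checked before touching the bloom filters and SSTables;
every mutation to the row calls invalidateOnWrite, matching the "invalidate on new writes to the
row" step in the description.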

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

