cassandra-commits mailing list archives

From "Brandon Williams (JIRA)" <j...@apache.org>
Subject [jira] Updated: (CASSANDRA-1839) Keep a tombstone cache
Date Thu, 09 Dec 2010 17:18:04 GMT

     [ https://issues.apache.org/jira/browse/CASSANDRA-1839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Brandon Williams updated CASSANDRA-1839:
----------------------------------------

    Description: There is a use case in production where the pattern is read-then-delete,
where most of the keys read will not exist but will be attempted many times.  If the key has
never existed, the bloom filter makes this operation cheap; however, if the key has existed,
especially if it has been overwritten many times and thus spans multiple SSTables, the
merge-on-read just to end up with a tombstone can be expensive.  This can currently be
mitigated with keycache and some rowcache, but it can be further optimized by storing a
sentinel value in the keycache indicating that the row is a tombstone, which we can invalidate
on new writes to the row.  (was: There is a use case in production where the pattern is
read-then-delete, where most of the keys read will not exist, but be attempted many times.
If the key has never existed, the bloom filter makes this operation cheap, however if the key
has exist, especially if it has been overwritten many times and thus spans multiple SSTables,
the merge-on-read just to end up with a tombstone can be expensive.  This can be mitigated
with keycache and some rowcache currently, but this can be further optimized by storing a
sentinel value in the keycache indicating that it's a tombstone, which we can invalidate on
new writes to the row.)

> Keep a tombstone cache
> ----------------------
>
>                 Key: CASSANDRA-1839
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-1839
>             Project: Cassandra
>          Issue Type: New Feature
>    Affects Versions: 0.3
>            Reporter: Brandon Williams
>             Fix For: 0.7.1
>
>
> There is a use case in production where the pattern is read-then-delete, where most of
the keys read will not exist but will be attempted many times.  If the key has never existed,
the bloom filter makes this operation cheap; however, if the key has existed, especially if
it has been overwritten many times and thus spans multiple SSTables, the merge-on-read just
to end up with a tombstone can be expensive.  This can currently be mitigated with keycache
and some rowcache, but it can be further optimized by storing a sentinel value in the keycache
indicating that the row is a tombstone, which we can invalidate on new writes to the row.
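
To make the idea concrete, here is a minimal Java sketch, assuming a hypothetical
TombstoneCache wrapper around a concurrent map rather than Cassandra's actual keycache
classes (the names isKnownDeleted/markDeleted/invalidate are illustrative only): a read
whose merge resolves to nothing but a row tombstone records a sentinel for that key, and
any later write to the key removes the sentinel so the cache never masks live data.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical sketch only; not Cassandra's actual keycache API.
public class TombstoneCache {
    // Sentinel meaning "this key is known to resolve to a tombstone".
    private static final Object TOMBSTONE = new Object();

    private final ConcurrentMap<String, Object> cache =
            new ConcurrentHashMap<String, Object>();

    // Read path: answer "deleted" without touching SSTables when possible.
    public boolean isKnownDeleted(String rowKey) {
        return cache.get(rowKey) == TOMBSTONE;
    }

    // Called after an expensive merge-on-read that yielded only a tombstone.
    public void markDeleted(String rowKey) {
        cache.put(rowKey, TOMBSTONE);
    }

    // Called on every write to the row so a stale sentinel cannot hide new data.
    public void invalidate(String rowKey) {
        cache.remove(rowKey);
    }
}

In this sketch the read path would consult isKnownDeleted(key) before the bloom
filter/SSTable lookup and return immediately on a hit; otherwise the normal merge runs
and, if it produces only a tombstone, markDeleted(key) is called.  Every mutation to the
row calls invalidate(key).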

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

