cassandra-user mailing list archives

From "Jens Rantil" <>
Subject TombstoneOverwhelmingException for few tombstones
Date Wed, 07 Jan 2015 14:43:03 GMT

I have a single partition key that has been nagging me because I keep receiving org.apache.cassandra.db.filter.TombstoneOverwhelmingException.
After some digging I managed to find the partition key in question and which machine it was
located on (by looking in system.log).
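This is roughly how I searched the log; the log path is the one on my nodes, so treat it as an
assumption about your own setup:

    # scan system.log for the tombstone warning/error lines that mention the table
    grep -i tombstone /var/log/cassandra/system.log

Since I wanted to see how many tombstones the partition key actually had, I did: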

    nodetool flush mykeyspace mytable

to make sure all changes were written to sstables (not sure this was necessary), then

    nodetool getsstables mykeyspace mytable PARTITIONKEY

which listed two sstables. I then had a look at both sstables for my key in question using

    sstable2json MYSSTABLE1 -k PARTITIONKEY | jq . > MYSSTABLE1.json
    sstable2json MYSSTABLE2 -k PARTITIONKEY | jq . > MYSSTABLE2.json

(piping through jq to format the JSON). Both JSON files contain data (so I have selected
the right key), but only one of the files contains any tombstones:

    $ cat MYSSTABLE1.json | grep '"t"' | wc -l
    $ cat MYSSTABLE2.json | grep '"t"' | wc -l

But to my surprise, the number of tombstones is nowhere near

tombstone_failure_threshold: 100000

Can anyone explain why Cassandra is overwhelmed when I’m nowhere near the hard limit?
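
For reference, the two relevant cassandra.yaml settings on my nodes look like this; the failure
threshold is the value quoted above, and the warn threshold is the stock default (I am assuming
here that it has not been changed):

    # cassandra.yaml - tombstone thresholds
    tombstone_warn_threshold: 1000         # stock default, assumed unchanged
    tombstone_failure_threshold: 100000    # the hard limit quoted above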


Jens Rantil
Backend engineer
Tink AB

Phone: +46 708 84 18 32
