cassandra-user mailing list archives

From Peter Schuller <peter.schul...@infidyne.com>
Subject Re: Mass deletion -- slowing down
Date Mon, 14 Nov 2011 02:44:54 GMT
> I'm not sure I entirely follow. By the oldest data, do you mean the
> primary key corresponding to the limit of the time horizon?
> Unfortunately, unique IDs and the timestamps do not correlate, in the
> sense that chronologically "newer" entries might have a smaller
> sequential ID. That's because the timestamp corresponds to the last
> update, which is stochastic in the sense that jobs can take from
> seconds to days to complete. As I said, I'm not sure I understood you
> correctly.

I was hoping there would be a "wave of deletions" that matched the
order of the index (whatever is being read that is subject to the
tombstones). If not, then my suggestion doesn't apply. Are you using
Cassandra secondary indexes, or maintaining your own index, btw?

> Theoretically -- would compaction or cleanup help?

Not directly. The only way to eliminate tombstones is for them to (1)
expire according to gc grace seconds (again see
http://wiki.apache.org/cassandra/DistributedDeletes) and then (2) for
compaction to remove them.
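To make the two-step lifecycle concrete, here is a minimal Python sketch of how compaction treats tombstones (a simplified model, not Cassandra's actual code; the cell layout and `GC_GRACE_SECONDS` value are illustrative, though 864000 is the default gc grace):

```python
import time

GC_GRACE_SECONDS = 864000  # Cassandra's default: 10 days

def compact(cells, now):
    """Simplified model of compaction's tombstone handling: a tombstone
    survives compaction until gc_grace_seconds have elapsed since the
    deletion, so that all replicas have time to learn of the delete
    (via repair) before it is purged."""
    surviving = []
    for cell in cells:
        if cell["tombstone"] and now - cell["deleted_at"] > GC_GRACE_SECONDS:
            continue  # expired tombstone: safe to drop
        surviving.append(cell)
    return surviving

now = time.time()
cells = [
    {"key": "job-1", "tombstone": True, "deleted_at": now - 20 * 86400},  # old delete
    {"key": "job-2", "tombstone": True, "deleted_at": now - 3600},        # recent delete
    {"key": "job-3", "tombstone": False, "deleted_at": None},             # live cell
]
print([c["key"] for c in compact(cells, now)])  # → ['job-2', 'job-3']
```

The point is that only job-1's tombstone can be dropped; the recent tombstone must be carried through compaction until the grace period passes.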

So while decreasing the gc grace period might mitigate it somewhat, I
would advise against going that route since it doesn't solve the
fundamental problem and it can be dangerous: gc grace has the usual
implications for how often anti-entropy/repair must be run, and a
cluster that depends on a small grace period becomes a lot more
fragile if e.g. you have repair problems and must temporarily
increase gc grace.

It seems better to figure out some way of structuring the data so that
the reads in question do not suffer from this problem.

Note that reading individual columns should still scale well despite
tombstones, as should slicing as long as the slices you're reading are
reasonably dense (in terms of data vs. tombstone ratio) even if
surrounding data is sparse.
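A toy model of a slice read makes the density point visible (hypothetical helper names; the model just assumes the read must scan past tombstones to find live columns):

```python
def slice_read(cells, limit):
    """Toy model of a slice read: scan cells in order, skipping
    tombstones, until `limit` live columns are found. Work done is
    proportional to cells scanned, not to cells returned."""
    live, scanned = [], 0
    for cell in cells:
        scanned += 1
        if cell != "tombstone":
            live.append(cell)
            if len(live) == limit:
                break
    return live, scanned

dense = ["col%d" % i for i in range(100)]        # no tombstones
sparse = (["tombstone"] * 99 + ["col"]) * 100    # 1 live column per 100 cells

_, work_dense = slice_read(dense, 10)
_, work_sparse = slice_read(sparse, 10)
print(work_dense, work_sparse)  # → 10 1000
```

Returning the same 10 columns costs 10 cells scanned in the dense row but 1000 in the sparse one; that scan amplification is what makes tombstone-heavy slices slow.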

How many entries are you reading per query? I have been presuming it's
the index read that is causing the timeout rather than the reading of
the individual matching columns, since the maximum "per column"
penalty when reading individual columns is finite, regardless of the
sparsity of the data.

-- 
/ Peter Schuller (@scode, http://worldmodscode.wordpress.com)
