cassandra-commits mailing list archives

From "Marcus Eriksson (JIRA)" <>
Subject [jira] [Commented] (CASSANDRA-9597) DTCS should consider file SIZE in addition to time windowing
Date Tue, 16 Jun 2015 13:50:02 GMT


Marcus Eriksson commented on CASSANDRA-9597:

Yeah, repair is a problem with vnodes (CASSANDRA-5220), and yes, we should probably do something
smarter when we have > max_threshold sstables to compact.

[~Bj0rn] do you have time to take a stab at this?

Note that it is still going to be quite painful; if you are on 2.1, incremental repair should
reduce the pain a lot (i.e., it never streams in any old data).
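The "something smarter" discussed above could mean preferring the smallest sstables once a
time window holds more than max_threshold candidates, so the candidate count shrinks as fast
as possible. A minimal sketch of that selection rule, assuming sizes in bytes; the class and
method names here are illustrative, not Cassandra's actual compaction API:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class SmallestFirstSelection {
    /**
     * Hypothetical selection rule: from all sstables in one DTCS time window,
     * pick the maxThreshold smallest ones. Compacting many tiny sstables first
     * reduces the number of candidates far faster than repeatedly re-compacting
     * one large sstable with a handful of small ones.
     */
    static List<Long> selectSmallest(List<Long> sstableSizes, int maxThreshold) {
        List<Long> sorted = new ArrayList<>(sstableSizes);
        sorted.sort(Comparator.naturalOrder());
        return sorted.subList(0, Math.min(maxThreshold, sorted.size()));
    }
}
```

With tens of thousands of tiny streamed sstables, each round would then merge max_threshold
small files into one, instead of rewriting a large file over and over.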

> DTCS should consider file SIZE in addition to time windowing
> ------------------------------------------------------------
>                 Key: CASSANDRA-9597
>                 URL:
>             Project: Cassandra
>          Issue Type: Improvement
>            Reporter: Jeff Jirsa
>            Priority: Minor
>              Labels: dtcs
> DTCS seems to work well for the typical use case - writing data in perfect time order,
compacting recent files, and ignoring older files.
> However, there are "normal" operational actions where DTCS will fall behind and is unlikely
to recover.
> An example of this is streaming operations (for example, bootstrap or loading data into
a cluster using sstableloader), where lots (tens of thousands) of very small sstables can
be created spanning multiple time buckets. In these cases, even if max_sstable_age_days is
extended to allow the older incoming files to be compacted, the selection logic is likely
to re-compact large files with fewer small files over and over, rather than prioritizing selection
of the max_threshold smallest files to decrease the number of candidate sstables as quickly as possible.

This message was sent by Atlassian JIRA
