cassandra-commits mailing list archives

From "Jeff Jirsa (JIRA)" <j...@apache.org>
Subject [jira] [Issue Comment Deleted] (CASSANDRA-9597) DTCS should consider file SIZE in addition to time windowing
Date Tue, 16 Jun 2015 04:58:00 GMT

     [ https://issues.apache.org/jira/browse/CASSANDRA-9597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jeff Jirsa updated CASSANDRA-9597:
----------------------------------
    Comment: was deleted

(was: You can understand why this happens when you realize that the sstables are filtered
by max timestamp:

https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/DateTieredCompactionStrategy.java#L178

And then the resulting list is sorted by min timestamp:  

https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/DateTieredCompactionStrategy.java#L357-L367

The result is that for roughly evenly distributed time periods (file size proportional to
sstable maxTimestamp - sstable minTimestamp, which likely holds for most DTCS workloads),
larger files will always sort to the front of {{trimToThreshold}}, which virtually guarantees
we'll re-compact a very large sstable over and over if any other sstables are in
the window for compaction.
)
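
The behavior described above can be illustrated with a minimal, self-contained sketch. This is not Cassandra's code: the {{SSTable}} record and class names here are hypothetical stand-ins, reduced to the two timestamps that drive the sort. It shows that sorting candidates by min timestamp and then trimming to the first max_threshold entries keeps the oldest (and, under the size-proportional-to-span assumption, largest) sstable at the head of every round:

```java
import java.util.*;

// Hypothetical stand-in for an sstable: only the fields relevant here.
// Under the assumption in the comment above, size is proportional to
// the time span the file covers.
record SSTable(long minTimestamp, long maxTimestamp) {
    long approxSize() { return maxTimestamp - minTimestamp; }
}

class TrimToThresholdSketch {
    // Mimics the sort in DateTieredCompactionStrategy: ascending min timestamp.
    static List<SSTable> sortByMinTimestamp(List<SSTable> bucket) {
        List<SSTable> sorted = new ArrayList<>(bucket);
        sorted.sort(Comparator.comparingLong(SSTable::minTimestamp));
        return sorted;
    }

    // Keep only the first maxThreshold candidates, like trimToThreshold.
    static List<SSTable> trimToThreshold(List<SSTable> bucket, int maxThreshold) {
        List<SSTable> sorted = sortByMinTimestamp(bucket);
        return sorted.subList(0, Math.min(maxThreshold, sorted.size()));
    }
}
```

With one large old sstable spanning [0, 1000] and dozens of small recent ones, the large file's min timestamp of 0 sorts it first, so it survives the trim and is included in every compaction as long as any other sstable shares its window.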

> DTCS should consider file SIZE in addition to time windowing
> ------------------------------------------------------------
>
>                 Key: CASSANDRA-9597
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-9597
>             Project: Cassandra
>          Issue Type: Improvement
>            Reporter: Jeff Jirsa
>            Priority: Minor
>              Labels: dtcs
>
> DTCS seems to work well for the typical use case: writing data in perfect time order,
> compacting recent files, and ignoring older files.
> However, there are "normal" operational actions after which DTCS will fall behind and is
> unlikely to recover.
> An example of this is streaming operations (for example, bootstrap, or loading data into
> a cluster using sstableloader), where lots (tens of thousands) of very small sstables can
> be created spanning multiple time buckets. In these cases, even if max_sstable_age_days
> is extended to allow the older incoming files to be compacted, the selection logic is
> likely to re-compact large files with a few small files over and over, rather than
> prioritizing selection of the max_threshold smallest files to decrease the number of
> candidate sstables as quickly as possible.
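
The smallest-files-first selection the report asks for can be sketched in a few lines. This is a hypothetical illustration of the proposed prioritization, not an implementation from Cassandra: given candidate file sizes, it selects up to max_threshold of the smallest, so tiny streamed sstables are merged away before any large file is touched again:

```java
import java.util.*;

// Hypothetical alternative selection: prefer the smallest candidates so
// tens of thousands of tiny streamed sstables shrink in count quickly.
class SmallestFirstSketch {
    // sizes are candidate sstable sizes in bytes; returns up to
    // maxThreshold of the smallest, in ascending order.
    static List<Long> pickSmallest(List<Long> sizes, int maxThreshold) {
        List<Long> sorted = new ArrayList<>(sizes);
        Collections.sort(sorted);
        return sorted.subList(0, Math.min(maxThreshold, sorted.size()));
    }
}
```

Each round then removes max_threshold small files and adds back one merged file, reducing the candidate count as fast as the threshold allows.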



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
