cassandra-commits mailing list archives

From "Pedro Gordo (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CASSANDRA-12201) Burst Hour Compaction Strategy
Date Fri, 05 May 2017 08:19:04 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-12201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15997921#comment-15997921 ]

Pedro Gordo commented on CASSANDRA-12201:
-----------------------------------------

So far I've implemented the abstract methods from the superclass and did some testing at the
beginning of this week, and at least the background task generation seems to be behaving correctly.
Now I need to figure out how to introduce the timers correctly in BHCS. You can find the implementation
of BHCS here: https://github.com/sedulam/CASSANDRA-12201
Beyond that initial check, this is all still untested. I'll try to start testing next week, but I
still need to check the guidelines for testing Cassandra, the code style, and any other constraints
that might exist.
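
For the timers, here is a minimal sketch of one possible approach, assuming a daily burst-hour
window tracked with a plain JDK ScheduledExecutorService. The class name BurstHourWindow and the
idea that the strategy would consult isOpen() before handing out background compaction tasks are
illustrative assumptions only, not code from the branch linked above.

    import java.time.Duration;
    import java.time.LocalTime;
    import java.time.ZonedDateTime;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicBoolean;

    public final class BurstHourWindow
    {
        private final AtomicBoolean insideBurstHour = new AtomicBoolean(false);
        private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

        /** Opens the window daily at the given start time and closes it again after the given length. */
        public void schedule(LocalTime start, Duration length)
        {
            long initialDelay = secondsUntil(start);
            long oneDay = TimeUnit.DAYS.toSeconds(1);
            scheduler.scheduleAtFixedRate(() -> insideBurstHour.set(true), initialDelay, oneDay, TimeUnit.SECONDS);
            scheduler.scheduleAtFixedRate(() -> insideBurstHour.set(false),
                                          initialDelay + length.getSeconds(), oneDay, TimeUnit.SECONDS);
        }

        /** The strategy would hand out no background compaction task while this returns false. */
        public boolean isOpen()
        {
            return insideBurstHour.get();
        }

        private static long secondsUntil(LocalTime start)
        {
            ZonedDateTime now = ZonedDateTime.now();
            ZonedDateTime next = now.with(start);
            if (!next.isAfter(now))
                next = next.plusDays(1);
            return Duration.between(now, next).getSeconds();
        }
    }

Keeping the window state in an AtomicBoolean keeps the check cheap, since the strategy would query
it on every background-task request.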

> Burst Hour Compaction Strategy
> ------------------------------
>
>                 Key: CASSANDRA-12201
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-12201
>             Project: Cassandra
>          Issue Type: New Feature
>          Components: Compaction
>            Reporter: Pedro Gordo
>   Original Estimate: 1,008h
>  Remaining Estimate: 1,008h
>
> Although it may be subject to change, for the moment I plan to create a strategy that revolves around taking advantage of the periods of the day when there's less I/O on the cluster. This time of day will be called “Burst Hour” (BH), and hence the strategy will be named “Burst Hour Compaction Strategy” (BHCS).
> The following process would be fired during BH:
> 1. Read all the SSTables and detect which partition keys are present in more than a configurable number of SSTables, which I'll call referenced_sstable_limit. This value will be three by default.
> 2. Group all the repeated keys with a reference to the SSTables containing them (a sketch of steps 1 to 4 follows this list).
> 3. Calculate the total size of the SSTables which will be merged for the first partition key in the list created in step 2. If the calculated size is bigger than a property which I'll call max_sstable_size (also configurable), more than one SSTable will be created in step 4.
> 4. During the merge, the data will be streamed from the SSTables until we reach a size close to max_sstable_size. At that point the stream is paused and the new SSTable is closed, becoming immutable. The streaming process is repeated until we've merged all the SSTables for the partition key we're iterating over.
> 5. Cycle through the rest of the collection created in step 2 and remove any SSTables which no longer exist because they were merged in step 4. An alternative course of action here would be, instead of removing the SSTable from the collection, to change its reference to the SSTable(s) created in step 4.
> 6. Repeat from step 3 until we've traversed the entirety of the collection created in step 2.
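
To make steps 1 to 4 of the quoted process concrete, here is a minimal sketch of the key-grouping
and size-bounded bucketing logic. The SSTableRef type is a placeholder for whatever handle the
strategy would hold on an SSTable; the real code would use Cassandra's SSTable readers and key
iterators, and none of these names come from the linked branch.

    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.Deque;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    final class BurstHourPlanner
    {
        /** Placeholder handle for an SSTable: a name, its on-disk size, and the keys it contains. */
        static final class SSTableRef
        {
            final String name;
            final long sizeBytes;
            final Set<String> partitionKeys;

            SSTableRef(String name, long sizeBytes, Set<String> partitionKeys)
            {
                this.name = name;
                this.sizeBytes = sizeBytes;
                this.partitionKeys = partitionKeys;
            }
        }

        /**
         * Steps 1 and 2: map each partition key to the SSTables containing it and keep only
         * the keys referenced by more than referencedSstableLimit tables.
         */
        static Map<String, Set<SSTableRef>> hotKeys(List<SSTableRef> sstables, int referencedSstableLimit)
        {
            Map<String, Set<SSTableRef>> byKey = new HashMap<>();
            for (SSTableRef sstable : sstables)
                for (String key : sstable.partitionKeys)
                    byKey.computeIfAbsent(key, k -> new HashSet<>()).add(sstable);
            byKey.values().removeIf(refs -> refs.size() <= referencedSstableLimit);
            return byKey;
        }

        /**
         * Steps 3 and 4: split the SSTables referencing one hot key into merge buckets whose
         * combined input size stays close to maxSstableSize, so that no single output table
         * (and no single compaction) grows without bound.
         */
        static List<List<SSTableRef>> mergeBuckets(Set<SSTableRef> refs, long maxSstableSize)
        {
            List<List<SSTableRef>> buckets = new ArrayList<>();
            Deque<SSTableRef> pending = new ArrayDeque<>(refs);
            List<SSTableRef> current = new ArrayList<>();
            long currentSize = 0;
            while (!pending.isEmpty())
            {
                SSTableRef next = pending.poll();
                if (!current.isEmpty() && currentSize + next.sizeBytes > maxSstableSize)
                {
                    buckets.add(current);
                    current = new ArrayList<>();
                    currentSize = 0;
                }
                current.add(next);
                currentSize += next.sizeBytes;
            }
            if (!current.isEmpty())
                buckets.add(current);
            return buckets;
        }
    }

Splitting each per-key merge into buckets whose combined input size stays near max_sstable_size is
what keeps any single output table, and the disk space reserved for a single compaction, bounded.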
> This strategy addresses three issues of the existing compaction strategies:
> - Due to max_sstable_size, there's no need to reserve disk space for a huge compaction, as can happen with STCS.
> - The number of SSTables that we need to read to answer a read query is consistently kept low and is controllable through the referenced_sstable_limit property (a sketch of how the two options could be parsed follows below). This addresses the STCS scenario where we might have to read from a large number of SSTables.
> - It removes the dependency on the continuous high I/O of LCS.
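
The two configurable properties mentioned above would presumably arrive as compaction strategy
options. A minimal sketch of parsing them is below; the key names and the default of three for
referenced_sstable_limit come from the ticket, while the class itself, the validation, and the
assumed default for max_sstable_size are illustrative only.

    import java.util.Map;

    final class BurstHourOptions
    {
        static final String REFERENCED_SSTABLE_LIMIT_KEY = "referenced_sstable_limit";
        static final String MAX_SSTABLE_SIZE_KEY = "max_sstable_size";

        static final int DEFAULT_REFERENCED_SSTABLE_LIMIT = 3;           // default named in step 1
        static final long DEFAULT_MAX_SSTABLE_SIZE = 160L * 1024 * 1024; // assumed default, not stated in the ticket

        final int referencedSstableLimit;
        final long maxSstableSizeBytes;

        BurstHourOptions(Map<String, String> options)
        {
            referencedSstableLimit = parseInt(options, REFERENCED_SSTABLE_LIMIT_KEY, DEFAULT_REFERENCED_SSTABLE_LIMIT);
            maxSstableSizeBytes = parseLong(options, MAX_SSTABLE_SIZE_KEY, DEFAULT_MAX_SSTABLE_SIZE);
            if (referencedSstableLimit < 1)
                throw new IllegalArgumentException(REFERENCED_SSTABLE_LIMIT_KEY + " must be at least 1");
            if (maxSstableSizeBytes <= 0)
                throw new IllegalArgumentException(MAX_SSTABLE_SIZE_KEY + " must be positive");
        }

        private static int parseInt(Map<String, String> options, String key, int defaultValue)
        {
            String value = options.get(key);
            return value == null ? defaultValue : Integer.parseInt(value);
        }

        private static long parseLong(Map<String, String> options, String key, long defaultValue)
        {
            String value = options.get(key);
            return value == null ? defaultValue : Long.parseLong(value);
        }
    }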



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
