cassandra-commits mailing list archives

From "Pedro Gordo (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (CASSANDRA-12201) Burst Hour Compaction Strategy
Date Thu, 08 Jun 2017 07:13:18 GMT

     [ https://issues.apache.org/jira/browse/CASSANDRA-12201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]

Pedro Gordo updated CASSANDRA-12201:
------------------------------------
    Description: 
The motivation behind this strategy revolves around taking advantage of periods of the day when there's
less I/O on the cluster. This time of day will be called “Burst Hour” (BH), and hence
the strategy is named “Burst Hour Compaction Strategy” (BHCS). 
The following process would be fired during BH:

1. Read all the SSTables and detect which partition keys are present in more SSTables than the
minimum compaction threshold value.

2. Gather all the SSTables that have keys present in other SSTables, with a number of occurrences
at least equal to the minimum compaction threshold. 

3. Repeat step 2 until the bucket for gathered SSTables reaches the maximum compaction threshold
(32 by default), or until we've searched all the keys.

4. The compaction itself will be done by MaxSSTableSizeWriter. The compacted tables
will have a maximum size equal to the configurable value of max_sstable_size. 
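The candidate-gathering steps (1–3) above could be sketched roughly as follows. This is a minimal illustration, not actual Cassandra code: the class name, the key-index shape, and the threshold values are assumptions made for the example.

```java
import java.util.*;

// Hypothetical sketch of steps 1-3: build a bucket of SSTables whose
// partition keys are replicated across at least MIN_THRESHOLD tables,
// capped at MAX_THRESHOLD SSTables per compaction.
public class BurstHourCandidates {
    static final int MIN_THRESHOLD = 4;   // minimum compaction threshold (illustrative)
    static final int MAX_THRESHOLD = 32;  // maximum compaction threshold (default per the text)

    // keyIndex maps each partition key to the set of SSTables that contain it.
    static Set<String> gatherBucket(Map<String, Set<String>> keyIndex) {
        Set<String> bucket = new LinkedHashSet<>();
        for (Map.Entry<String, Set<String>> e : keyIndex.entrySet()) {
            // Steps 1-2: a key qualifies only if enough SSTables contain it.
            if (e.getValue().size() >= MIN_THRESHOLD)
                bucket.addAll(e.getValue());
            // Step 3: stop once the bucket reaches the maximum compaction threshold.
            if (bucket.size() >= MAX_THRESHOLD)
                break;
        }
        return bucket;
    }
}
```

A key shared by fewer SSTables than the minimum threshold contributes nothing to the bucket, which is what keeps small, scattered overlaps from triggering compaction.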

The major compaction task (the nodetool compact command) does exactly the same operation as
the background compaction task, differing only in that it can be triggered outside of the Burst
Hour.

This strategy tries to address three issues of the existing compaction strategies:
- Due to max_sstable_size_limit, there's no need to reserve disk space for a huge compaction.
- The number of SSTables that we need to read from to reply to a read query will be consistently
maintained at a low level and controllable through the referenced_sstable_limit property.
- It removes the dependency on continuous high I/O.

Possible future improvements:
- Continuously evaluate the number of pending compactions and the I/O status, and based on
that, decide whether to start the compaction.
- If, during the day, the total size of the SSTables in a family set reaches a certain maximum,
then background compaction can occur anyway. This maximum should be set high due to the high
CPU usage of BHCS.
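The first future improvement amounts to a simple admission gate. A possible shape, with entirely made-up thresholds and metric sources, might be:

```java
// Hypothetical gate for deciding whether a BHCS compaction may start:
// only proceed when both pending work and I/O pressure are low. The
// threshold parameters and how the metrics are obtained are assumptions.
public class CompactionGate {
    // pendingCompactions: current queue depth; ioUtilisation: fraction in [0, 1].
    static boolean shouldStart(int pendingCompactions, double ioUtilisation,
                               int maxPending, double maxIoUtilisation) {
        return pendingCompactions <= maxPending && ioUtilisation <= maxIoUtilisation;
    }
}
```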

  was:
Although it may be subject to change, for the moment I plan to create a strategy that revolves
around taking advantage of periods of the day when there's less I/O on the cluster.
This time of day will be called “Burst Hour” (BH), and hence the strategy will be
named “Burst Hour Compaction Strategy” (BHCS). 
The following process would be fired during BH:

1. Read all the SSTables and detect which partition keys are present in more SSTables than a configurable
value which I'll call referenced_sstable_limit. This value will be three by default.

2. Group all the repeated keys with a reference to the SSTables containing them.

3. Calculate the total size of the SSTables which will be merged for the first partition key
on the list created in step 2. If the calculated size is bigger than a property which I'll call
max_sstable_size (also configurable), more than one table will be created in step 4.

4. During the merge, the data will be streamed from SSTables until we have a size close
to max_sstable_size. After we reach this point, the stream is paused, and the new
SSTable is closed, becoming immutable. Repeat the streaming process until we've merged
all tables for the partition key that we're iterating over.

5. Cycle through the rest of the collection created in step 2 and remove any SSTables which
don't exist anymore because they were merged in step 4. An alternative course of action here
would be, instead of removing the SSTable from the collection, to change its reference
to the SSTable(s) which were created in step 4. 

6. Repeat steps 3 to 5 until we've traversed the entirety of the collection created
in step 2.
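The size-capped splitting in steps 3–4 can be sketched as follows. This is an illustrative simplification, assuming row sizes are known up front; the class and method names are invented for the example and do not correspond to Cassandra's writer API.

```java
import java.util.*;

// Hypothetical sketch of steps 3-4: stream merged data into output
// SSTables, closing each one (making it immutable) once adding the next
// chunk would push it past max_sstable_size.
public class SizeCappedMerge {
    // rowSizes: sizes of the merged data chunks, in bytes.
    // Returns the resulting output SSTable sizes.
    static List<Long> mergeWithCap(List<Long> rowSizes, long maxSstableSize) {
        List<Long> outputs = new ArrayList<>();
        long current = 0;
        for (long size : rowSizes) {
            // Close the current SSTable before it would exceed the cap.
            if (current + size > maxSstableSize && current > 0) {
                outputs.add(current);
                current = 0;
            }
            current += size;
        }
        if (current > 0)
            outputs.add(current);
        return outputs;
    }
}
```

For example, merging chunks of 40, 40 and 40 bytes under a 100-byte cap yields two output tables rather than one oversized table, which is why step 3 can predict more than one table being created.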


This strategy addresses three issues of the existing compaction strategies:
- Due to max_sstable_size_limit, there's no need to reserve disk space for a huge compaction,
as can happen with STCS.
- The number of SSTables that we need to read from to reply to a read query will be consistently
maintained at a low level and controllable through the referenced_sstable_limit property.
This addresses the scenario of STCS when we might have to read from a lot of SSTables.
- It removes LCS's dependency on continuous high I/O.


> Burst Hour Compaction Strategy
> ------------------------------
>
>                 Key: CASSANDRA-12201
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-12201
>             Project: Cassandra
>          Issue Type: New Feature
>          Components: Compaction
>            Reporter: Pedro Gordo
>   Original Estimate: 1,008h
>  Remaining Estimate: 1,008h
>



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@cassandra.apache.org
For additional commands, e-mail: commits-help@cassandra.apache.org

