cassandra-commits mailing list archives

From "Jianwei Zhang (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (CASSANDRA-7184) improvement of SizeTieredCompaction
Date Wed, 07 May 2014 07:42:41 GMT

     [ https://issues.apache.org/jira/browse/CASSANDRA-7184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jianwei Zhang updated CASSANDRA-7184:
-------------------------------------

    Description: 
1. In our usage scenario, there are no duplicate inserts and no deletes. The data grows all
the time, and some big sstables are generated (100GB, for example). We don't want these
sstables to participate in SizeTieredCompaction any more, so we added a max threshold, set
to 100GB: sstables larger than the threshold will not be compacted. Can this strategy be
added to the trunk?
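
To make point 1 concrete, here is a minimal sketch of the cutoff (the SSTable stand-in class,
the filterCompactionCandidates helper and the maxSSTableSizeBytes value are hypothetical
illustrations, not Cassandra's actual SizeTieredCompactionStrategy code):

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch: drop sstables above a max-size cutoff before they
    // enter the size-tiered buckets. SSTable stands in for the real reader type.
    public class MaxSizeFilter {
        static class SSTable {
            final String name;
            final long sizeBytes;
            SSTable(String name, long sizeBytes) { this.name = name; this.sizeBytes = sizeBytes; }
        }

        // Keep only sstables at or below the threshold; anything larger is
        // "frozen" and never participates in compaction again.
        static List<SSTable> filterCompactionCandidates(List<SSTable> candidates,
                                                        long maxSSTableSizeBytes) {
            List<SSTable> eligible = new ArrayList<>();
            for (SSTable t : candidates) {
                if (t.sizeBytes <= maxSSTableSizeBytes)
                    eligible.add(t);
            }
            return eligible;
        }

        public static void main(String[] args) {
            long maxBytes = 100L * 1024 * 1024 * 1024;                       // the proposed 100GB cutoff
            List<SSTable> all = List.of(new SSTable("small-1", 2L << 30),    // 2GB
                                        new SSTable("huge-1", 150L << 30));  // 150GB
            // Only "small-1" stays eligible; "huge-1" is excluded from compaction.
            System.out.println(filterCompactionCandidates(all, maxBytes).size()); // prints 1
        }
    }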

2. In our usage scenario, hundreds of sstables may need to be compacted in a major compaction,
and the total size can reach 5TB. So during the compaction, when the amount of data written
reaches a configured threshold (200GB, for example), the writer switches to a new sstable
(a sketch of the switching logic follows this list). In this way we avoid generating
excessively large sstables, which cause problems:
 (1) A single sstable can exceed the capacity of a disk;
 (2) If an sstable is corrupted, a large amount of data is affected.
Can this strategy be added to the trunk?
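
A minimal sketch of the switching logic from point 2 (Writer, Row and WriterFactory are
hypothetical stand-ins for Cassandra's compaction writer types, and switchThresholdBytes
is the proposed config value; this is an illustration, not the actual implementation):

    import java.io.IOException;
    import java.util.Iterator;

    // Hypothetical sketch: roll over to a fresh output sstable whenever the
    // current one reaches the configured size threshold (e.g. 200GB).
    public class SplittingCompactionWriter {
        interface Row { long serializedSizeBytes(); }
        interface Writer extends AutoCloseable {
            void append(Row row) throws IOException;
            void close() throws IOException;
        }
        interface WriterFactory { Writer newWriter() throws IOException; }

        static void writeCompacted(Iterator<Row> merged, WriterFactory factory,
                                   long switchThresholdBytes) throws IOException {
            Writer current = factory.newWriter();
            long written = 0;
            try {
                while (merged.hasNext()) {
                    Row row = merged.next();
                    current.append(row);
                    written += row.serializedSizeBytes();
                    // Threshold reached and more rows remain: finish this
                    // sstable and start the next one.
                    if (written >= switchThresholdBytes && merged.hasNext()) {
                        current.close();
                        current = factory.newWriter();
                        written = 0;
                    }
                }
            } finally {
                current.close(); // close the last (possibly partial) sstable
            }
        }
    }

With a 200GB threshold, a 5TB major compaction would produce roughly 25 output sstables
instead of one, so no single file can outgrow a disk, and a corrupted file affects at most
200GB of data.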

  was:
1. In our usage scenario, there are no duplicate inserts and no deletes. The data grows all
the time, and some huge sstables are generated (100GB, for example). We don't want these
sstables to participate in SizeTieredCompaction any more, so we added a max threshold, which
we set to 100GB: sstables larger than the threshold will not be compacted. Can this strategy
be added to the trunk?

2. In our usage scenario, hundreds of sstables may need to be compacted in a major compaction,
and the total size can reach 5TB. So during the compaction, when the amount of data written
reaches a configured threshold (200GB, for example), the writer switches to a new sstable.
In this way we avoid generating excessively large sstables, which cause problems:
 (1) A single sstable can exceed the capacity of a disk;
 (2) If an sstable is corrupted, a large amount of data is affected.
Can this strategy be added to the trunk?


> improvement of SizeTieredCompaction
> -------------------------------------
>
>                 Key: CASSANDRA-7184
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-7184
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core
>            Reporter: Jianwei Zhang
>            Assignee: Jianwei Zhang
>            Priority: Minor
>              Labels: compaction
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> 1. In our usage scenario, there are no duplicate inserts and no deletes. The data grows
> all the time, and some big sstables are generated (100GB, for example). We don't want these
> sstables to participate in SizeTieredCompaction any more, so we added a max threshold, set
> to 100GB: sstables larger than the threshold will not be compacted. Can this strategy be
> added to the trunk?
> 2. In our usage scenario, hundreds of sstables may need to be compacted in a major
> compaction, and the total size can reach 5TB. So during the compaction, when the amount of
> data written reaches a configured threshold (200GB, for example), the writer switches to a
> new sstable. In this way we avoid generating excessively large sstables, which cause
> problems:
>  (1) A single sstable can exceed the capacity of a disk;
>  (2) If an sstable is corrupted, a large amount of data is affected.
> Can this strategy be added to the trunk?



--
This message was sent by Atlassian JIRA
(v6.2#6252)
