hbase-issues mailing list archives

From "Sergey Shelukhin (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-7842) Add compaction policy that explores more storefile groups
Date Wed, 06 Mar 2013 18:52:14 GMT

    [ https://issues.apache.org/jira/browse/HBASE-7842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13594969#comment-13594969 ]

Sergey Shelukhin commented on HBASE-7842:
-----------------------------------------

{code}
compactionPolicy = new ExploringCompactionPolicy(this.conf, this.store/*as StoreConfigInfo*/);
{code}
HBASE-7935 will make this pluggable separately; I am planning to commit it after running tests
(it has two +1s).
Regardless, it appears that this patch both replaces the default policy and hijacks the default
test, so the default ratio algorithm becomes a poor bastard child, not used and not tested :)
Should we delete it altogether and swap yours in its place?

bq.  I've seen some really nasty compaction behavior lately with what's currently the default.
Can you add tests for those cases with the new policy? They would also be useful as illustration.

{code}
        if (potentialMatchFiles.size() > bestSelection.size() ||
            (potentialMatchFiles.size() == bestSelection.size() && size < bestSize))
{
{code}
Interesting heuristic... it looks reasonable, but have you tried the ratio proposed above? E.g.
getting rid of 3 files of 5MB each is better than getting rid of 4 files of 500MB each.
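A minimal sketch of what I mean, reusing the names from your snippet (I'm assuming size and
bestSize are the total byte sizes of the two selections):
{code}
// Sketch only: pick the selection with the higher files-per-byte ratio,
// (# files) / Sum(FileSize), instead of count-first with a size tie-break.
// E.g. 3 files of 5MB each score 3/15 = 0.2 per MB, while 4 files of
// 500MB each score 4/2000 = 0.002 per MB, so the small set wins even
// though it has fewer files.
double candidateRatio = potentialMatchFiles.size() / (double) size;
double bestRatio = bestSelection.isEmpty()
    ? 0.0 : bestSelection.size() / (double) bestSize; // avoid 0/0 -> NaN
if (candidateRatio > bestRatio) {
  bestSelection = new ArrayList<StoreFile>(potentialMatchFiles);
  bestSize = size;
}
{code}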

{code}
    if (files.size() <= 2) {
      return  true;
    }
{code}
Why? What if the two files are sized 500 and 5?

{code}
      long sumAllOtherFilesize = 0;
      for (int j =0; j < files.size(); j++) {
        if (i == j) continue;
        sumAllOtherFilesize += files.get(j).getReader().length();
      }
{code}
The double nested loop is unnecessary. In fact, we already get the total size outside; we might
as well compute it before this check, pass it in, and then just subtract each file from it in
a loop to get the size of all other files. Shorter code too :)
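Roughly this (a sketch, with files and getReader().length() as in the snippet):
{code}
// Compute the total once, before this check.
long totalSize = 0;
for (StoreFile file : files) {
  totalSize += file.getReader().length();
}
// "Sum of all other files" is then just a subtraction; no inner loop.
for (int i = 0; i < files.size(); i++) {
  long singleFileSize = files.get(i).getReader().length();
  long sumAllOtherFilesize = totalSize - singleFileSize;
  // ... existing ratio check against sumAllOtherFilesize ...
}
{code}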


{code}
    minFiles = comConf.getMinFilesToCompact();
    maxFiles = comConf.getMaxFilesToCompact();
    minCompactionSize = comConf.getMinCompactSize();
    ratio = comConf.getCompactionRatio();
    offPeakRatio = comConf.getCompactionRatioOffPeak();
{code}
Nit: is this necessary? The whole point of the compaction config from the FB patch was to make
these easily accessible everywhere, as far as I can see.

{code}
    List<StoreFile> bestSelection = new ArrayList<StoreFile>(0);
{code}
Nit: not necessary.

{code}
for (int start = 0; start < candidates.size(); start++) {
{code}
The start index doesn't have to consider positions with fewer than minFiles files remaining to
the end (see the sketch below).

{code}
 for(int currentEnd = start; currentEnd < candidates.size(); currentEnd++) {
{code}
Can go from start plus minFiles, and be capped at maxFiles too; then the checks inside become unnecessary...
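I.e. roughly this (a sketch; assuming currentEnd is an inclusive index, as in the patch):
{code}
// Outer loop: stop once fewer than minFiles candidates remain.
for (int start = 0; start + minFiles <= candidates.size(); start++) {
  // Window [start, currentEnd] is inclusive, so the smallest legal window
  // ends at start + minFiles - 1 and the largest is capped by maxFiles;
  // the per-window size checks inside then go away.
  int endBound = Math.min(candidates.size(), start + maxFiles);
  for (int currentEnd = start + minFiles - 1; currentEnd < endBound; currentEnd++) {
    List<StoreFile> potentialMatchFiles = candidates.subList(start, currentEnd + 1);
    // ... evaluate potentialMatchFiles against bestSelection ...
  }
}
{code}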

{code}
return  true;
for (int j =0;
singleFileSize >  sumAllOtherFilesize
{code}
Etc.
Nit: spacing.
Nit^2: Many blank lines.

Nit: the main loop could also be done in a more optimal manner; it probably doesn't matter.

                
> Add compaction policy that explores more storefile groups
> ---------------------------------------------------------
>
>                 Key: HBASE-7842
>                 URL: https://issues.apache.org/jira/browse/HBASE-7842
>             Project: HBase
>          Issue Type: New Feature
>          Components: Compaction
>            Reporter: Elliott Clark
>            Assignee: Elliott Clark
>         Attachments: HBASE-7842-0.patch, HBASE-7842-2.patch, HBASE-7842-3.patch, HBASE-7842-4.patch
>
>
> Some workloads that are not as stable can have compactions that are too large or too small
> using the current storefile selection algorithm.
> Currently:
> * Find the first file fi such that FileSize(fi) <= Sum(0, i-1, FileSize(fx))
> * Ensure that there are the min number of files (if there aren't then bail out)
> * If there are too many files keep the larger ones.
> I would propose something like:
> * Find all sets of storefiles where every file satisfies 
> ** FileSize(fi) <= Sum(0, i-1, FileSize(fx))
> ** Num files in set <= max
> ** Num Files in set >= min
> * Then pick the set of files that maximizes ((# storefiles in set) / Sum(FileSize(fx)))
> The thinking is that the above algorithm is pretty easy to reason about: all files satisfy
> the ratio, and it should rewrite the least amount of data to get the biggest impact in seeks.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
