hbase-issues mailing list archives

From "Sergey Shelukhin (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (HBASE-8665) bad compaction priority behavior in queue can cause store to be blocked
Date Fri, 31 May 2013 17:32:22 GMT

    [ https://issues.apache.org/jira/browse/HBASE-8665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13671675#comment-13671675 ]

Sergey Shelukhin edited comment on HBASE-8665 at 5/31/13 5:32 PM:
------------------------------------------------------------------

Well, the effect of getting a faster compaction in this case is a pure accident: if there
were no smaller one already queued, it would still compact 6 files according to policy. Also,
out of the many possible faster compactions in this case, a bad one (later files) is chosen, so
it's not really what the user would expect.
The policy should make such decisions - if we prefer faster compactions for a blocked store, that
preference should be in the policy, and then last-moment selection would still choose the best one.
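To make the idea concrete, here is a toy sketch (in Python, purely illustrative - not HBase code) of a policy that, only when the store is over the blocking-file threshold, deliberately prefers the cheapest contiguous group of files so the store unblocks sooner. The function name and thresholds are assumptions for illustration:

```python
# Toy sketch, not HBase code: when the store is blocked, the *policy* itself
# prefers the cheapest (smallest total size) contiguous group of files,
# tie-breaking toward older files; otherwise it falls back to a stand-in for
# the normal selection. Because the preference lives in the policy,
# last-moment selection still picks the best option available at run time.

def select_compaction(sizes, min_files=3, blocking_files=7):
    n = len(sizes)
    if n < min_files:
        return None  # nothing eligible to compact
    if n >= blocking_files:
        # Blocked: scan all contiguous windows of min_files and take the
        # cheapest one (min() returns the earliest window on ties).
        best = min(range(n - min_files + 1),
                   key=lambda i: sum(sizes[i:i + min_files]))
        return list(range(best, best + min_files))
    # Not blocked: default to the oldest eligible files (stand-in for the
    # normal ratio-based selection).
    return list(range(min_files))
```

With sizes like `[100, 10, 10, 10, 50, 5, 5, 5]` and the store blocked, this picks the three small trailing files; unblocked, it takes the oldest files as usual.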

As for bumping the priority of a queued compaction to what it would be now, that is actually equivalent
to just sorting them by current store priority...

I wonder if there's any fundamental reason to divorce selection from compaction?
If we introduce compaction-based priority modifiers, not just store-based ones, we could still
apply them by doing selection in multiple stores and comparing priorities. Selection is not
that expensive, given how frequently we compact.
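A toy sketch of that last-moment-selection idea (illustrative names, not HBase's API): queue stores rather than pre-selected file lists, and when a compaction thread frees up, snapshot store priorities, walk stores in priority order, and execute the first selection the policy produces:

```python
# Toy sketch (not HBase code) of doing selection at execution time instead
# of enqueue time. Store, select, and compact are all illustrative stand-ins.

class Store:
    def __init__(self, name, priority, selectable_files):
        self.name = name
        self._priority = priority
        self.selectable_files = selectable_files

    def priority(self):
        # Lower value = more urgent, mirroring HBase's store priority.
        return self._priority

def run_one_compaction(stores, select, compact):
    # Snapshot priorities once so the ordering is stable for this pass.
    for store in sorted(stores, key=lambda s: s.priority()):
        files = select(store)      # last-moment selection
        if files:
            compact(store, files)  # execute the first viable selection
            return store.name
    return None

# Demo: the most urgent store has nothing selectable, so the next one runs.
stores = [Store("a", priority=5, selectable_files=[1, 2, 3]),
          Store("b", priority=1, selectable_files=[]),
          Store("c", priority=2, selectable_files=[4, 5])]
done = []
ran = run_one_compaction(stores,
                         select=lambda s: s.selectable_files,
                         compact=lambda s, f: done.append((s.name, f)))
```

Because priorities are read at execution time, a store whose state changed while waiting can never run with a stale selection.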
                
> bad compaction priority behavior in queue can cause store to be blocked
> -----------------------------------------------------------------------
>
>                 Key: HBASE-8665
>                 URL: https://issues.apache.org/jira/browse/HBASE-8665
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Sergey Shelukhin
>            Assignee: Sergey Shelukhin
>
> Note that this can be solved by bumping up the number of compaction threads, but it still
seems like this priority "inversion" should be dealt with.
> There's a store with 1 big file and 3 flushes (1 2 3 4) sitting around and minding its
own business when it decides to compact. Compaction (2 3 4) is created and put in the queue; it's
low priority, so it doesn't get out of the queue for some time - other stores are compacting.
Meanwhile more files are flushed, and at (1 2 3 4 5 6 7) the store decides to compact (5 6 7). This
compaction now has higher priority than the first one. After that, if the load is high, the store enters
a vicious cycle of compacting files as they arrive, being blocked
on and off, with the (2 3 4) compaction staying in the queue for up to ~20 minutes (that I've
seen).
> I wonder why we do this thing where we queue a compaction and compact separately. Perhaps
we should take a snapshot of all store priorities, then do selection in that order and execute the first
compaction we find. This would need a starvation safeguard too, but should probably be better.
> Btw, the exploring compaction policy may be more prone to this, as it can select files from
the middle, not just the beginning. Given the treatment of already-selected files, which
was not changed from the old ratio-based policy (all files with lower seqNums than the ones selected
are also ineligible for further selection), this will make more files ineligible (e.g. imagine
10 blocking files, with 8 present (1-8), and (6 7 8) being selected and getting stuck). Today
I see a case that would also apply to the old policy, but yesterday I saw a file distribution
something like this: 4.5g, 2.1g, 295.9m, 113.3m, 68.0m, 67.8m, 1.1g, 295.1m, 100.4m, unfortunately
without enough logs to figure out how it came about.
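The inversion in the report can be reproduced with a toy priority queue (Python sketch, not HBase code), assuming priority is computed at enqueue time as blocking_files minus the store's current file count, with lower values served first, roughly how HBase derives store priority:

```python
import heapq

# Toy model of the inversion: priority is frozen at *enqueue* time, so a
# later request from the same store, enqueued when more files exist, jumps
# ahead of the older one. Constants and names are illustrative.

BLOCKING_FILES = 10
queue = []
seq = 0  # tie-breaker so heapq never compares the payload tuples

def enqueue(selection, files_in_store_now):
    global seq
    priority = BLOCKING_FILES - files_in_store_now  # lower = more urgent
    heapq.heappush(queue, (priority, seq, selection))
    seq += 1

enqueue((2, 3, 4), files_in_store_now=4)   # priority 6, queued first
enqueue((5, 6, 7), files_in_store_now=7)   # priority 3, queued later

# The later request is dequeued first; (2 3 4) keeps starving for as long
# as newer, higher-priority requests keep arriving.
first = heapq.heappop(queue)[2]
```

Each new flush lowers the priority value of the next request, so under sustained load the oldest request can sit at the back of the heap indefinitely, which is exactly the ~20-minute starvation described above.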

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
