hbase-issues mailing list archives

From "Yu Li (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-15213) Fix increment performance regression caused by HBASE-8763 on branch-1.0
Date Fri, 04 Mar 2016 20:09:40 GMT

    [ https://issues.apache.org/jira/browse/HBASE-15213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15180465#comment-15180465 ]

Yu Li commented on HBASE-15213:
-------------------------------

bq. No, if W2 is present in the queue, T3 cannot break out of the wait-loop. This patch allows a thread to remove multiple consecutive entries that are marked complete at the front of the queue at once without context switching.
Oh yes, agreed after checking the source code more carefully; my mistake, I misunderstood your statement. The early markCompleted of the WriteEntry in {{waitForPreviousTransactionsComplete}} also makes sense, since the later logic makes sure the method won't return until all previous transactions are done.
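
For reference, here is the batching effect in miniature (a standalone sketch, not the actual MVCC code; {{Entry}} stands in for WriteEntry and the queue for {{writeQueue}}):
{code}
import java.util.ArrayDeque;
import java.util.Queue;

// Standalone sketch only: once entries at the head of the queue are
// marked complete, whichever thread holds the lock can retire them all
// in one pass, so a single wakeup shifts many entries.
class BatchAdvanceSketch {
  static class Entry {
    final long writeNumber;
    volatile boolean completed;
    Entry(long writeNumber) { this.writeNumber = writeNumber; }
  }

  /** Retires every consecutive completed entry at the head and returns
   *  the highest retired write number, or -1 if none were retired. */
  static long advancePastCompleted(Queue<Entry> writeQueue) {
    long readPoint = -1;
    Entry head;
    while ((head = writeQueue.peek()) != null && head.completed) {
      writeQueue.poll();              // retire the completed head
      readPoint = head.writeNumber;   // candidate new read point
    }
    return readPoint;
  }

  public static void main(String[] args) {
    Queue<Entry> q = new ArrayDeque<Entry>();
    Entry w1 = new Entry(1), w2 = new Entry(2), w3 = new Entry(3);
    q.add(w1); q.add(w2); q.add(w3);
    w1.completed = true;
    w2.completed = true;              // two consecutive completed heads
    System.out.println(advancePastCompleted(q)); // prints 2; w3 stays queued
  }
}
{code}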

One minor improvement is that we could call advanceMemstore within the {{writeQueue}} lock and further reduce context switches, like:
{code}
    WriteEntry firstEntry = null;   // (context) declared before this loop in the method
    boolean interrupted = false;    // (context) likewise
    do {
      synchronized (writeQueue) {
        // empty queue, no action required and simply break out
        if (writeQueue.isEmpty()) {
          break;
        }
        // WriteEntry already removed from the queue by another handler,
        // which also means read point already got updated, shortcut
        if (!writeQueue.contains(w)) {
          break;
        }
        firstEntry = writeQueue.iterator().next();
        if (firstEntry == w) {
          // all previous in-flight transactions are done
          // advance the read point within the lock
          advanceMemstore(w);
          break;
        }
        // we're still not the head, wait for all previous done
        try {
          writeQueue.wait(0);
        } catch (InterruptedException ie) {
          // We were interrupted... finish the loop -- i.e. cleanup -- and
          // then, on our way out, reset the interrupt flag.
          interrupted = true;
          break;
        }
      }
    } while (firstEntry != null);
{code}

Thanks for the clarification [~junegunn], it helps. And nice work! :-)

> Fix increment performance regression caused by HBASE-8763 on branch-1.0
> -----------------------------------------------------------------------
>
>                 Key: HBASE-15213
>                 URL: https://issues.apache.org/jira/browse/HBASE-15213
>             Project: HBase
>          Issue Type: Sub-task
>          Components: Performance
>            Reporter: Junegunn Choi
>            Assignee: Junegunn Choi
>             Fix For: 1.1.4, 1.0.4
>
>         Attachments: 15157v3.branch-1.1.patch, HBASE-15213-increment.png, HBASE-15213.branch-1.0.patch, HBASE-15213.v1.branch-1.0.patch
>
>
> This is an attempt to fix the increment performance regression caused by HBASE-8763 on branch-1.0.
> I'm aware that hbase.increment.fast.but.narrow.consistency was added to branch-1.0 (HBASE-15031) to address the issue, and that separate work is ongoing on the master branch, but anyway, this is my take on the problem.
> I read through HBASE-14460 and HBASE-8763; it wasn't clear to me what caused the slowdown, but I could indeed reproduce the performance regression.
> Test setup:
> - Server: 4-core Xeon 2.4GHz Linux server running mini cluster (100 handlers, JDK 1.7)
> - Client: Another box of the same spec
> - Increments on 10k random records on a single-region table, recreated every time
> Increment throughput (TPS):
> || Num threads || Before HBASE-8763 (d6cc2fb) || branch-1.0 || branch-1.0 (narrow-consistency) ||
> || 1            | 2661                         | 2486        | 2359  |
> || 2            | 5048                         | 5064        | 4867  |
> || 4            | 7503                         | 8071        | 8690  |
> || 8            | 10471                        | 10886       | 13980 |
> || 16           | 15515                        | 9418        | 18601 |
> || 32           | 17699                        | 5421        | 20540 |
> || 64           | 20601                        | 4038        | 25591 |
> || 96           | 19177                        | 3891        | 26017 |
> We can clearly observe that the throughput degrades as we increase the number of concurrent requests, which led me to believe that there's severe context-switching overhead. I could indirectly confirm that suspicion with the cs column in vmstat output: branch-1.0 shows a much higher number of context switches despite much lower throughput.
> Here are the observations:
> - A WriteEntry in the writeQueue can only be removed by the very handler that put it, and only when it is at the front of the queue and marked complete.
> - Since a WriteEntry is marked complete after the wait-loop, only one entry can be removed at a time.
> - This stringent condition causes O(N^2) context switches, where N is the number of concurrent handlers processing requests (see the sketch below).
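> To illustrate the pattern, a standalone sketch of the pre-patch shape (illustrative only, not the exact code):
> {code}
> import java.util.ArrayDeque;
> import java.util.Queue;
>
> // Illustrative only: completion happens after the wait-loop, so a
> // handler can retire nothing but its own entry, one per wakeup.
> // notifyAll() wakes all waiters but only the new head makes progress,
> // which is roughly N^2 wakeups for N concurrent handlers.
> class PrePatchSketch {
>   static class WriteEntry { volatile boolean completed; }
>
>   private final Queue<WriteEntry> writeQueue = new ArrayDeque<WriteEntry>();
>
>   void waitForPreviousTransactionsComplete(WriteEntry w)
>       throws InterruptedException {
>     synchronized (writeQueue) {
>       while (writeQueue.peek() != w) { // only the owner can retire w
>         writeQueue.wait();             // everyone else parks again
>       }
>       w.completed = true;              // marked complete after the loop
>       writeQueue.poll();               // exactly one entry per wakeup
>       writeQueue.notifyAll();          // wake all remaining waiters
>     }
>   }
> }
> {code}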
> So what I tried here is to mark the WriteEntry complete before we go into the wait-loop. With this change, multiple WriteEntries can be shifted at a time without context switches. I changed writeQueue to a LinkedHashSet since a fast containment check is needed, as a WriteEntry can now be removed by any handler.
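> A minimal self-contained sketch of this idea (illustrative only, not the patch itself; the read point bookkeeping is simplified):
> {code}
> import java.util.Iterator;
> import java.util.LinkedHashSet;
>
> // Illustrative only: entries are marked complete BEFORE the wait-loop,
> // so whichever handler reaches the head retires the whole run of
> // completed entries at once. LinkedHashSet keeps FIFO order while
> // making the contains(w) shortcut check cheap.
> class PatchedSketch {
>   static class WriteEntry { volatile boolean completed; }
>
>   private final LinkedHashSet<WriteEntry> writeQueue =
>       new LinkedHashSet<WriteEntry>();
>   private long readPoint; // stands in for the memstore read point
>
>   void waitForPreviousTransactionsComplete(WriteEntry w) {
>     w.completed = true;               // early markCompleted
>     synchronized (writeQueue) {
>       // If another handler already retired w, we are done; otherwise
>       // wait until w reaches the head of the queue.
>       while (writeQueue.contains(w) && writeQueue.iterator().next() != w) {
>         try {
>           writeQueue.wait();
>         } catch (InterruptedException ie) {
>           Thread.currentThread().interrupt();
>           break;
>         }
>       }
>       // Retire every consecutive completed entry at the head at once.
>       Iterator<WriteEntry> it = writeQueue.iterator();
>       while (it.hasNext() && it.next().completed) {
>         it.remove();
>         readPoint++;                  // advance the read point
>       }
>       writeQueue.notifyAll();
>     }
>   }
> }
> {code}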
> The numbers look good; throughput is virtually identical to the pre-HBASE-8763 era.
> || Num threads || branch-1.0 with fix ||
> || 1            | 2459                 |
> || 2            | 4976                 |
> || 4            | 8033                 |
> || 8            | 12292                |
> || 16           | 15234                |
> || 32           | 16601                |
> || 64           | 19994                |
> || 96           | 20052                |
> So what do you think about it? Please let me know if I'm missing anything.



