hbase-issues mailing list archives

From "Yu Li (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-16698) Performance issue: handlers stuck waiting for CountDownLatch inside WALKey#getWriteEntry under high writing workload
Date Sun, 23 Oct 2016 06:08:58 GMT

    [ https://issues.apache.org/jira/browse/HBASE-16698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15599149#comment-15599149 ]

Yu Li commented on HBASE-16698:
-------------------------------

bq. Lets not backport to 1.2 until it in 1.3. Thats how we generally do it. Else the discontinuity confuses.
Ok, got it, thanks for the confirmation [~stack]

bq. On master, when I do a jstack with some load, almost all the handlers are waiting for sync()... For async, we still have to have the latch I think.
I see, and that makes sense. Let me test on the master branch to make sure 1) the patch here introduces no perf regression for SYNC_WAL, and 2) it benefits ASYNC_WAL.
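
For anyone following along, here is a minimal sketch of the latch pattern in question (names are simplified and hypothetical, not the actual WALKey/FSWALEntry code): each handler blocks on a CountDownLatch until the single ringbuffer consumer stamps its sequence id, which is why the handlers pile up behind that one consumer thread.

{code:java}
// Simplified sketch of the latch pattern described above; hypothetical names,
// not the real HBase classes.
import java.util.concurrent.CountDownLatch;

class WriteEntrySketch {
    private final CountDownLatch seqNumAssignedLatch = new CountDownLatch(1);
    private volatile long sequenceId = -1;

    // Called by the single ringbuffer consumer once it handles the append.
    void stampSequenceId(long seqId) {
        this.sequenceId = seqId;
        seqNumAssignedLatch.countDown();
    }

    // Called by the handler thread; parks here until the consumer assigns the id.
    long getSequenceId() throws InterruptedException {
        seqNumAssignedLatch.await();
        return sequenceId;
    }
}
{code}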

bq. The other reason I was asking about this is that I have a hacked up patch which divides the batchMutate() into 3 phases... After sync some other handler or thread will complete the work.
Thanks for bringing this up and mentioning the paper [~enis]. I think this echoes the idea of the "SEDA" JIRA mentioned weeks ago, and we also have some initial work in progress here in Alibaba-search. I believe this could increase our overall throughput and is worth a standalone JIRA for further discussion (smile).
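
As a rough illustration of that staged idea (only a sketch, not the patch [~enis] mentioned), the post-sync work could be handed off to another thread so the handler is not blocked on the sync:

{code:java}
// Rough sketch of a staged ("SEDA"-style) write path: the handler does the
// append, and the completion work runs on another thread once the WAL sync
// (represented here by a future) finishes. Hypothetical names, for illustration only.
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class StagedWriteSketch {
    private final ExecutorService completionPool = Executors.newFixedThreadPool(4);

    void batchMutate(Runnable appendPhase,
                     CompletableFuture<Void> walSyncFuture,
                     Runnable completePhase) {
        appendPhase.run();                                          // phase 1: prepare + append
        walSyncFuture.thenRunAsync(completePhase, completionPool);  // phase 3 runs after the sync
                                                                    // (phase 2), on another thread
        // the handler returns here instead of waiting for the sync to finish
    }
}
{code}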

Also glanced at HBASE-3899; it seems like a similar idea, but the commit was somehow reverted... mind telling the whole story, sir [~stack]?
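
For reference, a minimal sketch of the fix proposed in the issue description quoted below (hypothetical names, simplified from the attached patches): grab the next sequence id and publish the append under one lock, so the append already carries its sequence id and no latch wait is needed afterwards.

{code:java}
// Minimal sketch of the "grab WriteEntry before publishing to the ringbuffer" idea:
// the lock makes "grab sequence id" + "publish append" atomic, so appends stay in
// sequence-id order within a region while different regions no longer wait on a
// shared latch. Hypothetical names, not the attached patch itself.
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.ReentrantLock;

class PreAssignSeqIdSketch {
    private final AtomicLong nextSeqId = new AtomicLong(1);
    private final ReentrantLock appendLock = new ReentrantLock(); // per region in practice

    long append(Object walEdit) {
        appendLock.lock();
        try {
            long seqId = nextSeqId.getAndIncrement(); // "grab WriteEntry" up front
            publishToRingBuffer(walEdit, seqId);      // the edit already carries its seq id
            return seqId;
        } finally {
            appendLock.unlock();
        }
    }

    private void publishToRingBuffer(Object walEdit, long seqId) {
        // stand-in for the real disruptor publish; omitted in this sketch
    }
}
{code}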

> Performance issue: handlers stuck waiting for CountDownLatch inside WALKey#getWriteEntry under high writing workload
> --------------------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-16698
>                 URL: https://issues.apache.org/jira/browse/HBASE-16698
>             Project: HBase
>          Issue Type: Improvement
>          Components: Performance
>    Affects Versions: 1.2.3
>            Reporter: Yu Li
>            Assignee: Yu Li
>             Fix For: 2.0.0
>
>         Attachments: HBASE-16698.branch-1.patch, HBASE-16698.branch-1.v2.patch, HBASE-16698.branch-1.v2.patch, HBASE-16698.patch, HBASE-16698.v2.patch, hadoop0495.et2.jstack
>
>
> As titled, in our production environment we observed 98 out of 128 handlers stuck waiting for the CountDownLatch {{seqNumAssignedLatch}} inside {{WALKey#getWriteEntry}} under a high writing workload.
> After digging into the problem, we found that it is mainly caused by advancing the mvcc in the append logic. Below is some detailed analysis:
> Under the current branch-1 code logic, all batch puts call {{WALKey#getWriteEntry}} after appending the edit to the WAL, and {{seqNumAssignedLatch}} is only released when the corresponding append call is handled by the RingBufferEventHandler (see {{FSWALEntry#stampRegionSequenceId}}). Because we currently use a single event handler for the ringbuffer, the append calls are handled one by one (a lot of our current logic actually depends on this sequential handling), and this becomes a bottleneck under a high writing workload.
> The worst part is that by default we only use one WAL per RS, so appends for all regions are handled sequentially, which causes contention among different regions...
> To fix this, we could still make use of the "sequential appends" mechanism: grab the WriteEntry before publishing the append onto the ringbuffer and use it as the sequence id, adding a lock to make "grab WriteEntry" and "append edit" a single transaction. This will still cause contention inside a region but avoids contention between different regions. This solution has already been verified in our online environment and proved effective.
> Notice that for the master (2.0) branch, since we already changed the write pipeline to sync before writing the memstore (HBASE-15158), this issue only exists for the ASYNC_WAL write scenario.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
