hbase-issues mailing list archives

From "Yu Li (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-16698) Performance issue: handlers stuck waiting for CountDownLatch inside WALKey#getWriteEntry under high writing workload
Date Fri, 23 Sep 2016 22:15:21 GMT

    [ https://issues.apache.org/jira/browse/HBASE-16698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15517721#comment-15517721 ]

Yu Li commented on HBASE-16698:
-------------------------------

bq. So, why is this patch faster? In current implementation, contention is farmed out to be per WALKey instance. Each has its own latch.
Yes, each WALKey has its own latch, but the contention is not on the latch itself but on the sequential handling of ringbuffer events. The whole process is like:
{noformat}
RingBufferEventHandler grabs one append
-> FSHLog#append is called
-> FSWALEntry#stampRegionSequenceId is called
-> That entry's CountDownLatch is released
-> RingBufferEventHandler grabs the next append
-> Another CountDownLatch is released
-> Repeat
{noformat}
So all CountDownLatches are released sequentially, with no parallelism...
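The sequential release above can be seen in a minimal, self-contained Java sketch. Note this is an illustration of the pattern only: {{Entry}} and its fields are simplified stand-ins for FSWALEntry/WALKey, not the actual HBase classes.

```java
import java.util.concurrent.CountDownLatch;

public class SequentialLatchDemo {

    // Simplified stand-in for FSWALEntry/WALKey: each append gets its
    // own latch, released when its sequence id is stamped.
    static class Entry {
        final CountDownLatch seqNumAssignedLatch = new CountDownLatch(1);
        volatile long seqNum = -1;

        // cf. FSWALEntry#stampRegionSequenceId
        void stampSequenceId(long id) {
            seqNum = id;
            seqNumAssignedLatch.countDown();
        }
    }

    public static void main(String[] args) throws Exception {
        final int handlers = 4;
        final Entry[] entries = new Entry[handlers];
        for (int i = 0; i < handlers; i++) entries[i] = new Entry();

        // Handler threads block on their own entry's latch, like
        // WALKey#getWriteEntry waiting on seqNumAssignedLatch.
        Thread[] ts = new Thread[handlers];
        for (int i = 0; i < handlers; i++) {
            final Entry e = entries[i];
            ts[i] = new Thread(() -> {
                try {
                    e.seqNumAssignedLatch.await();
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                }
            });
            ts[i].start();
        }

        // A single "RingBufferEventHandler" stamps sequence ids one by one,
        // so every handler's wake-up is serialized behind this loop even
        // though each entry has its own latch.
        long seq = 0;
        for (Entry e : entries) e.stampSequenceId(++seq);

        for (Thread t : ts) t.join();
        for (int i = 0; i < handlers; i++) {
            if (entries[i].seqNum != i + 1) throw new AssertionError("bad seq");
        }
        System.out.println("all " + handlers + " handlers released sequentially");
    }
}
```

Even with many handler threads waiting, throughput is bounded by the single stamping loop; that is the bottleneck this issue describes.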

bq. I was thinking there a correctness issue but the numbering/mvcc is scoped to the region so if you lock across the region append while getting the mvcc, and this is only place mvcc is incremented, then all should be good
Yes, agreed. And it seems our mighty [~eclark] has the same concern here. Hope this answers your question too, [~eclark] :-)

bq. Pity we have to lock. Could we be more radical and use the ringbuffer bucket number? Then no locking needed. The change would be way more intrusive though. You'd have to change a lot
Cannot agree more... Actually I once tried using multiple event handlers, but too much of our current logic depends on the sequential append handling to safely break it, so I finally gave up... But I agree that we should revisit this sometime later; I think it's worth the effort.

> Performance issue: handlers stuck waiting for CountDownLatch inside WALKey#getWriteEntry under high writing workload
> --------------------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-16698
>                 URL: https://issues.apache.org/jira/browse/HBASE-16698
>             Project: HBase
>          Issue Type: Improvement
>          Components: Performance
>    Affects Versions: 1.1.6, 1.2.3
>            Reporter: Yu Li
>            Assignee: Yu Li
>         Attachments: HBASE-16698.patch, hadoop0495.et2.jstack
>
>
> As titled, on our production environment we observed 98 out of 128 handlers stuck waiting on the CountDownLatch {{seqNumAssignedLatch}} inside {{WALKey#getWriteEntry}} under a high writing workload.
> After digging into the problem, we found that it is mainly caused by advancing mvcc in the append logic. Below is some detailed analysis:
> Under the current branch-1 code logic, all batch puts call {{WALKey#getWriteEntry}} after appending the edit to the WAL, and {{seqNumAssignedLatch}} is only released when the corresponding append call is handled by RingBufferEventHandler (see {{FSWALEntry#stampRegionSequenceId}}). Because we currently use a single event handler for the ringbuffer, the append calls are handled one by one (actually a lot of our current logic depends on this sequential handling), and this becomes a bottleneck under a high writing workload.
> The worst part is that by default we only use one WAL per RS, so appends on all regions are handled sequentially, which causes contention among different regions...
> To fix this, we could still make use of the "sequential appends" mechanism: grab the WriteEntry before publishing the append onto the ringbuffer and use it as the sequence id, adding a lock to make "grab WriteEntry" and "append edit" a single transaction. This still causes contention inside a region but avoids contention between different regions. This solution has already been verified in our online environment and proved effective.
> Notice that for the master (2.0) branch, since we already changed the write pipeline to sync before writing the memstore (HBASE-15158), this issue only exists for the ASYNC_WAL write scenario.
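The proposed fix in the description ("grab WriteEntry before publishing, under a region-scoped lock") can be sketched roughly as follows. This is only an illustration of the locking idea: {{RegionAppender}}, {{WriteEntry}}, and {{publishToRingBuffer}} are hypothetical names, not the patch's actual code.

```java
import java.util.concurrent.atomic.AtomicLong;

public class PreAssignSeqIdDemo {

    // Hypothetical stand-in for the mvcc WriteEntry carrying a sequence id.
    static class WriteEntry {
        final long seqId;
        WriteEntry(long seqId) { this.seqId = seqId; }
    }

    // One appender per region: the lock is region-scoped, so appends to
    // different regions never contend with each other.
    static class RegionAppender {
        private final AtomicLong mvcc = new AtomicLong(0);
        private final Object appendLock = new Object();

        WriteEntry append(String edit) {
            // "Grab WriteEntry" and "append edit" form one transaction:
            // the sequence id is assigned and the edit published atomically,
            // so ids stay in append order within the region. No one blocks
            // on a latch waiting for the ringbuffer consumer anymore.
            synchronized (appendLock) {
                WriteEntry we = new WriteEntry(mvcc.incrementAndGet());
                publishToRingBuffer(edit, we.seqId);
                return we;
            }
        }

        private void publishToRingBuffer(String edit, long seqId) {
            // stand-in for FSHLog publishing the entry onto the disruptor
        }
    }

    public static void main(String[] args) {
        RegionAppender regionA = new RegionAppender();
        RegionAppender regionB = new RegionAppender();
        long a1 = regionA.append("put-1").seqId;
        long a2 = regionA.append("put-2").seqId;
        long b1 = regionB.append("put-1").seqId;
        // Region A's ids advance independently of region B's.
        System.out.println(a1 + "," + a2 + "," + b1);
    }
}
```

The trade-off matches the description: writers to the same region still serialize on {{appendLock}}, but regions no longer serialize against each other behind one WAL consumer thread.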



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
