hbase-issues mailing list archives

From "Jingyun Tian (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (HBASE-19358) Improve the stability of splitting log when do fail over
Date Mon, 08 Jan 2018 06:30:00 GMT

    [ https://issues.apache.org/jira/browse/HBASE-19358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279809#comment-16279809 ]

Jingyun Tian edited comment on HBASE-19358 at 1/8/18 6:29 AM:
--------------------------------------------------------------

[~carp84] here is my test result:
Split one 512MB HLog on a single regionserver:
!https://issues.apache.org/jira/secure/attachment/12905029/split-1-log.png!
We can see that in most situations the new logic performs better than the old one.

The motivation for this improvement is that when a cluster has to restart, if there are too
many regions per regionserver, the restart is prone to failure and we have to split one hlog
at a time to avoid errors. So I tested how much throughput the cluster can reach with
different thread counts when restarting the whole cluster.

Throughput when we restart a cluster that has 18 regionservers and 18 datanodes:
!https://issues.apache.org/jira/secure/attachment/12905030/split_test_result.png!
The blue series represents the throughput of a cluster with 20000 regions (1111 regions per
rs), the red series 40000 regions (2222 regions per rs), and the orange series 80000 regions
(4444 regions per rs).
Here is the table in case the chart is not clear:
!https://issues.apache.org/jira/secure/attachment/12905026/split-table.png!
Based on this chart, I think the time cost of restarting the whole cluster is not related
to the thread count. The more regions the hlog contains, the more time it costs to split.



> Improve the stability of splitting log when do fail over
> --------------------------------------------------------
>
>                 Key: HBASE-19358
>                 URL: https://issues.apache.org/jira/browse/HBASE-19358
>             Project: HBase
>          Issue Type: Improvement
>          Components: MTTR
>    Affects Versions: 0.98.24
>            Reporter: Jingyun Tian
>            Assignee: Jingyun Tian
>             Fix For: 1.4.1, 1.5.0, 2.0.0-beta-2
>
>         Attachments: HBASE-18619-branch-2-v2.patch, HBASE-19358-branch-1-v2.patch, HBASE-19358-branch-1-v3.patch,
HBASE-19358-branch-1.patch, HBASE-19358-branch-2-v3.patch, HBASE-19358-v1.patch, HBASE-19358-v4.patch,
HBASE-19358-v5.patch, HBASE-19358-v6.patch, HBASE-19358-v7.patch, HBASE-19358-v8.patch, HBASE-19358.patch,
split-1-log.png, split-logic-new.jpg, split-logic-old.jpg, split-table.png, split_test_result.png
>
>
> The way we split logs now is shown in the following figure:
> !https://issues.apache.org/jira/secure/attachment/12905027/split-logic-old.jpg!
> The problem is that the OutputSink writes the recovered edits while the log is being split,
which means it creates one WriterAndPath for each region and retains it until the end. If the
cluster is small and the number of regions per rs is large, it will create too many HDFS streams
at the same time, and it is then prone to failure since each datanode needs to handle too many
streams.
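> To make the failure mode concrete, here is a minimal sketch of the old flow. Entry handling, the Writer type, and the helper below are illustrative stand-ins so the sketch compiles on its own, not the actual HBase classes:
{code:java}
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of the old split flow: one writer per region,
// opened on first use and held open until the whole log is split.
class OldSplitFlowSketch {
  interface Writer {
    void append(String edit);
    void close();
  }

  // In HBase this would open an HDFS stream under <region>/recovered.edits/;
  // here it is just a stub.
  static Writer openRecoveredEditsWriter(String region) {
    return new Writer() {
      public void append(String edit) { /* write edit to the stream */ }
      public void close() { /* close the HDFS stream */ }
    };
  }

  // entries: (regionName, edit) pairs read sequentially from the hlog.
  static void splitLog(List<Map.Entry<String, String>> entries) {
    Map<String, Writer> writers = new HashMap<>();
    for (Map.Entry<String, String> e : entries) {
      // One stream per region, opened lazily and held until the very end:
      // a log touching N regions keeps N HDFS streams open at once.
      writers.computeIfAbsent(e.getKey(), OldSplitFlowSketch::openRecoveredEditsWriter)
             .append(e.getValue());
    }
    // Streams close only after the whole log has been read.
    writers.values().forEach(Writer::close);
  }
}
{code}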
> Thus I came up with a new way to split logs.
> !https://issues.apache.org/jira/secure/attachment/12905028/split-logic-new.jpg!
> We try to cache all the recovered edits, but if the cache exceeds the MaxHeapUsage, we
pick the largest EntryBuffer and write it to a file (closing the writer when finished). Then,
after we have read all entries into memory, we start a writeAndCloseThreadPool with a certain
number of threads to write all buffers to files. Thus it will not create more HDFS streams
than the *_hbase.regionserver.hlog.splitlog.writer.threads_* we set.
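> A rough sketch of the new flow under the same simplified types. The spill threshold, buffer bookkeeping, and pool wiring are assumptions for illustration, not the patch's exact code:
{code:java}
import java.util.Comparator;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Simplified stand-in for the patch's EntryBuffer/OutputSink machinery.
class NewSplitFlowSketch {
  static final long MAX_HEAP_USAGE = 128L * 1024 * 1024; // spill threshold (assumed value)
  static final int WRITER_THREADS = 3; // hbase.regionserver.hlog.splitlog.writer.threads

  final Map<String, StringBuilder> buffers = new HashMap<>();
  long heapUsed = 0;

  // Called for every entry read from the hlog: buffer it in memory, and if
  // the cache grows past the threshold, flush exactly one buffer right away.
  void bufferEntry(String region, String edit) {
    buffers.computeIfAbsent(region, r -> new StringBuilder()).append(edit);
    heapUsed += edit.length();
    if (heapUsed > MAX_HEAP_USAGE) {
      spillLargestBuffer();
    }
  }

  // Pick the largest EntryBuffer, write it out, and close the writer
  // immediately, so reading never piles up open streams.
  void spillLargestBuffer() {
    buffers.entrySet().stream()
        .max(Comparator.comparingInt(
            (Map.Entry<String, StringBuilder> e) -> e.getValue().length()))
        .ifPresent(e -> {
          heapUsed -= e.getValue().length();
          buffers.remove(e.getKey());
          writeAndClose(e.getKey(), e.getValue());
        });
  }

  // After the whole log is in memory, a fixed-size pool writes and closes
  // each buffer, so open streams never exceed WRITER_THREADS.
  void flushAll() throws InterruptedException {
    ExecutorService pool = Executors.newFixedThreadPool(WRITER_THREADS);
    buffers.forEach((region, buf) -> pool.submit(() -> writeAndClose(region, buf)));
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.HOURS);
  }

  void writeAndClose(String region, StringBuilder buf) {
    // Open the recovered.edits file for this region, write buf, close the stream.
  }
}
{code}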
> The biggest benefit is that we can control the number of streams we create during log
splitting: it will not exceed *_hbase.regionserver.wal.max.splitters * hbase.regionserver.hlog.splitlog.writer.threads_*,
whereas before it was *_hbase.regionserver.wal.max.splitters * the number of regions the hlog contains_*.
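> For a concrete sense of the bound (numbers purely illustrative): with *_hbase.regionserver.wal.max.splitters_* = 2 and *_hbase.regionserver.hlog.splitlog.writer.threads_* = 3, the new cap is 2 * 3 = 6 concurrent streams, while under the old logic two splitters working on hlogs that each touch 1111 regions could hold 2 * 1111 = 2222 streams open at once.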



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
