hbase-issues mailing list archives

From "Yu Li (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-19290) Reduce zk request when doing split log
Date Thu, 23 Nov 2017 03:59:00 GMT

    [ https://issues.apache.org/jira/browse/HBASE-19290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16263773#comment-16263773 ]

Yu Li commented on HBASE-19290:

Let me try to add more information:

Once, when upgrading the HDFS version, the NameNode had a fencing problem that caused all
the RegionServers to abort one by one. After HDFS recovered and the HBase cluster restarted,
we observed Master threads waiting on zookeeper to return:
"MASTER_SERVER_OPERATIONS-hdpet2mainsem2:60100-28"#2236 prio=5 os_prio=0 tid=0x00007ff526bad800
nid=0xa890 in Object.wait() [0x00007ff5150f6000]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:502)
        at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1342)
        - locked <0x00000005d9c720d0> (a org.apache.zookeeper.ClientCnxn$Packet)
        at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1470)
        at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getChildren(RecoverableZooKeeper.java:295)
        at org.apache.hadoop.hbase.zookeeper.ZKUtil.listChildrenNoWatch(ZKUtil.java:635)
        at org.apache.hadoop.hbase.coordination.ZKSplitLogManagerCoordination.remainingTasksInCoordination(ZKSplitLogManagerCoordination.java:150)
        at org.apache.hadoop.hbase.master.SplitLogManager.waitForSplittingCompletion(SplitLogManager.java:353)
        - locked <0x00000006440826e8> (a org.apache.hadoop.hbase.master.SplitLogManager$TaskBatch)
        at org.apache.hadoop.hbase.master.SplitLogManager.splitLogDistributed(SplitLogManager.java:274)

After investigation we found the root cause: the splitWAL znode contained too many children,
which made the {{getChildren}} call on it extremely time-consuming.

After some further discussion on how to resolve the issue, we think the most efficient way
is to reduce the speed of publishing split tasks, or rather, to publish a task only when there's
an available WAL splitter. Publishing tasks aggressively helps nothing; it only slows down the
{{getChildren}} operation on splitWAL, and with it the whole recovery.
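The throttling idea above can be sketched as follows. This is a minimal, hypothetical model
(not the actual patch code): {{ThrottledTaskPublisher}}, {{submit}}, {{taskDone}} and the
in-memory queues are all illustrative stand-ins for creating/deleting task znodes under splitWAL,
assuming the Master knows the number of free WAL splitters.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical sketch: split tasks are held in a local pending queue and only
// "published" (in the real system: created as znodes under splitWAL) when a
// WAL splitter slot is free. This keeps the child count of splitWAL small,
// so getChildren stays cheap.
public class ThrottledTaskPublisher {
    private final int availableSplitters;          // free splitter slots (assumption)
    private final Queue<String> pending = new ArrayDeque<>();
    private final Queue<String> published = new ArrayDeque<>();

    public ThrottledTaskPublisher(int availableSplitters) {
        this.availableSplitters = availableSplitters;
    }

    public void submit(String walPath) {
        pending.add(walPath);
        drain();
    }

    // Called when a splitter finishes a task, freeing a slot.
    public void taskDone() {
        published.poll();
        drain();
    }

    // Publish only while outstanding tasks < available splitters.
    private void drain() {
        while (published.size() < availableSplitters && !pending.isEmpty()) {
            published.add(pending.poll());         // stand-in for znode creation
        }
    }

    public int publishedCount() { return published.size(); }
    public int pendingCount()   { return pending.size(); }

    public static void main(String[] args) {
        ThrottledTaskPublisher p = new ThrottledTaskPublisher(2);
        for (int i = 0; i < 5; i++) p.submit("wal-" + i);
        // Only 2 tasks sit on ZooKeeper at a time; the other 3 wait locally.
        System.out.println(p.publishedCount() + " published, " + p.pendingCount() + " pending");
        p.taskDone();
        System.out.println(p.publishedCount() + " published, " + p.pendingCount() + " pending");
    }
}
```

The point is that the size of the splitWAL child list is bounded by the number of splitters
rather than by the number of dead servers' WALs, which is what made {{getChildren}} slow.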

After the patched version went online, we encountered another disaster case (unfortunately...)
and saw no more zk contention problems. The WAL split speed was stable at 0.2TB/minute.

So we don't have any performance testing results, but the theory is proved by observation from
the real world; we hope this is convincing (smile).

> Reduce zk request when doing split log
> --------------------------------------
>                 Key: HBASE-19290
>                 URL: https://issues.apache.org/jira/browse/HBASE-19290
>             Project: HBase
>          Issue Type: Improvement
>            Reporter: binlijin
>            Assignee: binlijin
>         Attachments: HBASE-19290.master.001.patch, HBASE-19290.master.002.patch, HBASE-19290.master.003.patch,
> We observe that once the cluster has 1000+ nodes, when hundreds of nodes abort and do
split log, the split is very slow, and we find the regionserver and master waiting on the
zookeeper response, so we need to reduce zookeeper requests and pressure for big clusters.
> (1) Reduce requests to rsZNode: every time, calculateAvailableSplitters gets rsZNode's
children from zookeeper; when the cluster is huge, this is heavy. This patch reduces those requests.

> (2) When the regionserver has the max split tasks running, it may still try to grab tasks
and issue zookeeper requests; we should sleep and wait until we can grab tasks again.
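
Point (2) can be sketched with a simple guard: a region server that is already running its
maximum number of split tasks backs off instead of issuing grab-task requests to zookeeper in a
tight loop. This is a minimal illustration; the names ({{shouldAttemptGrab}}, {{maxTasks}},
{{runningTasks}}) are hypothetical and not the actual patch's API.

```java
// Hypothetical sketch: only contact ZooKeeper to grab a split task when a
// local splitter slot is free; otherwise sleep instead of spamming ZooKeeper.
public class SplitWorkerBackoff {
    static boolean shouldAttemptGrab(int runningTasks, int maxTasks) {
        // Only talk to ZooKeeper when we could actually run another task.
        return runningTasks < maxTasks;
    }

    public static void main(String[] args) throws InterruptedException {
        int maxTasks = 2;
        int runningTasks = 2;            // saturated: all splitter slots busy
        int zkRequests = 0;
        for (int i = 0; i < 10; i++) {
            if (shouldAttemptGrab(runningTasks, maxTasks)) {
                zkRequests++;            // would issue the grab-task request here
            } else {
                Thread.sleep(1);         // back off while saturated
            }
        }
        System.out.println("zk requests while saturated: " + zkRequests);
    }
}
```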

This message was sent by Atlassian JIRA
