hbase-issues mailing list archives

From "Stephen Yuan Jiang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-15251) During a cluster restart, Hmaster thinks it is a failover by mistake
Date Sat, 13 Feb 2016 05:37:18 GMT

    [ https://issues.apache.org/jira/browse/HBASE-15251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145814#comment-15145814 ]

Stephen Yuan Jiang commented on HBASE-15251:
--------------------------------------------

The reason I asked about the performance gain is that it looks like you added an additional
check to determine whether this is a failover; if it is not a failover (the clean startup in
your case), you continue to the normal code path via "if(!failover) {".  I don't see how this
helps improve the clean-startup scenario you are targeting.  It improves the failover scenario.
Am I missing something?
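
To make the question concrete, here is a self-contained sketch of the check being discussed
(plain Java with illustrative stand-in types, not the actual HBASE-15251 patch or the real
AssignmentManager/ServerManager API):

import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Stand-alone sketch of the check discussed above; names and types are
// illustrative stand-ins, not the real HBase master classes.
public class FailoverCheckSketch {

    // A dead-server entry only counts as failover evidence if that server
    // still had regions assigned to it; on a clean restart nothing is assigned.
    static boolean isFailover(Set<String> deadServers,
                              Map<String, Set<String>> regionsByServer) {
        for (String dead : deadServers) {
            Set<String> regions = regionsByServer.get(dead);
            if (regions != null && !regions.isEmpty()) {
                return true;   // dead server still owned regions -> real failover
            }
        }
        return false;          // dead entries exist, but nothing was assigned
    }

    public static void main(String[] args) {
        // Simulated clean restart: the "dead" server has no regions assigned.
        Map<String, Set<String>> regionsByServer = new HashMap<>();
        regionsByServer.put("rs1,16020,1455300000000", Collections.emptySet());
        Set<String> deadServers = new HashSet<>();
        deadServers.add("rs1,16020,1455300000000");

        if (!isFailover(deadServers, regionsByServer)) {
            // Corresponds to the "if(!failover) {" path: normal clean-startup
            // assignment instead of failover recovery.
            System.out.println("clean startup: proceed with normal assignment");
        }
    }
}

In this sketch, a clean restart can leave the dead-server list non-empty even though nothing
is assigned, so isFailover() returns false and the normal startup path runs.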

> During a cluster restart, Hmaster thinks it is a failover by mistake
> --------------------------------------------------------------------
>
>                 Key: HBASE-15251
>                 URL: https://issues.apache.org/jira/browse/HBASE-15251
>             Project: HBase
>          Issue Type: Bug
>          Components: master
>    Affects Versions: 2.0.0, 0.98.15
>            Reporter: Clara Xiong
>            Assignee: Clara Xiong
>         Attachments: HBASE-15251-master.patch
>
>
> We often need to do a cluster restart as part of a release for a cluster of > 1000 nodes.
> We have tried our best to get a clean shutdown, but 50% of the time HMaster still thinks it
> is a failover. This increases the restart time from 5 min to 30 min and decreases locality
> from 99% to 5%, since we didn't use a locality-aware balancer. We filed HBASE-14129 for this,
> but the fix didn't work.
> After adding more logging and inspecting the logs, we identified two things that trigger
> the failover handling:
> 1. When the HMaster's AssignmentManager detects any dead servers in the ServerManager during
> joinCluster(), it treats this as a failover without any further check. I added a check of
> whether any region is even assigned to these servers; during a clean restart, no regions are
> assigned to them.
> 2. When there are some leftover empty folders for the log and split directories, or empty
> WAL files, this is also treated as a failover. I added a check for that as well. Although this
> can be resolved by manual cleanup, it is still too tedious when restarting a large cluster.
> Patch will follow shortly. The fix is tested and used in production now.
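
To illustrate the second trigger quoted above (leftover empty log/split directories and empty
WAL files), here is a similar self-contained sketch; it walks a hypothetical local directory
with java.nio.file, whereas the real check in HBase would inspect the WAL and splitting
directories on HDFS:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

// Stand-alone sketch of the second check described above; the path and class
// name are hypothetical, not part of the actual patch.
public class LeftoverWalCheckSketch {

    // Leftover directories only count as failover evidence if they contain
    // at least one non-empty file; empty folders and zero-length WAL files
    // should not force the failover path.
    static boolean hasNonEmptyFile(Path dir) throws IOException {
        if (!Files.isDirectory(dir)) {
            return false;                      // nothing left over at all
        }
        try (Stream<Path> entries = Files.walk(dir)) {
            return entries.anyMatch(p -> {
                try {
                    return Files.isRegularFile(p) && Files.size(p) > 0;
                } catch (IOException e) {
                    return true;               // unreadable entry: stay conservative
                }
            });
        }
    }

    public static void main(String[] args) throws IOException {
        Path walDir = Paths.get("/tmp/hbase-wal-leftovers-example");
        boolean failover = hasNonEmptyFile(walDir);
        System.out.println(failover ? "treat restart as failover"
                                    : "treat restart as clean startup");
    }
}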



