hbase-issues mailing list archives

From "ramkrishna.s.vasudevan (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-6060) Regions's in OPENING state from failed regionservers takes a long time to recover
Date Fri, 08 Jun 2012 12:58:23 GMT

    [ https://issues.apache.org/jira/browse/HBASE-6060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13291755#comment-13291755 ]

ramkrishna.s.vasudevan commented on HBASE-6060:

Just to reiterate, this JIRA is a backport of HBASE-5396.  The current patch that you gave
is similar to HBASE-5396, and it will solve the problem of assigning a region that had started
opening on an RS when that RS went down.
But consider what happens if there is no shared state between AM and SSH:
RegionPlan regionPlan = getRegionPlan(regionState, sn, true);
In the first assign call we form a region plan.  Say we got the plan and issued
sendRegionOpen(), but by that time the RS had gone down.  Now, as per your patch (HBASE-5396),
the SSH will trigger and start a new assignment.  At the same time, the first assign fails
and enters its retry path:
        LOG.warn("Failed assignment of " +
          state.getRegion().getRegionNameAsString() + " to " +
          plan.getDestination() + ", trying to assign elsewhere instead; " +
          "retry=" + i, t);
        // Clean out plan we failed execute and one that doesn't look like it'll
        // succeed anyways; we need a new plan!
        // Transition back to OFFLINE
        // Force a new plan and reassign.  Will return null if no servers.
                RegionPlan newPlan = getRegionPlan(state, plan.getDestination(), true);
                if (isNullPlan(newPlan, state, plan.getDestination())) {
                  // Whether no servers to assign too or region assignment is being handled
                  // skip out of this retry loop.
Excluding the dead server, the retry logic will make one more assign call, and that is where
there is a chance of hitting the master abort problem.  In the current patch that you gave,
getRegionPlan will create exactly this problem.

This is where we need a common structure, and also a new state on the region plan, to say
whether another assign call should be made through the AM or not.
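
To make the idea concrete, here is a minimal sketch of such a shared structure.  All the names
(RegionPlanRegistry, PlanState and the methods) are illustrative assumptions, not the actual
patch; it only shows the kind of state AM and SSH would need to share:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical registry shared between AssignmentManager and
// ServerShutdownHandler; all names here are illustrative only.
public class RegionPlanRegistry {

  // Extra state on a region plan telling the AM retry loop
  // whether it should issue another assign call.
  public enum PlanState {
    PENDING,        // AM issued the plan; open not yet confirmed
    HANDLED_BY_SSH  // SSH took over after the RS died; AM must not retry
  }

  private final ConcurrentMap<String, PlanState> outstandingRegionPlans =
      new ConcurrentHashMap<String, PlanState>();

  // Called by AM when it forms a plan in assign().
  public void planIssued(String encodedRegionName) {
    outstandingRegionPlans.put(encodedRegionName, PlanState.PENDING);
  }

  // Called by SSH when it starts reassigning regions of a dead RS.
  public void takenOverBySsh(String encodedRegionName) {
    outstandingRegionPlans.put(encodedRegionName, PlanState.HANDLED_BY_SSH);
  }

  // Checked inside the AM retry loop before calling getRegionPlan() again.
  public boolean shouldRetryAssign(String encodedRegionName) {
    return outstandingRegionPlans.get(encodedRegionName) != PlanState.HANDLED_BY_SSH;
  }

  // Cleared once the region reaches OPENED (or the plan is abandoned).
  public void planCompleted(String encodedRegionName) {
    outstandingRegionPlans.remove(encodedRegionName);
  }
}

If the retry loop quoted above consulted shouldRetryAssign() before forming a new plan, the
racing assign would be skipped instead of reaching the abort below: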

    if (!hijack && !state.isClosed() && !state.isOffline()) {
      String msg = "Unexpected state : " + state + " .. Cannot transit it to OFFLINE.";
      this.master.abort(msg, new IllegalStateException(msg));
      return -1;
The master abort problem itself was reported in HBASE-5816 (because HBASE-5396 is committed
to 0.90).  What we are doing now is solving this as a whole, so that we don't end up with any
double assignment or any master abort.

One more problem in the patch: 'outstandingRegionPlans' should be cleared here
} else if (addressFromAM != null && !addressFromAM.equals(this.serverName)) {
+              LOG.debug("Skip assigning region " + e.getKey().getRegionNameAsString()
+                  + " because it has been opened in " + addressFromAM.getServerName());
+            }
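Something along these lines (illustrative only; 'outstandingRegionPlans' is from your patch,
and the exact removal call is my assumption):
} else if (addressFromAM != null && !addressFromAM.equals(this.serverName)) {
+              LOG.debug("Skip assigning region " + e.getKey().getRegionNameAsString()
+                  + " because it has been opened in " + addressFromAM.getServerName());
+              // Drop the stale plan so a later assign for this region is
+              // neither blocked nor misdirected by the old entry.
+              outstandingRegionPlans.remove(e.getKey().getEncodedName());
+            }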
Stack, I feel that sharing something in common is necessary to avoid this problem.
Just to add: in HBASE-6147 I have given one patch which, when combined with the patch here,
will solve many other issues as well.

Do let me know if you still have any doubts on this.

> Regions's in OPENING state from failed regionservers takes a long time to recover
> ---------------------------------------------------------------------------------
>                 Key: HBASE-6060
>                 URL: https://issues.apache.org/jira/browse/HBASE-6060
>             Project: HBase
>          Issue Type: Bug
>          Components: master, regionserver
>            Reporter: Enis Soztutar
>            Assignee: rajeshbabu
>             Fix For: 0.96.0, 0.94.1, 0.92.3
>         Attachments: 6060-94-v3.patch, 6060-94-v4.patch, 6060-94-v4_1.patch, 6060-94-v4_1.patch,
6060-trunk.patch, 6060-trunk.patch, 6060-trunk_2.patch, 6060-trunk_3.patch, 6060_suggestion2_based_off_v3.patch,
6060_suggestion_based_off_v3.patch, HBASE-6060-92.patch, HBASE-6060-94.patch
> we have seen a pattern in tests: regions are stuck in OPENING state for a very long time
when the region server that is opening the region fails. My understanding of the process:
>  - master calls rs to open the region. If rs is offline, a new plan is generated (a new
rs is chosen). RegionState is set to PENDING_OPEN (only in master memory, zk still shows OFFLINE).
See HRegionServer.openRegion(), HMaster.assign()
>  - RegionServer, starts opening a region, changes the state in znode. But that znode
is not ephemeral. (see ZkAssign)
>  - Rs transitions zk node from OFFLINE to OPENING. See OpenRegionHandler.process()
>  - rs then opens the region, and changes znode from OPENING to OPENED
>  - when rs is killed between OPENING and OPENED states, then zk shows OPENING state,
and the master just waits for rs to change the region state, but since rs is down, that won't
happen.
>  - There is an AssignmentManager.TimeoutMonitor, which guards against exactly these
kinds of conditions. It periodically checks (every 10 sec by default) the regions in transition
to see whether they timed out (hbase.master.assignment.timeoutmonitor.timeout). The default
timeout is 30 min, which explains what you and I are seeing.
>  - ServerShutdownHandler in Master does not reassign regions in OPENING state, although
it handles other states. 
> Lowering that threshold in the configuration is one option, but I still think we can
do better.
> Will investigate more. 
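
For reference, lowering the threshold the description mentions is a plain hbase-site.xml
override.  The property name comes from the description above; the 3-minute value is only an
illustrative example, not a recommendation:

<!-- hbase-site.xml -->
<property>
  <name>hbase.master.assignment.timeoutmonitor.timeout</name>
  <!-- Default is 30 min (1800000 ms); 180000 ms = 3 min, for illustration. -->
  <value>180000</value>
</property>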


