hbase-issues mailing list archives

From "ramkrishna.s.vasudevan (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-6060) Regions's in OPENING state from failed regionservers takes a long time to recover
Date Fri, 08 Jun 2012 06:08:23 GMT

[ https://issues.apache.org/jira/browse/HBASE-6060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13291562#comment-13291562 ]

ramkrishna.s.vasudevan commented on HBASE-6060:
-----------------------------------------------

bq.a Map of dead servers to their current set of region plans and regions-in-transition.
We are only maintaining the regions and the RITs, not the region plans (see the sketch below).
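For illustration, a minimal sketch of the shape of that bookkeeping, with String stand-ins for ServerName/HRegionInfo; this is an assumed simplification, not the exact structure in the patch:
{code}
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch only (assumed shape): per dead server we track the regions whose
// region plan targeted it, together with its regions-in-transition; the
// region plans themselves are not stored.
public class DeadServerBookkeepingSketch {
  // dead server -> regions whose plan had that server as destination
  private final Map<String, Set<String>> regionsFromRegionPlansForServer =
      new HashMap<String, Set<String>>();

  void remember(String deadServer, String region) {
    Set<String> regions = regionsFromRegionPlansForServer.get(deadServer);
    if (regions == null) {
      regions = new HashSet<String>();
      regionsFromRegionPlansForServer.put(deadServer, regions);
    }
    regions.add(region);
  }
}
{code}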
bq.Does this sound right?
Yes, exactly. All of this decision-making is based on the current regionPlan.
bq.Is this patch even handling the root cause of this JIRA: i.e. dealing w/ OPENING znodes that were made against the dead server?
Yes.  
bq.In AM#processServerShutdown, we iterate the list of regions we get back from AM#this.servers and from this set we will remove RIT. But won't this original set of regions be missing regions that are OPENING or even OPENED but not yet handled because its only after the OPENED region has updated its znode and the znode has been removed by the OPEN handler do we add a region to AM#this.servers? Or is the OPENING region handled elsewhere?
To answer the first part of this question: suppose region R1 was initially on RS A and is now
being moved to RS B. If RS A goes down, the existing SSH flow for RS A will get its regions
from AM#this.servers and remove them from this.regions. Because the region is already in RIT,
SSH will not assign it, so the assignment to RS B proceeds smoothly; a sketch of this skip
logic follows.
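A simplified sketch of that existing behavior, again with String stand-ins rather than the real AssignmentManager types:
{code}
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Simplified sketch of the existing SSH flow for a dead *source* server:
// drop its regions from the bookkeeping, but skip re-assignment of any
// region already in transition, since it is mid-open on another server.
public class SourceServerShutdownSketch {
  Map<String, Set<String>> serversToRegions = new HashMap<String, Set<String>>();
  Set<String> regionsInTransition = new HashSet<String>();

  void processServerShutdown(String deadServer) {
    Set<String> regions = serversToRegions.remove(deadServer);
    if (regions == null) {
      return;
    }
    for (String region : regions) {
      if (regionsInTransition.contains(region)) {
        continue; // already moving to RS B; let that assignment complete
      }
      assign(region); // normal SSH re-assignment of a dead-server region
    }
  }

  void assign(String region) {
    // pick a live server and assign; omitted in this sketch
  }
}
{code}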

Now take the case where RS B goes down. In the existing code we never assigned those regions
anywhere; they were left to the TimeoutMonitor to assign. That is where this patch comes in:
it tries to assign them based on the regionPlan. When SSH sees that the dead server is the
destination of a regionPlan, it adds the region to the new data structure and starts the
assignment from there (see the sketch below). Note that this region will not be in
AM#this.servers, since RS B never became its owner.
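A simplified sketch of that collection step, with String stand-ins and an assumed map of plans rather than the real RegionPlan API:
{code}
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Simplified sketch of the step the patch adds for a dead *destination*
// server: walk the outstanding region plans and collect every region whose
// plan points at the dead server, so SSH can re-assign it instead of
// waiting for the TimeoutMonitor.
public class DestinationServerShutdownSketch {
  // region -> planned destination server
  Map<String, String> regionPlans = new HashMap<String, String>();

  Set<String> regionsPlannedForDeadServer(String deadServer) {
    Set<String> result = new HashSet<String>();
    for (Map.Entry<String, String> plan : regionPlans.entrySet()) {
      if (deadServer.equals(plan.getValue())) {
        result.add(plan.getKey()); // destination died: SSH must re-assign this
      }
    }
    return result;
  }
}
{code}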

To answer the second question, about OPENED regions: again this should not be a problem. There
are 2 cases in this:
1> OpenedRegionHandler has completed. -> This means this.servers is already updated and the
region plan has also been cleared. When SSH starts, it will see the region in AM#this.servers
and nothing in RIT, so SSH will scan meta, find this region, and go assign it to a new RS.
(The patch is not needed here.)
2> OpenedRegionHandler has not yet completed. -> Here OpenedRegionHandler may not yet have
done its work, so the regionPlan is still available but this.servers and this.regions are not
yet updated. Now, as per the patch, we check:
{code}
  if (rit != null && !rit.isClosing() && !rit.isPendingClose() && !rit.isSplitting()
+       && !regionsFromRegionPlansForServer.contains(rit.getRegion())) {
{code}
The new data structure contains that region, hence we do not skip it and we go ahead with the
assignment. The region was added there because its regionPlan has the dead server as the
destination.
bq.we can do to better encapsulate this new state management? 
Yes, we will try to do that.

bq.Can't we just keep list of dead servers and any attempt at getting a region plan related to the dead server or server being processed as dead is put off because SSH will get to it?
We are already doing this, but we cannot guarantee at what point the RS went down: after the
regionPlan was formed or before it. If it went down before the regionPlan was formed, we
already exclude the dead servers (see the sketch below).
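A simplified sketch of that pre-existing safeguard, with String stand-ins rather than the real LoadBalancer/AssignmentManager code:
{code}
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.Set;

// Simplified sketch: servers already known to be dead are excluded when a
// region plan is created. The gap this patch closes is a destination server
// that dies *after* the plan has been formed.
public class PlanCreationSketch {
  private final Random random = new Random();

  String choosePlanDestination(List<String> allServers, Set<String> deadServers) {
    List<String> candidates = new ArrayList<String>(allServers);
    candidates.removeAll(deadServers); // a dead server never becomes a destination
    if (candidates.isEmpty()) {
      return null; // nothing to assign to right now
    }
    return candidates.get(random.nextInt(candidates.size()));
  }
}
{code}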

Please correct me if I am wrong, and let me know if you have any doubts.
                
> Regions's in OPENING state from failed regionservers takes a long time to recover
> ---------------------------------------------------------------------------------
>
>                 Key: HBASE-6060
>                 URL: https://issues.apache.org/jira/browse/HBASE-6060
>             Project: HBase
>          Issue Type: Bug
>          Components: master, regionserver
>            Reporter: Enis Soztutar
>            Assignee: rajeshbabu
>             Fix For: 0.96.0, 0.94.1, 0.92.3
>
>         Attachments: 6060-94-v3.patch, 6060-94-v4.patch, 6060-94-v4_1.patch, 6060-94-v4_1.patch, 6060-trunk.patch, 6060-trunk.patch, 6060-trunk_2.patch, 6060-trunk_3.patch, 6060_suggestion_based_off_v3.patch, HBASE-6060-92.patch, HBASE-6060-94.patch
>
>
> We have seen a pattern in tests where regions are stuck in the OPENING state for a very long time when the regionserver that is opening the region fails. My understanding of the process:
>  
>  - Master calls the RS to open the region. If the RS is offline, a new plan is generated (a new RS is chosen). RegionState is set to PENDING_OPEN (only in master memory; ZK still shows OFFLINE). See HRegionServer.openRegion(), HMaster.assign().
>  - The RS starts opening the region and changes the state in the znode. But that znode is not ephemeral. (See ZKAssign.)
>  - The RS transitions the znode from OFFLINE to OPENING. See OpenRegionHandler.process().
>  - The RS then opens the region and changes the znode from OPENING to OPENED.
>  - When the RS is killed between the OPENING and OPENED states, ZK shows the OPENING state and the master just waits for the RS to change the region state; but since the RS is down, that won't happen.
>  - There is an AssignmentManager.TimeoutMonitor, which guards against exactly these kinds of conditions. It periodically checks (every 10 sec by default) the regions in transition to see whether they have timed out (hbase.master.assignment.timeoutmonitor.timeout). The default timeout is 30 min, which explains what you and I are seeing.
>  - ServerShutdownHandler in the Master does not reassign regions in the OPENING state, although it handles other states.
> Lowering that threshold in the configuration is one option, but I still think we can do better.
> Will investigate more.
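For reference, the znode state flow and timeout check described above, as a minimal sketch in plain Java; this is not ZKAssign or AssignmentManager.TimeoutMonitor itself:
{code}
// Simplified sketch of the znode state flow and the periodic timeout check.
public class OpeningTimeoutSketch {
  enum ZnodeState { OFFLINE, OPENING, OPENED }

  // hbase.master.assignment.timeoutmonitor.timeout: 30 min by default
  static final long TIMEOUT_MS = 30L * 60 * 1000;

  // Invoked periodically (every 10 sec by default). A region stuck in
  // OPENING past the timeout must be re-assigned by the master, because the
  // znode is not ephemeral and a dead RS will never move it to OPENED.
  static boolean shouldForceReassign(ZnodeState state, long inStateSinceMs, long nowMs) {
    return state == ZnodeState.OPENING && (nowMs - inStateSinceMs) > TIMEOUT_MS;
  }
}
{code}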

