hbase-issues mailing list archives

From "ramkrishna.s.vasudevan (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-6060) Regions's in OPENING state from failed regionservers takes a long time to recover
Date Wed, 13 Jun 2012 05:08:44 GMT

    [ https://issues.apache.org/jira/browse/HBASE-6060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13294154#comment-13294154 ]

ramkrishna.s.vasudevan commented on HBASE-6060:
-----------------------------------------------

bq. Is this a fair characterization?
RegionState is something the master uses to see the current progress of the assignment.
But the RegionPlan is the thing that tells me where the assignment is actually going.
If the master wants to start a new assignment, it is again the RegionPlan that gets changed, so
we thought of using that as an indicator. But I agree that two systems, AM and SSH, take decisions
based on it.

One more thing on RegionState in RIT is that it is reactive: it gets updated based on changes in
the RS. But the RegionPlan is the one decided by the master.
Every time an assignment starts, the RegionState in RIT goes through a set of steps, and maybe that
is why we are not sure what step the RIT is in and who made that change.
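
To make the distinction concrete, here is a minimal sketch of the idea of treating the RegionPlan's
destination as the single indicator that both AM and SSH consult. The RegionPlanTracker class and
its method names are hypothetical, purely for illustration, and not the actual AssignmentManager code:

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.ServerName;

// Hypothetical illustration only, not real AssignmentManager code.
class RegionPlanTracker {
  // Where the master currently intends each region to be opened.
  private final Map<String, ServerName> planDestination =
      new ConcurrentHashMap<String, ServerName>();

  // Called whenever the master creates or replaces a RegionPlan.
  void recordPlan(HRegionInfo region, ServerName destination) {
    planDestination.put(region.getEncodedName(), destination);
  }

  // SSH asks: was this region being sent to the server that just died?
  boolean shouldReassignAfterCrash(HRegionInfo region, ServerName deadServer) {
    ServerName dest = planDestination.get(region.getEncodedName());
    return dest != null && dest.equals(deadServer);
  }
}
{code}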

We also thought of one approach: can we remove the retry logic itself in assign?
Always use SSH to trigger the assignment, since in TRUNK the expiry of a server is now pretty much
immediate. Maybe there are some gaps which we have not yet found.
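
A rough sketch of what that alternative could look like, assuming a single-attempt assign that
simply defers to SSH on failure; SingleShotAssigner, submitOpen() and the log message are
placeholders, not the real assign() code:

{code:java}
import java.io.IOException;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.ServerName;

// Hypothetical sketch only, not the actual AssignmentManager.assign() code.
abstract class SingleShotAssigner {
  private static final Log LOG = LogFactory.getLog(SingleShotAssigner.class);

  // Placeholder for the openRegion RPC to the chosen regionserver.
  protected abstract void submitOpen(HRegionInfo region, ServerName target)
      throws IOException;

  void assignOnce(HRegionInfo region, ServerName target) {
    try {
      submitOpen(region, target);   // single attempt, no retry loop
    } catch (IOException e) {
      // Do nothing here: if server expiry is near-immediate, the
      // ServerShutdownHandler for 'target' will reassign whatever that
      // server owned or was about to open.
      LOG.info("Open of " + region.getEncodedName() + " on " + target
          + " failed; deferring to ServerShutdownHandler", e);
    }
  }
}
{code}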

I should admit one thing: I did not try to change the RegionState in RIT and was looking
to use the RegionPlan only.

bq. RS update the znode to OPENING from OFFLINE before returning from the open rpc call.
Then this step has to be moved out of OpenRegionHandler?
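
If I understand the suggestion, it would look roughly like the sketch below: do the
OFFLINE -> OPENING transition inside the openRegion RPC itself, and only then hand off to the
handler. All class and method names here are placeholders, not the actual HRegionServer/ZKAssign
signatures:

{code:java}
import java.io.IOException;

import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.zookeeper.KeeperException;

// Illustrative sketch only, not the real HRegionServer code.
abstract class EagerOpeningRpc {
  // Placeholder for the ZKAssign OFFLINE -> OPENING transition.
  protected abstract void transitionOfflineToOpening(HRegionInfo region)
      throws KeeperException;

  // Placeholder for queueing the existing OpenRegionHandler.
  protected abstract void submitOpenRegionHandler(HRegionInfo region);

  public void openRegion(HRegionInfo region) throws IOException {
    try {
      // Previously done later, in OpenRegionHandler.process(); doing it
      // here means the master already sees OPENING when the RPC returns.
      transitionOfflineToOpening(region);
    } catch (KeeperException e) {
      throw new IOException("OFFLINE -> OPENING transition failed for "
          + region.getEncodedName(), e);
    }
    submitOpenRegionHandler(region);   // rest of the open proceeds as before
  }
}
{code}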


                
> Regions's in OPENING state from failed regionservers takes a long time to recover
> ---------------------------------------------------------------------------------
>
>                 Key: HBASE-6060
>                 URL: https://issues.apache.org/jira/browse/HBASE-6060
>             Project: HBase
>          Issue Type: Bug
>          Components: master, regionserver
>            Reporter: Enis Soztutar
>            Assignee: rajeshbabu
>             Fix For: 0.96.0, 0.94.1, 0.92.3
>
>         Attachments: 6060-94-v3.patch, 6060-94-v4.patch, 6060-94-v4_1.patch, 6060-94-v4_1.patch,
> 6060-trunk.patch, 6060-trunk.patch, 6060-trunk_2.patch, 6060-trunk_3.patch, 6060_alternative_suggestion.txt,
> 6060_suggestion2_based_off_v3.patch, 6060_suggestion_based_off_v3.patch, 6060_suggestion_toassign_rs_wentdown_beforerequest.patch,
> HBASE-6060-92.patch, HBASE-6060-94.patch, HBASE-6060-trunk_4.patch, HBASE-6060_trunk_5.patch
>
>
> we have seen a pattern in tests: regions are stuck in OPENING state for a very long time when
> the regionserver that is opening the region fails. My understanding of the process:
>  
>  - master calls rs to open the region. If rs is offline, a new plan is generated (a new rs is
> chosen). RegionState is set to PENDING_OPEN (only in master memory; zk still shows OFFLINE).
> See HRegionServer.openRegion(), HMaster.assign()
>  - RegionServer starts opening a region and changes the state in the znode. But that znode
> is not ephemeral. (see ZkAssign)
>  - rs transitions the zk node from OFFLINE to OPENING. See OpenRegionHandler.process()
>  - rs then opens the region, and changes the znode from OPENING to OPENED
>  - when rs is killed between the OPENING and OPENED states, zk shows OPENING, and the master
> just waits for rs to change the region state; but since rs is down, that won't happen.
>  - There is an AssignmentManager.TimeoutMonitor, which guards against exactly these kinds of
> conditions. It periodically checks (every 10 sec by default) the regions in transition to see
> whether they timed out (hbase.master.assignment.timeoutmonitor.timeout). The default timeout
> is 30 min, which explains what you and I are seeing.
>  - ServerShutdownHandler in Master does not reassign regions in OPENING state, although it
> handles other states.
> Lowering that threshold from the configuration is one option, but I still think we can do
> better.
> Will investigate more.
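
For reference, the timeout named above can be lowered through the normal HBase configuration;
a minimal sketch, where the 3-minute value is only an example (in practice this would usually be
set in hbase-site.xml rather than in code):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class TimeoutMonitorTuning {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Default is 30 minutes (1800000 ms); 180000 ms (3 minutes) is only an
    // example value for faster recovery of regions stuck in OPENING.
    conf.setLong("hbase.master.assignment.timeoutmonitor.timeout", 180000L);
    System.out.println("timeoutmonitor.timeout = "
        + conf.getLong("hbase.master.assignment.timeoutmonitor.timeout", 1800000L));
  }
}
{code}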

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
