hbase-issues mailing list archives

From "stack (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-6060) Regions in OPENING state from failed regionservers take a long time to recover
Date Sun, 10 Jun 2012 06:19:43 GMT

    [ https://issues.apache.org/jira/browse/HBASE-6060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13292475#comment-13292475 ]

stack commented on HBASE-6060:
------------------------------

bq. But the gain with bulk assignment is that it's faster, which we wanted in HBASE-5914.

Yes.  We are keeping bulk assigns; it's just that the suggested patch favors the single-assign,
letting it do retries, whereas the original patch wants to do everything via bulk assign, canceling
any ongoing single-assigns.

bq. Only after the RS changes it to OPENING does the znode of the region belong to the RS.

Yes, this is the case for znode states.  The problem is the PENDING_OPEN state, an in-memory
state kept in RegionState.  It exists from the sending of the open RPC to the regionserver
until the OPENING callback comes into the master saying the regionserver has moved the znode
from OFFLINE to OPENING.  This is the messy state.
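
To make that window concrete, here is a minimal, hypothetical sketch; the class, enum, and method
names are illustrative only, not the real AssignmentManager/RegionState API. It shows how an
in-memory PENDING_OPEN marker can outlive a dead regionserver when nothing in ZK points back at it:

{code:java}
// Hypothetical sketch of the PENDING_OPEN window discussed above. The names
// (AssignSketch, RegionServerStub, etc.) are illustrative, not HBase APIs.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class AssignSketch {

  enum State { OFFLINE, PENDING_OPEN, OPENING, OPENED }

  // Master-side, in-memory only: ZK still shows OFFLINE while this says PENDING_OPEN.
  private final Map<String, State> regionStates = new ConcurrentHashMap<>();

  void assign(String region, RegionServerStub rs) {
    // 1. Mark the region PENDING_OPEN in master memory *before* the RPC goes out.
    regionStates.put(region, State.PENDING_OPEN);

    // 2. Send the open RPC. If the regionserver dies right here, it never moved
    //    the znode to OPENING, so nothing in ZK records that this assign is stuck.
    rs.openRegion(region);

    // 3. The state only leaves PENDING_OPEN when the OPENING callback arrives,
    //    i.e. the regionserver transitioned the znode OFFLINE -> OPENING.
  }

  void onOpeningCallback(String region) {
    regionStates.put(region, State.OPENING);
  }

  interface RegionServerStub {
    void openRegion(String region);
  }

  public static void main(String[] args) {
    AssignSketch master = new AssignSketch();
    // Simulate a regionserver that accepts the open RPC and then crashes.
    master.assign("region-1", r -> { /* RS dies before touching the znode */ });
    // The region now sits in PENDING_OPEN until a timeout monitor or the
    // server-shutdown handling notices and re-assigns it.
    System.out.println("region-1 state: " + master.regionStates.get("region-1"));
  }
}
{code}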

bq.  In fact, this behaviour of using the owner of the node is the logic behind how we do processRIT
on a master restart.

The context is different, I think; there is no ongoing retry of single-assign here.  It's a
cleaner case.

bq. But what if the RS goes down before sending the RPC call to the RS? That is where the retry
logic comes in, which I feel is really needed.

Do you mean the master sending the RPC?
                
> Regions in OPENING state from failed regionservers take a long time to recover
> ---------------------------------------------------------------------------------
>
>                 Key: HBASE-6060
>                 URL: https://issues.apache.org/jira/browse/HBASE-6060
>             Project: HBase
>          Issue Type: Bug
>          Components: master, regionserver
>            Reporter: Enis Soztutar
>            Assignee: rajeshbabu
>             Fix For: 0.96.0, 0.94.1, 0.92.3
>
>         Attachments: 6060-94-v3.patch, 6060-94-v4.patch, 6060-94-v4_1.patch, 6060-94-v4_1.patch, 6060-trunk.patch, 6060-trunk.patch, 6060-trunk_2.patch, 6060-trunk_3.patch, 6060_alternative_suggestion.txt, 6060_suggestion2_based_off_v3.patch, 6060_suggestion_based_off_v3.patch, 6060_suggestion_toassign_rs_wentdown_beforerequest.patch, HBASE-6060-92.patch, HBASE-6060-94.patch
>
>
> We have seen a pattern in tests: regions are stuck in OPENING state for a very long time when the regionserver that is opening the region fails. My understanding of the process:
>  
>  - Master calls the RS to open the region. If the RS is offline, a new plan is generated (a new RS is chosen). RegionState is set to PENDING_OPEN (only in master memory; zk still shows OFFLINE). See HRegionServer.openRegion(), HMaster.assign()
>  - RegionServer starts opening the region and changes the state in the znode. But that znode is not ephemeral. (see ZkAssign)
>  - RS transitions the zk node from OFFLINE to OPENING. See OpenRegionHandler.process()
>  - RS then opens the region and changes the znode from OPENING to OPENED
>  - When the RS is killed between the OPENING and OPENED states, zk shows the OPENING state, and the master just waits for the RS to change the region state; but since the RS is down, that won't happen.
>  - There is an AssignmentManager.TimeoutMonitor, which guards against exactly these kinds of conditions. It periodically checks (every 10 sec by default) the regions in transition to see whether they timed out (hbase.master.assignment.timeoutmonitor.timeout). The default timeout is 30 min, which explains what you and I are seeing.
>  - ServerShutdownHandler in Master does not reassign regions in OPENING state, although it handles other states.
> Lowering that threshold in the configuration is one option (see the sketch after this description), but I still think we can do better.
> Will investigate more.
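
The quoted description above mentions lowering the timeout threshold as a stopgap. Below is a
minimal sketch of doing that through the client Configuration API; the key is the one named in
the description, and interpreting the value as milliseconds is an assumption based on the
30-minute default mentioned above:

{code:java}
// Sketch: lowering the assignment timeout monitor threshold. The key is the one
// named in the description; reading the value as milliseconds (30 min = 1800000 ms
// by default) is an assumption.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class TimeoutTuning {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Drop the 30-minute default to 3 minutes so regions stuck in OPENING are
    // picked up by the TimeoutMonitor much sooner.
    conf.setInt("hbase.master.assignment.timeoutmonitor.timeout", 180000);
    System.out.println("timeout = "
        + conf.getInt("hbase.master.assignment.timeoutmonitor.timeout", 1800000) + " ms");
  }
}
{code}

The same key could equally be set in hbase-site.xml; as the description notes, this only shortens
the wait rather than fixing the underlying OPENING-state handling.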


        
