hbase-issues mailing list archives

From "Jonathan Gray (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-4060) Making region assignment more robust
Date Tue, 05 Jul 2011 22:58:16 GMT

    [ https://issues.apache.org/jira/browse/HBASE-4060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13060183#comment-13060183 ]

Jonathan Gray commented on HBASE-4060:
--------------------------------------

Andrew, we are already doing something like what you describe. It seems the issue is what
Ted describes in #2, but it's not clear to me how this bug is being triggered.

In TimeoutMonitor, we attempt an atomic change of state from OPENING to OFFLINE. If this
fails, we do nothing. If it succeeds, we attempt a reassign.

In OpenRegionHandler (in the RS), we attempt an atomic change of state from OPENING to
OPENED. If this fails, we roll back our open. If it succeeds, the region is open and the
node is at OPENED.

In OpenedRegionHandler (in the master), the first thing we do is delete the node, but only
if it is in the OPENED state. If the TimeoutMonitor had done anything, it would have
switched the state to OFFLINE.
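All three steps hinge on the same compare-and-set pattern against the region's znode. Here
is a minimal sketch of that pattern using the plain ZooKeeper client API; the RegionState
enum, znode layout, and class names are illustrative stand-ins, not HBase's actual
assignment code:

import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class TransitionSketch {
  enum RegionState { OFFLINE, OPENING, OPENED }

  // Attempt an atomic OPENING -> target transition on the region's znode.
  // Returns false if another actor already moved the node (unexpected state
  // or version mismatch), in which case the caller does nothing and backs off.
  static boolean transition(ZooKeeper zk, String znode, RegionState target)
      throws KeeperException, InterruptedException {
    Stat stat = new Stat();
    byte[] data = zk.getData(znode, false, stat);
    if (!RegionState.OPENING.name().equals(new String(data))) {
      return false; // not in the state we expected; leave the node alone
    }
    try {
      // setData with the observed version is a compare-and-set: it fails
      // if anyone else changed the node between our read and this write.
      zk.setData(znode, target.name().getBytes(), stat.getVersion());
      return true;
    } catch (KeeperException.BadVersionException e) {
      return false; // lost the race; the competing transition won
    }
  }
}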


What am I missing?

> Making region assignment more robust
> ------------------------------------
>
>                 Key: HBASE-4060
>                 URL: https://issues.apache.org/jira/browse/HBASE-4060
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Ted Yu
>             Fix For: 0.92.0
>
>
> From Eran Kutner:
> My concern is that the region allocation process seems to rely too much on
> timing considerations and doesn't seem to take enough measures to guarantee
> conflicts do not occur. I understand that in a distributed environment, when
> you don't get a timely response from a remote machine, you can't know for
> sure whether it did or did not receive the request; however, there are things
> that can be done to mitigate this and reduce the conflict window
> significantly. For example, when I run hbck it knows that some regions are
> multiply assigned; the master could do the same and try to resolve the
> conflict. Another approach would be to handle late responses: even if the
> response from the remote machine arrives after it was assumed to be dead, the
> master should have enough information to know it has created a conflict by
> assigning the region to another server. An even better solution, I think, is
> for the RS to periodically test that it is indeed the rightful owner of every
> region it holds and to relinquish control over the region if it is not (a
> sketch of such a check follows this quote).
> Obviously, a state where two RSs hold the same region is pathological and can
> lead to data loss, as demonstrated in my case. The system should be able to
> actively protect itself against such a scenario. It probably doesn't need
> saying, but there is really nothing worse for a data storage system than data
> loss.
> In my case the problem didn't happen in the initial phase but after
> disabling and enabling a table with about 12K regions.
> For more background information, see the 'Errors after major compaction' discussion on user@hbase.apache.org
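
The periodic ownership self-check Eran proposes could look roughly like the sketch below.
This is a hedged illustration only: it assumes the authoritative assignment is readable
from ZooKeeper as the owning server's name, and OwnershipChecker, the znode layout, and
the per-region close actions are hypothetical, not HBase APIs.

import java.util.Map;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.apache.zookeeper.ZooKeeper;

class OwnershipChecker {
  private final ZooKeeper zk;
  private final String serverName;                 // this RS's identity
  private final Map<String, Runnable> openRegions; // region znode -> close action

  OwnershipChecker(ZooKeeper zk, String serverName, Map<String, Runnable> openRegions) {
    this.zk = zk;
    this.serverName = serverName;
    this.openRegions = openRegions;
  }

  void start() {
    ScheduledExecutorService pool = Executors.newSingleThreadScheduledExecutor();
    pool.scheduleAtFixedRate(this::checkAll, 60, 60, TimeUnit.SECONDS);
  }

  private void checkAll() {
    for (Map.Entry<String, Runnable> e : openRegions.entrySet()) {
      try {
        // Re-read the authoritative owner for this region from ZooKeeper.
        byte[] owner = zk.getData(e.getKey(), false, null);
        if (!serverName.equals(new String(owner))) {
          e.getValue().run(); // relinquish: close the region locally
        }
      } catch (Exception ex) {
        // On any doubt, keep serving and retry on the next round.
      }
    }
  }
}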

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
