hbase-issues mailing list archives

From "stack (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HBASE-14012) Double Assignment and Dataloss when ServerCrashProcedure runs during Master failover
Date Sun, 05 Jul 2015 18:25:05 GMT

     [ https://issues.apache.org/jira/browse/HBASE-14012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack updated HBASE-14012:
--------------------------
    Attachment: 14012v2.txt

Test failure was legit. The failure was during the DLR run of TestServerCrashProcedure. It was
an instance of what this patch is trying to fix, that is:

+ The ServerCrashProcedure is set up to use the nice ProcedureV2 test rig that crashes the
executor after each step and replays the last step.
+ In this case, we were rerunning the new compounded get-regions-to-assign and assign steps.
+ The test infra crashed us after doing the above step, so we were running through it for the
second time.
+ Second time through, there were no regions to assign because we'd already assigned them all.
+ When there were no regions to assign, I had a short-circuit in place so we just cut to the
finish because we were 'done'.
+ But then we missed the cleanup of the DLR ZK markings of regions in recovery (see the sketch
below).
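
For illustration, the shape of the fix is roughly the sketch below; the helper names are
placeholders, not the actual patch:

{code}
// Sketch only; helper names are placeholders, not the real ServerCrashProcedure code.
private void assignRegionsAfterCrash(final MasterProcedureEnv env) throws IOException {
  List<HRegionInfo> toAssign = getRegionsOnCrashedServer(env);
  if (toAssign != null && !toAssign.isEmpty()) {
    assignRegions(env, toAssign);
  }
  // This cleanup has to run even when the list comes back empty (e.g. on a ProcedureV2
  // replay after everything was already assigned), otherwise the DLR markings of
  // regions-in-recovery are left behind in ZK.
  if (isDistributedLogReplay(env)) {
    removeRecoveringRegionsFromZK(env);
  }
}
{code}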

I don't think I need to add a test for this patch; the current TestServerCrashProcedure already
exercises the case we set out to fix.

Let me do a few runs on cluster to be sure I have not broken anything else.

> Double Assignment and Dataloss when ServerCrashProcedure runs during Master failover
> ------------------------------------------------------------------------------------
>
>                 Key: HBASE-14012
>                 URL: https://issues.apache.org/jira/browse/HBASE-14012
>             Project: HBase
>          Issue Type: Bug
>          Components: master, Region Assignment
>    Affects Versions: 2.0.0, 1.2.0
>            Reporter: stack
>            Assignee: stack
>            Priority: Critical
>             Fix For: 2.0.0, 1.2.0, 1.3.0
>
>         Attachments: 14012.txt, 14012v2.txt
>
>
> (Rewrite to be more explicit about what the problem is)
> ITBLL. Master comes up (it is being killed every 1-5 minutes or so). It is joining a
> running cluster (all servers up except the Master, with most regions assigned out on the
> cluster). The ProcedureStore has two unfinished ServerCrashProcedures (RUNNABLE state) for
> two separate servers. One SCP is in the middle of the assign step (SERVER_CRASH_ASSIGN)
> when the master crashes. This SCP step has this comment on it:
> {code}
>         // Assign may not be idempotent. SSH used to requeue the SSH if we got an IOE assigning
>         // which is what we are mimicing here but it looks prone to double assignment if assign
>         // fails midway. TODO: Test.
> {code}
> This issue is 1.2+ only since it is in ServerCrashProcedure (added in HBASE-13616, post
> hbase-1.1.x).
> Looking at ServerShutdownHandler, which is how we used to do crash processing before we
> moved over to the Pv2 framework, SSH may have (accidentally) avoided this issue since it
> does its processing in one big blob, starting over if killed mid-crash. In particular,
> post-crash, SSH scans hbase:meta to find the regions that were on the downed server. SCP
> instead scans meta in one step, saves off the regions it finds into the ProcedureStore,
> and then in the next step does the actual assign. In this case, we crashed post-meta-scan
> and during assign. Assign is a bulk assign. It mostly succeeded but got this:
> {code}
>  809622 2015-06-09 20:05:28,576 INFO  [ProcedureExecutorThread-9] master.GeneralBulkAssigner: Failed assigning 3 regions to server c2021.halxg.cloudera.com,16020,1433905510696, reassigning them
> {code}
> So, most regions actually made it to new locations except for a few stragglers. All of
> the successfully assigned regions are then reassigned on the other side of the master
> restart when we replay the SCP assign step.
> Let me put the scan-meta and assign steps together in SCP; this should do until we redo
> all of assign to run on Pv2.
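> A rough sketch of what the combined step could look like; the helper names here are
> placeholders, not the actual patch:
> {code}
> // Sketch only; helper names are placeholders. Doing the meta scan and the assign in the
> // same step means a replay recomputes the region list from current meta state, so regions
> // that already opened on a new server are not assigned a second time.
> private void getRegionsAndAssign(final MasterProcedureEnv env) throws IOException {
>   List<HRegionInfo> toAssign = getRegionsStillOnCrashedServer(env, this.serverName);
>   if (toAssign != null && !toAssign.isEmpty()) {
>     assignRegions(env, toAssign);
>   }
> }
> {code}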
> A few other things I noticed:
> In SCP, we only check whether failover cleanup is done in the first step, not in every
> step, which means a ServerCrashProcedure will run if, on reload, it is beyond the first
> step.
> {code}
>     // Is master fully online? If not, yield. No processing of servers unless master is up
>     if (!services.getAssignmentManager().isFailoverCleanupDone()) {
>       throwProcedureYieldException("Waiting on master failover to complete");
>     }
> {code}
> This means we are assigning while the Master is still coming up, a no-no (though it does
> not seem to have caused a problem here). Fix.
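> A sketch of that fix, assuming the existing check is simply hoisted into a helper that
> every state calls before doing any work:
> {code}
> // Sketch only. Call this at the top of every state's handling, not just the first step,
> // so a procedure reloaded beyond step one still waits on master failover cleanup.
> private void checkMasterIsUp(final MasterServices services) throws ProcedureYieldException {
>   if (!services.getAssignmentManager().isFailoverCleanupDone()) {
>     throwProcedureYieldException("Waiting on master failover to complete");
>   }
> }
> {code}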
> Also, I see that over the 8 hours of this particular log, each time the master crashes
> and comes back up, we queue a ServerCrashProcedure for c2022 because an empty dir never
> gets cleaned up:
> {code}
>  39 2015-06-09 22:15:33,074 WARN  [ProcedureExecutorThread-0] master.SplitLogManager: returning success without actually splitting and deleting all the log files in path hdfs://c2020.halxg.cloudera.com:8020/hbase/WALs/c2022.halxg.cloudera.com,16020,1433902151857-splitting
> {code}
> Fix this too.
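> A sketch of the kind of cleanup intended here; the helper and its arguments are an
> assumption, not the actual patch:
> {code}
> // Sketch only. If the <server>-splitting dir under /hbase/WALs has no log files left,
> // delete it so a restarted master does not keep queueing a ServerCrashProcedure for a
> // server whose logs were already split.
> private static void deleteEmptySplittingDir(final FileSystem fs, final Path splittingDir)
>     throws IOException {
>   FileStatus[] remaining = fs.listStatus(splittingDir);
>   if (remaining == null || remaining.length == 0) {
>     fs.delete(splittingDir, true);
>   }
> }
> {code}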



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
