hbase-issues mailing list archives

From "Zhihong Yu (Updated) (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HBASE-5081) Distributed log splitting deleteNode races against splitLog retry
Date Wed, 21 Dec 2011 21:43:31 GMT

     [ https://issues.apache.org/jira/browse/HBASE-5081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Zhihong Yu updated HBASE-5081:
------------------------------

    Comment: was deleted

(was: -1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12508264/patch_for_92_v2.txt
  against trunk revision .

    +1 @author.  The patch does not contain any @author tags.

    -1 tests included.  The patch doesn't appear to include any new or modified tests.
                        Please justify why no new tests are needed for this patch.
                        Also please list what manual steps were performed to verify this patch.

    -1 javadoc.  The javadoc tool appears to have generated -152 warning messages.

    +1 javac.  The applied patch does not increase the total number of javac compiler warnings.

    -1 findbugs.  The patch appears to introduce 76 new Findbugs (version 1.3.9) warnings.

    +1 release audit.  The applied patch does not increase the total number of release audit warnings.

    -1 core tests.  The patch failed these unit tests:
                      org.apache.hadoop.hbase.replication.TestReplication
                      org.apache.hadoop.hbase.replication.TestMultiSlaveReplication
                      org.apache.hadoop.hbase.replication.TestMasterReplication
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/570//testReport/
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/570//artifact/trunk/patchprocess/newPatchFindbugsWarnings.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/570//console

This message is automatically generated.)
    
> Distributed log splitting deleteNode races against splitLog retry
> -------------------------------------------------------------------
>
>                 Key: HBASE-5081
>                 URL: https://issues.apache.org/jira/browse/HBASE-5081
>             Project: HBase
>          Issue Type: Bug
>          Components: wal
>    Affects Versions: 0.92.0, 0.94.0
>            Reporter: Jimmy Xiang
>            Assignee: Jimmy Xiang
>         Attachments: distributed-log-splitting-screenshot.png, hbase-5081-patch-v6.txt, hbase-5081_patch_for_92_v4.txt, hbase-5081_patch_v5.txt, patch_for_92.txt, patch_for_92_v2.txt, patch_for_92_v3.txt
>
>
> Recently, during 0.92 RC testing, we found that distributed log splitting hangs forever. Please see the attached screenshot.
> I looked into it, and here is what I think happened:
> 1. One regionserver died; the ServerShutdownHandler detected it and started distributed log splitting;
> 2. All three tasks failed, so the three tasks were deleted asynchronously;
> 3. The ServerShutdownHandler retried the log splitting;
> 4. During the retry, it created the three tasks again and put them in a hashmap (tasks);
> 5. The asynchronous deletion from step 2 finally happened for one task; in the callback, it removed that task from the hashmap;
> 6. The ZooKeeper watcher for one of the newly submitted tasks found that the task was unassigned and not in the hashmap, so it created a new orphan task;
> 7. All three tasks failed again, but the task created in step 6 was an orphan, so the batch.err counter came up one short; log splitting therefore hangs forever, waiting for a last task that will never finish (a minimal sketch of this race follows the list).
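
A minimal, hypothetical Java sketch of the race described in steps 2-6; the class, field, and method names are illustrative assumptions, not the actual SplitLogManager code:

    // Hypothetical sketch of the race between the async delete callback
    // (step 2) and the retry (steps 3-4); all names are illustrative.
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    class SplitLogRaceSketch {
        // Shared map of in-flight split tasks, keyed by znode path.
        private final Map<String, Object> tasks = new ConcurrentHashMap<>();
        private final ExecutorService zkCallbacks = Executors.newSingleThreadExecutor();

        // Step 2: deletion is issued asynchronously; the callback can fire
        // arbitrarily late.
        void deleteNodeAsync(String path) {
            zkCallbacks.submit(() -> tasks.remove(path));
        }

        // Steps 3-4: the retry re-creates the task and re-inserts it. If the
        // stale callback from step 2 runs after this put, it removes the NEW
        // entry (step 5); the ZooKeeper watcher then sees an unassigned task
        // with no hashmap entry and creates an orphan (step 6).
        void retrySplit(String path) {
            tasks.put(path, new Object());
        }
    }
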
> So I think the problem is step 2. The fix is to make the deletion synchronous instead of asynchronous, so that the retry has a clean start.
> Async deleteNode messes up the split log retry. In an extreme case, if the async deleteNode does not complete soon enough, a node created during the retry could be deleted. deleteNode should be synchronous (see the sketch below).
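
Under the same illustrative assumptions, a minimal sketch of the proposed direction: block until the delete callback has run, so a retry can never have its fresh hashmap entry removed by a stale callback. This is a sketch of the idea, not the actual patch:

    // Hypothetical synchronous deleteNode; names are illustrative.
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    class SyncDeleteSketch {
        private final Map<String, Object> tasks = new ConcurrentHashMap<>();
        private final ExecutorService zkCallbacks = Executors.newSingleThreadExecutor();

        // Wait for the delete (and the map cleanup) to finish before
        // returning, so the retry always starts from a clean task map.
        void deleteNodeSync(String path) throws InterruptedException {
            CountDownLatch done = new CountDownLatch(1);
            zkCallbacks.submit(() -> {
                tasks.remove(path);
                done.countDown();
            });
            done.await();
        }
    }
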

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
