hbase-issues mailing list archives

From "jiraposter@reviews.apache.org (Commented) (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-5081) Distributed log splitting deleteNode races against splitLog retry
Date Thu, 22 Dec 2011 23:31:34 GMT

    [ https://issues.apache.org/jira/browse/HBASE-5081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13175149#comment-13175149 ]

jiraposter@reviews.apache.org commented on HBASE-5081:
------------------------------------------------------



bq.  On 2011-12-22 22:55:55, Prakash Khemani wrote:
bq.  > I feel that the proper fix should go in the method createTaskIfAbsent().
bq.  >
bq.  > Before attempting to delete a task in zk, task.deleted is set to true. The task
bq.  > is not removed from tasks array until task is successfully removed from zk.
bq.  >
bq.  > In createTaskIfAbsent(), when we find a deleted task we should do the following:
bq.  > * If the task had completed successfully, then return null. (It is as if the task
bq.  > completed right away.)
bq.  > * If the task had completed unsuccessfully, then block (with timeouts) until the
bq.  > task is removed from the tasks array.
bq.  >
bq.  > Without fixing anything, the problem, I think, is present only in the following scenario:
bq.  > - at startup the master acquires the orphan tasks listed in zookeeper, and one of these
bq.  > orphan tasks fails. Before that orphan task can be deleted, some master thread asks for
bq.  > that task to be completed. As things currently stand, the SplitLogManager will reply with
bq.  > SUCCESS immediately. (This is because of the logic in createTaskIfAbsent().)
bq.  >
bq.  > The common case where this race happens should work:
bq.  > - a master thread asks for a log dir to be split. That task fails, but it has not yet
bq.  > been deleted from zk, nor removed from tasks. The log-dir split is retried, and the
bq.  > retry finds the old, soon-to-be-deleted task. But the retry will also see that task.batch
bq.  > is set, and it will immediately throw an error saying 'someone else is waiting for this
bq.  > task'. And by the next time the log-dir split is retried, the tasks map might have been
bq.  > cleared and things will work.
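The behavior Prakash proposes could be sketched roughly as below. This is a simplified, illustrative model, not the actual SplitLogManager code: the Task class, Status enum, polling loop, and timeout handling here are stand-ins.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class CreateTaskSketch {
    enum Status { SUCCESS, FAILURE }

    static class Task {
        volatile boolean deleted;   // set before the zk delete is attempted
        volatile Status status;
        Task(boolean deleted, Status status) { this.deleted = deleted; this.status = status; }
    }

    final ConcurrentMap<String, Task> tasks = new ConcurrentHashMap<>();

    // Returns null if the task already completed successfully; otherwise returns
    // a task the caller should track. For a deleted-but-failed task, blocks (with
    // a timeout) until the zk delete callback purges it from the map.
    Task createTaskIfAbsent(String path, long timeoutMs) throws InterruptedException {
        Task fresh = new Task(false, null);
        Task existing = tasks.putIfAbsent(path, fresh);
        if (existing == null) return fresh;                      // new task installed
        if (existing.deleted) {
            if (existing.status == Status.SUCCESS) return null;  // as if done right away
            long deadline = System.currentTimeMillis() + timeoutMs;
            while (tasks.get(path) == existing) {                // wait for the zk delete
                if (System.currentTimeMillis() > deadline) return existing; // gave up
                Thread.sleep(10);
            }
            Task again = tasks.putIfAbsent(path, fresh);         // retry the insert
            return again == null ? fresh : again;
        }
        return existing;                                         // someone else owns it
    }
}
```

The open questions below (how long to block, and what to do when the timeout fires) correspond to the `timeoutMs` and `return existing` choices in this sketch.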

"The task is not removed from tasks array until task is successfully removed from zk."

This does not seem correct. stopTrackingTasks() will remove all tasks even if a task has
not been removed from zk.
That's why createTaskIfAbsent() can put a new task in the map.

If we remove stopTrackingTasks(), then the task should still be in tasks, and this
alternative will work.
But will removing stopTrackingTasks() cause other issues? For the second bullet, how long
should we block? And if the task is still not removed from the tasks array after the
timeout, what should we do?

Can you come up with a patch? I am very open to any fix.


bq.  On 2011-12-22 22:55:55, Prakash Khemani wrote:
bq.  > src/main/java/org/apache/hadoop/hbase/master/SplitLogManager.java, line 382
bq.  > <https://reviews.apache.org/r/3292/diff/8/?file=65682#file65682line382>
bq.  >
bq.  >     The task corresponding to this path has to be removed from the tasks map (as
bq.  >     in deleteNodeSuccess())

It is removed in the stopTrackingTasks() method: since this one failed, batch.installed
!= batch.done.


- Jimmy


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/3292/#review4089
-----------------------------------------------------------


On 2011-12-22 00:31:23, Jimmy Xiang wrote:
bq.  
bq.  -----------------------------------------------------------
bq.  This is an automatically generated e-mail. To reply, visit:
bq.  https://reviews.apache.org/r/3292/
bq.  -----------------------------------------------------------
bq.  
bq.  (Updated 2011-12-22 00:31:23)
bq.  
bq.  
bq.  Review request for hbase, Ted Yu, Michael Stack, and Lars Hofhansl.
bq.  
bq.  
bq.  Summary
bq.  -------
bq.  
bq.  In this patch, after a task is done, we don't delete the node if the task
bq.  failed, so that when it is retried later on, there won't be a race problem.
bq.  
bq.  It used to delete the node always.
bq.  
bq.  
bq.  This addresses bug HBASE-5081.
bq.      https://issues.apache.org/jira/browse/HBASE-5081
bq.  
bq.  
bq.  Diffs
bq.  -----
bq.  
bq.    src/main/java/org/apache/hadoop/hbase/master/SplitLogManager.java 667a8b1 
bq.    src/test/java/org/apache/hadoop/hbase/master/TestSplitLogManager.java 32ad7e8 
bq.  
bq.  Diff: https://reviews.apache.org/r/3292/diff
bq.  
bq.  
bq.  Testing
bq.  -------
bq.  
bq.  mvn -Dtest=TestDistributedLogSplitting clean test
bq.  
bq.  
bq.  Thanks,
bq.  
bq.  Jimmy
bq.  
bq.


                
> Distributed log splitting deleteNode races against splitLog retry 
> -------------------------------------------------------------------
>
>                 Key: HBASE-5081
>                 URL: https://issues.apache.org/jira/browse/HBASE-5081
>             Project: HBase
>          Issue Type: Bug
>          Components: wal
>    Affects Versions: 0.92.0, 0.94.0
>            Reporter: Jimmy Xiang
>            Assignee: Jimmy Xiang
>             Fix For: 0.92.0
>
>         Attachments: distributed-log-splitting-screenshot.png, hbase-5081-patch-v6.txt,
>                      hbase-5081-patch-v7.txt, hbase-5081_patch_for_92_v4.txt,
>                      hbase-5081_patch_v5.txt, patch_for_92.txt, patch_for_92_v2.txt,
>                      patch_for_92_v3.txt
>
>
> Recently, during 0.92 RC testing, we found that distributed log splitting hangs forever.
> Please see the attached screenshot.
> I looked into it, and here is what I think happened:
> 1. One region server died; the ServerShutdownHandler found out and started the
> distributed log splitting;
> 2. All three tasks failed, so the three tasks were deleted, asynchronously;
> 3. ServerShutdownHandler retried the log splitting;
> 4. During the retry, it created these three tasks again and put them in a hashmap
> (tasks);
> 5. The asynchronous deletion from step 2 finally happened for one task; in the callback,
> it removed that task from the hashmap;
> 6. One of the newly submitted tasks' zookeeper watchers found that the task was
> unassigned and not in the hashmap, so it created a new orphan task.
> 7. All three tasks failed, but the task created in step 6 was an orphan, so the
> batch.err counter was one short, and the log splitting hangs, waiting forever for the
> last task to finish, which is never going to happen.
> So I think the problem is step 2. The fix is to make the deletion synchronous instead of
> asynchronous, so that the retry has a clean start.
> An async deleteNode will mess up the split log retry. In an extreme situation, if the
> async deleteNode doesn't happen soon enough, a node created during the retry could be
> deleted.
> deleteNode should be sync.
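The race described in the steps above can be reproduced with a small, self-contained sketch. The map and executor here are stand-ins for the tasks hashmap and the async zookeeper delete callback; the paths and names are made up for illustration, and a latch is used to force the losing interleaving deterministically.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class AsyncDeleteRace {
    static boolean retryTaskSurvives() throws Exception {
        Map<String, String> tasks = new ConcurrentHashMap<>();
        ExecutorService zkCallbacks = Executors.newSingleThreadExecutor();
        CountDownLatch retried = new CountDownLatch(1);

        tasks.put("/splitlog/wal1", "attempt-1");          // step 1: task created
        // step 2: the task failed; its deletion is issued asynchronously
        Future<?> delete = zkCallbacks.submit(() -> {
            try { retried.await(); }                       // callback delayed...
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            tasks.remove("/splitlog/wal1");                // step 5: ...and fires late
        });
        tasks.put("/splitlog/wal1", "attempt-2");          // steps 3-4: retry re-creates it
        retried.countDown();
        delete.get();
        zkCallbacks.shutdown();
        // step 6 follows: the retry's task is gone from the map, so its watcher
        // would create an orphan and the batch counters drift out of sync
        return tasks.containsKey("/splitlog/wal1");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(retryTaskSurvives()
            ? "no race" : "retry task lost to async delete");
    }
}
```

With a synchronous delete, i.e. removing the entry before the retry is allowed to run, "attempt-2" would survive in the map, which is the fix the description proposes.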

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
