hadoop-mapreduce-issues mailing list archives

From "Bikas Saha (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (MAPREDUCE-4607) Race condition in ReduceTask completion can result in Task being incorrectly failed
Date Wed, 05 Sep 2012 21:07:08 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-4607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449134#comment-13449134 ]

Bikas Saha commented on MAPREDUCE-4607:
---------------------------------------

That's what I explained above.
The other option would be to do surgery in all the tests and have each of them create its own
mockTask by specifying the taskType as an argument. setup() won't be able to pass parameters
without that context. Also, things like generating the taskId in each test use the taskType
member outside of mockTask. I can make that change if you feel strongly about it; I made the
minimal changes that suffice.
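
For illustration, a rough sketch of what that per-test surgery could look like. It assumes
hypothetical createMockTask(TaskType) and getNewTaskID(TaskType) helpers in the test class; the
real test harness may be structured differently:

    // Sketch only: assumes hypothetical createMockTask(TaskType) and
    // getNewTaskID(TaskType) helpers instead of the shared mockTask/taskId
    // fields initialized in setup().
    @Test
    public void testSpeculativeReduceAttemptKilledAfterSuccess() {
      // each test would have to build a task of the type it needs ...
      TaskImpl mockTask = createMockTask(TaskType.REDUCE);
      // ... and taskId generation would also need the task type passed in
      TaskId taskId = getNewTaskID(TaskType.REDUCE);
      // drive two attempts: succeed the first, then kill the speculative one
    }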
                
> Race condition in ReduceTask completion can result in Task being incorrectly failed
> -----------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-4607
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4607
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>    Affects Versions: 2.1.0-alpha
>            Reporter: Bikas Saha
>            Assignee: Bikas Saha
>         Attachments: MAPREDUCE-4607.1.patch, MAPREDUCE-4607.2.patch, MAPREDUCE-4607.3.patch
>
>
> Problem reported by chackaravarthy in MAPREDUCE-4252.
> This problem has been handled for the case where a speculative attempt is launched for a map
> task and the other attempt fails (rather than being killed).
> Can a similar scenario happen for a reduce task?
> Consider the following scenario for a reduce task under speculation (one attempt gets killed):
> 1. A task attempt is started.
> 2. A speculative task attempt for the same task is started.
> 3. The first task attempt completes and causes the task to transition to SUCCEEDED.
> 4. The speculative task attempt is then killed because the first attempt has completed.
> As a result, an internal error is raised from TaskImpl.MapRetroactiveKilledTransition while
> handling this attempt's event, and that internal error leads to job failure.
> TaskImpl.MapRetroactiveKilledTransition:
> if (!TaskType.MAP.equals(task.getType())) {
>   LOG.error("Unexpected event for REDUCE task " + event.getType());
>   task.internalError(event.getType());
> }
> So, do we need the following code in MapRetroactiveKilledTransition as well, just like in
> MapRetroactiveFailureTransition?
> if (event instanceof TaskTAttemptEvent) {
>   TaskTAttemptEvent castEvent = (TaskTAttemptEvent) event;
>   if (task.getState() == TaskState.SUCCEEDED &&
>       !castEvent.getTaskAttemptID().equals(task.successfulAttempt)) {
>     // don't allow a different task attempt to override a previous
>     // succeeded state
>     return TaskState.SUCCEEDED;
>   }
> }
> Please check whether this is a valid case and give your suggestions.
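
For reference, a rough sketch of how the guard from MapRetroactiveFailureTransition might be folded
into MapRetroactiveKilledTransition. Names follow the snippets quoted above; the actual patch may
differ:

    // Sketch of the combined check inside MapRetroactiveKilledTransition's
    // transition(TaskImpl task, TaskEvent event) method.
    if (event instanceof TaskTAttemptEvent) {
      TaskTAttemptEvent castEvent = (TaskTAttemptEvent) event;
      if (task.getState() == TaskState.SUCCEEDED &&
          !castEvent.getTaskAttemptID().equals(task.successfulAttempt)) {
        // a late kill of a speculative attempt must not override the
        // task's already-SUCCEEDED state
        return TaskState.SUCCEEDED;
      }
    }
    if (!TaskType.MAP.equals(task.getType())) {
      LOG.error("Unexpected event for REDUCE task " + event.getType());
      task.internalError(event.getType());
    }
    // ... existing handling for the map case continues below ...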

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
