hadoop-common-dev mailing list archives

From "Devaraj Das (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-5474) All reduce tasks should be re-executed when tasktracker with a completed map task failed
Date Thu, 12 Mar 2009 08:52:50 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-5474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12681219#action_12681219
] 

Devaraj Das commented on HADOOP-5474:
-------------------------------------

bq. In this situation, if multiple executions of a map task on the same dataset can produce
different outputs, for example a map that emits a random number, the outputs of the original
map task and the re-executed map task will probably differ. The re-executed reduce tasks will
then read the new output of the re-executed map task, but the reduce tasks that already read
the data from the failed tasktracker have read the old output. This will probably affect the
correctness of the result.

I think your application should be tolerant of this and be written assuming that
maps/reduces can fail or get killed, etc. We really don't want to do what you suggest.
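
For illustration only (this is not part of the issue or any patch), here is a minimal sketch of what "written assuming maps could be re-executed" can look like in practice: instead of drawing randomness from a time-based seed, a map can derive its seed from the input record, so a re-executed map attempt emits exactly the same output that the already-completed reduces fetched. The class name and seeding scheme below are hypothetical; it uses the old org.apache.hadoop.mapred API that ships with 0.19.

{code:java}
import java.io.IOException;
import java.util.Random;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

// Hypothetical mapper: emits a "random" tag per record, but the randomness is
// derived from the record itself, so re-running the map attempt after a
// tasktracker failure reproduces byte-identical output.
public class DeterministicRandomMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, LongWritable> {

  public void map(LongWritable offset, Text line,
                  OutputCollector<Text, LongWritable> output, Reporter reporter)
      throws IOException {
    // Seed from the record offset and contents, never from
    // System.currentTimeMillis(): the same split always yields the same tags.
    Random rng = new Random(offset.get() ^ line.toString().hashCode());
    output.collect(line, new LongWritable(rng.nextLong()));
  }
}
{code}

A map written this way stays correct whether or not the framework re-executes all reduces, which is why leaving the current job-level behaviour unchanged is preferred here.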

> All reduce tasks should be re-executed when tasktracker with a completed map task failed
> ----------------------------------------------------------------------------------------
>
>                 Key: HADOOP-5474
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5474
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>    Affects Versions: 0.19.0
>         Environment: CentOS 5,
> hadoop-0.19.0
>            Reporter: Leitao Guo
>            Priority: Critical
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> When a tasktracker with a completed map task fails, the map task will be re-executed,
and all reduce tasks that haven't read the data from that tasktracker should be re-executed.
But the reduce tasks that have already read the data from that tasktracker will not be re-executed.

> In this situation, if multiple executions of a map task on the same dataset can produce
different outputs, for example a map that emits a random number, the outputs of the original
map task and the re-executed map task will probably differ. The re-executed reduce tasks will
then read the new output of the re-executed map task, but the reduce tasks that already read
the data from the failed tasktracker have read the old output. This will probably affect the
correctness of the result.
> A recommended solution is that all reduce tasks should be re-executed if a tasktracker
with a completed map task fails.
> Any comments? thanks!

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

