hadoop-mapreduce-issues mailing list archives

From "Ravi Gummadi (Commented) (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (MAPREDUCE-3473) Task failures shouldn't result in Job failures
Date Mon, 28 Nov 2011 08:14:40 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-3473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13158272#comment-13158272 ]

Ravi Gummadi commented on MAPREDUCE-3473:
-----------------------------------------

Currently in trunk (and even in 0.20), a task gets re-launched multiple times after failures,
until it has failed on 4 different nodes. Right? And the probability of a task failing on
4 different nodes because of these special types of task failures (like disk failures) is very
low. Right?
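
The retry behavior described above can be sketched as follows. This is an illustrative simplification, not actual Hadoop source: the class and method names are hypothetical, and the 4-attempt limit mirrors the default value of mapred.map.max.attempts.

```java
// Hypothetical sketch of per-task attempt accounting: a failed attempt is
// re-launched (ideally on a different node) until it either succeeds or
// exhausts the allowed attempts, at which point the whole job fails.
public class TaskRetrySketch {
    // Mirrors the default of mapred.map.max.attempts
    static final int MAX_ATTEMPTS = 4;

    // Returns true if the job survives: the task succeeded within the
    // allowed number of attempts. Each entry in attemptSucceeds models
    // the outcome of one attempt, typically on a distinct node.
    static boolean runWithRetries(boolean[] attemptSucceeds) {
        for (int attempt = 0;
             attempt < MAX_ATTEMPTS && attempt < attemptSucceeds.length;
             attempt++) {
            if (attemptSucceeds[attempt]) {
                return true; // attempt succeeded, job continues
            }
            // attempt failed; the framework schedules the next attempt,
            // preferring a node that has not yet failed this task
        }
        return false; // task failed MAX_ATTEMPTS times -> job fails
    }

    public static void main(String[] args) {
        // A transient disk failure on one node, then success elsewhere:
        assert runWithRetries(new boolean[]{false, true});
        // Failing on 4 different nodes (very unlikely) fails the job:
        assert !runWithRetries(new boolean[]{false, false, false, false});
    }
}
```

The point of the comment is the second case: a job only dies from these failures when the same task hits a bad disk (or similar) on 4 distinct nodes, which should be rare.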
                
> Task failures shouldn't result in Job failures 
> -----------------------------------------------
>
>                 Key: MAPREDUCE-3473
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3473
>             Project: Hadoop Map/Reduce
>          Issue Type: Improvement
>          Components: tasktracker
>    Affects Versions: 0.20.205.0, 0.23.0
>            Reporter: Eli Collins
>
> Currently some task failures may result in job failures. E.g. a local TT disk failure seen
> in TaskLauncher#run, TaskRunner#run, or MapTask#run is visible to the JobClient and can hang
> it, causing the job to fail. Job execution should always be able to survive a task failure
> if there are sufficient resources.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
