hadoop-common-dev mailing list archives

From "Christian Kunz (JIRA)" <j...@apache.org>
Subject [jira] Created: (HADOOP-1144) Hadoop should allow a configurable percentage of failed map tasks before declaring a job failed.
Date Thu, 22 Mar 2007 05:10:32 GMT
Hadoop should allow a configurable percentage of failed map tasks before declaring a job failed.
------------------------------------------------------------------------------------------------

                 Key: HADOOP-1144
                 URL: https://issues.apache.org/jira/browse/HADOOP-1144
             Project: Hadoop
          Issue Type: Improvement
          Components: mapred
    Affects Versions: 0.12.0
            Reporter: Christian Kunz
             Fix For: 0.13.0


In our environment, some map tasks can fail repeatedly because of corrupt input data. This
is often non-critical as long as the amount of bad data is limited. In such cases it is
annoying that the whole Hadoop job fails and cannot be restarted until the corrupt data are
identified and eliminated from the input. It would be extremely helpful if the job
configuration allowed one to specify how many map tasks may fail before the job as a whole
is declared failed.
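A minimal sketch of how such a knob might look from the job-submission side, assuming the
existing JobConf/JobClient API of this era and a hypothetical property name
"mapred.max.map.failures.percent" (the actual name and any accessor method would be settled
by the eventual patch):

    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class TolerantJob {
      public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(TolerantJob.class);
        conf.setJobName("tolerant-job");
        // Hypothetical key: tolerate up to 5% failed map tasks before
        // the whole job is declared failed. Today (0.12.x) any map task
        // that exhausts its retries fails the entire job.
        conf.setInt("mapred.max.map.failures.percent", 5);
        JobClient.runJob(conf);
      }
    }

A percentage (rather than an absolute count) would scale naturally with input size, since the
number of map tasks varies with the number of input splits.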

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

