From "Devaraj Das (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-1304) MAX_TASK_FAILURES should be configurable
Date Wed, 02 May 2007 11:56:15 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-1304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Devaraj Das updated HADOOP-1304:
--------------------------------

    Attachment: 1304.patch

This patch adds ("Expert:") setters to JobConf for the config items mapred.{map/reduce}.max.attempts.
Users can now call these JobConf APIs to set the values when they submit jobs, and the
framework reads them from the user's JobConf on a per-job basis.
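
For illustration, a minimal sketch of what per-job usage might look like. The patch does
not spell out the method names in this message, so the setter names
setMaxMapAttempts/setMaxReduceAttempts and the surrounding job setup are assumptions:

    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class MaxAttemptsExample {
      public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(MaxAttemptsExample.class);
        conf.setJobName("max-attempts-example");

        // Expert: retry each failing map task up to 8 times before the job
        // is declared failed (assumed to write mapred.map.max.attempts).
        conf.setMaxMapAttempts(8);

        // Expert: reducers typically all have to succeed, so a separate,
        // possibly smaller value can be used for them (assumed to write
        // mapred.reduce.max.attempts).
        conf.setMaxReduceAttempts(4);

        // ... configure input/output paths, mapper and reducer classes ...

        JobClient.runJob(conf);
      }
    }

Since the framework reads the values from the submitted JobConf, these settings apply only
to this job and do not change the cluster-wide defaults.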

> MAX_TASK_FAILURES should be configurable
> ----------------------------------------
>
>                 Key: HADOOP-1304
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1304
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: mapred
>    Affects Versions: 0.12.3
>            Reporter: Christian Kunz
>         Assigned To: Devaraj Das
>         Attachments: 1304.patch, 1304.patch, 1304.patch, 1304.patch
>
>
> After a couple of weeks of failed attempts, I was able to finish a large job only after
> I changed MAX_TASK_FAILURES to a higher value. In light of HADOOP-1144 (allowing a certain
> number of task failures without failing the job), it would be even better if this value
> could be configured separately for mappers and reducers, because often the success of a
> job requires the success of all reducers but not of all mappers.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

