hadoop-common-dev mailing list archives

From "Doug Cutting (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-1304) MAX_TASK_FAILURES should be configurable
Date Tue, 01 May 2007 16:49:15 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-1304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Doug Cutting updated HADOOP-1304:

    Status: Open  (was: Patch Available)

If these parameters are not per-job, then, yes, it makes no sense to add JobConf methods.
Long-term it may make sense to make these per-job, since some jobs may be less reliable than
others, requiring more retries.  But simply making this configurable is a step in the right
direction.
The current patch reads values from the JobTracker's configuration, not from the job's, yet
it includes JobConf setters & getters.  So we should either (a) remove the JobConf methods
from this patch; or (b) change it so these are per-job, and add "Expert:" methods to JobConf.
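Option (b) would follow JobConf's usual get/set style. A minimal self-contained sketch of what such "Expert:" per-job accessors could look like (the class, defaults, and property names here are assumptions modeled on Hadoop's mapred.* conventions, not the actual patch):

```java
import java.util.Properties;

// Hypothetical sketch of per-job "Expert:" accessors, backed by a simple
// key-value store standing in for a job's configuration.
public class TaskRetryConf {
    private final Properties props = new Properties();

    // Read an integer property, falling back to a default when unset.
    private int getInt(String key, int defaultValue) {
        String v = props.getProperty(key);
        return v == null ? defaultValue : Integer.parseInt(v);
    }

    // Expert: maximum attempts per map task before the job fails
    // (property name and default of 4 are assumptions).
    public void setMaxMapAttempts(int n) {
        props.setProperty("mapred.map.max.attempts", Integer.toString(n));
    }

    public int getMaxMapAttempts() {
        return getInt("mapred.map.max.attempts", 4);
    }

    // Expert: the same knob for reduce tasks, configurable separately.
    public void setMaxReduceAttempts(int n) {
        props.setProperty("mapred.reduce.max.attempts", Integer.toString(n));
    }

    public int getMaxReduceAttempts() {
        return getInt("mapred.reduce.max.attempts", 4);
    }
}
```

Making these per-job rather than JobTracker-wide means a value set by one job never affects another, which is what the setters above imply.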

> MAX_TASK_FAILURES should be configurable
> ----------------------------------------
>                 Key: HADOOP-1304
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1304
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: mapred
>    Affects Versions: 0.12.3
>            Reporter: Christian Kunz
>         Assigned To: Devaraj Das
>         Attachments: 1304.patch, 1304.patch, 1304.patch
> After a couple of weeks of failed attempts I was able to finish a large job only after
> I changed MAX_TASK_FAILURES to a higher value. In light of HADOOP-1144 (allowing a certain
> amount of task failures without failing the job) it would be even better if this value could
> be configured separately for mappers and reducers, because often a success of a job requires
> the success of all reducers but not of all mappers.
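The asymmetry described above (a job can tolerate lost map output but needs every reduce) could be expressed per job along these lines; a sketch using a plain key-value store in place of a job configuration, with property names assumed to follow Hadoop's mapred.* convention:

```java
import java.util.Properties;

// Sketch: configure a job that tolerates flaky map tasks but keeps
// reduce tasks strict, since every reduce must succeed.
public class RetryConfigExample {
    // Property names below are illustrative assumptions, not confirmed keys.
    public static Properties flakyMapJobConf() {
        Properties jobConf = new Properties();
        // Give each map task up to 8 attempts; some map failures are survivable.
        jobConf.setProperty("mapred.map.max.attempts", "8");
        // Keep reduces at 4 attempts; the job cannot succeed without them.
        jobConf.setProperty("mapred.reduce.max.attempts", "4");
        return jobConf;
    }
}
```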

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
