hadoop-common-dev mailing list archives

From "Devaraj Das (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-1304) MAX_TASK_FAILURES should be configurable
Date Mon, 30 Apr 2007 19:47:16 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-1304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Devaraj Das updated HADOOP-1304:
--------------------------------

    Attachment: 1304.patch

Attached is another patch with the JobConf accessor methods.
Regarding the potential problem that Arun raised: yes, the problem exists, but the hope is
that HADOOP-785 will address these vulnerabilities.
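
For readers without the patch handy, below is a minimal sketch of what JobConf-style accessors
for per-map and per-reduce attempt limits could look like. The method names, configuration keys,
and the default of 4 (the old hard-coded MAX_TASK_FAILURES) are assumptions for illustration,
not a quote from the attached 1304.patch.

    // Sketch only -- names, keys, and the default are assumed, not taken from 1304.patch.
    public class TaskAttemptLimits {

      public static final String MAP_MAX_ATTEMPTS = "mapred.map.max.attempts";       // assumed key
      public static final String REDUCE_MAX_ATTEMPTS = "mapred.reduce.max.attempts";  // assumed key
      private static final int DEFAULT_MAX_ATTEMPTS = 4; // assumed old MAX_TASK_FAILURES value

      private final java.util.Properties props = new java.util.Properties();

      // Maximum attempts per map task before the whole job is declared failed.
      public int getMaxMapAttempts() {
        return Integer.parseInt(props.getProperty(MAP_MAX_ATTEMPTS,
            String.valueOf(DEFAULT_MAX_ATTEMPTS)));
      }

      public void setMaxMapAttempts(int n) {
        props.setProperty(MAP_MAX_ATTEMPTS, String.valueOf(n));
      }

      // Maximum attempts per reduce task before the whole job is declared failed.
      public int getMaxReduceAttempts() {
        return Integer.parseInt(props.getProperty(REDUCE_MAX_ATTEMPTS,
            String.valueOf(DEFAULT_MAX_ATTEMPTS)));
      }

      public void setMaxReduceAttempts(int n) {
        props.setProperty(REDUCE_MAX_ATTEMPTS, String.valueOf(n));
      }
    }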

> MAX_TASK_FAILURES should be configurable
> ----------------------------------------
>
>                 Key: HADOOP-1304
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1304
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: mapred
>    Affects Versions: 0.12.3
>            Reporter: Christian Kunz
>         Assigned To: Devaraj Das
>         Attachments: 1304.patch, 1304.patch, 1304.patch
>
>
> After a couple of weeks of failed attempts I was able to finish a large job only after
> I changed MAX_TASK_FAILURES to a higher value. In light of HADOOP-1144 (allowing a certain
> number of task failures without failing the job) it would be even better if this value could
> be configured separately for mappers and reducers, because often the success of a job requires
> the success of all reducers but not of all mappers.
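
As a usage sketch of the separate map/reduce limits the description asks for (names assumed,
following the accessor sketch earlier in this message):

    // Usage sketch only -- accessor names are assumptions, not the final API.
    TaskAttemptLimits limits = new TaskAttemptLimits();
    limits.setMaxMapAttempts(10);   // tolerate flaky maps: more attempts before the job fails
    limits.setMaxReduceAttempts(4); // reduces stay strict, since every reduce must succeed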

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

