hadoop-common-dev mailing list archives

From "Devaraj Das (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-1304) MAX_TASK_FAILURES should be configurable
Date Mon, 30 Apr 2007 16:41:15 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-1304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12492713 ]

Devaraj Das commented on HADOOP-1304:
-------------------------------------

I think HADOOP-1144 only allows tolerating a certain percentage of *map* failures. *All* the
reduce tasks are supposed to execute successfully for the job to succeed. MAX_TASK_FAILURES
signifies the max *attempts* that the framework will make per task (as opposed to the max
failures counted across all the tasks of the job). So unless you are saying that we should
have fewer attempts for a single map (i.e., fewer than the hardcoded 4 attempts) and more
for reduces, I don't see the need for having two different config values. Am I missing
something here?
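To make the distinction concrete, here is a minimal sketch of the two failure policies being contrasted above. The class and method names are hypothetical illustrations, not Hadoop's actual API; the per-task limit of 4 attempts and the idea of a tolerated map-failure fraction come from the discussion itself.

```java
// Illustrative sketch only -- TaskFailurePolicy is a hypothetical class,
// not part of Hadoop's mapred code.
public class TaskFailurePolicy {

    // Per-task attempt limit (MAX_TASK_FAILURES semantics): a single task
    // fails the whole job once its own attempt count reaches the limit.
    static boolean taskFailsJob(int failedAttempts, int maxAttempts) {
        return failedAttempts >= maxAttempts;
    }

    // Job-level tolerance (HADOOP-1144 semantics): the job survives as long
    // as the fraction of *map* tasks that failed outright stays within the
    // tolerated fraction.
    static boolean jobFailsOnMaps(int failedMaps, int totalMaps,
                                  double toleratedFraction) {
        return (double) failedMaps / totalMaps > toleratedFraction;
    }

    public static void main(String[] args) {
        // A map on its 3rd failed attempt (of the hardcoded 4) is retried.
        System.out.println(taskFailsJob(3, 4));            // false
        // After the 4th failed attempt, the task -- and the job -- fails.
        System.out.println(taskFailsJob(4, 4));            // true
        // 5 failed maps out of 100 with a 10% tolerance: the job continues.
        System.out.println(jobFailsOnMaps(5, 100, 0.10));  // false
    }
}
```

Note that the two checks are independent: the first is evaluated per task, the second over the whole set of map tasks, which is why a single reduce exhausting its attempts can still sink a job that tolerates many map failures.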

> MAX_TASK_FAILURES should be configurable
> ----------------------------------------
>
>                 Key: HADOOP-1304
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1304
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: mapred
>    Affects Versions: 0.12.3
>            Reporter: Christian Kunz
>
> After a couple of weeks of failed attempts I was able to finish a large job only after
> I changed MAX_TASK_FAILURES to a higher value. In light of HADOOP-1144 (allowing a certain
> amount of task failures without failing the job) it would be even better if this value could
> be configured separately for mappers and reducers, because often the success of a job requires
> the success of all reducers but not of all mappers.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
