hadoop-common-dev mailing list archives

From "Doug Cutting (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-39) Job killed when backup tasks fail
Date Thu, 16 Feb 2006 20:35:10 GMT
    [ http://issues.apache.org/jira/browse/HADOOP-39?page=comments#action_12366674 ] 

Doug Cutting commented on HADOOP-39:
------------------------------------

The point is to try to get map tasks with side effects to sometimes succeed, even with speculative
execution?  That sounds like it could be a bad idea.  Wouldn't it be better to have map tasks
with side effects fail more frequently with speculative execution, so that you find such problems
sooner, with smaller datasets on a smaller cluster, before you try a big run?  Or am I misunderstanding
you?
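
To make the failure mode concrete, here is a minimal sketch of the kind of
mapper that breaks under speculative execution.  It assumes the classic
org.apache.hadoop.mapred API (as it later stabilized); the class name, the
marker path, and the map.input.file lookup are illustrative assumptions, not
code from the job in question.

import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

// Hypothetical mapper with a side effect outside the framework's output:
// it creates a marker file at a path derived only from the input split,
// not from the task attempt.  A speculative (backup) attempt of the same
// split races the original, create(..., false) refuses to overwrite, and
// one of the two attempts fails with an IOException.
public class SideEffectMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, Text> {

  private FileSystem fs;
  private String splitName;

  public void configure(JobConf job) {
    try {
      fs = FileSystem.get(job);
      // "map.input.file" is assumed here to name the current input split.
      splitName = new Path(job.get("map.input.file", "unknown-split")).getName();
    } catch (IOException e) {
      throw new RuntimeException(e);
    }
  }

  public void map(LongWritable key, Text value,
                  OutputCollector<Text, Text> output, Reporter reporter)
      throws IOException {
    // The side effect: one marker file per split, shared by all attempts.
    Path marker = new Path("/tmp/markers/" + splitName);
    fs.create(marker, false).close();  // fails if another attempt got here first

    output.collect(new Text(splitName), value);
  }
}

The usual way out is to key side effects by task attempt rather than by split,
or to turn speculation off for such a job (the mapred.speculative.execution
switch, if memory serves); the question above is about what should happen when
it is left on.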

> Job killed when backup tasks fail
> ---------------------------------
>
>          Key: HADOOP-39
>          URL: http://issues.apache.org/jira/browse/HADOOP-39
>      Project: Hadoop
>         Type: Bug
>   Components: mapred
>     Reporter: Owen O'Malley
>
> I had a map job with side effects that meant that any speculative tasks would fail.
> Currently, the job tracker kills the job when the speculative task fails 4 times.
> It would be better to stop scheduling speculative tasks for that fragment, but let the
> job continue as long as one of the instances of that fragment continues to run.
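
To make the proposed behaviour concrete, a minimal, self-contained sketch of
the bookkeeping it implies follows; this is not the actual JobTracker code,
and every name in it is hypothetical.  The idea is that a failed backup
(speculative) attempt only disables further speculation for its fragment,
while only failures of real attempts count toward the limit that kills the job.

// Hypothetical per-fragment record, sketching the policy described above.
public class FragmentStatus {

  // Assumed per-task failure limit, matching the "4 times" mentioned above.
  private static final int MAX_PRIMARY_FAILURES = 4;

  private int primaryFailures = 0;
  private boolean speculationAllowed = true;
  private boolean primaryRunning = true;

  /** Called when any attempt of this fragment fails. */
  public synchronized void attemptFailed(boolean wasSpeculative) {
    if (wasSpeculative) {
      // Proposed change: give up on backups for this fragment, but leave
      // the job alive while the original attempt keeps running.
      speculationAllowed = false;
    } else {
      primaryFailures++;
    }
  }

  /** Called when the original (non-backup) attempt finishes or dies. */
  public synchronized void primaryStopped() {
    primaryRunning = false;
  }

  /** Should the scheduler launch another backup attempt for this fragment? */
  public synchronized boolean shouldSpeculate() {
    return speculationAllowed && primaryRunning;
  }

  /** Only repeated failures of real attempts doom the job. */
  public synchronized boolean shouldFailJob() {
    return primaryFailures >= MAX_PRIMARY_FAILURES;
  }
}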

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira

