hadoop-common-dev mailing list archives

From "Hudson (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-4246) Reduce task copy errors may not kill it eventually
Date Sat, 04 Oct 2008 13:45:45 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-4246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12636823#action_12636823 ]

Hudson commented on HADOOP-4246:
--------------------------------

Integrated in Hadoop-trunk #623 (See [http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/623/])
    HADOOP-4246. Ensure we have the correct lower bound on the number of retries for fetching
map-outputs; also fixed the case where, for small jobs, the reducer would not kill itself even
when too many unique map-outputs could not be fetched. Contributed by Amareshwari Sri Ramadasu.


> Reduce task copy errors may not kill it eventually
> --------------------------------------------------
>
>                 Key: HADOOP-4246
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4246
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>    Affects Versions: 0.19.0
>            Reporter: Amareshwari Sriramadasu
>            Assignee: Amareshwari Sriramadasu
>            Priority: Blocker
>             Fix For: 0.19.0
>
>         Attachments: patch-4246.txt, patch-4246.txt, patch-4246.txt, patch-4246.txt
>
>
> maxFetchRetriesPerMap in the reduce task can sometimes be zero (when maxMapRunTime is less
> than 4 seconds or mapred.reduce.copy.backoff is less than 4). In that case, reduce task copy
> errors are never counted against the task, so repeated fetch failures will not kill it eventually.
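
The arithmetic behind the bug can be sketched as follows. This is a minimal illustration, not Hadoop's actual code: the function names, the backoff constant, and the `min_retries` lower bound are assumptions based on the description and commit message above (retries derived from how many exponential-backoff waits fit into the map's runtime, with the fix adding a lower bound).

```python
import math

BACKOFF_INIT_MS = 4000  # assumed initial fetch-retry backoff (~4 seconds)

def retries_buggy(max_map_runtime_ms):
    # Retries allowed per map-output before giving up: roughly the number
    # of exponential-backoff waits that fit into the map's runtime. When
    # the runtime is under BACKOFF_INIT_MS the ratio is zero, so the retry
    # count is zero -- copy errors never accumulate, which is this bug.
    return int(math.log2(max_map_runtime_ms // BACKOFF_INIT_MS + 1))

def retries_fixed(max_map_runtime_ms, min_retries=2):
    # Sketch of the fix: enforce a lower bound on the retry count so that
    # fetch failures are always counted and can eventually kill the task.
    # (min_retries=2 is a hypothetical value for illustration.)
    return max(min_retries, retries_buggy(max_map_runtime_ms))

print(retries_buggy(3000))   # 0: maps finished in under 4s allow no retries
print(retries_fixed(3000))   # 2: failures are now counted
```

With the lower bound in place, even short-running jobs accumulate fetch failures per map-output, so the "too many copy errors" check can fire as intended.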

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

