hadoop-mapreduce-issues mailing list archives

From "Hadoop QA (JIRA)" <j...@apache.org>
Subject [jira] Commented: (MAPREDUCE-1521) Protection against incorrectly configured reduces
Date Mon, 28 Feb 2011 06:43:36 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-1521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13000141#comment-13000141 ]

Hadoop QA commented on MAPREDUCE-1521:
--------------------------------------

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12471049/resourcestimator-overflow.txt
  against trunk revision 1075216.

    +1 @author.  The patch does not contain any @author tags.

    -1 tests included.  The patch doesn't appear to include any new or modified tests.
                        Please justify why no new tests are needed for this patch.
                        Also please list what manual steps were performed to verify this patch.

    -1 patch.  The patch command could not apply the patch.

Console output: https://hudson.apache.org/hudson/job/PreCommit-MAPREDUCE-Build/86//console

This message is automatically generated.

> Protection against incorrectly configured reduces
> -------------------------------------------------
>
>                 Key: MAPREDUCE-1521
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1521
>             Project: Hadoop Map/Reduce
>          Issue Type: Improvement
>          Components: jobtracker
>            Reporter: Arun C Murthy
>            Assignee: Mahadev konar
>            Priority: Critical
>             Fix For: 0.22.0
>
>         Attachments: MAPREDUCE-1521-0.20-yahoo.patch, MAPREDUCE-1521-0.20-yahoo.patch,
> MAPREDUCE-1521-0.20-yahoo.patch, MAPREDUCE-1521-0.20-yahoo.patch, MAPREDUCE-1521-0.20-yahoo.patch,
> MAPREDUCE-1521-trunk.patch, resourceestimator-threshold.txt, resourcestimator-overflow.txt
>
>
> We've seen a fair number of instances where naive users process huge data-sets (>10TB)
> with a badly mis-configured number of reduces, e.g. a single reduce.
> This is a significant problem on large clusters: each reduce attempt takes a long time
> to shuffle and then runs into problems such as exhausting local disk space, and the job
> burns through 4 such attempts before it finally fails.
> Proposal: come up with heuristics/configs to fail such jobs early.
> Thoughts?
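
Below is a minimal, self-contained sketch (plain Java, not taken from any of the attached
patches) of the kind of early-failure heuristic the proposal describes: estimate the bytes
each reduce would have to shuffle and reject the job up front if a configurable per-reduce
limit is exceeded. The config key, default limit, and method names here are hypothetical.

// ReduceSanityCheck.java -- illustrative only; the config key, default limit,
// and method below are hypothetical and not part of the attached patches.
public class ReduceSanityCheck {

    // Hypothetical config key: the most shuffle data a single reduce may be
    // expected to receive before the job is rejected outright.
    static final String MAX_BYTES_PER_REDUCE_KEY =
        "mapreduce.job.max-bytes-per-reduce";
    static final long DEFAULT_MAX_BYTES_PER_REDUCE =
        10L * 1024 * 1024 * 1024; // 10 GB

    /**
     * Fails fast if the estimated map output, spread across the configured
     * number of reduces, exceeds the per-reduce limit.
     */
    static void checkReduceConfiguration(long estimatedMapOutputBytes,
                                         int numReduces,
                                         long maxBytesPerReduce) {
        if (numReduces <= 0) {
            return; // map-only job, nothing to check
        }
        long bytesPerReduce = estimatedMapOutputBytes / numReduces;
        if (bytesPerReduce > maxBytesPerReduce) {
            throw new IllegalStateException(
                "Estimated " + bytesPerReduce + " bytes per reduce with "
                + numReduces + " reduce(s) exceeds the limit of "
                + maxBytesPerReduce + " bytes; increase the number of reduces"
                + " or raise " + MAX_BYTES_PER_REDUCE_KEY);
        }
    }

    public static void main(String[] args) {
        // 10 TB of estimated map output funnelled into a single reduce should
        // be rejected before any reduce attempt is ever scheduled.
        long tenTerabytes = 10L * 1024 * 1024 * 1024 * 1024;
        try {
            checkReduceConfiguration(tenTerabytes, 1,
                                     DEFAULT_MAX_BYTES_PER_REDUCE);
        } catch (IllegalStateException expected) {
            System.out.println("Job rejected early: " + expected.getMessage());
        }
    }
}

In a real implementation the estimate would presumably come from something like the job
tracker's resource estimator and the limit from the job configuration; the point is only
that the check runs before any reduce attempt is launched.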

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
