hadoop-mapreduce-issues mailing list archives

From "Allen Wittenauer (JIRA)" <j...@apache.org>
Subject [jira] Commented: (MAPREDUCE-177) Hadoop performance degrades significantly as more and more jobs complete
Date Fri, 04 Dec 2009 19:37:21 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12786104#action_12786104 ]

Allen Wittenauer commented on MAPREDUCE-177:
--------------------------------------------

What is the latest status of this patch?  It doesn't appear to have been committed or, heck, even
resolved as to how the fix will be applied.

> Hadoop performance degrades significantly as more and more jobs complete
> ------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-177
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-177
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>            Reporter: Runping Qi
>            Assignee: Ioannis Koltsidas
>            Priority: Critical
>         Attachments: HADOOP-4766-v1.patch, HADOOP-4766-v2.10.patch, HADOOP-4766-v2.4.patch,
>                      HADOOP-4766-v2.6.patch, HADOOP-4766-v2.7-0.18.patch, HADOOP-4766-v2.7-0.19.patch, HADOOP-4766-v2.7.patch,
>                      HADOOP-4766-v2.8-0.18.patch, HADOOP-4766-v2.8-0.19.patch, HADOOP-4766-v2.8.patch, HADOOP-4766-v3.4-0.19.patch,
>                      map_scheduling_rate.txt
>
>
> When I ran the gridmix 2 benchmark load on a fresh cluster of 500 nodes with hadoop trunk, the gridmix load, consisting of 202 map/reduce jobs of various sizes, completed in 32 minutes.
> Then I ran the same set of jobs on the same cluster; they completed in 43 minutes.
> When I ran them a third time, it took (almost) forever --- the job tracker became non-responsive.
> The job tracker's heap size was set to 2GB.
> The cluster is configured to keep up to 500 jobs in memory.
> The job tracker kept one CPU busy all the time. It looks like this was due to GC.
> I believe releases 0.18 and 0.19 have similar behavior.
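
For readers trying to reproduce the setup: the "2GB heap" and "500 jobs in memory" above would typically be controlled by HADOOP_HEAPSIZE in conf/hadoop-env.sh and a retained-jobs limit in conf/mapred-site.xml. The issue does not say which property the reporter's cluster actually used, so the snippet below is only an illustrative sketch under that assumption:

    <!-- conf/mapred-site.xml: illustrative only; the exact property used on the
         reporter's cluster is not stated in the issue. -->
    <property>
      <!-- Completed jobs per user retained in the JobTracker's memory. -->
      <name>mapred.jobtracker.completeuserjobs.maximum</name>
      <value>500</value>
    </property>
    <!-- The 2GB JobTracker heap would normally come from HADOOP_HEAPSIZE=2000
         (value in MB) in conf/hadoop-env.sh. -->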

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

