hadoop-common-dev mailing list archives

From "Devaraj Das (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-4766) Hadoop performance degrades significantly as more and more jobs complete
Date Thu, 04 Dec 2008 16:50:44 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-4766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12653351#action_12653351
] 

Devaraj Das commented on HADOOP-4766:
-------------------------------------

bq. I believe the release 0.18/0.19 have the similar behavior. I believe 0.18 and 0.18 also have the similar behavior.

Runping, could you please clarify this? Did you actually run gridmix with both 0.18 and 0.19?


bq. The cluster is configured to keep up to 500 jobs in memory.

That seems like quite a large number. So when the second gridmix is run, the JobTracker still
holds the entire set of jobs from the first gridmix, i.e., nearly 100K maps and thousands of
reducers, right? Although the JobTracker should behave gracefully, I am wondering whether the
memory pressure is too high. Again, was such a configuration (500 jobs in memory) ever used
with gridmix earlier?
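For reference, the number of completed jobs the JobTracker retains in memory is a configurable limit; in 0.18/0.19 the relevant knob is mapred.jobtracker.completeuserjobs.maximum (per user, default 100). A sketch of what a "500 jobs in memory" setup might look like in hadoop-site.xml (the exact value used by Runping's cluster is an assumption here):

```xml
<!-- hadoop-site.xml: cap on completed jobs the JobTracker keeps in
     memory per user; raising this keeps job objects (tasks, counters,
     job history) live on the JobTracker heap long after completion. -->
<property>
  <name>mapred.jobtracker.completeuserjobs.maximum</name>
  <value>500</value>
</property>
```

Retained completed jobs pin their TaskInProgress objects and counters on the heap, which is why a high limit plus many large jobs could plausibly produce the GC pressure discussed below.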

bq.  After setting the heapsize of the job tracker to 3GB, the situation becomes even worse
— the first set of gridmix 2 jobs did not finish in 4+ hours.

Agreed, this is really weird.
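One cheap way to test the "one busy CPU is GC" theory is to look at cumulative GC counts and times for the JVM, either externally (e.g. jstat against the JobTracker pid) or via the standard management beans. A minimal self-contained sketch of the latter (this is generic JVM instrumentation, not JobTracker code):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Prints cumulative collection count and time for each garbage
// collector in the current JVM. Run periodically (or compare two
// snapshots): if timeMs grows at nearly wall-clock rate, the process
// is spending most of its time in GC.
public class GcReport {
    public static void main(String[] args) {
        long totalCount = 0;
        long totalMillis = 0;
        for (GarbageCollectorMXBean gc
                : ManagementFactory.getGarbageCollectorMXBeans()) {
            totalCount += gc.getCollectionCount();
            totalMillis += gc.getCollectionTime();
            System.out.println(gc.getName()
                    + ": count=" + gc.getCollectionCount()
                    + " timeMs=" + gc.getCollectionTime());
        }
        System.out.println("total: count=" + totalCount
                + " timeMs=" + totalMillis);
    }
}
```

Running jstat -gcutil against the JobTracker pid would give the same picture without modifying anything.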

> Hadoop performance degrades significantly as more and more jobs complete
> ------------------------------------------------------------------------
>
>                 Key: HADOOP-4766
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4766
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>    Affects Versions: 0.18.2, 0.19.0
>            Reporter: Runping Qi
>            Priority: Blocker
>             Fix For: 0.18.3, 0.19.1, 0.20.0
>
>
> When I ran the gridmix 2 benchmark load on a fresh cluster of 500 nodes with hadoop trunk,
> the gridmix load, consisting of 202 map/reduce jobs of various sizes, completed in 32 minutes.
> Then I ran the same set of the jobs on the same cluster, they completed in 43 minutes.
> When I ran them the third time, it took (almost) forever --- the job tracker became non-responsive.
> The job tracker's heap size was set to 2GB.
> The cluster is configured to keep up to 500 jobs in memory.
> The job tracker kept one cpu busy all the time. Looks like it was due to GC.
> I believe the release 0.18/0.19 have the similar behavior.
> I believe 0.18 and 0.18 also have the similar behavior.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

