hadoop-common-dev mailing list archives

From "Amareshwari Sriramadasu (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-3670) JobTracker running out of heap space
Date Fri, 04 Jul 2008 12:09:37 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-3670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12610565#action_12610565
] 

Amareshwari Sriramadasu commented on HADOOP-3670:
-------------------------------------------------

Looking at the hprof file, the observations are as follows:
The JobTracker's heap has reached 2.4 GB, of which byte[] objects account for 79%, i.e. 1.88 GB.
The detailed allocation of the byte[] objects contributing the high memory is shown below.
{noformat}
    +---org.apache.hadoop.io.BytesWritable    |   120,168  99 %  |   1,822,933,232  97 %  |
    | +---org.apache.hadoop.mapred.JobClient$RawSplit    |    63,536  53 %  |     986,661,560  52 %  |
    | | +---org.apache.hadoop.mapred.TaskInProgress    |    60,725  50 %  |     936,490,832  50 %  |
    | | | +---org.apache.hadoop.mapred.TaskInProgress[]    |    60,478  50 %  |     936,433,528  50 %  |
    | | | | +---org.apache.hadoop.mapred.JobInProgress    |    60,478  50 %  |     936,433,528  50 %  |
    | | | |   +---<Objects are retained by instances of several classes>    |    60,478  50 %  |     936,433,528  50 %  |
    | | | |     +---java.lang.Object[]    |                  |                        |
    | | | |     +---org.apache.hadoop.mapred.TaskInProgress    |                  |                        |
    | | | |     +---java.util.TreeMap$Entry    |                  |                        |
    | +---org.apache.hadoop.mapred.MapTask    |    56,629  47 %  |     836,271,336  44 %  |
    | | +---<Objects are retained by instances of several classes>    |    56,629  47 %  |     836,271,336  44 %  |
    | |   +---java.util.TreeMap$Entry    |                  |                        |
    | |   +---org.apache.hadoop.mapred.Task$FileSystemStatisticUpdater    |                  |                        |
{noformat}

Clearly, the RawSplits held by the TaskInProgress objects contribute almost 1 GB, and the MapTask objects contribute another ~0.8 GB.
Within MapTask as well, it is the BytesWritable split that accounts for most of the memory.
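
The totals above can be sanity-checked with simple arithmetic. The figures below are copied directly from the {noformat} tree; the per-object average is derived from them, not measured separately:

```java
// Back-of-envelope check of the heap-dump numbers quoted above.
// Retained sizes and counts are taken verbatim from the dump tree.
public class SplitFootprint {
    public static void main(String[] args) {
        long rawSplitBytes = 986_661_560L; // retained via JobClient$RawSplit
        long rawSplitCount = 63_536L;      // number of RawSplit-held byte[]s
        long mapTaskBytes  = 836_271_336L; // retained via MapTask objects

        // Average split payload kept alive per RawSplit: ~15 KB each.
        System.out.println("avg bytes per RawSplit: "
                + rawSplitBytes / rawSplitCount);

        // RawSplit plus MapTask copies together: ~1.8 GB of the 1.88 GB
        // attributed to byte[] in the dump.
        System.out.printf("RawSplit + MapTask copies: %.2f GB%n",
                (rawSplitBytes + mapTaskBytes) / 1e9);
    }
}
```

So with ~60,000 live TaskInProgress objects each pinning a ~15 KB serialized split, plus a second copy retained through MapTask, the two paths alone explain nearly all of the byte[] footprint.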



> JobTracker running out of heap space
> ------------------------------------
>
>                 Key: HADOOP-3670
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3670
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>    Affects Versions: 0.17.0
>            Reporter: Christian Kunz
>            Assignee: Amareshwari Sriramadasu
>         Attachments: memory-dump.txt
>
>
> The JobTracker on our 0.17.0 installation runs out of heap space rather quickly, with less than 100 jobs (at one time even after just 16 jobs).
> Running in 64-bit mode with larger heap space does not help -- it will use up all available RAM.
> 2008-06-28 05:17:06,661 INFO org.apache.hadoop.ipc.Server: IPC Server handler 62 on 9020, call heartbeat(org.apache.hadoop.mapred.TaskTrackerStatus@6f81c6, false, true, 17384) from xxx.xxx.xxx.xxx:51802: error: java.io.IOException: java.lang.OutOfMemoryError: GC overhead limit exceeded
> java.io.IOException: java.lang.OutOfMemoryError: GC overhead limit exceeded

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

