hadoop-common-dev mailing list archives

From "Raghu Angadi (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-2907) dead datanodes because of OutOfMemoryError
Date Wed, 27 Feb 2008 19:01:51 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-2907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Raghu Angadi updated HADOOP-2907:
---------------------------------

    Attachment: jmap-histo.txt

Koji and I looked at one of the datanodes. The attached jmap-histo.txt is the output of 'jmap -histo'. The top entry takes pretty much all the memory:

{noformat}
num   #instances    #bytes  class name
--------------------------------------
  1:     25039   918792056  [B
  2:     38666     4910272  [C
  3:     23620     2674272  <constMethodKlass>
  4:     23620     1893912  <methodKlass>
  5:     38347     1549664  <symbolKlass>
{noformat}

Does "[B" refer to byte arrays? Any ideas about how to get any more info out of this process are welcome.
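For reference, "[B" in a jmap histogram is the JVM's internal descriptor for byte[], and "[C" is char[]. A minimal standalone check (not tied to the datanode process) that prints the same descriptors:

```java
// JVM runtime class names for arrays match the descriptors shown by 'jmap -histo'.
public class DescriptorCheck {
    public static void main(String[] args) {
        System.out.println(new byte[0].getClass().getName());    // [B  (byte array)
        System.out.println(new char[0].getClass().getName());    // [C  (char array)
        System.out.println(new byte[0][0].getClass().getName()); // [[B (2-D byte array)
    }
}
```

So the top entry above is ~918 MB held by roughly 25,000 byte arrays, which works out to an average of about 36 KB per array.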

> dead datanodes because of OutOfMemoryError
> ------------------------------------------
>
>                 Key: HADOOP-2907
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2907
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.16.0
>            Reporter: Christian Kunz
>         Attachments: jmap-histo.txt
>
>
> We see more dead datanodes than in previous releases. The common exception is found in the out file:
> Exception in thread "org.apache.hadoop.dfs.DataBlockScanner@18166e5" java.lang.OutOfMemoryError: Java heap space
> Exception in thread "DataNode: [dfs.data.dir-value]" java.lang.OutOfMemoryError: Java heap space

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

