hadoop-common-dev mailing list archives

From "Raghu Angadi (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-2907) dead datanodes because of OutOfMemoryError
Date Fri, 29 Feb 2008 01:33:51 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-2907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12573551#action_12573551 ]

Raghu Angadi commented on HADOOP-2907:
--------------------------------------

> (my impression is that that release still wrote to local disk)
0.16 does not write to local disk; the local disk write was removed quite some time back. But it did remove some buffers on the DataNode: HADOOP-2768 went into svn revision 618349 and you are running 618351.
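
One way to double-check which revision a deployed build was compiled from (a minimal sketch, not part of this thread) is to print it with Hadoop's org.apache.hadoop.util.VersionInfo class and compare it against r618349, the revision HADOOP-2768 went into:

    // Minimal sketch: print the version and the svn revision this build
    // was compiled from, for comparison against r618349 (HADOOP-2768).
    import org.apache.hadoop.util.VersionInfo;

    public class PrintBuildRevision {
        public static void main(String[] args) {
            System.out.println("Hadoop version: " + VersionInfo.getVersion());
            System.out.println("Compiled from svn revision: " + VersionInfo.getRevision());
        }
    }

The same version and revision information is also printed by running "bin/hadoop version" on the node in question.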

> dead datanodes because of OutOfMemoryError
> ------------------------------------------
>
>                 Key: HADOOP-2907
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2907
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.16.0
>            Reporter: Christian Kunz
>
> We see more dead datanodes than in previous releases. The common exception is found in the out file:
> Exception in thread "org.apache.hadoop.dfs.DataBlockScanner@18166e5" java.lang.OutOfMemoryError: Java heap space
> Exception in thread "DataNode: [dfs.data.dir-value]" java.lang.OutOfMemoryError: Java heap space

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

