hadoop-common-dev mailing list archives

From "Christian Kunz (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-2907) dead datanodes because of OutOfMemoryError
Date Fri, 29 Feb 2008 03:44:51 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-2907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12573578#action_12573578 ]

Christian Kunz commented on HADOOP-2907:

When we ran into the same problem with #810 a month ago on a different cluster, it was not
clear to us whether we should report it immediately, because it was a trunk release and it
might have been just a transient problem. We wanted to wait for a stable release and check
whether it still happens.

BTW, I checked the logs:
Around 2% of the datanodes had OutOfMemoryError exceptions. By itself that would probably
not be much of a problem, but some of the datanodes went down within such a short period of
time that we lost a few blocks, presumably because all replicas of those blocks sat on the
failed nodes before re-replication could kick in.
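As a side note for anyone verifying the same symptom: lost blocks show up in the fsck
report. A minimal sketch, assuming a stock 0.16 installation and using / purely as an
illustrative path:

    # Walk the namespace and report per-file block counts, replica
    # locations, and any blocks with no remaining replicas.
    bin/hadoop fsck / -files -blocks -locations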

> dead datanodes because of OutOfMemoryError
> ------------------------------------------
>                 Key: HADOOP-2907
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2907
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.16.0
>            Reporter: Christian Kunz
> We see more dead datanodes than in previous releases. The common exception is found in the out file:
> Exception in thread "org.apache.hadoop.dfs.DataBlockScanner@18166e5" java.lang.OutOfMemoryError: Java heap space
> Exception in thread "DataNode: [dfs.data.dir-value]" java.lang.OutOfMemoryError: Java heap space
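Since both failing threads run inside the DataNode JVM, the usual first-line mitigation is
to give the daemon more heap. A minimal sketch of the relevant hadoop-env.sh setting,
assuming the stock 0.16 scripts; the 2000 MB figure is purely illustrative, not a value
recommended anywhere in this thread:

    # conf/hadoop-env.sh
    # Maximum heap, in MB, passed as -Xmx to every daemon started via
    # bin/hadoop-daemon.sh (NameNode, DataNode, JobTracker, TaskTracker).
    # The stock scripts default to 1000 MB when this is unset.
    export HADOOP_HEAPSIZE=2000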

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
