Message-ID: <721431233.1204256691163.JavaMail.jira@brutus>
Date: Thu, 28 Feb 2008 19:44:51 -0800 (PST)
From: "Christian Kunz (JIRA)"
To: core-dev@hadoop.apache.org
Reply-To: core-dev@hadoop.apache.org
Subject: [jira] Commented: (HADOOP-2907) dead datanodes because of OutOfMemoryError
In-Reply-To: <1667842456.1204075251034.JavaMail.jira@brutus>

    [ https://issues.apache.org/jira/browse/HADOOP-2907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12573578#action_12573578 ]

Christian Kunz commented on HADOOP-2907:
----------------------------------------

When we ran into the same problem with #810 a month ago on a different cluster, it was not clear to us whether we should report it immediately, because it was a trunk release and it might have been just a transient problem. We wanted to wait for a stable release and check whether it still happens.

BTW, I checked the logs: around 2% of datanodes had OutOfMemoryError exceptions. By itself, this would probably not be much of a problem, but some of the datanodes went down within a short period of time, such that we lost a few blocks.

> dead datanodes because of OutOfMemoryError
> ------------------------------------------
>
>                 Key: HADOOP-2907
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2907
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.16.0
>            Reporter: Christian Kunz
>
> We see more dead datanodes than in previous releases. The common exception is found in the out file:
>
> Exception in thread "org.apache.hadoop.dfs.DataBlockScanner@18166e5" java.lang.OutOfMemoryError: Java heap space
> Exception in thread "DataNode: [dfs.data.dir-value]" java.lang.OutOfMemoryError: Java heap space

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
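
[Editorial note, not part of the original thread: for heap exhaustion of this kind, a minimal diagnostic sketch is to raise the datanode heap and capture a heap dump when the OutOfMemoryError fires, so the retained objects (e.g. DataBlockScanner state) can be inspected offline. This assumes the per-daemon HADOOP_DATANODE_OPTS hook in the stock conf/hadoop-env.sh of that era and standard HotSpot flags; the heap size and dump path below are placeholders, so verify both against your own distribution and JVM.]

    # conf/hadoop-env.sh -- hypothetical diagnostic settings, not from the report
    # Give the datanode more headroom and dump the heap on OOM for
    # offline inspection with jmap/jhat.
    export HADOOP_DATANODE_OPTS="-Xmx1024m \
      -XX:+HeapDumpOnOutOfMemoryError \
      -XX:HeapDumpPath=/tmp/datanode.hprof \
      $HADOOP_DATANODE_OPTS"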