From: "Raghu Angadi (JIRA)"
To: core-dev@hadoop.apache.org
Reply-To: core-dev@hadoop.apache.org
Date: Thu, 28 Feb 2008 18:00:54 -0800 (PST)
Message-ID: <902183892.1204250454205.JavaMail.jira@brutus>
In-Reply-To: <1667842456.1204075251034.JavaMail.jira@brutus>
Subject: [jira] Commented: (HADOOP-2907) dead datanodes because of OutOfMemoryError

    [ https://issues.apache.org/jira/browse/HADOOP-2907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12573558#action_12573558 ]

Raghu Angadi commented on HADOOP-2907:
--------------------------------------

> #810 release was a trunk release from Jan 4, running on a different cluster. Because we did not experience HADOOP-2883 with that release, I assumed that it did not write directly to DFS. Am I wrong?

Right. HADOOP-1707 went in on Jan 17th. What is the relation to this JIRA? All the logs and times mentioned here are from around Feb 19-24th or so.

> dead datanodes because of OutOfMemoryError
> ------------------------------------------
>
>                 Key: HADOOP-2907
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2907
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.16.0
>            Reporter: Christian Kunz
>
> We see more dead datanodes than in previous releases. The common exception is found in the out file:
> Exception in thread "org.apache.hadoop.dfs.DataBlockScanner@18166e5" java.lang.OutOfMemoryError: Java heap space
> Exception in thread "DataNode: [dfs.data.dir-value]" java.lang.OutOfMemoryError: Java heap space

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.