Date: Tue, 12 Feb 2013 18:31:14 +0000 (UTC)
From: "Suresh Srinivas (JIRA)"
To: hdfs-dev@hadoop.apache.org
Reply-To: hdfs-dev@hadoop.apache.org
Subject: [jira] [Resolved] (HDFS-4475) OutOfMemory by BPServiceActor.offerService() takes down DataNode

     [ https://issues.apache.org/jira/browse/HDFS-4475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Suresh Srinivas resolved HDFS-4475.
-----------------------------------
    Resolution: Invalid

> OutOfMemory by BPServiceActor.offerService() takes down DataNode
> ----------------------------------------------------------------
>
>                 Key: HDFS-4475
>                 URL: https://issues.apache.org/jira/browse/HDFS-4475
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 3.0.0, 2.0.3-alpha
>            Reporter: Plamen Jeliazkov
>            Assignee: Plamen Jeliazkov
>             Fix For: 3.0.0, 2.0.3-alpha
>
>
> In DataNode, there are catches around the BPServiceActor.offerService() call, but no catch for OutOfMemoryError as there is for DataXceiver (introduced in 0.22.0).
> The issue can be reproduced like this:
> 1) Create a cluster of X DataNodes and 1 NameNode with low memory settings (-Xmx128M or similar).
> 2) Flood HDFS with small file creations (any kind of file creation should work).
> 3) The DataNodes will hit an OutOfMemoryError, stop the block pool service, and shut down.
> The resolution is to catch the OutOfMemoryError and handle it properly when calling BPServiceActor.offerService() in DataNode.java, as is done in Hadoop 0.22.0. DataNodes should not shut down or crash, but should remain in a sort of frozen state until the memory pressure is relieved by GC.
> LOG ERROR:
> 2013-02-04 11:46:01,854 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Unexpected exception in block pool Block pool BP-1105714849-10.10.10.110-1360005776467 (storage id DS-1952316202-10.10.10.112-50010-1360005820993) service to vmhost2-vm0/10.10.10.110:8020
> java.lang.OutOfMemoryError: GC overhead limit exceeded
> 2013-02-04 11:46:01,854 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool BP-1105714849-10.10.10.110-1360005776467 (storage id DS-1952316202-10.10.10.112-50010-1360005820993) service to vmhost2-vm0/10.10.10.110:8020

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
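
As a rough illustration of the fix the reporter describes, here is a minimal
sketch in Java. It assumes a service loop shaped like BPServiceActor.run();
the class name, helper methods, sleep intervals, and logging are hypothetical
and are not the actual Hadoop code or patch.

    // Hypothetical sketch: keep the block pool service loop alive across an
    // OutOfMemoryError instead of letting it end the actor and shut down the
    // DataNode. All names here are illustrative, not Hadoop's real code.
    public class OfferServiceLoopSketch implements Runnable {

        private volatile boolean shouldRun = true;

        @Override
        public void run() {
            while (shouldRun) {
                try {
                    offerService(); // heartbeats, block reports, NN commands
                } catch (OutOfMemoryError oom) {
                    // Do not shut down: log, back off, and retry once GC has
                    // had a chance to reclaim memory.
                    System.err.println("OOM in offerService, backing off: " + oom);
                    sleepQuietly(30 * 1000L);
                } catch (Exception e) {
                    // Other unexpected exceptions should also not end the
                    // block pool service; log and continue.
                    System.err.println("Unexpected exception in offerService: " + e);
                    sleepQuietly(5 * 1000L);
                }
            }
        }

        private void sleepQuietly(long millis) {
            try {
                Thread.sleep(millis);
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
                shouldRun = false; // treat interruption as a shutdown signal
            }
        }

        private void offerService() throws Exception {
            // Placeholder for the real BPServiceActor.offerService() work.
        }
    }

Catching an Error is normally discouraged, but for a long-lived daemon thread
the alternative is exactly the shutdown shown in the log above; note that
OutOfMemoryError must be caught separately, since it is not an Exception.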