Date: Wed, 6 Aug 2014 06:43:12 +0000 (UTC)
From: "Eric Yang (JIRA)"
To: common-issues@hadoop.apache.org
Subject: [jira] [Commented] (HADOOP-10759) Remove hardcoded JAVA_HEAP_MAX in hadoop-config.sh

    [ https://issues.apache.org/jira/browse/HADOOP-10759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14087330#comment-14087330 ]

Eric Yang commented on HADOOP-10759:
------------------------------------

The information in ZOOKEEPER-1670 is not entirely accurate. Java does a good job of calculating the initial heap size: by default it uses 1/4 of the machine's memory, up to 1GB. See: http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html#par_gc.ergonomics.default_size Therefore, without this value being specified, the JVM may use up to 1GB of heap on a machine with more than 4GB of physical memory.
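The ergonomics default described above, and a guarded fallback along the lines this issue proposes, can be sketched as follows. This is a minimal illustration, not the actual HADOOP-10759 patch; HADOOP_HEAPSIZE is the existing knob in hadoop-config.sh, but the exact fallback behavior shown here is an assumption about the intended fix.

```shell
#!/bin/sh
# Inspect the JVM's ergonomic default max heap (reported in bytes).
# With no -Xmx, the ergonomics described above pick 1/4 of physical
# memory, up to 1GB (per the linked GC tuning guide for Java 6).
command -v java >/dev/null 2>&1 && \
  java -XX:+PrintFlagsFinal -version 2>/dev/null | grep -i MaxHeapSize

# Sketch of the proposed behavior (assumption, not the committed patch):
# honor HADOOP_HEAPSIZE if the user set it; otherwise set no -Xmx at
# all and let JVM ergonomics size the heap for the machine.
if [ -n "$HADOOP_HEAPSIZE" ]; then
  JAVA_HEAP_MAX="-Xmx${HADOOP_HEAPSIZE}m"
else
  JAVA_HEAP_MAX=""
fi
echo "JAVA_HEAP_MAX=${JAVA_HEAP_MAX}"
```

With this guard, a user who sets HADOOP_HEAPSIZE=512 still gets -Xmx512m, while an unset variable leaves the heap to JVM ergonomics instead of the hardcoded -Xmx1000m.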
However, for a smaller machine such as a virtual machine, it would be nicer if the heap could scale dynamically. Another benefit of removing this hardcoded value is that the Hadoop command line is no longer capped at 1GB for trivial operations such as GetConf or DFS client calls, which reduces memory starvation when many CLI operations run in parallel with MapReduce tasks. We have noticed that when the machine has already handed out most of its memory to MapReduce tasks and a number of CLI commands run in parallel, the excessive allocation can cause the JVMs to run garbage collection aggressively and increases the chance of deadlock on highly fragmented memory pages. This is a fairly serious bug, so I think it is worthwhile to include the fix in the 2.x releases.

> Remove hardcoded JAVA_HEAP_MAX in hadoop-config.sh
> --------------------------------------------------
>
>                 Key: HADOOP-10759
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10759
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: bin
>    Affects Versions: 2.4.0
>        Environment: Linux64
>            Reporter: sam liu
>            Priority: Minor
>             Fix For: 2.6.0
>
>         Attachments: HADOOP-10759.patch, HADOOP-10759.patch
>
>
> In hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh, there is a hardcoded Java parameter: 'JAVA_HEAP_MAX=-Xmx1000m'. It should be removed.

--
This message was sent by Atlassian JIRA
(v6.2#6252)