hadoop-common-issues mailing list archives

From "Eric Yang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-10759) Remove hardcoded JAVA_HEAP_MAX in hadoop-config.sh
Date Thu, 07 Aug 2014 06:51:12 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-10759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088919#comment-14088919
] 

Eric Yang commented on HADOOP-10759:
------------------------------------

Allen, you are correct in picking up on my latest statement.  I meant to say that the heap
size can be overridden with HADOOP_HEAPSIZE, either in hadoop-env.sh or from the environment
when hadoop-env.sh does not explicitly define it.  However, the hardcoded value is
unnecessary: on machines with more than 4GB of RAM, the JDK already caps the default heap at
1GB.  For smaller machines, it is better that we drop the hardcoded cap and let the JVM
decide the right value.
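As a rough sketch of the override chain being discussed (the helper below is hypothetical, a simplification of the logic in hadoop-config.sh, not the actual script): if HADOOP_HEAPSIZE is set, it is turned into an explicit -Xmx flag; otherwise no flag is passed and the JVM's own ergonomics choose the maximum heap.

```shell
# Hypothetical helper, simplified from hadoop-config.sh's behavior:
# map HADOOP_HEAPSIZE (MB) to a -Xmx flag, or emit nothing so the
# JVM's ergonomics pick the max heap on their own.
java_heap_max() {
  if [ -n "$1" ]; then
    printf -- "-Xmx%sm" "$1"
  fi
}

java_heap_max 2048    # prints -Xmx2048m
java_heap_max ""      # prints nothing: the JVM decides
```

With the hardcoded `JAVA_HEAP_MAX=-Xmx1000m` removed, the empty case is what users on small machines would get by default.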

> Remove hardcoded JAVA_HEAP_MAX in hadoop-config.sh
> --------------------------------------------------
>
>                 Key: HADOOP-10759
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10759
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: bin
>    Affects Versions: 2.4.0
>         Environment: Linux64
>            Reporter: sam liu
>            Priority: Minor
>             Fix For: 2.6.0
>
>         Attachments: HADOOP-10759.patch, HADOOP-10759.patch
>
>
> In hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh, there is a hardcoded
> Java parameter: 'JAVA_HEAP_MAX=-Xmx1000m'. It should be removed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
