ambari-user mailing list archives

From Dmitry Sen <>
Subject Re: Unable to set the namenode options using blueprints
Date Tue, 13 Oct 2015 09:29:32 GMT
hadoop-env has no property HADOOP_NAMENODE_OPTS; you should use namenode_opt_maxnewsize for
specifying -XX:MaxHeapSize:

      "hadoop-env" : {
        "properties" : {
           "namenode_opt_maxnewsize" :  "16384m"

You may also want to check all available options in /var/lib/ambari-agent/cache/common-services/HDFS/
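For reference, a fuller hadoop-env fragment might look like the following. The property names here (namenode_heapsize, namenode_opt_newsize) are my best guess from the HDP stack defaults; verify the exact names for your stack version under the cache path above before relying on them:

      "configurations" : [
        {
          "hadoop-env" : {
            "properties" : {
               "namenode_heapsize" : "16384m",
               "namenode_opt_newsize" : "2048m",
               "namenode_opt_maxnewsize" : "2048m"
            }
          }
        }
      ]

Ambari renders these properties into the -Xms/-Xmx and -XX:NewSize/-XX:MaxNewSize flags of HADOOP_NAMENODE_OPTS itself, which is why setting HADOOP_NAMENODE_OPTS directly in the blueprint has no effect.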

From: Stephen Boesch <>
Sent: Tuesday, October 13, 2015 9:41 AM
Subject: Unable to set the namenode options using blueprints

Given a blueprint that includes the following:

      "hadoop-env" : {
        "properties" : {
           "HADOOP_NAMENODE_OPTS" :  " -XX:InitialHeapSize=16384m -XX:MaxHeapSize=16384m -Xmx16384m

The following occurs when creating the cluster:

Error occurred during initialization of VM
Too small initial heap

The logs say:

CommandLine flags: -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1024
-XX:MaxHeapSize=1024 -XX:MaxNewSize=200 -XX:MaxTenuringThreshold=6 -XX:NewSize=200 -XX:OldPLABSize=16
-XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node"
-XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8
-XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCompressedClassPointers
-XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC

Notice that none of the options provided appear anywhere in the flags the JVM was actually launched with.

It is no wonder the namenode is low on resources given the 1GB MaxHeapSize, which is totally inadequate.

BTW this is an HA setup, and both of the namenodes show the same behavior.
