ambari-user mailing list archives

From Stephen Boesch <java...@gmail.com>
Subject Re: Unable to set the namenode options using blueprints
Date Tue, 13 Oct 2015 10:29:09 GMT
Ok, it was not clear from your original email that the property you
referred to as not existing only applied to the ambari-generated version of
 "hadoop-env.sh" .  I do not have direct access to the system to check the
file you mention at this moment - will do so in a few hours.

2015-10-13 3:06 GMT-07:00 Dmitry Sen <dsen@hortonworks.com>:

> That's not an Ambari doc, but you are using Ambari to deploy the cluster.
>
>
> /etc/hadoop/conf/hadoop-env.sh is generated from a template: the "content"
> property in the hadoop-env Ambari config, plus the other properties listed
> in
> /var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/configuration/hadoop-env.xml
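>
> For illustration (a rough sketch, not the exact stack template), the
> "content" template expands Jinja-style placeholders into the final file,
> so it contains, roughly, lines such as:
>
> # {{namenode_heapsize}} and {{namenode_opt_maxnewsize}} are filled in from hadoop-env properties
> export HADOOP_NAMENODE_OPTS="-Xms{{namenode_heapsize}} -Xmx{{namenode_heapsize}} -XX:MaxNewSize={{namenode_opt_maxnewsize}} ${HADOOP_NAMENODE_OPTS}"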
>
>
>
>
> ------------------------------
> *From:* Stephen Boesch <javadba@gmail.com>
> *Sent:* Tuesday, October 13, 2015 12:39 PM
> *To:* user@ambari.apache.org
> *Subject:* Re: Unable to set the namenode options using blueprints
>
> Hi Dmitry,
>     That does not appear to be correct.
>
> From Hortonworks' own documentation for the latest 2.3.0:
>
>
> http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_installing_manually_book/content/ref-80953924-1cbf-4655-9953-1e744290a6c3.1.html
>
> If the cluster uses a Secondary NameNode, you should also set
> HADOOP_SECONDARYNAMENODE_OPTS to HADOOP_NAMENODE_OPTS in the hadoop-env.sh
> file:
>
> HADOOP_SECONDARYNAMENODE_OPTS=$HADOOP_NAMENODE_OPTS
>
> Another useful HADOOP_NAMENODE_OPTS setting is
> -XX:+HeapDumpOnOutOfMemoryError. This option specifies that a heap dump
> should be executed when an out of memory error occurs. You should also use
> -XX:HeapDumpPath to specify the location for the heap dump file. For
> example:
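>
> A representative combined setting (the dump path here is just an
> illustrative assumption) could look like:
>
> export HADOOP_NAMENODE_OPTS="-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/hadoop/hdfs/nn-heapdump.hprof ${HADOOP_NAMENODE_OPTS}"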
>
> 2015-10-13 2:29 GMT-07:00 Dmitry Sen <dsen@hortonworks.com>:
>
>> hadoop-env has no HADOOP_NAMENODE_OPTS property; you should use
>> namenode_opt_maxnewsize for specifying -XX:MaxHeapSize
>>
>>       "hadoop-env" : {
>>         "properties" : {
>>            "namenode_opt_maxnewsize" :  "16384m"
>>         }
>>       }
>>
>>
>> You may also want to check all available options
>> in /var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/configuration/hadoop-env.xml
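>>
>> For example, one quick way to list the property names defined there
>> (assuming shell access to a host that has the ambari-agent cache):
>>
>> grep '<name>' /var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/configuration/hadoop-env.xml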
>>
>>
>> ------------------------------
>> *From:* Stephen Boesch <javadba@gmail.com>
>> *Sent:* Tuesday, October 13, 2015 9:41 AM
>> *To:* user@ambari.apache.org
>> *Subject:* Unable to set the namenode options using blueprints
>>
>> Given a blueprint that includes the following:
>>
>>       "hadoop-env" : {
>>         "properties" : {
>>            "HADOOP_NAMENODE_OPTS" :  " -XX:InitialHeapSize=16384m
>> -XX:MaxHeapSize=16384m -Xmx16384m -XX:MaxPermSize=512m"
>>         }
>>       }
>>
>> The following occurs when creating the cluster:
>>
>> Error occurred during initialization of VM
>> Too small initial heap
>>
>> The logs say:
>>
>> CommandLine flags: -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log
>> -XX:InitialHeapSize=1024 *-XX:MaxHeapSize=1024* -XX:MaxNewSize=200
>> -XX:MaxTenuringThreshold=6 -XX:NewSize=200 -XX:OldPLABSize=16
>> -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node"
>> -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node"
>> -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node"
>> -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps
>> -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCompressedClassPointers
>> -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
>>
>>
>> Notice that none of the options provided appear anywhere in the actual
>> JVM launch flags.
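>>
>> A quick way to sanity-check what the running process actually received
>> (assuming the standard NameNode process name) is something like:
>>
>> ps -ef | grep '[N]ameNode' | tr ' ' '\n' | grep -Ei 'Xmx|HeapSize'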
>>
>>
>> It is no wonder the cluster is low on resources given the 1GB MaxHeapSize -
>> totally inadequate for a namenode.
>>
>> BTW, this is an HA setup - and both of the namenodes show the same behavior.
>>
>>
>>
>>
>
