ambari-dev mailing list archives

From "Andrew Onischuk (JIRA)" <>
Subject [jira] [Created] (AMBARI-7119) log4j does not get used by hadoop as settings are present in
Date Tue, 02 Sep 2014 18:43:20 GMT
Andrew Onischuk created AMBARI-7119:

             Summary: log4j does not get used by hadoop as settings are present in
                 Key: AMBARI-7119
             Project: Ambari
          Issue Type: Bug
            Reporter: Andrew Onischuk
            Assignee: Andrew Onischuk
             Fix For: 1.7.0

PROBLEM: log4j settings made via Ambari update the log4j file but do not take
any effect when restarting HDFS. It seems there are hardcoded settings in
/usr/lib/hadoop/libexec/ such as this at line 221:

HADOOP_OPTS="$HADOOP_OPTS -Dhadoop.root.logger=$
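The mechanism can be illustrated with a short shell sketch (the values below are illustrative, not the actual contents of the libexec script): if the launcher unconditionally appends its own -Dhadoop.root.logger flag after the one derived from the Ambari-managed configuration, the JVM sees the flag twice, and the last occurrence wins.

```shell
# Illustrative sketch, not the actual libexec script: a launcher that
# unconditionally appends its own -Dhadoop.root.logger flag.
HADOOP_OPTS="-Dhadoop.root.logger=INFO,DRFA"               # value chosen via Ambari
HADOOP_OPTS="$HADOOP_OPTS -Dhadoop.root.logger=INFO,RFA"   # hardcoded append
# The java launcher keeps the last occurrence of a repeated -D flag,
# so the hardcoded INFO,RFA wins even though Ambari requested INFO,DRFA.
echo "$HADOOP_OPTS"
```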



BUSINESS IMPACT: Customers have to modify core files or set environment
variables explicitly via a profile script.

STEPS TO REPRODUCE: Log in to Ambari and change the log4j properties so that
hadoop.root.logger=INFO,DRFA. The log4j file is updated in /etc/hadoop/conf.

Restart the HDFS service, then run ps -ef | grep <PID> for the namenode
process. The process shows duplicate entries for several properties and does
not reflect the logging change. Here is the duplication and incorrect root
logger setting seen locally in testing:

hdfs 4304 1 14 07:26 ? 00:00:10 /usr/jdk64/jdk1.7.0_45/bin/java
-Dproc_namenode -Xmx1024m
-Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop.log
amd64-64:/usr/lib/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Dhadoop.log.dir=/var/log/hadoop/hdfs
-Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/lib/hadoop/lib/native
-Dhadoop.policy.file=hadoop-policy.xml -server
-XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC
-XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=100m
-XX:MaxNewSize=50m -Xloggc:/var/log/hadoop/hdfs/gc.log-201407140726
-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps
-Xms1024m -Xmx1024m,DRFAS
-Dhdfs.audit.logger=INFO,DRFAAUDIT -server -XX:ParallelGCThreads=8
-XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log
-XX:NewSize=100m -XX:MaxNewSize=50m
-Xloggc:/var/log/hadoop/hdfs/gc.log-201407140726 -verbose:gc
-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms1024m
-Dhdfs.audit.logger=INFO,DRFAAUDIT -server -XX:ParallelGCThreads=8
-XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log
-XX:NewSize=100m -XX:MaxNewSize=50m
-Xloggc:/var/log/hadoop/hdfs/gc.log-201407140726 -verbose:gc
-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms1024m
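One way to make the duplication visible is to split the command line into words and list the -D property names that appear more than once. This is a hypothetical helper, not part of Ambari or Hadoop; on a live host the cmdline variable would come from ps -o args= on the NameNode PID, here a shortened sample string stands in for it:

```shell
# Hypothetical helper: find -D system properties that appear more than
# once on a JVM command line. The sample string stands in for the real
# output of something like: ps -o args= -p <namenode PID>
cmdline='java -Dproc_namenode -Dhadoop.root.logger=INFO,RFA -Dhadoop.policy.file=hadoop-policy.xml -Dhadoop.root.logger=INFO,RFA'
# Split on whitespace (unquoted expansion is deliberate), keep -D flags,
# strip the values, and report names that occur more than once.
printf '%s\n' $cmdline | grep '^-D' | cut -d= -f1 | sort | uniq -d
# prints: -Dhadoop.root.logger
```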

ACTUAL BEHAVIOR: log4j changes made in Ambari do not persist in the running
process. It seems there are values set in /usr/lib/hadoop/libexec/ that
override them no matter what. There is also duplication of settings,
presumably from the same scripts.

EXPECTED BEHAVIOR: Settings made in Ambari should be persisted and used by the
restarted process.
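A common shell idiom for this expected behavior (a sketch, not the actual fix committed for this issue) is for the launcher to supply a default root logger only when none has been chosen yet, rather than appending its own -D flag unconditionally:

```shell
# Sketch of the expected behavior: fall back to a default root logger
# only when one has not already been selected (e.g. by the
# Ambari-managed environment), instead of hardcoding a second -D flag.
HADOOP_ROOT_LOGGER="${HADOOP_ROOT_LOGGER:-INFO,console}"
HADOOP_OPTS="$HADOOP_OPTS -Dhadoop.root.logger=$HADOOP_ROOT_LOGGER"
echo "$HADOOP_OPTS"
```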

SUPPORT ANALYSIS: Support made changes to log4j in Ambari on a test cluster,
and the changes were not picked up.

This message was sent by Atlassian JIRA
