Date: Mon, 8 Feb 2016 11:23:11 +0100
Subject: How can I see the LOG.debug output from CopyMapper of DistCp?
From: Emre Sevinc
To: user@hadoop.apache.org

Hello,

I'm using a recent version of Hadoop with YARN, and after running a `distcp` job successfully, I'm trying to see the output of the LOG.debug lines from CopyMapper.java. Even though I've enabled DEBUG logging in log4j.properties (and copied this file to all the nodes in my cluster), I cannot see the output of these lines.

The LOG.debug statements I'm interested in are:

  LOG.debug("DistCpMapper::map(): Received " + sourcePath + ", " + relPath);

  (from: http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java?revision=1619197&view=markup#l196 )

  LOG.debug("Copying " + sourceFileStatus.getPath() + " to " + target);
  LOG.debug("Target file path: " + targetPath);

  (from: http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java?revision=1596931&view=markup#l113 )
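
As far as I can tell, CopyMapper and RetriableFileCopyCommand run inside the YARN map-task containers that `distcp` launches, not inside any long-running daemon, so I'm starting to suspect the task log level has to be raised separately. A sketch of what I'm considering, assuming `distcp` honors the generic `-D` options and that `mapreduce.map.log.level` is the right knob (the source/target paths below are placeholders):

  # raise the log level of the map tasks and the MR application master for one run
  hadoop distcp \
    -D mapreduce.map.log.level=DEBUG \
    -D yarn.app.mapreduce.am.log.level=DEBUG \
    hdfs://source-cluster/path hdfs://target-cluster/path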

The `distcp` job copies about 20 files from one cluster to another and reports success. Then I check the YARN web UI and see that the job is listed under FINISHED jobs. When I click on it (application_1454924704123_0001 in my case), I see only one entry in the list:

  appattempt_1454924704123_0001_000001    Mon Feb 8 10:51:27 +0100 2016    http://hadoop10:8042    Logs

And when I click on "Logs" I see "syslog : Total file length is 165516 bytes." When I examine its contents I *don't* see any DEBUG lines; I also don't see any strings such as "DistCpMapper" or "Target file path" that should have been produced by CopyMapper.java and RetriableFileCopyCommand.java.
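
To rule out the web UI hiding or truncating anything, I'm also planning to pull the container logs from the command line and grep them directly. A sketch, assuming log aggregation is enabled on the cluster:

  # dump every container log of the finished application and search it
  yarn logs -applicationId application_1454924704123_0001 > /tmp/distcp_app.log
  grep -E "DistCpMapper|Target file path" /tmp/distcp_app.log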

I also SSHed into the `hadoop10` node and ran a `grep`, but still couldn't find such DEBUG output; e.g.:

  grep -r "Target file" /var/log/hadoop/

returns no results.
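
My assumption (possibly wrong) is that /var/log/hadoop only receives the daemon logs, while the map-task output goes to the per-container directories managed by the NodeManager, i.e. wherever yarn.nodemanager.log-dirs points. So another check I want to run on hadoop10, with the placeholder below replaced by whatever that property resolves to on my setup:

  # <nm-log-dir> = the value of yarn.nodemanager.log-dirs on hadoop10
  grep -r "Target file path" <nm-log-dir>/application_1454924704123_0001/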

In my log4j.properties, I have lines such as:

  hadoop.root.logger=DEBUG,console,RFA
  log4j.logger.org.apache.hadoop.tools.mapred=DEBUG
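
To be sure the file I edited is the one actually deployed, I also plan to check the copy under my HADOOP_CONF_DIR on each node (the path comes from the hadoop-env.sh quoted at the end of this message):

  # verify the deployed log4j.properties contains the DEBUG override on every node
  grep -n "tools.mapred" /opt/hadoop-3.0.0-SNAPSHOT/etc/hadoop/log4j.properties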

And in my hadoop-env.sh I have the following line:

  export HADOOP_DAEMON_ROOT_LOGGER=DEBUG,RFA
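
My (unverified) understanding is that hadoop-env.sh and the HADOOP_DAEMON_ROOT_LOGGER / hadoop.root.logger settings only govern the long-running daemon JVMs, while each map task runs in its own container JVM that may be started with different log4j options. One way I thought of to confirm this is to look at the command line of a task JVM while a distcp map is still running (a sketch, assuming I can catch one in time):

  # on a worker node, while a distcp map task is running
  ps -ef | grep [Y]arnChild | tr ' ' '\n' | grep -E 'hadoop.root.logger|log4j.configuration'
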
Is this not enough to see the output of all LOG.debug statements from all of the classes in the `org.apache.hadoop.tools.mapred` package, such as `CopyMapper` and `RetriableFileCopyCommand`? Or am I looking at the wrong directory?

You can see the contents of my log4j.properties and hadoop-env.sh files at the end of this message; I made sure that they are the same on all of the nodes in the cluster.


log4j.properties
========================================================================
# Licensed to the Apache Software Foundation (ASF) un= der one
# or more contributor license agreements.=C2=A0 See the NOTICE f= ile
# distributed with this work for additional information
# regardi= ng copyright ownership.=C2=A0 The ASF licenses this file
# to you under = the Apache License, Version 2.0 (the
# "License"); you may not= use this file except in compliance
# with the License.=C2=A0 You may ob= tain a copy of the License at
#
#=C2=A0=C2=A0=C2=A0=C2=A0 http://www.apache.org/licenses/= LICENSE-2.0
#
# Unless required by applicable law or agreed to in= writing, software
# distributed under the License is distributed on an = "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND,= either express or implied.
# See the License for the specific language = governing permissions and
# limitations under the License.

# Define some default values that can be overridden by system properties
#hadoop.root.logger=INFO,console
#hadoop.root.logger=INFO,console,RFA
hadoop.root.logger=DEBUG,console,RFA
hadoop.log.dir=.
hadoop.log.file=hadoop.log

# Define the root logger to the system property "hadoop.root.logger".
log4j.rootLogger=${hadoop.root.logger}, EventCounter

# Logging Threshold
log4j.threshold=ALL

# Null Appender
log4j.appender.NullAppender=org.apache.log4j.varia.NullAppender

#
# Rolling File Appender - cap space usage at 5gb.
#
hadoop.log.maxfilesize=256MB
hadoop.log.maxbackupindex=20
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}

log4j.appender.RFA.MaxFileSize=${hadoop.log.maxfilesize}
log4j.appender.RFA.MaxBackupIndex=${hadoop.log.maxbackupindex}

log4j.appender.RFA.layout=org.apache.log4j.PatternLayout

# Pattern format: Date LogLevel LoggerName LogMessage
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
# Debugging Pattern format
#log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n


#
# Daily R= olling File Appender
#

log4j.appender.DRFA=3Dorg.apache.log4j.Dai= lyRollingFileAppender
log4j.appender.DRFA.File=3D${hadoop.log.dir}/${had= oop.log.file}

# Rollover at midnight
log4j.appender.DRFA.DatePatt= ern=3D.yyyy-MM-dd

log4j.appender.DRFA.layout=3Dorg.apache.log4j.Patt= ernLayout

# Pattern format: Date LogLevel LoggerName LogMessage
l= og4j.appender.DRFA.layout.ConversionPattern=3D%d{ISO8601} %p %c: %m%n
# = Debugging Pattern format
#log4j.appender.DRFA.layout.ConversionPattern= =3D%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n


#
# console
#= Add "console" to rootlogger above if you want to use this
#
log4j.appender.console=3Dorg.apache.log4j.ConsoleAppender
log4j.ap= pender.console.target=3DSystem.err
log4j.appender.console.layout=3Dorg.a= pache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPatter= n=3D%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n

#
# TaskLog Appender
= #

#Default values
hadoop.tasklog.taskid=3Dnull
hadoop.tasklog.= iscleanup=3Dfalse
hadoop.tasklog.noKeepSplits=3D4
hadoop.tasklog.tota= lLogFileSize=3D100
hadoop.tasklog.purgeLogSplits=3Dtrue
hadoop.tasklo= g.logsRetainHours=3D12

log4j.appender.TLA=3Dorg.apache.hadoop.mapred= .TaskLogAppender
log4j.appender.TLA.taskId=3D${hadoop.tasklog.taskid}log4j.appender.TLA.isCleanup=3D${hadoop.tasklog.iscleanup}
log4j.append= er.TLA.totalLogFileSize=3D${hadoop.tasklog.totalLogFileSize}

log4j.a= ppender.TLA.layout=3Dorg.apache.log4j.PatternLayout
log4j.appender.TLA.l= ayout.ConversionPattern=3D%d{ISO8601} %p %c: %m%n

#
# HDFS block = state change log from block manager
#
# Uncomment the following to su= ppress normal block state change
# messages from BlockManager in NameNod= e.
#log4j.logger.BlockStateChange=3DWARN

#
#Security appender<= br>#
hadoop.security.logger=3DINFO,NullAppender
hadoop.security.log.m= axfilesize=3D256MB
hadoop.security.log.maxbackupindex=3D20
log4j.cate= gory.SecurityLogger=3D${hadoop.security.logger}
hadoop.security.log.file= =3DSecurityAuth-${user.name}.audit
log4= j.appender.RFAS=3Dorg.apache.log4j.RollingFileAppender
log4j.appender.RF= AS.File=3D${hadoop.log.dir}/${hadoop.security.log.file}
log4j.appender.R= FAS.layout=3Dorg.apache.log4j.PatternLayout
log4j.appender.RFAS.layout.C= onversionPattern=3D%d{ISO8601} %p %c: %m%n
log4j.appender.RFAS.MaxFileSi= ze=3D${hadoop.security.log.maxfilesize}
log4j.appender.RFAS.MaxBackupInd= ex=3D${hadoop.security.log.maxbackupindex}

#
# Daily Rolling Secu= rity appender
#
log4j.appender.DRFAS=3Dorg.apache.log4j.DailyRollingF= ileAppender
log4j.appender.DRFAS.File=3D${hadoop.log.dir}/${hadoop.secur= ity.log.file}
log4j.appender.DRFAS.layout=3Dorg.apache.log4j.PatternLayo= ut
log4j.appender.DRFAS.layout.ConversionPattern=3D%d{ISO8601} %p %c: %m= %n
log4j.appender.DRFAS.DatePattern=3D.yyyy-MM-dd

#
# hadoop c= onfiguration logging
#

# Uncomment the following line to turn off= configuration deprecation warnings.
# log4j.logger.org.apache.hadoop.co= nf.Configuration.deprecation=3DWARN

#
# hdfs audit logging
#hdfs.audit.logger=3DINFO,NullAppender
hdfs.audit.log.maxfilesize=3D256= MB
hdfs.audit.log.maxbackupindex=3D20
log4j.logger.org.apache.hadoop.= hdfs.server.namenode.FSNamesystem.audit=3D${hdfs.audit.logger}
log4j.add= itivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=3Dfalselog4j.appender.RFAAUDIT=3Dorg.apache.log4j.RollingFileAppender
log4j.a= ppender.RFAAUDIT.File=3D${hadoop.log.dir}/hdfs-audit.log
log4j.appender.= RFAAUDIT.layout=3Dorg.apache.log4j.PatternLayout
log4j.appender.RFAAUDIT= .layout.ConversionPattern=3D%d{ISO8601} %p %c{2}: %m%n
log4j.appender.RF= AAUDIT.MaxFileSize=3D${hdfs.audit.log.maxfilesize}
log4j.appender.RFAAUD= IT.MaxBackupIndex=3D${hdfs.audit.log.maxbackupindex}

#
# NameNode= metrics logging.
# The default is to retain two namenode-metrics.log fi= les up to 64MB each.
#
namenode.metrics.logger=3DINFO,NullAppenderlog4j.logger.NameNodeMetricsLog=3D${namenode.metrics.logger}
log4j.addi= tivity.NameNodeMetricsLog=3Dfalse
log4j.appender.NNMETRICSRFA=3Dorg.apac= he.log4j.RollingFileAppender
log4j.appender.NNMETRICSRFA.File=3D${hadoop= .log.dir}/namenode-metrics.log
log4j.appender.NNMETRICSRFA.layout=3Dorg.= apache.log4j.PatternLayout
log4j.appender.NNMETRICSRFA.layout.Conversion= Pattern=3D%d{ISO8601} %m%n
log4j.appender.NNMETRICSRFA.MaxBackupIndex=3D= 1
log4j.appender.NNMETRICSRFA.MaxFileSize=3D64MB

#
# DataNode = metrics logging.
# The default is to retain two datanode-metrics.log fil= es up to 64MB each.
#
datanode.metrics.logger=3DINFO,NullAppender
= log4j.logger.DataNodeMetricsLog=3D${datanode.metrics.logger}
log4j.addit= ivity.DataNodeMetricsLog=3Dfalse
log4j.appender.DNMETRICSRFA=3Dorg.apach= e.log4j.RollingFileAppender
log4j.appender.DNMETRICSRFA.File=3D${hadoop.= log.dir}/datanode-metrics.log
log4j.appender.DNMETRICSRFA.layout=3Dorg.a= pache.log4j.PatternLayout
log4j.appender.DNMETRICSRFA.layout.ConversionP= attern=3D%d{ISO8601} %m%n
log4j.appender.DNMETRICSRFA.MaxBackupIndex=3D1=
log4j.appender.DNMETRICSRFA.MaxFileSize=3D64MB

#
# mapred aud= it logging
#
mapred.audit.logger=3DINFO,NullAppender
mapred.audit.= log.maxfilesize=3D256MB
mapred.audit.log.maxbackupindex=3D20
log4j.lo= gger.org.apache.hadoop.mapred.AuditLogger=3D${mapred.audit.logger}
log4j= .additivity.org.apache.hadoop.mapred.AuditLogger=3Dfalse
log4j.appender.= MRAUDIT=3Dorg.apache.log4j.RollingFileAppender
log4j.appender.MRAUDIT.Fi= le=3D${hadoop.log.dir}/mapred-audit.log
log4j.appender.MRAUDIT.layout=3D= org.apache.log4j.PatternLayout
log4j.appender.MRAUDIT.layout.ConversionP= attern=3D%d{ISO8601} %p %c{2}: %m%n
log4j.appender.MRAUDIT.MaxFileSize= =3D${mapred.audit.log.maxfilesize}
log4j.appender.MRAUDIT.MaxBackupIndex= =3D${mapred.audit.log.maxbackupindex}

# Custom Logging levels
#log4j.logger.org.apache.hadoop.mapred.JobTracker=DEBUG
#log4j.logger.org.apache.hadoop.mapred.TaskTracker=DEBUG
#log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=DEBUG

# Jets3t library
log4j.logger.org.jets3t.service.impl.rest.httpclient.RestS3Service=ERROR

# AWS SDK & S3A FileSystem
log4j.logger.com.amazonaws=ERROR
log4j.logger.com.amazonaws.http.AmazonHttpClient=ERROR
#log4j.logger.org.apache.hadoop.fs.s3a.S3AFileSystem=WARN

log4j.logger.org.apache.hadoop.fs.s3a.S3AFileSystem=DEBUG
log4j.logger.org.apache.hadoop.tools.mapred=DEBUG
#log4j.logger.org.apache.hadoop=DEBUG
=
#
# Event Counter Appender
# Sends counts of logging messages at = different severity levels to Hadoop Metrics.
#
log4j.appender.EventCo= unter=3Dorg.apache.hadoop.log.metrics.EventCounter

#
# Job Summar= y Appender
#
# Use following logger to send summary to separate file = defined by
# hadoop.mapreduce.jobsummary.log.file :
# hadoop.mapreduc= e.jobsummary.logger=3DINFO,JSA
#
hadoop.mapreduce.jobsummary.logger= =3D${hadoop.root.logger}
hadoop.mapreduce.jobsummary.log.file=3Dhadoop-m= apreduce.jobsummary.log
hadoop.mapreduce.jobsummary.log.maxfilesize=3D25= 6MB
hadoop.mapreduce.jobsummary.log.maxbackupindex=3D20
log4j.appende= r.JSA=3Dorg.apache.log4j.RollingFileAppender
log4j.appender.JSA.File=3D$= {hadoop.log.dir}/${hadoop.mapreduce.jobsummary.log.file}
log4j.appender.= JSA.MaxFileSize=3D${hadoop.mapreduce.jobsummary.log.maxfilesize}
log4j.a= ppender.JSA.MaxBackupIndex=3D${hadoop.mapreduce.jobsummary.log.maxbackupind= ex}
log4j.appender.JSA.layout=3Dorg.apache.log4j.PatternLayout
log4j.= appender.JSA.layout.ConversionPattern=3D%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%= n
log4j.logger.org.apache.hadoop.mapred.JobInProgress$JobSummary=3D${had= oop.mapreduce.jobsummary.logger}
log4j.additivity.org.apache.hadoop.mapr= ed.JobInProgress$JobSummary=3Dfalse

#
# shuffle connection log fr= om shuffleHandler
# Uncomment the following line to enable logging of sh= uffle connections
# log4j.logger.org.apache.hadoop.mapred.ShuffleHandler= .audit=3DDEBUG

#
# Yarn ResourceManager Application Summary Log#
# Set the ResourceManager summary log filename
yarn.server.resour= cemanager.appsummary.log.file=3Drm-appsummary.log
# Set the ResourceMana= ger summary log level and appender
yarn.server.resourcemanager.appsummar= y.logger=3D${hadoop.root.logger}
#yarn.server.resourcemanager.appsummary= .logger=3DINFO,RMSUMMARY

# To enable AppSummaryLogging for the RM,# set yarn.server.resourcemanager.appsummary.logger to
# <LEVEL>= ,RMSUMMARY in hadoop-env.sh

# Appender for ResourceManager Applicati= on Summary Log
# Requires the following properties to be set
#=C2=A0= =C2=A0=C2=A0 - hadoop.log.dir (Hadoop Log directory)
#=C2=A0=C2=A0=C2=A0= - yarn.server.resourcemanager.appsummary.log.file (resource manager app su= mmary log filename)
#=C2=A0=C2=A0=C2=A0 - yarn.server.resourcemanager.ap= psummary.logger (resource manager app summary log level and appender)
log4j.logger.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$A= pplicationSummary=3D${yarn.server.resourcemanager.appsummary.logger}
log= 4j.additivity.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$Ap= plicationSummary=3Dfalse
log4j.appender.RMSUMMARY=3Dorg.apache.log4j.Rol= lingFileAppender
log4j.appender.RMSUMMARY.File=3D${hadoop.log.dir}/${yar= n.server.resourcemanager.appsummary.log.file}
log4j.appender.RMSUMMARY.M= axFileSize=3D256MB
log4j.appender.RMSUMMARY.MaxBackupIndex=3D20
log4j= .appender.RMSUMMARY.layout=3Dorg.apache.log4j.PatternLayout
log4j.append= er.RMSUMMARY.layout.ConversionPattern=3D%d{ISO8601} %p %c{2}: %m%n

#= HS audit log configs
#mapreduce.hs.audit.logger=3DINFO,HSAUDIT
#log4= j.logger.org.apache.hadoop.mapreduce.v2.hs.HSAuditLogger=3D${mapreduce.hs.a= udit.logger}
#log4j.additivity.org.apache.hadoop.mapreduce.v2.hs.HSAudit= Logger=3Dfalse
#log4j.appender.HSAUDIT=3Dorg.apache.log4j.DailyRollingFi= leAppender
#log4j.appender.HSAUDIT.File=3D${hadoop.log.dir}/hs-audit.log=
#log4j.appender.HSAUDIT.layout=3Dorg.apache.log4j.PatternLayout
#log= 4j.appender.HSAUDIT.layout.ConversionPattern=3D%d{ISO8601} %p %c{2}: %m%n#log4j.appender.HSAUDIT.DatePattern=3D.yyyy-MM-dd

# Http Server Re= quest Logs
#log4j.logger.http.requests.namenode=3DINFO,namenoderequestlo= g
#log4j.appender.namenoderequestlog=3Dorg.apache.hadoop.http.HttpReques= tLogAppender
#log4j.appender.namenoderequestlog.Filename=3D${hadoop.log.= dir}/jetty-namenode-yyyy_mm_dd.log
#log4j.appender.namenoderequestlog.Re= tainDays=3D3

#log4j.logger.http.requests.datanode=3DINFO,datanodereq= uestlog
#log4j.appender.datanoderequestlog=3Dorg.apache.hadoop.http.Http= RequestLogAppender
#log4j.appender.datanoderequestlog.Filename=3D${hadoo= p.log.dir}/jetty-datanode-yyyy_mm_dd.log
#log4j.appender.datanoderequest= log.RetainDays=3D3

#log4j.logger.http.requests.resourcemanager=3DINF= O,resourcemanagerrequestlog
#log4j.appender.resourcemanagerrequestlog=3D= org.apache.hadoop.http.HttpRequestLogAppender
#log4j.appender.resourcema= nagerrequestlog.Filename=3D${hadoop.log.dir}/jetty-resourcemanager-yyyy_mm_= dd.log
#log4j.appender.resourcemanagerrequestlog.RetainDays=3D3

#= log4j.logger.http.requests.jobhistory=3DINFO,jobhistoryrequestlog
#log4j= .appender.jobhistoryrequestlog=3Dorg.apache.hadoop.http.HttpRequestLogAppen= der
#log4j.appender.jobhistoryrequestlog.Filename=3D${hadoop.log.dir}/je= tty-jobhistory-yyyy_mm_dd.log
#log4j.appender.jobhistoryrequestlog.Retai= nDays=3D3

#log4j.logger.http.requests.nodemanager=3DINFO,nodemanager= requestlog
#log4j.appender.nodemanagerrequestlog=3Dorg.apache.hadoop.htt= p.HttpRequestLogAppender
#log4j.appender.nodemanagerrequestlog.Filename= =3D${hadoop.log.dir}/jetty-nodemanager-yyyy_mm_dd.log
#log4j.appender.no= demanagerrequestlog.RetainDays=3D3

# Appender for viewing informatio= n for errors and warnings
yarn.ewma.cleanupInterval=3D300
yarn.ewma.m= essageAgeLimitSeconds=3D86400
yarn.ewma.maxUniqueMessages=3D250
log4j= .appender.EWMA=3Dorg.apache.hadoop.yarn.util.Log4jWarningErrorMetricsAppend= er
log4j.appender.EWMA.cleanupInterval=3D${yarn.ewma.cleanupInterval}log4j.appender.EWMA.messageAgeLimitSeconds=3D${yarn.ewma.messageAgeLimitSe= conds}
log4j.appender.EWMA.maxUniqueMessages=3D${yarn.ewma.maxUniqueMess= ages}
========================================================================


hadoop-env.sh
========================================================================
#
# Licensed to the Apache Software Foundation = (ASF) under one
# or more contributor license agreements.=C2=A0 See the = NOTICE file
# distributed with this work for additional information
#= regarding copyright ownership.=C2=A0 The ASF licenses this file
# to yo= u under the Apache License, Version 2.0 (the
# "License"); you= may not use this file except in compliance
# with the License.=C2=A0 Yo= u may obtain a copy of the License at
#
#=C2=A0=C2=A0=C2=A0=C2=A0 http://www.apache.org/l= icenses/LICENSE-2.0
#
# Unless required by applicable law or agre= ed to in writing, software
# distributed under the License is distribute= d on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF A= NY KIND, either express or implied.
# See the License for the specific l= anguage governing permissions and
# limitations under the License.
# Set Hadoop-specific environment variables here.

##
## THIS FI= LE ACTS AS THE MASTER FILE FOR ALL HADOOP PROJECTS.
## SETTINGS HERE WIL= L BE READ BY ALL HADOOP COMMANDS.=C2=A0 THEREFORE,
## ONE CAN USE THIS F= ILE TO SET YARN, HDFS, AND MAPREDUCE
## CONFIGURATION OPTIONS INSTEAD OF= xxx-env.sh.
##
## Precedence rules:
##
## {yarn-env.sh|hdfs-en= v.sh} > hadoop-env.sh > hard-coded defaults
##
## {YARN_xyz|HDF= S_xyz} > HADOOP_xyz > hard-coded defaults
##

# Many of the = options here are built from the perspective that users
# may want to pro= vide OVERWRITING values on the command line.
# For example:
#
#=C2= =A0 JAVA_HOME=3D/usr/java/testing hdfs dfs -ls
#
# Therefore, the vas= t majority (BUT NOT ALL!) of these defaults
# are configured for substit= ution and not append.=C2=A0 If append
# is preferable, modify this file = accordingly.

###
# Generic settings for HADOOP
###

# Te= chnically, the only required environment variable is JAVA_HOME.
# All ot= hers are optional.=C2=A0 However, the defaults are probably not
# prefer= red.=C2=A0 Many sites configure these options outside of Hadoop,
# such = as in /etc/profile.d

# The java implementation to use. By default, this environment
# variable is REQUIRED on ALL platforms except OS X!
export JAVA_HOME=/usr/lib/jvm/java-8-oracle

# Location of Hadoop.  By default, Hadoop will attempt to determine
# this location based upon its execution path.
export HADOOP_PREFIX=/opt/hadoop-3.0.0-SNAPSHOT

# Location of Hadoop's configuration information.  i.e., where this
# file is probably living. Many sites will also set this in the
# same location where JAVA_HOME is defined.  If this is not defined
# Hadoop will attempt to locate it based upon its execution
# path.
export HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop

# The maximum amount of heap to use (Java -Xmx).  If no unit
# is provided, it will be converted to MB.  Daemons will
# prefer any Xmx setting in their respective _OPT variable.
# There is no default; the JVM will autoscale based upon machine
# memory size.
# export HADOOP_HEAPSIZE_MAX=

# The minimum amount of heap to use (Java -Xms).  If no unit
# is provided, it will be converted to MB.  Daemons will
# prefer any Xms setting in their respective _OPT variable.
# There is no default; the JVM will autoscale based upon machine
# memory size.
# export HADOOP_HEAPSIZE_MIN=

# Extra Java runtime options for all Hadoop commands. We don't support
# IPv6 yet/still, so by default the preference is set to IPv4.
# export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true"

# Some parts of the shell code may do special things dependent upon
# the operating system.  We have to set this here. See the next
# section as to why....
export HADOOP_OS_TYPE=${HADOOP_OS_TYPE:-$(uname -s)}


# Under certain conditions, Java on OS X will throw SCDynamicStore errors
# in the system logs.
# See HADOOP-8719 for more information.  If one needs Kerberos
# support on OS X, one will want to change/remove this extra bit.
case ${HADOOP_OS_TYPE} in
  Darwin*)
    export HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.realm= "
    export HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.kdc= "
    export HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.conf= "
  ;;
esac

# Extra Java runtime options for some Hadoop commands
# and clients (i.e., hdfs dfs -blah).  These get appended to HADOOP_OPTS for
# such commands.  In most cases, # this should be left empty and
# let users supply it on the command line.
# export HADOOP_CLIENT_OPTS=""

#
# A note about classpaths.
#
# The classpath is configured such that entries are stripped prior
# to handing to Java based either upon duplication or non-existence.
# Wildcards and/or directories are *NOT* expanded as the
# de-duplication is fairly simple.  So if two directories are in
# the classpath that both contain awesome-methods-1.0.jar,
# awesome-methods-1.0.jar will still be seen by java.  But if
# the classpath specifically has awesome-methods-1.0.jar from the
# same directory listed twice, the last one will be removed.
#

# An additional, custom CLASSPATH.  This is really meant for
# end users, but as an administrator, one might want to push
# something extra in here too, such as the jar to the topology
# method.  Just be sure to append to the existing HADOOP_USER_CLASSPATH
# so end users have a way to add stuff.
# export HADOOP_USER_CLASSPATH="/some/cool/path/on/your/machine"

# Should HADOOP_USER_CLASSPATH be first in the official CLASSPATH?
# export HADOOP_USER_CLASSPATH_FIRST="yes"
# If HADOOP_USE_CLIENT_CLASSLOADER is set, HADOOP_CLASSPATH along with the main
# jar are handled by a separate isolated client classloader. If it is set,
# HADOOP_USER_CLASSPATH_FIRST is ignored. Can be defined by doing
# export HADOOP_USE_CLIENT_CLASSLOADER=true

# HADOOP_CLIENT_CLASSLOADER_SYSTEM_CLASSES overrides the default definition of
# system classes for the client classloader when HADOOP_USE_CLIENT_CLASSLOADER
# is enabled. Names ending in '.' (period) are treated as package names, and
# names starting with a '-' are treated as negative matches. For example,
# export HADOOP_CLIENT_CLASSLOADER_SYSTEM_CLASSES="-org.apache.hadoop.UserClass,java.,javax.,org.apache.hadoop."

# You need the hadoop-aws-3.0.0-SNAPSHOT.jar (or similar) in your CLASSPATH
# otherwise you might get the following error:
# java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.s3a.S3AFileSystem not found
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HADOOP_PREFIX/share/hadoop/tools/lib/*

###
# Options for remote shell connectivity
###

# There are some optional components of hadoop that allow for
# command and control of remote hosts.  For example,
# start-dfs.sh will attempt to bring up all NNs, DNS, etc.

# Options to pass to SSH when one of the "log into a host and
# start/stop daemons" scripts is executed
# export HADOOP_SSH_OPTS="-o BatchMode=yes -o StrictHostKeyChecking=no -o ConnectTimeout=10s"

# The built-in ssh handler will limit itself to 10 simultaneous connections.
# For pdsh users, this sets the fanout size ( -f )
# Change this to increase/decrease as necessary.
# export HADOOP_SSH_PARALLEL=10

# Filename which contains all of the hosts for any remote execution
# helper scripts # such as slaves.sh, start-dfs.sh, etc.
# export HADOOP_SLAVES="${HADOOP_CONF_DIR}/slaves"

###
# Options for all daemons
###
#

#
# Many options may also be specified as Java properties.  It is
# very common, and in many cases, desirable, to hard-set these
# in daemon _OPTS variables.  Where applicable, the appropriate
# Java property is also identified.  Note that many are re-used
# or set differently in certain contexts (e.g., secure vs
# non-secure)
#

# Where (primarily) daemon log files are stored.  # $HADOOP_PREFIX/logs
# by default.
# Java property: hadoop.log.dir
export HADOOP_LOG_DIR=/var/log/hadoop

# A string representing this instance of hadoop. $USER by default.
# This is used in writing log and pid files, so keep that in mind!
# Java property: hadoop.id.str
# export HADOOP_IDENT_STRING=$USER

# How many seconds to pause after stopping a daemon
# export HADOOP_STOP_TIMEOUT=5

# Where pid files are stored.  /tmp by default.
# export HADOOP_PID_DIR=/tmp
# Default log4j setting for interactive commands
# Java property: hadoop.root.logger
# export HADOOP_ROOT_LOGGER=INFO,console

# Default log4j setting for daemons spawned explicitly by
# --daemon option of hadoop, hdfs, mapred and yarn command.
# Java property: hadoop.root.logger
#export HADOOP_DAEMON_ROOT_LOGGER=INFO,RFA
export HADOOP_DAEMON_ROOT_LOGGER=DEBUG,RFA

# Default log level and output location for security-related messages.
# You will almost certainly want to change this on a per-daemon basis via
# the Java property (i.e., -Dhadoop.security.logger=foo). (Note that the
# defaults for the NN and 2NN override this by default.)
# Java property: hadoop.security.logger
# export HADOOP_SECURITY_LOGGER=INFO,NullAppender

# Default log level for file system audit messages.
# Generally, this is specifically set in the namenode-specific
# options line.
# Java property: hdfs.audit.logger
# export HADOOP_AUDIT_LOGGER=INFO,NullAppender

# Default process priority level
# Note that sub-processes will also run at this level!
# export HADOOP_NICENESS=0

# Default name for the service level authorization file
# Java property: hadoop.policy.file
# export HADOOP_POLICYFILE="hadoop-policy.xml"

#
# NOTE: this is not used by default!  <-----
# You can define variables right here and then re-use them later on.
# For example, it is common to use the same garbage collection settings
# for all the daemons.  So one could define:
#
# export HADOOP_GC_SETTINGS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps"
#
# .. and then use it as per the b option under the namenode.

###
# Secure/privileged execution
###

#
# Out of the box, Hadoop uses jsvc from Apache Commons to launch daemons
# on privileged ports.  This functionality can be replaced by providing
# custom functions.  See hadoop-functions.sh for more information.
#

# The jsvc implementation to use. Jsvc is required to run secure datanodes
# that bind to privileged ports to provide authentication of data transfer
# protocol.  Jsvc is not required if SASL is configured for authentication of
# data transfer protocol using non-privileged ports.
# export JSVC_HOME=/usr/bin

#
# This directory contains pids for secure and privileged processes.
#export HADOOP_SECURE_PID_DIR=${HADOOP_PID_DIR}

#
# This directory contains the logs for secure and privileged processes.
# Java property: hadoop.log.dir
# export HADOOP_SECURE_LOG=${HADOOP_LOG_DIR}

#
# When running a secure daemon, the default value of HADOOP_IDENT_STRING
# ends up being a bit bogus.  Therefore, by default, the code will
# replace HADOOP_IDENT_STRING with HADOOP_SECURE_xx_USER.  If one wants
# to keep HADOOP_IDENT_STRING untouched, then uncomment this line.
# export HADOOP_SECURE_IDENT_PRESERVE="true"

###
# NameNode specific parameters
###

# Default log level and output location for file system related change
# messages. For non-namenode daemons, the Java property must be set in
# the appropriate _OPTS if one wants something other than INFO,NullAppender
# Java property: hdfs.audit.logger
# export HDFS_AUDIT_LOGGER=INFO,NullAppender

# Specify the JVM options to be used when starting the NameNode.
# These options will be appended to the options specified as HADOOP_OPTS
# and therefore may override any similar flags set in HADOOP_OPTS
#
# a) Set JMX options
# export HADOOP_NAMENODE_OPTS="-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.port=1026"
#
# b) Set garbage collection logs
# export HADOOP_NAMENODE_OPTS="${HADOOP_GC_SETTINGS} -Xloggc:${HADOOP_LOG_DIR}/gc-rm.log-$(date +'%Y%m%d%H%M')"
#
# c) ... or set them directly
# export HADOOP_NAMENODE_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xloggc:${HADOOP_LOG_DIR}/gc-rm.log-$(date +'%Y%m%d%H%M')"

# this is the default:
# export HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=INFO,RFAS"

###
# SecondaryNameNode specific parameters
###
# Specify the JVM options to be used when starting the SecondaryNameNode.
# These options will be appended to the options specified as HADOOP_OPTS
# and therefore may override any similar flags set in HADOOP_OPTS
#
# This is the default:
# export HADOOP_SECONDARYNAMENODE_OPTS="-Dhadoop.security.logger=INFO,RFAS"

###
# DataNode specific parameters
###
# Specify the JVM options to be used when starting the DataNode.
# These options will be appended to the options specified as HADOOP_OPTS
# and therefore may override any similar flags set in HADOOP_OPTS
#
# This is the default:
# export HADOOP_DATANODE_OPTS="-Dhadoop.security.logger=ERROR,RFAS"

# On secure datanodes, user to run the datanode as after dropping privileges.
# This **MUST** be uncommented to enable secure HDFS if using privileged ports
# to provide authentication of data transfer protocol.  This **MUST NOT** be
# defined if SASL is configured for authentication of data transfer protocol
# using non-privileged ports.
# This will replace the hadoop.id.str Java property in secure mode.
# export HADOOP_SECURE_DN_USER=hdfs

# Supplemental options for secure datanodes
# By default, Hadoop uses jsvc which needs to know to launch a
# server jvm.
# export HADOOP_DN_SECURE_EXTRA_OPTS="-jvm server"

# Where datanode log files are stored in the secure data environment.
# This will replace the hadoop.log.dir Java property in secure mode.
# export HADOOP_SECURE_DN_LOG_DIR=${HADOOP_SECURE_LOG_DIR}

# Where datanode pid files are stored in the secure data environment.
# export HADOOP_SECURE_DN_PID_DIR=${HADOOP_SECURE_PID_DIR}

###
# NFS3 Gateway specific parameters
###
# Specify the JVM options to be used when starting the NFS3 Gateway.
# These options will be appended to the options specified as HADOOP_OPTS
# and therefore may override any similar flags set in HADOOP_OPTS
#
# export HADOOP_NFS3_OPTS=""

# Specify the JVM options to be used when starting the Hadoop portmapper.
# These options will be appended to the options specified as HADOOP_OPTS
# and therefore may override any similar flags set in HADOOP_OPTS
#
# export HADOOP_PORTMAP_OPTS="-Xmx512m"

# Supplemental options for privileged gateways
# By default, Hadoop uses jsvc which needs to know to launch a
# server jvm.
# export HADOOP_NFS3_SECURE_EXTRA_OPTS="-jvm server"

# On privileged gateways, user to run the gateway as after dropping privileges
# This will replace the hadoop.id.str Java property in secure mode.
# export HADOOP_PRIVILEGED_NFS_USER=nfsserver

###
# ZKFailoverController specific parameters
###
# Specify the JVM options to be used when starting the ZKFailoverController.
# These options will be appended to the options specified as HADOOP_OPTS
# and therefore may override any similar flags set in HADOOP_OPTS
#
# export HADOOP_ZKFC_OPTS=""

###
# Quorum JournalNode specific parameters
###
# Specify the JVM options to be used when starting the QuorumJournalNode.
# These options will be appended to the options specified as HADOOP_OPTS
# and therefore may override any similar flags set in HADOOP_OPTS
#
# export HADOOP_JOURNALNODE_OPTS=""

###
# HDFS Balancer specific parameters
###
# Specify the JVM options to be used when starting the HDFS Balancer.
# These options will be appended to the options specified as HADOOP_OPTS
# and therefore may override any similar flags set in HADOOP_OPTS
#
# export HADOOP_BALANCER_OPTS=""

###
# HDFS Mover specific parameters
###
# Specify the JVM options to be used when starting the HDFS Mover.
# These options will be appended to the options specified as HADOOP_OPTS
# and therefore may override any similar flags set in HADOOP_OPTS
#
# export HADOOP_MOVER_OPTS=""

###
# Advanced Users Only!
###

#
# When building Hadoop, one can add the class paths to the commands
# via this special env var:
# export HADOOP_ENABLE_BUILD_PATHS="true"

#
# To prevent accidents, shell commands can be (superficially) locked
# to only allow certain users to execute certain subcommands.
#
# For example, to limit who can execute the namenode command,
# export HADOOP_namenode_USER=hdfs
========================================================================
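P.S. My current understanding is that whatever the map tasks log ends up in the YARN container logs (and, with log aggregation enabled, in the aggregated logs the YARN CLI can fetch), rather than in the daemon log files under /var/log/hadoop. So the two things I still plan to try are roughly the following; the application id and the HDFS paths below are only placeholders, and I'm assuming the standard mapreduce.map.log.level property is also honored by a distcp job:

  # fetch all container logs for the finished application (placeholder id)
  yarn logs -applicationId <application_id>

  # re-run the copy with the map-task log level raised for this one job
  hadoop distcp -Dmapreduce.map.log.level=DEBUG hdfs://<source-nn>/<src-path> hdfs://<dest-nn>/<dst-path>

Please correct me if that is not the right place to look for the mappers' DEBUG output.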


--
Emre Sevinç