Subject: log4j logging for mapreduce
From: Aji Janis
To: user@hadoop.apache.org
Date: Tue, 25 Jun 2013 08:47:37 -0400

I know Hadoop uses log4j for logging, and I was able to look at conf/log4j.properties to figure out where the existing log files live. However, is there a way to direct the logs from an (HBase) MapReduce job to a completely new file? The idea is that I have a job scheduled to run nightly, and I'd like it to log to /var/log/myjob.log for just this job, so I can check that one file for errors/exceptions instead of having to go through the JobTracker UI. Is this possible? If so, how?

Also, since the job will be submitted to the cluster, please advise whether the log file needs to be on HDFS or on the regular (Linux) filesystem. If on Linux, should it exist on all nodes or just the Hadoop master?

Hadoop version in use: 0.20.2-cdh3u5, 30233064aaf5f2492bc687d61d72956876102109
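For context, what I have in mind is roughly the fragment below, added to log4j.properties. This is only a sketch of what I'm trying to achieve, not something I know to be correct for a distributed job; the logger name com.example.MyJob and the appender name MYJOB are placeholders for my job's package:

```properties
# Route only my job's logger to a dedicated file (sketch; names are placeholders).
# additivity=false keeps these messages out of the default Hadoop task logs.
log4j.logger.com.example.MyJob=INFO, MYJOB
log4j.additivity.com.example.MyJob=false

# Plain file appender writing to a local path on whichever node runs the task.
log4j.appender.MYJOB=org.apache.log4j.FileAppender
log4j.appender.MYJOB.File=/var/log/myjob.log
log4j.appender.MYJOB.layout=org.apache.log4j.PatternLayout
log4j.appender.MYJOB.layout.ConversionPattern=%d{ISO8601} %-5p %c - %m%n
```

My worry with this approach is exactly the question above: since map/reduce tasks run on arbitrary nodes, each task would write to its own local /var/log/myjob.log rather than one combined file, which is why I'm asking whether the file should live on HDFS instead.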
Thank you for any suggestions.