From: Eremikhin Alexey <a.eremihin@corp.badoo.com>
Date: Thu, 17 Apr 2014 14:28:35 +0400
To: user@hadoop.apache.org
Subject: log4j.appender.DRFA.MaxBackupIndex - isn't it nonsense!?

Hi everyone!
I've started looking into log retention on Hadoop and noticed an interesting option in the default Hadoop log4j.properties configuration:
# 30-day backup
# log4j.appender.DRFA.MaxBackupIndex=30

I enabled it, but it had no effect on existing files, even after a rotation happened. That led me to start reading the log4j source code.
There is a base class, FileAppender, with two subclasses: RollingFileAppender (RFA) and DailyRollingFileAppender (DRFA).

RFA keeps numbered backup files: on each rotation it shifts every backup's index up by one, from 1 toward MaxBackupIndex, and deletes the file that falls off the end.
http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/RollingFileAppender.html
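For comparison, here is a minimal sketch of an RFA configuration where MaxBackupIndex actually takes effect (the appender name, file path, and size limit are just examples, not taken from any particular distribution):

```properties
# Size-based rotation: RollingFileAppender honors MaxBackupIndex.
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}
log4j.appender.RFA.MaxFileSize=256MB
# Keep at most 30 rotated files; the 31st oldest is deleted on rollover.
log4j.appender.RFA.MaxBackupIndex=30
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
```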

DRFA uses the date to distinguish log files, but has no retention mechanism at all.
http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/DailyRollingFileAppender.html
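To convince myself, I sketched the DRFA rollover in Python (a toy model of the behavior, not log4j's actual code): it just renames the current log to a dated file and ignores any backup limit you pass it.

```python
import datetime

def drfa_rollover(backups, base, day, max_backup_index=30):
    """Toy model of DailyRollingFileAppender's daily rollover: the current
    log becomes base.<date>. The max_backup_index argument is accepted but
    never used -- mirroring how the real DRFA has no retention logic."""
    backups.append(f"{base}.{day.isoformat()}")
    return backups

backups = []
start = datetime.date(2014, 1, 1)
for i in range(90):  # 90 days of rotation with the "limit" set to 30
    drfa_rollover(backups, "hadoop.log", start + datetime.timedelta(days=i))

print(len(backups))  # all 90 dated files survive; nothing enforces 30
```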

That means MaxBackupIndex cannot have any effect on DRFA at all. In other words, the property name log4j.appender.DRFA.MaxBackupIndex is meaningless and does nothing.
You can easily google this parameter and find plenty of complaints that it does not work.

Yes, there are some projects that add retention to DRFA, but those appender classes have different names:
http://wiki.apache.org/logging-log4j/DailyRollingFileAppender - DailyMaxRollingFileAppender
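Until something like that ships, the only workaround I see is external cleanup, e.g. a cron job along these lines (the log directory and file pattern are assumptions; adjust them to your layout):

```shell
#!/bin/sh
# Delete DRFA's dated backup files once they are older than the given
# number of days; DRFA itself will never remove them.
purge_old_logs() {
    dir=$1; days=$2
    find "$dir" -name 'hadoop-*.log.*' -type f -mtime "+$days" -delete
}

# Example usage (hypothetical path): purge_old_logs /var/log/hadoop 30
```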


You can find this parameter even in the latest stable distribution:
http://www.eu.apache.org/dist/hadoop/common/stable/hadoop-2.2.0.tar.gz
in the file
hadoop-2.2.0/etc/hadoop/log4j.properties


What am I doing wrong? 8-\