hadoop-common-issues mailing list archives

From "Kihwal Lee (Commented) (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-8149) cap space usage of default log4j rolling policy
Date Fri, 16 Mar 2012 15:37:38 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-8149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13231298#comment-13231298 ]

Kihwal Lee commented on HADOOP-8149:

Please make hadoop-env.sh honor a user-specified log appender.

For example, when the name node is started through the scripts, the last logger setting on
the command line comes from $HADOOP_NAMENODE_OPTS, which is currently hardcoded in
hadoop-env.sh. This overrides whatever the user specified, since the last setting wins.
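The "last one wins" behavior can be sketched in shell. This is only an illustration of the ordering problem, not the actual hadoop-env.sh contents: the variable values and the `last_value` helper below are hypothetical, emulating how the JVM resolves a `-D` property that appears twice on the command line.

```shell
# Sketch only: emulates how the JVM resolves a -D property given twice on the
# command line (the last occurrence wins). HADOOP_NAMENODE_OPTS and the logger
# values are illustrative, not the real hadoop-env.sh contents.
HADOOP_NAMENODE_OPTS="-Dhadoop.root.logger=INFO,DRFA"   # what the user exported

# Current behavior: the hardcoded setting is appended last, so it wins.
WRONG="${HADOOP_NAMENODE_OPTS} -Dhadoop.root.logger=INFO,RFA"
# Honoring the user: put the hardcoded default first and the user's opts last.
RIGHT="-Dhadoop.root.logger=INFO,RFA ${HADOOP_NAMENODE_OPTS}"

# last_value: hypothetical helper reporting which hadoop.root.logger value the
# JVM would actually see (the last -Dhadoop.root.logger=... on the line).
last_value() {
    printf '%s\n' $1 | awk -F= '/^-Dhadoop\.root\.logger=/ {v=$2} END {print v}'
}

echo "hardcoded last -> $(last_value "$WRONG")"
echo "user opts last -> $(last_value "$RIGHT")"
```

With the hardcoded setting last, the user's DRFA choice is silently dropped; reversing the order lets the user's setting win.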

As a side note, we need to fix the shell scripts so that they don't source the same files
again and again, duplicating or triplicating the same command-line options. Users can easily
be confused about when something is set, eval'ed, or overridden. But that is beyond the scope
of this jira.

As for the cap, we need to keep in mind that hadoop components are very noisy. I have seen
hadoop service processes generate multiple gigabytes of log messages per day. So 1 GB feels
too small, but, for the same reason, the default needs to be small enough to keep single-node
desktop users from running into trouble. I will throw out a number to get the discussion
going: how about 5 GB per process? 5 GB * 5-8 processes <= 40 GB per node.
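A rolling policy along those lines could be sketched in log4j.properties. This is a sketch under assumptions, not the eventual patch: the appender name RFA and the file path are placeholders, and the 256 MB / 20-file split is just one way to land near a 5 GB per-process cap.

```properties
# Sketch only: cap each log file at 256 MB and keep at most 20 rotated files,
# bounding a single process at roughly 256 MB * 21 files = ~5 GB of log data.
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}
log4j.appender.RFA.MaxFileSize=256MB
log4j.appender.RFA.MaxBackupIndex=20
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
```

MaxFileSize and MaxBackupIndex are the two knobs RollingFileAppender exposes, so both would need to be configurable to satisfy points 1 and 2 of the issue description.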

> cap space usage of default log4j rolling policy 
> ------------------------------------------------
>                 Key: HADOOP-8149
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8149
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: conf
>            Reporter: Patrick Hunt
>            Assignee: Patrick Hunt
>         Attachments: HADOOP-8149.patch, HADOOP-8149.patch, HADOOP-8149.patch
> I've seen several critical production issues because logs are not automatically removed
and instead accumulate over time. Changes to Hadoop's default log4j file appender would help
with this.
> I recommend we move to an appender which:
> 1) caps the max file size (configurable)
> 2) caps the max number of files to keep (configurable)
> 3) uses rolling file appender rather than DRFA, see the warning here:
> http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/DailyRollingFileAppender.html
> Specifically: "DailyRollingFileAppender has been observed to exhibit synchronization
issues and data loss."
> We'd lose the daily rolling aspect of the default log4j configuration, but would gain
reliability.

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

