hadoop-common-dev mailing list archives

From "Owen O'Malley (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-953) huge log files
Date Tue, 06 Feb 2007 08:24:05 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12470476 ]

Owen O'Malley commented on HADOOP-953:
--------------------------------------

This is already possible. Just change the HADOOP_ROOT_LOGGER variable in bin/hadoop-daemon.sh
to WARN,DRFA or even ERROR,DRFA. (We really should make it a configurable default rather than
hard-coding it in the script.) That will dramatically cut down on the number of messages.
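
As a rough sketch of the change (the exact line in bin/hadoop-daemon.sh may differ between
Hadoop versions, so treat this as illustrative rather than a verbatim diff):

    # bin/hadoop-daemon.sh currently hard-codes the root logger, roughly:
    export HADOOP_ROOT_LOGGER="INFO,DRFA"

    # change it to log only warnings (or errors) to the daily rolling file appender:
    export HADOOP_ROOT_LOGGER="WARN,DRFA"    # or "ERROR,DRFA" for even less output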

> huge log files
> --------------
>
>                 Key: HADOOP-953
>                 URL: https://issues.apache.org/jira/browse/HADOOP-953
>             Project: Hadoop
>          Issue Type: Improvement
>    Affects Versions: 0.10.1
>         Environment: N/A
>            Reporter: Andrew McNabb
>
> On our system, it's not uncommon to get 20 MB of logs with each MapReduce job.  It would
> be very helpful if it were possible to configure Hadoop daemons to write logs only when major
> things happen, but the only conf options I could find are for increasing the amount of output.
> The disk is really a bottleneck for us, and I believe that short jobs would run much more
> quickly with less disk usage.  We also believe that the high disk usage might be triggering
> a kernel bug on some of our machines, causing them to crash.  If the 20 MB of logs went down
> to 20 KB, we would probably still have all of the information we needed.
> Thanks!

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

