hadoop-common-user mailing list archives

From Aaron Kimball <aa...@cloudera.com>
Subject Re: Hadoop logging
Date Wed, 22 Jul 2009 02:01:53 GMT
Hm. What version of Hadoop are you running? Have you modified the
log4j.properties file in other ways? By default, Hadoop's logs should
roll to a new file every day, with the previous day's date appended to
the closed file (e.g.,
"hadoop-hadoop-datanode-jargon.log.2009-07-13"). You should be able to
use logrotate + cron to move those files somewhere with more space
and/or delete them once they're too old.
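
For example, a crontab entry along these lines (the log directory and
the 30-day retention window are just placeholders; adjust for your
install) would clean out the dated files once they age out:

    # delete rotated Hadoop logs older than 30 days, every night at 3am
    0 3 * * * find /var/log/hadoop -name 'hadoop-*.log.*' -mtime +30 -delete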

After you changed log4j.properties, did you restart the daemons?
Changing the properties file has no effect on running processes.
Similarly, if you're running on a proper cluster (as opposed to
standalone / pseudo-distributed modes), you'll need to set
"hadoop.root.logger=ERROR,console" in the log4j.properties file on
each separate machine; the client machine alone is not sufficient.
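
As a concrete sketch (property name per the stock 0.20-era
conf/log4j.properties; adjust if your version differs), the line on
each node would be:

    # root logger for the Hadoop daemons; restart them after editing
    hadoop.root.logger=ERROR,console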

- Aaron


On Tue, Jul 21, 2009 at 1:59 PM, Arv Mistry <arv@kindsight.net> wrote:
> Hi,
>
> Recently we ran out of disk space on the hadoop machine, and on
> investigation we found it was the hadoop log4j logs.
>
> In the log4j.properties file I have set hadoop.root.logger=ERROR,
> yet I still see the daily hadoop-hadoopadmin-*.log files with INFO
> level logging in them. These never seem to get trimmed or rolled
> over.
>
> Does anyone know how to limit ALL hadoop logs?
>
> Do I have to set each daemon's logging individually? i.e.
>
> log4j.logger.org.apache.hadoop.mapred.JobTracker=ERROR etc
>
> Cheers Arv
>
>
>
