hadoop-user mailing list archives

From Mohammad Tariq <donta...@gmail.com>
Subject Re: Questions about Hadoop logs and mapred.local.dir
Date Sat, 17 May 2014 01:12:00 GMT
Hi Sam,

1. I am sorry, I didn't quite get "how many methods could clean it correctly?"

Since this directory contains only temporary files, it should get cleaned
up after your jobs finish. If unnecessary data is still present there, you
can delete it. Make sure no jobs are running while you clean this
directory.
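As a rough sketch, the steps above look like the following. The path used here is a made-up stand-in; on a real cluster, read the actual value of mapred.local.dir from mapred-site.xml and make sure no jobs are active before deleting anything:

```shell
# Hypothetical local dir for illustration only; substitute the real
# mapred.local.dir value from your mapred-site.xml.
LOCAL_DIR=/tmp/mapred-local-demo
mkdir -p "$LOCAL_DIR/taskTracker"
touch "$LOCAL_DIR/taskTracker/leftover_spill.out"

# Report how much space the directory uses before cleanup.
du -sh "$LOCAL_DIR"

# Delete only the *contents*, keeping the directory itself so the
# TaskTracker can keep using it. Run this only when no jobs are running.
rm -rf "$LOCAL_DIR"/*

ls "$LOCAL_DIR" | wc -l   # prints 0: directory is now empty
```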

2. All the daemons use log4j with DailyRollingFileAppender, which has no
retention settings. You can change this behavior by configuring an
appender of your choice in the *log4j.properties* file under the
*HADOOP_HOME/conf* directory. For example, RollingFileAppender supports
size-based retention through its MaxFileSize and MaxBackupIndex
properties.
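A sketch of such a stanza is below. The appender name (RFA) and the size/count limits are illustrative choices, not required values:

```properties
# Sketch for HADOOP_HOME/conf/log4j.properties: size-capped daemon logs.
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}
log4j.appender.RFA.MaxFileSize=64MB
log4j.appender.RFA.MaxBackupIndex=10
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
```

You would then point the daemons' root logger at this appender (for example via HADOOP_ROOT_LOGGER=INFO,RFA in hadoop-env.sh) so that each log file is capped at 64 MB with at most 10 rotated files kept.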

3. You must never touch the contents of these two directories. They hold
the actual HDFS *data and metadata*, which you don't want to lose.

You can find more details on log file configuration in the log4j
documentation.


*Warm regards,*
*Mohammad Tariq*
*cloudfront.blogspot.com*

On Wed, May 7, 2014 at 9:10 AM, sam liu <samliuhadoop@gmail.com> wrote:

> Hi Experts,
> 1. The size of mapred.local.dir is big(30 GB), how many methods could
> clean it correctly?
> 2. For logs of NameNode/DataNode/JobTracker/TaskTracker, are they all
> rolling type log? What's their max size? I can not find the specific
> settings for them in log4j.properties.
> 3. I find the size of dfs.name.dir and dfs.data.dir is very big now, are
> there any files under them could be removed actually? Or all files under
> the two folders could not be removed at all?
> Thanks!
