hadoop-mapreduce-user mailing list archives

From "Fabio C." <anyte...@gmail.com>
Subject Re: Question about log files
Date Mon, 06 Apr 2015 13:19:18 GMT
I noticed that too. I think Hadoop keeps the log file open the whole time, so
once you delete it the process simply can no longer write to it, and it never
tries to recreate the file. I'm not sure whether that's a Log4j issue or a
Hadoop one...
yanghaogn, what is the *correct* way to delete the Hadoop logs? I haven't
found anything better than deleting the file and restarting the service...
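The behavior described above can be reproduced outside Hadoop. A minimal sketch, assuming standard POSIX unlink semantics: a long-running process that holds a log file open (as log4j's FileAppender does) keeps writing to the same inode even after the path is removed, and the file never reappears on disk.

```python
import os
import tempfile

# Simulate a long-running service's log handle (stand-in for log4j).
log_dir = tempfile.mkdtemp()
log_path = os.path.join(log_dir, "service.log")

writer = open(log_path, "a")      # the "daemon" keeps this handle open
writer.write("line 1\n")
writer.flush()

os.remove(log_path)               # like running `rm service.log` on a live daemon

writer.write("line 2\n")          # no error: writes go to the unlinked inode
writer.flush()

print(os.path.exists(log_path))   # False -- the path is never recreated
writer.close()
```

Once the last handle closes, the unlinked inode's data is gone for good, which would explain why the log entries written after the delete are unrecoverable.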

On Mon, Apr 6, 2015 at 9:27 AM, 杨浩 <yanghaogn@gmail.com> wrote:

> I think the log information has been lost.
> Hadoop is not designed to cope with its log files being deleted this way.
> 2015-04-02 11:45 GMT+08:00 煜 韦 <yu2003w@hotmail.com>:
>> Hi there,
>> If log files are deleted without restarting the service, the logs seem to
>> be lost for all later operations, for example on the namenode and datanode.
>> Why can't log files be re-created when they are deleted, by mistake or on
>> purpose, while the cluster is running?
>> Thanks,
>> Jared
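One workaround that avoids restarting the service is to truncate the log in place rather than delete it. A minimal sketch, assuming POSIX semantics (the exact log path varies by install, so a temp file stands in here): truncation keeps the same inode, so a daemon holding the log open continues writing into the now-empty file.

```shell
# Stand-in for a Hadoop daemon log file (real paths vary by install).
LOG=$(mktemp)
echo "old log entries" > "$LOG"

# Truncate to 0 bytes in place; any open file handles keep working,
# unlike `rm`, which leaves the daemon writing to an unlinked inode.
: > "$LOG"

wc -c < "$LOG"    # prints 0
```

`truncate -s 0 "$LOG"` (GNU coreutils) does the same thing; both free the disk space without breaking the running process's handle.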
