hadoop-mapreduce-user mailing list archives

From Joep Rottinghuis <jrottingh...@gmail.com>
Subject Re: Question about log files
Date Mon, 06 Apr 2015 17:39:52 GMT
This depends on your OS.
When you "delete" a file on Linux, you merely unlink the entry from the directory.
The file does not actually get deleted until the last reference (open handle) goes away.
Note that this could lead to an interesting way to fill up a disk.
You should be able to see the open files by a process using the lsof command.
The process itself does not know that a dentry has been removed, so there is nothing that
log4j or the Hadoop code can do about it.
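Not from this thread, just a minimal sketch to illustrate the point (the path and class name are made up): writes to an already-unlinked file keep succeeding, and lsof will list the file as "(deleted)" until the stream is closed.

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class DeletedFileWrite {
    public static void main(String[] args) throws IOException, InterruptedException {
        File f = new File("/tmp/deleted-log-demo.log");   // hypothetical path
        FileOutputStream out = new FileOutputStream(f);
        out.write("first line\n".getBytes());
        f.delete();                                        // unlink the directory entry
        out.write("still written, no error reported\n".getBytes());
        out.flush();
        // While this sleeps, `lsof -p <pid>` shows the open file marked "(deleted)";
        // the disk blocks are only freed once the stream is closed.
        Thread.sleep(60000L);
        out.close();                                       // last reference goes away here
    }
}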
Assuming you have some rolling file appender configured, log4j should start logging to a new
file at some point; otherwise you will have to bounce your daemon process.
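For reference, a rolling setup along these lines would eventually roll over and create a fresh log file. This is only a sketch: the property names are log4j's RollingFileAppender/PatternLayout ones, but the file name, size limit, and backup count here are illustrative, not taken from this thread's configuration.

# Sketch of a rolling log4j configuration (values are illustrative)
log4j.rootLogger=INFO, RFA
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hadoop.log.dir}/hadoop.log
log4j.appender.RFA.MaxFileSize=256MB
log4j.appender.RFA.MaxBackupIndex=20
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n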

Cheers,

Joep

Sent from my iPhone

> On Apr 6, 2015, at 6:19 AM, Fabio C. <anytek88@gmail.com> wrote:
> 
> I noticed that too. I think Hadoop keeps the file open all the time, and when you delete
> it, it is simply no longer able to write to it and doesn't try to recreate it. Not sure if it's
> a Log4j problem or a Hadoop one...
> yanghaogn, what is the *correct* way to delete the Hadoop logs? I didn't find anything
> better than deleting the file and restarting the service...
> 
>> On Mon, Apr 6, 2015 at 9:27 AM, 杨浩 <yanghaogn@gmail.com> wrote:
>> I think the log information has been lost.
>> 
>> Hadoop is not designed for the case where you delete these files incorrectly.
>> 
>> 2015-04-02 11:45 GMT+08:00 煜 韦 <yu2003w@hotmail.com>:
>>> Hi there,
>>> If log files are deleted without restarting the service, it seems that the logs of later
>>> operations are lost, for example on the namenode or datanode.
>>> Why can't log files be re-created when they are deleted, by mistake or on purpose, while
>>> the cluster is running?
>>> 
>>> Thanks,
>>> Jared
> 
