hadoop-common-user mailing list archives

From lohit <lohit...@yahoo.com>
Subject Re: ERROR dfs.NameNode - java.io.EOFException
Date Sat, 05 Jul 2008 08:08:57 GMT
I remember Dhruba telling me about this once.
Yes, take a backup of the whole current directory first.
Then, as you suggested, remove the incomplete last line from edits and try to start the NameNode.
If it starts, run fsck to find out which file had the problem.
Thanks,
Lohit
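The steps above might be sketched in shell like this. The name-directory path is an assumption (point it at your dfs.name.dir), the byte count to trim is illustrative only, and since the edits log is binary it may be safer to drop trailing bytes than to edit "lines" in vi:

```shell
#!/bin/sh
# Hedged sketch of the recovery steps above. NAME_DIR is an assumption --
# set it to your actual dfs.name.dir before running anything.
NAME_DIR=${NAME_DIR:-/data/hadoop/name}

# Drop N trailing bytes from a file (the edits log is binary, so we
# trim bytes rather than editing "lines").
trim_tail() {                 # usage: trim_tail FILE NBYTES
    file=$1; nbytes=$2
    size=$(wc -c < "$file")
    head -c $((size - nbytes)) "$file" > "$file.trimmed" &&
        mv "$file.trimmed" "$file"
}

# Intended use (do NOT run blindly -- back up first, and inspect the
# file tail to decide how many bytes to drop):
#   cp -rp "$NAME_DIR/current" "$NAME_DIR/current.bak"
#   trim_tail "$NAME_DIR/current/edits" 1   # byte count is illustrative
#   bin/hadoop-daemon.sh start namenode
#   bin/hadoop fsck /
```

If the NameNode still refuses to start, restore the backed-up current directory and trim a little more; the fsck run at the end is what identifies which file the truncated record belonged to.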

----- Original Message ----
From: Otis Gospodnetic <otis_gospodnetic@yahoo.com>
To: core-user@hadoop.apache.org
Sent: Friday, July 4, 2008 4:46:57 PM
Subject: Re: ERROR dfs.NameNode - java.io.EOFException

Hi,

If it helps with the problem below -- I don't mind losing some data.
For instance, I see my "edits" file has about 74K lines.
Can I just nuke the edits file or remove the last N lines?

I am looking at the edits file with vi, and the very last line is very short - it looks
like it was cut off mid-write. Some of the logs do mention running out of disk space
(even though the NN machine still has some free space).

Could I simply remove this last incomplete line?

Any help would be greatly appreciated.

Thanks,
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch



----- Original Message ----
> From: Otis Gospodnetic <otis_gospodnetic@yahoo.com>
> To: core-user@hadoop.apache.org
> Sent: Friday, July 4, 2008 2:00:58 AM
> Subject: ERROR dfs.NameNode - java.io.EOFException
> 
> Hi,
> 
> Using Hadoop 0.16.2, I am seeing the following in the NN log:
> 
> 2008-07-03 19:46:26,715 ERROR dfs.NameNode - java.io.EOFException
>         at java.io.DataInputStream.readFully(DataInputStream.java:180)
>         at org.apache.hadoop.io.UTF8.readFields(UTF8.java:106)
>         at org.apache.hadoop.io.ArrayWritable.readFields(ArrayWritable.java:90)
>         at org.apache.hadoop.dfs.FSEditLog.loadFSEdits(FSEditLog.java:433)
>         at org.apache.hadoop.dfs.FSImage.loadFSEdits(FSImage.java:756)
>         at org.apache.hadoop.dfs.FSImage.loadFSImage(FSImage.java:639)
>         at org.apache.hadoop.dfs.FSImage.recoverTransitionRead(FSImage.java:222)
>         at org.apache.hadoop.dfs.FSDirectory.loadFSImage(FSDirectory.java:79)
>         at org.apache.hadoop.dfs.FSNamesystem.initialize(FSNamesystem.java:254)
>         at org.apache.hadoop.dfs.FSNamesystem.&lt;init&gt;(FSNamesystem.java:235)
>         at org.apache.hadoop.dfs.NameNode.initialize(NameNode.java:131)
>         at org.apache.hadoop.dfs.NameNode.&lt;init&gt;(NameNode.java:176)
>         at org.apache.hadoop.dfs.NameNode.&lt;init&gt;(NameNode.java:162)
>         at org.apache.hadoop.dfs.NameNode.createNameNode(NameNode.java:846)
>         at org.apache.hadoop.dfs.NameNode.main(NameNode.java:855)
> 
> The exception doesn't include the name and location of the file that failed to 
> read and caused the EOFException :(
> But it looks like it's the edit log (the "edits" file, I think).
> 
> There is no secondary NN in the cluster.
> 
> Is there any way I can revive this NN?  Any way to "fix" the corrupt "edits" 
> file?
> 
> Thanks,
> Otis
