hadoop-hdfs-dev mailing list archives

From mac fang <mac.had...@gmail.com>
Subject Issue of FSImage, need help
Date Tue, 28 Jun 2011 08:44:54 GMT
Hi, Team,

What we found when we use Hadoop is that the FSImage often corrupts when we
start/stop the Hadoop cluster. We think the reason is around the write to the
output stream: the NameNode may be killed during saveNamespace, so the FSImage
file is never completely written. Currently I see a previous.checkpoint
folder; the logic of saveNamespace is like:

1. mv the current folder to the previous.checkpoint folder.
2. start to write the FSImage into the current folder.
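The two steps above can be sketched as follows. This is only an illustrative sketch of the rotation, not the actual FSImage code; the class and method names here are hypothetical, and the point is the window between the rename and the completed write where a kill leaves a truncated image in current.

```java
import java.io.File;
import java.io.IOException;

// Illustrative sketch (NOT the real NameNode code) of the two-step
// saveNamespace rotation described in the list above.
public class SaveNamespaceSketch {
    static void saveNamespace(File storageRoot) throws IOException {
        File current = new File(storageRoot, "current");
        File previous = new File(storageRoot, "previous.checkpoint");

        // Step 1: move the current folder aside as previous.checkpoint.
        if (current.exists() && !current.renameTo(previous)) {
            throw new IOException("could not rename current to previous.checkpoint");
        }

        // Step 2: write a fresh FSImage into a new current folder.
        // If the NameNode is killed at this point, current holds only a
        // truncated (corrupt) image, while the good copy sits in
        // previous.checkpoint.
        if (!current.mkdirs()) {
            throw new IOException("could not create current folder");
        }
        // ... write the fsimage file into current ...
    }
}
```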

I think there might be a case where, if the FSImage is corrupted, the NameNode
can NOT be started, but we do NOT get any EOFException: instead we may hit an
OutOfMemory exception, because we read a wrong numBlocks and then instantiate
Blocks[] blocks = new Blocks[numBlocks] (actually, we faced exactly this
issue).
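A small sketch of that failure mode, under assumed names (this is not the actual FSImage loader): when the file is truncated mid-write, the bytes read as numBlocks can be garbage, so the array allocation blows up with OutOfMemoryError or NegativeArraySizeException before any EOFException is ever reached. A cheap sanity bound on the count would turn that into a clean IOException instead.

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

// Sketch of the failure mode described above. The sanity bound and the
// use of long[] in place of the real Block class are assumptions for
// illustration only.
public class ImageReadSketch {
    static final int MAX_REASONABLE_BLOCKS = 1 << 20; // assumed sanity bound

    static long[] readBlocks(DataInputStream in) throws IOException {
        int numBlocks = in.readInt();        // garbage if the image is corrupt
        if (numBlocks < 0 || numBlocks > MAX_REASONABLE_BLOCKS) {
            // Without this check, `new long[numBlocks]` may throw
            // OutOfMemoryError (huge count) or NegativeArraySizeException.
            throw new IOException("corrupt image: implausible block count "
                    + numBlocks);
        }
        long[] blocks = new long[numBlocks]; // safe to allocate now
        for (int i = 0; i < numBlocks; i++) {
            blocks[i] = in.readLong();
        }
        return blocks;
    }
}
```

With a bound like this, a truncated or garbled image fails fast with a descriptive IOException rather than taking the NameNode down on allocation.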

Any suggestion to it?

thanks
macf
