hadoop-general mailing list archives

From "Day, Phil" <philip....@hp.com>
Subject DFS Recovery with fsck
Date Mon, 18 Jan 2010 13:55:29 GMT
Hi All,

Can anyone help me with the following please:

I have a 0.20.1 cluster where I've been doing some testing on recovering from various
namenode failure scenarios.

The current problem I've managed to create is one where some directories and the files within
them were deleted, the cluster was then stopped, and the edits file was lost.

On restart DFS stays in safe mode because blocks are missing (the image knows about the directories,
but the datanodes don't have the blocks for them). Fsck correctly identifies the missing blocks.

I then take DFS out of safe mode and run "fsck -delete" (to get rid of the corrupt files).
After that, a further fsck run reports the filesystem as healthy (and an ls shows the directories
as empty).

However, if I now stop the cluster and restart it, it comes back up in the same state. It's
as if the results of the "fsck -delete" aren't persisted.
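For reference, the steps above correspond to roughly the following command sequence (a sketch assuming the standard 0.20-era CLI and that the Hadoop bin directory is on the PATH; the root path "/" is just an example):

```shell
# Take the namenode out of safe mode manually.
hadoop dfsadmin -safemode leave

# Delete the corrupted files (those with missing blocks).
hadoop fsck / -delete

# A subsequent check now reports the filesystem as healthy.
hadoop fsck /

# But after a DFS restart, the missing-block state comes back:
stop-dfs.sh
start-dfs.sh
```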

Any thoughts on what's happening, and what I need to do to tidy up, would be very welcome.

Thanks,
Phil
