hadoop-common-dev mailing list archives

From "Raghu Angadi (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-4995) Offline Namenode fsImage verification
Date Fri, 09 Jan 2009 00:14:59 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-4995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12662184#action_12662184 ]
Raghu Angadi commented on HADOOP-4995:

Some time back I had a proposal to checksum the fsimage. There, each record (a few hundred
bytes) is checksummed rather than the whole file. This helps with verification and also allows
better recovery when multiple copies exist: the image can be recovered as long as the two
copies are not damaged at the same location.
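The per-record scheme described above could be sketched as follows. This is only an illustrative Java sketch: the `[length][payload][crc32]` record layout, the `RecordChecksumSketch` class name, and the two-copy recovery loop are assumptions for demonstration, not Hadoop's actual fsimage format.

```java
import java.io.*;
import java.util.zip.CRC32;

public class RecordChecksumSketch {
    // Write one record as [int length][payload bytes][long crc32-of-payload].
    static void writeRecord(DataOutputStream out, byte[] record) throws IOException {
        CRC32 crc = new CRC32();
        crc.update(record, 0, record.length);
        out.writeInt(record.length);
        out.write(record);
        out.writeLong(crc.getValue());
    }

    // Read one record; return null if its stored checksum does not match.
    static byte[] readRecord(DataInputStream in) throws IOException {
        int len = in.readInt();
        byte[] record = new byte[len];
        in.readFully(record);
        long stored = in.readLong();
        CRC32 crc = new CRC32();
        crc.update(record, 0, record.length);
        return crc.getValue() == stored ? record : null;
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        writeRecord(out, "inode-1".getBytes("UTF-8"));
        writeRecord(out, "inode-2".getBytes("UTF-8"));

        // Two copies of the image, each damaged in a DIFFERENT record.
        byte[] copyA = buf.toByteArray();
        byte[] copyB = buf.toByteArray();
        copyA[5] ^= 0x01;                 // flip a bit inside record 1 of copy A
        copyB[copyB.length - 12] ^= 0x01; // flip a bit inside record 2 of copy B

        DataInputStream a = new DataInputStream(new ByteArrayInputStream(copyA));
        DataInputStream b = new DataInputStream(new ByteArrayInputStream(copyB));
        for (int i = 0; i < 2; i++) {
            byte[] ra = readRecord(a);
            byte[] rb = readRecord(b);
            // Per-record checksums let us take each record from whichever copy is intact.
            byte[] good = (ra != null) ? ra : rb;
            System.out.println(new String(good, "UTF-8"));
        }
    }
}
```

Because each record carries its own checksum, both records are recovered here even though neither copy is clean on its own, which is exactly the advantage over a single whole-file checksum.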

> Offline Namenode fsImage verification
> -------------------------------------
>                 Key: HADOOP-4995
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4995
>             Project: Hadoop Core
>          Issue Type: New Feature
>            Reporter: Brian Bockelman
> Currently, there is no way to verify that a copy of the fsImage is not corrupt. I propose that we should have an offline tool that loads the fsImage into memory to see if it is usable. This will allow us to automate backup testing to some extent.
> One can start a namenode process on the fsImage to see if it can be loaded, but this is not easy to automate.
> To use HDFS in production, it is greatly desired both to have checkpoints and to have some idea that the checkpoints are valid! No one wants to see the day when they reload from backup only to find that the fsImage in the backup wasn't usable.
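As a rough illustration of how such an offline check could be wired into automated backup testing, here is a hedged Java sketch of a verifier that streams a file of checksummed records and exits nonzero on corruption. The `ImageVerifier` class and the `[length][payload][crc32]` layout are hypothetical (borrowed from the per-record-checksum idea in the comment above), not the real fsImage format.

```java
import java.io.*;
import java.util.zip.CRC32;

public class ImageVerifier {
    // Returns true iff every [int length][payload][long crc32] record verifies.
    static boolean verify(InputStream raw) throws IOException {
        DataInputStream in = new DataInputStream(new BufferedInputStream(raw));
        while (true) {
            int len;
            try {
                len = in.readInt();
            } catch (EOFException eof) {
                return true;              // clean end of image
            }
            if (len < 0) return false;    // corrupted length field
            byte[] record = new byte[len];
            in.readFully(record);
            long stored = in.readLong();
            CRC32 crc = new CRC32();
            crc.update(record, 0, record.length);
            if (crc.getValue() != stored) return false;
        }
    }

    public static void main(String[] args) throws IOException {
        boolean ok = verify(new FileInputStream(args[0]));
        System.out.println(ok ? "IMAGE OK" : "IMAGE CORRUPT");
        System.exit(ok ? 0 : 1);          // nonzero exit code for scripts and cron
    }
}
```

A backup-testing script could then simply run the tool against each checkpoint copy and alert on a nonzero exit status, which is the kind of automation the description asks for.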

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
