hadoop-hdfs-issues mailing list archives

From "Konstantin Shvachko (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-903) NN should verify images and edit logs on startup
Date Wed, 03 Nov 2010 05:30:30 GMT

    [ https://issues.apache.org/jira/browse/HDFS-903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12927743#action_12927743 ]

Konstantin Shvachko commented on HDFS-903:
------------------------------------------

I didn't know you had already discussed it. I agree the image can be corrupted
during transmission. It seems logical to include the verification logic in the
transmission process itself. That is, the SNN sends the checksum via the servlet,
then the NN receives the uploaded image, calculates the checksum of the downloaded
bytes on the fly, and matches it against the one sent by the SNN. The checksum
verification can be done by validateCheckpointUpload(), which is already there and
just needs to be extended. I don't think it would be a good idea to separate the
upload from the verification, which is inevitable if you first upload and then
send the checksum via rollFSImage() and verify inside it.
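
(A minimal sketch of that on-the-fly check in plain Java, for illustration only;
the method and parameter names below are assumptions, not the actual
FSImage/GetImageServlet API:)

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.security.DigestInputStream;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    public class ImageTransferSketch {
      // Hypothetical receive path: copy the uploaded image to local
      // storage while updating an MD5 digest, then compare the result
      // with the checksum the SNN sent ahead of time via the servlet.
      static void receiveAndVerifyImage(InputStream upload,
                                        OutputStream localImage,
                                        byte[] expectedDigest)
          throws IOException, NoSuchAlgorithmException {
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        // DigestInputStream updates the digest as bytes are read,
        // so no second pass over the image is needed.
        DigestInputStream in = new DigestInputStream(upload, md5);
        byte[] buf = new byte[64 * 1024];
        int n;
        while ((n = in.read(buf)) != -1) {
          localImage.write(buf, 0, n);
        }
        if (!MessageDigest.isEqual(md5.digest(), expectedDigest)) {
          throw new IOException(
              "Downloaded image does not match checksum sent by the SNN");
        }
      }
    }

Failing inside the transfer itself keeps the upload and the verification
atomic, which is the point of not deferring the check to rollFSImage().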

> NN should verify images and edit logs on startup
> ------------------------------------------------
>
>                 Key: HDFS-903
>                 URL: https://issues.apache.org/jira/browse/HDFS-903
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: name-node
>            Reporter: Eli Collins
>            Assignee: Hairong Kuang
>            Priority: Critical
>             Fix For: 0.22.0
>
>         Attachments: trunkChecksumImage.patch, trunkChecksumImage1.patch
>
>
> I was playing around with corrupting fsimage and edits logs when there are
> multiple dfs.name.dirs specified. I noticed that:
> * As long as the corruption does not make the image invalid (e.g., changing
> an opcode to an invalid opcode would), HDFS doesn't notice and happily uses
> the corrupt image or applies the corrupt edit.
> * If the first image in dfs.name.dir is "valid", it replaces the copies in
> the other name.dirs with this first image, even if they differ; i.e., if the
> first image is actually invalid/old/corrupt metadata, then you've lost your
> valid metadata, which can result in data loss if the namenode garbage
> collects blocks that it thinks are no longer used.
> How about we maintain a checksum as part of the image and edit log, check
> those on startup, and refuse to start if they don't match? Or at least
> provide a configuration option to do so, if people are worried about the
> overhead of maintaining checksums of these files. Even if we assume
> dfs.name.dir is reliable storage, this guards against operator errors.
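
(For illustration, a sketch of the startup check proposed above, assuming the
checksum is recorded as a hex string in a sidecar file next to the image; the
file layout and names are assumptions, not what the attached patches implement:)

    import java.io.BufferedReader;
    import java.io.File;
    import java.io.FileInputStream;
    import java.io.FileReader;
    import java.io.IOException;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    public class StartupChecksumSketch {
      // Recompute the image's MD5 and compare it with the recorded one;
      // refuse to proceed (throw) on mismatch instead of silently
      // propagating a corrupt copy to the other dfs.name.dirs.
      static void verifyImage(File image, File storedChecksum)
          throws IOException, NoSuchAlgorithmException {
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        FileInputStream in = new FileInputStream(image);
        try {
          byte[] buf = new byte[64 * 1024];
          int n;
          while ((n = in.read(buf)) != -1) {
            md5.update(buf, 0, n);
          }
        } finally {
          in.close();
        }
        BufferedReader r = new BufferedReader(new FileReader(storedChecksum));
        String expected;
        try {
          expected = r.readLine();
        } finally {
          r.close();
        }
        if (expected == null
            || !expected.trim().equalsIgnoreCase(toHex(md5.digest()))) {
          throw new IOException("Refusing to start: checksum mismatch for "
              + image.getPath());
        }
      }

      static String toHex(byte[] digest) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < digest.length; i++) {
          sb.append(String.format("%02x", digest[i] & 0xff));
        }
        return sb.toString();
      }
    }

Run per dfs.name.dir, a check like this would also stop the "first image wins"
overwrite described above, since a corrupt first copy would fail its own check
before being propagated to the other directories.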

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

