hadoop-hdfs-issues mailing list archives

From "Aaron T. Myers (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-1630) Checksum fsedits
Date Fri, 18 Feb 2011 07:17:12 GMT

    [ https://issues.apache.org/jira/browse/HDFS-1630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12996269#comment-12996269 ]

Aaron T. Myers commented on HDFS-1630:
--------------------------------------

As Steve points out, it seems to me that MD5 is overkill if the goal is just to verify integrity.

To address the problem of recomputing the hash of a constantly-growing file, rather than checksumming
each individual transaction, I suggest we use a rolling hash: http://en.wikipedia.org/wiki/Rolling_hash

In particular, adler32 seems like a good choice: http://download.oracle.com/javase/1.5.0/docs/api/java/util/zip/Adler32.html
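
For illustration only, here is a minimal sketch of how java.util.zip.Adler32 could be updated incrementally as edit records are appended, so the checksum of the growing file never needs to be recomputed from the start (the class and method names below are hypothetical, not taken from any existing patch):

    import java.util.zip.Adler32;
    import java.util.zip.Checksum;

    // Hypothetical sketch: keep one Adler32 instance alive for the lifetime of
    // the edits file and feed it each newly appended record.
    public class IncrementalEditsChecksum {
        private final Checksum checksum = new Adler32();

        // Called for every record appended to fsedits; the checksum state is
        // updated in place, so the file is never re-read from the beginning.
        public void append(byte[] serializedRecord) {
            checksum.update(serializedRecord, 0, serializedRecord.length);
        }

        // Checksum over everything appended so far (only the lower 32 bits are significant).
        public long value() {
            return checksum.getValue();
        }
    }

The resulting value could be persisted whenever the edits file is rolled or the namenode shuts down, and recomputed at load time to verify integrity.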

> Checksum fsedits
> ----------------
>
>                 Key: HDFS-1630
>                 URL: https://issues.apache.org/jira/browse/HDFS-1630
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: name-node
>            Reporter: Hairong Kuang
>            Assignee: Hairong Kuang
>
> HDFS-903 calculates an MD5 checksum for a saved image, so that we can verify the integrity
> of the image at load time.
> The other half of the story is how to verify fsedits. Similarly, we could use the checksum
> approach. But since an fsedits file grows constantly, a checksum per file does not work.
> I am thinking of adding a checksum per transaction. Is that doable, or too expensive?
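
For comparison, a hypothetical sketch of the per-transaction approach described above, where each serialized transaction is followed by a CRC32 of its own bytes so a reader can verify every record independently (the writer class and record layout here are illustrative assumptions, not the actual HDFS edit log format):

    import java.io.DataOutputStream;
    import java.io.IOException;
    import java.util.zip.CRC32;

    // Hypothetical illustration of a per-transaction checksum: every record is
    // trailed by a 4-byte CRC32 of its serialized bytes.
    public class PerTransactionChecksumWriter {
        private final DataOutputStream out;

        public PerTransactionChecksumWriter(DataOutputStream out) {
            this.out = out;
        }

        public void writeRecord(byte[] serializedTxn) throws IOException {
            CRC32 crc = new CRC32();
            crc.update(serializedTxn, 0, serializedTxn.length);
            out.write(serializedTxn);            // the transaction itself
            out.writeInt((int) crc.getValue());  // checksum trailer for this record
        }
    }

The cost is four extra bytes plus one CRC computation per transaction, which is essentially the trade-off the reporter is asking about.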

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
