hadoop-common-dev mailing list archives

From "Tsz Wo (Nicholas), SZE (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-3981) Need a distributed file checksum algorithm for HDFS
Date Wed, 10 Sep 2008 20:41:44 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-3981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tsz Wo (Nicholas), SZE updated HADOOP-3981:

    Attachment: 3981_20080910.patch


> It looks like you forgot to include the class MD5MD5CRC32FileChecksum in the patch.
Added MD5MD5CRC32FileChecksum.
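
For context, a minimal sketch of what such a checksum class might look like, assuming it extends org.apache.hadoop.fs.FileChecksum and carries a bytes-per-CRC, a CRCs-per-block count, and the final MD5 digest; the exact field set and serialization order in the patch may differ:

{code:java}
// A sketch, not the patch itself: fields and layout are assumptions
// based on the class name (MD5 of per-block MD5s of CRC32s).
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.fs.FileChecksum;
import org.apache.hadoop.io.MD5Hash;
import org.apache.hadoop.io.WritableUtils;

public class MD5MD5CRC32FileChecksum extends FileChecksum {
  private int bytesPerCRC;   // bytes covered by each CRC32
  private long crcPerBlock;  // number of CRC32s per block
  private MD5Hash md5;       // MD5 of the per-block MD5s

  public String getAlgorithmName() {
    return "MD5-of-" + crcPerBlock + "MD5-of-" + bytesPerCRC + "CRC32";
  }

  public int getLength() {
    return 4 + 8 + MD5Hash.MD5_LEN;  // int + long + 16-byte digest
  }

  public byte[] getBytes() {
    return WritableUtils.toByteArray(this);  // see toByteArray below
  }

  public void readFields(DataInput in) throws IOException {
    bytesPerCRC = in.readInt();
    crcPerBlock = in.readLong();
    md5 = MD5Hash.read(in);
  }

  public void write(DataOutput out) throws IOException {
    out.writeInt(bytesPerCRC);
    out.writeLong(crcPerBlock);
    md5.write(out);
  }
}
{code}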

> Why do you use the datanode's socket/opcode interface rather than adding a method to
> ClientDatanodeProtocol?
There might be an RPC timeout problem, so I use the data transfer protocol instead of ClientDatanodeProtocol.
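
The concern here is that digesting a block's stored CRCs can take longer than an RPC client is willing to wait, while a streaming socket request simply blocks until the reply arrives. A hypothetical sketch of that idea; the opcode value, wire layout, and reply format below are illustrative, not the actual DataTransferProtocol:

{code:java}
// Hypothetical sketch only: the real opcode, versioning, and reply
// format live in DataTransferProtocol and are not reproduced here.
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;

import org.apache.hadoop.io.MD5Hash;

public class BlockChecksumClientSketch {
  static final byte OP_BLOCK_CHECKSUM = 55;  // hypothetical opcode

  static MD5Hash getBlockMD5(String datanode, int port, long blockId)
      throws IOException {
    Socket sock = new Socket(datanode, port);
    try {
      DataOutputStream out = new DataOutputStream(sock.getOutputStream());
      out.writeByte(OP_BLOCK_CHECKSUM);  // request: opcode + block id
      out.writeLong(blockId);
      out.flush();
      // No RPC timeout here: the datanode may take as long as it needs
      // to read the block's stored CRC32s and digest them.
      DataInputStream in = new DataInputStream(sock.getInputStream());
      return MD5Hash.read(in);
    } finally {
      sock.close();
    }
  }
}
{code}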

> WritableUtils#toByteArray can use io.DataOutputBuffer, no?
Changed WritableUtils#toByteArray to be implemented with io.DataOutputBuffer.
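
A sketch of how toByteArray can be written on top of io.DataOutputBuffer; DataOutputBuffer#getData() exposes the internal buffer, which is only valid up to getLength(), hence the final trim. This is an illustration, not necessarily the patch's exact code:

{code:java}
import java.io.IOException;
import java.util.Arrays;

import org.apache.hadoop.io.DataOutputBuffer;
import org.apache.hadoop.io.Writable;

public class ToByteArraySketch {
  /** Serialize the given Writables into a single byte array. */
  public static byte[] toByteArray(Writable... writables) {
    final DataOutputBuffer out = new DataOutputBuffer();
    try {
      for (Writable w : writables) {
        w.write(out);  // append each Writable's serialized form
      }
      out.close();
    } catch (IOException e) {
      throw new RuntimeException("Failed to serialize writables", e);
    }
    // The internal buffer may be longer than the written content,
    // so copy only the first getLength() bytes.
    return Arrays.copyOf(out.getData(), out.getLength());
  }
}
{code}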

> DistCp#sameFile() should be changed to not get checksums when the lengths differ.
Updated DistCp accordingly.
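
A sketch of the length-first comparison in DistCp; the method shape and null-checksum handling are assumptions for illustration, not the patch's exact code:

{code:java}
import java.io.FileNotFoundException;
import java.io.IOException;

import org.apache.hadoop.fs.FileChecksum;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SameFileSketch {
  /** Does dstpath already hold a copy identical to the source file? */
  static boolean sameFile(FileSystem srcfs, FileStatus srcstatus,
      FileSystem dstfs, Path dstpath) throws IOException {
    final FileStatus dststatus;
    try {
      dststatus = dstfs.getFileStatus(dstpath);
    } catch (FileNotFoundException fnfe) {
      return false;  // destination missing: must copy
    }
    // Cheap test first: different lengths already prove the files
    // differ, so the expensive checksum fetch is skipped entirely.
    if (srcstatus.getLen() != dststatus.getLen()) {
      return false;
    }
    // Same length: fall back to comparing file checksums.
    final FileChecksum srccs = srcfs.getFileChecksum(srcstatus.getPath());
    final FileChecksum dstcs = dstfs.getFileChecksum(dstpath);
    // A null checksum means the filesystem cannot provide one; treat
    // the files as the same in that case.
    return srccs == null || dstcs == null || srccs.equals(dstcs);
  }
}
{code}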

> Need a distributed file checksum algorithm for HDFS
> ---------------------------------------------------
>                 Key: HADOOP-3981
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3981
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: dfs
>            Reporter: Tsz Wo (Nicholas), SZE
>         Attachments: 3981_20080909.patch, 3981_20080910.patch
> Traditional message digest algorithms, like MD5, SHA1, etc., require reading the entire
> input message sequentially in a central location. HDFS supports large files of multiple
> terabytes, so the overhead of reading an entire file in one place is huge. A distributed
> file checksum algorithm is needed for HDFS.
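
The idea is to compute checksums where the data already lives: each datanode digests its own block in parallel, and the client only combines the per-block digests. A minimal sketch of the combining step, using only java.security and assuming the block MD5s arrive in block order:

{code:java}
// Minimal sketch of the combining step, independent of HDFS classes.
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.List;

public class DistributedChecksumSketch {
  /**
   * Combine per-block digests into one file checksum: an MD5 over the
   * concatenation of the block MD5s, taken in block order. Each block
   * MD5 can be computed in parallel on the datanode storing that block,
   * so no single machine ever reads the whole file.
   */
  static byte[] md5OfBlockMd5s(List<byte[]> blockMd5s)
      throws NoSuchAlgorithmException {
    MessageDigest md5 = MessageDigest.getInstance("MD5");
    for (byte[] blockDigest : blockMd5s) {
      md5.update(blockDigest);
    }
    return md5.digest();
  }
}
{code}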

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
