hadoop-common-dev mailing list archives

From "Tsz Wo (Nicholas), SZE (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-3941) Extend FileSystem API to return file-checksums/file-digests
Date Tue, 19 Aug 2008 22:05:44 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-3941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12623822#action_12623822 ]

Tsz Wo (Nicholas), SZE commented on HADOOP-3941:

bq. Why not have the default implementation of getFileChecksum() throw the "unsupported operation"
exception so that we don't have duplicated code in every subclass? Also, should this really
throw an exception or return null? I would guess that most applications would want to handle
this not as an exceptional condition somewhere higher on the stack, but rather explicitly
where getFileChecksum() is called, so perhaps null would be better. 

For other optional operations (e.g. append), we declare an abstract method in FileSystem and
let FileSystem subclasses that do not support the operation throw "Not supported".  Should we do the same for getFileChecksum()?
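The optional-operation pattern described above can be sketched roughly as follows. The class and method names (SketchFileSystem, FileChecksum, the getFileChecksum signature) are illustrative assumptions, not the final API:

```java
import java.io.IOException;

// Minimal sketch of the "optional operation" pattern discussed above.
// Names here are illustrative; the real FileChecksum type would carry
// an algorithm name plus the checksum bytes.
abstract class SketchFileSystem {
    static class FileChecksum {
        final String algorithm;
        final byte[] bytes;
        FileChecksum(String algorithm, byte[] bytes) {
            this.algorithm = algorithm;
            this.bytes = bytes;
        }
    }

    // Default implementation throws, mirroring how append(...) is handled
    // by file systems that do not support it; supporting subclasses override.
    FileChecksum getFileChecksum(String path) throws IOException {
        throw new IOException("getFileChecksum not supported");
    }
}
```

A file system without checksum support can then simply inherit the default, and the unsupported case is a declared, checked failure at the call site.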

I think throwing an IOException might be better than returning null.  Otherwise, applications
would have to check for null at every call site, or risk a NullPointerException, which is an unchecked RuntimeException.
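The hazard with the null-return alternative can be sketched as follows; checksumOrNull is a hypothetical stand-in for getFileChecksum on a file system with no checksum support:

```java
// Sketch of the null-return alternative discussed above.
class NullReturnSketch {
    // Stand-in for fs.getFileChecksum(path) on an unsupporting file system.
    static byte[] checksumOrNull(String path) {
        return null; // "not supported" signaled only by a null return
    }

    // A caller that forgets the null check fails far from the cause, with
    // an unchecked NullPointerException instead of a declared IOException.
    static int checksumLength(String path) {
        return checksumOrNull(path).length;
    }
}
```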

The methods defined in java.security.MessageDigest, e.g. getInstance(String algorithm), throw
NoSuchAlgorithmException.  We might want to do something similar.
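MessageDigest.getInstance is a concrete example of this convention: an unknown algorithm raises a checked NoSuchAlgorithmException rather than yielding null, so the caller must handle the unsupported case explicitly:

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

class DigestLookup {
    // Propagates the checked exception so the caller handles
    // "algorithm not supported" explicitly at the call site.
    static MessageDigest lookup(String algorithm) throws NoSuchAlgorithmException {
        return MessageDigest.getInstance(algorithm);
    }
}
```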

bq. Do you intend to implement this for HDFS here, or as a separate issue?

Yes, the HDFS implementation will be done in a separate issue, since it involves more work.

> Extend FileSystem API to return file-checksums/file-digests
> -----------------------------------------------------------
>                 Key: HADOOP-3941
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3941
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: fs
>            Reporter: Tsz Wo (Nicholas), SZE
>         Attachments: 3941_20080818.patch, 3941_20080819.patch
> Suppose we have two files in two locations (may be two clusters) and these two files
> have the same size.  How could we tell whether their contents are the same?
> Currently, the only way is to read both files and compare their contents.  This
> is a very expensive operation if the files are huge.
> So, we would like to extend the FileSystem API to support returning file-checksums/file-digests.

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
