hadoop-hdfs-issues mailing list archives

From "Kihwal Lee (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-3177) Allow DFSClient to find out and use the CRC type being used for a file.
Date Tue, 21 Aug 2012 15:04:38 GMT

    [ https://issues.apache.org/jira/browse/HDFS-3177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438786#comment-13438786
] 

Kihwal Lee commented on HDFS-3177:
----------------------------------

{code}
 -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

org.apache.hadoop.hdfs.server.namenode.TestBackupNode
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
{code}

None of the test failures was caused by this patch:

- TestBackupNode: failed due to "Port in use: 0.0.0.0:50105" during the initial cluster startup,
even before the restart was attempted.
- TestDataNodeVolumeFailureReporting: the test itself passed, but org.apache.hadoop.util.ExitUtil$ExitException
was thrown during cluster shutdown. It has failed the same way before (e.g. Aug 04, https://builds.apache.org/job/PreCommit-HADOOP-Build/1248//testReport/).

I will investigate a bit more and file JIRAs if needed.
                
> Allow DFSClient to find out and use the CRC type being used for a file.
> -----------------------------------------------------------------------
>
>                 Key: HDFS-3177
>                 URL: https://issues.apache.org/jira/browse/HDFS-3177
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: data-node, hdfs client
>    Affects Versions: 0.23.0
>            Reporter: Kihwal Lee
>            Assignee: Kihwal Lee
>             Fix For: 2.1.0-alpha, 3.0.0
>
>         Attachments: hdfs-3177-after-hadoop-8239-8240.patch.txt, hdfs-3177-after-hadoop-8239.patch.txt,
hdfs-3177-branch2-trunk.patch.txt, hdfs-3177.patch, hdfs-3177-with-hadoop-8239-8240.patch.txt,
hdfs-3177-with-hadoop-8239-8240.patch.txt, hdfs-3177-with-hadoop-8239-8240.patch.txt, hdfs-3177-with-hadoop-8239.patch.txt
>
>
> To support HADOOP-8060, DFSClient should be able to find out the checksum type being
used for files in HDFS.
> In my prototype, DataTransferProtocol was extended to include the checksum type in the
blockChecksum() response. DFSClient uses it in getFileChecksum() to determine the checksum
type. Also, append() can be configured to use the existing checksum type instead of the
configured one.
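The append() behavior described above can be sketched as follows. This is a hypothetical, self-contained model, not the actual Hadoop code: the enum values, the map standing in for the blockChecksum() response, and the helper name `checksumForAppend` are all illustrative assumptions. It only shows the decision the description implies: when appending to an existing file, reuse that file's checksum type; otherwise fall back to the client's configured default.

```java
import java.util.HashMap;
import java.util.Map;

public class ChecksumTypeSketch {
    // Illustrative checksum types; real HDFS uses DataChecksum type constants.
    enum ChecksumType { CRC32, CRC32C }

    // Stand-in for the per-file type that the extended blockChecksum()
    // response would report back to the client (hypothetical store).
    static final Map<String, ChecksumType> fileChecksumType = new HashMap<>();

    // For append(): prefer the file's existing checksum type if known,
    // otherwise use the client's configured default.
    static ChecksumType checksumForAppend(String path, ChecksumType configured) {
        return fileChecksumType.getOrDefault(path, configured);
    }

    public static void main(String[] args) {
        // An existing file was written with CRC32.
        fileChecksumType.put("/data/old-file", ChecksumType.CRC32);

        // Appending keeps the original type, ignoring the configured CRC32C.
        System.out.println(checksumForAppend("/data/old-file", ChecksumType.CRC32C));
        // An unknown (new) file falls back to the configured type.
        System.out.println(checksumForAppend("/data/new-file", ChecksumType.CRC32C));
    }
}
```

Reusing the existing type avoids mixing checksum algorithms within one file, which is what would otherwise break getFileChecksum() comparisons across files written by differently configured clients.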

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
