From "Raghu Angadi (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-1134) Block level CRCs in HDFS
Date Thu, 31 May 2007 20:39:17 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12500483 ]

Raghu Angadi commented on HADOOP-1134:

Regarding checksum calculation at the source: as we discussed on hadoop-dev, it should ideally
be solved by not buffering data at the higher level. There is no reason to think the higher level
knows best what to buffer and how to buffer it.
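
To make that concrete, below is a minimal sketch (not the HADOOP-1134 patch) of computing CRCs at the source as the data streams through, one chunk at a time, with no buffering at a higher layer. The 512-byte chunk size matches the io.bytes.per.checksum default; ChunkedCrcWriter and flushChecksum() are hypothetical names used only for illustration.

{code:java}
import java.util.zip.CRC32;

class ChunkedCrcWriter {
  private static final int BYTES_PER_CHECKSUM = 512; // assumed chunk size
  private final CRC32 crc = new CRC32();
  private int bytesInChunk = 0;

  // Feed bytes as they arrive; emit a checksum whenever a chunk fills up.
  void write(byte[] buf, int off, int len) {
    while (len > 0) {
      int n = Math.min(len, BYTES_PER_CHECKSUM - bytesInChunk);
      crc.update(buf, off, n);
      off += n; len -= n; bytesInChunk += n;
      if (bytesInChunk == BYTES_PER_CHECKSUM) {
        flushChecksum();
      }
    }
  }

  // Close out a partial trailing chunk.
  void close() {
    if (bytesInChunk > 0) {
      flushChecksum();
    }
  }

  private void flushChecksum() {
    long sum = crc.getValue(); // 32-bit CRC of the chunk just finished
    // ... write 'sum' alongside the data (e.g. to a checksum stream) ...
    crc.reset();
    bytesInChunk = 0;
  }
}
{code}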

That leaves the problem of validation while reading. DistributedFileSystem is not
a ChecksumFileSystem any more. Should it be? Similarly, once we make DistributedFileSystem
not buffer any data, would that address this issue?
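
For reference, the sidecar scheme ChecksumFileSystem uses today looks roughly like the sketch below: each file f has a hidden companion .f.crc, and reads are validated against it. This is a simplified illustration; real .crc files carry a header and one CRC per chunk, whereas this sketch assumes a single whole-file CRC just to show the shape of the check.

{code:java}
import java.io.*;
import java.util.zip.CRC32;

class SidecarCheck {
  // Recompute the CRC of 'data' and compare it with the value stored in
  // its companion checksum file. Returns false on mismatch (corruption).
  static boolean verify(File data, File crcSidecar) throws IOException {
    CRC32 crc = new CRC32();
    try (InputStream in = new FileInputStream(data)) {
      byte[] buf = new byte[4096];
      for (int n; (n = in.read(buf)) > 0; ) {
        crc.update(buf, 0, n);
      }
    }
    long expected;
    try (DataInputStream in =
             new DataInputStream(new FileInputStream(crcSidecar))) {
      expected = in.readInt() & 0xffffffffL; // assumed sidecar layout
    }
    return crc.getValue() == expected;
  }
}
{code}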

This issue needs to be, and will be, addressed, especially since we know that most often memory
is the culprit.

Regarding sharing the code: except for the fact that both use the CRC32 class, almost everything
about the two implementations is different. Making them share code would result in
quite a few changes to ChecksumFileSystem. Maybe that should be a different Jira?

> Block level CRCs in HDFS
> ------------------------
>                 Key: HADOOP-1134
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1134
>             Project: Hadoop
>          Issue Type: New Feature
>          Components: dfs
>            Reporter: Raghu Angadi
>            Assignee: Raghu Angadi
>         Attachments: bc-no-upgrade-05302007.patch, DfsBlockCrcDesign-05305007.htm
> Currently CRCs are handled at FileSystem level and are transparent to core HDFS. See
> the recent improvement HADOOP-928 ( that can add checksums to a given filesystem ) for more about
> it. Though this has served us well, there are a few disadvantages:
> 1) This doubles the namespace in HDFS ( or other filesystem implementations ). In many cases,
> it nearly doubles the number of blocks. Taking the namenode out of CRCs would nearly double namespace
> performance, both in terms of CPU and memory.
> 2) Since CRCs are transparent to HDFS, it can not actively detect corrupted blocks. With
> block level CRCs, the Datanode can periodically verify the checksums and report corruptions to the
> namenode so that new replicas can be created.
> We propose to have CRCs maintained for all HDFS data in much the same way as in GFS.
> I will update the jira with detailed requirements and design. This will include the same guarantees
> provided by the current implementation and an upgrade of current data.
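
To illustrate point 2 above, here is a hedged sketch of what a periodic datanode-side scan could look like: re-read a block along with its per-chunk checksum metadata and flag any mismatch so a fresh replica can be created. The metadata layout, the file names, and the 512-byte chunk size are assumptions for illustration, not the design in DfsBlockCrcDesign-05305007.htm.

{code:java}
import java.io.*;
import java.util.zip.CRC32;

class BlockVerifier {
  static final int BYTES_PER_CHECKSUM = 512; // assumed chunk size

  // Verify one block against its checksum metadata file, which is assumed
  // to hold one 32-bit CRC per 512-byte chunk of block data.
  static boolean verifyBlock(File block, File meta) throws IOException {
    try (InputStream in = new BufferedInputStream(new FileInputStream(block));
         DataInputStream sums = new DataInputStream(
             new BufferedInputStream(new FileInputStream(meta)))) {
      byte[] chunk = new byte[BYTES_PER_CHECKSUM];
      int n;
      while ((n = readChunk(in, chunk)) > 0) {
        CRC32 crc = new CRC32();
        crc.update(chunk, 0, n);
        long stored = sums.readInt() & 0xffffffffL;
        if (crc.getValue() != stored) {
          return false; // corruption: caller would report to the namenode
        }
      }
    }
    return true;
  }

  // Read up to one chunk, looping over short reads; returns bytes read.
  private static int readChunk(InputStream in, byte[] chunk) throws IOException {
    int total = 0;
    while (total < chunk.length) {
      int n = in.read(chunk, total, chunk.length - total);
      if (n < 0) break;
      total += n;
    }
    return total;
  }
}
{code}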

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
