hadoop-common-dev mailing list archives

From "Sameer Paranjpye (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-1134) Block level CRCs in HDFS
Date Thu, 29 Mar 2007 19:45:25 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12485321 ]

Sameer Paranjpye commented on HADOOP-1134:
------------------------------------------

As Konstantin suggests, using a client program to perform validation is also reasonable. It
has the advantage of keeping upgrade code in HDFS very simple and decoupling the Namenode
and Datanode upgrades.

Datanodes would perform local upgrades, during which they'd re-generate checksums for all their
blocks and store them in checksum side files. Once this is done, we could launch a Map/Reduce
job that reads data files and validates them against the existing .crc files, ensuring that
it reads all blocks. If it discovers corruption, it reports the corrupt blocks to the Namenode,
which can then proceed to invalidate them and replicate the correct instances. For every file
that is successfully validated, the client would delete the .crc file from the namespace. Dealing
with missing replicas is a bit tricky with this approach. The other downside is that it is
potentially much slower, since validating with Map/Reduce could cause a lot of data transfer
over the network.
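To make the validation step concrete, here is a minimal sketch of what such a map task could
look like. It assumes the job's input is a text file listing one HDFS path per line, and that
reads go through the client-side checksum layer so that a mismatch against the .crc side file
surfaces as a ChecksumException. The class name and key/value choices are illustrative, not
part of any actual patch.

import java.io.IOException;

import org.apache.hadoop.fs.ChecksumException;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

// Hypothetical validation mapper: streams every byte of each input file so
// the checksummed read path verifies all chunks against the .crc side file.
public class CrcValidationMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, Text> {

  private FileSystem fs;

  public void configure(JobConf job) {
    try {
      fs = FileSystem.get(job);
    } catch (IOException e) {
      throw new RuntimeException(e);
    }
  }

  public void map(LongWritable key, Text value,
                  OutputCollector<Text, Text> out, Reporter reporter)
      throws IOException {
    Path file = new Path(value.toString());   // one HDFS path per input line
    byte[] buf = new byte[64 * 1024];
    FSDataInputStream in = fs.open(file);
    try {
      // Read to EOF; a checksum mismatch throws ChecksumException.
      while (in.read(buf) > 0) {
        reporter.progress();                  // keep the task alive on large files
      }
      out.collect(new Text(file.toString()), new Text("OK"));
    } catch (ChecksumException e) {
      // A follow-up step would map this offset back to a block and report
      // it to the Namenode for invalidation and re-replication.
      out.collect(new Text(file.toString()),
                  new Text("CORRUPT at offset " + e.getPos()));
    } finally {
      in.close();
    }
  }
}

The reduce side could then aggregate the CORRUPT records per block before anything is reported
to the Namenode, and the job driver could delete the .crc entries for files that came back OK.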



> Block level CRCs in HDFS
> ------------------------
>
>                 Key: HADOOP-1134
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1134
>             Project: Hadoop
>          Issue Type: New Feature
>          Components: dfs
>            Reporter: Raghu Angadi
>         Assigned To: Raghu Angadi
>
> Currently CRCs are handled at the FileSystem level and are transparent to core HDFS. See the
> recent improvement HADOOP-928 (which can add checksums to a given filesystem) for more about
> it. Though this has served us well, it has a few disadvantages:
> 1) This doubles the namespace in HDFS (or other filesystem implementations). In many cases,
> it nearly doubles the number of blocks. Taking the Namenode out of CRCs would nearly double
> namespace performance, both in terms of CPU and memory.
> 2) Since CRCs are transparent to HDFS, it cannot actively detect corrupted blocks. With
> block-level CRCs, the Datanode can periodically verify the checksums and report corruptions
> to the Namenode so that new replicas can be created.
> We propose to have CRCs maintained for all HDFS data in much the same way as in GFS.
> I will update the jira with detailed requirements and design. This will include the same
> guarantees provided by the current implementation and will include an upgrade of current data.
> 
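As an illustration of point 2) above, a Datanode-side periodic verifier might look like the
sketch below. It assumes each block file has a companion metadata file holding one CRC32 per
fixed-size chunk; the file layout, names, and 512-byte chunk size are assumptions made for the
example, not the actual HDFS on-disk format.

import java.io.DataInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.zip.CRC32;

// Hypothetical Datanode-side verifier: a background thread would call
// verify() for each stored block on a slow schedule and report failures
// to the Namenode so a fresh replica can be made from a good copy.
public class BlockVerifier {
  private static final int BYTES_PER_CHECKSUM = 512;   // assumed chunk size

  /** Returns true iff every chunk of the block matches its stored CRC32. */
  public static boolean verify(File block, File checksums) throws IOException {
    CRC32 crc = new CRC32();
    byte[] chunk = new byte[BYTES_PER_CHECKSUM];
    FileInputStream data = new FileInputStream(block);
    DataInputStream sums = new DataInputStream(new FileInputStream(checksums));
    try {
      int n;
      while ((n = readFully(data, chunk)) > 0) {
        crc.reset();
        crc.update(chunk, 0, n);
        long expected = sums.readInt() & 0xffffffffL;  // 4-byte CRC per chunk
        if (crc.getValue() != expected) {
          return false;                                // corrupt chunk found
        }
      }
      return true;
    } finally {
      data.close();
      sums.close();
    }
  }

  // Fill buf, looping over short reads so chunks stay aligned; -1 at EOF.
  private static int readFully(FileInputStream in, byte[] buf) throws IOException {
    int off = 0;
    while (off < buf.length) {
      int n = in.read(buf, off, buf.length - off);
      if (n < 0) break;
      off += n;
    }
    return off == 0 ? -1 : off;
  }
}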

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

