hadoop-common-dev mailing list archives

From "Sameer Paranjpye (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-1134) Block level CRCs in HDFS
Date Thu, 29 Mar 2007 20:33:25 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12485339 ]

Sameer Paranjpye commented on HADOOP-1134:
------------------------------------------

> Why wouldn't the map tasks run on a node where the block is local? The checksum data
> would need to be read over the network, but checksums are 1% the size of data, and
> we typically assume that net reads from a random node are 10x slower than local disk
> reads, so the checksum network i/o should only add 10% to the cost of reading the
> block, right?
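
For concreteness, the arithmetic behind the quoted 10% estimate, taking the 1% checksum
size and the 10x network-vs-local-disk penalty above as given:

    time to read a block of size B locally:  B / d            (d = local disk rate)
    time to read its checksums remotely:     0.01B / (d/10) = 0.1B / d
    relative overhead:                       (0.1B/d) / (B/d) = 10%

So the estimate follows directly from those two assumptions.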

Yes, it could be done that way, if each split were a set of block instances on a node. The
client would need a way to go from a block id to a .crc file via an extension of the Namenode
API. The difficulty there is in determining the set of validated files from the set of validated
blocks, and hence knowing which .crc files can be deleted. Of course, all the .crc files could
simply be deleted at the end.
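
A minimal sketch of what such a Namenode API extension might look like; the interface and
names below are hypothetical, not an existing Hadoop API:

    import java.io.IOException;

    // Hypothetical addition to the client-Namenode protocol: map a block id to
    // the .crc file that covers it, so a task holding only block ids can still
    // fetch checksums. Would return null once the file has been validated and
    // its .crc file deleted.
    interface CrcLookup {
      CrcLocation getCrcForBlock(long blockId) throws IOException;
    }

    // Pairs the .crc file path with the offset of this block's checksums in it.
    class CrcLocation {
      final String crcPath;  // e.g. the hidden ".foo.crc" beside "foo"
      final long offset;     // where this block's checksums start in the .crc file
      CrcLocation(String crcPath, long offset) {
        this.crcPath = crcPath;
        this.offset = offset;
      }
    }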

The way I was thinking about it was to have each split be a file or a set of files; it would
be hard to schedule local to all the blocks in that case. This requires practically no API
changes, since there already exists an API to report corrupt blocks. Once a file is validated,
the .crc file would be deleted by the client. The set of .crc files remaining at the end tells
you exactly which data is suspect. This feels very clean, but doesn't do such a great job
of ensuring data locality.
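
As a rough sketch of that per-file flow (the two abstract methods below are hypothetical
stand-ins; the only real dependency is the existing corrupt-block reporting API):

    import java.io.IOException;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Sketch: validate one file against its .crc file. On success the .crc file
    // is deleted; on failure it is left in place, so the .crc files remaining at
    // the end of the job mark exactly the suspect data.
    abstract class FileValidator {
      // hypothetical: recompute and compare checksums for every block of the file
      abstract boolean allBlockChecksumsMatch(Path file, Path crcFile) throws IOException;
      // stand-in for the existing corrupt-block reporting API
      abstract void reportCorruptBlocks(Path file) throws IOException;

      void validate(FileSystem fs, Path file, Path crcFile) throws IOException {
        if (allBlockChecksumsMatch(file, crcFile)) {
          fs.delete(crcFile);        // file is clean; drop its checksums
        } else {
          reportCorruptBlocks(file); // keep the .crc file as a marker
        }
      }
    }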




> Block level CRCs in HDFS
> ------------------------
>
>                 Key: HADOOP-1134
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1134
>             Project: Hadoop
>          Issue Type: New Feature
>          Components: dfs
>            Reporter: Raghu Angadi
>         Assigned To: Raghu Angadi
>
> Currently CRCs are handled at the FileSystem level and are transparent to core HDFS. See
> the recent improvement HADOOP-928 (which can add checksums to a given filesystem) for more
> about it. Though this has served us well, there are a few disadvantages:
> 1) This doubles the namespace in HDFS (or other filesystem implementations). In many cases,
> it nearly doubles the number of blocks. Taking the namenode out of CRCs would nearly double
> namespace performance, both in terms of CPU and memory.
> 2) Since CRCs are transparent to HDFS, it cannot actively detect corrupted blocks. With
> block level CRCs, the Datanode can periodically verify the checksums and report corruptions
> to the namenode so that new replicas can be created.
> We propose to have CRCs maintained for all HDFS data in much the same way as in GFS.
> I will update the jira with detailed requirements and design. This will include the same
> guarantees provided by the current implementation and an upgrade of current data.
> 
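
(A minimal sketch of the kind of periodic verification point 2 above envisions. Every name
below is hypothetical; this illustrates the idea only, not the eventual HADOOP-1134
implementation:)

    import java.io.IOException;
    import java.util.zip.CRC32;

    // Hypothetical datanode-side scan: recompute the CRC32 of each fixed-size
    // chunk of every local block, compare against the stored checksum, and
    // report any mismatch so the namenode can schedule re-replication.
    abstract class PeriodicBlockVerifier implements Runnable {
      abstract Iterable<Long> localBlockIds();
      abstract int chunkCount(long blockId);
      abstract byte[] readChunk(long blockId, int chunk) throws IOException;
      abstract int storedCrc(long blockId, int chunk) throws IOException;
      abstract void reportCorruption(long blockId); // notify the namenode

      public void run() {
        for (long blockId : localBlockIds()) {
          try {
            for (int c = 0; c < chunkCount(blockId); c++) {
              byte[] data = readChunk(blockId, c);
              CRC32 crc = new CRC32();
              crc.update(data, 0, data.length);
              if ((int) crc.getValue() != storedCrc(blockId, c)) {
                reportCorruption(blockId);
                break;                             // one bad chunk condemns the block
              }
            }
          } catch (IOException e) {
            reportCorruption(blockId);             // unreadable counts as corrupt
          }
        }
      }
    }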

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

