hadoop-common-dev mailing list archives

From "Raghu Angadi (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-1134) Block level CRCs in HDFS
Date Tue, 03 Apr 2007 17:26:35 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12486429

Raghu Angadi commented on HADOOP-1134:

> I think we should handle this the same way we'll handle things if/when a CRC file is missing after the upgrade.

After the upgrade, I think it is cleaner and simpler to treat this as a hard error on the block, i.e., the block will be considered badly corrupt and handled accordingly.

> That shouldn't happen, but it might, and we need to think about what we should do in that case. My guess is that we should return a null checksum with the data when it is read, and let the client decide whether to accept or reject the unchecksummed data.

How do we handle a transfer from one datanode to another?

I understand it will be better to be flexible, but one way or the other we have to deal with real hard errors (mostly caused by hardware faults). If our software is so buggy that we need to expect the CRC file not to exist and handle that as an 'expected condition', I think it would be better to spend more time fixing those bugs. I vote against treating this as a soft error.

Regarding the option to serve possibly corrupt data, I was thinking of making the client explicitly ask the datanode to ignore checksum errors at the beginning of reading data from the datanode (possibly based on client config). Since the CRC is served inline on the connection, we would need some convention like 'a checksum of 0000 followed by some magic 8 bytes means the checksum is invalid', or some such thing.
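To make that convention concrete, here is a minimal sketch of how a datanode might frame a chunk whose checksum it cannot vouch for, assuming the client asked to ignore checksum errors. This is an illustrative assumption only, not the actual HDFS wire format; the class name, method, framing, and magic value are all hypothetical.

import java.io.DataOutputStream;
import java.io.IOException;

public class ChunkWriter {
    // Assumed sentinel: a zero CRC followed by 8 magic bytes means
    // "checksum unavailable or invalid, data is served as-is".
    private static final int NULL_CRC = 0x00000000;
    private static final long BAD_CHECKSUM_MAGIC = 0xFEEDFACEDEADBEEFL;

    /** Writes one data chunk preceded by its CRC, or by the sentinel if the CRC is unusable. */
    public static void writeChunk(DataOutputStream out, byte[] chunk,
                                  int crc, boolean crcValid) throws IOException {
        if (crcValid) {
            out.writeInt(crc);                 // normal case: inline CRC precedes the data
        } else {
            out.writeInt(NULL_CRC);            // sentinel part 1: zero checksum
            out.writeLong(BAD_CHECKSUM_MAGIC); // sentinel part 2: magic marker
        }
        out.writeInt(chunk.length);
        out.write(chunk);
    }
}

The client side would read the 4-byte CRC first and, on seeing the zero value, read the 8 magic bytes before deciding (per its config) whether to accept the unverified data.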

> Block level CRCs in HDFS
> ------------------------
>                 Key: HADOOP-1134
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1134
>             Project: Hadoop
>          Issue Type: New Feature
>          Components: dfs
>            Reporter: Raghu Angadi
>         Assigned To: Raghu Angadi
> Currently CRCs are handled at the FileSystem level and are transparent to core HDFS. See the recent improvement HADOOP-928 (which can add checksums to a given filesystem) for more about it. Though this has served us well, there are a few disadvantages:
> 1) This doubles the namespace in HDFS (or other filesystem implementations). In many cases, it nearly doubles the number of blocks. Taking the namenode out of CRCs would nearly double namespace performance in terms of both CPU and memory.
> 2) Since CRCs are transparent to HDFS, it cannot actively detect corrupted blocks. With block-level CRCs, the datanode can periodically verify the checksums and report corruptions to the namenode so that new replicas can be created.
> We propose to have CRCs maintained for all HDFS data in much the same way as in GFS. I will update the JIRA with detailed requirements and design. This will include the same guarantees provided by the current implementation and will include an upgrade of the current data.
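As a side note on point 2 of the description above, the following is a minimal sketch of what a periodic datanode-side verification pass could look like. It is only an assumption for illustration; the class names, the reporting hook, and the in-memory block representation are hypothetical and not part of the HADOOP-1134 design.

import java.util.List;
import java.util.zip.CRC32;

interface NamenodeReporter {
    void reportBadBlock(String blockId);   // assumed hook for reporting a corrupt block
}

class BlockScanner {
    private final NamenodeReporter reporter;

    BlockScanner(NamenodeReporter reporter) {
        this.reporter = reporter;
    }

    /** Recomputes each block's CRC and reports any mismatch with the stored value. */
    void scan(List<StoredBlock> blocks) {
        for (StoredBlock b : blocks) {
            CRC32 crc = new CRC32();
            crc.update(b.data);
            if (crc.getValue() != b.storedCrc) {
                reporter.reportBadBlock(b.id);   // namenode can then schedule re-replication
            }
        }
    }

    static class StoredBlock {
        final String id;
        final byte[] data;
        final long storedCrc;

        StoredBlock(String id, byte[] data, long storedCrc) {
            this.id = id;
            this.data = data;
            this.storedCrc = storedCrc;
        }
    }
}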

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
