From "Doug Cutting (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-1134) Block level CRCs in HDFS
Date Fri, 01 Jun 2007 16:53:15 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12500758 ]

Doug Cutting commented on HADOOP-1134:

> I am still not clear which length is missing.

It's a minor point, but if we write async-io daemons for this protocol, then the easier it is
to parse the total packet length, the easier those daemons will be to write. Placing the total
packet length at a fixed position at the front of the packet, so that it can be read generically
without first determining what kind of packet it is, will simplify things. An async daemon will
typically buffer an entire request as it arrives in small pieces, then, once the request is
complete, perform an action. We can always add such a length field later, if and when we write
async daemons, but rolling out those daemons may then take longer, since it may require protocol
changes.
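
To make this concrete, below is a minimal sketch of the framing loop such a daemon would run.
The 4-byte total-length field at offset 0 (counting the whole packet, length field included) is
a hypothetical layout for illustration, not the actual HDFS wire format:

    import java.nio.ByteBuffer;

    public class PacketFramer {
      private static final int LEN_FIELD_SIZE = 4;      // assumed: 4-byte length at offset 0
      private final ByteBuffer buf = ByteBuffer.allocate(64 * 1024);  // sketch: assumed large enough

      /** Feed bytes as they trickle in from a non-blocking channel. */
      public void onBytes(ByteBuffer incoming) {
        buf.put(incoming);
        buf.flip();
        // Split out every complete packet currently buffered -- note that no
        // packet-type knowledge is needed to do the framing.
        while (buf.remaining() >= LEN_FIELD_SIZE) {
          int totalLen = buf.getInt(buf.position());    // peek the length without consuming it
          if (buf.remaining() < totalLen) {
            break;                                      // packet incomplete; wait for more bytes
          }
          byte[] packet = new byte[totalLen];
          buf.get(packet);
          dispatch(packet);                             // only now look at the packet type
        }
        buf.compact();
      }

      private void dispatch(byte[] packet) {
        // type-specific handling happens here, once the whole request has arrived
      }
    }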

> So start_offset can be removed from DATA_CHUNK.  I would prefer to keep the length so
that loops that read and write to these streams could be a little simpler.

That sounds fine.
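
For illustration, a read loop of the kind the per-chunk length enables could look like the
following. The DATA_CHUNK layout assumed here (an int length, the data, then a 4-byte CRC,
with a zero-length chunk as the end marker) is a guess for the sketch, not the committed
format:

    import java.io.DataInputStream;
    import java.io.IOException;
    import java.util.zip.CRC32;

    public class ChunkReader {
      /** Reads DATA_CHUNKs until a zero-length sentinel (an assumed convention). */
      public void readChunks(DataInputStream in) throws IOException {
        CRC32 crc = new CRC32();
        while (true) {
          int len = in.readInt();                      // per-chunk length field
          if (len == 0) break;                         // assumed end-of-stream marker
          byte[] data = new byte[len];
          in.readFully(data);                          // no external bookkeeping needed:
          long stored = in.readInt() & 0xffffffffL;    // each chunk declares its own size
          crc.reset();
          crc.update(data, 0, len);
          if (crc.getValue() != stored) {
            throw new IOException("checksum mismatch in chunk");
          }
        }
      }
    }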

> only notable code replication I see is retry logic in FSInputChecker.readBuffer() where
> seekToNewSource() and reportChecksumFailure() are executed

That is some of the most delicate code, and it has taken several revisions to reach its current
level of correctness. In other words, it is logic that should not be replicated if at all
possible.
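
Roughly, the pattern in question is the one sketched below. The signatures are guesses that
echo the names in the quote; the real FSInputChecker.readBuffer() is more involved, which is
exactly why it should not be duplicated:

    import java.io.IOException;

    abstract class ChecksumRetrySketch {
      static class ChecksumException extends IOException {}

      // assumed helpers; the names echo the quote, the signatures are hypothetical
      abstract int readAndVerify(byte[] buf, int off, int len) throws IOException;
      abstract boolean seekToNewSource(long pos) throws IOException;  // try another replica
      abstract void reportChecksumFailure(long pos);                  // flag the bad replica

      int readBuffer(byte[] buf, int off, int len, long pos) throws IOException {
        while (true) {
          try {
            return readAndVerify(buf, off, len);
          } catch (ChecksumException ce) {
            reportChecksumFailure(pos);     // so the corrupt replica can be re-replicated
            if (!seekToNewSource(pos)) {
              throw ce;                     // no healthy replica left to try
            }
            // otherwise loop and retry the read against the new source
          }
        }
      }
    }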

> Block level CRCs in HDFS
> ------------------------
>                 Key: HADOOP-1134
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1134
>             Project: Hadoop
>          Issue Type: New Feature
>          Components: dfs
>            Reporter: Raghu Angadi
>            Assignee: Raghu Angadi
>         Attachments: bc-no-upgrade-05302007.patch, DfsBlockCrcDesign-05305007.htm
> Currently CRCs are handled at the FileSystem level and are transparent to core HDFS. See the
> recent improvement HADOOP-928 (which can add checksums to a given filesystem) for more about
> it. Though this has served us well, there are a few disadvantages:
> 1) This doubles the namespace in HDFS (or other filesystem implementations). In many cases,
> it nearly doubles the number of blocks. Taking the namenode out of CRC handling would nearly
> double namespace performance, both in terms of CPU and memory.
> 2) Since CRCs are transparent to HDFS, it cannot actively detect corrupted blocks. With
> block-level CRCs, the datanode can periodically verify the checksums and report corruption to
> the namenode so that new replicas can be created (a verification sketch follows this
> description).
> We propose to maintain CRCs for all HDFS data in much the same way as GFS does. I will update
> the JIRA with detailed requirements and a design. This will provide the same guarantees as the
> current implementation and will include an upgrade of current data.
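
For illustration only, the periodic verification described in 2) might look roughly like the
sketch below on the datanode. The 512-byte chunk size and the checksum side-file layout are
assumptions for the sketch, not the design in the attached document:

    import java.io.DataInputStream;
    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.zip.CRC32;

    public class BlockVerifier {
      private static final int BYTES_PER_CHECKSUM = 512;   // assumed chunk size

      /** Returns true if every chunk of the block matches its stored CRC. */
      public boolean verify(File block, File checksums) throws IOException {
        try (DataInputStream data = new DataInputStream(new FileInputStream(block));
             DataInputStream sums = new DataInputStream(new FileInputStream(checksums))) {
          byte[] chunk = new byte[BYTES_PER_CHECKSUM];
          CRC32 crc = new CRC32();
          int n;
          while ((n = data.read(chunk)) > 0) {
            crc.reset();
            crc.update(chunk, 0, n);
            long stored = sums.readInt() & 0xffffffffL;    // one 4-byte CRC per chunk
            if (crc.getValue() != stored) {
              return false;   // corruption found: report to the namenode for re-replication
            }
          }
          return true;
        }
      }
    }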

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
