hadoop-common-dev mailing list archives

From "Raghu Angadi (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-1470) Rework FSInputChecker and FSOutputSummer to support checksum code sharing between ChecksumFileSystem and block level crc dfs
Date Wed, 06 Jun 2007 23:45:26 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-1470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12502148 ]

Raghu Angadi commented on HADOOP-1470:
--------------------------------------

> In the block-level-crc dfs, do we allow different values of bytesPerSum for blocks in a file?

Yes, though there is no way to do this now. Also, we don't enforce that each block of a file has the same {{bps}}, so theoretically yes.
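
To make the "theoretically yes" concrete, here is a purely hypothetical sketch (this class does not exist in the code; the layout is an assumption for illustration) of per-block checksum metadata that records its own {{bps}}. Since each block would carry its own value, nothing in such a layout forces all blocks of one file to agree:

{code:java}
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

// Hypothetical per-block checksum header, for illustration only: each block
// records the bytesPerSum it was checksummed with, so blocks of one file
// could in principle differ.
class BlockChecksumHeaderSketch {
  static final short VERSION = 1;
  final int bytesPerSum;  // chunk size this particular block was summed with

  BlockChecksumHeaderSketch(int bytesPerSum) {
    this.bytesPerSum = bytesPerSum;
  }

  void write(DataOutput out) throws IOException {
    out.writeShort(VERSION);
    out.writeInt(bytesPerSum);
  }

  static BlockChecksumHeaderSketch read(DataInput in) throws IOException {
    if (in.readShort() != VERSION) {
      throw new IOException("unknown checksum header version");
    }
    return new BlockChecksumHeaderSketch(in.readInt());
  }
}
{code}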

> Do we allow different block sizes for blocks in a file?

No, just like the current DFS; i.e., this is not a block-level-crc-specific issue.

bq. If data are checksummed at the FileSystem level, there is another complexity in the block-level-crc dfs. When a block size is not a multiple of bytesPerSum, we also need to output/verify a checksum at the end of each block. So it is harder to decide checksumChunk boundaries.

Good point. Yes, this will further influence the ChecksumFileSystem implementation. This interdependency between ChecksumFS and DFS could increase further with this jira.
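
For illustration, here is a minimal sketch of the boundary math involved (class and method names are mine, not from any patch): when the block size is not a multiple of bytesPerSum, the last checksum chunk of each block is short, so the writer must emit a checksum at the block end and the reader must expect a partial chunk there.

{code:java}
// Minimal sketch (not actual patch code) of checksum-chunk sizing when the
// block size is not a multiple of bytesPerSum.
class ChunkBoundarySketch {
  static int chunkSize(long offsetInBlock, long blockSize, int bytesPerSum) {
    long remaining = blockSize - offsetInBlock;     // bytes left in this block
    return (int) Math.min(bytesPerSum, remaining);  // last chunk may be partial
  }

  public static void main(String[] args) {
    // Example: blockSize = 1000, bytesPerSum = 512
    System.out.println(chunkSize(0, 1000, 512));    // 512 -> full chunk
    System.out.println(chunkSize(512, 1000, 512));  // 488 -> partial chunk at block end
  }
}
{code}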




> Rework FSInputChecker and FSOutputSummer to support checksum code sharing between ChecksumFileSystem and block level crc dfs
> ----------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-1470
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1470
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: fs
>    Affects Versions: 0.12.3
>            Reporter: Hairong Kuang
>            Assignee: Hairong Kuang
>             Fix For: 0.14.0
>
>
> Comment from Doug in HADOOP-1134:
> I'd prefer it if the CRC code could be shared with CheckSumFileSystem. In particular, it seems to me that FSInputChecker and FSOutputSummer could be extended to support pluggable sources and sinks for checksums, respectively, and DFSDataInputStream and DFSDataOutputStream could use these. Advantages of this are: (a) a single implementation of checksum logic to debug and maintain; (b) it keeps checksumming as close as possible to data generation and use. This patch computes checksums after data has been buffered, and validates them before it is buffered. We sometimes use large buffers and would like to guard against in-memory errors. The current checksum code catches a lot of such errors. So we should compute checksums after minimal buffering (just bytesPerChecksum, ideally) and validate them at the last possible moment (e.g., through the use of a small final buffer with a larger buffer behind it). I do not think this will significantly affect performance, and data integrity is a high priority.
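
As a rough illustration of the pluggable-sink idea above (class and method names here are assumptions, not the actual FSOutputSummer API): the summer buffers only bytesPerChecksum bytes at a time, computes the CRC, and hands both data and checksum to a subclass-defined sink, so a DFS stream and a ChecksumFileSystem stream would differ only in where chunks go.

{code:java}
import java.io.IOException;
import java.io.OutputStream;
import java.util.zip.CRC32;

// Sketch of an output summer with a pluggable checksum sink (illustrative
// names, not the real API). Only one checksum chunk is ever buffered, so
// checksums are computed close to data generation, as Doug suggests.
abstract class OutputSummerSketch extends OutputStream {
  private final CRC32 crc = new CRC32();
  private final byte[] buf;  // holds at most one checksum chunk
  private int count = 0;

  OutputSummerSketch(int bytesPerChecksum) {
    this.buf = new byte[bytesPerChecksum];
  }

  // The pluggable part: where a chunk and its checksum actually go.
  protected abstract void writeChunk(byte[] b, int off, int len, int checksum)
      throws IOException;

  @Override
  public void write(int b) throws IOException {
    buf[count++] = (byte) b;
    if (count == buf.length) {
      flushChunk();
    }
  }

  @Override
  public void close() throws IOException {
    if (count > 0) {
      flushChunk();  // partial final chunk still gets a checksum
    }
  }

  private void flushChunk() throws IOException {
    crc.update(buf, 0, count);
    writeChunk(buf, 0, count, (int) crc.getValue());
    crc.reset();
    count = 0;
  }
}
{code}

A matching input checker would mirror this with a pluggable source: read one chunk plus its stored checksum, recompute, and compare just before handing bytes to the caller.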

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

