hadoop-common-dev mailing list archives

From "Raghu Angadi (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-1470) Rework FSInputChecker and FSOutputSummer to support checksum code sharing between ChecksumFileSystem and block level crc dfs
Date Sat, 09 Jun 2007 01:31:26 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-1470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12502975 ]

Raghu Angadi commented on HADOOP-1470:
--------------------------------------

I prefer readData() (or readChunk()) and readChecksum(). The contract is that the amount
returned by readChunk() is what needs to be verified. Also, the caller can pass readChunk()
the buffer it has (usually the user buffer), and if the chunk does not fit in it, the
implementation of readChunk() will use its local buffer. This also avoids an extra copy.
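
For illustration, a readChunk() implementation might honor that buffer contract roughly like
this, using the ReadInfo structure from the interface sketch further down (hypothetical code;
nextChunkLen() and readExactly() are assumed helpers, not proposed API):
{code:title=readChunkBufferContract.java}
// Hypothetical sketch of the buffer contract described above. The common
// case reads straight into the user's buffer and avoids a copy; only an
// oversized chunk spills into a local buffer.
abstract class ChunkReaderSketch {
  private byte[] localBuf;

  // Assumed helpers: size of the next chunk (-1 at EOF), and a read that
  // fills buf[off..off+len) completely from the underlying stream.
  abstract int nextChunkLen() throws java.io.IOException;
  abstract void readExactly(byte[] buf, int off, int len) throws java.io.IOException;

  int readChunk(ReadInfo info) throws java.io.IOException {
    int chunkLen = nextChunkLen();
    if (chunkLen < 0) {
      return info.readLen = -1;                        // EOF, like InputStream.read()
    }
    if (chunkLen <= info.userLen) {
      // whole chunk fits in the user buffer: no extra copy
      readExactly(info.userBuffer, info.userOffset, chunkLen);
      info.localLen = 0;
    } else {
      // fill the user buffer, keep the overflow locally for later consumption
      readExactly(info.userBuffer, info.userOffset, info.userLen);
      int overflow = chunkLen - info.userLen;
      if (localBuf == null || localBuf.length < overflow) {
        localBuf = new byte[overflow];
      }
      readExactly(localBuf, 0, overflow);
      info.localBuf = localBuf;
      info.localOffset = 0;
      info.localLen = overflow;
    }
    return info.readLen = chunkLen;
  }
}
{code}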

I think the input checker is the more complex of the two. This interface does not easily suit
the output summer, and the output summer is pretty simple anyway. If we still want to share
the output summer, we could add a writeAvailable() call before it checksums the data.

The reader interface looks like this (not compiled):
{code:title=genericChecksums.java}
class ReadInfo {
   byte[] userBuffer;
   int userOffset;
   int userLen;

   int readLen;     // -1 means the same as InputStream.read() returning -1
   byte[] localBuf; // null if readLen <= userLen
   int localOffset;
   int localLen;    // readLen == userLen + localLen when localBuf is used
}

// Reader interface (the interface name here is just a placeholder):
interface ChunkReader {
  /*
   * The input checker verifies the checksum of the returned data and handles
   * checksum errors and retries; the number of retries can be controlled by
   * the implementation.
   * The input checker guarantees that readChunk() and readChecksum() are
   * called in sequence. If info.localLen > 0, the next readChunk() will be
   * called only after localLen bytes of data have been consumed.
   */
  int readChunk(ReadInfo info);

  /*
   * The number of bytes returned depends on the "type of checksum" provided
   * during creation of the input checker (similar to the DataChecksum class
   * in HADOOP-1134).
   * When the checksum size is zero (or the checksum object is null), the
   * implementation could decide to ignore or retry.
   * How do we support stopSumming() in the current ChecksumFS? Maybe a
   * return of -1 could indicate that.
   */
  int readChecksum(byte[] buf, int offset);
}

// Write interface. Do we really need this?
interface ChunkWriter {
  // Called when the checksum is reset. writeChecksum() will be called after
  // this many bytes have been written, or when the stream is closed.
  int writeChunkAvailable();

  // Guaranteed to obey writeChunkAvailable() above.
  int write(byte[] buf, int offset, int len);

  // Called just before writeChunkAvailable().
  int writeChecksum();
}
{code}
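
To make the calling sequence concrete, the checker's read() might drive the two calls roughly
like this (a hypothetical sketch, not part of the proposal; reader, checksumBuf, pos, and sum
are assumed fields, with sum standing in for a DataChecksum-style object from HADOOP-1134):
{code:title=checkerLoopSketch.java}
// Hypothetical driver loop in the generic input checker. compare() is an
// assumed verification method on the DataChecksum-style object.
int read(byte[] b, int off, int len) throws IOException {
  ReadInfo info = new ReadInfo();
  info.userBuffer = b;
  info.userOffset = off;
  info.userLen = len;

  int n = reader.readChunk(info);                   // step 1: one chunk of data
  if (n < 0) {
    return -1;                                      // EOF, like InputStream.read()
  }
  int csLen = reader.readChecksum(checksumBuf, 0);  // step 2: its checksum
  if (csLen > 0) {                                  // csLen <= 0: nothing to verify
    sum.reset();
    sum.update(b, off, n - info.localLen);          // bytes placed in the user buffer
    if (info.localLen > 0) {
      sum.update(info.localBuf, info.localOffset, info.localLen);
    }
    if (!sum.compare(checksumBuf, 0)) {
      throw new ChecksumException("Checksum error", pos);
    }
  }
  // Any bytes left in info.localBuf must be handed to the caller before
  // readChunk() is invoked again, per the contract above.
  return n - info.localLen;
}
{code}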


> Rework FSInputChecker and FSOutputSummer to support checksum code sharing between ChecksumFileSystem and block level crc dfs
> ----------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-1470
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1470
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: fs
>    Affects Versions: 0.12.3
>            Reporter: Hairong Kuang
>            Assignee: Hairong Kuang
>             Fix For: 0.14.0
>
>         Attachments: genericChecksum.patch
>
>
> Comment from Doug in HADOOP-1134:
> I'd prefer it if the CRC code could be shared with CheckSumFileSystem. In particular, it
> seems to me that FSInputChecker and FSOutputSummer could be extended to support pluggable
> sources and sinks for checksums, respectively, and DFSDataInputStream and DFSDataOutputStream
> could use these. Advantages of this are: (a) a single implementation of checksum logic to
> debug and maintain; (b) it keeps checksumming as close as possible to data generation and
> use. This patch computes checksums after data has been buffered, and validates them before
> it is buffered. We sometimes use large buffers and would like to guard against in-memory
> errors. The current checksum code catches a lot of such errors. So we should compute
> checksums after minimal buffering (just bytesPerChecksum, ideally) and validate them at the
> last possible moment (e.g., through the use of a small final buffer with a larger buffer
> behind it). I do not think this will significantly affect performance, and data integrity
> is a high priority.
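
One way to picture that last suggestion: a small buffer of just bytesPerChecksum sits in front
of the large buffer, and each chunk is summed before it ever enters the large buffer. A minimal
sketch, assuming plain java.util.zip.CRC32 rather than the actual DFS checksum classes:
{code:title=minimalBufferingSketch.java}
import java.io.IOException;
import java.io.OutputStream;
import java.util.zip.CRC32;
import java.util.zip.Checksum;

// Illustration only: data is checksummed per bytesPerChecksum chunk *before*
// it enters the larger buffered stream behind this one, so corruption inside
// the big buffer is still caught when the checksum is later verified.
class SummingOutputStream extends OutputStream {
  private final OutputStream large;   // the larger buffer behind us
  private final Checksum sum = new CRC32();
  private final byte[] small;         // small final buffer: just bytesPerChecksum
  private int count = 0;

  SummingOutputStream(OutputStream large, int bytesPerChecksum) {
    this.large = large;
    this.small = new byte[bytesPerChecksum];
  }

  public void write(int b) throws IOException {
    small[count++] = (byte) b;
    if (count == small.length) {
      flushChunk();
    }
  }

  public void close() throws IOException {
    if (count > 0) {
      flushChunk();                   // sum and flush the final partial chunk
    }
    large.close();
  }

  private void flushChunk() throws IOException {
    sum.update(small, 0, count);
    // a real stream would write sum.getValue() to the checksum sink here
    large.write(small, 0, count);
    sum.reset();
    count = 0;
  }
}
{code}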

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

