hadoop-common-dev mailing list archives

From "dhruba borthakur (JIRA)" <j...@apache.org>
Subject [jira] Created: (HADOOP-1262) file corruption detected because dfs client does not use replica blocks for checksum file
Date Mon, 16 Apr 2007 22:27:15 GMT
file corruption detected because dfs client does not use replica blocks for checksum file
-----------------------------------------------------------------------------------------

                 Key: HADOOP-1262
                 URL: https://issues.apache.org/jira/browse/HADOOP-1262
             Project: Hadoop
          Issue Type: Bug
          Components: dfs
            Reporter: dhruba borthakur


A block of a CRC file was corrupted. This caused the DFS client to detect a CRC mismatch.
The client tried all three replicas of the data file, but it did not try any replicas of the
CRC file. As a result, the client aborted the read request with a bad-CRC message.

07/04/16 20:42:26 INFO fs.FileSystem: Found checksum error in data stream at block=blk_6205660483922449140
on datanode=xx:50010
07/04/16 20:42:26 INFO fs.FileSystem: Found checksum error in checksum stream at block=blk_-3722915954820866561
on datanode=yy:50010

07/04/16 20:42:26 INFO fs.FileSystem: Found checksum error in data stream at block=blk_6205660483922449140
on datanode=zz:50010
07/04/16 20:42:26 INFO fs.FileSystem: Found checksum error in checksum stream at block=blk_-3722915954820866561
on datanode=yy:50010

07/04/16 20:42:26 INFO fs.FileSystem: Found checksum error in data stream at block=blk_6205660483922449140
on datanode=xx:50010
07/04/16 20:42:26 INFO fs.FileSystem: Found checksum error in checksum stream at block=blk_-3722915954820866561
on datanode=yy:50010

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

