Message-ID: <23172111.1177454355524.JavaMail.jira@brutus>
Date: Tue, 24 Apr 2007 15:39:15 -0700 (PDT)
From: "dhruba borthakur (JIRA)"
To: hadoop-dev@lucene.apache.org
Reply-To: hadoop-dev@lucene.apache.org
Subject: [jira] Commented: (HADOOP-1262) file corruption detected because dfs client does not use replica blocks for checksum file
In-Reply-To: <5866092.1176762435275.JavaMail.jira@brutus>

    [ https://issues.apache.org/jira/browse/HADOOP-1262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12491478 ]

dhruba borthakur commented on HADOOP-1262:
------------------------------------------

+1. Looks good.

> file corruption detected because dfs client does not use replica blocks for checksum file
> -----------------------------------------------------------------------------------------
>
>                 Key: HADOOP-1262
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1262
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>            Reporter: dhruba borthakur
>         Assigned To: Hairong Kuang
>        Attachments: newSource.patch
>
>
> A block of a crc file was corrupted. This caused the DFS client to detect a CRC corruption. The client tried all three replicas of the data file, but it did not try any replicas of the CRC file. This caused the client to abort the read request with a bad-CRC message.
> 07/04/16 20:42:26 INFO fs.FileSystem: Found checksum error in data stream at block=blk_6205660483922449140 on datanode=xx:50010
> 07/04/16 20:42:26 INFO fs.FileSystem: Found checksum error in checksum stream at block=blk_-3722915954820866561 on datanode=yy:50010
> 07/04/16 20:42:26 INFO fs.FileSystem: Found checksum error in data stream at block=blk_6205660483922449140 on datanode=zz:50010
> 07/04/16 20:42:26 INFO fs.FileSystem: Found checksum error in checksum stream at block=blk_-3722915954820866561 on datanode=yy:50010
> 07/04/16 20:42:26 INFO fs.FileSystem: Found checksum error in data stream at block=blk_6205660483922449140 on datanode=xx:50010
> 07/04/16 20:42:26 INFO fs.FileSystem: Found checksum error in checksum stream at block=blk_-3722915954820866561 on datanode=yy:50010

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
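
The log excerpt above shows the problem: the client rotates through replicas of the data block (datanodes xx, zz, ...) but re-reads the same checksum-block replica (yy) on every attempt, so a single corrupt .crc replica is enough to fail the whole read. Below is a minimal, purely illustrative Java sketch of the retry idea behind the fix. The class and method names are invented for this example and are not the real DFSClient/ChecksumFileSystem code; the actual change is in newSource.patch. It also simplifies by assuming one CRC-32 value per block, whereas HDFS checksums are computed per chunk.

    // Hypothetical sketch: on a checksum mismatch, advance through replicas of
    // the checksum block as well as the data block, instead of cycling only
    // through data-block replicas. Names here are illustrative only.
    import java.util.List;
    import java.util.zip.CRC32;

    public class ReplicaAwareChecksumRead {

      /** Trivial stand-in for one replica of a block on one datanode. */
      static class Replica {
        final String datanode;
        final byte[] bytes;
        Replica(String datanode, byte[] bytes) {
          this.datanode = datanode;
          this.bytes = bytes;
        }
      }

      /** Try (data replica, checksum replica) pairs until one pair verifies. */
      static byte[] readVerified(List<Replica> dataReplicas,
                                 List<Replica> checksumReplicas) {
        for (Replica data : dataReplicas) {
          for (Replica sum : checksumReplicas) {
            if (crcMatches(data.bytes, sum.bytes)) {
              return data.bytes;   // found a verified copy
            }
            // A mismatch could mean a bad data replica *or* a bad checksum
            // replica, so keep iterating over both lists before giving up.
          }
        }
        throw new RuntimeException("Checksum error on all replica combinations");
      }

      static boolean crcMatches(byte[] data, byte[] storedCrc) {
        CRC32 crc = new CRC32();
        crc.update(data);
        long expected = crc.getValue();
        long actual = java.nio.ByteBuffer.wrap(storedCrc).getLong();
        return expected == actual;
      }
    }

The key design point is the nested iteration: a mismatch by itself cannot tell the client whether the data replica or the checksum replica is corrupt, so the read should only be abandoned as a genuine corruption after every data/checksum replica pairing has failed.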