Return-Path: 
Delivered-To: apmail-lucene-hadoop-dev-archive@locus.apache.org
Received: (qmail 49080 invoked from network); 27 Sep 2007 21:58:14 -0000
Received: from hermes.apache.org (HELO mail.apache.org) (140.211.11.2) by minotaur.apache.org with SMTP; 27 Sep 2007 21:58:14 -0000
Received: (qmail 9015 invoked by uid 500); 27 Sep 2007 21:58:03 -0000
Delivered-To: apmail-lucene-hadoop-dev-archive@lucene.apache.org
Received: (qmail 8973 invoked by uid 500); 27 Sep 2007 21:58:03 -0000
Mailing-List: contact hadoop-dev-help@lucene.apache.org; run by ezmlm
Precedence: bulk
List-Help: 
List-Unsubscribe: 
List-Post: 
List-Id: 
Reply-To: hadoop-dev@lucene.apache.org
Delivered-To: mailing list hadoop-dev@lucene.apache.org
Received: (qmail 8964 invoked by uid 99); 27 Sep 2007 21:58:03 -0000
Received: from nike.apache.org (HELO nike.apache.org) (192.87.106.230) by apache.org (qpsmtpd/0.29) with ESMTP; Thu, 27 Sep 2007 14:58:03 -0700
X-ASF-Spam-Status: No, hits=-100.0 required=10.0 tests=ALL_TRUSTED
X-Spam-Check-By: apache.org
Received: from [140.211.11.4] (HELO brutus.apache.org) (140.211.11.4) by apache.org (qpsmtpd/0.29) with ESMTP; Thu, 27 Sep 2007 22:00:33 +0000
Received: from brutus (localhost [127.0.0.1]) by brutus.apache.org (Postfix) with ESMTP id B96AA7141ED for ; Thu, 27 Sep 2007 14:57:50 -0700 (PDT)
Message-ID: <6448295.1190930270755.JavaMail.jira@brutus>
Date: Thu, 27 Sep 2007 14:57:50 -0700 (PDT)
From: "Owen O'Malley (JIRA)" 
To: hadoop-dev@lucene.apache.org
Subject: [jira] Resolved: (HADOOP-518) hadoop dfs -cp foo/bar/bad-file mumble/new-file copies a file with a bad checksum
In-Reply-To: <20524097.1157747422363.JavaMail.jira@brutus>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
X-Virus-Checked: Checked by ClamAV on apache.org

     [ https://issues.apache.org/jira/browse/HADOOP-518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Owen O'Malley resolved HADOOP-518.
----------------------------------
    Resolution: Duplicate

This was fixed by HADOOP-1134 (block crcs).

> hadoop dfs -cp foo/bar/bad-file mumble/new-file copies a file with a bad checksum
> ---------------------------------------------------------------------------------
>
>                 Key: HADOOP-518
>                 URL: https://issues.apache.org/jira/browse/HADOOP-518
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>         Environment: red hat
>            Reporter: Dick King
>            Assignee: Sameer Paranjpye
>
> I have a file that reliably generates a checksum error when it's read, whether by a map/reduce job as input or by a "dfs -get" command.
> However...
> if I do a "dfs -cp" from the file with the bad checksum, the copy can be read in its entirety without a checksum error.
> I would consider it reasonable for the command to fail, or for the new file to be created but to also have a checksum error in the same place, but this behavior is unsettling.
> -dk

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.