Mailing-List: contact hadoop-dev-help@lucene.apache.org; run by ezmlm
Reply-To: hadoop-dev@lucene.apache.org
Message-ID: <31778394.1187187810901.JavaMail.jira@brutus>
Date: Wed, 15 Aug 2007 07:23:30 -0700 (PDT)
From: "Enis Soztutar (JIRA)"
To: hadoop-dev@lucene.apache.org
Subject: [jira] Reopened: (HADOOP-1629) Block CRC Unit Tests: upgrade test
In-Reply-To: <28360280.1184730124448.JavaMail.jira@brutus>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

     [ https://issues.apache.org/jira/browse/HADOOP-1629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Enis Soztutar reopened HADOOP-1629:
-----------------------------------

I am reopening this issue, since
TestDFSUpgradeFromImage fails for the hadoop-patch and hudson-nightly builds on Hudson. The error thrown is:

{noformat}
java.io.IOException: tar: z: unknown function modifier
	at org.apache.hadoop.fs.Command.run(Command.java:33)
	at org.apache.hadoop.fs.Command.execCommand(Command.java:89)
	at org.apache.hadoop.dfs.TestDFSUpgradeFromImage.setUp(TestDFSUpgradeFromImage.java:75)

Standard Output
2007-08-15 13:22:38,601 INFO dfs.TestDFSUpgradeFromImage (TestDFSUpgradeFromImage.java:setUp(72)) - Unpacking the tar file /export/home/hudson/hudson/jobs/Hadoop-Patch/workspace/trunk/build/test/cache/hadoop-12-dfs-dir.tgz
{noformat}

It seems that gzip is not installed on lucene.zones.apache.org. Could someone with the privileges check this out?

> Block CRC Unit Tests: upgrade test
> ----------------------------------
>
>                 Key: HADOOP-1629
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1629
>             Project: Hadoop
>          Issue Type: Test
>          Components: dfs
>    Affects Versions: 0.14.0
>            Reporter: Nigel Daley
>            Assignee: Raghu Angadi
>            Priority: Blocker
>             Fix For: 0.14.0
>
>         Attachments: hadoop-12-dfs-dir.tgz, HADOOP-1629-trunk.patch, HADOOP-1629.patch, HADOOP-1629.patch, HADOOP-1629.patch
>
>
> HADOOP-1286 introduced a distributed upgrade framework. One or more unit tests should be developed that start with a zipped-up Hadoop 0.12 file system (included under version control in Hadoop's src/test directory) and attempt to upgrade it to the current version of Hadoop (i.e. the version that the tests are running against).
> The zipped-up file system should include some "interesting" files, such as:
> - zero-length files
> - a file with replication set higher than the number of datanodes
> - a file with no .crc file
> - a file with a corrupt .crc file
> - a file with multiple blocks (will need to set dfs.block.size to a small value)
> - a file with multiple checksum blocks
> - an empty directory
> - all of the above again, but with a different io.bytes.per.checksum setting
> The class that generates the zipped-up file system should also be included in this patch.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
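The "tar: z: unknown function modifier" failure above is what Solaris /usr/bin/tar reports when asked to handle the "z" (gzip) modifier it does not support. A portable workaround is to decompress and untar as separate steps. The following is only a sketch of that idea (the function name and fallback to a "gtar" binary are assumptions, not anything from the patch):

```shell
#!/bin/sh
# Unpack a .tgz without relying on tar's "z" modifier, which the
# Solaris tar on lucene.zones.apache.org rejects. Assumes gzip
# (or GNU tar installed as "gtar") is on the PATH.
extract_tgz() {
    tarball=$1
    dest=$2
    mkdir -p "$dest" || return 1
    if command -v gzip >/dev/null 2>&1; then
        # Decompress with gzip and feed plain tar input to tar xf -.
        gzip -dc "$tarball" | (cd "$dest" && tar xf -)
    elif command -v gtar >/dev/null 2>&1; then
        # GNU tar understands the z modifier directly.
        gtar xzf "$tarball" -C "$dest"
    else
        echo "extract_tgz: neither gzip nor gtar found" >&2
        return 1
    fi
}
```

For example, the test's setUp step would amount to something like `extract_tgz build/test/cache/hadoop-12-dfs-dir.tgz build/test/data`.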
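To make the list of "interesting" files concrete, here is an illustrative sketch (not the actual hadoop-12-dfs-dir.tgz attachment) of laying out a few of the purely local cases and packaging them as a gzip-compressed tar, the format the test unpacks. The directory and file names are made up, and the DFS-only cases (over-replication, small dfs.block.size, io.bytes.per.checksum variants) need a running 0.12 cluster, so they are not shown:

```shell
#!/bin/sh
# Sketch: build a small fixture directory of "interesting" files and
# package it the way TestDFSUpgradeFromImage expects to unpack it.
set -e
ROOT=dfs-dir-fixture

mkdir -p "$ROOT/empty-dir"                          # empty directory

: > "$ROOT/zero-length"                             # zero-length file

echo "data, no sidecar" > "$ROOT/no-crc"            # file with no .crc file

echo "data, bad sidecar" > "$ROOT/bad-crc"          # file whose .crc sidecar
echo "not-a-real-checksum" > "$ROOT/.bad-crc.crc"   # is deliberately corrupt

# Compress with gzip and tar as separate steps, matching the tgz layout.
tar cf - "$ROOT" | gzip > hadoop-12-dfs-dir.tgz
```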