From "Enis Soztutar (JIRA)" <j...@apache.org>
Subject [jira] Reopened: (HADOOP-1629) Block CRC Unit Tests: upgrade test
Date Wed, 15 Aug 2007 14:23:30 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-1629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Enis Soztutar reopened HADOOP-1629:
-----------------------------------


I am reopening this issue, since TestDFSUpgradeFromImage fails for the hadoop-patch and hudson-nightly
builds on Hudson.
The error thrown is:
{noformat}
java.io.IOException: tar: z: unknown function modifier
	at org.apache.hadoop.fs.Command.run(Command.java:33)
	at org.apache.hadoop.fs.Command.execCommand(Command.java:89)
	at org.apache.hadoop.dfs.TestDFSUpgradeFromImage.setUp(TestDFSUpgradeFromImage.java:75)

Standard Output

2007-08-15 13:22:38,601 INFO  dfs.TestDFSUpgradeFromImage (TestDFSUpgradeFromImage.java:setUp(72)) - Unpacking the tar file /export/home/hudson/hudson/jobs/Hadoop-Patch/workspace/trunk/build/test/cache/hadoop-12-dfs-dir.tgz
{noformat}

It seems that gzip support is not available on lucene.zones.apache.org. Can someone with the
necessary privileges check this out?
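
For reference, the test could avoid the platform tar's "z" function modifier entirely by doing
the gunzip step in Java and handing a plain .tar to the system tar. A minimal sketch, assuming
the tgz path from the log above; the class and method names here are hypothetical, not taken
from the attached patches:
{noformat}
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.zip.GZIPInputStream;

// Hypothetical helper: gunzip the .tgz with java.util.zip, then run a
// plain "tar xf", which behaves the same on Linux and Solaris.
public class PortableUntar {
  public static void untar(File tgz, File destDir)
      throws IOException, InterruptedException {
    File tar = new File(destDir, "dfs-image.tar");
    InputStream in = new GZIPInputStream(new FileInputStream(tgz));
    OutputStream out = new FileOutputStream(tar);
    try {
      byte[] buf = new byte[8192];
      for (int n = in.read(buf); n > 0; n = in.read(buf)) {
        out.write(buf, 0, n);
      }
    } finally {
      in.close();
      out.close();
    }
    // No "z" modifier needed any more.
    Process p = Runtime.getRuntime().exec(
        new String[] { "tar", "xf", tar.getAbsolutePath() }, null, destDir);
    int rc = p.waitFor();
    tar.delete();
    if (rc != 0) {
      throw new IOException("tar exited with code " + rc);
    }
  }
}
{noformat}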

> Block CRC Unit Tests: upgrade test
> ----------------------------------
>
>                 Key: HADOOP-1629
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1629
>             Project: Hadoop
>          Issue Type: Test
>          Components: dfs
>    Affects Versions: 0.14.0
>            Reporter: Nigel Daley
>            Assignee: Raghu Angadi
>            Priority: Blocker
>             Fix For: 0.14.0
>
>         Attachments: hadoop-12-dfs-dir.tgz, HADOOP-1629-trunk.patch, HADOOP-1629.patch, HADOOP-1629.patch, HADOOP-1629.patch
>
>
> HADOOP-1286 introduced a distributed upgrade framework. One or more unit tests should
> be developed that start with a zipped-up Hadoop 0.12 file system (included under version
> control in Hadoop's src/test directory) and attempt to upgrade it to the current version
> of Hadoop (i.e. the version that the tests are running against). The zipped-up file system
> should include some "interesting" files, such as:
> - zero length files
> - file with replication set higher than number of datanodes
> - file with no .crc file
> - file with corrupt .crc file
> - file with multiple blocks (will need to set dfs.block.size to a small value)
> - file with multiple checksum blocks
> - empty directory
> - all of the above again but with a different io.bytes.per.checksum setting
> The class that generates the zipped up file system should also be included in this patch.
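
As an aside, a minimal sketch of how some of the "interesting" cases above could be generated
against the FileSystem API (the paths and the small block/checksum sizes are made up for
illustration; the actual generator class is expected to come with the patch, per the
description above):
{noformat}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical generator sketch; covers a subset of the cases listed.
// The missing/corrupt .crc cases would need direct manipulation of the
// checksum files and are omitted here.
public class UpgradeImageGenerator {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.setInt("dfs.block.size", 8192);        // small blocks -> multi-block files
    conf.setInt("io.bytes.per.checksum", 512);  // several checksum blocks per block
    FileSystem fs = FileSystem.get(conf);

    // Zero-length file.
    fs.create(new Path("/upgrade/empty-file")).close();

    // File with replication set higher than the number of datanodes.
    FSDataOutputStream out = fs.create(
        new Path("/upgrade/over-replicated"), true, 4096, (short) 10, 8192L);
    out.write(new byte[1024]);
    out.close();

    // File spanning multiple blocks and multiple checksum blocks.
    out = fs.create(new Path("/upgrade/multi-block"));
    out.write(new byte[3 * 8192 + 100]);
    out.close();

    // Empty directory.
    fs.mkdirs(new Path("/upgrade/empty-dir"));
  }
}
{noformat}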

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

