hadoop-common-dev mailing list archives

From "Raghu Angadi (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-1629) Block CRC Unit Tests: upgrade test
Date Wed, 08 Aug 2007 22:17:00 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-1629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Raghu Angadi updated HADOOP-1629:
---------------------------------

    Attachment: HADOOP-1629.patch


Thanks to Nigel for helping me through this.

Attached is a patch with the new unit test "TestDFSUpgradeFromImage". This is an end-to-end
test of an upgrade from Hadoop 0.12 to the current version. The initial image contains the
various categories of files and errors that Nigel mentioned in the JIRA description.

For now we are using a tar-gzipped file, since Hadoop already requires Cygwin anyway. Once
HADOOP-1622 goes in, we can change the format.

The patch does not actually contain the {{.tgz}} file; I will attach it separately.
hadoop-12-dfs-dir.txt contains a description of the data and the file checksums that are
verified during the unit test.
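
To illustrate the verification step, here is a rough sketch of what the checksum check might
look like. This is not the actual patch code: the reference-file format (one "path checksum"
pair per line) and the class/method names are assumptions, and untarring the 0.12 image and
restarting the mini cluster with the upgrade option are not shown.

{code}
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.zip.CRC32;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class UpgradeImageChecker {

  /**
   * Compare each DFS file against the checksum recorded for the 0.12 image.
   * Assumes the reference file lists "path checksum" pairs, one per line.
   */
  static void verifyChecksums(FileSystem fs, String referenceFile)
      throws IOException {
    // (Not shown: untar hadoop-12-dfs-dir.tgz into the dfs directories and
    //  restart the mini cluster with the upgrade option before calling this.)
    BufferedReader reader = new BufferedReader(new FileReader(referenceFile));
    String line;
    while ((line = reader.readLine()) != null) {
      line = line.trim();
      if (line.length() == 0 || line.startsWith("#")) {
        continue;                                  // skip blanks and comments
      }
      String[] parts = line.split("\\s+");
      String path = parts[0];
      long expectedCrc = Long.parseLong(parts[1]);

      // Read the whole file through DFS and compute a CRC32 over its bytes.
      CRC32 crc = new CRC32();
      FSDataInputStream in = fs.open(new Path(path));
      byte[] buf = new byte[4096];
      int n;
      while ((n = in.read(buf)) > 0) {
        crc.update(buf, 0, n);
      }
      in.close();

      if (crc.getValue() != expectedCrc) {
        throw new IOException("Checksum mismatch for " + path + ": expected "
            + expectedCrc + " but got " + crc.getValue());
      }
    }
    reader.close();
  }
}
{code}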


> Block CRC Unit Tests: upgrade test
> ----------------------------------
>
>                 Key: HADOOP-1629
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1629
>             Project: Hadoop
>          Issue Type: Test
>          Components: dfs
>    Affects Versions: 0.14.0
>            Reporter: Nigel Daley
>            Assignee: Raghu Angadi
>            Priority: Blocker
>             Fix For: 0.14.0
>
>         Attachments: hadoop-12-dfs-dir.tgz, HADOOP-1629.patch
>
>
> HADOOP-1286 introduced a distributed upgrade framework. One or more unit tests should
> be developed that start with a zipped up Hadoop 0.12 file system (included under version
> control in Hadoop's src/test directory) and attempt to upgrade it to the current version
> of Hadoop (i.e. the version that the tests are running against). The zipped up file system
> should include some "interesting" files, such as:
> - zero length files
> - file with replication set higher than number of datanodes
> - file with no .crc file
> - file with corrupt .crc file
> - file with multiple blocks (will need to set dfs.block.size to a small value)
> - file with multiple checksum blocks
> - empty directory
> - all of the above again but with a different io.bytes.per.checksum setting
> The class that generates the zipped up file system should also be included in this patch.
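
For illustration only, a hypothetical sketch of such a generator class follows; the class
name, paths, and sizes are made up, and the missing/corrupt .crc cases are left out since
those have to be applied to the copied image by hand.

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class UpgradeImageGenerator {

  /** Write a file of the given length with the given replication and block size. */
  static void writeFile(FileSystem fs, Path p, int length,
                        short replication, long blockSize) throws IOException {
    FSDataOutputStream out = fs.create(p, true, 4096, replication, blockSize);
    byte[] data = new byte[length];
    for (int i = 0; i < length; i++) {
      data[i] = (byte) (i % 251);        // deterministic pattern, easy to checksum
    }
    out.write(data);
    out.close();
  }

  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    long smallBlock = 1024;              // small block size forces multi-block files

    // zero length file
    writeFile(fs, new Path("/upgrade/zeroLength"), 0, (short) 1, smallBlock);
    // replication set higher than the number of datanodes in the test cluster
    writeFile(fs, new Path("/upgrade/overReplicated"), 512, (short) 10, smallBlock);
    // file with multiple blocks (and multiple checksum blocks)
    writeFile(fs, new Path("/upgrade/multiBlock"), 8 * 1024, (short) 1, smallBlock);
    // empty directory
    fs.mkdirs(new Path("/upgrade/emptyDir"));
    // (The missing/corrupt .crc cases are created by editing the copied image
    //  by hand, so they are not shown here.)
  }
}
{code}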

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

