hadoop-hdfs-issues mailing list archives

From "Chuan Liu (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-5100) TestNamenodeRetryCache fails on Windows due to incorrect cleanup
Date Thu, 15 Aug 2013 23:12:16 GMT

    [ https://issues.apache.org/jira/browse/HDFS-5100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13741642#comment-13741642 ]

Chuan Liu commented on HDFS-5100:

Thanks, [~sureshms]! I just tried that. Without shutting down the cluster for each test case,
testRetryCacheRebuild fails at the first assert -- {{assertEquals(14, cacheSet.size());}}.
I think this is because cache entries from previous test cases remain, since the cluster is
still the old one. So shutting down the cluster for each test case may be a more robust
approach for this unit test.
testRetryCacheRebuild(org.apache.hadoop.hdfs.server.namenode.TestNamenodeRetryCache)  Time
elapsed: 3400 sec  <<< FAILURE!
java.lang.AssertionError: expected:<14> but was:<37>
        at org.junit.Assert.fail(Assert.java:93)
        at org.junit.Assert.failNotEquals(Assert.java:647)
        at org.junit.Assert.assertEquals(Assert.java:128)
        at org.junit.Assert.assertEquals(Assert.java:472)
        at org.junit.Assert.assertEquals(Assert.java:456)
        at org.apache.hadoop.hdfs.server.namenode.TestNamenodeRetryCache.testRetryCacheRebuild(TestNamenodeRetryCache.java:387)
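The per-test shutdown suggested above could be sketched roughly as follows. This is a hedged illustration, not the actual HDFS-5100 patch; it assumes Hadoop's test artifacts (hadoop-hdfs test jar, JUnit 4) are on the classpath, and the class name is hypothetical.

```java
// Sketch: create a fresh MiniDFSCluster per test case and shut it down
// afterwards, so no retry-cache state or open file handles survive.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.After;
import org.junit.Before;

public class RetryCachePerTestClusterSketch {
  private MiniDFSCluster cluster;

  @Before
  public void setup() throws Exception {
    // A brand-new cluster for every test case means no cache entries
    // from a previous test can inflate cacheSet.size().
    cluster = new MiniDFSCluster.Builder(new Configuration()).build();
    cluster.waitActive();
  }

  @After
  public void cleanup() throws Exception {
    // Shut the cluster down instead of only deleting its directories;
    // on Windows, files held open by the old cluster block the delete.
    if (cluster != null) {
      cluster.shutdown();
      cluster = null;
    }
  }
}
```

The trade-off is runtime: rebuilding the cluster per test is slower than reusing one, but it removes cross-test state, which is what the failing assertion above was tripping over.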

> TestNamenodeRetryCache fails on Windows due to incorrect cleanup
> ----------------------------------------------------------------
>                 Key: HDFS-5100
>                 URL: https://issues.apache.org/jira/browse/HDFS-5100
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: test
>    Affects Versions: 3.0.0, 2.1.1-beta
>            Reporter: Chuan Liu
>            Assignee: Chuan Liu
>            Priority: Minor
>         Attachments: HDFS-5100-trunk.patch, HDFS-5100-trunk.patch
> The test case fails on Windows with the following exceptions.
> {noformat}
> java.io.IOException: Could not fully delete C:\hdc\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name1
> 	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:759)
> 	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:644)
> 	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:334)
> 	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:316)
> 	at org.apache.hadoop.hdfs.server.namenode.ha.TestInitializeSharedEdits.setupCluster(TestInitializeSharedEdits.java:68)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> ...
> {noformat}
> The root cause is that {{cleanup()}} only tries to delete the root directory instead of
> shutting down the MiniDFSCluster. Every test case in this unit test creates a new MiniDFSCluster
> during the {{setup()}} step. Without shutting down the previous cluster, creating the new cluster
> fails with the above exception, because files held open by the old cluster block deletion on Windows.

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
