hadoop-hdfs-issues mailing list archives

From "Allen Wittenauer (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-5003) TestNNThroughputBenchmark failed caused by existing directories
Date Wed, 03 Sep 2014 22:49:54 GMT

     [ https://issues.apache.org/jira/browse/HDFS-5003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Allen Wittenauer updated HDFS-5003:
-----------------------------------
    Fix Version/s:     (was: 3.0.0)

> TestNNThroughputBenchmark failed caused by existing directories
> ---------------------------------------------------------------
>
>                 Key: HDFS-5003
>                 URL: https://issues.apache.org/jira/browse/HDFS-5003
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: test
>    Affects Versions: 3.0.0, 1-win, 2.1.0-beta, 1.3.0
>            Reporter: Xi Fang
>            Assignee: Xi Fang
>            Priority: Minor
>             Fix For: 1-win, 2.1.0-beta, 1.3.0
>
>         Attachments: HADOOP-9739.1.patch, HADOOP-9739.1.trunk.patch
>
>
> This test failed on both Windows and Linux.
> Here is the error information.
> Testcase: testNNThroughput took 36.221 sec
> 	Caused an ERROR
> NNThroughputBenchmark: cannot mkdir D:\condor\condor\build\test\dfs\hosts\exclude
> java.io.IOException: NNThroughputBenchmark: cannot mkdir D:\condor\condor\build\test\dfs\hosts\exclude
> 	at org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.<init>(NNThroughputBenchmark.java:111)
> 	at org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.runBenchmark(NNThroughputBenchmark.java:1168)
> 	at org.apache.hadoop.hdfs.server.namenode.TestNNThroughputBenchmark.testNNThroughput(TestNNThroughputBenchmark.java:38)
> This test may not fail on the first run, but it will fail on the second.
> The root cause is in the constructor of NNThroughputBenchmark:
> {code}
> NNThroughputBenchmark(Configuration conf) throws IOException, LoginException {
>     ...
>     config.set("dfs.hosts.exclude", "${hadoop.tmp.dir}/dfs/hosts/exclude");
>     File excludeFile = new File(config.get("dfs.hosts.exclude", "exclude"));
>     if (!excludeFile.exists()) {
>         if (!excludeFile.getParentFile().mkdirs())
>             throw new IOException("NNThroughputBenchmark: cannot mkdir " + excludeFile);
>     }
>     new FileOutputStream(excludeFile).close();
> {code}
> excludeFile.getParentFile() may already exist, in which case excludeFile.getParentFile().mkdirs()
returns false; the code then incorrectly treats that as a failure and throws, even though the
directory is present.
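The kind of fix this implies can be sketched as follows (a minimal illustration, not the attached patch; the class and helper names here are hypothetical): only raise the IOException when the parent directory is missing and mkdirs() still cannot create it, so a pre-existing directory counts as success.

```java
import java.io.File;
import java.io.IOException;

public class EnsureParentDirDemo {
    // Hypothetical helper (not the actual patch): treat an already-existing
    // parent directory as success; fail only if it is missing and cannot
    // be created.
    static void ensureParentDir(File file) throws IOException {
        File parent = file.getParentFile();
        if (parent != null && !parent.exists() && !parent.mkdirs()) {
            throw new IOException("NNThroughputBenchmark: cannot mkdir " + parent);
        }
    }

    public static void main(String[] args) throws IOException {
        File excludeFile = new File(System.getProperty("java.io.tmpdir"),
                "nnbench-demo/dfs/hosts/exclude");
        ensureParentDir(excludeFile); // first call: creates missing directories
        ensureParentDir(excludeFile); // second call: parent exists, no exception
        System.out.println(excludeFile.getParentFile().exists());
    }
}
```

Run twice in the same JVM or across runs, the helper never throws for an existing directory, which is exactly the scenario that makes the test fail on its second execution.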



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
