hadoop-hdfs-issues mailing list archives

From "Hadoop QA (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-5003) TestNNThroughputBenchmark failed caused by existing directories
Date Thu, 18 Jul 2013 00:02:48 GMT

    [ https://issues.apache.org/jira/browse/HDFS-5003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13711815#comment-13711815 ]

Hadoop QA commented on HDFS-5003:

{color:green}+1 overall{color}.  Here are the results of testing the latest attachment 
  against trunk revision .

    {color:green}+1 @author{color}.  The patch does not contain any @author tags.

    {color:green}+1 tests included{color}.  The patch appears to include 1 new or modified
test files.

    {color:green}+1 javac{color}.  The applied patch does not increase the total number of
javac compiler warnings.

    {color:green}+1 javadoc{color}.  The javadoc tool did not generate any warning messages.

    {color:green}+1 eclipse:eclipse{color}.  The patch built with eclipse:eclipse.

    {color:green}+1 findbugs{color}.  The patch does not introduce any new Findbugs (version
1.3.9) warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase the total number
of release audit warnings.

    {color:green}+1 core tests{color}.  The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs.

    {color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/4674//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4674//console

This message is automatically generated.
> TestNNThroughputBenchmark failed caused by existing directories
> ---------------------------------------------------------------
>                 Key: HDFS-5003
>                 URL: https://issues.apache.org/jira/browse/HDFS-5003
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: test
>    Affects Versions: 3.0.0, 1-win, 2.1.0-beta
>            Reporter: Xi Fang
>            Assignee: Xi Fang
>            Priority: Minor
>             Fix For: 3.0.0, 1-win, 2.1.0-beta
>         Attachments: HADOOP-9739.1.patch, HADOOP-9739.1.trunk.patch
> This test failed on both Windows and Linux.
> Here is the error output:
> Testcase: testNNThroughput took 36.221 sec
> 	Caused an ERROR
> NNThroughputBenchmark: cannot mkdir D:\condor\condor\build\test\dfs\hosts\exclude
> java.io.IOException: NNThroughputBenchmark: cannot mkdir D:\condor\condor\build\test\dfs\hosts\exclude
> 	at org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.<init>(NNThroughputBenchmark.java:111)
> 	at org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.runBenchmark(NNThroughputBenchmark.java:1168)
> 	at org.apache.hadoop.hdfs.server.namenode.TestNNThroughputBenchmark.testNNThroughput(TestNNThroughputBenchmark.java:38)
> This test may not fail on the first run, but it will fail on the second, because the exclude directory already exists.
> The root cause is in the constructor of NNThroughputBenchmark:
> {code}
> NNThroughputBenchmark(Configuration conf) throws IOException, LoginException {
>   ...
>   config.set("dfs.hosts.exclude", "${hadoop.tmp.dir}/dfs/hosts/exclude");
>   File excludeFile = new File(config.get("dfs.hosts.exclude", "exclude"));
>   if (!excludeFile.exists()) {
>     if (!excludeFile.getParentFile().mkdirs())
>       throw new IOException("NNThroughputBenchmark: cannot mkdir " + excludeFile);
>   }
>   new FileOutputStream(excludeFile).close();
> {code}
> If excludeFile.getParentFile() already exists, excludeFile.getParentFile().mkdirs() returns false, so the constructor throws an IOException even though the directory is present; this is not the expected behavior.
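The usual remedy for this pattern (a sketch only; the attached patches are the authoritative fix, and the class and method names below are illustrative) is to treat a false return from mkdirs() as an error only when the directory still does not exist afterwards, making directory creation idempotent across test runs:

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class MkdirsDemo {
    // Idempotent directory creation: mkdirs() returning false is only an
    // error if the directory is still missing (e.g. a file blocks the path).
    static void ensureParentDir(File file) throws IOException {
        File parent = file.getParentFile();
        if (parent != null && !parent.mkdirs() && !parent.isDirectory()) {
            throw new IOException("cannot mkdir " + parent);
        }
    }

    public static void main(String[] args) throws IOException {
        File base = Files.createTempDirectory("mkdirs-demo").toFile();
        File exclude = new File(base, "dfs/hosts/exclude");

        ensureParentDir(exclude);  // first run: creates dfs/hosts
        // mkdirs() on an existing directory returns false, which is what
        // trips the original check in NNThroughputBenchmark:
        System.out.println(exclude.getParentFile().mkdirs()); // false
        ensureParentDir(exclude);  // second run: no exception
        System.out.println("ok");
    }
}
```

The key difference from the quoted constructor is that existence is re-checked after mkdirs(), so a pre-existing directory is treated as success rather than failure.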

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
