commons-dev mailing list archives

From dlmarion <dlmar...@comcast.net>
Subject Re: [VFS] Implementing custom hdfs file system using commons-vfs 2.0
Date Sun, 11 Jan 2015 01:58:18 GMT
Regarding the warning, it is something the user can change in their hdfs configuration files.
It comes from the hdfs client object, not the vfs code.
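
For illustration only (this is a sketch, not code from this thread, and the property name below is just a placeholder for whichever setting the warning refers to), the override can live in the user's core-site.xml/hdfs-site.xml or be applied on the client Configuration directly:

  // Hedged sketch: the warning originates in the Hadoop client, so the user
  // overrides the relevant key in their own configuration before the HDFS
  // provider creates the FileSystem.
  import org.apache.hadoop.conf.Configuration;

  public class HdfsClientConfigSketch {
      public static Configuration clientConfig() {
          // new Configuration() already picks up core-site.xml/hdfs-site.xml
          // from the classpath, so file-based overrides work the same way.
          Configuration conf = new Configuration();
          // "some.hdfs.client.property" is purely a placeholder key.
          conf.set("some.hdfs.client.property", "desired-value");
          return conf;
      }
  }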



-------- Original message --------
From: Bernd Eckenfels <ecki@zusammenkunft.net>
Date: 01/10/2015 7:25 PM (GMT-05:00)
To: dlmarion@comcast.net
Cc: Commons Developers List <dev@commons.apache.org>
Subject: Re: [VFS] Implementing custom hdfs file system using commons-vfs 2.0

Hello,

On Sat, 10 Jan 2015 03:12:19 +0000 (UTC),
dlmarion@comcast.net wrote:

> Bernd, 
> 
> Regarding the Hadoop version for VFS 2.1, why not use the latest for
> the first release of the HDFS provider? Hadoop 1.1.2 was released in
> February 2013. 

Yes, you are right. We don't need to care about 2.0 as this is a new
provider. I will make the changes; I just want to fix the current test
failures I see first.
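
For reference, the dependency change could roughly look like this (a sketch only, not the actual commons-vfs pom; it assumes the provider currently pins the Hadoop 1.x hadoop-core artifact, which the 2.x line splits into separate client artifacts):

  <!-- sketch: replace org.apache.hadoop:hadoop-core:1.1.2 with the 2.x client artifacts -->
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>2.4.0</version>
  </dependency>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdfs</artifactId>
    <version>2.4.0</version>
  </dependency>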


> I just built 2.1-SNAPSHOT over the holidays with JDK 6, 7, and 8 on
> Ubuntu. What type of test errors are you getting? Testing is disabled
> on Windows unless you decide to pull in the Windows artifacts attached to
> VFS-530. However, those artifacts are associated with patch 3 and are
> for Hadoop 2.4.0. Updating to 2.4.0 would also be sufficient in my
> opinion. 

Yes, what I mean is: I typically build under Windows, so I would not
notice if the tests start to fail. However, they seem to pass on the
integration build:

https://continuum-ci.apache.org/continuum/projectView.action?projectId=129&projectGroupId=16

Running org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTest
Starting DataNode 0 with dfs.data.dir: target/build/test/data/dfs/data/data1,target/build/test/data/dfs/data/data2
Cluster is active
Cluster is active
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.821 sec - in org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTest
Running org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTestCase
Starting DataNode 0 with dfs.data.dir: target/build/test2/data/dfs/data/data1,target/build/test2/data/dfs/data/data2
Cluster is active
Cluster is active
Tests run: 76, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.853 sec - in org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTestCase

Anyway, on Ubuntu I currently get this exception:

Running org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTestCase
Starting DataNode 0 with dfs.data.dir: target/build/test/data/dfs/data/data1,target/build/test/data/dfs/data/data2
Cluster is active
Cluster is active
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.486 sec <<< FAILURE! - in org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTestCase
junit.framework.TestSuite@56c77035(org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTestCase$HdfsProviderTestSuite)  Time elapsed: 1.479 sec  <<< ERROR!
java.lang.RuntimeException: Error setting up mini cluster
        at org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTestCase$HdfsProviderTestSuite.setUp(HdfsFileProviderTestCase.java:112)
        at org.apache.commons.vfs2.test.AbstractTestSuite$1.protect(AbstractTestSuite.java:148)
        at junit.framework.TestResult.runProtected(TestResult.java:142)
        at org.apache.commons.vfs2.test.AbstractTestSuite.run(AbstractTestSuite.java:154)
        at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:86)
        at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
        at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
        at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
        at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
        at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
        at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
        at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
Caused by: java.io.IOException: Cannot lock storage target/build/test/data/dfs/name1. The directory is already locked.
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:599)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:1327)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:1345)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1207)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:187)
        at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:268)
        at org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTestCase$HdfsProviderTestSuite.setUp(HdfsFileProviderTestCase.java:107)
        ... 11 more

Running org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTest
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.445 sec - in org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTest

When I delete the core/target/build/test/data/dfs/ directory and then run the ProviderTest,
I can repeat that multiple times and it works:

  mvn surefire:test -Dtest=org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTest

But when I run all tests, or just the HdfsFileProviderTestCase, it fails, and afterwards not
even the ProviderTest succeeds until I delete that directory.

(I suspect the "locking" message is misleading; it looks more like the data pool has some kind
of instance ID which it does not have on the next run.)

Looks like the TestCase has a problem and the ProviderTest does no proper pre-cleaning. I will check
the source. More generally speaking, it should not use a fixed working directory anyway.
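
Something along these lines could avoid the fixed directory (only a sketch, not the current HdfsFileProviderTestCase code; it relies on the "test.build.data" system property, which is the hook the mini cluster uses to pick its base directory):

  import java.io.File;
  import java.io.IOException;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileUtil;
  import org.apache.hadoop.hdfs.MiniDFSCluster;

  public class MiniClusterSetupSketch {
      private MiniDFSCluster cluster;

      public void setUp() throws IOException {
          // Use a unique base directory per run so a stale
          // target/build/test/data/dfs/name1 lock can never be hit.
          File base = new File("target/build/test-" + System.nanoTime());
          FileUtil.fullyDelete(base);  // pre-clean, just in case
          System.setProperty("test.build.data", base.getAbsolutePath());

          Configuration conf = new Configuration();
          // Hadoop 1.x style constructor: one data node, format the
          // NameNode storage on startup.
          cluster = new MiniDFSCluster(conf, 1, true, null);
          cluster.waitActive();
      }

      public void tearDown() {
          if (cluster != null) {
              cluster.shutdown();
          }
      }
  }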


> I started up Hadoop 2.6.0 on my laptop, created a directory and file,
> then used the VFS shell to list and view the contents (remember, the HDFS
> provider is currently read-only). Here is what I did: 

Looks good. I will shorten it a bit and add it to the wiki. BTW, is the warning something
we can change?
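
As a rough idea of what the wiki example might boil down to through the VFS API (only a sketch; host, port and paths are placeholders, not taken from your session):

  import java.io.InputStream;
  import org.apache.commons.vfs2.FileObject;
  import org.apache.commons.vfs2.FileSystemManager;
  import org.apache.commons.vfs2.VFS;

  public class HdfsVfsReadSketch {
      public static void main(String[] args) throws Exception {
          FileSystemManager manager = VFS.getManager();

          // List a directory (the HDFS provider is read-only in 2.1).
          FileObject dir = manager.resolveFile("hdfs://namenode.example.com:8020/tmp");
          for (FileObject child : dir.getChildren()) {
              System.out.println(child.getName().getBaseName());
          }

          // Stream the contents of a file to stdout.
          FileObject file = manager.resolveFile("hdfs://namenode.example.com:8020/tmp/test.txt");
          try (InputStream in = file.getContent().getInputStream()) {
              byte[] buf = new byte[4096];
              for (int n; (n = in.read(buf)) != -1;) {
                  System.out.write(buf, 0, n);
              }
          }
      }
  }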

Regards,
Bernd