accumulo-user mailing list archives

From: Josh Elser <josh.el...@gmail.com>
Subject: Re: Tests failing when building 1.5.0 source with -Dhadoop.profile=2.0
Date: Wed, 22 Jan 2014 19:42:53 GMT
Weird stuff.

I just checked out the 1.5.0 tag from Git and ran the following 
successfully:

`mvn clean package -Dhadoop.profile=2.0`

which builds against 2.0.4-alpha. The following against 2.1.0-beta works 
for me too:

`mvn clean package -Dhadoop.profile=2.0 -Dhadoop.version=2.1.0-beta`

For fun, even though it shouldn't matter, I ran install instead of just 
package, with the same success. This was with Oracle JDK 1.7.0_40 and 
Maven 3.1.1.
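
If you want to double-check which Hadoop artifacts a given profile 
actually resolves, dependency:tree should show them. For example (any 
equivalent invocation with the same profile flags works):

`mvn dependency:tree -Dhadoop.profile=2.0 -Dhadoop.version=2.1.0-beta -Dincludes=org.apache.hadoop`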

On 1/22/14, 2:21 PM, Matthew Molek wrote:
> Thanks for the quick responses.
>
> In this case I do really mean install. I'm trying to build another
> project against a version of Accumulo 1.5.0 built with Hadoop
> 2.1.0-beta. Is that a reasonable thing to do? The reason I'm building
> Accumulo from source is that I was getting similar UnsatisfiedLinkErrors
> when trying to use MiniAccumuloCluster in unit tests in another project
> that depends on 2.1.0-beta. I thought the issue might be related to the
> version of Hadoop that the Accumulo 1.5.0 in the Maven Central repo was
> built with.
>
> On the issue of the build errors, I tried a separate mvn clean before
> the install, as well as wiping my whole local mvn repo, but I'm still
> getting the same errors. Is there anything else that might be worth trying?
>
>
> On Wed, Jan 22, 2014 at 2:06 PM, Billie Rinaldi
> <billie.rinaldi@gmail.com> wrote:
>
>     Also, we recommend never using "install" unless you really mean to
>     do that.  It can often cause issues later, with Maven using your
>     local repo to pull in dependencies you don't want.  It couldn't hurt
>     to wipe Accumulo out of your local repo before trying to build again.
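>
>     For example, assuming the default local repository location,
>     something like this would clear out the cached Accumulo artifacts:
>
>     `rm -rf ~/.m2/repository/org/apache/accumulo`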
>
>
>     On Wed, Jan 22, 2014 at 10:57 AM, Eric Newton
>     <eric.newton@gmail.com> wrote:
>
>         Be sure to "mvn clean" when switching between hadoop versions.
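>
>         For instance, when switching to the 2.0 profile (adjust the
>         flags to whatever combination you're building next), just a
>         sketch:
>
>         `mvn clean && mvn package -Dhadoop.profile=2.0`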
>
>         -Eric
>
>
>
>         On Wed, Jan 22, 2014 at 1:52 PM, Matthew Molek
>         <mmolek@clearedgeit.com> wrote:
>
>             I'm having trouble building Accumulo 1.5.0 with the Hadoop
>             2.0 profile. I'm using the source release
>             accumulo-1.5.0-src.tar.gz from
>             http://accumulo.apache.org/downloads/ . I'm building on
>             CentOS 6.3 with Oracle Java 1.7 and Maven 3.0.5.
>
>             I think I'm running into two separate issues.
>
>             First, when I build with the default Hadoop version for the
>             2.0 profile using this command: 'mvn -Dhadoop.profile=2.0
>             -Dmaven.test.failure.ignore=true clean install'
>
>             I get two test errors.
>
>             testFileMonitor(org.apache.accumulo.start.classloader.vfs.providers.VfsClassLoaderTest)  Time elapsed: 0.385 sec  <<< ERROR!
>             testGetClass(org.apache.accumulo.start.classloader.vfs.providers.VfsClassLoaderTest)  Time elapsed: 0.084 sec  <<< ERROR!
>
>             Both errors have the same cause:
>
>             java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCrc32.nativeVerifyChunkedSums(IILjava/nio/ByteBuffer;ILjava/nio/ByteBuffer;IILjava/lang/String;J)V
>                 at org.apache.hadoop.util.NativeCrc32.nativeVerifyChunkedSums(Native Method)
>                 at org.apache.hadoop.util.NativeCrc32.verifyChunkedSums(NativeCrc32.java:57)
>                 at org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:291)
>                 at org.apache.hadoop.hdfs.RemoteBlockReader2.readNextPacket(RemoteBlockReader2.java:187)
>                 at org.apache.hadoop.hdfs.RemoteBlockReader2.read(RemoteBlockReader2.java:122)
>                 at org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:542)
>                 at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:594)
>                 at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:648)
>                 at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689)
>                 at java.io.DataInputStream.read(DataInputStream.java:149)
>                 at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
>                 at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
>                 at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
>                 at org.apache.commons.vfs2.util.MonitorInputStream.read(MonitorInputStream.java:100)
>                 at java.io.FilterInputStream.read(FilterInputStream.java:107)
>                 at org.apache.commons.vfs2.FileUtil.writeContent(FileUtil.java:85)
>                 at org.apache.commons.vfs2.FileUtil.copyContent(FileUtil.java:114)
>                 at org.apache.commons.vfs2.provider.AbstractFileObject.copyFrom(AbstractFileObject.java:1053)
>                 at org.apache.commons.vfs2.impl.DefaultFileReplicator.replicateFile(DefaultFileReplicator.java:249)
>                 at org.apache.commons.vfs2.provider.AbstractFileSystem.doReplicateFile(AbstractFileSystem.java:467)
>                 at org.apache.commons.vfs2.provider.AbstractFileSystem.replicateFile(AbstractFileSystem.java:423)
>                 at org.apache.commons.vfs2.provider.zip.ZipFileSystem.<init>(ZipFileSystem.java:61)
>                 at org.apache.commons.vfs2.provider.jar.JarFileSystem.<init>(JarFileSystem.java:50)
>                 at org.apache.commons.vfs2.provider.jar.JarFileProvider.doCreateFileSystem(JarFileProvider.java:82)
>                 at org.apache.commons.vfs2.provider.AbstractLayeredFileProvider.createFileSystem(AbstractLayeredFileProvider.java:89)
>                 at org.apache.commons.vfs2.impl.DefaultFileSystemManager.createFileSystem(DefaultFileSystemManager.java:914)
>                 at org.apache.commons.vfs2.impl.DefaultFileSystemManager.createFileSystem(DefaultFileSystemManager.java:933)
>                 at org.apache.commons.vfs2.impl.VFSClassLoader.addFileObjects(VFSClassLoader.java:153)
>                 at org.apache.commons.vfs2.impl.VFSClassLoader.<init>(VFSClassLoader.java:116)
>                 at org.apache.commons.vfs2.impl.VFSClassLoader.<init>(VFSClassLoader.java:98)
>                 at org.apache.accumulo.start.classloader.vfs.providers.VfsClassLoaderTest.setup(VfsClassLoaderTest.java:58)
>                 ... and a bunch more
>
>             Second problem: if I try to build against a more recent
>             Hadoop release (2.1.0-beta and up) with this command: 'mvn
>             -Dhadoop.profile=2.0 -Dhadoop.version=2.1.0-beta
>             -Dmaven.test.failure.ignore=true clean install'
>
>             I end up with 140+ test errors. Many are like this:
>
>             java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
>                 at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:131)
>                 at org.apache.hadoop.security.Groups.<init>(Groups.java:55)
>                 at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:182)
>                 at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:234)
>                 at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:213)
>                 at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:662)
>                 at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:570)
>                 at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2586)
>                 at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2578)
>                 at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2444)
>                 at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:363)
>                 at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:165)
>                 at org.apache.accumulo.core.client.mock.MockInstance.getDefaultFileSystem(MockInstance.java:69)
>                 at org.apache.accumulo.core.client.mock.MockInstance.<init>(MockInstance.java:60)
>                 at org.apache.accumulo.server.util.CloneTest.testMerge(CloneTest.java:344)
>                 ... 25 more (my truncation)
>             Caused by: java.lang.reflect.InvocationTargetException
>                 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>                 at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>                 at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>                 at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
>                 at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:129)
>                 ... 39 more
>             Caused by: java.lang.UnsatisfiedLinkError: org.apache.hadoop.security.JniBasedUnixGroupsMapping.anchorNative()V
>                 at org.apache.hadoop.security.JniBasedUnixGroupsMapping.anchorNative(Native Method)
>                 at org.apache.hadoop.security.JniBasedUnixGroupsMapping.<clinit>(JniBasedUnixGroupsMapping.java:49)
>                 at org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.<init>(JniBasedUnixGroupsMappingWithFallback.java:38)
>                 ... 44 more
>
>
>             Lastly, I can build with no errors if I use the default
>             hadoop-1.0 profile.
>
>             Both of my problems seem to boil down to
>             UnsatisfiedLinkErrors related to the underlying Hadoop
>             dependencies. I can't find much information to help resolve
>             this other than some general Java-related discussions saying
>             that UnsatisfiedLinkErrors usually result from missing
>             native libraries. Can anyone point me in the right
>             direction? Thanks!
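>
>             (One thing I haven't tried yet: pointing the forked test
>             JVMs at a native library directory. Assuming the Hadoop
>             native libs were actually built and live under
>             $HADOOP_HOME/lib/native, would something like this be worth
>             a shot?
>
>             'mvn -Dhadoop.profile=2.0 -DargLine="-Djava.library.path=$HADOOP_HOME/lib/native" clean install')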