accumulo-notifications mailing list archives

From "Hudson (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (ACCUMULO-1244) commons-io version conflict with CDH4
Date Wed, 10 Apr 2013 23:35:18 GMT

    [ https://issues.apache.org/jira/browse/ACCUMULO-1244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13628439#comment-13628439 ]

Hudson commented on ACCUMULO-1244:
----------------------------------

Integrated in Accumulo-1.5 #72 (See [https://builds.apache.org/job/Accumulo-1.5/72/])
    ACCUMULO-1244 Make more libraries provided, because they must be. So, we'll depend on the version given by Hadoop. Remove all provided jars and source jars from lib directory. (Revision 1466685)

     Result = UNSTABLE
ctubbsii : 
Files : 
* /accumulo/branches/1.5/assemble/pom.xml
* /accumulo/branches/1.5/bin/accumulo
* /accumulo/branches/1.5/bin/bootstrap_hdfs.sh
* /accumulo/branches/1.5/core/pom.xml
* /accumulo/branches/1.5/examples/simple/pom.xml
* /accumulo/branches/1.5/fate/pom.xml
* /accumulo/branches/1.5/pom.xml
* /accumulo/branches/1.5/proxy/pom.xml
* /accumulo/branches/1.5/server/pom.xml
* /accumulo/branches/1.5/start/pom.xml
* /accumulo/branches/1.5/test/pom.xml
* /accumulo/branches/1.5/trace/pom.xml
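
The change amounts to switching jars that Hadoop already supplies to Maven's "provided" scope. A minimal sketch of what such a pom.xml entry can look like, using commons-io as the example (the version number here is illustrative, not taken from the commit):

{code}
<!-- Illustrative pom.xml fragment: a "provided" dependency is available at
     compile time but is not copied into Accumulo's lib directory, so the
     commons-io jar on the Hadoop classpath (2.x under CDH4) is the one
     actually loaded at runtime. -->
<dependency>
  <groupId>commons-io</groupId>
  <artifactId>commons-io</artifactId>
  <version>2.1</version>
  <scope>provided</scope>
</dependency>
{code}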

                
> commons-io version conflict with CDH4
> -------------------------------------
>
>                 Key: ACCUMULO-1244
>                 URL: https://issues.apache.org/jira/browse/ACCUMULO-1244
>             Project: Accumulo
>          Issue Type: Bug
>         Environment: Hadoop version 2.0.0-CDH4.2.0
>            Reporter: Adam Fuchs
>            Assignee: Christopher Tubbs
>             Fix For: 1.5.0
>
>
> CDH4 appears to rely on commons-io version 2.0 or greater. Accumulo currently packages version 1.4. We should bump this up to achieve compatibility.
> Workaround: put the Hadoop dependency libraries before the Accumulo dependency libraries in the general.classpaths variable in accumulo-site.xml (a sketch follows the quoted stack trace).
> {code}
> 2013-04-04 22:27:13,868 [tabletserver.Tablet] ERROR: Unknown error during minor compaction for extent: !0;~;!0<
> java.lang.RuntimeException: java.lang.NoSuchMethodError: org.apache.commons.io.IOUtils.closeQuietly(Ljava/io/Closeable;)V
>   at org.apache.accumulo.server.tabletserver.Tablet.minorCompact(Tablet.java:2152)
>   at org.apache.accumulo.server.tabletserver.Tablet.access$4400(Tablet.java:152)
>   at org.apache.accumulo.server.tabletserver.Tablet$MinorCompactionTask.run(Tablet.java:2219)
>   at org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
>   at org.apache.accumulo.trace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at org.apache.accumulo.trace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>   at org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
>   at java.lang.Thread.run(Thread.java:662)
> Caused by: java.lang.NoSuchMethodError: org.apache.commons.io.IOUtils.closeQuietly(Ljava/io/Closeable;)V
>   at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:941)
>   at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:471)
>   at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:662)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:706)
>   at java.io.DataInputStream.read(DataInputStream.java:132)
>   at java.io.DataInputStream.readFully(DataInputStream.java:178)
>   at java.io.DataInputStream.readLong(DataInputStream.java:399)
>   at org.apache.accumulo.core.file.rfile.bcfile.BCFile$Reader.<init>(BCFile.java:608)
>   at org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$Reader.init(CachableBlockFile.java:246)
>   at org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$Reader.getBCFile(CachableBlockFile.java:257)
>   at org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$Reader.access$000(CachableBlockFile.java:143)
>   at org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$Reader$MetaBlockLoader.get(CachableBlockFile.java:212)
>   at org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$Reader.getBlock(CachableBlockFile.java:313)
>   at org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$Reader.getMetaBlock(CachableBlockFile.java:367)
>   at org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$Reader.getMetaBlock(CachableBlockFile.java:143)
>   at org.apache.accumulo.core.file.rfile.RFile$Reader.<init>(RFile.java:834)
>   at org.apache.accumulo.core.file.rfile.RFileOperations.openReader(RFileOperations.java:79)
>   at org.apache.accumulo.core.file.DispatchingFileFactory.openReader(FileOperations.java:72)
>   at org.apache.accumulo.server.tabletserver.Compactor.call(Compactor.java:317)
>   at org.apache.accumulo.server.tabletserver.MinorCompactor.call(MinorCompactor.java:96)
>   at org.apache.accumulo.server.tabletserver.Tablet.minorCompact(Tablet.java:2138)
>   ... 9 more
> {code}
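
The NoSuchMethodError above is consistent with CDH4's HDFS client calling IOUtils.closeQuietly(Closeable), an overload added in commons-io 2.0 and absent from the 1.4 jar Accumulo bundles. The workaround from the description simply lets the Hadoop copy of commons-io win on the classpath. A hedged sketch of that ordering in accumulo-site.xml; the actual directory globs depend on the CDH4 install layout:

{code}
<!-- Illustrative accumulo-site.xml fragment: entries earlier in
     general.classpaths are placed earlier on the classpath, so listing the
     Hadoop lib directories before $ACCUMULO_HOME/lib lets Hadoop's
     commons-io 2.x shadow the commons-io 1.4 shipped with Accumulo.
     Paths below are examples only. -->
<property>
  <name>general.classpaths</name>
  <value>
    /usr/lib/hadoop/[^.].*.jar,
    /usr/lib/hadoop/lib/[^.].*.jar,
    /usr/lib/hadoop-hdfs/[^.].*.jar,
    $ACCUMULO_HOME/lib/[^.].*.jar
  </value>
</property>
{code}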

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
