accumulo-notifications mailing list archives

From "Christopher Tubbs (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (ACCUMULO-1244) commons-io version conflict with CDH4
Date Mon, 22 Apr 2013 15:17:15 GMT

    [ https://issues.apache.org/jira/browse/ACCUMULO-1244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13638091#comment-13638091 ]

Christopher Tubbs commented on ACCUMULO-1244:
---------------------------------------------

[~billie.rinaldi]
{quote}Is it a necessity for the dependencies to be marked provided in the module poms, or
could we move the provided markings to the top-level pom?{quote}

I would argue that scopes are best put in the module poms, because the same artifact may need two different scopes in two different modules of a multi-module project, and it gets confusing when the default "compile" scope has to be made explicit in some modules (to override the parent) while staying implicit in others.
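
Roughly, something like the following (a sketch only; the version and the module split are illustrative, not our actual poms):

{code}
<!-- parent pom.xml: pin the version once in dependencyManagement, no scope here -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>commons-io</groupId>
      <artifactId>commons-io</artifactId>
      <version>2.1</version>
    </dependency>
  </dependencies>
</dependencyManagement>

<!-- module A pom.xml: the runtime environment supplies this jar -->
<dependency>
  <groupId>commons-io</groupId>
  <artifactId>commons-io</artifactId>
  <scope>provided</scope>
</dependency>

<!-- module B pom.xml: needs the jar on its own classpath, so the default compile scope applies -->
<dependency>
  <groupId>commons-io</groupId>
  <artifactId>commons-io</artifactId>
</dependency>
{code}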

Specifying scopes this way also simplifies the copy-dependencies plugin configuration a great deal, and it makes the behavior of plugins that consume scopes more predictable (versus the extra configuration in our assembly to work around picking and choosing what to include). It also makes our integration testing environment much more realistic. The biggest downside I'm aware of is that provided-scope dependencies are not resolved transitively during the unit testing phase of the build lifecycle.
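
With real scopes in place, the copy-dependencies configuration can shrink to roughly this (a sketch, not our actual assembly config):

{code}
<!-- assembly pom.xml (sketch): copy runtime-scope dependencies into lib/ and let
     Maven's scope resolution decide what gets packaged, instead of hand-maintained lists -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-dependency-plugin</artifactId>
  <executions>
    <execution>
      <id>copy-lib</id>
      <phase>prepare-package</phase>
      <goals>
        <goal>copy-dependencies</goal>
      </goals>
      <configuration>
        <outputDirectory>${project.build.directory}/lib</outputDirectory>
        <includeScope>runtime</includeScope>
      </configuration>
    </execution>
  </executions>
</plugin>
{code}

includeScope=runtime pulls in compile and runtime dependencies and leaves out provided and test, which is the set a binary tarball should carry.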

{quote}When I try to init, it throws an exception because it can't find log4j.{quote}
I don't see that error here. Have you figured it out yet?
                
> commons-io version conflict with CDH4
> -------------------------------------
>
>                 Key: ACCUMULO-1244
>                 URL: https://issues.apache.org/jira/browse/ACCUMULO-1244
>             Project: Accumulo
>          Issue Type: Bug
>         Environment: Hadoop version 2.0.0-CDH4.2.0
>            Reporter: Adam Fuchs
>            Assignee: Christopher Tubbs
>             Fix For: 1.5.0
>
>
> CDH4 appears to rely on commons-io version 2.0 or greater. Accumulo currently packages version 1.4. We should bump this up to achieve compatibility.
> Workaround: put the Hadoop dependency libraries before the Accumulo dependency libraries in the general.classpaths property in accumulo-site.xml (a classpath sketch follows the stack trace below).
> {code}
> 2013-04-04 22:27:13,868 [tabletserver.Tablet] ERROR: Unknown error during minor compaction for extent: !0;~;!0<
> java.lang.RuntimeException: java.lang.NoSuchMethodError: org.apache.commons.io.IOUtils.closeQuietly(Ljava/io/Closeable;)V
>   at org.apache.accumulo.server.tabletserver.Tablet.minorCompact(Tablet.java:2152)
>   at org.apache.accumulo.server.tabletserver.Tablet.access$4400(Tablet.java:152)
>   at org.apache.accumulo.server.tabletserver.Tablet$MinorCompactionTask.run(Tablet.java:2219)
>   at org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
>   at org.apache.accumulo.trace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at org.apache.accumulo.trace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>   at org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
>   at java.lang.Thread.run(Thread.java:662)
> Caused by: java.lang.NoSuchMethodError: org.apache.commons.io.IOUtils.closeQuietly(Ljava/io/Closeable;)V
>   at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:941)
>   at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:471)
>   at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:662)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:706)
>   at java.io.DataInputStream.read(DataInputStream.java:132)
>   at java.io.DataInputStream.readFully(DataInputStream.java:178)
>   at java.io.DataInputStream.readLong(DataInputStream.java:399)
>   at org.apache.accumulo.core.file.rfile.bcfile.BCFile$Reader.<init>(BCFile.java:608)
>   at org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$Reader.init(CachableBlockFile.java:246)
>   at org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$Reader.getBCFile(CachableBlockFile.java:257)
>   at org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$Reader.access$000(CachableBlockFile.java:143)
>   at org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$Reader$MetaBlockLoader.get(CachableBlockFile.java:212)
>   at org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$Reader.getBlock(CachableBlockFile.java:313)
>   at org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$Reader.getMetaBlock(CachableBlockFile.java:367)
>   at org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$Reader.getMetaBlock(CachableBlockFile.java:143)
>   at org.apache.accumulo.core.file.rfile.RFile$Reader.<init>(RFile.java:834)
>   at org.apache.accumulo.core.file.rfile.RFileOperations.openReader(RFileOperations.java:79)
>   at org.apache.accumulo.core.file.DispatchingFileFactory.openReader(FileOperations.java:72)
>   at org.apache.accumulo.server.tabletserver.Compactor.call(Compactor.java:317)
>   at org.apache.accumulo.server.tabletserver.MinorCompactor.call(MinorCompactor.java:96)
>   at org.apache.accumulo.server.tabletserver.Tablet.minorCompact(Tablet.java:2138)
>   ... 9 more
> {code}
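
For anyone hitting this before the fix, the workaround above boils down to the order of entries in general.classpaths: list the Hadoop jars ahead of $ACCUMULO_HOME/lib so CDH4's commons-io 2.x is found first. A rough sketch (paths are illustrative and depend on the CDH4/Accumulo install layout):

{code}
<!-- accumulo-site.xml (sketch): Hadoop's jar directories come before $ACCUMULO_HOME/lib,
     so its commons-io 2.x shadows the commons-io 1.4 bundled with Accumulo -->
<property>
  <name>general.classpaths</name>
  <value>
    /etc/hadoop/conf,
    /usr/lib/hadoop/[^.].*.jar,
    /usr/lib/hadoop/lib/[^.].*.jar,
    /usr/lib/hadoop-hdfs/[^.].*.jar,
    /usr/lib/hadoop-hdfs/lib/[^.].*.jar,
    $ZOOKEEPER_HOME/zookeeper[^.].*.jar,
    $ACCUMULO_HOME/lib/[^.].*.jar,
  </value>
</property>
{code}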

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
