accumulo-user mailing list archives

From Josh Elser <josh.el...@gmail.com>
Subject Re: Unable to import RFile produced by AccumuloFileOutputFormat
Date Fri, 08 Jul 2016 19:02:15 GMT
Interesting! I have not run into this one before.

You could use `accumulo rfile-info`, but I'd guess that would net the 
same exception you see below.
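For reference, the inspection utility mentioned above is invoked like this (the paths are illustrative, and the exact options may vary by Accumulo version):

```shell
# Print the metadata and index summary of an RFile.
# Works against HDFS paths or local files (file:// URIs).
accumulo rfile-info hdfs:///accumulo/data/tables/f/b-0000ze9/I0000zeb.rf

# Inspecting a local copy of the downloaded sample file:
accumulo rfile-info file:///tmp/I0000waz.rf
```

If the file's BCFile trailer is corrupt, this command will typically fail with the same `Incompatible BCFile fileBCFileVersion` exception as the bulk import, which at least confirms the file (rather than the import path) is the problem.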

Let me see if I can dig a little into the code and come up with a 
plausible explanation.

Russ Weeks wrote:
> Hi, folks,
>
> Has anybody ever encountered a problem where the RFiles that are
> generated by AccumuloFileOutputFormat can't be imported using
> TableOperations.importDirectory?
>
> I'm seeing this problem very frequently for small RFiles and
> occasionally for larger RFiles. The errors shown in the monitor's log UI
> suggest a corrupt file to me. For instance, the stack trace below shows
> a case where the BCFile version was incorrect, but sometimes it will
> complain about an invalid length, negative offset, or invalid codec.
>
> I'm using HDP Accumulo 1.7.0 (1.7.0.2.3.4.12-1) on an encrypted HDFS
> volume, with Kerberos turned on. The RFiles are generated by
> AccumuloFileOutputFormat from a Spark job.
>
> A very small RFile that exhibits this problem is available here:
> http://firebar.newbrightidea.com/downloads/bad_rfiles/I0000waz.rf
>
> I'm pretty confident that the keys are being written to the RFile in
> order. Are there any tools I could use to inspect the internal structure
> of the RFile?
>
> Thanks,
> -Russ
>
> Unable to find tablets that overlap file
> hdfs://[redacted]/accumulo/data/tables/f/b-0000ze9/I0000zeb.rf
> java.lang.RuntimeException: Incompatible BCFile fileBCFileVersion.
> at org.apache.accumulo.core.file.rfile.bcfile.BCFile$Reader.&lt;init&gt;(BCFile.java:828)
> at org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$Reader.init(CachableBlockFile.java:246)
> at org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$Reader.getBCFile(CachableBlockFile.java:257)
> at org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$Reader.access$100(CachableBlockFile.java:137)
> at org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$Reader$MetaBlockLoader.get(CachableBlockFile.java:209)
> at org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$Reader.getBlock(CachableBlockFile.java:313)
> at org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$Reader.getMetaBlock(CachableBlockFile.java:368)
> at org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$Reader.getMetaBlock(CachableBlockFile.java:137)
> at org.apache.accumulo.core.file.rfile.RFile$Reader.&lt;init&gt;(RFile.java:843)
> at org.apache.accumulo.core.file.rfile.RFileOperations.openReader(RFileOperations.java:79)
> at org.apache.accumulo.core.file.DispatchingFileFactory.openReader(DispatchingFileFactory.java:69)
> at org.apache.accumulo.server.client.BulkImporter.findOverlappingTablets(BulkImporter.java:644)
> at org.apache.accumulo.server.client.BulkImporter.findOverlappingTablets(BulkImporter.java:615)
> at org.apache.accumulo.server.client.BulkImporter$1.run(BulkImporter.java:146)
> at org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
> at org.apache.htrace.wrappers.TraceRunnable.run(TraceRunnable.java:57)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
> at java.lang.Thread.run(Thread.java:745)
