hbase-user mailing list archives

From Amit Sela <am...@infolinks.com>
Subject Bulk load from OSGi running client
Date Tue, 03 Sep 2013 15:37:19 GMT
Hi all,

I'm running Hadoop 1.0.4 with HBase 0.94.2, and I've bundled both (for
client-side use only) so that I can execute MR jobs and/or HBase queries
(and other client operations) from an OSGi environment (Felix, in my case).

So far, with some context class loader adjustments, I've managed to execute
MR jobs and to query HBase (get, put, ...) with no problem.
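
The "context class loader adjustments" mentioned above usually follow a swap-and-restore pattern; here is a minimal runnable sketch (the helper name and the commented-out HBase call are hypothetical, not from the original message):

```java
import java.util.concurrent.Callable;

// Hypothetical helper illustrating the context-class-loader adjustment
// described in the message; the actual bundle code is not shown there.
public class ContextClassLoaderDemo {

    // Runs an action with the given class loader installed as the thread
    // context class loader, restoring the previous one afterwards.
    static <T> T callWithContextClassLoader(ClassLoader cl, Callable<T> action)
            throws Exception {
        Thread current = Thread.currentThread();
        ClassLoader previous = current.getContextClassLoader();
        current.setContextClassLoader(cl);
        try {
            return action.call();
        } finally {
            current.setContextClassLoader(previous);
        }
    }

    public static void main(String[] args) throws Exception {
        // In the OSGi bundle this would be the bundle's own class loader,
        // so that Hadoop's Configuration can locate its resource files.
        final ClassLoader bundleLoader = ContextClassLoaderDemo.class.getClassLoader();

        String result = callWithContextClassLoader(bundleLoader, new Callable<String>() {
            public String call() {
                // The real HBase/MR client calls would go here, e.g.
                // new HTable(conf, "t").get(new Get(row));
                return Thread.currentThread().getContextClassLoader() == bundleLoader
                        ? "ok" : "wrong loader";
            }
        });
        System.out.println(result);
    }
}
```

The `finally` block matters in a long-lived OSGi container: the worker thread may be reused by other bundles, so the previous loader must always be restored.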

I'm trying to execute a bulk load into HBase and I encounter a strange
NullPointerException:
Caused by: java.lang.NullPointerException: null
	at org.apache.felix.framework.BundleRevisionImpl.getResourceLocal(BundleRevisionImpl.java:474)
	at org.apache.felix.framework.BundleWiringImpl.findClassOrResourceByDelegation(BundleWiringImpl.java:1432)
	at org.apache.felix.framework.BundleWiringImpl.getResourceByDelegation(BundleWiringImpl.java:1360)
	at org.apache.felix.framework.BundleWiringImpl$BundleClassLoader.getResource(BundleWiringImpl.java:2256)
	at org.apache.hadoop.conf.Configuration.getResource(Configuration.java:1002)
	at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1156)
	at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1112)
	at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:1056)
	at org.apache.hadoop.conf.Configuration.get(Configuration.java:401)
	at org.apache.hadoop.conf.Configuration.getInt(Configuration.java:471)
	at org.apache.hadoop.io.compress.GzipCodec.createInputStream(GzipCodec.java:131)
	at org.apache.hadoop.hbase.io.hfile.Compression$Algorithm.createDecompressionStream(Compression.java:223)
	at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.decompress(HFileBlock.java:1392)
	at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1897)
	at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1637)
	at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlock(HFileBlock.java:1286)
	at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlockWithBlockType(HFileBlock.java:1294)
	at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.<init>(HFileReaderV2.java:126)
	at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:552)
	at org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:589)
	at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:603)
	at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.groupOrSplit(LoadIncrementalHFiles.java:402)
	at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$2.call(LoadIncrementalHFiles.java:323)
	at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$2.call(LoadIncrementalHFiles.java:321)
	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
	at java.util.concurrent.FutureTask.run(FutureTask.java:166)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
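
For context, the bottom of the trace shows the standard LoadIncrementalHFiles entry point; a minimal sketch of such an invocation looks like this (table name and HFile path are placeholders, not from the original message; not runnable without a live cluster):

```java
Configuration conf = HBaseConfiguration.create();
HTable table = new HTable(conf, "my_table");            // placeholder table name
LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
// doBulkLoad() groups the HFiles under the given path by region and loads
// them; the groupOrSplit() frame in the trace is called from inside it.
loader.doBulkLoad(new Path("/out/hfiles"), table);      // placeholder path
```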

I get this every time I try to bulk load after a server restart followed by
a bundle update (the update is done after the restart, so the update
triggers a refresh of packages).

Strangely, if I immediately try again, it succeeds, and any following
attempts succeed as well.

Any ideas, anyone?

Thanks,

Amit.
