hbase-user mailing list archives

From Chen Wang <chen.apache.s...@gmail.com>
Subject Re: org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$3@482d59a3, java.io.IOException: java.io.IOException: No FileSystem for scheme: maprfs
Date Wed, 18 Jun 2014 05:04:31 GMT
Yes, the Hadoop cluster is using MapR-FS, so the input file paths are in
maprfs:/ format:


2014-06-17 21:48:58 WARN:
org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles - Skipping
non-directory maprfs:/user/chen/hbase/_SUCCESS
2014-06-17 21:48:58 INFO: org.apache.hadoop.hbase.io.hfile.CacheConfig -
Allocating LruBlockCache with maximum size 239.6m
2014-06-17 21:48:58 INFO: org.apache.hadoop.hbase.util.ChecksumType -
Checksum using org.apache.hadoop.util.PureJavaCrc32
2014-06-17 21:48:58 INFO:
org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles - Trying to load
hfile=maprfs:/user/chen/hbase/m/cdd83ff3007b4955869d69c82a9f5b91 first=row1
last=row9
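
A quick client-side check, as a minimal sketch only: it assumes the same kind of
Configuration that is later handed to LoadIncrementalHFiles, and the path is just
the one from the log above. The lookup fails with "No FileSystem for scheme:
maprfs" when no implementation for that scheme is registered on the client.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SchemeCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Path taken from the log above; only illustrative.
        Path hfileDir = new Path("maprfs:/user/chen/hbase");
        // Throws "No FileSystem for scheme: maprfs" if the scheme is unknown
        // to the client-side Configuration/classpath.
        FileSystem fs = hfileDir.getFileSystem(conf);
        System.out.println("Resolved filesystem: " + fs.getClass().getName());
    }
}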

Chen
On Tue, Jun 17, 2014 at 9:59 PM, Ted Yu <yuzhihong@gmail.com> wrote:

> The scheme says maprfs.
> Do you happen to use the MapR product?
>
> Cheers
>
> On Jun 17, 2014, at 9:53 PM, Chen Wang <chen.apache.solr@gmail.com> wrote:
>
> > Folks,
> > I am trying to bulk load HDFS files into HBase with
> >
> > LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
> >
> > loader.doBulkLoad(new Path(args[1]), hTable);
> >
> >
> > However, I receive the following exception: java.io.IOException:
> > java.io.IOException: No FileSystem for scheme: maprfs
> >
> > Exception in thread "main" java.io.IOException: BulkLoad encountered an
> > unrecoverable problem
> >   at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.bulkLoadPhase(LoadIncrementalHFiles.java:331)
> >   at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.doBulkLoad(LoadIncrementalHFiles.java:261)
> >   at com.walmartlabs.targeting.mapred.Driver.main(Driver.java:81)
> >   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >   at java.lang.reflect.Method.invoke(Method.java:597)
> >   at org.apache.hadoop.util.RunJar.main(RunJar.java:197)
> > Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed
> > after attempts=10, exceptions:
> > Tue Jun 17 21:48:58 PDT 2014,
> > org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$3@482d59a3,
> > java.io.IOException: java.io.IOException: No FileSystem for scheme: maprfs
> >
> >
> > What is the reason for this exception? I did some googling and tried to
> > add some config to the HBase configuration:
> >
> > hbaseConf.set("fs.hdfs.impl",
> >     org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
> >
> > hbaseConf.set("fs.file.impl",
> >     org.apache.hadoop.fs.LocalFileSystem.class.getName());
> >
> >
> > But it does not have any effect.
> >
> > Any idea?
> >
> > Thanks in advance.
> >
> > Chen
>
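
For reference, a minimal sketch of the same fs.<scheme>.impl pattern extended to
the maprfs scheme. The class name com.mapr.fs.MapRFileSystem is an assumption
(the filesystem implementation shipped with the MapR client jars, which would
also have to be on the job classpath), not something confirmed in this thread:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class MaprfsSchemeConfig {
    public static Configuration withMaprfsScheme() {
        Configuration hbaseConf = HBaseConfiguration.create();
        // Register an implementation for the maprfs scheme, mirroring the
        // fs.hdfs.impl / fs.file.impl settings quoted above. The class name
        // here is assumed; the jar providing it must be on the classpath too.
        hbaseConf.set("fs.maprfs.impl", "com.mapr.fs.MapRFileSystem");
        return hbaseConf;
    }
}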
