Subject: Re: org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$3@482d59a3, java.io.IOException: java.io.IOException: No FileSystem for scheme: maprfs
From: Chen Wang
To: user@hbase.apache.org
Date: Wed, 18 Jun 2014 03:14:38 -0700

In case anyone is interested, I switched to TableOutputFormat to unblock myself:

job.setOutputFormatClass(TableOutputFormat.class);
job.getConfiguration().set(TableOutputFormat.OUTPUT_TABLE, myTable);
job.setOutputKeyClass(ImmutableBytesWritable.class);
job.setOutputValueClass(Writable.class);
job.setNumReduceTasks(0);

Chen

On Wed, Jun 18, 2014 at 12:40 AM, Ted Yu wrote:

> Have you asked this question on the MapR mailing list?
>
> Cheers
>
> On Jun 18, 2014, at 12:14 AM, Chen Wang wrote:
>
>> I actually tried that already, but it didn't work. I added
>>
>> <dependency>
>>   <groupId>org.apache.hbase</groupId>
>>   <artifactId>hbase</artifactId>
>>   <version>0.94.9-mapr-1308</version>
>> </dependency>
>>
>> and removed the original hbase dependency.
>>
>> On Wed, Jun 18, 2014 at 12:05 AM, Rabbit's Foot <rabbitsfoot@is-land.com.tw> wrote:
>>
>> Maybe you can refer to the Maven Repository and Artifacts for MapR
>> <http://doc.mapr.com/display/MapR/Maven+Repository+and+Artifacts+for+MapR>
>> to set the pom.
>>
>> 2014-06-18 13:33 GMT+08:00 Chen Wang:
>>
>>> Is this error indicating that I basically need an HBase MapR client?
>>> Currently my pom looks like this:
>>>
>>> <dependency>
>>>   <groupId>org.apache.hadoop</groupId>
>>>   <artifactId>hadoop-client</artifactId>
>>>   <version>1.0.3</version>
>>> </dependency>
>>> <dependency>
>>>   <groupId>org.apache.hadoop</groupId>
>>>   <artifactId>hadoop-core</artifactId>
>>>   <version>1.2.1</version>
>>> </dependency>
>>> <dependency>
>>>   <groupId>org.apache.httpcomponents</groupId>
>>>   <artifactId>httpclient</artifactId>
>>>   <version>4.1.1</version>
>>> </dependency>
>>> <dependency>
>>>   <groupId>com.google.code.gson</groupId>
>>>   <artifactId>gson</artifactId>
>>>   <version>2.2.4</version>
>>> </dependency>
>>> <dependency>
>>>   <groupId>org.apache.hbase</groupId>
>>>   <artifactId>hbase</artifactId>
>>>   <version>0.94.6.1</version>
>>> </dependency>
>>>
>>> On Tue, Jun 17, 2014 at 10:04 PM, Chen Wang <chen.apache.solr@gmail.com> wrote:
>>>
>>>> Yes, the hadoop cluster is using maprfs, so the hdfs files are in
>>>> maprfs:/ format:
>>>>
>>>> 2014-06-17 21:48:58 WARN: org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles - Skipping non-directory maprfs:/user/chen/hbase/_SUCCESS
>>>> 2014-06-17 21:48:58 INFO: org.apache.hadoop.hbase.io.hfile.CacheConfig - Allocating LruBlockCache with maximum size 239.6m
>>>> 2014-06-17 21:48:58 INFO: org.apache.hadoop.hbase.util.ChecksumType - Checksum using org.apache.hadoop.util.PureJavaCrc32
>>>> 2014-06-17 21:48:58 INFO: org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles - Trying to load hfile=maprfs:/user/chen/hbase/m/cdd83ff3007b4955869d69c82a9f5b91 first=row1 last=row9
>>>>
>>>> Chen
>>>>
>>>> On Tue, Jun 17, 2014 at 9:59 PM, Ted Yu wrote:
>>>>
>>>>> The scheme says maprfs.
>>>>> Do you happen to use a MapR product?
> >>>>> > >>>>> Cheers > >>>>> > >>>>> On Jun 17, 2014, at 9:53 PM, Chen Wang > >>>>> wrote: > >>>>> > >>>>>> Folk, > >>>>>> I am trying to bulk load the hdfs file into hbase with > >>>>>> > >>>>>> LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf); > >>>>>> > >>>>>> loader.doBulkLoad(new Path(args[1]), hTable); > >>>>>> > >>>>>> > >>>>>> However, i receive exception of java.io.IOException: > >>>>> java.io.IOException: > >>>>>> No FileSystem for scheme: maprfs > >>>>>> > >>>>>> Exception in thread "main" java.io.IOException: BulkLoad encountered > >>> an > >>>>>> unrecoverable problem > >>>>>> > >>>>>> at > >> > org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.bulkLoadPhase(LoadIncrementalHFiles.java:331) > >>>>>> > >>>>>> at > >> > org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.doBulkLoad(LoadIncrementalHFiles.java:261) > >>>>>> > >>>>>> at com.walmartlabs.targeting.mapred.Driver.main(Driver.java:81) > >>>>>> > >>>>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > >>>>>> > >>>>>> at > >> > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) > >>>>>> > >>>>>> at > >> > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) > >>>>>> > >>>>>> at java.lang.reflect.Method.invoke(Method.java:597) > >>>>>> > >>>>>> at org.apache.hadoop.util.RunJar.main(RunJar.java:197) > >>>>>> > >>>>>> Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: > >>>>> Failed > >>>>>> after attempts=10, exceptions: > >>>>>> > >>>>>> Tue Jun 17 21:48:58 PDT 2014, > >>>>>> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$3@482d59a3, > >>>>>> java.io.IOException: java.io.IOException: No FileSystem for scheme: > >>>>> maprfs > >>>>>> > >>>>>> > >>>>>> What is the reason for this exception? 
>>>>>> I did some googling, and tried to add some config to the HBase configuration:
>>>>>>
>>>>>> hbaseConf.set("fs.hdfs.impl",
>>>>>>     org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
>>>>>> hbaseConf.set("fs.file.impl",
>>>>>>     org.apache.hadoop.fs.LocalFileSystem.class.getName());
>>>>>>
>>>>>> But it does not have any effect.
>>>>>>
>>>>>> Any idea?
>>>>>>
>>>>>> Thanks in advance.
>>>>>>
>>>>>> Chen
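[Editor's note] The failure mode in this thread comes down to how Hadoop maps a path's URI scheme to a FileSystem implementation: it consults a configuration key of the form fs.<scheme>.impl, so setting fs.hdfs.impl and fs.file.impl can never help a maprfs:/ path. The sketch below is a minimal, self-contained model of that lookup rule only; it is NOT Hadoop's actual FileSystem code, and the SchemeLookup class is hypothetical.

```java
import java.net.URI;
import java.util.HashMap;
import java.util.Map;

// Simplified model of Hadoop-style FileSystem resolution: the path's URI
// scheme selects the config key "fs.<scheme>.impl". If no implementation is
// registered for that scheme, Hadoop fails with
// "No FileSystem for scheme: <scheme>".
public class SchemeLookup {

    // Returns the implementation class name registered for the path's scheme,
    // or null when the scheme is unknown (where Hadoop would throw IOException).
    static String implFor(Map<String, String> conf, String path) {
        String scheme = URI.create(path).getScheme(); // e.g. "maprfs"
        return conf.get("fs." + scheme + ".impl");
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        // Mirrors the settings tried in the thread:
        conf.put("fs.hdfs.impl", "org.apache.hadoop.hdfs.DistributedFileSystem");
        conf.put("fs.file.impl", "org.apache.hadoop.fs.LocalFileSystem");

        // The bulk-load path from the logs uses the maprfs scheme, so the
        // hdfs/file entries above are never consulted:
        System.out.println(implFor(conf, "maprfs:/user/chen/hbase"));   // null
        System.out.println(implFor(conf, "hdfs://namenode/user/chen")); // DistributedFileSystem
    }
}
```

Under this model, the fix suggested in the thread (using MapR's HBase/Hadoop artifacts, which register an implementation for the maprfs scheme) is what makes the lookup succeed.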