hbase-user mailing list archives

From rahul malviya <malviyarahul2...@gmail.com>
Subject Re: Simultaneous MR on a Snapshot
Date Tue, 21 Jul 2015 17:29:37 GMT
That is what I am doing, but I still get the error:

String tempPathString = "xxxx/snapshot_xxxxx_" + System.nanoTime();
Path tempPath = new Path(tempPathString);
sLogger.info("Temp path for snapshot based run : " + tempPathString);
TableMapReduceUtil.initTableSnapshotMapperJob(snapshotName, scan,
    Mapper.class, keyClass, valueClass, job, true, tempPath);
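
For completeness, here is a minimal, self-contained sketch of how I launch the two jobs concurrently. The snapshot name and scratch path are placeholders for my real values, and I use IdentityTableMapper here only so the example compiles on its own:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.IdentityTableMapper;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;

public class ConcurrentSnapshotScan {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    for (int i = 0; i < 2; i++) {
      Job job = Job.getInstance(conf, "snapshot-scan-" + i);
      // One restore dir per job; the loop index plus nanoTime() keeps
      // the two paths distinct ("/tmp/..." is a placeholder prefix).
      Path restoreDir =
          new Path("/tmp/snapshot_restore_" + i + "_" + System.nanoTime());
      TableMapReduceUtil.initTableSnapshotMapperJob(
          "my_snapshot", new Scan(), IdentityTableMapper.class,
          ImmutableBytesWritable.class, Result.class,
          job, true, restoreDir);
      job.submit(); // non-blocking, so both jobs run at the same time
    }
  }
}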

On Tue, Jul 21, 2015 at 10:00 AM, Matteo Bertozzi <theo.bertozzi@gmail.com> wrote:

> When you init the snapshot job, you have to specify a "restore dir";
> you just have to use a different name for each job.
>
> Matteo
>
>
> On Tue, Jul 21, 2015 at 9:58 AM, rahul gidwani <rahul.gidwani@gmail.com> wrote:
>
> > Hi Rahul,
> >
> > I believe the problem lies in TableSnapshotInputFormat. You always
> > have to clone the snapshot, no matter what.
> >
> >
> >
> > > On Tue, Jul 21, 2015 at 9:28 AM, rahul malviya <malviyarahul2001@gmail.com> wrote:
> >
> > > Hi,
> > >
> > > I am trying to run two simultaneous MR jobs on the same snapshot of an
> > > HBase table, but I am getting an exception before the job starts. Do we
> > > support simultaneous MR on snapshots?
> > >
> > > Exception:
> > >
> > > Exception in thread "main" java.io.IOException: java.util.concurrent.ExecutionException: org.apache.hadoop.fs.FileAlreadyExistsException: /hbase/archive/data/default/table/9450b2ddda0841a991cd81edb568633a/c/.links-0eacd75da00a4a94a6b392913b89dd75/9450b2ddda0841a991cd81edb568633a.table for client 17.170.176.99 already exists
> > >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2616)
> > >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2510)
> > >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2398)
> > >     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:551)
> > >     at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.create(AuthorizationProviderProxyClientProtocol.java:108)
> > >     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:388)
> > >     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> > >     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
> > >     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
> > >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
> > >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
> > >     at java.security.AccessController.doPrivileged(Native Method)
> > >     at javax.security.auth.Subject.doAs(Subject.java:422)
> > >     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
> > >     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
> > >
> > >     at org.apache.hadoop.hbase.util.ModifyRegionUtils.createRegions(ModifyRegionUtils.java:164)
> > >     at org.apache.hadoop.hbase.snapshot.RestoreSnapshotHelper.cloneHdfsRegions(RestoreSnapshotHelper.java:512)
> > >     at org.apache.hadoop.hbase.snapshot.RestoreSnapshotHelper.restoreHdfsRegions(RestoreSnapshotHelper.java:224)
> > >     at org.apache.hadoop.hbase.snapshot.RestoreSnapshotHelper.restoreHdfsRegions(RestoreSnapshotHelper.java:160)
> > >     at org.apache.hadoop.hbase.snapshot.RestoreSnapshotHelper.copySnapshotForScanner(RestoreSnapshotHelper.java:736)
> > >     at org.apache.hadoop.hbase.mapreduce.TableSnapshotInputFormatImpl.setInput(TableSnapshotInputFormatImpl.java:364)
> > >     at org.apache.hadoop.hbase.mapreduce.TableSnapshotInputFormat.setInput(TableSnapshotInputFormat.java:216)
> > >     at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableSnapshotMapperJob(TableMapReduceUtil.java:314)
> > >
> > >     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> > >     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> > >
> > >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > >     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> > >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > >     at java.lang.reflect.Method.invoke(Method.java:483)
> > >     at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
> > >
> >
>
