hbase-user mailing list archives

From Ted Yu <yuzhih...@gmail.com>
Subject Re: exportSnapshot tool
Date Sun, 03 May 2015 15:17:30 GMT
bq. Operation category READ is not supported in state standby

Can you confirm whether the active namenode is running on hb1m?
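
That exception is thrown by the HDFS namenode (StandbyState is namenode HA
code), not by the HBase master, so it looks like the host your command
reached is running the standby namenode. If the cluster uses HDFS HA you can
check which namenode is active with haadmin; nn1 and nn2 below are just
placeholders for the namenode IDs listed under dfs.ha.namenodes.&lt;nameservice&gt;
in your hdfs-site.xml, so substitute your own:

hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2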

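If the destination cluster is set up with HDFS HA, a more robust option is to
address the logical nameservice in -copy-to instead of a single host, so the
client fails over to whichever namenode is currently active. Assuming a
nameservice named hb-ns that is defined in the exporting cluster's client
configuration, the command would look like:

hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot snappy -copy-to hdfs://hb-ns/hbase -overwrite
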
Cheers

On Sun, May 3, 2015 at 8:00 AM, Akmal Abbasov <akmal.abbasov@icloud.com>
wrote:

> Hi Ted,
> I am using hadoop-2.5.1 and hbase-0.98.7-hadoop2.
> The command for snapshot export is:
> hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot snappy
> -copy-to hdfs://hb1m/hbase -overwrite
> Thank you
>
> Regards,
> Akmal Abbasov
>
> > On 03 May 2015, at 16:57, Ted Yu <yuzhihong@gmail.com> wrote:
> >
> > Can you give us a bit more information?
> > Such as:
> > release of hbase you're using
> > release of hadoop you're using
> > the command line for snapshot export
> >
> > Thanks
> >
> > On Sun, May 3, 2015 at 7:53 AM, Akmal Abbasov <akmal.abbasov@icloud.com>
> > wrote:
> >
> >> Hi,
> >> I am using the exportSnapshot tool, and I have observed a strange
> >> behaviour. I have HBase HA configured in my destination cluster; hbm1
> >> and hbm2 are the HBase masters.
> >> Currently hbm2 is active and hbm1 is in standby mode. I am assuming that
> >> when using the exportSnapshot tool I need to specify the address of the
> >> server where my active HBase master is running.
> >> But when I do this, I get:
> >> Exception in thread "main" org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException):
> >> Operation category READ is not supported in state standby
> >>         at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
> >>         at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1688)
> >>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1258)
> >>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3684)
> >>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:803)
> >>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:779)
> >>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> >>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
> >>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
> >>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
> >>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
> >>         at java.security.AccessController.doPrivileged(Native Method)
> >>         at javax.security.auth.Subject.doAs(Subject.java:415)
> >>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
> >>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
> >>
> >>         at org.apache.hadoop.ipc.Client.call(Client.java:1411)
> >>         at org.apache.hadoop.ipc.Client.call(Client.java:1364)
> >>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
> >>         at com.sun.proxy.$Proxy16.getFileInfo(Unknown Source)
> >>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> >>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >>         at java.lang.reflect.Method.invoke(Method.java:606)
> >>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
> >>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> >>         at com.sun.proxy.$Proxy16.getFileInfo(Unknown Source)
> >>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:707)
> >>         at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1785)
> >>         at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1068)
> >>         at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1064)
> >>         at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> >>         at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1064)
> >>         at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1398)
> >>         at org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:870)
> >>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> >>         at org.apache.hadoop.hbase.snapshot.ExportSnapshot.innerMain(ExportSnapshot.java:991)
> >>         at org.apache.hadoop.hbase.snapshot.ExportSnapshot.main(ExportSnapshot.java:995)
> >>
> >> But when I try with the standby HBase master, everything works.
> >> Is this the expected behaviour?
> >> Thank you.
> >>
> >> Regards,
> >> Akmal Abbasov
>
>
