hbase-user mailing list archives

From Nick Dimiduk <ndimi...@gmail.com>
Subject Re: hbase-client Put serialization exception
Date Fri, 17 Oct 2014 00:03:00 GMT
Can you confirm that you're using the same version of hbase in your project
dependencies as with your runtime system? Seems like you might have some
0.94 mixed in somewhere.
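For what it's worth, the failure mode can be sketched without HBase on the classpath. Hadoop's Writable deserialization path instantiates value classes reflectively through a no-arg constructor; `LegacyValue` and `ModernValue` below are hypothetical stand-ins for 0.94's Put (which carried a public no-arg constructor for Writable deserialization) and 0.98's Put (which, as Ted notes, does not):

```java
import java.lang.reflect.Constructor;

public class CtorCheck {
    // Hypothetical stand-in for 0.94's Put: Writable-style classes keep a
    // public no-arg constructor so the framework can instantiate them
    // reflectively before calling readFields().
    static class LegacyValue {
        public LegacyValue() {}
    }

    // Hypothetical stand-in for 0.98's Put: no no-arg constructor, so
    // reflective instantiation fails.
    static class ModernValue {
        public ModernValue(byte[] row) {}
    }

    static boolean hasNoArgCtor(Class<?> c) {
        try {
            c.getDeclaredConstructor();
            return true;
        } catch (NoSuchMethodException e) {
            // Same exception the job log shows for Put.<init>()
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(hasNoArgCtor(LegacyValue.class));  // true
        System.out.println(hasNoArgCtor(ModernValue.class));  // false
    }
}
```

So if a 0.94-era code path (or a stray 0.94 jar) ends up handling 0.98 Put objects, it goes looking for exactly that missing constructor; inspecting the job's effective classpath (e.g. with `mvn dependency:tree`) is the usual way to spot the mix.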

On Thu, Oct 16, 2014 at 2:57 PM, Ted Yu <yuzhihong@gmail.com> wrote:

> Do you have more information about the NoSuchMethodException w.r.t. the
> Put() ctor? The no-arg Put() constructor doesn't exist in 0.98.
>
> Can you check GeoAnalyticFormatBulkLoader ?
>
> The other exceptions are for hdfs.
>
> Cheers
>
> On Thu, Oct 16, 2014 at 11:35 AM, THORMAN, ROBERT D <rt2357@att.com>
> wrote:
>
> > Has anyone had a problem using hbase-client on hadoop2?  Seems like the
> > Put class is missing a method (either a default public constructor or
> > some init method) and throws an exception when my MR job starts up.
> >
> > I’m using:
> > HDP 2.1 with
> > hbase-client.0.98.0.2.1.4.0-632-hadoop2.jar
> >
> > Stack trace:
> >
> > hadoop com.att.bdcoe.platform.persistence.mapreduce.jobs.GeoAnalyticFormatBulkLoader /user/hbase/scada /user/hbase/output
> > 14/10/16 13:25:46 INFO impl.TimelineClientImpl: Timeline service address: http://dn02.platform.bigtdata.io:8188/ws/v1/timeline/
> > 14/10/16 13:25:46 INFO client.RMProxy: Connecting to ResourceManager at dn02.platform.bigtdata.io/172.16.6.27:8050
> > 14/10/16 13:25:47 INFO input.FileInputFormat: Total input paths to process : 1
> > 14/10/16 13:25:47 INFO mapreduce.JobSubmitter: Cleaning up the staging area /user/hbase/.staging/job_1410461150931_0063
> > 14/10/16 13:25:47 ERROR jobs.GeoAnalyticFormatBulkLoader: java.lang.NoSuchMethodException: org.apache.hadoop.hbase.client.Put.<init>()
> > 14/10/16 13:25:47 WARN hdfs.DFSClient: DataStreamer Exception
> > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /user/hbase/.staging/job_1410461150931_0063/job.split: File does not exist. Holder DFSClient_NONMAPREDUCE_-1551666999_1 does not have any open files.
> > at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2952)
> > at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:2772)
> > at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2680)
> > at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:590)
> > at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:440)
> > at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> > at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
> > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
> > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
> > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
> > at java.security.AccessController.doPrivileged(Native Method)
> > at javax.security.auth.Subject.doAs(Subject.java:415)
> > at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
> > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
> >
> > at org.apache.hadoop.ipc.Client.call(Client.java:1410)
> > at org.apache.hadoop.ipc.Client.call(Client.java:1363)
> > at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
> > at com.sun.proxy.$Proxy15.addBlock(Unknown Source)
> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> > at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > at java.lang.reflect.Method.invoke(Method.java:606)
> > at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
> > at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
> > at com.sun.proxy.$Proxy15.addBlock(Unknown Source)
> > at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:361)
> > at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1439)
> > at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1261)
> > at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:525)
> > 14/10/16 13:25:47 ERROR hdfs.DFSClient: Failed to close file /user/hbase/.staging/job_1410461150931_0063/job.split
> > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /user/hbase/.staging/job_1410461150931_0063/job.split: File does not exist. Holder DFSClient_NONMAPREDUCE_-1551666999_1 does not have any open files.
> > (stack trace identical to the DataStreamer exception above)
> >
> > v/r
> > Bob Thorman
> > Principal Big Data Engineer
> > AT&T Big Data CoE
> > 2900 W. Plano Parkway
> > Plano, TX 75075
> > 972-658-1714
> >
> >
> >
>
