hbase-user mailing list archives

From Stack <st...@duboce.net>
Subject Re: Configurations for running a M/R job remotely with HBase
Date Thu, 28 Apr 2011 23:32:14 GMT
You need to fix the "Could not resolve the DNS name of db2.dev.abc.net:60020" error; the machine submitting the job has to be able to resolve the region server hostnames the cluster advertises.
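A quick way to check is to try resolving that name from the client machine (a minimal sketch; the class name is made up and the hostname is copied from your error):

import java.net.InetAddress;
import java.net.UnknownHostException;

public class ResolveCheck {
    public static void main(String[] args) {
        String host = "db2.dev.abc.net";  // hostname from the error above
        try {
            InetAddress addr = InetAddress.getByName(host);
            System.out.println(host + " resolves to " + addr.getHostAddress());
        } catch (UnknownHostException e) {
            System.out.println("Cannot resolve " + host
                    + " -- add it to DNS or /etc/hosts on the submitting machine");
        }
    }
}

If that fails, fix DNS or /etc/hosts on the client so the name resolves there.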
St.Ack

On Thu, Apr 28, 2011 at 3:46 PM, sulabh choudhury <sulabhc@gmail.com> wrote:
> Thanks for the response.
> Harsh, I already looked into that document, but for my use case I need it to
> connect to HBase, so it should be an HMaster or RegionServer port.
>
> Bennett, after supplying "hbase.zookeeper.quorum" it throws this error:
> 11/04/28 14:06:18 ERROR hbase.HServerAddress: Could not resolve the DNS name of db2.dev.abc.net:60020
> 11/04/28 14:06:18 ERROR mapreduce.TableInputFormat: java.lang.IllegalArgumentException: Could not resolve the DNS name of db2.dev.abc.net:60020
>  at org.apache.hadoop.hbase.HServerAddress.checkBindAddressCanBeResolved(HServerAddress.java:105)
>  at org.apache.hadoop.hbase.HServerAddress.<init>(HServerAddress.java:66)
>  at org.apache.hadoop.hbase.zookeeper.RootRegionTracker.dataToHServerAddress(RootRegionTracker.java:82)
>  at org.apache.hadoop.hbase.zookeeper.RootRegionTracker.waitRootRegionLocation(RootRegionTracker.java:73)
>  at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:578)
>  at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:558)
>  at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:687)
>  at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:589)
>  at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:558)
>  at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:687)
>  at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:593)
>  at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:558)
>  at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:171)
>  at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:145)
>  at org.apache.hadoop.hbase.mapreduce.TableInputFormat.setConf(TableInputFormat.java:91)
>  at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
>  at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
>  at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:882)
>  at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:779)
>  at org.apache.hadoop.mapreduce.Job.submit(Job.java:432)
>  at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:447)
>  at abc.main(abc.java:194)
>
> Exception in thread "main" java.io.IOException: No table was provided.
>  at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:130)
>  at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:885)
>  at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:779)
>  at org.apache.hadoop.mapreduce.Job.submit(Job.java:432)
>  at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:447)
>  at abc.main(abc.java:194)
>
>
> On Thu, Apr 28, 2011 at 12:29 PM, Bennett Andrews <bennett.j.andrews@gmail.com> wrote:
>
>> For an HBase table M/R job you should just need "hbase.zookeeper.quorum" in
>> addition to the usual TaskTracker settings.
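>> Roughly like this (a minimal sketch; the quorum hosts, client port, table
>> name and class names are placeholders, and the mapper just copies rows through):
>>
>> import org.apache.hadoop.conf.Configuration;
>> import org.apache.hadoop.hbase.HBaseConfiguration;
>> import org.apache.hadoop.hbase.client.Result;
>> import org.apache.hadoop.hbase.client.Scan;
>> import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
>> import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
>> import org.apache.hadoop.hbase.mapreduce.TableMapper;
>> import org.apache.hadoop.mapreduce.Job;
>>
>> public class RemoteScan {
>>   // Identity mapper over the table; replace with real map logic
>>   static class MyMapper extends TableMapper<ImmutableBytesWritable, Result> {
>>     protected void map(ImmutableBytesWritable row, Result value, Context ctx)
>>         throws java.io.IOException, InterruptedException {
>>       ctx.write(row, value);
>>     }
>>   }
>>
>>   public static void main(String[] args) throws Exception {
>>     Configuration conf = HBaseConfiguration.create();
>>     // ZooKeeper ensemble of the remote cluster -- hostnames are placeholders
>>     conf.set("hbase.zookeeper.quorum", "zk1.example.com,zk2.example.com");
>>     conf.set("hbase.zookeeper.property.clientPort", "2181");
>>
>>     Job job = new Job(conf, "remote-hbase-scan");
>>     job.setJarByClass(RemoteScan.class);
>>     // initTableMapperJob sets TableInputFormat plus the input table name and
>>     // the serialized Scan on the job configuration
>>     TableMapReduceUtil.initTableMapperJob("mytable", new Scan(), MyMapper.class,
>>         ImmutableBytesWritable.class, Result.class, job);
>>     job.setNumReduceTasks(0);
>>     System.exit(job.waitForCompletion(true) ? 0 : 1);
>>   }
>> }
>>
>> (The job submission itself still needs the usual JobTracker/HDFS client settings.)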
>>
>> On Thu, Apr 28, 2011 at 3:15 PM, Harsh J <harsh@cloudera.com> wrote:
>>
>> > I believe 60000 is your HBase HMaster IPC port, not the NameNode one.
>> > What are you attempting to do here?
>> >
>> > See this package's docs for the proper way to run regular MR jobs
>> > using HBase as a Sink/Source:
>> >
>> > http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/package-summary.html
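>> >
>> > In other words (a rough sketch; the NameNode port and the quorum host below
>> > are assumptions, check your core-site.xml and hbase-site.xml for the real values):
>> >
>> > Configuration conf = HBaseConfiguration.create();
>> > // HDFS NameNode RPC address -- commonly port 8020 or 9000, not the HBase
>> > // master's 60000
>> > conf.set("fs.default.name", "hdfs://10.0.0.3:8020");
>> > // HBase itself is located through ZooKeeper, not through fs.default.name
>> > conf.set("hbase.zookeeper.quorum", "10.0.0.3");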
>> >
>> > On Fri, Apr 29, 2011 at 12:29 AM, sulabh choudhury <sulabhc@gmail.com> wrote:
>> > > I am trying to run an M/R job remotely on an HBase table.
>> > >
>> > > I have added conf.set("fs.default.name", "hdfs://10.0.0.3:60000") to the
>> > > code, so now it does reach the cluster, where I see this error on the server:
>> > >
>> > > WARN org.apache.hadoop.ipc.HBaseServer: IPC Server listener on 60000: readAndProcess threw exception java.io.EOFException. Count of bytes read: 0
>> > >
>> > > java.io.EOFException
>> > >        at java.io.DataInputStream.readFully(DataInputStream.java:180)
>> > >        at org.apache.hadoop.io.DataOutputBuffer$Buffer.write(DataOutputBuffer.java:63)
>> > >        at org.apache.hadoop.io.DataOutputBuffer.write(DataOutputBuffer.java:101)
>> > >        at org.apache.hadoop.io.UTF8.readChars(UTF8.java:216)
>> > >        at org.apache.hadoop.io.UTF8.readString(UTF8.java:208)
>> > >        at org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:179)
>> > >        at org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:171)
>> > >        at org.apache.hadoop.hbase.ipc.HBaseServer$Connection.processHeader(HBaseServer.java:966)
>> > >        at org.apache.hadoop.hbase.ipc.HBaseServer$Connection.readAndProcess(HBaseServer.java:950)
>> > >        at org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:522)
>> > >        at org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:316)
>> > >        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>> > >        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>> > >        at java.lang.Thread.run(Thread.java:662)
>> > >
>> > > and at the client side I see
>> > >
>> > > Exception in thread "main" java.io.IOException: Call to /10.0.0.3:60000 failed on local exception: java.io.IOException: Connection reset by peer
>> > >        at org.apache.hadoop.ipc.Client.wrapException(Client.java:775)
>> > >        at org.apache.hadoop.ipc.Client.call(Client.java:743)
>> > >        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>> > >        at $Proxy4.getProtocolVersion(Unknown Source)
>> > >        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
>> > >        at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
>> > >        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
>> > >        etc
>> > >
>> > >
>> > >
>> > > I am pretty sure I am missing some configuration pieces to achieve this.
>> > > What could they be?
>> > >
>> >
>> >
>> >
>> > --
>> > Harsh J
>> >
>>
>
>
>
> --
> Thanks and Regards,
> Sulabh Choudhury
>
