hbase-user mailing list archives

From Jean-Daniel Cryans <jdcry...@apache.org>
Subject Re: ERROR zookeeper.ZKConfig: no clientPort found in zoo.cfg
Date Wed, 23 Feb 2011 19:48:38 GMT
I think I solved this in Dan's "TableInputFormat configuration
problems with 0.90" thread: you need to pass the Job a Configuration
object that was created with HBaseConfiguration.create().
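For reference, a minimal sketch of that fix (the driver class name, job name, and the comment placeholders are illustrative, assuming the 0.90-era org.apache.hadoop.hbase.mapreduce API):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.mapreduce.Job;

// Sketch only: MyDriver and the job name are illustrative.
public class MyDriver {
  public static void main(String[] args) throws Exception {
    // Build the job from an HBase-aware Configuration so that
    // hbase-default.xml / hbase-site.xml (and with them
    // hbase.zookeeper.property.clientPort) end up in the job config.
    Configuration conf = HBaseConfiguration.create();
    Job job = new Job(conf, "example-job");
    // ... set mapper/reducer, input format, output table, then submit.
  }
}
```

If the Job is instead built from a plain `new Configuration()`, the HBase resources are never loaded and the tasks fail to find the ZooKeeper client port.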

J-D

On Wed, Feb 23, 2011 at 2:20 AM, Cavus,M.,Fa. Post Direkt
<M.Cavus@postdirekt.de> wrote:
> Hi Jean-Daniel,
>
> I have only one hbase.jar in my classpath, and if I copy the
> hbase-default.xml file (which is now packaged inside the hbase.jar) into $HBASE_HOME/conf,
> I still get the same failure: "no clientPort found ...."
>
> I get this problem only with hbase 0.90.x.
>
> Regards
> Musa
>
> -----Original Message-----
> From: jdcryans@gmail.com [mailto:jdcryans@gmail.com] On Behalf Of Jean-Daniel Cryans
> Sent: Tuesday, February 22, 2011 11:56 PM
> To: user@hbase.apache.org
> Subject: Re: ERROR zookeeper.ZKConfig: no clientPort found in zoo.cfg
>
> This exception happens when hbase.zookeeper.property.clientPort cannot
> be found in any file on the classpath (the bit about zoo.cfg is a
> bit confusing, I agree).
>
> If you didn't change it, then it should be found in the
> hbase-default.xml file, which has been packaged inside the hbase jar
> since 0.90.0.
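For reference, that property looks like this in hbase-default.xml / hbase-site.xml; the value shown is the stock default, not necessarily what a given cluster uses:

```xml
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>
```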
>
> It's hard for me to tell what exactly is causing your issue, but my
> guess is that it comes from confusion on the classpath. Take a look
> at how it gets built for your mapreduce jobs and make sure you only
> have one hbase jar.
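One quick way to check for duplicate jars is to split the job's classpath on ':' and grep; the CLASSPATH value below is purely an illustration of the failure mode:

```shell
# Illustrative classpath with an accidental second hbase jar on it
CP="/opt/hbase/hbase-0.90.1.jar:/usr/lib/hadoop/hadoop-core.jar:/opt/old/hbase-0.89.20100924.jar"
# Count hbase jars on the path; any answer other than 1 is suspect
echo "$CP" | tr ':' '\n' | grep -c 'hbase.*\.jar'
```

In a real job you would run the same pipeline against the classpath your launcher actually exports (e.g. `$HADOOP_CLASSPATH`).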
>
> J-D
>
> On Mon, Feb 21, 2011 at 8:17 AM, Cavus,M.,Fa. Post Direkt
> <M.Cavus@postdirekt.de> wrote:
>> Hi All,
>>
>> I've 4 Cluster and one Master. On hbase 0.89x can I run my program
>> sucessfully,
>>
>> but I've a big problem with hbase 0.90.0 and 0.90.1. The config files
>> are same.
>>
>>
>>
>> At first I get this error:
>>
>> 11/02/21 17:14:50 ERROR zookeeper.ZKConfig: no clientPort found in zoo.cfg
>>
>> 11/02/21 17:14:50 ERROR mapreduce.TableOutputFormat: org.apache.hadoop.hbase.ZooKeeperConnectionException: java.io.IOException: Unable to determine ZooKeeper ensemble
>>
>>
>>
>> And then I get this error:
>>
>>
>>
>> 11/02/21 17:14:50 INFO input.FileInputFormat: Total input paths to process : 1
>>
>> 11/02/21 17:14:51 INFO mapred.JobClient: Running job: job_201102211629_0002
>>
>> 11/02/21 17:14:52 INFO mapred.JobClient:  map 0% reduce 0%
>>
>> 11/02/21 17:15:03 INFO mapred.JobClient:  map 14% reduce 0%
>>
>> 11/02/21 17:15:06 INFO mapred.JobClient:  map 28% reduce 0%
>>
>> 11/02/21 17:15:09 INFO mapred.JobClient:  map 95% reduce 0%
>>
>> 11/02/21 17:15:12 INFO mapred.JobClient:  map 100% reduce 9%
>>
>> 11/02/21 17:15:21 INFO mapred.JobClient:  map 100% reduce 23%
>>
>> 11/02/21 17:15:30 INFO mapred.JobClient: Task Id : attempt_201102211629_0002_r_000000_0, Status : FAILED
>>
>> java.lang.NullPointerException
>>        at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:126)
>>        at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:81)
>>        at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:508)
>>        at org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
>>        at de.postdirekt.point.point.SBPoint2StrassenabschnittEventDriver$SBReducer.reduce(SBPoint2StrassenabschnittEventDriver.java:230)
>>        at de.postdirekt.point.point.SBPoint2StrassenabschnittEventDriver$SBReducer.reduce(SBPoint2StrassenabschnittEventDriver.java:1)
>>        at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:176)
>>        at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:566)
>>        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:408)
>>        at org.apache.hadoop.mapred.Child.main(Child.java:170)
>>
>>
>>
>> 11/02/21 17:15:31 INFO mapred.JobClient:  map 100% reduce 0%
>>
>> 11/02/21 17:15:40 INFO mapred.JobClient:  map 100% reduce 14%
>>
>> 11/02/21 17:15:45 INFO mapred.JobClient: Task Id : attempt_201102211629_0002_r_000000_1, Status : FAILED
>>
>> java.lang.NullPointerException
>>        at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:126)
>>        at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:81)
>>        at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:508)
>>        at org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
>>        at de.postdirekt.point.point.SBPoint2StrassenabschnittEventDriver$SBReducer.reduce(SBPoint2StrassenabschnittEventDriver.java:230)
>>        at de.postdirekt.point.point.SBPoint2StrassenabschnittEventDriver$SBReducer.reduce(SBPoint2StrassenabschnittEventDriver.java:1)
>>        at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:176)
>>        at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:566)
>>        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:408)
>>        at org.apache.hadoop.mapred.Child.main(Child.java:170)
>>
>>
>>
>> 11/02/21 17:15:46 INFO mapred.JobClient:  map 100% reduce 0%
>>
>> 11/02/21 17:15:55 INFO mapred.JobClient:  map 100% reduce 9%
>>
>> 11/02/21 17:15:58 INFO mapred.JobClient:  map 100% reduce 33%
>>
>> 11/02/21 17:16:00 INFO mapred.JobClient: Task Id : attempt_201102211629_0002_r_000000_2, Status : FAILED
>>
>> java.lang.NullPointerException
>>        at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:126)
>>        at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:81)
>>        at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:508)
>>        at org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
>>        at de.postdirekt.point.point.SBPoint2StrassenabschnittEventDriver$SBReducer.reduce(SBPoint2StrassenabschnittEventDriver.java:230)
>>        at de.postdirekt.point.point.SBPoint2StrassenabschnittEventDriver$SBReducer.reduce(SBPoint2StrassenabschnittEventDriver.java:1)
>>        at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:176)
>>        at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:566)
>>        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:408)
>>        at org.apache.hadoop.mapred.Child.main(Child.java:170)
>>
>>
>>
>> 11/02/21 17:16:01 INFO mapred.JobClient:  map 100% reduce 0%
>>
>> 11/02/21 17:16:13 INFO mapred.JobClient:  map 100% reduce 14%
>>
>> 11/02/21 17:16:19 INFO mapred.JobClient:  map 100% reduce 0%
>>
>> 11/02/21 17:16:21 INFO mapred.JobClient: Job complete: job_201102211629_0002
>>
>> 11/02/21 17:16:21 INFO mapred.JobClient: Counters: 14
>>
>> 11/02/21 17:16:21 INFO mapred.JobClient:   Job Counters
>>
>> 11/02/21 17:16:21 INFO mapred.JobClient:     Launched reduce tasks=4
>>
>> 11/02/21 17:16:21 INFO mapred.JobClient:     Rack-local map tasks=3
>>
>> 11/02/21 17:16:21 INFO mapred.JobClient:     Launched map tasks=7
>>
>> 11/02/21 17:16:21 INFO mapred.JobClient:     Data-local map tasks=4
>>
>> 11/02/21 17:16:21 INFO mapred.JobClient:     Failed reduce tasks=1
>>
>> 11/02/21 17:16:21 INFO mapred.JobClient:   FileSystemCounters
>>
>> 11/02/21 17:16:21 INFO mapred.JobClient:     FILE_BYTES_READ=9681866
>>
>> 11/02/21 17:16:21 INFO mapred.JobClient:     HDFS_BYTES_READ=475184595
>>
>> 11/02/21 17:16:21 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=60377332
>>
>> 11/02/21 17:16:21 INFO mapred.JobClient:   Map-Reduce Framework
>>
>> 11/02/21 17:16:21 INFO mapred.JobClient:     Combine output records=0
>>
>> 11/02/21 17:16:21 INFO mapred.JobClient:     Map input records=1456034
>>
>> 11/02/21 17:16:21 INFO mapred.JobClient:     Spilled Records=1398513
>>
>> 11/02/21 17:16:21 INFO mapred.JobClient:     Map output bytes=436798755
>>
>> 11/02/21 17:16:21 INFO mapred.JobClient:     Combine input records=0
>>
>> 11/02/21 17:16:21 INFO mapred.JobClient:     Map output records=1176324
>>
>> *closing down* with (1) at Mon Feb 21 17:16:21 CET 2011
>>
>>
>>
>> Does anyone have an idea what the problem is?
>>
>>
>>
>> Regards
>>
>> Musa Cavus
>>
>>
>>
>>
>
