hbase-user mailing list archives

From Travis Hegner <theg...@trilliumit.com>
Subject Re: hbase connection issue
Date Tue, 14 Jul 2009 14:51:25 GMT
Since you are running a single-node cluster, perhaps you should stick
with the local file system directive, i.e.:

<property>
	<name>hbase.rootdir</name>
	<value>file:///var/hbase</value>
	<description>The directory shared by region servers.
	Should be fully-qualified to include the filesystem to use.
	E.g: hdfs://NAMENODE_SERVER:PORT/HBASE_ROOTDIR
	</description>
</property>


Obviously, if you are trying to test the Hadoop DFS as well, then this
is not the way to go, but if your only intention is to test HBase on a
single node, then give this a try.
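
For what it's worth, a quick sanity check once you've changed the
rootdir (a rough sketch; it assumes you run the commands from your
HBase install directory):

	bin/stop-hbase.sh      # stop any half-started instance first
	bin/start-hbase.sh
	bin/hbase shell
	hbase> create 'test', 'data'
	hbase> list

If 'list' shows the table and files appear under /var/hbase, the
local-filesystem setup is working.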

Travis Hegner
http://www.travishegner.com/


-----Original Message-----
From: Jean-Daniel Cryans <jdcryans@apache.org>
Reply-to: "hbase-user@hadoop.apache.org" <hbase-user@hadoop.apache.org>
To: hbase-user@hadoop.apache.org <hbase-user@hadoop.apache.org>
Subject: Re: hbase connection issue
Date: Tue, 14 Jul 2009 10:44:44 -0400


Are you sure the Namenode is running?
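
A quick way to check, as a rough sketch assuming a standard Hadoop
install on the same machine:

	jps                           # should list a NameNode process
	bin/hadoop dfsadmin -report   # run from your Hadoop install dir;
	                              # this errors out quickly if the
	                              # NameNode is down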

J-D

On Tue, Jul 14, 2009 at 10:35 AM, Muhammad Mudassar <mudassark7@gmail.com> wrote:
> here is logs of master
>
>
>
> Tue Jul 14 20:28:20 PKST 2009 Starting master on mudassar-desktop
> ulimit -n 1024
> 2009-07-14 20:28:20,458 INFO org.apache.hadoop.hbase.master.HMaster: vmName=Java HotSpot(TM) Server VM, vmVendor=Sun Microsystems Inc., vmVersion=14.0-b16
> 2009-07-14 20:28:20,459 INFO org.apache.hadoop.hbase.master.HMaster: vmInputArguments=[-Xmx1000m, -XX:+HeapDumpOnOutOfMemoryError, -Dhbase.log.dir=/home/hadoop/Desktop/hbase-0.19.2/bin/../logs, -Dhbase.log.file=hbase-hadoop-master-mudassar-desktop.log, -Dhbase.home.dir=/home/hadoop/Desktop/hbase-0.19.2/bin/.., -Dhbase.id.str=hadoop, -Dhbase.root.logger=INFO,DRFA, -Djava.library.path=/home/hadoop/Desktop/hbase-0.19.2/bin/../lib/native/Linux-i386-32]
> 2009-07-14 20:28:21,729 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /127.0.0.1:60000. Already tried 0 time(s).
> 2009-07-14 20:28:22,729 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /127.0.0.1:60000. Already tried 1 time(s).
> 2009-07-14 20:28:23,729 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /127.0.0.1:60000. Already tried 2 time(s).
> 2009-07-14 20:28:24,730 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /127.0.0.1:60000. Already tried 3 time(s).
> 2009-07-14 20:28:25,730 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /127.0.0.1:60000. Already tried 4 time(s).
> 2009-07-14 20:28:26,731 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /127.0.0.1:60000. Already tried 5 time(s).
> 2009-07-14 20:28:27,731 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /127.0.0.1:60000. Already tried 6 time(s).
> 2009-07-14 20:28:28,731 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /127.0.0.1:60000. Already tried 7 time(s).
> 2009-07-14 20:28:29,732 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /127.0.0.1:60000. Already tried 8 time(s).
> 2009-07-14 20:28:30,732 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /127.0.0.1:60000. Already tried 9 time(s).
> 2009-07-14 20:28:30,734 ERROR org.apache.hadoop.hbase.master.HMaster: Can not start master
> java.net.ConnectException: Call to /127.0.0.1:60000 failed on connection exception: java.net.ConnectException: Connection refused
>    at org.apache.hadoop.ipc.Client.wrapException(Client.java:724)
>    at org.apache.hadoop.ipc.Client.call(Client.java:700)
>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
>    at $Proxy0.getProtocolVersion(Unknown Source)
>    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:348)
>    at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:104)
>    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:176)
>    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:75)
>    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1367)
>    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:56)
>    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1379)
>    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:215)
>    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:120)
>    at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:186)
>    at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:156)
>    at org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:96)
>    at org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:78)
>    at org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:1013)
>    at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1057)
> Caused by: java.net.ConnectException: Connection refused
>    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
>    at sun.nio.ch.SocketAdaptor.connect(SocketAdaptor.java:100)
>    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:300)
>    at org.apache.hadoop.ipc.Client$Connection.access$1700(Client.java:177)
>    at org.apache.hadoop.ipc.Client.getConnection(Client.java:801)
>    at org.apache.hadoop.ipc.Client.call(Client.java:686)
>    ... 17 more
>
>
>
>
>
>
>
> On Tue, Jul 14, 2009 at 8:28 PM, Vaibhav Puranik <vpuranik@gmail.com> wrote:
>
>> Muhammad,
>>
>> Looks like your HBase master didn't start properly. You should check
>> your master log.
>>
>> The master log will be in the logs directory. It will have a more
>> specific exception that can help you find the real problem. If you
>> can't solve it, paste the exception from the log here so that we can
>> help you.
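>>
>> As a minimal sketch, assuming the default layout (the file name
>> follows the hbase-<user>-master-<hostname>.log pattern, so adjust
>> for your setup; run from your HBase install directory):
>>
>>     tail -n 100 logs/hbase-*-master-*.log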
>>
>> Regards,
>> Vaibhav
>>
>> On Tue, Jul 14, 2009 at 6:47 AM, Muhammad Mudassar <mudassark7@gmail.com> wrote:
>>
>> > Hi,
>> >
>> > I am running HBase on a single node and my hbase-site settings are as
>> > follows:
>> >
>> >
>> > <configuration>
>> >  <property>
>> >    <name>hbase.rootdir</name>
>> >    <value>hdfs://127.0.0.1:9000/hbase</value>
>> >    <description>The directory shared by region servers.
>> >    Should be fully-qualified to include the filesystem to use.
>> >    E.g: hdfs://NAMENODE_SERVER:PORT/HBASE_ROOTDIR
>> >    </description>
>> >  </property>
>> >    <property>
>> >    <name>hbase.master</name>
>> >    <value>local</value>
>> >    <description>The host and port that the HBase master runs at.
>> >    A value of 'local' runs the master and a regionserver in
>> >    a single process.
>> >    </description>
>> >  </property>
>> >
>> > </configuration>
>> >
>> >
>> > After this, when I create a table in the hbase shell, it keeps trying
>> > to connect to the server, like this:
>> >
>> > 09/07/14 19:41:03 INFO ipc.HBaseClass: Retrying connect to server: localhost/127.0.0.1:60000. Already tried 0 time(s).
>> > 09/07/14 19:41:04 INFO ipc.HBaseClass: Retrying connect to server: localhost/127.0.0.1:60000. Already tried 1 time(s).
>> > 09/07/14 19:41:05 INFO ipc.HBaseClass: Retrying connect to server: localhost/127.0.0.1:60000. Already tried 2 time(s).
>> > NativeException: org.apache.hadoop.hbase.MasterNotRunningException: localhost:60000
>> >    from org/apache/hadoop/hbase/client/HConnectionManager.java:239:in `getMaster'
>> >    from org/apache/hadoop/hbase/client/HBaseAdmin.java:70:in `<init>'
>> >    from sun/reflect/NativeConstructorAccessorImpl.java:-2:in `newInstance0'
>> >    from sun/reflect/NativeConstructorAccessorImpl.java:39:in `newInstance'
>> >    from sun/reflect/DelegatingConstructorAccessorImpl.java:27:in `newInstance'
>> >
>> >
>> >
>> > I need help solving this!
>> >
>> > Waiting.
>> >
>> >
>> >
>> > Regards
>> >
>> > Muhammad Mudassar
>> >
>>
>
