hbase-user mailing list archives

From "Yabo-Arber Xu" <arber.resea...@gmail.com>
Subject Re: Hbase single-Node cluster config problem
Date Sat, 02 Aug 2008 05:54:28 GMT
Thanks J-D and St.Ack for your help. I will try what you suggested to expand
the cluster.

St.Ack: I explicitly set the following property in hadoop-site.xml:

<property>
 <name>dfs.http.address</name>
 <value>*my_host_name*:50070</value>
</property>

I can also see that there is an instance listening on 50070, but when I type
http://*my_host_name*:50070 into Firefox on the other computer, there is
still no connection. Did I miss anything?
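
For reference, here is roughly what I am checking, following J-D's earlier
telnet suggestion (*my_host_name* below is a placeholder for my actual
instance):

  # on the server: confirm which address the web UI is bound to
  netstat -plten | grep 50070

  # from the other computer: test raw TCP reachability; if this hangs or is
  # refused while the port shows as LISTEN locally, I suppose the problem is
  # a firewall or EC2 security group rather than HDFS itself
  telnet *my_host_name* 50070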

Thanks again.

On Fri, Aug 1, 2008 at 3:05 PM, stack <stack@duboce.net> wrote:

> Default for HDFS webui is:
>
> <property>
>  <name>dfs.http.address</name>
>  <value>0.0.0.0:50070</value>
>  <description>
>   The address and the base port where the dfs namenode web ui will
>   listen on.
>   If the port is 0 then the server will start on a free port.
>  </description>
> </property>
>
> I may not be reading the below properly but it looks like there is
> something listening on 50070.
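>
> For what it's worth, the neighbouring stock web UI ports are 50075
> (datanode) and 50090 (secondary namenode), plus 60010 and 60030 for the
> hbase master and regionserver UIs (all from the *-default.xml files, so
> adjust if you have overridden them). A quick local check, just as a sketch:
>
>   # a successful connect means the UI is actually serving on that port
>   telnet localhost 50070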
>
> St.Ack
>
>
> Yabo-Arber Xu wrote:
>
>> Hi J-D,
>>
>> Sorry, I just now forgot to ask another question. Even though I have HDFS
>> and HBase running well on one computer, strangely I cannot connect to
>> HDFS through the web UI. I ran the following command on my computer, and
>> it seems the only active ports are for HDFS and HBase; no default port
>> appears to be open for web UI connections.
>>
>> netstat -plten | grep java
>>
>> tcp        0      0 10.254.199.132:60000   0.0.0.0:*   LISTEN      0
>> tcp        0      0 0.0.0.0:37669          0.0.0.0:*   LISTEN      0
>> tcp        0      0 10.254.199.132:54310   0.0.0.0:*   LISTEN      0
>> tcp        0      0 0.0.0.0:49769          0.0.0.0:*   LISTEN      0
>> tcp        0      0 0.0.0.0:60010          0.0.0.0:*   LISTEN      0
>> tcp        0      0 0.0.0.0:50090          0.0.0.0:*   LISTEN      0
>> tcp        0      0 0.0.0.0:60020          0.0.0.0:*   LISTEN      0
>> tcp        0      0 0.0.0.0:50070          0.0.0.0:*   LISTEN      0
>> tcp        0      0 0.0.0.0:41625          0.0.0.0:*   LISTEN      0
>> tcp        0      0 0.0.0.0:50010          0.0.0.0:*   LISTEN      0
>> tcp        0      0 0.0.0.0:50075          0.0.0.0:*   LISTEN      0
>> tcp        0      0 0.0.0.0:60030          0.0.0.0:*   LISTEN      0
>>
>>
>> Thanks,
>> Arber
>>
>> On Fri, Aug 1, 2008 at 2:48 PM, Yabo-Arber Xu <arber.research@gmail.com>
>> wrote:
>>
>>
>>
>>> Hi J-D,
>>>
>>> Thanks, J-D. I cleaned the HDFS directory and re-ran it. It's fine now.
>>>
>>> I wondered if there are any documents out there showing how to expand
>>> such a one-computer-with-all-servers setup to a truly distributed one
>>> without re-importing all the data?
>>>
>>> Thanks again,
>>> Arber
>>>
>>>
>>> On Fri, Aug 1, 2008 at 6:13 AM, Jean-Daniel Cryans <jdcryans@gmail.com>
>>> wrote:
>>>
>>>
>>>
>>>> Arber,
>>>>
>>>> It seems that your master is unable to communicate with HDFS (that's the
>>>> SocketTimeoutException). To correct this, I would check that HDFS is
>>>> running by looking at the web UI, make sure that the ports are open
>>>> (using telnet, for example), and also check that HDFS uses the default
>>>> ports, as in the sketch below.
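>>>>
>>>> Something along these lines, using the host and ports from your mail
>>>> (only a sketch; adjust to your setup):
>>>>
>>>>   # is the namenode web UI up? (default dfs.http.address port)
>>>>   telnet ec2-67-202-24-167.compute-1.amazonaws.com 50070
>>>>
>>>>   # is the namenode RPC port from your hbase.rootdir (9000) reachable?
>>>>   telnet ec2-67-202-24-167.compute-1.amazonaws.com 9000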
>>>>
>>>> J-D
>>>>
>>>> On Fri, Aug 1, 2008 at 5:40 AM, Yabo-Arber Xu <arber.research@gmail.com>
>>>> wrote:
>>>>
>>>>> Greetings,
>>>>>
>>>>> I am trying to set up an HBase cluster. To simplify the setup, I first
>>>>> tried a single-node cluster, where the HDFS name/data nodes and the
>>>>> HBase master/regionserver are all set up on the same computer.
>>>>>
>>>>> HDFS passed the test and works well. But for HBase, when I try to create
>>>>> a table using the hbase shell, it keeps popping up the following message:
>>>>>
>>>>> 08/08/01 02:30:29 INFO ipc.Client: Retrying connect to server:
>>>>> ec2-67-202-24-167.compute-1.amazonaws.com/10.254.199.132:60000.
>>>>> Already tried 1 time(s).
>>>>>
>>>>> I checked the hbase log, and it has the following error:
>>>>>
>>>>> 2008-08-01 02:30:24,337 ERROR org.apache.hadoop.hbase.HMaster: Can not
>>>>> start master
>>>>> java.lang.reflect.InvocationTargetException
>>>>>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>>>>   at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>>>>>   at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>>>>>   at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>>>>>   at org.apache.hadoop.hbase.HMaster.doMain(HMaster.java:3313)
>>>>>   at org.apache.hadoop.hbase.HMaster.main(HMaster.java:3347)
>>>>> Caused by: java.net.SocketTimeoutException: timed out waiting for rpc
>>>>> response
>>>>>   at org.apache.hadoop.ipc.Client.call(Client.java:514)
>>>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:198)
>>>>>   at org.apache.hadoop.dfs.$Proxy0.getProtocolVersion(Unknown Source)
>>>>>   at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:291)
>>>>>   at org.apache.hadoop.dfs.DFSClient.createNamenode(DFSClient.java:128)
>>>>>   at org.apache.hadoop.dfs.DFSClient.<init>(DFSClient.java:151)
>>>>>   at org.apache.hadoop.dfs.DistributedFileSystem.initialize(DistributedFileSystem.java:65)
>>>>>   at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1182)
>>>>>   at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:55)
>>>>>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1193)
>>>>>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:150)
>>>>>
>>>>>
>>>>> For your information, I also attach my hbase-site.xml:
>>>>>
>>>>>  <property>
>>>>>   <name>hbase.master</name>
>>>>>   <value>ec2-67-202-24-167.compute-1.amazonaws.com:60000</value>
>>>>>   <description>The host and port that the HBase master runs at.
>>>>>   </description>
>>>>>  </property>
>>>>>
>>>>>  <property>
>>>>>   <name>hbase.rootdir</name>
>>>>>   <value>hdfs://ec2-67-202-24-167.compute-1.amazonaws.com:9000/hbase</value>
>>>>>   <description>The directory shared by region servers.
>>>>>   </description>
>>>>>  </property>
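>>>>>
>>>>> (One thing I am unsure about: as I understand it, the port in
>>>>> hbase.rootdir has to match the fs.default.name port in hadoop-site.xml.
>>>>> A quick cross-check, assuming the standard conf layout, would be
>>>>> something like:
>>>>>
>>>>>   grep -A 1 fs.default.name conf/hadoop-site.xml
>>>>>   netstat -plten | grep 9000
>>>>>
>>>>> where the second command should show the namenode listening on 9000.)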
>>>>>
>>>>> Can anybody point out what I did wrong?
>>>>>
>>>>> Thanks in advance
>>>>>
>>>>> -Arber
>>>>>
>>>>>
>>>>>
>>>>
>>
>>
>
>


-- 
Yabo-Arber Xu <yxu@cs.sfu.ca>
Web: http://www.cs.sfu.ca/~yxu/personal/
