From: stack
Date: Sat, 02 Aug 2008 10:06:18 -0700
To: hbase-user@hadoop.apache.org
Subject: Re: Hbase single-Node cluster config problem

That looks like it should work, though IIRC it's not needed; the webui
comes up by default.  Have you done as J-D suggests below and tried
telnetting to the port to see that there is something listening?  Look in
the HDFS logs to see if it reports putting up a webui.

St.Ack

Yabo-Arber Xu wrote:
> Thanks J-D and St.Ack for your help. I will try what you suggested to
> expand the cluster.
>
> St.Ack: I explicitly set the following property in hadoop-site.xml:
>
> <property>
>   <name>dfs.http.address</name>
>   <value>my_host_name:50070</value>
> </property>
>
> I also can see there is an instance listening on 50070, but when I type
> http://my_host_name:50070 into Firefox on the other computer, there is
> still no connection. Did I miss anything?
>
> Thanks again.
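A listener on 50070 that a remote browser still cannot reach is worth
probing from the remote machine itself. A minimal check, assuming
my_host_name is the same placeholder used above and resolves from the
other computer (the security-group remark is a guess for an EC2 host, not
something confirmed in this thread):

    # Run these from the OTHER computer, not from the namenode host.
    # A timeout here, while netstat on the server still shows a listener
    # on 50070, usually points at a firewall or EC2 security group rule.
    telnet my_host_name 50070

    # Or fetch the web UI front page directly:
    curl -v http://my_host_name:50070/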
> On Fri, Aug 1, 2008 at 3:05 PM, stack wrote:
>
>> Default for HDFS webui is:
>>
>> <property>
>>   <name>dfs.http.address</name>
>>   <value>0.0.0.0:50070</value>
>>   <description>The address and the base port where the dfs namenode web
>>   ui will listen on. If the port is 0 then the server will start on a
>>   free port.</description>
>> </property>
>>
>> I may not be reading the below properly, but it looks like there is
>> something listening on 50070.
>>
>> St.Ack
>>
>> Yabo-Arber Xu wrote:
>>
>>> Hi J-D,
>>>
>>> Sorry, just now I forgot to ask another question. Even though I have
>>> HDFS and HBase running well on one computer, strangely I can not
>>> connect to HDFS using the WebUI. I ran the following command on my
>>> computer, and it seems the only ports active are for HDFS and HBase;
>>> there is no such default port open for a WebUI connection.
>>>
>>> netstat -plten | grep java
>>>
>>> tcp  0  0  10.254.199.132:60000  0.0.0.0:*  LISTEN  0
>>> tcp  0  0  0.0.0.0:37669         0.0.0.0:*  LISTEN  0
>>> tcp  0  0  10.254.199.132:54310  0.0.0.0:*  LISTEN  0
>>> tcp  0  0  0.0.0.0:49769         0.0.0.0:*  LISTEN  0
>>> tcp  0  0  0.0.0.0:60010         0.0.0.0:*  LISTEN  0
>>> tcp  0  0  0.0.0.0:50090         0.0.0.0:*  LISTEN  0
>>> tcp  0  0  0.0.0.0:60020         0.0.0.0:*  LISTEN  0
>>> tcp  0  0  0.0.0.0:50070         0.0.0.0:*  LISTEN  0
>>> tcp  0  0  0.0.0.0:41625         0.0.0.0:*  LISTEN  0
>>> tcp  0  0  0.0.0.0:50010         0.0.0.0:*  LISTEN  0
>>> tcp  0  0  0.0.0.0:50075         0.0.0.0:*  LISTEN  0
>>> tcp  0  0  0.0.0.0:60030         0.0.0.0:*  LISTEN  0
>>>
>>> Thanks,
>>> Arber
>>>
>>> On Fri, Aug 1, 2008 at 2:48 PM, Yabo-Arber Xu wrote:
>>>
>>>> Hi J-D,
>>>>
>>>> Thanks, J-D. I cleaned the HDFS directory and re-ran it. It's fine now.
>>>>
>>>> I wondered if there are any documents out there showing how to expand
>>>> such a one-computer-with-all-servers structure to a truly distributed
>>>> one without re-importing all the data?
>>>>
>>>> Thanks again,
>>>> Arber
>>>>
>>>> On Fri, Aug 1, 2008 at 6:13 AM, Jean-Daniel Cryans wrote:
>>>>
>>>>> Yair,
>>>>>
>>>>> It seems that your master is unable to communicate with HDFS (that's
>>>>> the SocketTimeoutException). To correct this, I would check that HDFS
>>>>> is running by looking at the web UI, I would make sure that the ports
>>>>> are open (using telnet for example), and I would also check that HDFS
>>>>> uses the default ports.
>>>>>
>>>>> J-D
>>>>>
>>>>> On Fri, Aug 1, 2008 at 5:40 AM, Yabo-Arber Xu wrote:
>>>>>
>>>>>> Greetings,
>>>>>>
>>>>>> I am trying to set up an hbase cluster. To simplify the setting, I
>>>>>> first tried a single-node cluster, where the HDFS name/data nodes
>>>>>> are set up on one computer, and the hbase master/regionserver are
>>>>>> also set up on the same computer.
>>>>>>
>>>>>> HDFS passed the test and works well. But for hbase, when I try to
>>>>>> create a table using the hbase shell, it keeps popping up the
>>>>>> following message:
>>>>>>
>>>>>> 08/08/01 02:30:29 INFO ipc.Client: Retrying connect to server:
>>>>>> ec2-67-202-24-167.compute-1.amazonaws.com/10.254.199.132:60000.
>>>>>> Already tried 1 time(s).
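This "Retrying connect" message means the hbase shell cannot get an answer
from the master's RPC port, and the log excerpt that follows shows the
master itself timing out against HDFS. J-D's telnet suggestion applies to
both ports. A sketch, using the hostname and ports quoted in this thread
(they may differ on another setup):

    # Is the HBase master RPC port answering?
    telnet ec2-67-202-24-167.compute-1.amazonaws.com 60000

    # Is the HDFS namenode RPC port that hbase.rootdir points at answering?
    telnet ec2-67-202-24-167.compute-1.amazonaws.com 9000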
>>>>>>
>>>>>> I checked the hbase log, and it has the following error:
>>>>>>
>>>>>> 2008-08-01 02:30:24,337 ERROR org.apache.hadoop.hbase.HMaster: Can
>>>>>> not start master
>>>>>> java.lang.reflect.InvocationTargetException
>>>>>>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>>>>>   at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>>>>>>   at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>>>>>>   at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>>>>>>   at org.apache.hadoop.hbase.HMaster.doMain(HMaster.java:3313)
>>>>>>   at org.apache.hadoop.hbase.HMaster.main(HMaster.java:3347)
>>>>>> Caused by: java.net.SocketTimeoutException: timed out waiting for rpc response
>>>>>>   at org.apache.hadoop.ipc.Client.call(Client.java:514)
>>>>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:198)
>>>>>>   at org.apache.hadoop.dfs.$Proxy0.getProtocolVersion(Unknown Source)
>>>>>>   at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:291)
>>>>>>   at org.apache.hadoop.dfs.DFSClient.createNamenode(DFSClient.java:128)
>>>>>>   at org.apache.hadoop.dfs.DFSClient.<init>(DFSClient.java:151)
>>>>>>   at org.apache.hadoop.dfs.DistributedFileSystem.initialize(DistributedFileSystem.java:65)
>>>>>>   at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1182)
>>>>>>   at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:55)
>>>>>>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1193)
>>>>>>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:150)
>>>>>>
>>>>>> For your information, I also attach the hbase-site.xml:
>>>>>>
>>>>>> <property>
>>>>>>   <name>hbase.master</name>
>>>>>>   <value>ec2-67-202-24-167.compute-1.amazonaws.com:60000</value>
>>>>>>   <description>The host and port that the HBase master runs at.
>>>>>>   </description>
>>>>>> </property>
>>>>>>
>>>>>> <property>
>>>>>>   <name>hbase.rootdir</name>
>>>>>>   <value>hdfs://ec2-67-202-24-167.compute-1.amazonaws.com:9000/hbase</value>
>>>>>>   <description>The directory shared by region servers.</description>
>>>>>> </property>
>>>>>>
>>>>>> Can anybody point out what I did wrong?
>>>>>>
>>>>>> Thanks in advance
>>>>>>
>>>>>> -Arber
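J-D's advice to check that HDFS uses the default ports maps directly onto
the config above: the host:port inside hbase.rootdir must name the same
address as fs.default.name in Hadoop's own config, or the master cannot
reach the namenode and can fail with a timeout like the one in the trace.
One way to compare the two (a sketch; the conf paths assume a stock layout
under $HADOOP_HOME and $HBASE_HOME):

    # The namenode address HDFS actually binds:
    grep -A 1 'fs.default.name' $HADOOP_HOME/conf/hadoop-site.xml

    # The namenode address the HBase master will dial
    # (the host:port part of hdfs://host:port/hbase):
    grep -A 1 'hbase.rootdir' $HBASE_HOME/conf/hbase-site.xml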