Subject: Re: hbase connection issue
From: Travis Hegner
Reply-To: thegner@trilliumit.com
To: hbase-user@hadoop.apache.org
Date: Tue, 14 Jul 2009 10:51:25 -0400

Since you are running a single-node cluster, perhaps you should stick with the local filesystem directive, i.e.:

  <property>
    <name>hbase.rootdir</name>
    <value>file:///var/hbase</value>
    <description>The directory shared by region servers.
    Should be fully-qualified to include the filesystem to use.
    E.g: hdfs://NAMENODE_SERVER:PORT/HBASE_ROOTDIR
    </description>
  </property>

Obviously, if you are trying to test the Hadoop DFS as well, then this is not the way to go, but if your only intention is to test HBase on a single node, then give this a try.

Travis Hegner
http://www.travishegner.com/

-----Original Message-----
From: Jean-Daniel Cryans
Reply-to: "hbase-user@hadoop.apache.org"
To: hbase-user@hadoop.apache.org
Subject: Re: hbase connection issue
Date: Tue, 14 Jul 2009 10:44:44 -0400

Are you sure the Namenode is running?
J-D

On Tue, Jul 14, 2009 at 10:35 AM, Muhammad Mudassar wrote:
> here are the logs of the master:
>
> Tue Jul 14 20:28:20 PKST 2009 Starting master on mudassar-desktop
> ulimit -n 1024
> 2009-07-14 20:28:20,458 INFO org.apache.hadoop.hbase.master.HMaster: vmName=Java HotSpot(TM) Server VM, vmVendor=Sun Microsystems Inc., vmVersion=14.0-b16
> 2009-07-14 20:28:20,459 INFO org.apache.hadoop.hbase.master.HMaster: vmInputArguments=[-Xmx1000m, -XX:+HeapDumpOnOutOfMemoryError, -Dhbase.log.dir=/home/hadoop/Desktop/hbase-0.19.2/bin/../logs, -Dhbase.log.file=hbase-hadoop-master-mudassar-desktop.log, -Dhbase.home.dir=/home/hadoop/Desktop/hbase-0.19.2/bin/.., -Dhbase.id.str=hadoop, -Dhbase.root.logger=INFO,DRFA, -Djava.library.path=/home/hadoop/Desktop/hbase-0.19.2/bin/../lib/native/Linux-i386-32]
> 2009-07-14 20:28:21,729 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /127.0.0.1:60000. Already tried 0 time(s).
> 2009-07-14 20:28:22,729 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /127.0.0.1:60000. Already tried 1 time(s).
> 2009-07-14 20:28:23,729 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /127.0.0.1:60000. Already tried 2 time(s).
> 2009-07-14 20:28:24,730 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /127.0.0.1:60000. Already tried 3 time(s).
> 2009-07-14 20:28:25,730 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /127.0.0.1:60000. Already tried 4 time(s).
> 2009-07-14 20:28:26,731 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /127.0.0.1:60000. Already tried 5 time(s).
> 2009-07-14 20:28:27,731 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /127.0.0.1:60000. Already tried 6 time(s).
> 2009-07-14 20:28:28,731 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /127.0.0.1:60000. Already tried 7 time(s).
> 2009-07-14 20:28:29,732 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /127.0.0.1:60000.
> Already tried 8 time(s).
> 2009-07-14 20:28:30,732 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /127.0.0.1:60000. Already tried 9 time(s).
> 2009-07-14 20:28:30,734 ERROR org.apache.hadoop.hbase.master.HMaster: Can not start master
> java.net.ConnectException: Call to /127.0.0.1:60000 failed on connection exception: java.net.ConnectException: Connection refused
>     at org.apache.hadoop.ipc.Client.wrapException(Client.java:724)
>     at org.apache.hadoop.ipc.Client.call(Client.java:700)
>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
>     at $Proxy0.getProtocolVersion(Unknown Source)
>     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:348)
>     at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:104)
>     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:176)
>     at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:75)
>     at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1367)
>     at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:56)
>     at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1379)
>     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:215)
>     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:120)
>     at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:186)
>     at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:156)
>     at org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:96)
>     at org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:78)
>     at org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:1013)
>     at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1057)
> Caused by: java.net.ConnectException: Connection refused
>     at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>     at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
>     at sun.nio.ch.SocketAdaptor.connect(SocketAdaptor.java:100)
>     at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:300)
>     at org.apache.hadoop.ipc.Client$Connection.access$1700(Client.java:177)
>     at org.apache.hadoop.ipc.Client.getConnection(Client.java:801)
>     at org.apache.hadoop.ipc.Client.call(Client.java:686)
>     ... 17 more
>
> On Tue, Jul 14, 2009 at 8:28 PM, Vaibhav Puranik wrote:
>
>> Muhammad,
>>
>> It looks like your HBase master didn't start properly. You should check your master log.
>>
>> The master log will be in the logs directory. It will have a more specific exception that can help you find the real problem. If you can't solve it, paste the exception from the log here so that we can help you.
>>
>> Regards,
>> Vaibhav
>>
>> On Tue, Jul 14, 2009 at 6:47 AM, Muhammad Mudassar wrote:
>>
>>> Hi,
>>>
>>> I am running HBase on a single node and my hbase-site settings are as follows:
>>>
>>> <configuration>
>>>   <property>
>>>     <name>hbase.rootdir</name>
>>>     <value>hdfs://127.0.0.1:9000/hbase</value>
>>>     <description>The directory shared by region servers.
>>>     Should be fully-qualified to include the filesystem to use.
>>>     E.g: hdfs://NAMENODE_SERVER:PORT/HBASE_ROOTDIR
>>>     </description>
>>>   </property>
>>>   <property>
>>>     <name>hbase.master</name>
>>>     <value>local</value>
>>>     <description>The host and port that the HBase master runs at.
>>>     A value of 'local' runs the master and a regionserver in
>>>     a single process.
>>>     </description>
>>>   </property>
>>> </configuration>
>>>
>>> After this, when I create a table in the hbase shell it just keeps trying to connect to the server:
>>>
>>> 09/07/14 19:41:03 INFO ipc.HBaseClass: Retrying connect to server: localhost/127.0.0.1:60000. Already tried 0 time(s).
>>> 09/07/14 19:41:04 INFO ipc.HBaseClass: Retrying connect to server: localhost/127.0.0.1:60000. Already tried 1 time(s).
>>> 09/07/14 19:41:05 INFO ipc.HBaseClass: Retrying connect to server: localhost/127.0.0.1:60000. Already tried 2 time(s).
>>> NativeException: org.apache.hadoop.hbase.MasterNotRunningException: localhost:60000
>>>     from org/apache/hadoop/hbase/client/HConnectionManager.java:239:in `getMaster'
>>>     from org/apache/hadoop/hbase/client/HBaseAdmin.java:70:in `<init>'
>>>     from sun/reflect/NativeConstructorAccessorImpl.java:-2:in `newInstance0'
>>>     from sun/reflect/NativeConstructorAccessorImpl.java:39:in `newInstance'
>>>     from sun/reflect/DelegatingConstructorAccessorImpl.java:27:in `newInstance'
>>>
>>> I need help solving this!
>>>
>>> waiting
>>>
>>> Regards,
>>>
>>> Muhammad Mudassar
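Both stack traces in this thread bottom out in the same `java.net.ConnectException: Connection refused`, which means nothing was listening on the port being dialed: the master aborts because it cannot reach an HDFS NameNode at 127.0.0.1:9000, and the shell then fails with MasterNotRunningException because no master ever came up on port 60000. That kind of "is anything listening there?" check can be sketched in a few lines of Python; the hosts and ports below are just the ones from this thread, and the helper name is my own, so adjust for your setup:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if something is accepting TCP connections on host:port."""
    try:
        # create_connection performs the full TCP handshake, so a "connection
        # refused" (no listener) or a timeout both land in the except branch.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # ConnectionRefusedError, TimeoutError, unreachable host, ...
        return False

if __name__ == "__main__":
    # Ports taken from the thread's config and logs.
    for name, port in [("NameNode", 9000), ("HBase master", 60000)]:
        state = "listening" if port_open("127.0.0.1", port) else "NOT listening"
        print(f"{name} on 127.0.0.1:{port}: {state}")
```

If the NameNode port turns out to be closed, fix HDFS first (or point `hbase.rootdir` at a `file:///` URI as suggested earlier in the thread); the master's retry/abort loop and the shell's MasterNotRunningException are both downstream symptoms of that one refused connection.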