From: Dalia Sobhy <dalia.mohsobhy@hotmail.com>
To: user@hbase.apache.org
Subject: RE: Important "Undefined Error"
Date: Mon, 14 May 2012 12:20:18 +0200
In-Reply-To: <073401cd3069$515c3870$f414a950$@gmail.com>

Hi,

I tried what you told me, but nothing worked :(((

First, when I run this command:

dalia@namenode:~$ host -v -t A `hostname`

the output is:

Trying "namenode"
Host namenode not found: 3(NXDOMAIN)
Received 101 bytes from 10.0.2.1#53 in 13 ms

My core-site.xml:

<property>
  <name>fs.default.name</name>
  <value>hdfs://namenode:54310/</value>
</property>

My hdfs-site.xml:

<property>
  <name>dfs.name.dir</name>
  <value>/data/1/dfs/nn,/nfsmount/dfs/nn</value>
</property>
<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value>
</property>
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
<property>
  <name>dfs.permissions.superusergroup</name>
  <value>hadoop</value>
</property>

My mapred-site.xml:

<property>
  <name>mapred.local.dir</name>
  <value>/data/1/mapred/local,/data/2/mapred/local,/data/3/mapred/local</value>
</property>

My hbase-site.xml:

<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://namenode:9000/hbase</value>
</property>
<property>
  <name>hbase.zookeeper.quorun</name>
  <value>namenode</value>
</property>
<property>
  <name>hbase.regionserver.port</name>
  <value>60020</value>
  <description>The host and port that the HBase master runs at.</description>
</property>
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
  <description>Property from ZooKeeper's config zoo.cfg.
  The port at which the clients will connect.</description>
</property>

Please help, I am really disappointed. I have been at this for two weeks!!!!
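The NXDOMAIN above means the machine's own hostname does not resolve. On a small cluster this is usually handled by listing every node in /etc/hosts on every machine; here is a minimal sketch, assuming 10.0.2.3 is the namenode (that address appears in the logs quoted below) and using placeholder addresses and names for the slaves:

  # /etc/hosts -- identical entries on the master and on every slave
  10.0.2.3    namenode
  10.0.2.4    datanode1    # placeholder address
  10.0.2.5    datanode2    # placeholder address
  10.0.2.6    datanode3    # placeholder address

  # `host` queries DNS only, so verify /etc/hosts-based resolution with getent,
  # which goes through the same system resolver that Hadoop and HBase use:
  getent hosts `hostname`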
> From: dwivedishashwat@gmail.com
> To: user@hbase.apache.org
> Subject: RE: Important "Undefined Error"
> Date: Sat, 12 May 2012 23:31:49 +0530
>
> The problem is that your HBase is not able to connect to Hadoop. Can you put your
> hbase-site.xml content here?
> Have you specified localhost somewhere? If so, remove localhost from everywhere
> and put your HDFS namenode address instead. Suppose your namenode is running on
> master:9000; then put your HBase file system setting as hdfs://master:9000/hbase.
> Here I am sending you the configuration which I am using in HBase, and it is working.
>
> My hbase-site.xml content is:
>
> <property>
>   <name>hbase.rootdir</name>
>   <value>hdfs://master:9000/hbase</value>
> </property>
> <property>
>   <name>hbase.master</name>
>   <value>master:60000</value>
>   <description>The host and port that the HBase master runs at.</description>
> </property>
> <property>
>   <name>hbase.regionserver.port</name>
>   <value>60020</value>
>   <description>The host and port that the HBase master runs at.</description>
> </property>
> <property>
>   <name>hbase.cluster.distributed</name>
>   <value>true</value>
> </property>
> <property>
>   <name>hbase.tmp.dir</name>
>   <value>/home/shashwat/Hadoop/hbase-0.90.4/temp</value>
> </property>
> <property>
>   <name>hbase.zookeeper.quorum</name>
>   <value>master</value>
> </property>
> <property>
>   <name>dfs.replication</name>
>   <value>1</value>
> </property>
> <property>
>   <name>hbase.zookeeper.property.clientPort</name>
>   <value>2181</value>
>   <description>Property from ZooKeeper's config zoo.cfg.
>   The port at which the clients will connect.</description>
> </property>
> <property>
>   <name>hbase.zookeeper.property.dataDir</name>
>   <value>/home/shashwat/zookeeper</value>
>   <description>Property from ZooKeeper's config zoo.cfg.
>   The directory where the snapshot is stored.</description>
> </property>
>
> Check this out. Also stop HBase; if it does not stop, kill all of its
> processes. After putting your hdfs-site.xml, mapred-site.xml and core-site.xml
> into the HBase conf directory, try to restart, and also delete the folders
> created by HBase, like the temp directory, then try to start
> (a consolidated sketch of these steps appears at the end of this thread).
>
> Regards
> ∞
> Shashwat Shriparv
>
> -----Original Message-----
> From: Dalia Sobhy [mailto:dalia.mohsobhy@hotmail.com]
> Sent: 12 May 2012 22:48
> To: user@hbase.apache.org
> Subject: RE: Important "Undefined Error"
>
> Hi Shashwat,
>
> I want to tell you about my configuration:
> I am using 4 nodes.
> One "Master": Namenode, SecondaryNamenode, Job Tracker, ZooKeeper, HMaster.
> Three "Slaves": datanodes, tasktrackers, regionservers.
> On both the master and the slaves, all the Hadoop daemons are working well,
> but the HBase master service is not working.
>
> As for the region server, here is the error:
>
> 12/05/12 14:42:13 INFO util.ServerCommandLine: vmName=Java HotSpot(TM) 64-Bit Server VM,
> vmVendor=Sun Microsystems Inc., vmVersion=20.1-b02
> 12/05/12 14:42:13 INFO util.ServerCommandLine: vmInputArguments=[-Xmx1000m, -ea,
> -XX:+UseConcMarkSweepGC, -XX:+CMSIncrementalMode,
> -Dhbase.log.dir=/usr/lib/hbase/bin/../logs, -Dhbase.log.file=hbase.log,
> -Dhbase.home.dir=/usr/lib/hbase/bin/.., -Dhbase.id.str=,
> -Dhbase.root.logger=INFO,console,
> -Djava.library.path=/usr/lib/hadoop-0.20/lib/native/Linux-amd64-64:/usr/lib/hbase/bin/../lib/native/Linux-amd64-64]
> 12/05/12 14:42:13 INFO ipc.HBaseRpcMetrics: Initializing RPC Metrics with hostName=HRegionServer, port=60020
> 12/05/12 14:42:14 FATAL zookeeper.ZKConfig: The server in zoo.cfg cannot be set to localhost
> in a fully-distributed setup because it won't be reachable. See "Getting Started" for more information.
> 12/05/12 14:42:14 WARN zookeeper.ZKConfig: Cannot read zoo.cfg, loading from XML files
> java.io.IOException:
> The server in zoo.cfg cannot be set to localhost in a fully-distributed
> setup because it won't be reachable. See "Getting Started" for more
> information.
at > org.apache.hadoop.hbase.zookeeper.ZKConfig.parseZooCfg(ZKConfig.java:172) > at org.apache.hadoop.hbase.zookeeper.ZKConfig.makeZKProps(ZKConfig.java:68) > at > org.apache.hadoop.hbase.zookeeper.ZKConfig.getZKQuorumServersString(ZKConfig > .java:249) at > org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.(ZooKeeperWatcher.j > ava:117) at > org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegi > onServer.java:489) at > org.apache.hadoop.hbase.regionserver.HRegionServer.initialize(HRegionServer. > java:465) at > org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:56 > 4) at java.lang.Thread.run(Thread.java:662)12/05/12 14:42:14 INFO > zookeeper.ZooKeeper: Client environment:zookeeper.version=3.3.5-cdh3u4--1, > built on 05/07/2012 21:12 GMT12/05/12 14:42:14 INFO zookeeper.ZooKeeper: > Client environment:host.name=data > node212/05/12 14:42:14 INFO zookeeper.ZooKeeper: Client > environment:java.version=1.6.0_2612/05/12 14:42:14 INFO zookeeper.ZooKeeper: > Client environment:java.vendor=Sun Microsystems Inc.12/05/12 14:42:14 INFO > zookeeper.ZooKeeper: Client environment:java.home=/usr/lib/jvm/java-6-sun-1. > 6.0.26/jre12/05/12 14:42:14 INFO zookeeper.ZooKeeper: Client > environment:java.class.path=/usr/lib/hbase/bin/../conf:/usr/lib/jvm/java-6-s > un/lib/tools.jar:/usr/lib/hbase/bin/..:/usr/lib/hbase/bin/../hbase-0.90.6-cd > h3u4.jar:/usr/lib/hbase/bin/../hbase-0.90.6-cdh3u4-tests.jar:/usr/lib/hbase/ > bin/../lib/activation-1.1.jar:/usr/lib/hbase/bin/../lib/asm-3.1.jar:/usr/lib > /hbase/bin/../lib/avro-1.5.4.jar:/usr/lib/hbase/bin/../lib/avro-ipc-1.5.4.ja > r:/usr/lib/hbase/bin/../lib/commons-cli-1.2.jar:/usr/lib/hbase/bin/../lib/co > mmons-codec-1.4.jar:/usr/lib/hbase/bin/../lib/commons-el-1.0.jar:/usr/lib/hb > ase/bin/../lib/commons-httpclient-3.1.jar:/usr/lib/hbase/bin/../lib/commons- > lang-2.5.jar:/usr/lib/hbase/bin/../lib/commo > > ns-logging-1.1.1.jar:/usr/lib/hbase/bin/../lib/commons-net-1.4.1.jar:/usr/li > b/hbase/bin/../lib/core-3.1.1.jar:/usr/lib/hbase/bin/../lib/guava-r06.jar:/u > sr/lib/hbase/bin/../lib/guava-r09-jarjar.jar:/usr/lib/hbase/bin/../lib/hadoo > p-core.jar:/usr/lib/hbase/bin/../lib/jackson-core-asl-1.5.2.jar:/usr/lib/hba > se/bin/../lib/jackson-jaxrs-1.5.5.jar:/usr/lib/hbase/bin/../lib/jackson-mapp > er-asl-1.5.2.jar:/usr/lib/hbase/bin/../lib/jackson-xc-1.5.5.jar:/usr/lib/hba > se/bin/../lib/jamon-runtime-2.3.1.jar:/usr/lib/hbase/bin/../lib/jasper-compi > ler-5.5.23.jar:/usr/lib/hbase/bin/../lib/jasper-runtime-5.5.23.jar:/usr/lib/ > hbase/bin/../lib/jaxb-api-2.1.jar:/usr/lib/hbase/bin/../lib/jaxb-impl-2.1.12 > .jar:/usr/lib/hbase/bin/../lib/jersey-core-1.4.jar:/usr/lib/hbase/bin/../lib > /jersey-json-1.4.jar:/usr/lib/hbase/bin/../lib/jersey-server-1.4.jar:/usr/li > b/hbase/bin/../lib/jettison-1.1.jar:/usr/lib/hbase/bin/../lib/jetty-6.1.26.j > ar:/usr/lib/hbase/bin/../lib/jetty-util-6.1.26.jar:/usr/lib/hbase/bin/../lib > /jruby-co > > mplete-1.6.0.jar:/usr/lib/hbase/bin/../lib/jsp-2.1-6.1.14.jar:/usr/lib/hbase > /bin/../lib/jsp-api-2.1-6.1.14.jar:/usr/lib/hbase/bin/../lib/jsp-api-2.1.jar > :/usr/lib/hbase/bin/../lib/jsr311-api-1.1.1.jar:/usr/lib/hbase/bin/../lib/lo > g4j-1.2.16.jar:/usr/lib/hbase/bin/../lib/netty-3.2.4.Final.jar:/usr/lib/hbas > e/bin/../lib/protobuf-java-2.3.0.jar:/usr/lib/hbase/bin/../lib/servlet-api-2 > .5-6.1.14.jar:/usr/lib/hbase/bin/../lib/servlet-api-2.5.jar:/usr/lib/hbase/b > in/../lib/slf4j-api-1.5.8.jar:/usr/lib/hbase/bin/../lib/slf4j-log4j12-1.5.8. 
> jar:/usr/lib/hbase/bin/../lib/snappy-java-1.0.3.2.jar:/usr/lib/hbase/bin/../ > lib/stax-api-1.0.1.jar:/usr/lib/hbase/bin/../lib/thrift-0.2.0.jar:/usr/lib/h > base/bin/../lib/velocity-1.5.jar:/usr/lib/hbase/bin/../lib/xmlenc-0.52.jar:/ > usr/lib/hbase/bin/../lib/zookeeper.jar:/etc/zookeeper:/etc/hadoop-0.20/conf: > /usr/lib/hadoop-0.20/hadoop-0.20.2-cdh3u4-ant.jar:/usr/lib/hadoop-0.20/hadoo > p-0.20.2-cdh3u4-tools.jar:/usr/lib/hadoop-0.20/hadoop-tools.jar:/usr/lib/had > oop-0.20/ > > hadoop-examples-0.20.2-cdh3u4.jar:/usr/lib/hadoop-0.20/hadoop-ant-0.20.2-cdh > 3u4.jar:/usr/lib/hadoop-0.20/hadoop-0.20.2-cdh3u4-examples.jar:/usr/lib/hado > op-0.20/hadoop-ant.jar:/usr/lib/hadoop-0.20/hadoop-core-0.20.2-cdh3u4.jar:/u > sr/lib/hadoop-0.20/hadoop-core.jar:/usr/lib/hadoop-0.20/hadoop-tools-0.20.2- > cdh3u4.jar:/usr/lib/hadoop-0.20/hadoop-examples.jar:/usr/lib/hadoop-0.20/had > oop-0.20.2-cdh3u4-core.jar:/usr/lib/hadoop-0.20/hadoop-test-0.20.2-cdh3u4.ja > r:/usr/lib/hadoop-0.20/hadoop-0.20.2-cdh3u4-test.jar:/usr/lib/hadoop-0.20/ha > doop-test.jar:/usr/lib/hadoop-0.20/lib/jetty-6.1.26.cloudera.1.jar:/usr/lib/ > hadoop-0.20/lib/jets3t-0.6.1.jar:/usr/lib/hadoop-0.20/lib/junit-4.5.jar:/usr > /lib/hadoop-0.20/lib/commons-daemon-1.0.1.jar:/usr/lib/hadoop-0.20/lib/slf4j > -api-1.4.3.jar:/usr/lib/hadoop-0.20/lib/commons-codec-1.4.jar:/usr/lib/hadoo > p-0.20/lib/log4j-1.2.15.jar:/usr/lib/hadoop-0.20/lib/jasper-compiler-5.5.12. > jar:/usr/lib/hadoop-0.20/lib/guava-r09-jarjar.jar:/usr/lib/hadoop-0.20/lib/j > ackson-ma > > pper-asl-1.5.2.jar:/usr/lib/hadoop-0.20/lib/commons-logging-1.0.4.jar:/usr/l > ib/hadoop-0.20/lib/ant-contrib-1.0b3.jar:/usr/lib/hadoop-0.20/lib/commons-la > ng-2.4.jar:/usr/lib/hadoop-0.20/lib/kfs-0.2.2.jar:/usr/lib/hadoop-0.20/lib/j > etty-util-6.1.26.cloudera.1.jar:/usr/lib/hadoop-0.20/lib/commons-httpclient- > 3.1.jar:/usr/lib/hadoop-0.20/lib/jsch-0.1.42.jar:/usr/lib/hadoop-0.20/lib/sl > f4j-log4j12-1.4.3.jar:/usr/lib/hadoop-0.20/lib/servlet-api-2.5-20081211.jar: > /usr/lib/hadoop-0.20/lib/jasper-runtime-5.5.12.jar:/usr/lib/hadoop-0.20/lib/ > servlet-api-2.5-6.1.14.jar:/usr/lib/hadoop-0.20/lib/jetty-servlet-tester-6.1 > .26.cloudera.1.jar:/usr/lib/hadoop-0.20/lib/aspectjtools-1.6.5.jar:/usr/lib/ > hadoop-0.20/lib/xmlenc-0.52.jar:/usr/lib/hadoop-0.20/lib/hadoop-fairschedule > r-0.20.2-cdh3u4.jar:/usr/lib/hadoop-0.20/lib/jackson-core-asl-1.5.2.jar:/usr > /lib/hadoop-0.20/lib/mockito-all-1.8.2.jar:/usr/lib/hadoop-0.20/lib/commons- > el-1.0.jar:/usr/lib/hadoop-0.20/lib/commons-logging-api-1.0.4.jar:/usr/lib/h > adoop-0.2 > > 0/lib/commons-net-3.1.jar:/usr/lib/hadoop-0.20/lib/commons-cli-1.2.jar:/usr/ > lib/hadoop-0.20/lib/aspectjrt-1.6.5.jar:/usr/lib/hadoop-0.20/lib/core-3.1.1. > jar:/usr/lib/hadoop-0.20/lib/hsqldb-1.8.0.10.jar:/usr/lib/hadoop-0.20/lib/or > o-2.0.8.jar:/usr/lib/zookeeper/zookeeper.jar:/usr/lib/zookeeper/zookeeper-3. 
> 3.5-cdh3u4.jar:/usr/lib/zookeeper/lib/log4j-1.2.15.jar:/usr/lib/zookeeper/li > b/jline-0.9.94.jar::/usr/lib/hadoop-0.20/conf:/usr/lib/hadoop-0.20/hadoop-co > re-0.20.2-cdh3u4.jar:/usr/lib/hadoop-0.20/lib/ant-contrib-1.0b3.jar:/usr/lib > /hadoop-0.20/lib/aspectjrt-1.6.5.jar:/usr/lib/hadoop-0.20/lib/aspectjtools-1 > .6.5.jar:/usr/lib/hadoop-0.20/lib/commons-cli-1.2.jar:/usr/lib/hadoop-0.20/l > ib/commons-codec-1.4.jar:/usr/lib/hadoop-0.20/lib/commons-daemon-1.0.1.jar:/ > usr/lib/hadoop-0.20/lib/commons-el-1.0.jar:/usr/lib/hadoop-0.20/lib/commons- > httpclient-3.1.jar:/usr/lib/hadoop-0.20/lib/commons-lang-2.4.jar:/usr/lib/ha > doop-0.20/lib/commons-logging-1.0.4.jar:/usr/lib/hadoop-0.20/lib/commons-log > ging-api- > > 1.0.4.jar:/usr/lib/hadoop-0.20/lib/commons-net-3.1.jar:/usr/lib/hadoop-0.20/ > lib/core-3.1.1.jar:/usr/lib/hadoop-0.20/lib/guava-r09-jarjar.jar:/usr/lib/ha > doop-0.20/lib/hadoop-fairscheduler-0.20.2-cdh3u4.jar:/usr/lib/hadoop-0.20/li > b/hsqldb-1.8.0.10.jar:/usr/lib/hadoop-0.20/lib/jackson-core-asl-1.5.2.jar:/u > sr/lib/hadoop-0.20/lib/jackson-mapper-asl-1.5.2.jar:/usr/lib/hadoop-0.20/lib > /jasper-compiler-5.5.12.jar:/usr/lib/hadoop-0.20/lib/jasper-runtime-5.5.12.j > ar:/usr/lib/hadoop-0.20/lib/jets3t-0.6.1.jar:/usr/lib/hadoop-0.20/lib/jetty- > 6.1.26.cloudera.1.jar:/usr/lib/hadoop-0.20/lib/jetty-servlet-tester-6.1.26.c > loudera.1.jar:/usr/lib/hadoop-0.20/lib/jetty-util-6.1.26.cloudera.1.jar:/usr > /lib/hadoop-0.20/lib/jsch-0.1.42.jar:/usr/lib/hadoop-0.20/lib/junit-4.5.jar: > /usr/lib/hadoop-0.20/lib/kfs-0.2.2.jar:/usr/lib/hadoop-0.20/lib/log4j-1.2.15 > .jar:/usr/lib/hadoop-0.20/lib/mockito-all-1.8.2.jar:/usr/lib/hadoop-0.20/lib > /oro-2.0.8.jar:/usr/lib/hadoop-0.20/lib/servlet-api-2.5-20081211.jar:/usr/li > b/hadoop- > 0.20/lib/servlet-api-2.5-6.1.14.jar:/usr/lib/hadoop-0.20/lib/slf4j-api-1.4. 
> 3.jar:/usr/lib/hadoop-0.20/lib/slf4j-log4j12-1.4.3.jar:/usr/lib/hadoop-0.20/ > lib/xmlenc-0.52.jar12/05/12 14:42:14 INFO zookeeper.ZooKeeper: Client > environment:java.library.path=/usr/lib/hadoop-0.20/lib/native/Linux-amd64-64 > :/usr/lib/hbase/bin/../lib/native/Linux-amd64-6412/05/12 14:42:14 INFO > zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp12/05/12 14:42:14 > INFO zookeeper.ZooKeeper: Client environment:java.compiler=12/05/12 > 14:42:14 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux12/05/12 > 14:42:14 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd6412/05/12 > 14:42:14 INFO zookeeper.ZooKeeper: Client > environment:os.version=2.6.35-22-server12/05/12 14:42:14 INFO > zookeeper.ZooKeeper: Client environment:user.name=dalia12/05/12 14:42:14 > INFO zookeeper.ZooKeeper: Client environment:user.home=/home/dalia12/05/12 > 14:42:14 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/dalia12 > /05/12 14:42:14 INFO zookeeper.ZooKeeper: Initiating client connection, > connectString=localhost:2181 sessionTimeout=180000 > watcher=regionserver:6002012/05/12 14:42:14 INFO zookeeper.ClientCnxn: > Opening socket connection to server localhost/0:0:0:0:0:0:0:1:218112/05/12 > 14:42:14 WARN zookeeper.ClientCnxn: Session 0x0 for server null, unexpected > error, closing socket connection and attempting > reconnectjava.net.ConnectException: Connection refused at > sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at > sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567) at > org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1143)12/05/12 > 14:42:14 INFO zookeeper.ClientCnxn: Opening socket connection to server > localhost/127.0.0.1:218112/05/12 14:42:14 WARN zookeeper.ClientCnxn: Session > 0x0 for server null, unexpected error, closing socket connection and > attempting reconnectjava.net.ConnectException: Connection refused at > sun.nio.ch.SocketChannelImpl.checkConnect(Native > Method) at > sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567) at > org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1143)12/05/12 > 14:42:15 INFO ipc.Client: Retrying connect to server: > namenode/10.0.2.3:8020. Already tried 0 time(s).12/05/12 14:42:16 INFO > zookeeper.ClientCnxn: Opening socket connection to server localhost/0:0:0:0: > 0:0:0:1:218112/05/12 14:42:16 WARN zookeeper.ClientCnxn: Session 0x0 for > server null, unexpected error, closing socket connection and attempting > reconnectjava.net.ConnectException: Connection refused at > sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at > sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567) at > org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1143)12/05/12 > 14:42:16 INFO ipc.Client: Retrying connect to server: > namenode/10.0.2.3:8020. Already tried 1 time(s).12/05/12 14:42:16 INFO > zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0. > 1:218112/05/12 14: > 42:16 WARN zookeeper.ClientCnxn: Session 0x0 for server null, unexpected > error, closing socket connection and attempting > reconnectjava.net.ConnectException: Connection refused at > sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at > sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567) at > org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1143)12/05/12 > 14:42:17 INFO ipc.Client: Retrying connect to server: > namenode/10.0.2.3:8020. 
Already tried 2 time(s).12/05/12 14:42:18 INFO > ipc.Client: Retrying connect to server: namenode/10.0.2.3:8020. Already > tried 3 time(s).12/05/12 14:42:18 INFO zookeeper.ClientCnxn: Opening socket > connection to server localhost/0:0:0:0:0:0:0:1:218112/05/12 14:42:18 WARN > zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing > socket connection and attempting reconnectjava.net.ConnectException: > Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native > Method) at sun.nio.ch.SocketChannelImpl.fin > ishConnect(SocketChannelImpl.java:567) at > org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1143)12/05/12 > 14:42:18 INFO zookeeper.ClientCnxn: Opening socket connection to server > localhost/127.0.0.1:218112/05/12 14:42:18 WARN zookeeper.ClientCnxn: Session > 0x0 for server null, unexpected error, closing socket connection and > attempting reconnectjava.net.ConnectException: Connection refused at > sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at > sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567) at > org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1143)12/05/12 > 14:42:19 INFO ipc.Client: Retrying connect to server: > namenode/10.0.2.3:8020. Already tried 4 time(s).12/05/12 14:42:19 INFO > zookeeper.ClientCnxn: Opening socket connection to server localhost/0:0:0:0: > 0:0:0:1:218112/05/12 14:42:19 WARN zookeeper.ClientCnxn: Session 0x0 for > server null, unexpected error, closing socket connection and attempting > reconnectjava.net.ConnectException > : Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native > Method) at > sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567) at > org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1143)12/05/12 > 14:42:20 INFO ipc.Client: Retrying connect to server: > namenode/10.0.2.3:8020. Already tried 5 time(s).12/05/12 14:42:20 INFO > zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0. > 1:218112/05/12 14:42:20 WARN zookeeper.ClientCnxn: Session 0x0 for server > null, unexpected error, closing socket connection and attempting > reconnectjava.net.ConnectException: Connection refused at > sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at > sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567) at > org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1143)12/05/12 > 14:42:21 INFO ipc.Client: Retrying connect to server: > namenode/10.0.2.3:8020. Already tried 6 time(s).12/05/12 14:42:22 INFO > zookeeper.ClientCnxn: Openin > g socket connection to server localhost/0:0:0:0:0:0:0:1:218112/05/12 14:42: > 22 WARN zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, > closing socket connection and attempting reconnectjava.net.ConnectException: > Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native > Method) at > sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567) at > org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1143)12/05/12 > 14:42:22 INFO ipc.Client: Retrying connect to server: > namenode/10.0.2.3:8020. Already tried 7 time(s).12/05/12 14:42:22 INFO > zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0. 
> 1:218112/05/12 14:42:22 WARN zookeeper.ClientCnxn: Session 0x0 for server > null, unexpected error, closing socket connection and attempting > reconnectjava.net.ConnectException: Connection refused at > sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at > sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567) at > org > .apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1143)12/05/12 > 14:42:23 INFO ipc.Client: Retrying connect to server: > namenode/10.0.2.3:8020. Already tried 8 time(s).12/05/12 14:42:24 INFO > ipc.Client: Retrying connect to server: namenode/10.0.2.3:8020. Already > tried 9 time(s).Exception in thread "main" java.net.ConnectException: Call > to namenode/10.0.2.3:8020 failed on connection exception: > java.net.ConnectException: Connection refused at > org.apache.hadoop.ipc.Client.wrapException(Client.java:1134) at > org.apache.hadoop.ipc.Client.call(Client.java:1110) at > org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226) at > $Proxy5.getProtocolVersion(Unknown Source) at > org.apache.hadoop.ipc.RPC.getProxy(RPC.java:398) at > org.apache.hadoop.ipc.RPC.getProxy(RPC.java:384) at > org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:129) at > org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:255) at > org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:217) at > org.apache.hadoop > .hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89) at > org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1563) at > org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:67) at > org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:1597) at > org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1579) at > org.apache.hadoop.fs.FileSystem.get(FileSystem.java:228) at > org.apache.hadoop.fs.FileSystem.get(FileSystem.java:111) at > org.apache.hadoop.hbase.regionserver.HRegionServer.startRegionServer(HRegion > Server.java:2785) at > org.apache.hadoop.hbase.regionserver.HRegionServer.startRegionServer(HRegion > Server.java:2768) at > org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionS > erverCommandLine.java:61) at > org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionSer > verCommandLine.java:75) at org.apache.hadoop.util.ToolRunner.run(ToolRunner. > java:65) at > org.apache.hadoop.hbase.util.ServerCommandLine.doMain(Serve > rCommandLine.java:76) at > org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2 > 829)Caused by: java.net.ConnectException: Connection refused at > sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at > sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567) at > org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:2 > 06) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:429) at > org.apache.hadoop.net.NetUtils.connect(NetUtils.java:394) at > org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:425) > at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:532) > at org.apache.hadoop.ipc.Client$Connection.access$2300(Client.java:210) at > org.apache.hadoop.ipc.Client.getConnection(Client.java:1247) at > org.apache.hadoop.ipc.Client.call(Client.java:1078) ... 
21 more12/05/12 > 14:42:24 INFO zookeeper.ClientCnxn: Opening socket connection to server > localhost/0:0:0:0:0:0:0:1:218112/05/12 14:42:24 WARN zookeeper > .ClientCnxn: Session 0x0 for server null, unexpected error, closing socket > connection and attempting reconnectjava.net.ConnectException: Connection > refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at > sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567) at > org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1143)12/05/12 > 14:42:25 INFO zookeeper.ClientCnxn: Opening socket connection to server > localhost/127.0.0.1:218112/05/12 14:42:25 INFO zookeeper.ZooKeeper: Session: > 0x0 closed12/05/12 14:42:25 INFO zookeeper.ClientCnxn: EventThread shut > down12/05/12 14:42:25 INFO ipc.HBaseServer: Stopping server on 6002012/05/12 > 14:42:25 FATAL regionserver.HRegionServer: ABORTING region server > serverName=datanode2,60020,1336826533870, load=(requests=0, regions=0, > usedHeap=0, maxHeap=0): Initialization of RS failed. Hence aborting RS.org. > apache.hadoop.hbase.ZooKeeperConnectionException: HBase is able to connect > to ZooKeeper but the connection closes im > mediately. This could be a sign that the server has too many connections > (30 is the default). Consider inspecting your ZK server logs for that error > and then make sure you are reusing HBaseConfiguration as often as you can. > See HTable's javadoc for more information. at > org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.(ZooKeeperWatcher.j > ava:160) at > org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegi > onServer.java:489) at > org.apache.hadoop.hbase.regionserver.HRegionServer.initialize(HRegionServer. > java:465) at > org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:56 > 4) at java.lang.Thread.run(Thread.java:662)Caused by: > org.apache.zookeeper.KeeperException$ConnectionLossException: > KeeperErrorCode = ConnectionLoss for /hbase at > org.apache.zookeeper.KeeperException.create(KeeperException.java:90) at > org.apache.zookeeper.KeeperException.create(KeeperException.java:42) at > org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:815) at org.apac > he.zookeeper.ZooKeeper.exists(ZooKeeper.java:843) at > org.apache.hadoop.hbase.zookeeper.ZKUtil.createAndFailSilent(ZKUtil.java:930 > ) at > org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.(ZooKeeperWatcher.j > ava:138) ... 4 more12/05/12 14:42:25 INFO regionserver.HRegionServer: > STOPPED: Initialization of RS failed. Hence aborting RS.Exception in thread > "regionserver60020" java.lang.NullPointerException at > org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:63 > 3) at java.lang.Thread.run(Thread.java:662) > So any help??? > > Date: Sat, 12 May 2012 20:22:03 +0530 > > Subject: Re: Important "Undefined Error" > > From: dwivedishashwat@gmail.com > > To: user@hbase.apache.org > > > > you can turn off hadoop safe mode uisng *hadoop dfsadmin -safemode > > leave* > > > > On Sat, May 12, 2012 at 8:15 PM, shashwat shriparv < > > dwivedishashwat@gmail.com> wrote: > > > > > First thing copy core-site.xml, dfs xml from hadoop conf directory > > > to hbase conf dirctory. and turn of hadoop save mode and then try... > > > > > > > > > On Sat, May 12, 2012 at 6:27 PM, Harsh J wrote: > > > > > >> Dalia, > > >> > > >> Is your NameNode running fine? The issue is that HBase Master has > > >> been asked to talk to HDFS, but it can't connect to the HDFS > > >> NameNode. 
Does "hadoop dfs -touchz foobar" pass or fail with similar > retry issues? > > >> > > >> What's your fs.default.name's value in Hadoop's core-site.xml? And > > >> whats the output of that fixed host command I'd posted before? > > >> > > >> On Sat, May 12, 2012 at 6:06 PM, Dalia Sobhy > > >> > > >> wrote: > > >> > > > >> > > > >> > Dear Harsh > > >> > When I run $hbase master start > > >> > I found the following errors:12/05/12 08:32:42 INFO > > >> ipc.HBaseRpcMetrics: Initializing RPC Metrics with > > >> hostName=HMaster, > > >> port=6000012/05/12 08:32:42 INFO security.UserGroupInformation: > > >> JAAS Configuration already set up for Hadoop, not > > >> re-installing.12/05/12 > > >> 08:32:42 INFO ipc.HBaseServer: IPC Server Responder: > > >> starting12/05/12 > > >> 08:32:42 INFO ipc.HBaseServer: IPC Server listener on 60000: > > >> starting12/05/12 08:32:42 INFO ipc.HBaseServer: IPC Server handler > > >> 0 on > > >> 60000: starting12/05/12 08:32:42 INFO ipc.HBaseServer: IPC Server > > >> handler 1 on 60000: starting12/05/12 08:32:42 INFO ipc.HBaseServer: > > >> IPC Server handler 2 on 60000: starting12/05/12 08:32:42 INFO > > >> ipc.HBaseServer: IPC Server handler 3 on 60000: starting12/05/12 08:32: > 42 INFO ipc.HBaseServer: > > >> IPC Server handler 5 on 60000: starting12/05/12 08:32:42 INFO > > >> ipc.HBaseServer: IPC Server handler 4 on 60000: starting12/05/12 > > >> 08:32:42 INFO ipc.HBaseServer: IPC Server handler 7 on 60000: > > >> starting12/05/12 > > >> 08:32:42 INFO ipc.HBaseServer: IPC Serv > > >> > er handler 6 on 60000: starting12/05/12 08:32:42 INFO > ipc.HBaseServer: > > >> IPC Server handler 8 on 60000: starting12/05/12 08:32:42 INFO > > >> ipc.HBaseServer: IPC Server handler 9 on 60000: starting12/05/12 > > >> 08:32:42 INFO zookeeper.ZooKeeper: Client > > >> environment:zookeeper.version=3.3.5-cdh3u4--1, built on 05/07/2012 > > >> 21:12 > > >> GMT12/05/12 08:32:42 INFO zookeeper.ZooKeeper: Client environment: > > >> host.name=namenode12/05/12 08:32:42 INFO zookeeper.ZooKeeper: > > >> Client > > >> environment:java.version=1.6.0_3012/05/12 08:32:42 INFO > > >> zookeeper.ZooKeeper: Client environment:java.vendor=Sun > > >> Microsystems > > >> Inc.12/05/12 08:32:42 INFO zookeeper.ZooKeeper: Client > > >> environment:java.home=/usr/lib/jvm/java-6-sun-1.6.0.30/jre12/05/12 > > >> 08:32:42 INFO zookeeper.ZooKeeper: Client > > >> environment:java.class.path=/usr/lib/hbase/bin/../conf:/usr/lib/jvm > > >> /java-6-sun/lib/tools.jar:/usr/lib/hbase/bin/..:/usr/lib/hbase/bin/ > > >> ../hbase-0.90.4-cdh3u3.jar:/usr/lib/hbase/bin/../hbase-0.90.4-cdh3u > > >> 3-tests.jar:/usr/lib/hbase/bin/../lib/activation-1.1.jar:/u > > >> > > > >> > > >> sr/lib/hbase/bin/../lib/asm-3.1.jar:/usr/lib/hbase/bin/../lib/avro- > > >> 1.5.4.jar:/usr/lib/hbase/bin/../lib/avro-ipc-1.5.4.jar:/usr/lib/hba > > >> se/bin/../lib/commons-cli-1.2.jar:/usr/lib/hbase/bin/../lib/commons > > >> -codec-1.4.jar:/usr/lib/hbase/bin/../lib/commons-el-1.0.jar:/usr/li > > >> b/hbase/bin/../lib/commons-httpclient-3.1.jar:/usr/lib/hbase/bin/.. 
> > >> /lib/commons-lang-2.5.jar:/usr/lib/hbase/bin/../lib/commons-logging > > >> -1.1.1.jar:/usr/lib/hbase/bin/../lib/commons-net-1.4.1.jar:/usr/lib > > >> /hbase/bin/../lib/core-3.1.1.jar:/usr/lib/hbase/bin/../lib/guava-r0 > > >> 6.jar:/usr/lib/hbase/bin/../lib/guava-r09-jarjar.jar:/usr/lib/hbase > > >> /bin/../lib/hadoop-core.jar:/usr/lib/hbase/bin/../lib/jackson-core- > > >> asl-1.5.2.jar:/usr/lib/hbase/bin/../lib/jackson-jaxrs-1.5.5.jar:/us > > >> r/lib/hbase/bin/../lib/jackson-mapper-asl-1.5.2.jar:/usr/lib/hbase/ > > >> bin/../lib/jackson-xc-1.5.5.jar:/usr/lib/hbase/bin/../lib/jamon-run > > >> time-2.3.1.jar:/usr/lib/hbase/bin/../lib/jasper-compiler-5.5.23.jar > > >> :/usr/lib/hbase/bin/../lib/jasper-runtime-5.5.23.jar:/ > usr/l > > >> > > > >> > > >> ib/hbase/bin/../lib/jaxb-api-2.1.jar:/usr/lib/hbase/bin/../lib/jaxb > > >> -impl-2.1.12.jar:/usr/lib/hbase/bin/../lib/jersey-core-1.4.jar:/usr > > >> /lib/hbase/bin/../lib/jersey-json-1.4.jar:/usr/lib/hbase/bin/../lib > > >> /jersey-server-1.4.jar:/usr/lib/hbase/bin/../lib/jettison-1.1.jar:/ > > >> usr/lib/hbase/bin/../lib/jetty-6.1.26.jar:/usr/lib/hbase/bin/../lib > > >> /jetty-util-6.1.26.jar:/usr/lib/hbase/bin/../lib/jruby-complete-1.6 > > >> .0.jar:/usr/lib/hbase/bin/../lib/jsp-2.1-6.1.14.jar:/usr/lib/hbase/ > > >> bin/../lib/jsp-api-2.1-6.1.14.jar:/usr/lib/hbase/bin/../lib/jsp-api > > >> -2.1.jar:/usr/lib/hbase/bin/../lib/jsr311-api-1.1.1.jar:/usr/lib/hb > > >> ase/bin/../lib/log4j-1.2.16.jar:/usr/lib/hbase/bin/../lib/netty-3.2 > > >> .4.Final.jar:/usr/lib/hbase/bin/../lib/protobuf-java-2.3.0.jar:/usr > > >> /lib/hbase/bin/../lib/servlet-api-2.5-6.1.14.jar:/usr/lib/hbase/bin > > >> /../lib/servlet-api-2.5.jar:/usr/lib/hbase/bin/../lib/slf4j-api-1.5 > > >> .8.jar:/usr/lib/hbase/bin/../lib/slf4j-log4j12-1.5.8.jar:/usr/lib/h > > >> base/bin/../lib/snappy-java-1.0.3.2.jar:/usr/lib/hbase > /bin/ > > >> > > > >> > > >> ../lib/stax-api-1.0.1.jar:/usr/lib/hbase/bin/../lib/thrift-0.2.0.ja > > >> r:/usr/lib/hbase/bin/../lib/velocity-1.5.jar:/usr/lib/hbase/bin/../ > > >> lib/xmlenc-0.52.jar:/usr/lib/hbase/bin/../lib/zookeeper.jar:/etc/zo > > >> okeeper:/etc/hadoop-0.20/conf:/usr/lib/hadoop-0.20/hadoop-examples. > > >> jar:/usr/lib/hadoop-0.20/hadoop-0.20.2-cdh3u3-core.jar:/usr/lib/had > > >> oop-0.20/hadoop-0.20.2-cdh3u3-ant.jar:/usr/lib/hadoop-0.20/hadoop-c > > >> ore-0.20.2-cdh3u3.jar:/usr/lib/hadoop-0.20/hadoop-test.jar:/usr/lib > > >> /hadoop-0.20/hadoop-ant-0.20.2-cdh3u3.jar:/usr/lib/hadoop-0.20/hado > > >> op-tools.jar:/usr/lib/hadoop-0.20/hadoop-tools-0.20.2-cdh3u3.jar:/u > > >> sr/lib/hadoop-0.20/hadoop-test-0.20.2-cdh3u3.jar:/usr/lib/hadoop-0. > > >> 20/hadoop-core.jar:/usr/lib/hadoop-0.20/hadoop-0.20.2-cdh3u3-exampl > > >> es.jar:/usr/lib/hadoop-0.20/hadoop-0.20.2-cdh3u3-test.jar:/usr/lib/ > > >> hadoop-0.20/hadoop-ant.jar:/usr/lib/hadoop-0.20/hadoop-examples-0.2 > > >> 0.2-cdh3u3.jar:/usr/lib/hadoop-0.20/hadoop-0.20.2-cdh3u3-tools.jar: > > >> /usr/lib/hadoop-0.20/lib/jasper-runtime-5.5.12.jar:/us > r/lib > > >> > > > >> > > >> /hadoop-0.20/lib/hsqldb-1.8.0.10.jar:/usr/lib/hadoop-0.20/lib/jacks > > >> on-mapper-asl-1.5.2.jar:/usr/lib/hadoop-0.20/lib/jets3t-0.6.1.jar:/ > > >> usr/lib/hadoop-0.20/lib/jetty-servlet-tester-6.1.26.cloudera.1.jar: > > >> /usr/lib/hadoop-0.20/lib/jackson-core-asl-1.5.2.jar:/usr/lib/hadoop > > >> -0.20/lib/oro-2.0.8.jar:/usr/lib/hadoop-0.20/lib/ant-contrib-1.0b3. 
> > >> jar:/usr/lib/hadoop-0.20/lib/commons-daemon-1.0.1.jar:/usr/lib/hado > > >> op-0.20/lib/mockito-all-1.8.2.jar:/usr/lib/hadoop-0.20/lib/aspectjr > > >> t-1.6.5.jar:/usr/lib/hadoop-0.20/lib/commons-lang-2.4.jar:/usr/lib/ > > >> hadoop-0.20/lib/junit-4.5.jar:/usr/lib/hadoop-0.20/lib/commons-code > > >> c-1.4.jar:/usr/lib/hadoop-0.20/lib/servlet-api-2.5-6.1.14.jar:/usr/ > > >> lib/hadoop-0.20/lib/log4j-1.2.15.jar:/usr/lib/hadoop-0.20/lib/jsch- > > >> 0.1.42.jar:/usr/lib/hadoop-0.20/lib/core-3.1.1.jar:/usr/lib/hadoop- > > >> 0.20/lib/jetty-6.1.26.cloudera.1.jar:/usr/lib/hadoop-0.20/lib/commo > > >> ns-logging-1.0.4.jar:/usr/lib/hadoop-0.20/lib/jetty-util-6.1.26.clo > > >> udera.1.jar:/usr/lib/hadoop-0.20/lib/servlet-api-2.5-2 > 00812 > > >> > > > >> > > >> 11.jar:/usr/lib/hadoop-0.20/lib/jasper-compiler-5.5.12.jar:/usr/lib > > >> /hadoop-0.20/lib/kfs-0.2.2.jar:/usr/lib/hadoop-0.20/lib/commons-cli > > >> -1.2.jar:/usr/lib/hadoop-0.20/lib/commons-net-1.4.1.jar:/usr/lib/ha > > >> doop-0.20/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop-0.20/lib/s > > >> lf4j-api-1.4.3.jar:/usr/lib/hadoop-0.20/lib/xmlenc-0.52.jar:/usr/li > > >> b/hadoop-0.20/lib/commons-logging-api-1.0.4.jar:/usr/lib/hadoop-0.2 > > >> 0/lib/commons-el-1.0.jar:/usr/lib/hadoop-0.20/lib/slf4j-log4j12-1.4 > > >> .3.jar:/usr/lib/hadoop-0.20/lib/aspectjtools-1.6.5.jar:/usr/lib/had > > >> oop-0.20/lib/guava-r09-jarjar.jar:/usr/lib/hadoop-0.20/lib/hadoop-f > > >> airscheduler-0.20.2-cdh3u3.jar:/usr/lib/zookeeper/zookeeper.jar:/us > > >> r/lib/zookeeper/zookeeper-3.3.5-cdh3u4.jar:/usr/lib/zookeeper/lib/l > > >> og4j-1.2.15.jar:/usr/lib/zookeeper/lib/jline-0.9.94.jar::/usr/lib/h > > >> adoop-0.20/conf:/usr/lib/hadoop-0.20/hadoop-core-0.20.2-cdh3u3.jar: > > >> /usr/lib/hadoop-0.20/lib/ant-contrib-1.0b3.jar:/usr/lib/hadoop-0.20 > > >> /lib/aspectjrt-1.6.5.jar:/usr/lib/hadoop-0.20/lib/aspe > ctjto > > >> > > > >> > > >> ols-1.6.5.jar:/usr/lib/hadoop-0.20/lib/commons-cli-1.2.jar:/usr/lib > > >> /hadoop-0.20/lib/commons-codec-1.4.jar:/usr/lib/hadoop-0.20/lib/com > > >> mons-daemon-1.0.1.jar:/usr/lib/hadoop-0.20/lib/commons-el-1.0.jar:/ > > >> usr/lib/hadoop-0.20/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop- > > >> 0.20/lib/commons-lang-2.4.jar:/usr/lib/hadoop-0.20/lib/commons-logg > > >> ing-1.0.4.jar:/usr/lib/hadoop-0.20/lib/commons-logging-api-1.0.4.ja > > >> r:/usr/lib/hadoop-0.20/lib/commons-net-1.4.1.jar:/usr/lib/hadoop-0. 
> > >> 20/lib/core-3.1.1.jar:/usr/lib/hadoop-0.20/lib/guava-r09-jarjar.jar > > >> :/usr/lib/hadoop-0.20/lib/hadoop-fairscheduler-0.20.2-cdh3u3.jar:/u > > >> sr/lib/hadoop-0.20/lib/hsqldb-1.8.0.10.jar:/usr/lib/hadoop-0.20/lib > > >> /jackson-core-asl-1.5.2.jar:/usr/lib/hadoop-0.20/lib/jackson-mapper > > >> -asl-1.5.2.jar:/usr/lib/hadoop-0.20/lib/jasper-compiler-5.5.12.jar: > > >> /usr/lib/hadoop-0.20/lib/jasper-runtime-5.5.12.jar:/usr/lib/hadoop- > > >> 0.20/lib/jets3t-0.6.1.jar:/usr/lib/hadoop-0.20/lib/jetty-6.1.26.clo > > >> udera.1.jar:/usr/lib/hadoop-0.20/lib/jetty-servlet-tes > ter-6 > > >> > > > >> > > >> .1.26.cloudera.1.jar:/usr/lib/hadoop-0.20/lib/jetty-util-6.1.26.clo > > >> udera.1.jar:/usr/lib/hadoop-0.20/lib/jsch-0.1.42.jar:/usr/lib/hadoo > > >> p-0.20/lib/junit-4.5.jar:/usr/lib/hadoop-0.20/lib/kfs-0.2.2.jar:/us > > >> r/lib/hadoop-0.20/lib/log4j-1.2.15.jar:/usr/lib/hadoop-0.20/lib/moc > > >> kito-all-1.8.2.jar:/usr/lib/hadoop-0.20/lib/oro-2.0.8.jar:/usr/lib/ > > >> hadoop-0.20/lib/servlet-api-2.5-20081211.jar:/usr/lib/hadoop-0.20/l > > >> ib/servlet-api-2.5-6.1.14.jar:/usr/lib/hadoop-0.20/lib/slf4j-api-1. > > >> 4.3.jar:/usr/lib/hadoop-0.20/lib/slf4j-log4j12-1.4.3.jar:/usr/lib/h > > >> adoop-0.20/lib/xmlenc-0.52.jar12/05/12 > > >> 08:32:42 INFO zookeeper.ZooKeeper: Client > > >> environment:java.library.path=/usr/lib/hadoop-0.20/lib/native/Linux > > >> -amd64-64:/usr/lib/hbase/bin/../lib/native/Linux-amd64-6412/05/12 > > >> 08:32:42 INFO zookeeper.ZooKeeper: Client > > >> environment:java.io.tmpdir=/tmp12/05/12 08:32:42 INFO > zookeeper.ZooKeeper: > > >> Client environment:java.compiler=12/05/12 08:32:42 INFO > > >> zookeeper.ZooKeeper: Client environment:os.name=Linux12/05/12 > > >> 08:32:42 > > >> > INFO zookeeper.ZooKeeper: Client > > >> > environment:os.arch=amd6412/05/12 > > >> 08:32:42 INFO zookeeper.ZooKeeper: Client > > >> environment:os.version=2.6.35-22-server12/05/12 08:32:42 INFO > > >> zookeeper.ZooKeeper: Client environment:user.name=dalia12/05/12 > > >> 08:32:42 INFO zookeeper.ZooKeeper: Client > > >> environment:user.home=/home/dalia12/05/12 > > >> 08:32:42 INFO zookeeper.ZooKeeper: Client > > >> environment:user.dir=/home/dalia12/05/12 08:32:42 INFO > zookeeper.ZooKeeper: > > >> Initiating client connection, connectString=namenode:2181 > > >> sessionTimeout=180000 watcher=master:6000012/05/12 08:32:42 INFO > > >> zookeeper.ClientCnxn: Opening socket connection to server namenode/ > > >> 10.0.2.3:218112/05/12 08:32:42 INFO zookeeper.ClientCnxn: Socket > > >> connection established to namenode/10.0.2.3:2181, initiating > > >> session12/05/12 08:32:42 INFO zookeeper.ClientCnxn: Session > > >> establishment complete on server namenode/10.0.2.3:2181, sessionid > > >> = 0x13740bc4f70000c, negotiated timeout = 4000012/05/12 08:32:42 > > >> INFO > > >> jvm.JvmMetrics: Initializing JVM Metrics with > > >> > processName=Master, sessionId=namenode:6000012/05/12 08:32:42 > > >> > INFO > > >> hbase.metrics: MetricsString added: revision12/05/12 08:32:42 INFO > > >> hbase.metrics: MetricsString added: hdfsUser12/05/12 08:32:42 INFO > > >> hbase.metrics: MetricsString added: hdfsDate12/05/12 08:32:42 INFO > > >> hbase.metrics: MetricsString added: hdfsUrl12/05/12 08:32:42 INFO > > >> hbase.metrics: MetricsString added: date12/05/12 08:32:42 INFO > > >> hbase.metrics: MetricsString added: hdfsRevision12/05/12 08:32:42 > > >> INFO > > >> hbase.metrics: MetricsString added: user12/05/12 08:32:42 INFO > > >> hbase.metrics: MetricsString added: hdfsVersion12/05/12 08:32:42 > > >> 
INFO > > >> hbase.metrics: MetricsString added: url12/05/12 08:32:42 INFO > > >> hbase.metrics: MetricsString added: version12/05/12 08:32:42 INFO > > >> hbase.metrics: new MBeanInfo12/05/12 08:32:42 INFO hbase.metrics: > > >> new > > >> MBeanInfo12/05/12 08:32:42 INFO metrics.MasterMetrics: > > >> Initialized12/05/12 > > >> 08:32:42 INFO master.ActiveMasterManager: > > >> Master=namenode:6000012/05/12 > > >> 08:32:44 INFO ipc.Client: Retrying connect to serve > > >> > r: namenode/10.0.2.3:8020. Already tried 0 time(s).12/05/12 > > >> > 08:32:45 > > >> INFO ipc.Client: Retrying connect to server: namenode/10.0.2.3:8020. > > >> Already tried 1 time(s).12/05/12 08:32:46 INFO ipc.Client: Retrying > > >> connect to server: namenode/10.0.2.3:8020. Already tried 2 > > >> time(s).12/05/12 > > >> 08:32:47 INFO ipc.Client: Retrying connect to server: namenode/ > > >> 10.0.2.3:8020. Already tried 3 time(s).12/05/12 08:32:48 INFO > > >> ipc.Client: Retrying connect to server: namenode/10.0.2.3:8020. > > >> Already tried 4 time(s).12/05/12 08:32:49 INFO ipc.Client: Retrying > > >> connect to > > >> server: namenode/10.0.2.3:8020. Already tried 5 time(s).12/05/12 > > >> 08:32:50 INFO ipc.Client: Retrying connect to server: namenode/ > > >> 10.0.2.3:8020. Already tried 6 time(s).12/05/12 08:32:51 INFO > > >> ipc.Client: Retrying connect to server: namenode/10.0.2.3:8020. > > >> Already tried 7 time(s).12/05/12 08:32:52 INFO ipc.Client: Retrying > > >> connect to > > >> server: namenode/10.0.2.3:8020. Already tried 8 time(s).12/05/12 > > >> 08:32:53 INFO ipc.Client: Retrying connect to ser > > >> > ver: namenode/10.0.2.3:8020. Already tried 9 time(s).12/05/12 > > >> 08:32:53 FATAL master.HMaster: Unhandled exception. Starting > > >> shutdown.java.net.ConnectException: Call to > namenode/10.0.2.3:8020failed on connection exception: > java.net.ConnectException: Connection > > >> refused at > org.apache.hadoop.ipc.Client.wrapException(Client.java:1134) > > >> at org.apache.hadoop.ipc.Client.call(Client.java:1110) at > > >> org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226) at > > >> $Proxy6.getProtocolVersion(Unknown Source) at > > >> org.apache.hadoop.ipc.RPC.getProxy(RPC.java:398) at > > >> org.apache.hadoop.ipc.RPC.getProxy(RPC.java:384) at > > >> org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:123) > > >> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:246) > > >> at > > >> org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:208) at > > >> > org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSyste > m.java:89) > > >> at > > >> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1563) > > >> at org.apache.hadoop.fs.FileSystem.acc > > >> > ess$200(FileSystem.java:67) at > > >> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:1597) > > >> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1579) > at > > >> org.apache.hadoop.fs.FileSystem.get(FileSystem.java:228) at > > >> org.apache.hadoop.fs.Path.getFileSystem(Path.java:183) at > > >> org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:364) at > > >> > org.apache.hadoop.hbase.master.MasterFileSystem.(MasterFileSystem.java > :86) > > >> at > > >> > org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:360 > ) > > >> at > > >> org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:293)Caused by: > > >> java.net.ConnectException: Connection refused at > > >> sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at > > >> 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567) > > >> at > > >> > org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:2 > 06) > > >> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408) at > > >> > org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:425) > > >> a > > >> > t > > >> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java: > > >> 532) at > > >> org.apache.hadoop.ipc.Client$Connection.access$2300(Client.java:210 > > >> ) at > > >> org.apache.hadoop.ipc.Client.getConnection(Client.java:1247) at > > >> org.apache.hadoop.ipc.Client.call(Client.java:1078) ... 18 > > >> more12/05/12 > > >> 08:32:53 INFO master.HMaster: Aborting12/05/12 08:32:53 DEBUG > > >> master.HMaster: Stopping service threads12/05/12 08:32:53 INFO > > >> ipc.HBaseServer: Stopping server on 6000012/05/12 08:32:53 INFO > > >> ipc.HBaseServer: IPC Server handler 5 on 60000: exiting12/05/12 > > >> 08:32:53 INFO ipc.HBaseServer: Stopping IPC Server listener on > > >> 6000012/05/12 > > >> 08:32:53 INFO ipc.HBaseServer: IPC Server handler 1 on 60000: > > >> exiting12/05/12 08:32:53 INFO ipc.HBaseServer: IPC Server handler 0 > > >> on > > >> 60000: exiting12/05/12 08:32:53 INFO ipc.HBaseServer: IPC Server > > >> handler 3 on 60000: exiting12/05/12 08:32:53 INFO ipc.HBaseServer: > > >> IPC Server handler > > >> 7 on 60000: exiting12/05/12 08:32:53 INFO ipc.HBaseServer: IPC > > >> Server handler 9 on 60000: exiting1 > > >> > 2/05/12 08:32:53 INFO ipc.HBaseServer: IPC Server handler 6 on > 60000: > > >> exiting12/05/12 08:32:53 INFO ipc.HBaseServer: IPC Server handler 4 > > >> on > > >> 60000: exiting12/05/12 08:32:53 INFO ipc.HBaseServer: IPC Server > > >> handler 2 on 60000: exiting12/05/12 08:32:53 INFO ipc.HBaseServer: > > >> IPC Server handler > > >> 8 on 60000: exiting12/05/12 08:32:53 INFO ipc.HBaseServer: Stopping > > >> IPC Server Responder12/05/12 08:32:53 INFO zookeeper.ZooKeeper: > Session: > > >> 0x13740bc4f70000c closed12/05/12 08:32:53 INFO zookeeper.ClientCnxn: > > >> EventThread shut down12/05/12 08:32:53 INFO master.HMaster: HMaster > > >> main thread exiting> From: harsh@cloudera.com > > >> >> Date: Sat, 12 May 2012 17:28:29 +0530 > > >> >> Subject: Re: Important "Undefined Error" > > >> >> To: user@hbase.apache.org > > >> >> > > >> >> Hi Dalia, > > >> >> > > >> >> On Sat, May 12, 2012 at 5:14 PM, Dalia Sobhy < > > >> dalia.mohsobhy@hotmail.com> wrote: > > >> >> > > > >> >> > Dear all, > > >> >> > I have first a problem with Hbase I am trying to install it on > > >> >> > a > > >> distributed/multinode cluster.. > > >> >> > I am using the cloudera > > >> https://ccp.cloudera.com/display/CDH4B2/HBase+Installation#HBaseIns > > >> tallation-StartingtheHBaseMaster > > >> >> > But when I write this command > > >> >> > Creating the /hbase Directory in HDFS $sudo -u hdfs hadoop fs > > >> >> > -mkdir > > >> /hbase > > >> >> > I get the following error:12/05/12 07:20:42 INFO > > >> security.UserGroupInformation: JAAS Configuration already set up > > >> for Hadoop, not re-installing. > > >> >> > > >> >> This is not an error and you shouldn't be worried. It is rather > > >> >> a noisy INFO log that should be fixed (as a DEBUG level instead) > > >> >> in subsequent releases (Are you using CDH3 or CDH4? IIRC only > > >> >> CDH3u3 printed these, not in anything above that.) > > >> >> > > >> >> > 2. Another Aspect is when I start the hbase master it closes > > >> automatically after a while. 
> > >> >>
> > >> >> Could you post us your HMaster start->crash log? You can use a
> > >> >> service like pastebin.com to send us the output.
> > >> >>
> > >> >> > 3. Also this command is not working: $host -v -t A `namenode`
> > >> >> > namenode: command not found
> > >> >>
> > >> >> The right command is perhaps just:
> > >> >>
> > >> >> $host -v -t A `hostname`
> > >> >>
> > >> >> --
> > >> >> Harsh J
> > >>
> > >> --
> > >> Harsh J
> > >
> > > --
> > > ∞
> > > Shashwat Shriparv
> >
> > --
> > ∞
> > Shashwat Shriparv
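Pulling the advice in this thread together, a rough consolidated sketch of the suggested steps (the conf paths and init-script names are assumptions based on a typical CDH3 layout, not taken verbatim from the thread):

  # copy the Hadoop client configs into HBase's conf directory so HBase
  # resolves the same fs.default.name as the rest of the cluster
  sudo cp /etc/hadoop-0.20/conf/core-site.xml   /etc/hbase/conf/
  sudo cp /etc/hadoop-0.20/conf/hdfs-site.xml   /etc/hbase/conf/
  sudo cp /etc/hadoop-0.20/conf/mapred-site.xml /etc/hbase/conf/

  # make sure HDFS itself is reachable and writable before starting HBase
  sudo -u hdfs hadoop dfsadmin -safemode leave
  sudo -u hdfs hadoop dfs -touchz foobar     # should succeed without the retry loop seen above
  sudo -u hdfs hadoop fs -mkdir /hbase       # the directory hbase.rootdir points at

  # stop HBase (killing leftover processes if needed), clear its temp
  # directory, then restart the master and the region servers
  sudo /etc/init.d/hadoop-hbase-master restart         # init-script name is an assumption
  sudo /etc/init.d/hadoop-hbase-regionserver restart   # run on each slave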