hbase-user mailing list archives

From shashwat shriparv <dwivedishash...@gmail.com>
Subject Re: Important "Undefined Error"
Date Fri, 18 May 2012 20:05:09 GMT
Here is the link from which you can download the preconfigured Hadoop/HBase.
Just keep in mind to change the configuration according to your machines and
hostnames:

http://dl.dropbox.com/u/19454506/Hadoop_HHH_Working.zip
The video link I sent you is not just for a cluster on the same PC; if you
follow the steps, you can configure a cluster across physical PCs as well.

And please let us know what code you are trying.

Also, whatever settings you have for the master node in the hosts file, copy
the same to the slaves as well...
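
For example (just a sketch reusing the addresses from your earlier mail --
adjust them to your real IPs), every node's /etc/hosts would carry the same
entries:

10.0.2.3    namenode.dalia.com
10.0.2.5    datanode3.dalia.com
10.0.2.6    datanode1.dalia.com
10.0.2.42   datanode2.dalia.com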

As a last option, if you are not able to do it, let me take a remote connection
to your PC and I will do it.. but I don't think that will be needed :)

On Fri, May 18, 2012 at 5:52 PM, Dalia Sobhy <dalia.mohsobhy@hotmail.com>wrote:

>  I did all that you said.
>
> Yes, I have to use it because I already tried my code, but on a standalone
> machine, with cdh3u3.
>
> But I am fed up with it; it has now been 20 days since I started deploying it
> and I only have one week to finish because I am stuck with deadlines.
>
> Anyway, I think it would be great if you send me your preconfigured dist of
> apache..
>
> Sent from my iPad
>
> On May 18, 2012, at 1:41 PM, "shashwat shriparv" <
> dwivedishashwat@gmail.com> wrote:
>
> > See, the hosts file and hostname file look fine... add this setting to all
> > your slaves and hosts.
> >
> > Is it compulsory to use the CDH3u3 dist...? Don't you download from Apache?
> > If you want, I can send you the preconfigured dist of Apache...
> >
> > Did you SSH to all the machines and add the public key of the master to all
> > the slaves? Also, just remove localhost everywhere in the conf files of
> > hadoop and hbase...
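> > (Roughly, from the master -- the user name here is just an example, use
> > yours:
> >   ssh-keygen -t rsa
> >   ssh-copy-id dalia@datanode1.dalia.com
> >   ssh-copy-id dalia@datanode2.dalia.com
> >   ssh-copy-id dalia@datanode3.dalia.com
> > after that, "ssh datanode1.dalia.com" from the master should log in without
> > a password.)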
> >
> > On Fri, May 18, 2012 at 4:20 PM, Dalia Sobhy <dalia.mohsobhy@hotmail.com
> >wrote:
> >
> >>
> >> Hi Shashwat,
> >> I am deploying my cluster on 4 PCs, one master and 3 slave nodes, not all
> >> on a single machine...
> >>
> >>> From: dalia.mohsobhy@hotmail.com
> >>> To: user@hbase.apache.org
> >>> Subject: RE: Important "Undefined Error"
> >>> Date: Fri, 18 May 2012 12:48:58 +0200
> >>>
> >>>
> >>> /etc/hosts
> >>> 127.0.0.1 localhost
> >>> 10.0.2.3      namenode.dalia.com
> >>> 10.0.2.5      datanode3.dalia.com
> >>> 10.0.2.6      datanode1.dalia.com
> >>> 10.0.2.42     datanode2.dalia.com
> >>>
> >>> /etc/hostname
> >>> namenode.dalia.com
> >>> And I am always receiving this error:
> >>> INFO org.apache.hadoop.ipc.Client: Retrying connect to server:
> >>> namenode/10.0.2.3:8020. Already tried 0 time(s).
> >>> Note that I have already disabled the firewall and I opened the port :
> >> ufw allow 8020
> >>> But when I run : telnet 10.0.2.3 8020 => connection refused ....
> >>> So the problem is that I cannot open the port..... :(
> >>> Note that I have tried it with other ports as 54310 and 9000 but same
> >> error occurs...
> >>>> Date: Fri, 18 May 2012 01:52:48 +0530
> >>>> Subject: Re: Important "Undefined Error"
> >>>> From: dwivedishashwat@gmail.com
> >>>> To: user@hbase.apache.org
> >>>>
> >>>> Please send the content of /etc/hosts and /etc/hostname file
> >>>>
> >>>> try this link
> >>>>
> >>>>
> >>
> http://helpmetocode.blogspot.in/2012/05/hadoop-fully-distributed-cluster.html
> >>>>
> >>>>
> >>>> for hadoop configuration
> >>>>
> >>>> On Mon, May 14, 2012 at 10:15 PM, Dalia Sobhy <
> >> dalia.mohsobhy@hotmail.com>wrote:
> >>>>
> >>>>> Yeasss
> >>>>>
> >>>>> Sent from my iPhone
> >>>>>
> >>>>> On 2012-05-14, at 5:28 PM, "N Keywal" <nkeywal@gmail.com> wrote:
> >>>>>
> >>>>>> In core-site.xml, do you have this?
> >>>>>>
> >>>>>> <configuration>
> >>>>>> <property>
> >>>>>> <name>fs.default.name</name>
> >>>>>> <value>hdfs://namenode:8020/hbase</value>
> >>>>>> </property>
> >>>>>>
> >>>>>> If you want hbase to connect to 8020 you must have hdfs listening
> >> on
> >>>>>> 8020 as well.
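> >>>>>>
> >>>>>> (As a sketch: with fs.default.name on 8020, hbase-site.xml would carry
> >>>>>> the matching port, e.g.
> >>>>>> <property>
> >>>>>>   <name>hbase.rootdir</name>
> >>>>>>   <value>hdfs://namenode:8020/hbase</value>
> >>>>>> </property>
> >>>>>> -- adjust host and port to whatever your core-site.xml really uses.)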
> >>>>>>
> >>>>>>
> >>>>>> On Mon, May 14, 2012 at 5:17 PM, Dalia Sobhy <
> >> dalia.mohsobhy@hotmail.com>
> >>>>> wrote:
> >>>>>>> Hiiii
> >>>>>>>
> >>>>>>> I have tried to make both ports the same.
> >>>>>>> But the problem is that HBase cannot connect to port 8020.
> >>>>>>> When I run nmap hostname, port 8020 wasn't in the list of open ports.
> >>>>>>> I have tried what Harsh told me about.
> >>>>>>> I used the same port he used, but the same error occurred.
> >>>>>>> Another point: the Cloudera doc says I have to use a canonical name for
> >>>>>>> the host, e.g. namenode.example.com, as the hostname, but I didn't find
> >>>>>>> that in any tutorial; no one does it.
> >>>>>>> Note that I am deploying my cluster in fully distributed mode, i.e. I am
> >>>>>>> using 4 machines..
> >>>>>>>
> >>>>>>> So any ideas??!!
> >>>>>>>
> >>>>>>> Sent from my iPhone
> >>>>>>>
> >>>>>>> On 2012-05-14, at 4:07 PM, "N Keywal" <nkeywal@gmail.com> wrote:
> >>>>>>>
> >>>>>>>> Hi,
> >>>>>>>>
> >>>>>>>> There could be multiple issues, but it's strange to have in
> >>>>> hbase-site.xml
> >>>>>>>>
> >>>>>>>> <value>hdfs://namenode:9000/hbase</value>
> >>>>>>>>
> >>>>>>>> while the core-site.xml says:
> >>>>>>>>
> >>>>>>>> <value>hdfs://namenode:54310/</value>
> >>>>>>>>
> >>>>>>>> The two entries should match.
> >>>>>>>>
> >>>>>>>> I would recommend:
> >>>>>>>> - use netstat to check the ports (netstat -l)
> >>>>>>>> - do the check recommended by Harsh J previously.
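> >>>>>>>>
> >>>>>>>> (For instance, "netstat -lnt | grep 8020" on the namenode box should
> >>>>>>>> show a LISTEN line for the NameNode port; if nothing shows up, or it is
> >>>>>>>> bound only to 127.0.0.1, remote machines will get "connection refused".
> >>>>>>>> Illustrative only -- the exact output depends on your setup.)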
> >>>>>>>>
> >>>>>>>> N.
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> On Mon, May 14, 2012 at 3:21 PM, Dalia Sobhy <
> >>>>> dalia.mohsobhy@hotmail.com> wrote:
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> pleaseeeeeeeeeeee helpppppppppppppppppppp
> >>>>>>>>>
> >>>>>>>>>> From: dalia.mohsobhy@hotmail.com
> >>>>>>>>>> To: user@hbase.apache.org
> >>>>>>>>>> Subject: RE: Important "Undefined Error"
> >>>>>>>>>> Date: Mon, 14 May 2012 12:20:18 +0200
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>> Hi,
> >>>>>>>>>> I tried what you told me, but nothing worked:(((
> >>>>>>>>>> First, when I run this command:
> >>>>>>>>>> dalia@namenode:~$ host -v -t A `hostname`
> >>>>>>>>>> Output:
> >>>>>>>>>> Trying "namenode"
> >>>>>>>>>> Host namenode not found: 3(NXDOMAIN)
> >>>>>>>>>> Received 101 bytes from 10.0.2.1#53 in 13 ms
> >>>>>>>>>>
> >>>>>>>>>> My core-site.xml:
> >>>>>>>>>> <configuration>
> >>>>>>>>>> <property>
> >>>>>>>>>>   <name>fs.default.name</name>
> >>>>>>>>>>   <!--<value>hdfs://namenode:8020</value>-->
> >>>>>>>>>>   <value>hdfs://namenode:54310/</value>
> >>>>>>>>>> </property>
> >>>>>>>>>> </configuration>
> >>>>>>>>>>
> >>>>>>>>>> My hdfs-site.xml:
> >>>>>>>>>> <configuration>
> >>>>>>>>>> <property><name>dfs.name.dir</name><value>/data/1/dfs/nn,/nfsmount/dfs/nn</value></property>
> >>>>>>>>>> <!--<property><name>dfs.data.dir</name><value>/data/1/dfs/dn,/data/2/dfs/dn,/data/3/dfs/dn</value></property>-->
> >>>>>>>>>> <property><name>dfs.datanode.max.xcievers</name><value>4096</value></property>
> >>>>>>>>>> <property><name>dfs.replication</name><value>3</value></property>
> >>>>>>>>>> <property><name>dfs.permissions.superusergroup</name><value>hadoop</value></property>
> >>>>>>>>>>
> >>>>>>>>>> My Mapred-site.xml:
> >>>>>>>>>> <configuration>
> >>>>>>>>>> <name>mapred.local.dir</name>
> >>>>>>>>>> <value>/data/1/mapred/local,/data/2/mapred/local,/data/3/mapred/local</value>
> >>>>>>>>>> </configuration>
> >>>>>>>>>>
> >>>>>>>>>> My Hbase-site.xml:
> >>>>>>>>>> <configuration>
> >>>>>>>>>> <property><name>hbase.cluster.distributed</name><value>true</value></property>
> >>>>>>>>>> <property><name>hbase.rootdir</name><value>hdfs://namenode:9000/hbase</value></property>
> >>>>>>>>>> <property><name>hbase.zookeeper.quorun</name><value>namenode</value></property>
> >>>>>>>>>> <property><name>hbase.regionserver.port</name><value>60020</value><description>The host and port that the HBase master runs at.</description></property>
> >>>>>>>>>> <property><name>dfs.replication</name><value>1</value></property>
> >>>>>>>>>> <property><name>hbase.zookeeper.property.clientPort</name><value>2181</value><description>Property from ZooKeeper's config zoo.cfg. The port at which the clients will connect.</description></property>
> >>>>>>>>>> </configuration>
> >>>>>>>>>> Please help, I am really disappointed; I have been through all of
> >>>>>>>>>> this for two weeks !!!!
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>>> From: dwivedishashwat@gmail.com
> >>>>>>>>>>> To: user@hbase.apache.org
> >>>>>>>>>>> Subject: RE: Important "Undefined Error"
> >>>>>>>>>>> Date: Sat, 12 May 2012 23:31:49 +0530
> >>>>>>>>>>>
> >>>>>>>>>>> The problem is that your HBase is not able to connect to Hadoop. Can
> >>>>>>>>>>> you put your hbase-site.xml content here? Have you specified localhost
> >>>>>>>>>>> somewhere? If so, remove localhost from everywhere and put your HDFS
> >>>>>>>>>>> namenode address instead. Suppose your namenode is running on
> >>>>>>>>>>> master:9000; then set your HBase file system setting to
> >>>>>>>>>>> master:9000/hbase. Here I am sending you the configuration which I am
> >>>>>>>>>>> using in HBase and which is working.
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>> My hbase-site.xml content is
> >>>>>>>>>>>
> >>>>>>>>>>> <?xml version="1.0"?>
> >>>>>>>>>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
> >>>>>>>>>>> <!--
> >>>>>>>>>>> /**
> >>>>>>>>>>> * Copyright 2010 The Apache Software Foundation
> >>>>>>>>>>> *
> >>>>>>>>>>> * Licensed to the Apache Software Foundation (ASF) under one
> >>>>>>>>>>> * or more contributor license agreements.  See the NOTICE
> >> file
> >>>>>>>>>>> * distributed with this work for additional information
> >>>>>>>>>>> * regarding copyright ownership.  The ASF licenses this file
> >>>>>>>>>>> * to you under the Apache License, Version 2.0 (the
> >>>>>>>>>>> * "License"); you may not use this file except in compliance
> >>>>>>>>>>> * with the License.  You may obtain a copy of the License at
> >>>>>>>>>>> *
> >>>>>>>>>>> *     http://www.apache.org/licenses/LICENSE-2.0
> >>>>>>>>>>> *
> >>>>>>>>>>> * Unless required by applicable law or agreed to in writing,
> >>>>> software
> >>>>>>>>>>> * distributed under the License is distributed on an "AS IS"
> >> BASIS,
> >>>>>>>>>>> * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
> >> express or
> >>>>> implied.
> >>>>>>>>>>> * See the License for the specific language governing
> >> permissions
> >>>>> and
> >>>>>>>>>>> * limitations under the License.
> >>>>>>>>>>> */
> >>>>>>>>>>> -->
> >>>>>>>>>>> <configuration>
> >>>>>>>>>>> <property>
> >>>>>>>>>>> <name>hbase.rootdir</name>
> >>>>>>>>>>> <value>hdfs://master:9000/hbase</value>
> >>>>>>>>>>> </property>
> >>>>>>>>>>> <property>
> >>>>>>>>>>> <name>hbase.master</name>
> >>>>>>>>>>> <value>master:60000</value>
> >>>>>>>>>>> <description>The host and port that the HBase master runs
> >>>>> at.</description>
> >>>>>>>>>>> </property>
> >>>>>>>>>>> <property>
> >>>>>>>>>>> <name>hbase.regionserver.port</name>
> >>>>>>>>>>> <value>60020</value>
> >>>>>>>>>>> <description>The host and port that the HBase master runs
> >>>>> at.</description>
> >>>>>>>>>>> </property>
> >>>>>>>>>>> <!--<property>
> >>>>>>>>>>> <name>hbase.master.port</name>
> >>>>>>>>>>> <value>60000</value>
> >>>>>>>>>>> <description>The host and port that the HBase master runs
> >>>>> at.</description>
> >>>>>>>>>>> </property>-->
> >>>>>>>>>>> <property>
> >>>>>>>>>>> <name>hbase.cluster.distributed</name>
> >>>>>>>>>>> <value>true</value>
> >>>>>>>>>>> </property>
> >>>>>>>>>>> <property>
> >>>>>>>>>>> <name>hbase.tmp.dir</name>
> >>>>>>>>>>> <value>/home/shashwat/Hadoop/hbase-0.90.4/temp</value>
> >>>>>>>>>>> </property>
> >>>>>>>>>>> <property>
> >>>>>>>>>>> <name>hbase.zookeeper.quorum</name>
> >>>>>>>>>>> <value>master</value>
> >>>>>>>>>>> </property>
> >>>>>>>>>>> <property>
> >>>>>>>>>>> <name>dfs.replication</name>
> >>>>>>>>>>> <value>1</value>
> >>>>>>>>>>> </property>
> >>>>>>>>>>> <property>
> >>>>>>>>>>> <name>hbase.zookeeper.property.clientPort</name>
> >>>>>>>>>>> <value>2181</value>
> >>>>>>>>>>> <description>Property from ZooKeeper's config zoo.cfg.
> >>>>>>>>>>> The port at which the clients will connect.
> >>>>>>>>>>> </description>
> >>>>>>>>>>> </property>
> >>>>>>>>>>> <property>
> >>>>>>>>>>> <name>hbase.zookeeper.property.dataDir</name>
> >>>>>>>>>>> <value>/home/shashwat/zookeeper</value>
> >>>>>>>>>>> <description>Property from ZooKeeper's config zoo.cfg.
> >>>>>>>>>>> The directory where the snapshot is stored.
> >>>>>>>>>>> </description>
> >>>>>>>>>>> </property>
> >>>>>>>>>>>
> >>>>>>>>>>> </configuration>
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>> Check this out, and also stop HBase. If it is not stopping, kill all
> >>>>>>>>>>> the processes. Then, after putting your hdfs-site.xml, mapred-site.xml
> >>>>>>>>>>> and core-site.xml into the HBase conf directory, try to restart; also
> >>>>>>>>>>> delete the folders created by HBase, like the temp directory, and then
> >>>>>>>>>>> try to start.
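> >>>>>>>>>>>
> >>>>>>>>>>> (Roughly, and assuming the usual CDH paths -- adjust them to your
> >>>>>>>>>>> install -- that copy step would be something like:
> >>>>>>>>>>>   cp /etc/hadoop-0.20/conf/{core-site.xml,hdfs-site.xml,mapred-site.xml} /usr/lib/hbase/conf/
> >>>>>>>>>>> and then clear whatever hbase.tmp.dir points to before starting HBase
> >>>>>>>>>>> again.)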
> >>>>>>>>>>>
> >>>>>>>>>>> Regards
> >>>>>>>>>>> ∞
> >>>>>>>>>>> Shashwat Shriparv
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>> -----Original Message-----
> >>>>>>>>>>> From: Dalia Sobhy [mailto:dalia.mohsobhy@hotmail.com]
> >>>>>>>>>>> Sent: 12 May 2012 22:48
> >>>>>>>>>>> To: user@hbase.apache.org
> >>>>>>>>>>> Subject: RE: Important "Undefined Error"
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>> Hi Shashwat,
> >>>>>>>>>>> I want to tell you about my configurations:
> >>>>>>>>>>> I am using 4 nodes. One "Master": Namenode, SecondaryNamenode, Job
> >>>>>>>>>>> Tracker, Zookeeper, HMaster. Three "Slaves": datanodes, tasktrackers,
> >>>>>>>>>>> regionservers. On both master and slaves, all the Hadoop daemons are
> >>>>>>>>>>> working well, but the HBase master service is not working..
> >>>>>>>>>>> As for the region server, here is the error:
> >>>>>>>>>>>
> >>>>>>>>>>> 12/05/12 14:42:13 INFO util.ServerCommandLine: vmName=Java HotSpot(TM) 64-Bit Server VM, vmVendor=Sun Microsystems Inc., vmVersion=20.1-b02
> >>>>>>>>>>> 12/05/12 14:42:13 INFO util.ServerCommandLine: vmInputArguments=[-Xmx1000m, -ea, -XX:+UseConcMarkSweepGC, -XX:+CMSIncrementalMode, -Dhbase.log.dir=/usr/lib/hbase/bin/../logs, -Dhbase.log.file=hbase.log, -Dhbase.home.dir=/usr/lib/hbase/bin/.., -Dhbase.id.str=, -Dhbase.root.logger=INFO,console, -Djava.library.path=/usr/lib/hadoop-0.20/lib/native/Linux-amd64-64:/usr/lib/hbase/bin/../lib/native/Linux-amd64-64]
> >>>>>>>>>>> 12/05/12 14:42:13 INFO ipc.HBaseRpcMetrics: Initializing RPC Metrics with hostName=HRegionServer, port=60020
> >>>>>>>>>>> 12/05/12 14:42:14 FATAL zookeeper.ZKConfig: The server in zoo.cfg cannot be set to localhost in a fully-distributed setup because it won't be reachable. See "Getting Started" for more information.
> >>>>>>>>>>> 12/05/12 14:42:14 WARN zookeeper.ZKConfig: Cannot read zoo.cfg, loading from XML files
> >>>>>>>>>>> java.io.IOException: The server in zoo.cfg cannot be set to localhost in a fully-distributed setup because it won't be reachable. See "Getting Started" for more information.
> >>>>>>>>>>>         at org.apache.hadoop.hbase.zookeeper.ZKConfig.parseZooCfg(ZKConfig.java:172)
> >>>>>>>>>>>         at org.apache.hadoop.hbase.zookeeper.ZKConfig.makeZKProps(ZKConfig.java:68)
> >>>>>>>>>>>         at org.apache.hadoop.hbase.zookeeper.ZKConfig.getZKQuorumServersString(ZKConfig.java:249)
> >>>>>>>>>>>         at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:117)
> >>>>>>>>>>>         at org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:489)
> >>>>>>>>>>>         at org.apache.hadoop.hbase.regionserver.HRegionServer.initialize(HRegionServer.java:465)
> >>>>>>>>>>>         at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:564)
> >>>>>>>>>>>         at java.lang.Thread.run(Thread.java:662)
> >>>>>>>>>>> 12/05/12 14:42:14 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.3.5-cdh3u4--1, built on 05/07/2012 21:12 GMT
> >>>>>>>>>>> 12/05/12 14:42:14 INFO zookeeper.ZooKeeper: Client environment:host.name=datanode2
> >>>>>>>>>>> 12/05/12 14:42:14 INFO zookeeper.ZooKeeper: Client environment:java.version=1.6.0_26
> >>>>>>>>>>> 12/05/12 14:42:14 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
> >>>>>>>>>>> 12/05/12 14:42:14 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/lib/jvm/java-6-sun-1.6.0.26/jre
> >>>>>>>>>>> 12/05/12 14:42:14 INFO zookeeper.ZooKeeper: Client
> environment:java.class.path=/usr/lib/hbase/bin/../conf:/usr/lib/jvm/java-6-s
> >>>>>>>>>>>
> >>>>>
> >>
> un/lib/tools.jar:/usr/lib/hbase/bin/..:/usr/lib/hbase/bin/../hbase-0.90.6-cd
> >>>>>>>>>>>
> >>>>>
> >>
> h3u4.jar:/usr/lib/hbase/bin/../hbase-0.90.6-cdh3u4-tests.jar:/usr/lib/hbase/
> >>>>>>>>>>>
> >>>>>
> >>
> bin/../lib/activation-1.1.jar:/usr/lib/hbase/bin/../lib/asm-3.1.jar:/usr/lib
> >>>>>>>>>>>
> >>>>>
> >>
> /hbase/bin/../lib/avro-1.5.4.jar:/usr/lib/hbase/bin/../lib/avro-ipc-1.5.4.ja
> >>>>>>>>>>>
> >>>>>
> >>
> r:/usr/lib/hbase/bin/../lib/commons-cli-1.2.jar:/usr/lib/hbase/bin/../lib/co
> >>>>>>>>>>>
> >>>>>
> >>
> mmons-codec-1.4.jar:/usr/lib/hbase/bin/../lib/commons-el-1.0.jar:/usr/lib/hb
> >>>>>>>>>>>
> >>>>>
> >>
> ase/bin/../lib/commons-httpclient-3.1.jar:/usr/lib/hbase/bin/../lib/commons-
> >>>>>>>>>>> lang-2.5.jar:/usr/lib/hbase/bin/../lib/commo
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>
> >>
> ns-logging-1.1.1.jar:/usr/lib/hbase/bin/../lib/commons-net-1.4.1.jar:/usr/li
> >>>>>>>>>>>
> >>>>>
> >>
> b/hbase/bin/../lib/core-3.1.1.jar:/usr/lib/hbase/bin/../lib/guava-r06.jar:/u
> >>>>>>>>>>>
> >>>>>
> >>
> sr/lib/hbase/bin/../lib/guava-r09-jarjar.jar:/usr/lib/hbase/bin/../lib/hadoo
> >>>>>>>>>>>
> >>>>>
> >>
> p-core.jar:/usr/lib/hbase/bin/../lib/jackson-core-asl-1.5.2.jar:/usr/lib/hba
> >>>>>>>>>>>
> >>>>>
> >>
> se/bin/../lib/jackson-jaxrs-1.5.5.jar:/usr/lib/hbase/bin/../lib/jackson-mapp
> >>>>>>>>>>>
> >>>>>
> >>
> er-asl-1.5.2.jar:/usr/lib/hbase/bin/../lib/jackson-xc-1.5.5.jar:/usr/lib/hba
> >>>>>>>>>>>
> >>>>>
> >>
> se/bin/../lib/jamon-runtime-2.3.1.jar:/usr/lib/hbase/bin/../lib/jasper-compi
> >>>>>>>>>>>
> >>>>>
> >>
> ler-5.5.23.jar:/usr/lib/hbase/bin/../lib/jasper-runtime-5.5.23.jar:/usr/lib/
> >>>>>>>>>>>
> >>>>>
> >>
> hbase/bin/../lib/jaxb-api-2.1.jar:/usr/lib/hbase/bin/../lib/jaxb-impl-2.1.12
> >>>>>>>>>>>
> >>>>>
> >>
> .jar:/usr/lib/hbase/bin/../lib/jersey-core-1.4.jar:/usr/lib/hbase/bin/../lib
> >>>>>>>>>>>
> >>>>>
> >>
> /jersey-json-1.4.jar:/usr/lib/hbase/bin/../lib/jersey-server-1.4.jar:/usr/li
> >>>>>>>>>>>
> >>>>>
> >>
> b/hbase/bin/../lib/jettison-1.1.jar:/usr/lib/hbase/bin/../lib/jetty-6.1.26.j
> >>>>>>>>>>>
> >>>>>
> >>
> ar:/usr/lib/hbase/bin/../lib/jetty-util-6.1.26.jar:/usr/lib/hbase/bin/../lib
> >>>>>>>>>>> /jruby-co
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>
> >>
> mplete-1.6.0.jar:/usr/lib/hbase/bin/../lib/jsp-2.1-6.1.14.jar:/usr/lib/hbase
> >>>>>>>>>>>
> >>>>>
> >>
> /bin/../lib/jsp-api-2.1-6.1.14.jar:/usr/lib/hbase/bin/../lib/jsp-api-2.1.jar
> >>>>>>>>>>>
> >>>>>
> >>
> :/usr/lib/hbase/bin/../lib/jsr311-api-1.1.1.jar:/usr/lib/hbase/bin/../lib/lo
> >>>>>>>>>>>
> >>>>>
> >>
> g4j-1.2.16.jar:/usr/lib/hbase/bin/../lib/netty-3.2.4.Final.jar:/usr/lib/hbas
> >>>>>>>>>>>
> >>>>>
> >>
> e/bin/../lib/protobuf-java-2.3.0.jar:/usr/lib/hbase/bin/../lib/servlet-api-2
> >>>>>>>>>>>
> >>>>>
> >>
> .5-6.1.14.jar:/usr/lib/hbase/bin/../lib/servlet-api-2.5.jar:/usr/lib/hbase/b
> >>>>>>>>>>>
> >>>>>
> >>
> in/../lib/slf4j-api-1.5.8.jar:/usr/lib/hbase/bin/../lib/slf4j-log4j12-1.5.8.
> >>>>>>>>>>>
> >>>>>
> >>
> jar:/usr/lib/hbase/bin/../lib/snappy-java-1.0.3.2.jar:/usr/lib/hbase/bin/../
> >>>>>>>>>>>
> >>>>>
> >>
> lib/stax-api-1.0.1.jar:/usr/lib/hbase/bin/../lib/thrift-0.2.0.jar:/usr/lib/h
> >>>>>>>>>>>
> >>>>>
> >>
> base/bin/../lib/velocity-1.5.jar:/usr/lib/hbase/bin/../lib/xmlenc-0.52.jar:/
> >>>>>>>>>>>
> >>>>>
> >>
> usr/lib/hbase/bin/../lib/zookeeper.jar:/etc/zookeeper:/etc/hadoop-0.20/conf:
> >>>>>>>>>>>
> >>>>>
> >>
> /usr/lib/hadoop-0.20/hadoop-0.20.2-cdh3u4-ant.jar:/usr/lib/hadoop-0.20/hadoo
> >>>>>>>>>>>
> >>>>>
> >>
> p-0.20.2-cdh3u4-tools.jar:/usr/lib/hadoop-0.20/hadoop-tools.jar:/usr/lib/had
> >>>>>>>>>>> oop-0.20/
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>
> >>
> hadoop-examples-0.20.2-cdh3u4.jar:/usr/lib/hadoop-0.20/hadoop-ant-0.20.2-cdh
> >>>>>>>>>>>
> >>>>>
> >>
> 3u4.jar:/usr/lib/hadoop-0.20/hadoop-0.20.2-cdh3u4-examples.jar:/usr/lib/hado
> >>>>>>>>>>>
> >>>>>
> >>
> op-0.20/hadoop-ant.jar:/usr/lib/hadoop-0.20/hadoop-core-0.20.2-cdh3u4.jar:/u
> >>>>>>>>>>>
> >>>>>
> >>
> sr/lib/hadoop-0.20/hadoop-core.jar:/usr/lib/hadoop-0.20/hadoop-tools-0.20.2-
> >>>>>>>>>>>
> >>>>>
> >>
> cdh3u4.jar:/usr/lib/hadoop-0.20/hadoop-examples.jar:/usr/lib/hadoop-0.20/had
> >>>>>>>>>>>
> >>>>>
> >>
> oop-0.20.2-cdh3u4-core.jar:/usr/lib/hadoop-0.20/hadoop-test-0.20.2-cdh3u4.ja
> >>>>>>>>>>>
> >>>>>
> >>
> r:/usr/lib/hadoop-0.20/hadoop-0.20.2-cdh3u4-test.jar:/usr/lib/hadoop-0.20/ha
> >>>>>>>>>>>
> >>>>>
> >>
> doop-test.jar:/usr/lib/hadoop-0.20/lib/jetty-6.1.26.cloudera.1.jar:/usr/lib/
> >>>>>>>>>>>
> >>>>>
> >>
> hadoop-0.20/lib/jets3t-0.6.1.jar:/usr/lib/hadoop-0.20/lib/junit-4.5.jar:/usr
> >>>>>>>>>>>
> >>>>>
> >>
> /lib/hadoop-0.20/lib/commons-daemon-1.0.1.jar:/usr/lib/hadoop-0.20/lib/slf4j
> >>>>>>>>>>>
> >>>>>
> >>
> -api-1.4.3.jar:/usr/lib/hadoop-0.20/lib/commons-codec-1.4.jar:/usr/lib/hadoo
> >>>>>>>>>>>
> >>>>>
> >>
> p-0.20/lib/log4j-1.2.15.jar:/usr/lib/hadoop-0.20/lib/jasper-compiler-5.5.12.
> >>>>>>>>>>>
> >>>>>
> >>
> jar:/usr/lib/hadoop-0.20/lib/guava-r09-jarjar.jar:/usr/lib/hadoop-0.20/lib/j
> >>>>>>>>>>> ackson-ma
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>
> >>
> pper-asl-1.5.2.jar:/usr/lib/hadoop-0.20/lib/commons-logging-1.0.4.jar:/usr/l
> >>>>>>>>>>>
> >>>>>
> >>
> ib/hadoop-0.20/lib/ant-contrib-1.0b3.jar:/usr/lib/hadoop-0.20/lib/commons-la
> >>>>>>>>>>>
> >>>>>
> >>
> ng-2.4.jar:/usr/lib/hadoop-0.20/lib/kfs-0.2.2.jar:/usr/lib/hadoop-0.20/lib/j
> >>>>>>>>>>>
> >>>>>
> >>
> etty-util-6.1.26.cloudera.1.jar:/usr/lib/hadoop-0.20/lib/commons-httpclient-
> >>>>>>>>>>>
> >>>>>
> >>
> 3.1.jar:/usr/lib/hadoop-0.20/lib/jsch-0.1.42.jar:/usr/lib/hadoop-0.20/lib/sl
> >>>>>>>>>>>
> >>>>>
> >>
> f4j-log4j12-1.4.3.jar:/usr/lib/hadoop-0.20/lib/servlet-api-2.5-20081211.jar:
> >>>>>>>>>>>
> >>>>>
> >>
> /usr/lib/hadoop-0.20/lib/jasper-runtime-5.5.12.jar:/usr/lib/hadoop-0.20/lib/
> >>>>>>>>>>>
> >>>>>
> >>
> servlet-api-2.5-6.1.14.jar:/usr/lib/hadoop-0.20/lib/jetty-servlet-tester-6.1
> >>>>>>>>>>>
> >>>>>
> >>
> .26.cloudera.1.jar:/usr/lib/hadoop-0.20/lib/aspectjtools-1.6.5.jar:/usr/lib/
> >>>>>>>>>>>
> >>>>>
> >>
> hadoop-0.20/lib/xmlenc-0.52.jar:/usr/lib/hadoop-0.20/lib/hadoop-fairschedule
> >>>>>>>>>>>
> >>>>>
> >>
> r-0.20.2-cdh3u4.jar:/usr/lib/hadoop-0.20/lib/jackson-core-asl-1.5.2.jar:/usr
> >>>>>>>>>>>
> >>>>>
> >>
> /lib/hadoop-0.20/lib/mockito-all-1.8.2.jar:/usr/lib/hadoop-0.20/lib/commons-
> >>>>>>>>>>>
> >>>>>
> >>
> el-1.0.jar:/usr/lib/hadoop-0.20/lib/commons-logging-api-1.0.4.jar:/usr/lib/h
> >>>>>>>>>>> adoop-0.2
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>
> >>
> 0/lib/commons-net-3.1.jar:/usr/lib/hadoop-0.20/lib/commons-cli-1.2.jar:/usr/
> >>>>>>>>>>>
> >>>>>
> >>
> lib/hadoop-0.20/lib/aspectjrt-1.6.5.jar:/usr/lib/hadoop-0.20/lib/core-3.1.1.
> >>>>>>>>>>>
> >>>>>
> >>
> jar:/usr/lib/hadoop-0.20/lib/hsqldb-1.8.0.10.jar:/usr/lib/hadoop-0.20/lib/or
> >>>>>>>>>>>
> >>>>>
> >>
> o-2.0.8.jar:/usr/lib/zookeeper/zookeeper.jar:/usr/lib/zookeeper/zookeeper-3.
> >>>>>>>>>>>
> >>>>>
> >>
> 3.5-cdh3u4.jar:/usr/lib/zookeeper/lib/log4j-1.2.15.jar:/usr/lib/zookeeper/li
> >>>>>>>>>>>
> >>>>>
> >>
> b/jline-0.9.94.jar::/usr/lib/hadoop-0.20/conf:/usr/lib/hadoop-0.20/hadoop-co
> >>>>>>>>>>>
> >>>>>
> >>
> re-0.20.2-cdh3u4.jar:/usr/lib/hadoop-0.20/lib/ant-contrib-1.0b3.jar:/usr/lib
> >>>>>>>>>>>
> >>>>>
> >>
> /hadoop-0.20/lib/aspectjrt-1.6.5.jar:/usr/lib/hadoop-0.20/lib/aspectjtools-1
> >>>>>>>>>>>
> >>>>>
> >>
> .6.5.jar:/usr/lib/hadoop-0.20/lib/commons-cli-1.2.jar:/usr/lib/hadoop-0.20/l
> >>>>>>>>>>>
> >>>>>
> >>
> ib/commons-codec-1.4.jar:/usr/lib/hadoop-0.20/lib/commons-daemon-1.0.1.jar:/
> >>>>>>>>>>>
> >>>>>
> >>
> usr/lib/hadoop-0.20/lib/commons-el-1.0.jar:/usr/lib/hadoop-0.20/lib/commons-
> >>>>>>>>>>>
> >>>>>
> >>
> httpclient-3.1.jar:/usr/lib/hadoop-0.20/lib/commons-lang-2.4.jar:/usr/lib/ha
> >>>>>>>>>>>
> >>>>>
> >>
> doop-0.20/lib/commons-logging-1.0.4.jar:/usr/lib/hadoop-0.20/lib/commons-log
> >>>>>>>>>>> ging-api-
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>
> >>
> 1.0.4.jar:/usr/lib/hadoop-0.20/lib/commons-net-3.1.jar:/usr/lib/hadoop-0.20/
> >>>>>>>>>>>
> >>>>>
> >>
> lib/core-3.1.1.jar:/usr/lib/hadoop-0.20/lib/guava-r09-jarjar.jar:/usr/lib/ha
> >>>>>>>>>>>
> >>>>>
> >>
> doop-0.20/lib/hadoop-fairscheduler-0.20.2-cdh3u4.jar:/usr/lib/hadoop-0.20/li
> >>>>>>>>>>>
> >>>>>
> >>
> b/hsqldb-1.8.0.10.jar:/usr/lib/hadoop-0.20/lib/jackson-core-asl-1.5.2.jar:/u
> >>>>>>>>>>>
> >>>>>
> >>
> sr/lib/hadoop-0.20/lib/jackson-mapper-asl-1.5.2.jar:/usr/lib/hadoop-0.20/lib
> >>>>>>>>>>>
> >>>>>
> >>
> /jasper-compiler-5.5.12.jar:/usr/lib/hadoop-0.20/lib/jasper-runtime-5.5.12.j
> >>>>>>>>>>>
> >>>>>
> >>
> ar:/usr/lib/hadoop-0.20/lib/jets3t-0.6.1.jar:/usr/lib/hadoop-0.20/lib/jetty-
> >>>>>>>>>>>
> >>>>>
> >>
> 6.1.26.cloudera.1.jar:/usr/lib/hadoop-0.20/lib/jetty-servlet-tester-6.1.26.c
> >>>>>>>>>>>
> >>>>>
> >>
> loudera.1.jar:/usr/lib/hadoop-0.20/lib/jetty-util-6.1.26.cloudera.1.jar:/usr
> >>>>>>>>>>>
> >>>>>
> >>
> /lib/hadoop-0.20/lib/jsch-0.1.42.jar:/usr/lib/hadoop-0.20/lib/junit-4.5.jar:
> >>>>>>>>>>>
> >>>>>
> >>
> /usr/lib/hadoop-0.20/lib/kfs-0.2.2.jar:/usr/lib/hadoop-0.20/lib/log4j-1.2.15
> >>>>>>>>>>>
> >>>>>
> >>
> .jar:/usr/lib/hadoop-0.20/lib/mockito-all-1.8.2.jar:/usr/lib/hadoop-0.20/lib
> >>>>>>>>>>>
> >>>>>
> >>
> /oro-2.0.8.jar:/usr/lib/hadoop-0.20/lib/servlet-api-2.5-20081211.jar:/usr/li
> >>>>>>>>>>> b/hadoop-
> >>>>>>>>>>>
> >>>>>
> >>
> 0.20/lib/servlet-api-2.5-6.1.14.jar:/usr/lib/hadoop-0.20/lib/slf4j-api-1.4.
> >>>>>>>>>>>
> >>>>>
> >>
> 3.jar:/usr/lib/hadoop-0.20/lib/slf4j-log4j12-1.4.3.jar:/usr/lib/hadoop-0.20/
> >>>>>>>>>>> lib/xmlenc-0.52.jar12/05/12 14:42:14 INFO zookeeper.ZooKeeper:
> >>>>> Client
> >>>>>>>>>>>
> >>>>>
> >>
> environment:java.library.path=/usr/lib/hadoop-0.20/lib/native/Linux-amd64-64
> >>>>>>>>>>> :/usr/lib/hbase/bin/../lib/native/Linux-amd64-6412/05/12
> >> 14:42:14
> >>>>> INFO
> >>>>>>>>>>> zookeeper.ZooKeeper: Client
> >> environment:java.io.tmpdir=/tmp12/05/12
> >>>>> 14:42:14
> >>>>>>>>>>> INFO zookeeper.ZooKeeper: Client
> >>>>> environment:java.compiler=<NA>12/05/12
> >>>>>>>>>>> 14:42:14 INFO zookeeper.ZooKeeper: Client environment:os.name
> >>>>> =Linux12/05/12
> >>>>>>>>>>> 14:42:14 INFO zookeeper.ZooKeeper: Client
> >>>>> environment:os.arch=amd6412/05/12
> >>>>>>>>>>> 14:42:14 INFO zookeeper.ZooKeeper: Client
> >>>>>>>>>>> environment:os.version=2.6.35-22-server12/05/12 14:42:14 INFO
> >>>>>>>>>>> zookeeper.ZooKeeper: Client environment:user.name
> >> =dalia12/05/12
> >>>>> 14:42:14
> >>>>>>>>>>> INFO zookeeper.ZooKeeper: Client
> >>>>> environment:user.home=/home/dalia12/05/12
> >>>>>>>>>>> 14:42:14 INFO zookeeper.ZooKeeper: Client
> >>>>> environment:user.dir=/home/dalia12
> >>>>>>>>>>> /05/12 14:42:14 INFO zookeeper.ZooKeeper: Initiating client
> >>>>> connection,
> >>>>>>>>>>> connectString=localhost:2181 sessionTimeout=180000
> >>>>>>>>>>> watcher=regionserver:6002012/05/12 14:42:14 INFO
> >>>>> zookeeper.ClientCnxn:
> >>>>>>>>>>> Opening socket connection to server
> >>>>> localhost/0:0:0:0:0:0:0:1:218112/05/12
> >>>>>>>>>>> 14:42:14 WARN zookeeper.ClientCnxn: Session 0x0 for server
> >> null,
> >>>>> unexpected
> >>>>>>>>>>> error, closing socket connection and attempting
> >>>>>>>>>>> reconnectjava.net.ConnectException: Connection refused      at
> >>>>>>>>>>> sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)    at
> >>>>>>>>>>>
> >>>>>
> >> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
> >>>>> at
> >>>>>>>>>>>
> >>>>>
> >>
> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1143)12/05/12
> >>>>>>>>>>> 14:42:14 INFO zookeeper.ClientCnxn: Opening socket connection
> >> to
> >>>>> server
> >>>>>>>>>>> localhost/127.0.0.1:218112/05/12 14:42:14 WARN
> >>>>> zookeeper.ClientCnxn: Session
> >>>>>>>>>>> 0x0 for server null, unexpected error, closing socket
> >> connection and
> >>>>>>>>>>> attempting reconnectjava.net.ConnectException: Connection
> >> refused
> >>>>> at
> >>>>>>>>>>> sun.nio.ch.SocketChannelImpl.checkConnect(Native
> >>>>>>>>>>>  Method)   at
> >>>>>>>>>>>
> >>>>>
> >> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
> >>>>> at
> >>>>>>>>>>>
> >>>>>
> >>
> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1143)12/05/12
> >>>>>>>>>>> 14:42:15 INFO ipc.Client: Retrying connect to server:
> >>>>>>>>>>> namenode/10.0.2.3:8020. Already tried 0 time(s).12/05/12
> >> 14:42:16
> >>>>> INFO
> >>>>>>>>>>> zookeeper.ClientCnxn: Opening socket connection to server
> >>>>> localhost/0:0:0:0:
> >>>>>>>>>>> 0:0:0:1:218112/05/12 14:42:16 WARN zookeeper.ClientCnxn:
> >> Session
> >>>>> 0x0 for
> >>>>>>>>>>> server null, unexpected error, closing socket connection and
> >>>>> attempting
> >>>>>>>>>>> reconnectjava.net.ConnectException: Connection refused      at
> >>>>>>>>>>> sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)    at
> >>>>>>>>>>>
> >>>>>
> >> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
> >>>>> at
> >>>>>>>>>>>
> >>>>>
> >>
> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1143)12/05/12
> >>>>>>>>>>> 14:42:16 INFO ipc.Client: Retrying connect to server:
> >>>>>>>>>>> namenode/10.0.2.3:8020. Already tried 1 time(s).12/05/12
> >> 14:42:16
> >>>>> INFO
> >>>>>>>>>>> zookeeper.ClientCnxn: Opening socket connection to server
> >> localhost/
> >>>>> 127.0.0.
> >>>>>>>>>>> 1:218112/05/12 14:
> >>>>>>>>>>> 42:16 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> >>>>> unexpected
> >>>>>>>>>>> error, closing socket connection and attempting
> >>>>>>>>>>> reconnectjava.net.ConnectException: Connection refused      at
> >>>>>>>>>>> sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)    at
> >>>>>>>>>>>
> >>>>>
> >> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
> >>>>> at
> >>>>>>>>>>>
> >>>>>
> >>
> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1143)12/05/12
> >>>>>>>>>>> 14:42:17 INFO ipc.Client: Retrying connect to server:
> >>>>>>>>>>> namenode/10.0.2.3:8020. Already tried 2 time(s).12/05/12
> >> 14:42:18
> >>>>> INFO
> >>>>>>>>>>> ipc.Client: Retrying connect to server: namenode/
> >> 10.0.2.3:8020.
> >>>>> Already
> >>>>>>>>>>> tried 3 time(s).12/05/12 14:42:18 INFO zookeeper.ClientCnxn:
> >>>>> Opening socket
> >>>>>>>>>>> connection to server localhost/0:0:0:0:0:0:0:1:218112/05/12
> >>>>> 14:42:18 WARN
> >>>>>>>>>>> zookeeper.ClientCnxn: Session 0x0 for server null, unexpected
> >>>>> error, closing
> >>>>>>>>>>> socket connection and attempting
> >> reconnectjava.net.ConnectException:
> >>>>>>>>>>> Connection refused  at
> >>>>> sun.nio.ch.SocketChannelImpl.checkConnect(Native
> >>>>>>>>>>> Method)     at sun.nio.ch.SocketChannelImpl.fin
> >>>>>>>>>>> ishConnect(SocketChannelImpl.java:567)     at
> >>>>>>>>>>>
> >>>>>
> >>
> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1143)12/05/12
> >>>>>>>>>>> 14:42:18 INFO zookeeper.ClientCnxn: Opening socket connection
> >> to
> >>>>> server
> >>>>>>>>>>> localhost/127.0.0.1:218112/05/12 14:42:18 WARN
> >>>>> zookeeper.ClientCnxn: Session
> >>>>>>>>>>> 0x0 for server null, unexpected error, closing socket
> >> connection and
> >>>>>>>>>>> attempting reconnectjava.net.ConnectException: Connection
> >> refused
> >>>>> at
> >>>>>>>>>>> sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)    at
> >>>>>>>>>>>
> >>>>>
> >> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
> >>>>> at
> >>>>>>>>>>>
> >>>>>
> >>
> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1143)12/05/12
> >>>>>>>>>>> 14:42:19 INFO ipc.Client: Retrying connect to server:
> >>>>>>>>>>> namenode/10.0.2.3:8020. Already tried 4 time(s).12/05/12
> >> 14:42:19
> >>>>> INFO
> >>>>>>>>>>> zookeeper.ClientCnxn: Opening socket connection to server
> >>>>> localhost/0:0:0:0:
> >>>>>>>>>>> 0:0:0:1:218112/05/12 14:42:19 WARN zookeeper.ClientCnxn:
> >> Session
> >>>>> 0x0 for
> >>>>>>>>>>> server null, unexpected error, closing socket connection and
> >>>>> attempting
> >>>>>>>>>>> reconnectjava.net.ConnectException
> >>>>>>>>>>> : Connection refused       at
> >>>>> sun.nio.ch.SocketChannelImpl.checkConnect(Native
> >>>>>>>>>>> Method)     at
> >>>>>>>>>>>
> >>>>>
> >> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
> >>>>> at
> >>>>>>>>>>>
> >>>>>
> >>
> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1143)12/05/12
> >>>>>>>>>>> 14:42:20 INFO ipc.Client: Retrying connect to server:
> >>>>>>>>>>> namenode/10.0.2.3:8020. Already tried 5 time(s).12/05/12
> >> 14:42:20
> >>>>> INFO
> >>>>>>>>>>> zookeeper.ClientCnxn: Opening socket connection to server
> >> localhost/
> >>>>> 127.0.0.
> >>>>>>>>>>> 1:218112/05/12 14:42:20 WARN zookeeper.ClientCnxn: Session
> >> 0x0 for
> >>>>> server
> >>>>>>>>>>> null, unexpected error, closing socket connection and
> >> attempting
> >>>>>>>>>>> reconnectjava.net.ConnectException: Connection refused      at
> >>>>>>>>>>> sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)    at
> >>>>>>>>>>>
> >>>>>
> >> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
> >>>>> at
> >>>>>>>>>>>
> >>>>>
> >>
> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1143)12/05/12
> >>>>>>>>>>> 14:42:21 INFO ipc.Client: Retrying connect to server:
> >>>>>>>>>>> namenode/10.0.2.3:8020. Already tried 6 time(s).12/05/12
> >> 14:42:22
> >>>>> INFO
> >>>>>>>>>>> zookeeper.ClientCnxn: Openin
> >>>>>>>>>>> g socket connection to server
> >>>>> localhost/0:0:0:0:0:0:0:1:218112/05/12 14:42:
> >>>>>>>>>>> 22 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> >>>>> unexpected error,
> >>>>>>>>>>> closing socket connection and attempting
> >>>>> reconnectjava.net.ConnectException:
> >>>>>>>>>>> Connection refused  at
> >>>>> sun.nio.ch.SocketChannelImpl.checkConnect(Native
> >>>>>>>>>>> Method)     at
> >>>>>>>>>>>
> >>>>>
> >> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
> >>>>> at
> >>>>>>>>>>>
> >>>>>
> >>
> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1143)12/05/12
> >>>>>>>>>>> 14:42:22 INFO ipc.Client: Retrying connect to server:
> >>>>>>>>>>> namenode/10.0.2.3:8020. Already tried 7 time(s).12/05/12
> >> 14:42:22
> >>>>> INFO
> >>>>>>>>>>> zookeeper.ClientCnxn: Opening socket connection to server
> >> localhost/
> >>>>> 127.0.0.
> >>>>>>>>>>> 1:218112/05/12 14:42:22 WARN zookeeper.ClientCnxn: Session
> >> 0x0 for
> >>>>> server
> >>>>>>>>>>> null, unexpected error, closing socket connection and
> >> attempting
> >>>>>>>>>>> reconnectjava.net.ConnectException: Connection refused      at
> >>>>>>>>>>> sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)    at
> >>>>>>>>>>>
> >>>>>
> >> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
> >>>>> at
> >>>>>>>>>>> org
> >>>>>>>>>>>
> >>>>>
> >>
> .apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1143)12/05/12
> >>>>>>>>>>> 14:42:23 INFO ipc.Client: Retrying connect to server:
> >>>>>>>>>>> namenode/10.0.2.3:8020. Already tried 8 time(s).12/05/12
> >> 14:42:24
> >>>>> INFO
> >>>>>>>>>>> ipc.Client: Retrying connect to server: namenode/
> >> 10.0.2.3:8020.
> >>>>> Already
> >>>>>>>>>>> tried 9 time(s).Exception in thread "main"
> >>>>> java.net.ConnectException: Call
> >>>>>>>>>>> to namenode/10.0.2.3:8020 failed on connection exception:
> >>>>>>>>>>> java.net.ConnectException: Connection refused       at
> >>>>>>>>>>> org.apache.hadoop.ipc.Client.wrapException(Client.java:1134)
> >>>>> at
> >>>>>>>>>>> org.apache.hadoop.ipc.Client.call(Client.java:1110) at
> >>>>>>>>>>> org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)      at
> >>>>>>>>>>> $Proxy5.getProtocolVersion(Unknown Source)  at
> >>>>>>>>>>> org.apache.hadoop.ipc.RPC.getProxy(RPC.java:398)    at
> >>>>>>>>>>> org.apache.hadoop.ipc.RPC.getProxy(RPC.java:384)    at
> >>>>>>>>>>>
> >>>>>
> >> org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:129)
> >>>>> at
> >>>>>>>>>>> org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:255) at
> >>>>>>>>>>> org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:217) at
> >>>>>>>>>>> org.apache.hadoop
> >>>>>>>>>>>
> >>>>>
> >> .hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
> >>>>> at
> >>>>>>>>>>>
> >>>>>
> >> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1563)
> >>>>> at
> >>>>>>>>>>> org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:67)
> >>>>> at
> >>>>>>>>>>>
> >>>>>
> >> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:1597)
> >>>>> at
> >>>>>>>>>>>
> >> org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1579)
> >>>>> at
> >>>>>>>>>>> org.apache.hadoop.fs.FileSystem.get(FileSystem.java:228)    at
> >>>>>>>>>>> org.apache.hadoop.fs.FileSystem.get(FileSystem.java:111)    at
> >>>>>>>>>>>
> >>>>>
> >>
> org.apache.hadoop.hbase.regionserver.HRegionServer.startRegionServer(HRegion
> >>>>>>>>>>> Server.java:2785)   at
> >>>>>>>>>>>
> >>>>>
> >>
> org.apache.hadoop.hbase.regionserver.HRegionServer.startRegionServer(HRegion
> >>>>>>>>>>> Server.java:2768)   at
> >>>>>>>>>>>
> >>>>>
> >>
> org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionS
> >>>>>>>>>>> erverCommandLine.java:61)   at
> >>>>>>>>>>>
> >>>>>
> >>
> org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionSer
> >>>>>>>>>>> verCommandLine.java:75)     at
> >>>>> org.apache.hadoop.util.ToolRunner.run(ToolRunner.
> >>>>>>>>>>> java:65)    at
> >>>>>>>>>>> org.apache.hadoop.hbase.util.ServerCommandLine.doMain(Serve
> >>>>>>>>>>> rCommandLine.java:76)      at
> >>>>>>>>>>>
> >>>>>
> >>
> org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2
> >>>>>>>>>>> 829)Caused by: java.net.ConnectException: Connection refused
> >>>>> at
> >>>>>>>>>>> sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)    at
> >>>>>>>>>>>
> >>>>>
> >> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
> >>>>> at
> >>>>>>>>>>>
> >>>>>
> >>
> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:2
> >>>>>>>>>>> 06) at
> >> org.apache.hadoop.net.NetUtils.connect(NetUtils.java:429)
> >>>>> at
> >>>>>>>>>>> org.apache.hadoop.net.NetUtils.connect(NetUtils.java:394)   at
> >>>>>>>>>>>
> >>>>>
> >> org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:425)
> >>>>>>>>>>> at
> >>>>>
> >> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:532)
> >>>>>>>>>>> at
> >>>>> org.apache.hadoop.ipc.Client$Connection.access$2300(Client.java:210)
> >>    at
> >>>>>>>>>>> org.apache.hadoop.ipc.Client.getConnection(Client.java:1247)
> >>>>> at
> >>>>>>>>>>> org.apache.hadoop.ipc.Client.call(Client.java:1078) ... 21
> >>>>> more12/05/12
> >>>>>>>>>>> 14:42:24 INFO zookeeper.ClientCnxn: Opening socket connection
> >> to
> >>>>> server
> >>>>>>>>>>> localhost/0:0:0:0:0:0:0:1:218112/05/12 14:42:24 WARN zookeeper
> >>>>>>>>>>> .ClientCnxn: Session 0x0 for server null, unexpected error,
> >>>>> closing socket
> >>>>>>>>>>> connection and attempting reconnectjava.net.ConnectException:
> >>>>> Connection
> >>>>>>>>>>> refused     at
> >> sun.nio.ch.SocketChannelImpl.checkConnect(Native
> >>>>> Method)     at
> >>>>>>>>>>>
> >>>>>
> >> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
> >>>>> at
> >>>>>>>>>>>
> >>>>>
> >>
> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1143)12/05/12
> >>>>>>>>>>> 14:42:25 INFO zookeeper.ClientCnxn: Opening socket connection
> >> to
> >>>>> server
> >>>>>>>>>>> localhost/127.0.0.1:218112/05/12 14:42:25 INFO
> >>>>> zookeeper.ZooKeeper: Session:
> >>>>>>>>>>> 0x0 closed12/05/12 14:42:25 INFO zookeeper.ClientCnxn:
> >> EventThread
> >>>>> shut
> >>>>>>>>>>> down12/05/12 14:42:25 INFO ipc.HBaseServer: Stopping server on
> >>>>> 6002012/05/12
> >>>>>>>>>>> 14:42:25 FATAL regionserver.HRegionServer: ABORTING region
> >> server
> >>>>>>>>>>> serverName=datanode2,60020,1336826533870, load=(requests=0,
> >>>>> regions=0,
> >>>>>>>>>>> usedHeap=0, maxHeap=0): Initialization of RS failed.  Hence
> >>>>> aborting RS.org.
> >>>>>>>>>>> apache.hadoop.hbase.ZooKeeperConnectionException: HBase is
> >> able to
> >>>>> connect
> >>>>>>>>>>> to ZooKeeper but the connection closes im
> >>>>>>>>>>> mediately. This could be a sign that the server has too many
> >>>>> connections
> >>>>>>>>>>> (30 is the default). Consider inspecting your ZK server logs
> >> for
> >>>>> that error
> >>>>>>>>>>> and then make sure you are reusing HBaseConfiguration as
> >> often as
> >>>>> you can.
> >>>>>>>>>>> See HTable's javadoc for more information.  at
> >>>>>>>>>>>
> >>>>>
> >>
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.j
> >>>>>>>>>>> ava:160)    at
> >>>>>>>>>>>
> >>>>>
> >>
> org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegi
> >>>>>>>>>>> onServer.java:489)  at
> >>>>>>>>>>>
> >>>>>
> >>
> org.apache.hadoop.hbase.regionserver.HRegionServer.initialize(HRegionServer.
> >>>>>>>>>>> java:465)   at
> >>>>>>>>>>>
> >>>>>
> >>
> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:56
> >>>>>>>>>>> 4)  at java.lang.Thread.run(Thread.java:662)Caused by:
> >>>>>>>>>>> org.apache.zookeeper.KeeperException$ConnectionLossException:
> >>>>>>>>>>> KeeperErrorCode = ConnectionLoss for /hbase at
> >>>>>>>>>>>
> >>>>> org.apache.zookeeper.KeeperException.create(KeeperException.java:90)
> >>>>> at
> >>>>>>>>>>>
> >>>>> org.apache.zookeeper.KeeperException.create(KeeperException.java:42)
> >>>>> at
> >>>>>>>>>>> org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:815) at
> >>>>> org.apac
> >>>>>>>>>>> he.zookeeper.ZooKeeper.exists(ZooKeeper.java:843)  at
> >>>>>>>>>>>
> >>>>>
> >>
> org.apache.hadoop.hbase.zookeeper.ZKUtil.createAndFailSilent(ZKUtil.java:930
> >>>>>>>>>>> )   at
> >>>>>>>>>>>
> >>>>>
> >>
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.j
> >>>>>>>>>>> ava:138)    ... 4 more12/05/12 14:42:25 INFO
> >>>>> regionserver.HRegionServer:
> >>>>>>>>>>> STOPPED: Initialization of RS failed.  Hence aborting
> >> RS.Exception
> >>>>> in thread
> >>>>>>>>>>> "regionserver60020" java.lang.NullPointerException  at
> >>>>>>>>>>>
> >>>>>
> >>
> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:63
> >>>>>>>>>>> 3)  at java.lang.Thread.run(Thread.java:662)
> >>>>>>>>>>> So any help???
> >>>>>>>>>>>> Date: Sat, 12 May 2012 20:22:03 +0530
> >>>>>>>>>>>> Subject: Re: Important "Undefined Error"
> >>>>>>>>>>>> From: dwivedishashwat@gmail.com
> >>>>>>>>>>>> To: user@hbase.apache.org
> >>>>>>>>>>>>
> >>>>>>>>>>>> you can turn off hadoop safe mode using: hadoop dfsadmin -safemode leave
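> >>>>>>>>>>>> (you can also check the current state first with: hadoop dfsadmin -safemode get)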
> >>>>>>>>>>>>
> >>>>>>>>>>>> On Sat, May 12, 2012 at 8:15 PM, shashwat shriparv <
> >>>>>>>>>>>> dwivedishashwat@gmail.com> wrote:
> >>>>>>>>>>>>
> >>>>>>>>>>>>> First, copy core-site.xml and hdfs-site.xml from the hadoop conf
> >>>>>>>>>>>>> directory to the hbase conf directory, turn off hadoop safe mode, and
> >>>>>>>>>>>>> then try...
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> On Sat, May 12, 2012 at 6:27 PM, Harsh J <
> >> harsh@cloudera.com>
> >>>>> wrote:
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>> Dalia,
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> Is your NameNode running fine? The issue is that HBase
> >> Master has
> >>>>>>>>>>>>>> been asked to talk to HDFS, but it can't connect to the
> >> HDFS
> >>>>>>>>>>>>>> NameNode. Does "hadoop dfs -touchz foobar" pass or fail
> >> with
> >>>>> similar
> >>>>>>>>>>> retry issues?
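> >>>>>>>>>>>>>> (i.e. roughly:
> >>>>>>>>>>>>>>   hadoop dfs -touchz foobar
> >>>>>>>>>>>>>>   hadoop dfs -ls foobar
> >>>>>>>>>>>>>> if HDFS is reachable this creates and lists an empty file; if not, you
> >>>>>>>>>>>>>> should see the same "Retrying connect to server" messages.)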
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> What's your fs.default.name's value in Hadoop's core-site.xml? And
> >>>>>>>>>>>>>> what's the output of that fixed host command I'd posted before?
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> On Sat, May 12, 2012 at 6:06 PM, Dalia Sobhy
> >>>>>>>>>>>>>> <dalia.mohsobhy@hotmail.com>
> >>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> Dear Harsh
> >>>>>>>>>>>>>>> When I run $hbase master start
> >>>>>>>>>>>>>>> I found the following errors:12/05/12 08:32:42 INFO
> >>>>>>>>>>>>>> ipc.HBaseRpcMetrics: Initializing RPC Metrics with
> >>>>>>>>>>>>>> hostName=HMaster,
> >>>>>>>>>>>>>> port=6000012/05/12 08:32:42 INFO
> >> security.UserGroupInformation:
> >>>>>>>>>>>>>> JAAS Configuration already set up for Hadoop, not
> >>>>>>>>>>>>>> re-installing.12/05/12
> >>>>>>>>>>>>>> 08:32:42 INFO ipc.HBaseServer: IPC Server Responder:
> >>>>>>>>>>>>>> starting12/05/12
> >>>>>>>>>>>>>> 08:32:42 INFO ipc.HBaseServer: IPC Server listener on
> >> 60000:
> >>>>>>>>>>>>>> starting12/05/12 08:32:42 INFO ipc.HBaseServer: IPC Server
> >>>>> handler
> >>>>>>>>>>>>>> 0 on
> >>>>>>>>>>>>>> 60000: starting12/05/12 08:32:42 INFO ipc.HBaseServer: IPC
> >> Server
> >>>>>>>>>>>>>> handler 1 on 60000: starting12/05/12 08:32:42 INFO
> >>>>> ipc.HBaseServer:
> >>>>>>>>>>>>>> IPC Server handler 2 on 60000: starting12/05/12 08:32:42
> >> INFO
> >>>>>>>>>>>>>> ipc.HBaseServer: IPC Server handler 3 on 60000:
> >> starting12/05/12
> >>>>> 08:32:
> >>>>>>>>>>> 42 INFO ipc.HBaseServer:
> >>>>>>>>>>>>>> IPC Server handler 5 on 60000: starting12/05/12 08:32:42
> >> INFO
> >>>>>>>>>>>>>> ipc.HBaseServer: IPC Server handler 4 on 60000:
> >> starting12/05/12
> >>>>>>>>>>>>>> 08:32:42 INFO ipc.HBaseServer: IPC Server handler 7 on
> >> 60000:
> >>>>>>>>>>>>>> starting12/05/12
> >>>>>>>>>>>>>> 08:32:42 INFO ipc.HBaseServer: IPC Serv
> >>>>>>>>>>>>>>> er handler 6 on 60000: starting12/05/12 08:32:42 INFO
> >>>>>>>>>>> ipc.HBaseServer:
> >>>>>>>>>>>>>> IPC Server handler 8 on 60000: starting12/05/12 08:32:42
> >> INFO
> >>>>>>>>>>>>>> ipc.HBaseServer: IPC Server handler 9 on 60000:
> >> starting12/05/12
> >>>>>>>>>>>>>> 08:32:42 INFO zookeeper.ZooKeeper: Client
> >>>>>>>>>>>>>> environment:zookeeper.version=3.3.5-cdh3u4--1, built on
> >>>>> 05/07/2012
> >>>>>>>>>>>>>> 21:12
> >>>>>>>>>>>>>> GMT12/05/12 08:32:42 INFO zookeeper.ZooKeeper: Client
> >>>>> environment:
> >>>>>>>>>>>>>> host.name=namenode12/05/12 08:32:42 INFO
> >> zookeeper.ZooKeeper:
> >>>>>>>>>>>>>> Client
> >>>>>>>>>>>>>> environment:java.version=1.6.0_3012/05/12 08:32:42 INFO
> >>>>>>>>>>>>>> zookeeper.ZooKeeper: Client environment:java.vendor=Sun
> >>>>>>>>>>>>>> Microsystems
> >>>>>>>>>>>>>> Inc.12/05/12 08:32:42 INFO zookeeper.ZooKeeper: Client
> >>>>>>>>>>>>>>
> >>>>> environment:java.home=/usr/lib/jvm/java-6-sun-1.6.0.30/jre12/05/12
> >>>>>>>>>>>>>> 08:32:42 INFO zookeeper.ZooKeeper: Client
> >>>>>>>>>>>>>>
> >>>>> environment:java.class.path=/usr/lib/hbase/bin/../conf:/usr/lib/jvm
> >>>>>>>>>>>>>>
> >>>>> /java-6-sun/lib/tools.jar:/usr/lib/hbase/bin/..:/usr/lib/hbase/bin/
> >>>>>>>>>>>>>>
> >>>>> ../hbase-0.90.4-cdh3u3.jar:/usr/lib/hbase/bin/../hbase-0.90.4-cdh3u
> >>>>>>>>>>>>>> 3-tests.jar:/usr/lib/hbase/bin/../lib/activation-1.1.jar:/u
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>>
> >>>>> sr/lib/hbase/bin/../lib/asm-3.1.jar:/usr/lib/hbase/bin/../lib/avro-
> >>>>>>>>>>>>>>
> >>>>> 1.5.4.jar:/usr/lib/hbase/bin/../lib/avro-ipc-1.5.4.jar:/usr/lib/hba
> >>>>>>>>>>>>>>
> >>>>> se/bin/../lib/commons-cli-1.2.jar:/usr/lib/hbase/bin/../lib/commons
> >>>>>>>>>>>>>>
> >>>>> -codec-1.4.jar:/usr/lib/hbase/bin/../lib/commons-el-1.0.jar:/usr/li
> >>>>>>>>>>>>>>
> >>>>> b/hbase/bin/../lib/commons-httpclient-3.1.jar:/usr/lib/hbase/bin/..
> >>>>>>>>>>>>>>
> >>>>> /lib/commons-lang-2.5.jar:/usr/lib/hbase/bin/../lib/commons-logging
> >>>>>>>>>>>>>>
> >>>>> -1.1.1.jar:/usr/lib/hbase/bin/../lib/commons-net-1.4.1.jar:/usr/lib
> >>>>>>>>>>>>>>
> >>>>> /hbase/bin/../lib/core-3.1.1.jar:/usr/lib/hbase/bin/../lib/guava-r0
> >>>>>>>>>>>>>>
> >>>>> 6.jar:/usr/lib/hbase/bin/../lib/guava-r09-jarjar.jar:/usr/lib/hbase
> >>>>>>>>>>>>>>
> >>>>> /bin/../lib/hadoop-core.jar:/usr/lib/hbase/bin/../lib/jackson-core-
> >>>>>>>>>>>>>>
> >>>>> asl-1.5.2.jar:/usr/lib/hbase/bin/../lib/jackson-jaxrs-1.5.5.jar:/us
> >>>>>>>>>>>>>>
> >>>>> r/lib/hbase/bin/../lib/jackson-mapper-asl-1.5.2.jar:/usr/lib/hbase/
> >>>>>>>>>>>>>>
> >>>>> bin/../lib/jackson-xc-1.5.5.jar:/usr/lib/hbase/bin/../lib/jamon-run
> >>>>>>>>>>>>>>
> >>>>> time-2.3.1.jar:/usr/lib/hbase/bin/../lib/jasper-compiler-5.5.23.jar
> >>>>>>>>>>>>>> :/usr/lib/hbase/bin/../lib/jasper-runtime-5.5.23.jar:/
> >>>>>>>>>>> usr/l
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>>
> >>>>> ib/hbase/bin/../lib/jaxb-api-2.1.jar:/usr/lib/hbase/bin/../lib/jaxb
> >>>>>>>>>>>>>>
> >>>>> -impl-2.1.12.jar:/usr/lib/hbase/bin/../lib/jersey-core-1.4.jar:/usr
> >>>>>>>>>>>>>>
> >>>>> /lib/hbase/bin/../lib/jersey-json-1.4.jar:/usr/lib/hbase/bin/../lib
> >>>>>>>>>>>>>>
> >>>>> /jersey-server-1.4.jar:/usr/lib/hbase/bin/../lib/jettison-1.1.jar:/
> >>>>>>>>>>>>>>
> >>>>> usr/lib/hbase/bin/../lib/jetty-6.1.26.jar:/usr/lib/hbase/bin/../lib
> >>>>>>>>>>>>>>
> >>>>> /jetty-util-6.1.26.jar:/usr/lib/hbase/bin/../lib/jruby-complete-1.6
> >>>>>>>>>>>>>>
> >>>>> .0.jar:/usr/lib/hbase/bin/../lib/jsp-2.1-6.1.14.jar:/usr/lib/hbase/
> >>>>>>>>>>>>>>
> >>>>> bin/../lib/jsp-api-2.1-6.1.14.jar:/usr/lib/hbase/bin/../lib/jsp-api
> >>>>>>>>>>>>>>
> >>>>> -2.1.jar:/usr/lib/hbase/bin/../lib/jsr311-api-1.1.1.jar:/usr/lib/hb
> >>>>>>>>>>>>>>
> >>>>> ase/bin/../lib/log4j-1.2.16.jar:/usr/lib/hbase/bin/../lib/netty-3.2
> >>>>>>>>>>>>>>
> >>>>> .4.Final.jar:/usr/lib/hbase/bin/../lib/protobuf-java-2.3.0.jar:/usr
> >>>>>>>>>>>>>>
> >>>>> /lib/hbase/bin/../lib/servlet-api-2.5-6.1.14.jar:/usr/lib/hbase/bin
> >>>>>>>>>>>>>>
> >>>>> /../lib/servlet-api-2.5.jar:/usr/lib/hbase/bin/../lib/slf4j-api-1.5
> >>>>>>>>>>>>>>
> >>>>> .8.jar:/usr/lib/hbase/bin/../lib/slf4j-log4j12-1.5.8.jar:/usr/lib/h
> >>>>>>>>>>>>>> base/bin/../lib/snappy-java-1.0.3.2.jar:/usr/lib/hbase
> >>>>>>>>>>> /bin/
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>>
> >>>>> ../lib/stax-api-1.0.1.jar:/usr/lib/hbase/bin/../lib/thrift-0.2.0.ja
> >>>>>>>>>>>>>>
> >>>>> r:/usr/lib/hbase/bin/../lib/velocity-1.5.jar:/usr/lib/hbase/bin/../
> >>>>>>>>>>>>>>
> >>>>> lib/xmlenc-0.52.jar:/usr/lib/hbase/bin/../lib/zookeeper.jar:/etc/zo
> >>>>>>>>>>>>>>
> >>>>> okeeper:/etc/hadoop-0.20/conf:/usr/lib/hadoop-0.20/hadoop-examples.
> >>>>>>>>>>>>>>
> >>>>> jar:/usr/lib/hadoop-0.20/hadoop-0.20.2-cdh3u3-core.jar:/usr/lib/had
> >>>>>>>>>>>>>>
> >>>>> oop-0.20/hadoop-0.20.2-cdh3u3-ant.jar:/usr/lib/hadoop-0.20/hadoop-c
> >>>>>>>>>>>>>>
> >>>>> ore-0.20.2-cdh3u3.jar:/usr/lib/hadoop-0.20/hadoop-test.jar:/usr/lib
> >>>>>>>>>>>>>>
> >>>>> /hadoop-0.20/hadoop-ant-0.20.2-cdh3u3.jar:/usr/lib/hadoop-0.20/hado
> >>>>>>>>>>>>>>
> >>>>> op-tools.jar:/usr/lib/hadoop-0.20/hadoop-tools-0.20.2-cdh3u3.jar:/u
> >>>>>>>>>>>>>>
> >>>>> sr/lib/hadoop-0.20/hadoop-test-0.20.2-cdh3u3.jar:/usr/lib/hadoop-0.
> >>>>>>>>>>>>>>
> >>>>> 20/hadoop-core.jar:/usr/lib/hadoop-0.20/hadoop-0.20.2-cdh3u3-exampl
> >>>>>>>>>>>>>>
> >>>>> es.jar:/usr/lib/hadoop-0.20/hadoop-0.20.2-cdh3u3-test.jar:/usr/lib/
> >>>>>>>>>>>>>>
> >>>>> hadoop-0.20/hadoop-ant.jar:/usr/lib/hadoop-0.20/hadoop-examples-0.2
> >>>>>>>>>>>>>>
> >>>>> 0.2-cdh3u3.jar:/usr/lib/hadoop-0.20/hadoop-0.20.2-cdh3u3-tools.jar:
> >>>>>>>>>>>>>> /usr/lib/hadoop-0.20/lib/jasper-runtime-5.5.12.jar:/us
> >>>>>>>>>>> r/lib
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>>
> >>>>> /hadoop-0.20/lib/hsqldb-1.8.0.10.jar:/usr/lib/hadoop-0.20/lib/jacks
> >>>>>>>>>>>>>>
> >>>>> on-mapper-asl-1.5.2.jar:/usr/lib/hadoop-0.20/lib/jets3t-0.6.1.jar:/
> >>>>>>>>>>>>>>
> >>>>> usr/lib/hadoop-0.20/lib/jetty-servlet-tester-6.1.26.cloudera.1.jar:
> >>>>>>>>>>>>>>
> >>>>> /usr/lib/hadoop-0.20/lib/jackson-core-asl-1.5.2.jar:/usr/lib/hadoop
> >>>>>>>>>>>>>>
> >>>>> -0.20/lib/oro-2.0.8.jar:/usr/lib/hadoop-0.20/lib/ant-contrib-1.0b3.
> >>>>>>>>>>>>>>
> >>>>> jar:/usr/lib/hadoop-0.20/lib/commons-daemon-1.0.1.jar:/usr/lib/hado
> >>>>>>>>>>>>>>
> >>>>> op-0.20/lib/mockito-all-1.8.2.jar:/usr/lib/hadoop-0.20/lib/aspectjr
> >>>>>>>>>>>>>>
> >>>>> t-1.6.5.jar:/usr/lib/hadoop-0.20/lib/commons-lang-2.4.jar:/usr/lib/
> >>>>>>>>>>>>>>
> >>>>> hadoop-0.20/lib/junit-4.5.jar:/usr/lib/hadoop-0.20/lib/commons-code
> >>>>>>>>>>>>>>
> >>>>> c-1.4.jar:/usr/lib/hadoop-0.20/lib/servlet-api-2.5-6.1.14.jar:/usr/
> >>>>>>>>>>>>>>
> >>>>> lib/hadoop-0.20/lib/log4j-1.2.15.jar:/usr/lib/hadoop-0.20/lib/jsch-
> >>>>>>>>>>>>>>
> >>>>> 0.1.42.jar:/usr/lib/hadoop-0.20/lib/core-3.1.1.jar:/usr/lib/hadoop-
> >>>>>>>>>>>>>>
> >>>>> 0.20/lib/jetty-6.1.26.cloudera.1.jar:/usr/lib/hadoop-0.20/lib/commo
> >>>>>>>>>>>>>>
> >>>>> ns-logging-1.0.4.jar:/usr/lib/hadoop-0.20/lib/jetty-util-6.1.26.clo
> >>>>>>>>>>>>>> udera.1.jar:/usr/lib/hadoop-0.20/lib/servlet-api-2.5-2
> >>>>>>>>>>>>>> 0081211.jar:/usr/lib/hadoop-0.20/lib/jasper-compiler-5.5.12.jar:
> >>>>>>>>>>>>>> /usr/lib/hadoop-0.20/lib/kfs-0.2.2.jar:/usr/lib/hadoop-0.20/lib/commons-cli-1.2.jar:
> >>>>>>>>>>>>>> /usr/lib/hadoop-0.20/lib/commons-net-1.4.1.jar:/usr/lib/hadoop-0.20/lib/commons-httpclient-3.1.jar:
> >>>>>>>>>>>>>> /usr/lib/hadoop-0.20/lib/slf4j-api-1.4.3.jar:/usr/lib/hadoop-0.20/lib/xmlenc-0.52.jar:
> >>>>>>>>>>>>>> /usr/lib/hadoop-0.20/lib/commons-logging-api-1.0.4.jar:/usr/lib/hadoop-0.20/lib/commons-el-1.0.jar:
> >>>>>>>>>>>>>> /usr/lib/hadoop-0.20/lib/slf4j-log4j12-1.4.3.jar:/usr/lib/hadoop-0.20/lib/aspectjtools-1.6.5.jar:
> >>>>>>>>>>>>>> /usr/lib/hadoop-0.20/lib/guava-r09-jarjar.jar:/usr/lib/hadoop-0.20/lib/hadoop-fairscheduler-0.20.2-cdh3u3.jar:
> >>>>>>>>>>>>>> /usr/lib/zookeeper/zookeeper.jar:/usr/lib/zookeeper/zookeeper-3.3.5-cdh3u4.jar:
> >>>>>>>>>>>>>> /usr/lib/zookeeper/lib/log4j-1.2.15.jar:/usr/lib/zookeeper/lib/jline-0.9.94.jar::
> >>>>>>>>>>>>>> /usr/lib/hadoop-0.20/conf:/usr/lib/hadoop-0.20/hadoop-core-0.20.2-cdh3u3.jar:
> >>>>>>>>>>>>>> /usr/lib/hadoop-0.20/lib/ant-contrib-1.0b3.jar:/usr/lib/hadoop-0.20/lib/aspectjrt-1.6.5.jar:
> >>>>>>>>>>>>>> /usr/lib/hadoop-0.20/lib/aspectjtools-1.6.5.jar:/usr/lib/hadoop-0.20/lib/commons-cli-1.2.jar:
> >>>>>>>>>>>>>> /usr/lib/hadoop-0.20/lib/commons-codec-1.4.jar:/usr/lib/hadoop-0.20/lib/commons-daemon-1.0.1.jar:
> >>>>>>>>>>>>>> /usr/lib/hadoop-0.20/lib/commons-el-1.0.jar:/usr/lib/hadoop-0.20/lib/commons-httpclient-3.1.jar:
> >>>>>>>>>>>>>> /usr/lib/hadoop-0.20/lib/commons-lang-2.4.jar:/usr/lib/hadoop-0.20/lib/commons-logging-1.0.4.jar:
> >>>>>>>>>>>>>> /usr/lib/hadoop-0.20/lib/commons-logging-api-1.0.4.jar:/usr/lib/hadoop-0.20/lib/commons-net-1.4.1.jar:
> >>>>>>>>>>>>>> /usr/lib/hadoop-0.20/lib/core-3.1.1.jar:/usr/lib/hadoop-0.20/lib/guava-r09-jarjar.jar:
> >>>>>>>>>>>>>> /usr/lib/hadoop-0.20/lib/hadoop-fairscheduler-0.20.2-cdh3u3.jar:/usr/lib/hadoop-0.20/lib/hsqldb-1.8.0.10.jar:
> >>>>>>>>>>>>>> /usr/lib/hadoop-0.20/lib/jackson-core-asl-1.5.2.jar:/usr/lib/hadoop-0.20/lib/jackson-mapper-asl-1.5.2.jar:
> >>>>>>>>>>>>>> /usr/lib/hadoop-0.20/lib/jasper-compiler-5.5.12.jar:/usr/lib/hadoop-0.20/lib/jasper-runtime-5.5.12.jar:
> >>>>>>>>>>>>>> /usr/lib/hadoop-0.20/lib/jets3t-0.6.1.jar:/usr/lib/hadoop-0.20/lib/jetty-6.1.26.cloudera.1.jar:
> >>>>>>>>>>>>>> /usr/lib/hadoop-0.20/lib/jetty-servlet-tester-6.1.26.cloudera.1.jar:/usr/lib/hadoop-0.20/lib/jetty-util-6.1.26.cloudera.1.jar:
> >>>>>>>>>>>>>> /usr/lib/hadoop-0.20/lib/jsch-0.1.42.jar:/usr/lib/hadoop-0.20/lib/junit-4.5.jar:
> >>>>>>>>>>>>>> /usr/lib/hadoop-0.20/lib/kfs-0.2.2.jar:/usr/lib/hadoop-0.20/lib/log4j-1.2.15.jar:
> >>>>>>>>>>>>>> /usr/lib/hadoop-0.20/lib/mockito-all-1.8.2.jar:/usr/lib/hadoop-0.20/lib/oro-2.0.8.jar:
> >>>>>>>>>>>>>> /usr/lib/hadoop-0.20/lib/servlet-api-2.5-20081211.jar:/usr/lib/hadoop-0.20/lib/servlet-api-2.5-6.1.14.jar:
> >>>>>>>>>>>>>> /usr/lib/hadoop-0.20/lib/slf4j-api-1.4.3.jar:/usr/lib/hadoop-0.20/lib/slf4j-log4j12-1.4.3.jar:
> >>>>>>>>>>>>>> /usr/lib/hadoop-0.20/lib/xmlenc-0.52.jar
> >>>>>>>>>>>>>> 12/05/12 08:32:42 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/usr/lib/hadoop-0.20/lib/native/Linux-amd64-64:/usr/lib/hbase/bin/../lib/native/Linux-amd64-64
> >>>>>>>>>>>>>> 12/05/12 08:32:42 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
> >>>>>>>>>>>>>> 12/05/12 08:32:42 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
> >>>>>>>>>>>>>> 12/05/12 08:32:42 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
> >>>>>>>>>>>>>> 12/05/12 08:32:42 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
> >>>>>>>>>>>>>> 12/05/12 08:32:42 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.35-22-server
> >>>>>>>>>>>>>> 12/05/12 08:32:42 INFO zookeeper.ZooKeeper: Client environment:user.name=dalia
> >>>>>>>>>>>>>> 12/05/12 08:32:42 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/dalia
> >>>>>>>>>>>>>> 12/05/12 08:32:42 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/dalia
> >>>>>>>>>>>>>> 12/05/12 08:32:42 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=namenode:2181 sessionTimeout=180000 watcher=master:60000
> >>>>>>>>>>>>>> 12/05/12 08:32:42 INFO zookeeper.ClientCnxn: Opening socket connection to server namenode/10.0.2.3:2181
> >>>>>>>>>>>>>> 12/05/12 08:32:42 INFO zookeeper.ClientCnxn: Socket connection established to namenode/10.0.2.3:2181, initiating session
> >>>>>>>>>>>>>> 12/05/12 08:32:42 INFO zookeeper.ClientCnxn: Session establishment complete on server namenode/10.0.2.3:2181, sessionid = 0x13740bc4f70000c, negotiated timeout = 40000
> >>>>>>>>>>>>>> 12/05/12 08:32:42 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=Master, sessionId=namenode:60000
> >>>>>>>>>>>>>> 12/05/12 08:32:42 INFO hbase.metrics: MetricsString added: revision
> >>>>>>>>>>>>>> 12/05/12 08:32:42 INFO hbase.metrics: MetricsString added: hdfsUser
> >>>>>>>>>>>>>> 12/05/12 08:32:42 INFO hbase.metrics: MetricsString added: hdfsDate
> >>>>>>>>>>>>>> 12/05/12 08:32:42 INFO hbase.metrics: MetricsString added: hdfsUrl
> >>>>>>>>>>>>>> 12/05/12 08:32:42 INFO hbase.metrics: MetricsString added: date
> >>>>>>>>>>>>>> 12/05/12 08:32:42 INFO hbase.metrics: MetricsString added: hdfsRevision
> >>>>>>>>>>>>>> 12/05/12 08:32:42 INFO hbase.metrics: MetricsString added: user
> >>>>>>>>>>>>>> 12/05/12 08:32:42 INFO hbase.metrics: MetricsString added: hdfsVersion
> >>>>>>>>>>>>>> 12/05/12 08:32:42 INFO hbase.metrics: MetricsString added: url
> >>>>>>>>>>>>>> 12/05/12 08:32:42 INFO hbase.metrics: MetricsString added: version
> >>>>>>>>>>>>>> 12/05/12 08:32:42 INFO hbase.metrics: new MBeanInfo
> >>>>>>>>>>>>>> 12/05/12 08:32:42 INFO hbase.metrics: new MBeanInfo
> >>>>>>>>>>>>>> 12/05/12 08:32:42 INFO metrics.MasterMetrics: Initialized
> >>>>>>>>>>>>>> 12/05/12 08:32:42 INFO master.ActiveMasterManager: Master=namenode:60000
> >>>>>>>>>>>>>> 12/05/12 08:32:44 INFO ipc.Client: Retrying connect to server: namenode/10.0.2.3:8020. Already tried 0 time(s).
> >>>>>>>>>>>>>> 12/05/12 08:32:45 INFO ipc.Client: Retrying connect to server: namenode/10.0.2.3:8020. Already tried 1 time(s).
> >>>>>>>>>>>>>> 12/05/12 08:32:46 INFO ipc.Client: Retrying connect to server: namenode/10.0.2.3:8020. Already tried 2 time(s).
> >>>>>>>>>>>>>> 12/05/12 08:32:47 INFO ipc.Client: Retrying connect to server: namenode/10.0.2.3:8020. Already tried 3 time(s).
> >>>>>>>>>>>>>> 12/05/12 08:32:48 INFO ipc.Client: Retrying connect to server: namenode/10.0.2.3:8020. Already tried 4 time(s).
> >>>>>>>>>>>>>> 12/05/12 08:32:49 INFO ipc.Client: Retrying connect to server: namenode/10.0.2.3:8020. Already tried 5 time(s).
> >>>>>>>>>>>>>> 12/05/12 08:32:50 INFO ipc.Client: Retrying connect to server: namenode/10.0.2.3:8020. Already tried 6 time(s).
> >>>>>>>>>>>>>> 12/05/12 08:32:51 INFO ipc.Client: Retrying connect to server: namenode/10.0.2.3:8020. Already tried 7 time(s).
> >>>>>>>>>>>>>> 12/05/12 08:32:52 INFO ipc.Client: Retrying connect to server: namenode/10.0.2.3:8020. Already tried 8 time(s).
> >>>>>>>>>>>>>> 12/05/12 08:32:53 INFO ipc.Client: Retrying connect to server: namenode/10.0.2.3:8020. Already tried 9 time(s).
> >>>>>>>>>>>>>> 12/05/12 08:32:53 FATAL master.HMaster: Unhandled exception. Starting shutdown.
> >>>>>>>>>>>>>> java.net.ConnectException: Call to namenode/10.0.2.3:8020 failed on connection exception: java.net.ConnectException: Connection refused
> >>>>>>>>>>>>>>     at org.apache.hadoop.ipc.Client.wrapException(Client.java:1134)
> >>>>>>>>>>>>>>     at org.apache.hadoop.ipc.Client.call(Client.java:1110)
> >>>>>>>>>>>>>>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
> >>>>>>>>>>>>>>     at $Proxy6.getProtocolVersion(Unknown Source)
> >>>>>>>>>>>>>>     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:398)
> >>>>>>>>>>>>>>     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:384)
> >>>>>>>>>>>>>>     at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:123)
> >>>>>>>>>>>>>>     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:246)
> >>>>>>>>>>>>>>     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:208)
> >>>>>>>>>>>>>>     at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
> >>>>>>>>>>>>>>     at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1563)
> >>>>>>>>>>>>>>     at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:67)
> >>>>>>>>>>>>>>     at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:1597)
> >>>>>>>>>>>>>>     at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1579)
> >>>>>>>>>>>>>>     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:228)
> >>>>>>>>>>>>>>     at org.apache.hadoop.fs.Path.getFileSystem(Path.java:183)
> >>>>>>>>>>>>>>     at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:364)
> >>>>>>>>>>>>>>     at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:86)
> >>>>>>>>>>>>>>     at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:360)
> >>>>>>>>>>>>>>     at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:293)
> >>>>>>>>>>>>>> Caused by: java.net.ConnectException: Connection refused
> >>>>>>>>>>>>>>     at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> >>>>>>>>>>>>>>     at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
> >>>>>>>>>>>>>>     at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
> >>>>>>>>>>>>>>     at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
> >>>>>>>>>>>>>>     at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:425)
> >>>>>>>>>>>>>>     at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:532)
> >>>>>>>>>>>>>>     at org.apache.hadoop.ipc.Client$Connection.access$2300(Client.java:210)
> >>>>>>>>>>>>>>     at org.apache.hadoop.ipc.Client.getConnection(Client.java:1247)
> >>>>>>>>>>>>>>     at org.apache.hadoop.ipc.Client.call(Client.java:1078)
> >>>>>>>>>>>>>>     ... 18 more
> >>>>>>>>>>>>>> 12/05/12 08:32:53 INFO master.HMaster: Aborting
> >>>>>>>>>>>>>> 12/05/12 08:32:53 DEBUG master.HMaster: Stopping service threads
> >>>>>>>>>>>>>> 12/05/12 08:32:53 INFO ipc.HBaseServer: Stopping server on 60000
> >>>>>>>>>>>>>> 12/05/12 08:32:53 INFO ipc.HBaseServer: IPC Server handler 5 on 60000: exiting
> >>>>>>>>>>>>>> 12/05/12 08:32:53 INFO ipc.HBaseServer: Stopping IPC Server listener on 60000
> >>>>>>>>>>>>>> 12/05/12 08:32:53 INFO ipc.HBaseServer: IPC Server handler 1 on 60000: exiting
> >>>>>>>>>>>>>> 12/05/12 08:32:53 INFO ipc.HBaseServer: IPC Server handler 0 on 60000: exiting
> >>>>>>>>>>>>>> 12/05/12 08:32:53 INFO ipc.HBaseServer: IPC Server handler 3 on 60000: exiting
> >>>>>>>>>>>>>> 12/05/12 08:32:53 INFO ipc.HBaseServer: IPC Server handler 7 on 60000: exiting
> >>>>>>>>>>>>>> 12/05/12 08:32:53 INFO ipc.HBaseServer: IPC Server handler 9 on 60000: exiting
> >>>>>>>>>>>>>> 12/05/12 08:32:53 INFO ipc.HBaseServer: IPC Server handler 6 on 60000: exiting
> >>>>>>>>>>>>>> 12/05/12 08:32:53 INFO ipc.HBaseServer: IPC Server handler 4 on 60000: exiting
> >>>>>>>>>>>>>> 12/05/12 08:32:53 INFO ipc.HBaseServer: IPC Server handler 2 on 60000: exiting
> >>>>>>>>>>>>>> 12/05/12 08:32:53 INFO ipc.HBaseServer: IPC Server handler 8 on 60000: exiting
> >>>>>>>>>>>>>> 12/05/12 08:32:53 INFO ipc.HBaseServer: Stopping IPC Server Responder
> >>>>>>>>>>>>>> 12/05/12 08:32:53 INFO zookeeper.ZooKeeper: Session: 0x13740bc4f70000c closed
> >>>>>>>>>>>>>> 12/05/12 08:32:53 INFO zookeeper.ClientCnxn: EventThread shut down
> >>>>>>>>>>>>>> 12/05/12 08:32:53 INFO master.HMaster: HMaster main thread exiting
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> From: harsh@cloudera.com
> >>>>>>>>>>>>>>>> Date: Sat, 12 May 2012 17:28:29 +0530
> >>>>>>>>>>>>>>>> Subject: Re: Important "Undefined Error"
> >>>>>>>>>>>>>>>> To: user@hbase.apache.org
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> Hi Dalia,
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> On Sat, May 12, 2012 at 5:14 PM, Dalia Sobhy <dalia.mohsobhy@hotmail.com> wrote:
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> Dear all,
> >>>>>>>>>>>>>>>>> My first problem is with HBase: I am trying to install it on a
> >>>>>>>>>>>>>>>>> distributed/multi-node cluster.
> >>>>>>>>>>>>>>>>> I am using the Cloudera guide:
> >>>>>>>>>>>>>>>>> https://ccp.cloudera.com/display/CDH4B2/HBase+Installation#HBaseInstallation-StartingtheHBaseMaster
> >>>>>>>>>>>>>>>>> But when I run the command for creating the /hbase directory in HDFS,
> >>>>>>>>>>>>>>>>> $sudo -u hdfs hadoop fs -mkdir /hbase
> >>>>>>>>>>>>>>>>> I get the following error:
> >>>>>>>>>>>>>>>>> 12/05/12 07:20:42 INFO security.UserGroupInformation: JAAS Configuration
> >>>>>>>>>>>>>>>>> already set up for Hadoop, not re-installing.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> This is not an error and you shouldn't be worried. It is rather
> >>>>>>>>>>>>>>>> a noisy INFO log that should be fixed (as a DEBUG level instead)
> >>>>>>>>>>>>>>>> in subsequent releases. (Are you using CDH3 or CDH4? IIRC only
> >>>>>>>>>>>>>>>> CDH3u3 printed these, not in anything above that.)
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> 2. Another aspect is that when I start the HBase master, it closes
> >>>>>>>>>>>>>>>>> automatically after a while.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> Could you post us your HMaster start->crash log? You can use a
> >>>>>>>>>>>>>>>> service like pastebin.com to send us the output.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> 3. Also this command is not working:
> >>>>>>>>>>>>>>>>> $host -v -t A `namenode`
> >>>>>>>>>>>>>>>>> namenode: command not found
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> The right command is perhaps just:
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> $host -v -t A `hostname`
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> --
> >>>>>>>>>>>>>>>> Harsh J
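For what it's worth, the "namenode: command not found" output above is just the backticks at work: the shell runs `namenode` as a command and substitutes its (empty) output, so host never gets a name to look up. The corrected form asks DNS for an A record for the machine's own hostname; on a cluster that resolves names only through /etc/hosts, querying the system resolver is often the more telling check. A minimal sketch (nothing here is specific to this cluster beyond whatever `hostname` prints on the box):

    # What the corrected command does: DNS A-record lookup of this host's name
    host -v -t A `hostname`

    # For /etc/hosts-based setups: ask the system resolver (NSS) rather than DNS directly
    getent hosts `hostname`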
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> --
> >>>>>>>>>>>>>> Harsh J
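Looking at the FATAL above, the master gives up on namenode/10.0.2.3:8020 after ten retries with "Connection refused", which almost always means nothing is listening on that port on that interface. A rough checklist, assuming a CDH3-style package layout (the hdfs service user and the /etc/hadoop/conf path are that packaging's defaults and may differ on other installs):

    # On the namenode host: is a NameNode JVM running at all?
    # (HDFS daemons typically run as the hdfs user in this packaging)
    sudo -u hdfs jps

    # Is anything listening on 8020, and on which address?
    sudo netstat -tlnp | grep 8020

    # Does the URI in core-site.xml use the same host and port the master is dialing?
    grep -A1 fs.default.name /etc/hadoop/conf/core-site.xml

If the listener turns out to be bound to 127.0.0.1 only, remote processes such as the HMaster will see exactly this Connection refused even though the NameNode looks healthy locally.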
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> --
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> ∞
> >>>>>>>>>>>>> Shashwat Shriparv
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>
> >>>>>>>>>>>>
> >>>>>>>>>>>>
> >>>>>>>>>>>> --
> >>>>>>>>>>>>
> >>>>>>>>>>>>
> >>>>>>>>>>>> ∞
> >>>>>>>>>>>> Shashwat Shriparv
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>
> >>>>>
> >>>>
> >>>>
> >>>>
> >>>> --
> >>>>
> >>>>
> >>>> ∞
> >>>> Shashwat Shriparv
> >>>
> >>
> >>
> >
> >
> >
> > --
> >
> >
> > ∞
> > Shashwat Shriparv
>



-- 


∞
Shashwat Shriparv
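One configuration detail worth pinning down, since the crash log shows the master resolving its root directory (FSUtils.getRootDir) and then dialing namenode:8020: the NameNode URI in core-site.xml and hbase.rootdir in hbase-site.xml must name the same host and port, and the /hbase directory created with "sudo -u hdfs hadoop fs -mkdir /hbase" is the path hbase.rootdir points at. A minimal sketch of the relevant entries, using the host and port taken from the log above (treat the exact values as placeholders for whatever the NameNode actually binds to):

    <!-- core-site.xml (fs.default.name is the Hadoop 0.20 / CDH3-era property name) -->
    <property>
      <name>fs.default.name</name>
      <value>hdfs://namenode:8020</value>
    </property>

    <!-- hbase-site.xml: point HBase at the same NameNode URI -->
    <property>
      <name>hbase.rootdir</name>
      <value>hdfs://namenode:8020/hbase</value>
    </property>
    <property>
      <name>hbase.cluster.distributed</name>
      <value>true</value>
    </property>

If these two disagree (different port, or one of them still saying localhost), the master keeps retrying a port nobody is serving, which matches the log above.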
