hbase-user mailing list archives

From "ac@hsk.hk" <ac@hsk.hk>
Subject Re: EXT :Re: HBase Issues (perhaps related to 127.0.0.1)
Date Sat, 24 Nov 2012 18:31:03 GMT
Hi,

I am also using Ubuntu 12.04 with ZooKeeper 3.4.4, HBase 0.94.2, and Hadoop 1.0.4 (64-bit nodes). I finally managed to get the HBase cluster up and running; below is the relevant line from my /etc/hosts for your reference:

#127.0.0.1      localhost
127.0.0.1       localhost.localdomain localhost
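A quick way to check what a hosts file like this actually gives the resolver is to query it directly. This is only a sketch; "hadoop1" in the comment is the example master from this thread, so substitute your own hostname:

```shell
# Minimal sketch: warn if a hostname resolves to the loopback address.
resolves_to_loopback() {
    # getent consults /etc/hosts and DNS the same way the daemons do
    getent ahosts "$1" | awk '{print $1}' | grep -q '^127\.'
}

if resolves_to_loopback localhost; then
    echo "localhost -> loopback (expected)"
fi

# On a cluster node you would also check the machine's own hostname,
# e.g. "hadoop1" in this thread; it should NOT hit the loopback:
#   resolves_to_loopback hadoop1 && echo "WARNING: resolves to 127.x"
```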

Based on my setup experience, here is my advice:
1) /etc/hosts: do not comment out the 127.0.0.1 line in /etc/hosts
2) ZooKeeper: do not sync its "data" and "datalog" folders to the other ZooKeeper servers in your deployment
3) Check your start procedure:
	- Check your firewall policies; make sure each server can use the required TCP/IP ports, especially port 2181 in your case.
	- Start ZooKeeper first. Make sure all other servers can reach the ZooKeeper servers: run "bin/zkCli.sh -server XXXX" or "echo ruok | nc XXXX 2181" against every ZooKeeper from each HBase server.
	- Start Hadoop. Use jps to confirm the NameNode, SecondaryNameNode, and DataNodes are up and running, and check the log files on each server.
	- Start MapReduce if you need it.
	- Start HBase. Use jps to check HBase's HMaster and HRegionServers, then wait a while and check again with jps. If all the HBase processes are gone but Hadoop is still up and running, it is most likely an HBase configuration issue in hbase-site.xml related to the ZooKeeper settings, or a ZooKeeper configuration/data issue.
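The checks in step 3 can be sketched as a small script. This is illustrative only: the quorum hostnames are the examples from this thread, and the 60-second wait is an arbitrary placeholder:

```shell
#!/bin/sh
# Sketch of the startup checks above; substitute your own hostnames.
ZK_QUORUM="hadoop1 hadoop2 hadoop3"
HBASE_PATTERN='HMaster|HRegionServer'

# 1) Every ZooKeeper should answer "imok" from each HBase node.
for zk in $ZK_QUORUM; do
    echo ruok | nc -w 2 "$zk" 2181 || echo "no answer from $zk:2181"
done

# 2) Hadoop daemons present on this node?
jps | grep -E 'NameNode|SecondaryNameNode|DataNode'

# 3) Check HBase once, wait, check again: if HMaster/HRegionServer
#    disappear while the Hadoop daemons stay up, suspect the ZooKeeper
#    settings in hbase-site.xml or the ZooKeeper data directories.
jps | grep -E "$HBASE_PATTERN"
sleep 60
jps | grep -E "$HBASE_PATTERN"
```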


Hope this helps, and good luck.
ac



Originally I had 7 nodes, 5 of them 64-bit and 2 of them 32-bit; all the 64-bit servers are connected to network A and the two 


On 24 Nov 2012, at 10:51 AM, Michael Segel wrote:

> Hi Alan, 
> 
> Yes. I am suggesting that. 
> 
> Your 127.0.0.1 entry should map to localhost only, followed by your other entries. 
> It looks like 10.64.155.52 is the external interface (eth0) for the machine hadoop1.
> 
> Adding the hostname to the 127.0.0.1 line confuses HBase, since it will use the first entry it sees (going from memory), so it will always resolve to localhost.
> 
> I think that should fix your problem. 
> 
> HTH
> 
> -Mike
> 
> On Nov 23, 2012, at 10:11 AM, "Ratner, Alan S (IS)" <Alan.Ratner@ngc.com> wrote:
> 
>> Mike,
>> 
>> 
>> 
>>           Yes I do.
>> 
>> 
>> 
>> With this /etc/hosts HBase works but NX and VNC do not.
>> 
>> 10.64.155.52 hadoop1.aj.c2fse.northgrum.com hadoop1 hbase-masterserver hbase-nameserver localhost
>> 
>> 10.64.155.53 hadoop2.aj.c2fse.northgrum.com hadoop2 hbase-regionserver1
>> 
>> ...
>> 
>> 
>> 
>> With this /etc/hosts NX and VNC work but HBase does not.
>> 
>> 127.0.0.1 hadoop1 localhost.localdomain localhost
>> 
>> 10.64.155.52 hadoop1.aj.c2fse.northgrum.com hadoop1 hbase-masterserver hbase-nameserver
>> 
>> 10.64.155.53 hadoop2.aj.c2fse.northgrum.com hadoop2 hbase-regionserver1
>> 
>> ...
>> 
>> 
>> 
>> I assume from your question that I should try replacing
>> 
>> 127.0.0.1 hadoop1 localhost.localdomain localhost
>> 
>> with simply:
>> 
>> 127.0.0.1 localhost
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> Alan
>> 
>> 
>> 
>> 
>> 
>> -----Original Message-----
>> From: Michael Segel [mailto:michael_segel@hotmail.com]
>> Sent: Wednesday, November 21, 2012 7:40 PM
>> To: user@hbase.apache.org
>> Subject: EXT :Re: HBase Issues (perhaps related to 127.0.0.1)
>> 
>> 
>> 
>> Hi,
>> 
>> 
>> 
>> Quick question...
>> 
>> 
>> 
>> Do you have 127.0.0.1 set to anything other than localhost?
>> 
>> 
>> 
>> If not, then it should be fine, and you may want to revert to hard-coded IP addresses in your other configuration files.
>> 
>> 
>> 
>> If you have Hadoop up and working, then you should be able to stand up HBase on top of that.
>> 
>> 
>> 
>> Just doing a quick look, and it seems that your Hadoop hostname is resolving to localhost.
>> 
>> What does your /etc/hosts file look like?
>> 
>> 
>> 
>> How many machines in your cluster?
>> 
>> 
>> 
>> Have you thought about pulling down a 'free' copy of Cloudera, MapR or if Hortonworks has one ...
>> 
>> 
>> 
>> If you're thinking about using HBase as a standalone instance and don't care about Map/Reduce, maybe going with something else would make sense.
>> 
>> 
>> 
>> HTH
>> 
>> 
>> 
>> -Mike
>> 
>> 
>> 
>> On Nov 21, 2012, at 3:02 PM, "Ratner, Alan S (IS)" <Alan.Ratner@ngc.com<mailto:Alan.Ratner@ngc.com>> wrote:
>> 
>> 
>> 
>>> Thanks Mohammad.  I set the clientPort but as I was already using the default value of 2181 it made no difference.
>> 
>>> 
>> 
>>> I cannot remove the 127.0.0.1 line from my hosts file.  I connect to my servers via VPN from a Windows laptop using either NX or VNC and both apparently rely on the 127.0.0.1 IP address.  This was not a problem with older versions of HBase (I used to use 0.20.x) so it seems to be something relatively new.
>> 
>>> 
>> 
>>> It seems I have a choice: access my servers remotely or run HBase and these 2 are mutually incompatible.  I think my options are either:
>> 
>>> a) revert to an old version of HBase
>> 
>>> b) switch to Accumulo, or
>> 
>>> c) switch to Cassandra.
>> 
>>> 
>> 
>>> Alan
>> 
>>> 
>> 
>>> 
>> 
>>> -----Original Message-----
>> 
>>> From: Mohammad Tariq [mailto:dontariq@gmail.com]
>> 
>>> Sent: Wednesday, November 21, 2012 3:11 PM
>> 
>>> To: user@hbase.apache.org<mailto:user@hbase.apache.org>
>> 
>>> Subject: EXT :Re: HBase Issues (perhaps related to 127.0.0.1)
>> 
>>> 
>> 
>>> Hello Alan,
>> 
>>> 
>> 
>>>  It's better to keep 127.0.0.1 out of your /etc/hosts and make sure you
>> 
>>> have proper DNS resolution as it plays an important role in proper Hbase
>> 
>>> functioning. Also add the "hbase.zookeeper.property.clientPort" property in
>> 
>>> your hbase-site.xml file and see if it works for you.
>> 
>>> 
>> 
>>> Regards,
>> 
>>>  Mohammad Tariq
>> 
>>> 
>> 
>>> 
>> 
>>> 
>> 
>>> On Thu, Nov 22, 2012 at 1:31 AM, Ratner, Alan S (IS) <Alan.Ratner@ngc.com<mailto:Alan.Ratner@ngc.com>>wrote:
>> 
>>> 
>> 
>>>> I'd appreciate any suggestions as to how to get HBase up and running.
>> 
>>>> Right now it dies after a few seconds on all servers.  I am using Hadoop
>> 
>>>> 1.0.4, ZooKeeper 3.4.4 and HBase 0.94.2 on Ubuntu.
>> 
>>>> 
>> 
>>>> History: Yesterday I managed to get HBase 0.94.2 working but only after
>> 
>>>> removing the 127.0.0.1 line from my /etc/hosts file (and synchronizing my
>> 
>>>> clocks).  All was fine until this morning when I realized I could not
>> 
>>>> initiate remote log-ins to my servers (using VNC or NX) until I restored
>> 
>>>> the 127.0.0.1 line in /etc/hosts.  With that restored I am back to a
>> 
>>>> non-working HBase.
>> 
>>>> 
>> 
>>>> With HBase managing ZK I see the following in the HBase Master and ZK
>> 
>>>> logs, respectively:
>> 
>>>> 2012-11-21 13:40:22,236 WARN
>> 
>>>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
>> 
>>>> ZooKeeper exception:
>> 
>>>> org.apache.zookeeper.KeeperException$ConnectionLossException:
>> 
>>>> KeeperErrorCode = ConnectionLoss for /hbase
>> 
>>>> 
>> 
>>>> 2012-11-21 13:40:22,122 WARN org.apache.zookeeper.server.NIOServerCnxn:
>> 
>>>> Exception causing close of session 0x0 due to java.io.IOException:
>> 
>>>> ZooKeeperServer not running
>> 
>>>> 
>> 
>>>> At roughly the same time (clocks not perfectly synchronized) I see this in
>> 
>>>> a Regionserver log:
>> 
>>>> 2012-11-21 13:40:57,727 WARN
>> 
>>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> 
>>>> java.lang.SecurityException: Unable to locate a login configuration
>> 
>>>> occurred when trying to find JAAS configuration.
>> 
>>>> ...
>> 
>>>> 2012-11-21 13:40:57,848 WARN
>> 
>>>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
>> 
>>>> ZooKeeper exception:
>> 
>>>> org.apache.zookeeper.KeeperException$ConnectionLossException:
>> 
>>>> KeeperErrorCode = ConnectionLoss for /hbase/master
>> 
>>>> 
>> 
>>>> Logs and configuration follows.
>> 
>>>> 
>> 
>>>> Then I tried managing ZK myself and HBase then fails for seemingly
>> 
>>>> different reasons.
>> 
>>>> 2012-11-21 14:46:37,320 WARN
>> 
>>>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Node
>> 
>>>> /hbase/backup-masters/hadoop1,60000,1353527196915 already deleted, and this
>> 
>>>> is not a retry
>> 
>>>> 
>> 
>>>> 2012-11-21 14:46:47,483 FATAL org.apache.hadoop.hbase.master.HMaster:
>> 
>>>> Unhandled exception. Starting shutdown.
>> 
>>>> java.net.ConnectException: Call to hadoop1/127.0.0.1:9000 failed on
>> 
>>>> connection exception: java.net.ConnectException: Connection refused
>> 
>>>> 
>> 
>>>> Both HMaster error logs (self-managed and me-managed ZK) mention the
>> 
>>>> 127.0.0.1 IP address instead of referring to the server by its name
>> 
>>>> (hadoop1) or its true IP address or simply as localhost.
>> 
>>>> 
>> 
>>>> So, start-hbase.sh works OK (HB managing ZK):
>> 
>>>> ngc@hadoop1:~/hbase-0.94.2$<mailto:ngc@hadoop1:~/hbase-0.94.2$> bin/start-hbase.sh
>> 
>>>> hadoop1: starting zookeeper, logging to
>> 
>>>> /tmp/hbase-ngc/logs/hbase-ngc-zookeeper-hadoop1.out
>> 
>>>> hadoop2: starting zookeeper, logging to
>> 
>>>> /tmp/hbase-ngc/logs/hbase-ngc-zookeeper-hadoop2.out
>> 
>>>> hadoop3: starting zookeeper, logging to
>> 
>>>> /tmp/hbase-ngc/logs/hbase-ngc-zookeeper-hadoop3.out
>> 
>>>> starting master, logging to
>> 
>>>> /tmp/hbase-ngc/logs/hbase-ngc-master-hadoop1.out
>> 
>>>> hadoop2: starting regionserver, logging to
>> 
>>>> /tmp/hbase-ngc/logs/hbase-ngc-regionserver-hadoop2.out
>> 
>>>> hadoop6: starting regionserver, logging to
>> 
>>>> /tmp/hbase-ngc/logs/hbase-ngc-regionserver-hadoop6.out
>> 
>>>> hadoop3: starting regionserver, logging to
>> 
>>>> /tmp/hbase-ngc/logs/hbase-ngc-regionserver-hadoop3.out
>> 
>>>> hadoop5: starting regionserver, logging to
>> 
>>>> /tmp/hbase-ngc/logs/hbase-ngc-regionserver-hadoop5.out
>> 
>>>> hadoop4: starting regionserver, logging to
>> 
>>>> /tmp/hbase-ngc/logs/hbase-ngc-regionserver-hadoop4.out
>> 
>>>> 
>> 
>>>> I have in hbase-site.xml:
>> 
>>>> <property>
>> 
>>>>  <name>hbase.cluster.distributed</name>
>> 
>>>>  <value>true</value>
>> 
>>>> </property>
>> 
>>>>    <property>
>> 
>>>>          <name>hbase.master</name>
>> 
>>>>          <value>hadoop1:60000</value>
>> 
>>>>      </property>
>> 
>>>> <property>
>> 
>>>>  <name>hbase.rootdir</name>
>> 
>>>>  <value>hdfs://hadoop1:9000/hbase</value>
>> 
>>>> </property>
>> 
>>>> <property>
>> 
>>>>  <name>hbase.zookeeper.property.dataDir</name>
>> 
>>>>  <value>/tmp/zookeeper_data</value>
>> 
>>>> </property>
>> 
>>>> <property>
>> 
>>>>  <name>hbase.zookeeper.quorum</name>
>> 
>>>>  <value>hadoop1,hadoop2,hadoop3</value>
>> 
>>>> </property>
>> 
>>>> 
>> 
>>>> I have in hbase-env.sh:
>> 
>>>> export JAVA_HOME=/home/ngc/jdk1.6.0_25/
>> 
>>>> export HBASE_CLASSPATH=/home/zookeeper-3.4.4/conf:/home/zookeeper-3.4.4
>> 
>>>> export HBASE_HEAPSIZE=2000
>> 
>>>> export HBASE_OPTS="$HBASE_OPTS -XX:+HeapDumpOnOutOfMemoryError
>> 
>>>> -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode"
>> 
>>>> export HBASE_LOG_DIR=/tmp/hbase-ngc/logs
>> 
>>>> export HBASE_MANAGES_ZK=true
>> 
>>>> 
>> 
>>>> From server hadoop1 (running HMaster, ZK, NN, SNN, JT)
>> 
>>>> Wed Nov 21 13:40:20 EST 2012 Starting master on hadoop1
>> 
>>>> core file size          (blocks, -c) 0
>> 
>>>> data seg size           (kbytes, -d) unlimited
>> 
>>>> scheduling priority             (-e) 0
>> 
>>>> file size               (blocks, -f) unlimited
>> 
>>>> pending signals                 (-i) 386178
>> 
>>>> max locked memory       (kbytes, -l) 64
>> 
>>>> max memory size         (kbytes, -m) unlimited
>> 
>>>> open files                      (-n) 1024
>> 
>>>> pipe size            (512 bytes, -p) 8
>> 
>>>> POSIX message queues     (bytes, -q) 819200
>> 
>>>> real-time priority              (-r) 0
>> 
>>>> stack size              (kbytes, -s) 8192
>> 
>>>> cpu time               (seconds, -t) unlimited
>> 
>>>> max user processes              (-u) 386178
>> 
>>>> virtual memory          (kbytes, -v) unlimited
>> 
>>>> file locks                      (-x) unlimited
>> 
>>>> 2012-11-21 13:40:21,410 INFO org.apache.hadoop.hbase.util.VersionInfo:
>> 
>>>> HBase 0.94.2
>> 
>>>> 2012-11-21 13:40:21,410 INFO org.apache.hadoop.hbase.util.VersionInfo:
>> 
>>>> Subversion https://svn.apache.org/repos/asf/hbase/branches/0.94 -r 1395367
>> 
>>>> 2012-11-21 13:40:21,410 INFO org.apache.hadoop.hbase.util.VersionInfo:
>> 
>>>> Compiled by jenkins on Sun Oct  7 19:11:01 UTC 2012
>> 
>>>> 2012-11-21 13:40:21,558 DEBUG org.apache.hadoop.hbase.master.HMaster: Set
>> 
>>>> serverside HConnection retries=100
>> 
>>>> 2012-11-21 13:40:21,823 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> 
>>>> Thread-2
>> 
>>>> 2012-11-21 13:40:21,826 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> 
>>>> Thread-2
>> 
>>>> 2012-11-21 13:40:21,829 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> 
>>>> Thread-2
>> 
>>>> 2012-11-21 13:40:21,833 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> 
>>>> Thread-2
>> 
>>>> 2012-11-21 13:40:21,836 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> 
>>>> Thread-2
>> 
>>>> 2012-11-21 13:40:21,839 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> 
>>>> Thread-2
>> 
>>>> 2012-11-21 13:40:21,842 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> 
>>>> Thread-2
>> 
>>>> 2012-11-21 13:40:21,846 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> 
>>>> Thread-2
>> 
>>>> 2012-11-21 13:40:21,849 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> 
>>>> Thread-2
>> 
>>>> 2012-11-21 13:40:21,852 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> 
>>>> Thread-2
>> 
>>>> 2012-11-21 13:40:21,863 INFO org.apache.hadoop.hbase.ipc.HBaseRpcMetrics:
>> 
>>>> Initializing RPC Metrics with hostName=HMaster, port=60000
>> 
>>>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
>> 
>>>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:host.name=hadoop1
>> 
>>>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:java.version=1.6.0_25
>> 
>>>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:java.vendor=Sun Microsystems Inc.
>> 
>>>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:java.home=/home/ngc/jdk1.6.0_25/jre
>> 
>>>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:java.class.path=/home/ngc/hbase-0.94.2/conf:/home/ngc/jdk1.6.0_25//lib/tools.jar:/home/ngc/hbase-0.94.2/bin/..:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2.jar:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2-tests.jar:/home/ngc/hbase-0.94.2/bin/../lib/activation-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/asm-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-ipc-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-1.7.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-core-1.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-cli-1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-codec-1.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-collections-3.2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-configuration-1.6.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-digester-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-el-1.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-httpclient-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-io-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-lang-2.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-logging-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-math-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-net-1.4.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/core-3.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/guava-11.0.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/hadoop-core-1.0.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/high-scale-lib-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpclient-4.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpcore-4.1.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-core-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-jaxrs-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-mapper-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-xc-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jamon-runtime-2.3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-compiler-5.5.23.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-runtime-5.5.23.jar:/home/ngc/hba
se-0.94.2/bin/../lib/jaxb-api-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jaxb-impl-2.2.3-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-core-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-json-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-server-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jettison-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-util-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jruby-complete-1.6.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-api-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsr305-1.3.9.jar:/home/ngc/hbase-0.94.2/bin/../lib/junit-4.10-HBASE-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/libthrift-0.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/log4j-1.2.16.jar:/home/ngc/hbase-0.94.2/bin/../lib/metrics-core-2.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/netty-3.2.4.Final.jar:/home/ngc/hbase-0.94.2/bin/../lib/protobuf-java-2.4.0a.jar:/home/ngc/hbase-0.94.2/bin/../lib/servlet-api-2.5-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-api-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-log4j12-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/snappy-java-1.0.3.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/stax-api-1.0.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/velocity-1.7.jar:/home/ngc/hbase-0.94.2/bin/../lib/xmlenc-0.52.jar:/home/ngc/hbase-0.94.2/bin/../lib/zookeeper-3.4.3.jar:/home/zookeeper-3.4.4/conf:/home/zookeeper-3.4.4:/home/ngc/hadoop-1.0.4/libexec/../conf:/home/ngc/jdk1.6.0_25/lib/tools.jar:/home/ngc/hadoop-1.0.4/libexec/..:/home/ngc/hadoop-1.0.4/libexec/../hadoop-core-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/asm-3.2.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/aspectjrt-1.6.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/aspectjtools-1.6.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-beanutils-1.7.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-beanutils-core-1.8.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-cli-1.2.jar:/home/ngc/had
oop-1.0.4/libexec/../lib/commons-codec-1.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-collections-3.2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-configuration-1.6.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-daemon-1.0.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-digester-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-el-1.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-httpclient-3.0.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-io-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-lang-2.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-logging-1.1.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-logging-api-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-math-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-net-1.4.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/core-3.1.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-capacity-scheduler-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-fairscheduler-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-thriftfs-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hsqldb-1.8.0.10.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jackson-core-asl-1.8.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jasper-compiler-5.5.12.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jasper-runtime-5.5.12.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jdeb-0.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-core-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-json-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-server-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jets3t-0.6.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jetty-6.1.26.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jetty-util-6.1.26.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsch-0.1.42.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/junit-4.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/kfs-0.2.2.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/log4j-1.2.15.jar:/home/ngc/h
adoop-1.0.4/libexec/../lib/mockito-all-1.8.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/oro-2.0.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/servlet-api-2.5-20081211.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/slf4j-api-1.4.3.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/slf4j-log4j12-1.4.3.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/xmlenc-0.52.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-api-2.1.jar
>> 
>>>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:java.library.path=/home/ngc/hadoop-1.0.4/libexec/../lib/native/Linux-amd64-64:/home/ngc/hbase-0.94.2/bin/../lib/native/Linux-amd64-64
>> 
>>>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:java.io.tmpdir=/tmp
>> 
>>>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:java.compiler=<NA>
>> 
>>>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:os.name=Linux
>> 
>>>> 2012-11-21 13:40:22,079 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:os.arch=amd64
>> 
>>>> 2012-11-21 13:40:22,079 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:os.version=3.2.0-24-generic
>> 
>>>> 2012-11-21 13:40:22,079 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:user.name=ngc
>> 
>>>> 2012-11-21 13:40:22,079 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:user.home=/home/ngc
>> 
>>>> 2012-11-21 13:40:22,079 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:user.dir=/home/ngc/hbase-0.94.2
>> 
>>>> 2012-11-21 13:40:22,080 INFO org.apache.zookeeper.ZooKeeper: Initiating
>> 
>>>> client connection, connectString=hadoop2:2181,hadoop1:2181,hadoop3:2181
>> 
>>>> sessionTimeout=180000 watcher=master:60000
>> 
>>>> 2012-11-21 13:40:22,097 INFO org.apache.zookeeper.ClientCnxn: Opening
>> 
>>>> socket connection to server /127.0.0.1:2181
>> 
>>>> 2012-11-21 13:40:22,099 INFO
>> 
>>>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier of
>> 
>>>> this process is 742@hadoop1
>> 
>>>> 2012-11-21 13:40:22,106 WARN
>> 
>>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> 
>>>> java.lang.SecurityException: Unable to locate a login configuration
>> 
>>>> occurred when trying to find JAAS configuration.
>> 
>>>> 2012-11-21 13:40:22,106 INFO
>> 
>>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> 
>>>> SASL-authenticate because the default JAAS configuration section 'Client'
>> 
>>>> could not be found. If you are not using SASL, you may ignore this. On the
>> 
>>>> other hand, if you expected SASL to work, please fix your JAAS
>> 
>>>> configuration.
>> 
>>>> 2012-11-21 13:40:22,110 INFO org.apache.zookeeper.ClientCnxn: Socket
>> 
>>>> connection established to hadoop1/127.0.0.1:2181, initiating session
>> 
>>>> 2012-11-21 13:40:22,122 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> 
>>>> read additional data from server sessionid 0x0, likely server has closed
>> 
>>>> socket, closing socket connection and attempting reconnect
>> 
>>>> 2012-11-21 13:40:22,236 WARN
>> 
>>>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
>> 
>>>> ZooKeeper exception:
>> 
>>>> org.apache.zookeeper.KeeperException$ConnectionLossException:
>> 
>>>> KeeperErrorCode = ConnectionLoss for /hbase
>> 
>>>> 2012-11-21 13:40:22,236 INFO org.apache.hadoop.hbase.util.RetryCounter:
>> 
>>>> Sleeping 2000ms before retry #1...
>> 
>>>> 2012-11-21 13:40:22,411 INFO org.apache.zookeeper.ClientCnxn: Opening
>> 
>>>> socket connection to server /10.64.155.53:2181
>> 
>>>> 2012-11-21 13:40:22,411 WARN
>> 
>>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> 
>>>> java.lang.SecurityException: Unable to locate a login configuration
>> 
>>>> occurred when trying to find JAAS configuration.
>> 
>>>> 2012-11-21 13:40:22,411 INFO
>> 
>>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> 
>>>> SASL-authenticate because the default JAAS configuration section 'Client'
>> 
>>>> could not be found. If you are not using SASL, you may ignore this. On the
>> 
>>>> other hand, if you expected SASL to work, please fix your JAAS
>> 
>>>> configuration.
>> 
>>>> 2012-11-21 13:40:22,412 INFO org.apache.zookeeper.ClientCnxn: Socket
>> 
>>>> connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181,
>> 
>>>> initiating session
>> 
>>>> 2012-11-21 13:40:22,423 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> 
>>>> read additional data from server sessionid 0x0, likely server has closed
>> 
>>>> socket, closing socket connection and attempting reconnect
>> 
>>>> 2012-11-21 13:40:22,746 INFO org.apache.zookeeper.ClientCnxn: Opening
>> 
>>>> socket connection to server /10.64.155.54:2181
>> 
>>>> 2012-11-21 13:40:22,747 WARN
>> 
>>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> 
>>>> java.lang.SecurityException: Unable to locate a login configuration
>> 
>>>> occurred when trying to find JAAS configuration.
>> 
>>>> 2012-11-21 13:40:22,747 INFO
>> 
>>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> 
>>>> SASL-authenticate because the default JAAS configuration section 'Client'
>> 
>>>> could not be found. If you are not using SASL, you may ignore this. On the
>> 
>>>> other hand, if you expected SASL to work, please fix your JAAS
>> 
>>>> configuration.
>> 
>>>> 2012-11-21 13:40:22,747 INFO org.apache.zookeeper.ClientCnxn: Socket
>> 
>>>> connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181,
>> 
>>>> initiating session
>> 
>>>> 2012-11-21 13:40:22,748 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> 
>>>> read additional data from server sessionid 0x0, likely server has closed
>> 
>>>> socket, closing socket connection and attempting reconnect
>> 
>>>> 2012-11-21 13:40:22,967 INFO org.apache.zookeeper.ClientCnxn: Opening
>> 
>>>> socket connection to server /10.64.155.52:2181
>> 
>>>> 2012-11-21 13:40:22,967 WARN
>> 
>>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> 
>>>> java.lang.SecurityException: Unable to locate a login configuration
>> 
>>>> occurred when trying to find JAAS configuration.
>> 
>>>> 2012-11-21 13:40:22,967 INFO
>> 
>>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> 
>>>> SASL-authenticate because the default JAAS configuration section 'Client'
>> 
>>>> could not be found. If you are not using SASL, you may ignore this. On the
>> 
>>>> other hand, if you expected SASL to work, please fix your JAAS
>> 
>>>> configuration.
>> 
>>>> 2012-11-21 13:40:22,968 INFO org.apache.zookeeper.ClientCnxn: Socket
>> 
>>>> connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181,
>> 
>>>> initiating session
>> 
>>>> 2012-11-21 13:40:22,968 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> 
>>>> read additional data from server sessionid 0x0, likely server has closed
>> 
>>>> socket, closing socket connection and attempting reconnect
>> 
>>>> 2012-11-21 13:40:24,175 INFO org.apache.zookeeper.ClientCnxn: Opening
>> 
>>>> socket connection to server hadoop1/127.0.0.1:2181
>> 
>>>> 2012-11-21 13:40:24,176 WARN
>> 
>>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> 
>>>> java.lang.SecurityException: Unable to locate a login configuration
>> 
>>>> occurred when trying to find JAAS configuration.
>> 
>>>> 2012-11-21 13:40:24,176 INFO
>> 
>>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> 
>>>> SASL-authenticate because the default JAAS configuration section 'Client'
>> 
>>>> could not be found. If you are not using SASL, you may ignore this. On the
>> 
>>>> other hand, if you expected SASL to work, please fix your JAAS
>> 
>>>> configuration.
>> 
>>>> 2012-11-21 13:40:24,176 INFO org.apache.zookeeper.ClientCnxn: Socket
>> 
>>>> connection established to hadoop1/127.0.0.1:2181, initiating session
>> 
>>>> 2012-11-21 13:40:24,177 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> 
>>>> read additional data from server sessionid 0x0, likely server has closed
>> 
>>>> socket, closing socket connection and attempting reconnect
>> 
>>>> 2012-11-21 13:40:24,277 WARN
>> 
>>>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
>> 
>>>> ZooKeeper exception:
>> 
>>>> org.apache.zookeeper.KeeperException$ConnectionLossException:
>> 
>>>> KeeperErrorCode = ConnectionLoss for /hbase
>> 
>>>> 2012-11-21 13:40:24,277 INFO org.apache.hadoop.hbase.util.RetryCounter:
>> 
>>>> Sleeping 4000ms before retry #2...
>> 
>>>> 2012-11-21 13:40:24,766 INFO org.apache.zookeeper.ClientCnxn: Opening
>> 
>>>> socket connection to server
>> 
>>>> hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
>> 
>>>> 2012-11-21 13:40:24,767 WARN
>> 
>>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> 
>>>> java.lang.SecurityException: Unable to locate a login configuration
>> 
>>>> occurred when trying to find JAAS configuration.
>> 
>>>> 2012-11-21 13:40:24,767 INFO
>> 
>>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> 
>>>> SASL-authenticate because the default JAAS configuration section 'Client'
>> 
>>>> could not be found. If you are not using SASL, you may ignore this. On the
>> 
>>>> other hand, if you expected SASL to work, please fix your JAAS
>> 
>>>> configuration.
>> 
>>>> 2012-11-21 13:40:24,767 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, initiating session
>>>> 2012-11-21 13:40:24,768 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>>> 2012-11-21 13:40:25,756 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
>>>> 2012-11-21 13:40:25,757 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>>> 2012-11-21 13:40:25,757 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>>> 2012-11-21 13:40:25,757 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
>>>> 2012-11-21 13:40:25,757 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>>> 2012-11-21 13:40:26,597 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
>>>> 2012-11-21 13:40:26,597 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>>> 2012-11-21 13:40:26,597 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>>> 2012-11-21 13:40:26,598 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181, initiating session
>>>> 2012-11-21 13:40:26,598 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>>> 2012-11-21 13:40:27,775 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1/127.0.0.1:2181
>>>> 2012-11-21 13:40:27,775 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>>> 2012-11-21 13:40:27,775 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>>> 2012-11-21 13:40:27,775 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1/127.0.0.1:2181, initiating session
>>>> 2012-11-21 13:40:27,776 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>>> 2012-11-21 13:40:28,317 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
>>>> 2012-11-21 13:40:28,318 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>>> 2012-11-21 13:40:28,318 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>>> 2012-11-21 13:40:28,318 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, initiating session
>>>> 2012-11-21 13:40:28,319 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>>> 2012-11-21 13:40:28,419 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
>>>> 2012-11-21 13:40:28,419 INFO org.apache.hadoop.hbase.util.RetryCounter: Sleeping 8000ms before retry #3...
>>>> 2012-11-21 13:40:29,106 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
>>>> 2012-11-21 13:40:29,106 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>>> 2012-11-21 13:40:29,106 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>>> 2012-11-21 13:40:29,107 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
>>>> 2012-11-21 13:40:29,107 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>>> 2012-11-21 13:40:30,039 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
>>>> 2012-11-21 13:40:30,039 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>>> 2012-11-21 13:40:30,039 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>>> 2012-11-21 13:40:30,039 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181, initiating session
>>>> 2012-11-21 13:40:30,040 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>>> 2012-11-21 13:40:31,283 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1/127.0.0.1:2181
>>>> 2012-11-21 13:40:31,283 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>>> 2012-11-21 13:40:31,283 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>>> 2012-11-21 13:40:31,283 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1/127.0.0.1:2181, initiating session
>>>> 2012-11-21 13:40:31,284 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>>> 2012-11-21 13:40:32,142 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
>>>> 2012-11-21 13:40:32,143 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>>> 2012-11-21 13:40:32,143 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>>> 2012-11-21 13:40:32,143 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, initiating session
>>>> 2012-11-21 13:40:32,144 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>>> 2012-11-21 13:40:32,479 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
>>>> 2012-11-21 13:40:32,480 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>>> 2012-11-21 13:40:32,480 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>>> 2012-11-21 13:40:32,480 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
>>>> 2012-11-21 13:40:32,481 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>>> 2012-11-21 13:40:33,294 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
>>>> 2012-11-21 13:40:33,295 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>>> 2012-11-21 13:40:33,295 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>>> 2012-11-21 13:40:33,296 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181, initiating session
>>>> 2012-11-21 13:40:33,296 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>>> 2012-11-21 13:40:34,962 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1/127.0.0.1:2181
>>>> 2012-11-21 13:40:34,962 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>>> 2012-11-21 13:40:34,962 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>>> 2012-11-21 13:40:34,962 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1/127.0.0.1:2181, initiating session
>>>> 2012-11-21 13:40:34,963 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>>> 2012-11-21 13:40:35,660 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
>>>> 2012-11-21 13:40:35,661 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>>> 2012-11-21 13:40:35,661 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>>> 2012-11-21 13:40:35,661 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, initiating session
>>>> 2012-11-21 13:40:35,662 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>>> 2012-11-21 13:40:36,522 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
>>>> 2012-11-21 13:40:36,523 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>>> 2012-11-21 13:40:36,523 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>>> 2012-11-21 13:40:36,523 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
>>>> 2012-11-21 13:40:36,524 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>>> 2012-11-21 13:40:36,625 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
>>>> 2012-11-21 13:40:36,625 ERROR org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: ZooKeeper exists failed after 3 retries
>>>> 2012-11-21 13:40:36,626 ERROR org.apache.hadoop.hbase.master.HMasterCommandLine: Failed to start master
>>>> java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster
>>>>    at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1792)
>>>>    at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:146)
>>>>    at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:103)
>>>>    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>>>>    at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:76)
>>>>    at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1806)
>>>> Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
>>>>    at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
>>>>    at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>>>>    at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1021)
>>>>    at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1049)
>>>>    at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:193)
>>>>    at org.apache.hadoop.hbase.zookeeper.ZKUtil.createAndFailSilent(ZKUtil.java:904)
>>>>    at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.createBaseZNodes(ZooKeeperWatcher.java:166)
>>>>    at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:159)
>>>>    at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:282)
>>>>    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>>>    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>>>>    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>>>>    at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>>>>    at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1787)
>>>>    ... 5 more
>>>>
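A quick way to spot the pattern in the master log above: hadoop2 and hadoop3 always resolve to their 10.64.155.x addresses, but hadoop1 resolves to 127.0.0.1, so any client handed "hadoop1" from the quorum list connects to its own loopback and finds nothing listening. The following is a minimal sketch (a hypothetical helper, not part of HBase or ZooKeeper) that scans ClientCnxn log lines and flags quorum members resolving to a loopback address:

```python
import re

# Matches ZooKeeper ClientCnxn connection attempts such as:
#   "Opening socket connection to server hadoop1/127.0.0.1:2181"
#   "Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181"
ATTEMPT_RE = re.compile(
    r"socket connection (?:established )?to (?:server )?"
    r"(?P<host>[\w.\-]+)/(?P<ip>[\d.]+):(?P<port>\d+)",
    re.IGNORECASE,
)

def loopback_quorum_members(log_text):
    """Return the set of quorum hostnames that resolved to a loopback
    address in a ZooKeeper client log -- the /etc/hosts symptom
    discussed in this thread."""
    suspects = set()
    for match in ATTEMPT_RE.finditer(log_text):
        if match.group("ip").startswith("127."):
            suspects.add(match.group("host"))
    return suspects

sample = """\
Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, initiating session
Opening socket connection to server hadoop1/127.0.0.1:2181
Socket connection established to hadoop1/127.0.0.1:2181, initiating session
"""
print(loopback_quorum_members(sample))  # {'hadoop1'}
```

Any hostname this reports is worth checking against the `/etc/hosts` advice earlier in the thread.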
>>>> From server hadoop2 (running regionserver, ZK, DN, TT)
>>>> Wed Nov 21 13:40:56 EST 2012 Starting regionserver on hadoop2
>>>> core file size          (blocks, -c) 0
>>>> data seg size           (kbytes, -d) unlimited
>>>> scheduling priority             (-e) 0
>>>> file size               (blocks, -f) unlimited
>>>> pending signals                 (-i) 193105
>>>> max locked memory       (kbytes, -l) 64
>>>> max memory size         (kbytes, -m) unlimited
>>>> open files                      (-n) 1024
>>>> pipe size            (512 bytes, -p) 8
>>>> POSIX message queues     (bytes, -q) 819200
>>>> real-time priority              (-r) 0
>>>> stack size              (kbytes, -s) 8192
>>>> cpu time               (seconds, -t) unlimited
>>>> max user processes              (-u) 193105
>>>> virtual memory          (kbytes, -v) unlimited
>>>> file locks                      (-x) unlimited
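One thing worth flagging in the ulimit dump above, separate from the ZooKeeper issue: `open files (-n) 1024` is the stock Linux default, and the HBase reference guide recommends raising the nofile limit well above that for region servers. A small sketch for checking the running process's limit with Python's standard library (the 32768 threshold here is illustrative, not an official constant):

```python
import resource

# A region server with many store files can exhaust 1024 descriptors
# quickly; 32768 is an illustrative "comfortable" threshold.
RECOMMENDED_NOFILE = 32768

def nofile_ok(recommended=RECOMMENDED_NOFILE):
    """Return (soft_limit, whether it meets the recommendation)."""
    soft, _hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    return soft, soft >= recommended

soft, ok = nofile_ok()
print(soft, ok)
```

On Ubuntu the limit is usually raised via `/etc/security/limits.conf` for the user that runs HBase.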
>>>> 2012-11-21 13:40:57,034 INFO org.apache.hadoop.hbase.util.VersionInfo:
>> 
>>>> HBase 0.94.2
>> 
>>>> 2012-11-21 13:40:57,034 INFO org.apache.hadoop.hbase.util.VersionInfo:
>> 
>>>> Subversion https://svn.apache.org/repos/asf/hbase/branches/0.94 -r 1395367
>> 
>>>> 2012-11-21 13:40:57,034 INFO org.apache.hadoop.hbase.util.VersionInfo:
>> 
>>>> Compiled by jenkins on Sun Oct  7 19:11:01 UTC 2012
>> 
>>>> 2012-11-21 13:40:57,172 INFO
>> 
>>>> org.apache.hadoop.hbase.util.ServerCommandLine: vmName=Java HotSpot(TM)
>> 
>>>> 64-Bit Server VM, vmVendor=Sun Microsystems Inc., vmVersion=20.0-b11
>> 
>>>> 2012-11-21 13:40:57,172 INFO
>> 
>>>> org.apache.hadoop.hbase.util.ServerCommandLine:
>> 
>>>> vmInputArguments=[-XX:OnOutOfMemoryError=kill, -9, %p, -Xmx2000m,
>> 
>>>> -XX:+HeapDumpOnOutOfMemoryError, -XX:+UseConcMarkSweepGC,
>> 
>>>> -XX:+CMSIncrementalMode, -XX:+HeapDumpOnOutOfMemoryError,
>> 
>>>> -XX:+UseConcMarkSweepGC, -XX:+CMSIncrementalMode,
>> 
>>>> -Dhbase.log.dir=/tmp/hbase-ngc/logs,
>> 
>>>> -Dhbase.log.file=hbase-ngc-regionserver-hadoop2.log,
>> 
>>>> -Dhbase.home.dir=/home/ngc/hbase-0.94.2/bin/.., -Dhbase.id.str=ngc,
>> 
>>>> -Dhbase.root.logger=INFO,DRFA,
>> 
>>>> -Djava.library.path=/home/ngc/hbase-0.94.2/bin/../lib/native/Linux-amd64-64,
>> 
>>>> -Dhbase.security.logger=INFO,DRFAS]
>> 
>>>> 2012-11-21 13:40:57,222 DEBUG
>> 
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Set serverside
>> 
>>>> HConnection retries=100
>> 
>>>> 2012-11-21 13:40:57,469 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> 
>>>> Thread-1
>> 
>>>> 2012-11-21 13:40:57,471 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> 
>>>> Thread-1
>> 
>>>> 2012-11-21 13:40:57,473 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> 
>>>> Thread-1
>> 
>>>> 2012-11-21 13:40:57,475 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> 
>>>> Thread-1
>> 
>>>> 2012-11-21 13:40:57,477 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> 
>>>> Thread-1
>> 
>>>> 2012-11-21 13:40:57,480 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> 
>>>> Thread-1
>> 
>>>> 2012-11-21 13:40:57,482 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> 
>>>> Thread-1
>> 
>>>> 2012-11-21 13:40:57,484 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> 
>>>> Thread-1
>> 
>>>> 2012-11-21 13:40:57,486 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> 
>>>> Thread-1
>> 
>>>> 2012-11-21 13:40:57,488 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> 
>>>> Thread-1
>> 
>>>> 2012-11-21 13:40:57,500 INFO org.apache.hadoop.hbase.ipc.HBaseRpcMetrics:
>> 
>>>> Initializing RPC Metrics with hostName=HRegionServer, port=60020
>> 
>>>> 2012-11-21 13:40:57,654 INFO org.apache.hadoop.hbase.io.hfile.CacheConfig:
>> 
>>>> Allocating LruBlockCache with maximum size 493.8m
>> 
>>>> 2012-11-21 13:40:57,699 INFO
>> 
>>>> org.apache.hadoop.hbase.regionserver.ShutdownHook: Installed shutdown hook
>> 
>>>> thread: Shutdownhook:regionserver60020
>> 
>>>> 2012-11-21 13:40:57,701 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
>> 
>>>> 2012-11-21 13:40:57,701 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:host.name=hadoop2.aj.c2fse.northgrum.com
>> 
>>>> 2012-11-21 13:40:57,701 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:java.version=1.6.0_25
>> 
>>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:java.vendor=Sun Microsystems Inc.
>> 
>>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:java.home=/home/ngc/jdk1.6.0_25/jre
>> 
>>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:java.class.path=/home/ngc/hbase-0.94.2/conf:/home/ngc/jdk1.6.0_25//lib/tools.jar:/home/ngc/hbase-0.94.2/bin/..:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2.jar:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2-tests.jar:/home/ngc/hbase-0.94.2/bin/../lib/activation-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/asm-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-ipc-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-1.7.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-core-1.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-cli-1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-codec-1.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-collections-3.2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-configuration-1.6.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-digester-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-el-1.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-httpclient-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-io-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-lang-2.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-logging-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-math-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-net-1.4.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/core-3.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/guava-11.0.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/hadoop-core-1.0.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/high-scale-lib-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpclient-4.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpcore-4.1.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-core-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-jaxrs-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-mapper-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-xc-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jamon-runtime-2.3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-compiler-5.5.23.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-runtime-5.5.23.jar:/home/ngc/hba
se-0.94.2/bin/../lib/jaxb-api-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jaxb-impl-2.2.3-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-core-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-json-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-server-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jettison-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-util-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jruby-complete-1.6.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-api-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsr305-1.3.9.jar:/home/ngc/hbase-0.94.2/bin/../lib/junit-4.10-HBASE-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/libthrift-0.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/log4j-1.2.16.jar:/home/ngc/hbase-0.94.2/bin/../lib/metrics-core-2.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/netty-3.2.4.Final.jar:/home/ngc/hbase-0.94.2/bin/../lib/protobuf-java-2.4.0a.jar:/home/ngc/hbase-0.94.2/bin/../lib/servlet-api-2.5-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-api-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-log4j12-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/snappy-java-1.0.3.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/stax-api-1.0.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/velocity-1.7.jar:/home/ngc/hbase-0.94.2/bin/../lib/xmlenc-0.52.jar:/home/ngc/hbase-0.94.2/bin/../lib/zookeeper-3.4.3.jar:
>> 
>>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:java.library.path=/home/ngc/hbase-0.94.2/bin/../lib/native/Linux-amd64-64
>> 
>>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:java.io.tmpdir=/tmp
>> 
>>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:java.compiler=<NA>
>> 
>>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:os.name=Linux
>> 
>>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:os.arch=amd64
>> 
>>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:os.version=3.0.0-12-generic
>> 
>>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:user.name=ngc
>> 
>>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:user.home=/home/ngc
>> 
>>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:user.dir=/home/ngc/hbase-0.94.2
>> 
>>>> 2012-11-21 13:40:57,703 INFO org.apache.zookeeper.ZooKeeper: Initiating
>> 
>>>> client connection, connectString=hadoop2:2181,hadoop1:2181,hadoop3:2181
>> 
>>>> sessionTimeout=180000 watcher=regionserver:60020
>> 
>>>> 2012-11-21 13:40:57,718 INFO org.apache.zookeeper.ClientCnxn: Opening
>> 
>>>> socket connection to server /10.64.155.54:2181
>> 
>>>> 2012-11-21 13:40:57,719 INFO
>> 
>>>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier of
>> 
>>>> this process is 12835@hadoop2
>> 
>>>> 2012-11-21 13:40:57,727 WARN
>> 
>>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> 
>>>> java.lang.SecurityException: Unable to locate a login configuration
>> 
>>>> occurred when trying to find JAAS configuration.
>> 
>>>> 2012-11-21 13:40:57,727 INFO
>> 
>>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> 
>>>> SASL-authenticate because the default JAAS configuration section 'Client'
>> 
>>>> could not be found. If you are not using SASL, you may ignore this. On the
>> 
>>>> other hand, if you expected SASL to work, please fix your JAAS
>> 
>>>> configuration.
>> 
>>>> 2012-11-21 13:40:57,731 INFO org.apache.zookeeper.ClientCnxn: Socket
>> 
>>>> connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181,
>> 
>>>> initiating session
>> 
>>>> 2012-11-21 13:40:57,733 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> 
>>>> read additional data from server sessionid 0x0, likely server has closed
>> 
>>>> socket, closing socket connection and attempting reconnect
>> 
>>>> 2012-11-21 13:40:57,848 WARN
>> 
>>>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
>> 
>>>> ZooKeeper exception:
>> 
>>>> org.apache.zookeeper.KeeperException$ConnectionLossException:
>> 
>>>> KeeperErrorCode = ConnectionLoss for /hbase/master
>> 
>>>> 2012-11-21 13:40:57,849 INFO org.apache.hadoop.hbase.util.RetryCounter:
>> 
>>>> Sleeping 2000ms before retry #1...
>> 
>>>> 2012-11-21 13:40:58,283 INFO org.apache.zookeeper.ClientCnxn: Opening
>> 
>>>> socket connection to server /10.64.155.53:2181
>> 
>>>> 2012-11-21 13:40:58,283 WARN
>> 
>>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> 
>>>> java.lang.SecurityException: Unable to locate a login configuration
>> 
>>>> occurred when trying to find JAAS configuration.
>> 
>>>> 2012-11-21 13:40:58,283 INFO
>> 
>>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> 
>>>> SASL-authenticate because the default JAAS configuration section 'Client'
>> 
>>>> could not be found. If you are not using SASL, you may ignore this. On the
>> 
>>>> other hand, if you expected SASL to work, please fix your JAAS
>> 
>>>> configuration.
>> 
>>>> 2012-11-21 13:40:58,283 INFO org.apache.zookeeper.ClientCnxn: Socket
>> 
>>>> connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181,
>> 
>>>> initiating session
>> 
>>>> 2012-11-21 13:40:58,284 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> 
>>>> read additional data from server sessionid 0x0, likely server has closed
>> 
>>>> socket, closing socket connection and attempting reconnect
>> 
>>>> 2012-11-21 13:40:58,726 INFO org.apache.zookeeper.ClientCnxn: Opening
>> 
>>>> socket connection to server /127.0.0.1:2181
>> 
>>>> 2012-11-21 13:40:58,726 WARN
>> 
>>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> 
>>>> java.lang.SecurityException: Unable to locate a login configuration
>> 
>>>> occurred when trying to find JAAS configuration.
>> 
>>>> 2012-11-21 13:40:58,726 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>>> 2012-11-21 13:40:58,726 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1/127.0.0.1:2181, initiating session
>>>> 2012-11-21 13:40:58,727 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>>> 2012-11-21 13:40:59,367 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /10.64.155.52:2181
>>>> 2012-11-21 13:40:59,368 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>>> 2012-11-21 13:40:59,368 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>>> 2012-11-21 13:40:59,368 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181, initiating session
>>>> 2012-11-21 13:40:59,369 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>>> 2012-11-21 13:41:00,660 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
>>>> 2012-11-21 13:41:00,660 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>>> 2012-11-21 13:41:00,660 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>>> 2012-11-21 13:41:00,660 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
>>>> 2012-11-21 13:41:00,661 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>>> 2012-11-21 13:41:00,761 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
>>>> 2012-11-21 13:41:00,762 INFO org.apache.hadoop.hbase.util.RetryCounter: Sleeping 4000ms before retry #2...
>>>> [the same cycle — SASL warning, "Socket connection established", "Unable to read additional data ... attempting reconnect" — repeats against hadoop2 (10.64.155.53), hadoop1 (127.0.0.1 and 10.64.155.52) and hadoop3 (10.64.155.54)]
>>>> 2012-11-21 13:41:04,835 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
>>>> 2012-11-21 13:41:04,835 INFO org.apache.hadoop.hbase.util.RetryCounter: Sleeping 8000ms before retry #3...
>>>> [the same reconnect cycle repeats against hadoop1, hadoop3 and hadoop2 until the retries are exhausted]
>>>> 2012-11-21 13:41:12,962 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
>>>> 2012-11-21 13:41:12,962 ERROR org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: ZooKeeper exists failed after 3 retries
>>>> 2012-11-21 13:41:12,963 WARN org.apache.hadoop.hbase.zookeeper.ZKUtil: regionserver:60020 Unable to set watcher on znode /hbase/master
>>>> org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
>>>>    at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
>>>>    at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>>>>    at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1021)
>>>>    at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:166)
>>>>    at org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists(ZKUtil.java:230)
>>>>    at org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:82)
>>>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:597)
>>>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:560)
>>>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:669)
>>>>    at java.lang.Thread.run(Thread.java:662)
>>>> 2012-11-21 13:41:12,966 ERROR org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher: regionserver:60020 Received unexpected KeeperException, re-throwing exception
>>>> org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
>>>> [same stack trace as above]
>>>> 2012-11-21 13:41:12,966 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server hadoop2.aj.c2fse.northgrum.com,60020,1353523257570: Unexpected exception during initialization, aborting
>>>> org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
>>>> [same stack trace as above]
>>>> 2012-11-21 13:41:12,969 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: []
>>>> 2012-11-21 13:41:12,969 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Unexpected exception during initialization, aborting
>>>> [two further reconnect attempts against hadoop3 and hadoop2 fail with the same SASL warning and closed socket]
>>>> 2012-11-21 13:41:15,975 INFO org.apache.hadoop.ipc.HBaseServer: Stopping server on 60020
>>>> 2012-11-21 13:41:15,975 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server hadoop2.aj.c2fse.northgrum.com,60020,1353523257570: Initialization of RS failed.  Hence aborting RS.
>>>> java.io.IOException: Received the shutdown message while waiting.
>>>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.blockAndCheckIfStopped(HRegionServer.java:623)
>>>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:598)
>>>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:560)
>>>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:669)
>>>>    at java.lang.Thread.run(Thread.java:662)
>>>> 2012-11-21 13:41:15,976 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: []
>>>> 2012-11-21 13:41:15,976 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Initialization of RS failed.  Hence aborting RS.
>>>> 2012-11-21 13:41:15,978 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Registered RegionServer MXBean
>>>> 2012-11-21 13:41:15,980 INFO org.apache.hadoop.hbase.regionserver.ShutdownHook: Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=Thread[Thread-5,5,main]
>>>> 2012-11-21 13:41:15,980 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Shutdown hook
>>>> 2012-11-21 13:41:15,981 INFO org.apache.hadoop.hbase.regionserver.ShutdownHook: Starting fs shutdown hook thread.
>>>> 2012-11-21 13:41:15,981 INFO org.apache.hadoop.hbase.regionserver.ShutdownHook: Shutdown hook finished.
>> 
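The recurring "hadoop1/127.0.0.1:2181" in the region server log above is the symptom worth checking first: the client is being handed a quorum address that resolves to loopback, so it connects to itself instead of the ZooKeeper node. A quick way to confirm this is sketched below; the hostname `hadoop1` and the sample `/etc/hosts` line are taken from this thread, so adapt them to your cluster.

```shell
# On a live node you would first check name resolution and ZooKeeper health:
#   getent hosts hadoop1
#   echo ruok | nc hadoop1 2181    # a healthy ZooKeeper answers "imok"
# The resolution check can also be scripted against an /etc/hosts-style line:
hosts_line='127.0.0.1 hadoop1 localhost'   # sample entry, adapt to your file
ip=$(printf '%s\n' "$hosts_line" | awk '/hadoop1/ {print $1}')
if [ "$ip" = "127.0.0.1" ]; then
  # Remote region servers given this mapping will try to reach ZK on their
  # own loopback interface, producing exactly the ConnectionLoss seen above.
  echo "hadoop1 maps to loopback"
fi
```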
>>>> 
>>>> Finally, in the zookeeper log from hadoop1 I have:
>>>> Wed Nov 21 13:40:19 EST 2012 Starting zookeeper on hadoop1
>>>> core file size          (blocks, -c) 0
>>>> data seg size           (kbytes, -d) unlimited
>>>> scheduling priority             (-e) 0
>>>> file size               (blocks, -f) unlimited
>>>> pending signals                 (-i) 386178
>>>> max locked memory       (kbytes, -l) 64
>>>> max memory size         (kbytes, -m) unlimited
>>>> open files                      (-n) 1024
>>>> pipe size            (512 bytes, -p) 8
>>>> POSIX message queues     (bytes, -q) 819200
>>>> real-time priority              (-r) 0
>>>> stack size              (kbytes, -s) 8192
>>>> cpu time               (seconds, -t) unlimited
>>>> max user processes              (-u) 386178
>>>> virtual memory          (kbytes, -v) unlimited
>>>> file locks                      (-x) unlimited
>>>> 2012-11-21 13:40:20,279 INFO org.apache.zookeeper.server.quorum.QuorumPeerConfig: Defaulting to majority quorums
>>>> 2012-11-21 13:40:20,334 DEBUG org.apache.hadoop.hbase.util.Bytes: preRegister called. Server=com.sun.jmx.mbeanserver.JmxMBeanServer@538f1d7e, name=log4j:logger=org.apache.hadoop.hbase.util.Bytes
>>>> [similar preRegister DEBUG lines follow for VersionInfo, ZKConfig, HBaseConfiguration and org.apache.hadoop.hbase]
>>>> 2012-11-21 13:40:20,336 INFO org.apache.zookeeper.server.quorum.QuorumPeerMain: Starting quorum peer
>>>> 2012-11-21 13:40:20,356 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: binding to port 0.0.0.0/0.0.0.0:2181
>>>> 2012-11-21 13:40:20,378 INFO org.apache.zookeeper.server.quorum.QuorumPeer: tickTime set to 3000
>>>> 2012-11-21 13:40:20,379 INFO org.apache.zookeeper.server.quorum.QuorumPeer: minSessionTimeout set to -1
>>>> 2012-11-21 13:40:20,379 INFO org.apache.zookeeper.server.quorum.QuorumPeer: maxSessionTimeout set to 180000
>>>> 2012-11-21 13:40:20,379 INFO org.apache.zookeeper.server.quorum.QuorumPeer: initLimit set to 10
>>>> 2012-11-21 13:40:20,395 INFO org.apache.zookeeper.server.quorum.QuorumPeer: acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
>>>> 2012-11-21 13:40:20,442 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: My election bind port: 0.0.0.0/0.0.0.0:3888
>>>> 2012-11-21 13:40:20,456 INFO org.apache.zookeeper.server.quorum.QuorumPeer: LOOKING
>>>> 2012-11-21 13:40:20,458 INFO org.apache.zookeeper.server.quorum.FastLeaderElection: New election. My id =  0, proposed zxid=0x0
>>>> 2012-11-21 13:40:20,460 INFO org.apache.zookeeper.server.quorum.FastLeaderElection: Notification: 0 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)
>>>> 2012-11-21 13:40:20,464 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (1, 0)
>>>> 2012-11-21 13:40:20,465 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (2, 0)
>>>> 2012-11-21 13:40:20,663 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (2, 0)
>>>> 2012-11-21 13:40:20,663 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (1, 0)
>>>> 2012-11-21 13:40:20,663 INFO org.apache.zookeeper.server.quorum.FastLeaderElection: Notification time out: 400
>>>> 2012-11-21 13:40:21,064 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (2, 0)
>> 
>>>> 2012-11-21 13:40:21,065 INFO
>> 
>>>> org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server
>> 
>>>> identifier, so dropping the connection: (1, 0)
>> 
>>>> 2012-11-21 13:40:21,065 INFO
>> 
>>>> org.apache.zookeeper.server.quorum.FastLeaderElection: Notification time
>> 
>>>> out: 800
>> 
>>>> 2012-11-21 13:40:21,866 INFO
>> 
>>>> org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server
>> 
>>>> identifier, so dropping the connection: (2, 0)
>> 
>>>> 2012-11-21 13:40:21,866 INFO
>> 
>>>> org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server
>> 
>>>> identifier, so dropping the connection: (1, 0)
>> 
>>>> 2012-11-21 13:40:21,866 INFO
>> 
>>>> org.apache.zookeeper.server.quorum.FastLeaderElection: Notification time
>> 
>>>> out: 1600
>> 
>>>> 2012-11-21 13:40:22,113 INFO
>> 
>>>> org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket
>> 
>>>> connection from /127.0.0.1:55216
>> 
>>>> 2012-11-21 13:40:22,122 WARN org.apache.zookeeper.server.NIOServerCnxn:
>> 
>>>> Exception causing close of session 0x0 due to java.io.IOException:
>> 
>>>> ZooKeeperServer not running
>> 
>>>> 2012-11-21 13:40:22,122 INFO org.apache.zookeeper.server.NIOServerCnxn:
>> 
>>>> Closed socket connection for client /127.0.0.1:55216 (no session
>> 
>>>> established for client)
>> 
>>>> 2012-11-21 13:40:22,373 INFO
>> 
>>>> org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket
>> 
>>>> connection from /10.64.155.52:60339
>> 
>>>> 2012-11-21 13:40:22,374 WARN org.apache.zookeeper.server.NIOServerCnxn:
>> 
>>>> Exception causing close of session 0x0 due to java.io.IOException:
>> 
>>>> ZooKeeperServer not running
>> 
>>>> 2012-11-21 13:40:22,374 INFO org.apache.zookeeper.server.NIOServerCnxn:
>> 
>>>> Closed socket connection for client /10.64.155.52:60339 (no session
>> 
>>>> established for client)
>> 
>>>> 2012-11-21 13:40:22,968 INFO
>> 
>>>> org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket
>> 
>>>> connection from /10.64.155.52:60342
>> 
>>>> 2012-11-21 13:40:22,968 WARN org.apache.zookeeper.server.NIOServerCnxn:
>> 
>>>> Exception causing close of session 0x0 due to java.io.IOException:
>> 
>>>> ZooKeeperServer not running
>> 
>>>> 2012-11-21 13:40:22,968 INFO org.apache.zookeeper.server.NIOServerCnxn:
>> 
>>>> Closed socket connection for client /10.64.155.52:60342 (no session
>> 
>>>> established for client)
>> 
>>>> 2012-11-21 13:40:23,187 INFO
>> 
>>>> org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket
>> 
>>>> connection from /127.0.0.1:55221
>> 
>>>> 2012-11-21 13:40:23,188 WARN org.apache.zookeeper.server.NIOServerCnxn:
>> 
>>>> Exception causing close of session 0x0 due to java.io.IOException:
>> 
>>>> ZooKeeperServer not running
>> 
>>>> 2012-11-21 13:40:23,188 INFO org.apache.zookeeper.server.NIOServerCnxn:
>> 
>>>> Closed socket connection for client /127.0.0.1:55221 (no session
>> 
>>>> established for client)
>> 
>>>> 2012-11-21 13:40:23,467 INFO
>> 
>>>> org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server
>> 
>>>> identifier, so dropping the connection: (2, 0)
>> 
>>>> 2012-11-21 13:40:23,467 INFO
>> 
>>>> org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server
>> 
>>>> identifier, so dropping the connection: (1, 0)
>> 
>>>> 2012-11-21 13:40:23,467 INFO
>> 
>>>> org.apache.zookeeper.server.quorum.FastLeaderElection: Notification time
>> 
>>>> out: 3200
>> 
>>>> 2012-11-21 13:40:24,116 INFO
>> 
>>>> org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket
>> 
>>>> connection from /10.64.155.54:35599
>> 
>>>> 2012-11-21 13:40:24,117 WARN org.apache.zookeeper.server.NIOServerCnxn:
>> 
>>>> Exception causing close of session 0x0 due to java.io.IOException:
>> 
>>>> ZooKeeperServer not running
>> 
>>>> 2012-11-21 13:40:24,117 INFO org.apache.zookeeper.server.NIOServerCnxn:
>> 
>>>> Closed socket connection for client /10.64.155.54:35599 (no session
>> 
>>>> established for client)
>> 
>>>> 2012-11-21 13:40:24,176 INFO
>> 
>>>> org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket
>> 
>>>> connection from /127.0.0.1:55225
>> 
>>>> ...
>> 
>>>> 
>> 
>>>> Here are the logs when I manage ZK myself (showing the 127.0.0.1 problem in /etc/hosts):
>>>> Wed Nov 21 14:46:21 EST 2012 Stopping hbase (via master)
>>>> Wed Nov 21 14:46:35 EST 2012 Starting master on hadoop1
>>>> core file size          (blocks, -c) 0
>>>> data seg size           (kbytes, -d) unlimited
>>>> scheduling priority             (-e) 0
>>>> file size               (blocks, -f) unlimited
>>>> pending signals                 (-i) 386178
>>>> max locked memory       (kbytes, -l) 64
>>>> max memory size         (kbytes, -m) unlimited
>>>> open files                      (-n) 1024
>>>> pipe size            (512 bytes, -p) 8
>>>> POSIX message queues     (bytes, -q) 819200
>>>> real-time priority              (-r) 0
>>>> stack size              (kbytes, -s) 8192
>>>> cpu time               (seconds, -t) unlimited
>>>> max user processes              (-u) 386178
>>>> virtual memory          (kbytes, -v) unlimited
>>>> file locks                      (-x) unlimited
>>>> 2012-11-21 14:46:36,405 INFO org.apache.hadoop.hbase.util.VersionInfo: HBase 0.94.2
>>>> 2012-11-21 14:46:36,405 INFO org.apache.hadoop.hbase.util.VersionInfo: Subversion https://svn.apache.org/repos/asf/hbase/branches/0.94 -r 1395367
>>>> 2012-11-21 14:46:36,405 INFO org.apache.hadoop.hbase.util.VersionInfo: Compiled by jenkins on Sun Oct  7 19:11:01 UTC 2012
>>>> 2012-11-21 14:46:36,555 DEBUG org.apache.hadoop.hbase.master.HMaster: Set serverside HConnection retries=100
>>>> 2012-11-21 14:46:36,822 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
>>>> 2012-11-21 14:46:36,825 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
>>>> 2012-11-21 14:46:36,829 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
>>>> 2012-11-21 14:46:36,832 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
>>>> 2012-11-21 14:46:36,835 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
>>>> 2012-11-21 14:46:36,838 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
>>>> 2012-11-21 14:46:36,842 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
>>>> 2012-11-21 14:46:36,845 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
>>>> 2012-11-21 14:46:36,848 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
>>>> 2012-11-21 14:46:36,851 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
>>>> 2012-11-21 14:46:36,862 INFO org.apache.hadoop.hbase.ipc.HBaseRpcMetrics: Initializing RPC Metrics with hostName=HMaster, port=60000
>>>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
>>>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:host.name=hadoop1
>>>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.version=1.6.0_25
>>>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
>>>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.home=/home/ngc/jdk1.6.0_25/jre
>>>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.class.path=/home/ngc/hbase-0.94.2/conf:/home/ngc/jdk1.6.0_25//lib/tools.jar:/home/ngc/hbase-0.94.2/bin/..:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2.jar:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2-tests.jar:/home/ngc/hbase-0.94.2/bin/../lib/activation-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/asm-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-ipc-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-1.7.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-core-1.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-cli-1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-codec-1.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-collections-3.2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-configuration-1.6.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-digester-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-el-1.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-httpclient-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-io-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-lang-2.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-logging-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-math-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-net-1.4.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/core-3.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/guava-11.0.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/hadoop-core-1.0.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/high-scale-lib-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpclient-4.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpcore-4.1.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-core-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-jaxrs-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-mapper-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-xc-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jamon-runtime-2.3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-compiler-5.5.23.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-runtime-5.5.23.jar:/home/ngc/hba
se-0.94.2/bin/../lib/jaxb-api-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jaxb-impl-2.2.3-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-core-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-json-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-server-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jettison-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-util-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jruby-complete-1.6.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-api-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsr305-1.3.9.jar:/home/ngc/hbase-0.94.2/bin/../lib/junit-4.10-HBASE-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/libthrift-0.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/log4j-1.2.16.jar:/home/ngc/hbase-0.94.2/bin/../lib/metrics-core-2.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/netty-3.2.4.Final.jar:/home/ngc/hbase-0.94.2/bin/../lib/protobuf-java-2.4.0a.jar:/home/ngc/hbase-0.94.2/bin/../lib/servlet-api-2.5-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-api-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-log4j12-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/snappy-java-1.0.3.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/stax-api-1.0.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/velocity-1.7.jar:/home/ngc/hbase-0.94.2/bin/../lib/xmlenc-0.52.jar:/home/ngc/hbase-0.94.2/bin/../lib/zookeeper-3.4.3.jar:/home/zookeeper-3.4.4/conf:/home/zookeeper-3.4.4:/home/ngc/hadoop-1.0.4/libexec/../conf:/home/ngc/jdk1.6.0_25/lib/tools.jar:/home/ngc/hadoop-1.0.4/libexec/..:/home/ngc/hadoop-1.0.4/libexec/../hadoop-core-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/asm-3.2.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/aspectjrt-1.6.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/aspectjtools-1.6.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-beanutils-1.7.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-beanutils-core-1.8.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-cli-1.2.jar:/home/ngc/had
oop-1.0.4/libexec/../lib/commons-codec-1.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-collections-3.2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-configuration-1.6.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-daemon-1.0.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-digester-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-el-1.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-httpclient-3.0.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-io-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-lang-2.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-logging-1.1.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-logging-api-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-math-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-net-1.4.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/core-3.1.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-capacity-scheduler-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-fairscheduler-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-thriftfs-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hsqldb-1.8.0.10.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jackson-core-asl-1.8.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jasper-compiler-5.5.12.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jasper-runtime-5.5.12.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jdeb-0.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-core-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-json-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-server-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jets3t-0.6.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jetty-6.1.26.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jetty-util-6.1.26.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsch-0.1.42.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/junit-4.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/kfs-0.2.2.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/log4j-1.2.15.jar:/home/ngc/h
adoop-1.0.4/libexec/../lib/mockito-all-1.8.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/oro-2.0.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/servlet-api-2.5-20081211.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/slf4j-api-1.4.3.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/slf4j-log4j12-1.4.3.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/xmlenc-0.52.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-api-2.1.jar
>>>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.library.path=/home/ngc/hadoop-1.0.4/libexec/../lib/native/Linux-amd64-64:/home/ngc/hbase-0.94.2/bin/../lib/native/Linux-amd64-64
>>>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
>>>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
>>>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.name=Linux
>>>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.arch=amd64
>>>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.version=3.2.0-24-generic
>>>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.name=ngc
>>>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.home=/home/ngc
>>>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.dir=/home/ngc/hbase-0.94.2
>>>> 2012-11-21 14:46:37,072 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=hadoop2:2181,hadoop1:2181,hadoop3:2181 sessionTimeout=180000 watcher=master:60000
>>>> 2012-11-21 14:46:37,087 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /10.64.155.54:2181
>>>> 2012-11-21 14:46:37,087 INFO org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier of this process is 12692@hadoop1
>>>> 2012-11-21 14:46:37,095 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>>> 2012-11-21 14:46:37,095 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>>> 2012-11-21 14:46:37,098 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
>>>> 2012-11-21 14:46:37,131 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, sessionid = 0x33b247f4c380000, negotiated timeout = 40000
>>>> 2012-11-21 14:46:37,224 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server Responder: starting
>>>> 2012-11-21 14:46:37,225 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server listener on 60000: starting
>>>> 2012-11-21 14:46:37,240 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 0 on 60000: starting
>>>> 2012-11-21 14:46:37,241 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 60000: starting
>>>> 2012-11-21 14:46:37,241 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 2 on 60000: starting
>>>> 2012-11-21 14:46:37,241 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60000: starting
>>>> 2012-11-21 14:46:37,242 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 4 on 60000: starting
>>>> 2012-11-21 14:46:37,246 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 5 on 60000: starting
>>>> 2012-11-21 14:46:37,246 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 6 on 60000: starting
>>>> 2012-11-21 14:46:37,247 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 7 on 60000: starting
>>>> 2012-11-21 14:46:37,247 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 8 on 60000: starting
>>>> 2012-11-21 14:46:37,247 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 9 on 60000: starting
>>>> 2012-11-21 14:46:37,248 INFO org.apache.hadoop.ipc.HBaseServer: REPL IPC Server handler 0 on 60000: starting
>>>> 2012-11-21 14:46:37,248 INFO org.apache.hadoop.ipc.HBaseServer: REPL IPC Server handler 1 on 60000: starting
>>>> 2012-11-21 14:46:37,248 INFO org.apache.hadoop.ipc.HBaseServer: REPL IPC Server handler 2 on 60000: starting
>>>> 2012-11-21 14:46:37,253 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=Master, sessionId=hadoop1,60000,1353527196915
>>>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: revision
>>>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsUser
>>>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsDate
>>>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsUrl
>>>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: date
>>>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsRevision
>>>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: user
>>>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsVersion
>>>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: url
>>>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: version
>>>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: new MBeanInfo
>>>> 2012-11-21 14:46:37,272 INFO org.apache.hadoop.hbase.metrics: new MBeanInfo
>>>> 2012-11-21 14:46:37,272 INFO org.apache.hadoop.hbase.master.metrics.MasterMetrics: Initialized
>>>> 2012-11-21 14:46:37,299 INFO org.apache.hadoop.hbase.master.ActiveMasterManager: Deleting ZNode for /hbase/backup-masters/hadoop1,60000,1353527196915 from backup master directory
>>>> 2012-11-21 14:46:37,320 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Node /hbase/backup-masters/hadoop1,60000,1353527196915 already deleted, and this is not a retry
>>>> 2012-11-21 14:46:37,321 INFO org.apache.hadoop.hbase.master.ActiveMasterManager: Master=hadoop1,60000,1353527196915
>>>> 2012-11-21 14:46:38,475 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 0 time(s).
>>>> 2012-11-21 14:46:39,476 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 1 time(s).
>>>> 2012-11-21 14:46:40,477 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 2 time(s).
>>>> 2012-11-21 14:46:41,477 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 3 time(s).
>>>> 2012-11-21 14:46:42,478 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 4 time(s).
>>>> 2012-11-21 14:46:43,478 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 5 time(s).
>>>> 2012-11-21 14:46:44,479 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 6 time(s).
>>>> 2012-11-21 14:46:45,479 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 7 time(s).
>>>> 2012-11-21 14:46:46,480 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 8 time(s).
>>>> 2012-11-21 14:46:47,480 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 9 time(s).
>>>> 2012-11-21 14:46:47,483 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown.
>>>> java.net.ConnectException: Call to hadoop1/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused
>>>>    at org.apache.hadoop.ipc.Client.wrapException(Client.java:1099)
>>>>    at org.apache.hadoop.ipc.Client.call(Client.java:1075)
>>>>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
>>>>    at $Proxy10.getProtocolVersion(Unknown Source)
>>>>    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
>>>>    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
>>>>    at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
>>>>    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
>>>>    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
>>>>    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
>>>>    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
>>>>    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>>>>    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
>>>>    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
>>>>    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
>>>>    at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:561)
>>>>    at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:94)
>>>>    at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:482)
>>>>  ...
>>>>
>>>> [Message clipped]
>> 
>>> 
>> 
>> 
> 
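The FATAL error in the quoted log is the /etc/hosts problem in a nutshell: the master on hadoop1 resolves its own hostname to 127.0.0.1 and then tries to reach the NameNode at hadoop1/127.0.0.1:9000, where nothing is listening. As a minimal sketch of a sanity check you can run on each node (the hostnames here are just the ones from this thread; substitute your own), this verifies whether a name maps to a loopback address:

```python
import socket

def resolves_to_loopback(hostname):
    """Return True if `hostname` resolves to a 127.x.x.x address,
    False if it resolves elsewhere, None if it does not resolve.

    Cluster node names (e.g. 'hadoop1') should NOT map to loopback,
    otherwise daemons advertise/contact 127.0.0.1 and fail as above.
    """
    try:
        return socket.gethostbyname(hostname).startswith("127.")
    except socket.gaierror:
        return None  # name does not resolve at all

# 'localhost' is expected to be loopback; your cluster hosts should not be.
print(resolves_to_loopback("localhost"))
```

If this prints True for one of your cluster node names, fix the /etc/hosts entry so that name maps to the node's real network address.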

