Subject: Failed to create /hbase.... KeeperErrorCode = ConnectionLoss for /hbase
From: jayavelu jaisenthilkumar <joysenthil@gmail.com>
To: hbase-user@hadoop.apache.org
Date: Wed, 31 Mar 2010 18:24:30 +0200

Hi,

My cluster has 1 master and 2 slaves; one of the slaves still requires a password for ssh. I am using Hadoop 0.20.1 and HBase 0.20.3 (a fresh install, not an upgrade).

1) On the slave that asks for a password, I could not get rid of the prompt: I removed the whole .ssh directory and re-ran ssh-keygen with an empty passphrase, but "ssh localhost" still asks for the password.

2) Hadoop itself works: I can run MapReduce jobs successfully, following Michael Noll's "Running Hadoop On Ubuntu Linux (Multi-Node Cluster)" tutorial.

3) I am now following the HBase 0.20.3 API overview documentation. It does not clearly describe how to run distributed-mode HBase on top of a multi-node Hadoop cluster. I started HDFS with start-dfs.sh and HBase with start-hbase.sh. The master log shows a connection loss on /hbase. (Is the /hbase znode created by HBase itself, or do we have to create it ourselves?)

2010-03-31 16:45:57,850 INFO org.apache.zookeeper.ClientCnxn: Attempting connection to server Hadoopserver/192.168.1.65:2222
2010-03-31 16:45:57,858 INFO org.apache.zookeeper.ClientCnxn: Priming connection to java.nio.channels.SocketChannel[connected local=/192.168.1.65:43017 remote=Hadoopserver/192.168.1.65:2222]
2010-03-31 16:45:57,881 INFO org.apache.zookeeper.ClientCnxn: Server connection successful
2010-03-31 16:45:57,883 WARN org.apache.zookeeper.ClientCnxn: Exception closing session 0x0 to sun.nio.ch.SelectionKeyImpl@11c2b67
java.io.IOException: Read error rc = -1 java.nio.DirectByteBuffer[pos=0 lim=4 cap=4]
        at org.apache.zookeeper.ClientCnxn$SendThread.doIO(ClientCnxn.java:701)
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:945)
2010-03-31 16:45:57,885 WARN org.apache.zookeeper.ClientCnxn: Ignoring exception during shutdown input
java.net.SocketException: Transport endpoint is not connected
        at sun.nio.ch.SocketChannelImpl.shutdown(Native Method)
        at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:640)
        at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
        at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:999)
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
2010-03-31 16:45:57,885 WARN org.apache.zookeeper.ClientCnxn: Ignoring exception during shutdown output
java.net.SocketException: Transport endpoint is not connected
        at sun.nio.ch.SocketChannelImpl.shutdown(Native Method)
        at sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:651)
        at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
        at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
2010-03-31 16:45:57,933 INFO org.apache.hadoop.hbase.master.RegionManager: -ROOT- region unset (but not set to be reassigned)
2010-03-31 16:45:57,934 INFO org.apache.hadoop.hbase.master.RegionManager: ROOT inserted into regionsInTransition
2010-03-31 16:45:58,024 DEBUG org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper: Failed to read: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
2010-03-31 16:45:58,422 INFO org.apache.zookeeper.ClientCnxn: Attempting connection to server Hadoopclient1/192.168.1.2:2222
2010-03-31 16:45:58,423 INFO org.apache.zookeeper.ClientCnxn: Priming connection to java.nio.channels.SocketChannel[connected local=/192.168.1.65:51219 remote=Hadoopclient1/192.168.1.2:2222]
2010-03-31 16:45:58,423 INFO org.apache.zookeeper.ClientCnxn: Server connection successful
2010-03-31 16:45:58,436 WARN org.apache.zookeeper.ClientCnxn: Exception closing session 0x0 to sun.nio.ch.SelectionKeyImpl@17b6643
java.io.IOException: Read error rc = -1 java.nio.DirectByteBuffer[pos=0 lim=4 cap=4]
        at org.apache.zookeeper.ClientCnxn$SendThread.doIO(ClientCnxn.java:701)
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:945)
2010-03-31 16:45:58,437 WARN org.apache.zookeeper.ClientCnxn: Ignoring exception during shutdown input
java.net.SocketException: Transport endpoint is not connected
        at sun.nio.ch.SocketChannelImpl.shutdown(Native Method)
        at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:640)
        at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
        at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:999)
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
2010-03-31 16:45:58,437 WARN org.apache.zookeeper.ClientCnxn: Ignoring exception during shutdown output
java.net.SocketException: Transport endpoint is not connected
        at sun.nio.ch.SocketChannelImpl.shutdown(Native Method)
        at sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:651)
        at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
        at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
2010-03-31 16:45:58,537 WARN org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper: Failed to create /hbase -- check quorum servers, currently=Hadoopclient1:2222,Hadoopclient:2222,Hadoopserver:2222
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:90)
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:42)
        at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:608)
        at org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper.ensureExists(ZooKeeperWrapper.java:405)
        at org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper.ensureParentExists(ZooKeeperWrapper.java:428)
        at org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper.writeMasterAddress(ZooKeeperWrapper.java:516)
        at org.apache.hadoop.hbase.master.HMaster.writeAddressToZooKeeper(HMaster.java:263)
        at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:245)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
        at org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:1241)
        at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1282)
2010-03-31 16:45:58,549 INFO org.apache.zookeeper.ClientCnxn: Attempting connection to server Hadoopclient/192.168.1.3:2222
2010-03-31 16:45:58,550 INFO org.apache.zookeeper.ClientCnxn: Priming connection to java.nio.channels.SocketChannel[connected local=/192.168.1.65:56142 remote=Hadoopclient/192.168.1.3:2222]
2010-03-31 16:45:58,550 INFO org.apache.zookeeper.ClientCnxn: Server connection successful
2010-03-31 16:45:58,577 WARN org.apache.zookeeper.ClientCnxn: Exception closing session 0x0 to sun.nio.ch.SelectionKeyImpl@76e8a7
java.io.IOException: Read error rc = -1 java.nio.DirectByteBuffer[pos=0 lim=4 cap=4]
        at org.apache.zookeeper.ClientCnxn$SendThread.doIO(ClientCnxn.java:701)
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:945)
2010-03-31 16:45:58,577 WARN org.apache.zookeeper.ClientCnxn: Ignoring exception during shutdown input
java.net.SocketException: Transport endpoint is not connected
        at sun.nio.ch.SocketChannelImpl.shutdown(Native Method)
        at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:640)
        at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
        at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:999)
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
2010-03-31 16:45:58,577 WARN org.apache.zookeeper.ClientCnxn: Ignoring exception during shutdown output
java.net.SocketException: Transport endpoint is not connected
        at sun.nio.ch.SocketChannelImpl.shutdown(Native Method)
        at sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:651)
        at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
        at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
2010-03-31 16:45:58,678 DEBUG org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper: Failed to read: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master

My hbase-site.xml on the master:

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://Hadoopserver:54310/hbase</value>
    <description>The directory shared by region servers. Should be
      fully-qualified to include the filesystem to use, e.g.
      hdfs://NAMENODE_SERVER:PORT/HBASE_ROOTDIR.</description>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
    <description>The mode the cluster will be in. Possible values are
      false: standalone and pseudo-distributed setups with managed ZooKeeper;
      true: fully-distributed with unmanaged ZooKeeper quorum
      (see hbase-env.sh).</description>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>Hadoopserver,Hadoopclient1,Hadoopclient</value>
    <description>Comma separated list of servers in the ZooKeeper quorum.
      For example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com".
      By default this is set to localhost for local and pseudo-distributed
      modes of operation. For a fully-distributed setup, this should be set
      to a full list of ZooKeeper quorum servers. If HBASE_MANAGES_ZK is set
      in hbase-env.sh this is the list of servers which we will start/stop
      ZooKeeper on.</description>
  </property>
  <property>
    <name>hbase.zookeeper.property.tickTime</name>
    <value>1</value>
    <description>Property from ZooKeeper's config zoo.cfg. The number of
      milliseconds of each tick. See zookeeper.session.timeout
      description.</description>
  </property>
  <property>
    <name>zookeeper.retries</name>
    <value>5</value>
    <description>How many times to retry connections to ZooKeeper. Used for
      reading/writing root region location, checking/writing out of safe
      mode. Used together with ${zookeeper.pause} in an exponential backoff
      fashion when making queries to ZooKeeper.</description>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2222</value>
    <description>Property from ZooKeeper's config zoo.cfg. The port at which
      the clients will connect.</description>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
    <description>Default block replication. The actual number of replications
      can be specified when the file is created. The default is used if
      replication is not specified at create time.</description>
  </property>
</configuration>

The hbase-site.xml on slave1 and slave2 sets the same hbase.rootdir, hbase.cluster.distributed, hbase.zookeeper.quorum, hbase.zookeeper.property.clientPort, and dfs.replication values as the master; it only omits the hbase.zookeeper.property.tickTime and zookeeper.retries overrides.

The regionservers file on the master (Hadoopserver):

Hadoopserver
Hadoopclient1
Hadoopclient

The regionservers file on the slaves:

localhost

I have been blocked on this error for the past week, and much googling has not turned up a solution.

Regards,
senthil
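P.S. In case it helps anyone reproduce this: a minimal sketch of how to check whether each server listed in hbase.zookeeper.quorum is actually answering on the configured clientPort (2222 in my config), using ZooKeeper's standard "ruok" four-letter command. The helper name zk_ruok is my own; the host names are the ones from my hbase-site.xml.

```python
import socket


def zk_ruok(host, port=2222, timeout=3.0):
    """Send ZooKeeper's four-letter 'ruok' command and return the raw reply.

    A healthy ZooKeeper server answers b'imok'; a socket error means nothing
    is listening on that host:port at all (which would explain the
    ConnectionLoss in the master log above).
    """
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(b"ruok")
        return sock.recv(16)
```

Run against each quorum member, e.g. zk_ruok("Hadoopserver"), zk_ruok("Hadoopclient1"), and zk_ruok("Hadoopclient"); any host that raises an OSError instead of returning b'imok' is a server HBase cannot reach.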