hbase-dev mailing list archives

From Ted Yu <yuzhih...@gmail.com>
Subject Re: consistent KeeperException$ConnectionLossException
Date Tue, 04 Jan 2011 19:47:42 GMT
So I should be using HTablePool.
For 0.20.6, I didn't see ConnectionLossException this often.

I wonder if something changed from 0.20.6 to 0.90.
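
HTablePool (org.apache.hadoop.hbase.client.HTablePool, present in 0.90) keeps a bounded cache of HTable instances per table name: callers check a handle out with getTable() and hand it back with putTable(), instead of constructing a new HTable (and possibly a new ZooKeeper connection) per request. A self-contained sketch of that checkout/return pattern, assuming nothing beyond those two method names — the classes below are illustrative stand-ins, not the HBase implementation:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Stand-in for HTable: just carries the table name in this sketch.
class TableHandle {
    final String tableName;
    TableHandle(String tableName) { this.tableName = tableName; }
}

// Sketch of the reuse pattern HTablePool implements: a bounded per-table
// pool of handles, so repeated getTable() calls reuse existing handles
// rather than opening new connections each time.
class TablePoolSketch {
    private final Map<String, Deque<TableHandle>> pools = new HashMap<>();
    private final int maxSize;

    TablePoolSketch(int maxSize) { this.maxSize = maxSize; }

    // Reuse a pooled handle if one is available, otherwise open a new one.
    synchronized TableHandle getTable(String name) {
        Deque<TableHandle> pool =
            pools.computeIfAbsent(name, n -> new ArrayDeque<>());
        TableHandle t = pool.poll();
        return (t != null) ? t : new TableHandle(name);
    }

    // Return the handle to the pool; drop it if the pool is already full.
    synchronized void putTable(TableHandle t) {
        Deque<TableHandle> pool =
            pools.computeIfAbsent(t.tableName, n -> new ArrayDeque<>());
        if (pool.size() < maxSize) {
            pool.push(t);
        }
    }
}
```

The point of the bound (maxSize) is that the number of live handles — and so the number of underlying connections — stops growing with request volume.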

On Tue, Jan 4, 2011 at 11:29 AM, Stack <stack@duboce.net> wrote:

> Are you passing the same Configuration instance when creating your
> HTables?   See
> http://people.apache.org/~stack/hbase-0.90.0-candidate-2/docs/apidocs/org/apache/hadoop/hbase/client/HConnectionManager.html
> if not.  It explains how we figure whether zk client, rpc connections,
> etc. are shared or not.
>
> St.Ack
>
> On Tue, Jan 4, 2011 at 11:12 AM, Jean-Daniel Cryans <jdcryans@apache.org>
> wrote:
> > It's a ZooKeeper setting: by default you cannot have more than 30
> > connections to a ZK peer from the same IP.
> >
> > If HBase is starting ZK for you, do change
> > hbase.zookeeper.property.maxClientCnxns
> >
> > J-D
> >
> > On Tue, Jan 4, 2011 at 11:09 AM, Ted Yu <yuzhihong@gmail.com> wrote:
> >> Hi,
> >> I am using HBase 0.90 and our job fails consistently with the following
> >> exception:
> >>
> >> Caused by: org.apache.hadoop.hbase.ZooKeeperConnectionException:
> >> org.apache.zookeeper.KeeperException$ConnectionLossException:
> >> KeeperErrorCode = ConnectionLoss for /hbase
> >>        at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:147)
> >>        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getZooKeeperWatcher(HConnectionManager.java:1035)
> >>        ... 19 more
> >> Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException:
> >> KeeperErrorCode = ConnectionLoss for /hbase
> >>        at org.apache.zookeeper.KeeperException.create(KeeperException.java:90)
> >>        at org.apache.zookeeper.KeeperException.create(KeeperException.java:42)
> >>        at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:608)
> >>        at org.apache.hadoop.hbase.zookeeper.ZKUtil.createAndFailSilent(ZKUtil.java:902)
> >>        at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:133)
> >>        ... 20 more
> >>
> >> The ZooKeeper quorum runs on the same node as the NameNode. The HMaster is
> >> on another node. Hadoop is cdh3b2.
> >>
> >> In the ZooKeeper log, I see (10.202.50.79 is the same node where the
> >> exception above happened):
> >>
> >> 2011-01-04 18:47:40,633 WARN org.apache.zookeeper.server.NIOServerCnxn: Too many connections from /10.202.50.79 - max is 30
> >> 2011-01-04 18:47:41,187 WARN org.apache.zookeeper.server.NIOServerCnxn: Too many connections from /10.202.50.79 - max is 30
> >> 2011-01-04 18:47:42,375 WARN org.apache.zookeeper.server.NIOServerCnxn: Too many connections from /10.202.50.79 - max is 30
> >> 2011-01-04 18:47:42,447 WARN org.apache.zookeeper.server.NIOServerCnxn: Too many connections from /10.202.50.79 - max is 30
> >> 2011-01-04 18:47:43,113 WARN org.apache.zookeeper.server.NIOServerCnxn: EndOfStreamException: Unable to read additional data from client sessionid 0x12d5220eb970025, likely client has closed socket
> >> 2011-01-04 18:47:43,113 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed socket connection for client /10.202.50.79:37845 which had sessionid 0x12d5220eb970025
> >> 2011-01-04 18:47:43,113 WARN org.apache.zookeeper.server.NIOServerCnxn: EndOfStreamException: Unable to read additional data from client sessionid 0x12d5220eb970087, likely client has closed socket
> >>
> >> Please advise what parameter I should tune.
> >>
> >> Thanks
> >>
> >
>
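
The sharing rule Stack points at above can be pictured as a cache keyed by the Configuration: HTables created from one shared Configuration instance end up on one underlying connection (and so one ZooKeeper client), while a fresh Configuration per HTable opens a fresh ZooKeeper connection each time — which is exactly what exhausts the 30-connection quota in the log above. A minimal, self-contained sketch of that caching idea, using stand-in names (Config, Connection) rather than the real HBase classes:

```java
import java.util.IdentityHashMap;
import java.util.Map;

// Stand-in for org.apache.hadoop.conf.Configuration in this sketch.
class Config {}

// Stand-in for an HConnection holding one ZooKeeper client.
class Connection {}

// Sketch of connection caching as the HConnectionManager javadoc describes
// it: lookups with the same Configuration instance share one connection;
// a different Configuration instance gets its own connection.
class ConnectionCacheSketch {
    private static final Map<Config, Connection> CACHE = new IdentityHashMap<>();

    static synchronized Connection getConnection(Config conf) {
        return CACHE.computeIfAbsent(conf, c -> new Connection());
    }
}
```

So a job that builds a new Configuration inside a per-record or per-task loop ends up with one ZooKeeper connection per iteration; building the Configuration once and reusing it keeps the client on a single connection.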

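For J-D's suggestion: when HBase manages ZooKeeper itself, the per-IP connection cap can be raised in hbase-site.xml. The property name is the one he gives; the value 300 below is only an example, not a recommendation from the thread:

```xml
<property>
  <name>hbase.zookeeper.property.maxClientCnxns</name>
  <value>300</value>
</property>
```

Raising the cap treats the symptom; fixing connection reuse on the client side (shared Configuration or HTablePool) removes the cause.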