hbase-user mailing list archives

From Ted Yu <yuzhih...@gmail.com>
Subject Re: Zookeeper timeout exception seen whenever too much of data is being pushed to hbase
Date Wed, 18 Jun 2014 17:35:16 GMT
There was a recent thread related to using HTablePool:

http://search-hadoop.com/m/DHED4zrOq61/HBase+with+multiple+threads&subj=+Discuss+HBase+with+multiple+threads

Please take a look.
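For what it's worth, HTablePool has been deprecated in favor of sharing a
single HConnection across your worker threads and getting a lightweight
HTableInterface from it per operation. A rough sketch of that pattern,
assuming the 0.96-era client API visible in your stack trace and placeholder
table / column family / qualifier / row names:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HConnection;
import org.apache.hadoop.hbase.client.HConnectionManager;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class SharedConnectionPut {

  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    // One HConnection for the whole process; it is thread-safe and owns the
    // single ZooKeeper session, region location cache and RPC resources.
    HConnection connection = HConnectionManager.createConnection(conf);
    try {
      put(connection, "mytable", "row-1");   // call this from each worker thread
    } finally {
      connection.close();                    // close once, at shutdown
    }
  }

  static void put(HConnection connection, String table, String row)
      throws IOException {
    // Tables obtained from the shared connection are cheap; take one per
    // operation (or per thread) and close it when done.
    HTableInterface t = connection.getTable(table);
    try {
      Put p = new Put(Bytes.toBytes(row));
      p.add(Bytes.toBytes("cf"), Bytes.toBytes("qual"), Bytes.toBytes("value"));
      t.put(p);
    } finally {
      t.close();
    }
  }
}

The connection is created once at startup, handed to the worker threads, and
closed only at shutdown; the tables themselves are just lightweight wrappers
around it.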


On Wed, Jun 18, 2014 at 10:20 AM, arunas <sivaram.aruna@gmail.com> wrote:

> Hi All,
>
> I basically have a thread pool whose task is to push data into HBase.
> However, whenever the data rate is very high, meaning I have many records
> to push into HBase at one time, the put API of the HBase client throws the
> following exception:
> org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase/meta-region-server
>         at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
>         at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>         at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1151)
>         at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:342)
>         at org.apache.hadoop.hbase.zookeeper.ZKUtil.getData(ZKUtil.java:683)
>         at org.apache.hadoop.hbase.zookeeper.ZKUtil.blockUntilAvailable(ZKUtil.java:1833)
>         at org.apache.hadoop.hbase.zookeeper.MetaRegionTracker.blockUntilAvailable(MetaRegionTracker.java:183)
>         at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getMetaRegionLocation(ZooKeeperRegistry.java:58)
>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1044)
>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:1134)
>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1047)
>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1004)
>         at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:325)
>         at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:191)
>         at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:164)
>         at org.apache.hadoop.hbase.client.HTableFactory.createHTableInterface(HTableFactory.java:39)
>         at org.apache.hadoop.hbase.client.HTablePool.createHTable(HTablePool.java:271)
>         at org.apache.hadoop.hbase.client.HTablePool.findOrCreateTable(HTablePool.java:201)
>         at org.apache.hadoop.hbase.client.HTablePool.getTable(HTablePool.java:180)
>         at com.narus.cdp.backend.hbase.ConfigurationLoader.getHTable(ConfigurationLoader.java:46)
>         at com.narus.cdp.backend.hbase.HBaseUtil.getTable(HBaseUtil.java:32)
>         at com.narus.cdp.backend.hbase.HBaseUtil.put(HBaseUtil.java:98)
>
> PS: I am using HTablePool to manage the HTable connections. I did increase
> zookeeper.session.timeout in hbase-site.xml, but that is not helping
> either.
>
>
>
> --
> View this message in context:
> http://apache-hbase.679495.n3.nabble.com/Zookeeper-timeout-exception-seen-whenever-too-much-of-data-is-being-pushed-to-hbase-tp4060560.html
> Sent from the HBase User mailing list archive at Nabble.com.
>
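On the zookeeper.session.timeout point in the PS: the value in hbase-site.xml
is only a request; the ZooKeeper ensemble caps it at its own maxSessionTimeout
(20 * tickTime by default, i.e. 40 seconds with the usual 2 second tick), so
raising it on the HBase side alone often has no effect. For reference, the
property looks like this (90000 is just an illustrative value):

<property>
  <name>zookeeper.session.timeout</name>
  <value>90000</value>
</property>

Session expiry under heavy write load is also frequently caused by long GC
pauses in the client JVM, which is worth ruling out before tuning the timeout
further.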
