hbase-user mailing list archives

From Anoop John <anoop.hb...@gmail.com>
Subject Re: Zookeeper timeout exception seen whenever too much of data is being pushed to hbase
Date Thu, 19 Jun 2014 11:36:18 GMT
org.apache.hadoop.hbase.client.HTablePool.getTable(HTablePool.java:180)
        at com.narus.cdp.backend.hbase.ConfigurationLoader.getHTable(ConfigurationLoader.java:46)
        at com.narus.cdp.backend.hbase.HBaseUtil.getTable(HBaseUtil.java:32)
        at com.narus.cdp.backend.hbase.HBaseUtil.put(HBaseUtil.java:98)

In your HBaseUtil, after getting the table from the pool and doing the op,
are you returning the table to the pool using returnTable(HTableInterface
table)?
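A minimal sketch of the get/return pattern being asked about, assuming the old (0.94-era) HTablePool API; the table name, column family, and values are placeholders, and a running cluster is required:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.HTablePool;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class PoolReturnSketch {
    public static void main(String[] args) throws Exception {
        HTablePool pool = new HTablePool(HBaseConfiguration.create(), 10);
        HTableInterface table = pool.getTable("usertable"); // placeholder table name
        try {
            Put put = new Put(Bytes.toBytes("row1"));
            put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
            table.put(put);
        } finally {
            // Without this, every op leaks a pooled table and its underlying
            // connection resources, which can eventually exhaust ZooKeeper sessions.
            pool.returnTable(table);
        }
    }
}
```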

Which version are you using? HTablePool is deprecated; see its javadoc and
try the recommended alternative.
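The alternative the HTablePool javadoc points to is a single shared HConnection from which lightweight table instances are created per operation and closed after use. A sketch, assuming an HBase client recent enough (0.94.11+/0.96+) to have HConnection.getTable(); names and values are placeholders:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HConnection;
import org.apache.hadoop.hbase.client.HConnectionManager;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class SharedConnectionSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Create ONE connection for the whole application and share it
        // across threads; this is what holds the ZooKeeper session.
        HConnection connection = HConnectionManager.createConnection(conf);
        try {
            HTableInterface table = connection.getTable("usertable"); // cheap to create
            try {
                Put put = new Put(Bytes.toBytes("row1"));
                put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
                table.put(put);
            } finally {
                table.close(); // releases the table, not the shared connection
            }
        } finally {
            connection.close(); // on application shutdown only
        }
    }
}
```

Because the connection (and its ZooKeeper session) is shared rather than created per table, this pattern avoids the per-write session churn that tends to surface as ZooKeeper timeouts under heavy load.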

-Anoop-

On Thu, Jun 19, 2014 at 6:37 AM, Ted Yu <yuzhihong@gmail.com> wrote:

> I don't have much experience with commons pool framework.
>
> Using HConnection effectively should be the way to go.
>
> Cheers
>
>
> On Wed, Jun 18, 2014 at 5:18 PM, arunas <sivaram.aruna@gmail.com> wrote:
>
> > Thanks Ted, The link was indeed helpful.
> >
> > The issue is that whenever we hit this exception, since the HBase writes
> > are individual tasks in a thread pool, the exception sends the application
> > into an endless loop and control never returns to the parent thread.
> >
> > Secondly, would it be a good idea to write a custom pool that manages the
> > HTable life cycle using the commons-pool framework, instead of relying on
> > HTablePool or HConnection?
> >
> >
> >
> > --
> > View this message in context:
> >
> http://apache-hbase.679495.n3.nabble.com/Zookeeper-timeout-exception-seen-whenever-too-much-of-data-is-being-pushed-to-hbase-tp4060560p4060566.html
> > Sent from the HBase User mailing list archive at Nabble.com.
> >
>
