hbase-user mailing list archives

From Stack <st...@duboce.net>
Subject Re: Threads leaking from Apache tomcat application
Date Tue, 06 Jan 2015 17:43:35 GMT
On Tue, Jan 6, 2015 at 4:52 AM, Serega Sheypak <serega.sheypak@gmail.com>
wrote:

> Yes, one of them (a random one) gets more connections than the others.
>
> Section 9.3.1.1 is OK.
> I have one HConnection per logical module per application, and each
> ServletRequest gets its own HTable. The HTable is closed each time after the
> ServletRequest is done. The HConnection is never closed (see the sketch
> below).
>
>
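A minimal sketch of the lifecycle described above, against the HBase 0.98 client API; the DAO shape, table name, and column family are placeholder assumptions:

import java.io.IOException;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HConnection;
import org.apache.hadoop.hbase.client.HConnectionManager;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class VisitDao {
    // One HConnection per application: created once (e.g. from a
    // ServletContextListener) and intentionally never closed while
    // the webapp is deployed.
    private final HConnection connection;

    public VisitDao() throws IOException {
        this.connection = HConnectionManager.createConnection(HBaseConfiguration.create());
    }

    // Per ServletRequest: borrow a lightweight HTable and always close it.
    public void recordVisit(byte[] rowKey) throws IOException {
        HTableInterface table = connection.getTable("visits"); // hypothetical table
        try {
            Put put = new Put(rowKey);
            // Hypothetical family/qualifier; a Put must carry at least one column.
            put.add(Bytes.toBytes("f"), Bytes.toBytes("ts"),
                    Bytes.toBytes(System.currentTimeMillis()));
            table.put(put);
        } finally {
            table.close(); // releases the table; does NOT close the shared HConnection
        }
    }
}

Closing the per-request HTable is cheap because it only returns that table's resources; the shared HConnection keeps its single ZooKeeper session open for the life of the application.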
This is you, right: http://search-hadoop.com/m/DHED4lJSA32

Back then we were leaking ZooKeeper connections. Is that fixed?

Can you reproduce it in the small, by setting up your webapp deploy in a test
bed and watching it for leaks?

For this issue, can you post a thread dump in postbin or gist so we can see?

Can you post code too?

St.Ack
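
For reference, a hedged sketch of capturing such a thread dump in-process via the standard java.lang.management API (running jstack against the Tomcat PID gives the same information):

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadDumper {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        // true, true: include locked monitors and locked ownable
        // synchronizers, as seen in the dump quoted further down.
        for (ThreadInfo info : mx.dumpAllThreads(true, true)) {
            System.out.print(info); // ThreadInfo.toString() renders a readable stack
        }
    }
}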



> 2015-01-05 21:22 GMT+03:00 Ted Yu <yuzhihong@gmail.com>:
>
> > In 022_zookeeper_metrics.png, server names are anonymized. It looks like
> > only one server got a high number of connections.
> >
> > Have you seen 9.3.1.1 of http://hbase.apache.org/book.html#client ?
> >
> > Cheers
> >
> > On Mon, Jan 5, 2015 at 8:57 AM, Serega Sheypak <serega.sheypak@gmail.com>
> > wrote:
> >
> > > Hi, here is a repost with image links.
> > >
> > > Hi, I'm still trying to deal with an Apache Tomcat web app and HBase
> > > 0.98.6. The root problem is that the number of user threads constantly
> > > grows. I get thousands of live threads on the Tomcat instance, and then
> > > it dies, of course.
> > >
> > > Please see the VisualVM thread-count dynamics:
> > > http://bigdatapath.com/wp-content/uploads/2015/01/01_threads_count-grow.png
> > >
> > >
> > > Please see the selected thread. It should be related to ZooKeeper
> > > (because of the thread-name suffix SendThread):
> > > http://bigdatapath.com/wp-content/uploads/2015/01/011_long_running_threads.png
> > >
> > > The thread dump for this thread is:
> > >
> > > "visit-thread-27799752116280271-EventThread" - Thread t@75
> > >    java.lang.Thread.State: WAITING
> > > at sun.misc.Unsafe.park(Native Method)
> > > - parking to wait for <34671cea> (a
> > > java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> > > at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
> > > at
> > >
> > >
> >
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
> > > at
> > >
> >
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> > > at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
> > >
> > >    Locked ownable synchronizers:
> > > - None
> > >
> > > Why does it live "forever"? In the next 24 hours I would get ~1200 live
> > > threads.
> > >
> > > "visit thread" does simple put/get by key, newrelic says it takes 30-40
> > ms
> > > to respond.
> > > I just set a name for the thread inside servlet method.
> > >
> > > Here is the CPU profiling result:
> > > http://bigdatapath.com/wp-content/uploads/2015/01/03_cpu_prifling.png
> > >
> > > Here is the ZooKeeper status:
> > > http://bigdatapath.com/wp-content/uploads/2015/01/022_zookeeper_metrics.png
> > >
> > > How can I debug this and get the root cause of the long-living threads?
> > > It looks like I have threads leaking, but I have no idea why... (see the
> > > thread-counting sketch at the end of this message)
> > >
> > >
> > >
> > >
> > > 2015-01-05 17:57 GMT+03:00 Ted Yu <yuzhihong@gmail.com>:
> > >
> > > > I used Gmail.
> > > >
> > > > Please consider using a third-party site where you can upload the images.
> > > >
> > > >
> > >
> >
>
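
The EventThread in the dump above is one of the two daemon threads every ZooKeeper client connection starts (the other is its SendThread); both run until ZooKeeper.close() is called, which in the HBase client happens when the owning HConnection is closed. A hedged diagnostic sketch for confirming that kind of leak by counting those threads; the name matching is an assumption based on the thread names in the dump:

public class ZkThreadCounter {
    // Count live threads whose names look like ZooKeeper client threads.
    public static int countZkClientThreads() {
        int count = 0;
        for (Thread t : Thread.getAllStackTraces().keySet()) {
            String name = t.getName();
            if (name.contains("SendThread") || name.contains("EventThread")) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // A count that grows steadily with traffic points at connections
        // (and hence ZooKeeper sessions) being opened but never closed.
        System.out.println("Live ZK client threads: " + countZkClientThreads());
    }
}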
