Subject: Re: Threads leaking from Apache tomcat application
From: Stack <saint.ack@gmail.com>
To: Hbase-User <user@hbase.apache.org>
Date: Tue, 6 Jan 2015 12:31:04 -0800

The threads that are sticking around are tomcat threads out of a tomcat
executor pool. IIRC, your server has high traffic. The pool is running up to
800 connections on occasion and taking a while to die back down?

Googling, it seems this issue comes up frequently enough; try it yourself. If
you can't figure out something like putting a bound on the executor, come back
here and we'll try and help you out.

St.Ack
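
(For reference, "putting a bound on the executor" on the Tomcat side usually
means the maxThreads / minSpareThreads / maxIdleTime attributes of the
<Executor> in server.xml, or maxThreads on the <Connector>. The snippet below
is not Tomcat's executor, just a minimal plain-Java sketch of the same idea: a
pool with a hard upper bound whose idle threads die back down. The sizes and
timeouts are made-up values.)

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedPoolSketch {
    public static void main(String[] args) throws Exception {
        // A pool with a hard upper bound whose idle threads time out, so the
        // thread count falls back down once a traffic spike passes.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                25,                   // core threads kept around for steady traffic
                200,                  // hard upper bound (the "bound on the executor")
                60, TimeUnit.SECONDS, // idle threads above the core die after 60s
                new LinkedBlockingQueue<Runnable>(1000)); // work queues here first; the
                                                          // pool only grows toward 200
                                                          // threads once the queue is full
        pool.allowCoreThreadTimeOut(true); // let even the core threads shrink away when idle

        pool.submit(new Runnable() {
            @Override
            public void run() {
                System.out.println("handled on " + Thread.currentThread().getName());
            }
        });

        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}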
On Tue, Jan 6, 2015 at 12:10 PM, Serega Sheypak wrote:

> Hi, yes, it was me.
> I've followed the advice; ZK connections on the server side are stable.
> Here is the current state of Tomcat:
> http://bigdatapath.com/wp-content/uploads/2015/01/002_jvisualvm_summary.png
> There are more than 800 threads and daemon threads.
>
> and the state of the three ZK servers:
> http://bigdatapath.com/wp-content/uploads/2015/01/001_zk_server_state.png
>
> here is the pastebin:
> http://pastebin.com/Cq8ppg08
>
> P.S.
> Looks like tomcat is running on the OpenJDK 64-Bit Server VM.
> I'll ask to fix it; it should be the Oracle 7 JDK.
>
> 2015-01-06 20:43 GMT+03:00 Stack:
>
> > On Tue, Jan 6, 2015 at 4:52 AM, Serega Sheypak wrote:
> >
> > > yes, one of them (random) gets more connections than the others.
> > >
> > > 9.3.1.1 is OK.
> > > I have one HConnection per logical module per application, and each
> > > ServletRequest gets its own HTable. The HTable is closed each time after
> > > the ServletRequest is done. The HConnection is never closed.
> >
> > This is you, right: http://search-hadoop.com/m/DHED4lJSA32
> >
> > Then, we were leaking zk connections. Is that fixed?
> >
> > Can you reproduce it in the small, by setting up your webapp deploy in a
> > test bed and watching it for leaking?
> >
> > For this issue, can you post a thread dump in pastebin or gist so we can
> > see?
> >
> > Can you post the code too?
> >
> > St.Ack
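
The client pattern described in the quoted exchange just above (a single
long-lived HConnection for the application, a short-lived HTable per servlet
request, closed when the request is done) would look roughly like the sketch
below against the HBase 0.98 client API. This is an illustration, not Serega's
actual code: the servlet class, the "visits" table, and the request parameter
are made up.

import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HConnection;
import org.apache.hadoop.hbase.client.HConnectionManager;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class VisitServlet extends HttpServlet {

    // One heavyweight HConnection for the whole application; it owns the
    // single ZooKeeper connection and the shared client threads, and is only
    // closed when the webapp shuts down.
    private static final Configuration CONF = HBaseConfiguration.create();
    private static volatile HConnection connection;

    @Override
    public void init() throws ServletException {
        try {
            connection = HConnectionManager.createConnection(CONF);
        } catch (IOException e) {
            throw new ServletException(e);
        }
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // Naming the worker thread (as Serega does) only makes it easier to
        // recognize in thread dumps; it has no effect on the leak itself.
        Thread.currentThread().setName("visit-thread-" + System.nanoTime());

        String key = req.getParameter("key");
        if (key == null) {
            key = "example-row"; // placeholder default for the sketch
        }

        // Lightweight HTable per request, always closed when the request is done.
        HTableInterface table = connection.getTable("visits");
        try {
            Result r = table.get(new Get(Bytes.toBytes(key)));
            resp.getWriter().println(r.isEmpty() ? "miss" : "hit");
        } finally {
            table.close(); // hands its resources back to the shared connection
        }
    }

    @Override
    public void destroy() {
        try {
            connection.close(); // only on application shutdown
        } catch (IOException e) {
            // ignore on shutdown
        }
    }
}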
> > > 2015-01-05 21:22 GMT+03:00 Ted Yu:
> > >
> > > > In 022_zookeeper_metrics.png, the server names are anonymized. Looks
> > > > like only one server got a high number of connections.
> > > >
> > > > Have you seen 9.3.1.1 of http://hbase.apache.org/book.html#client ?
> > > >
> > > > Cheers
> > > >
> > > > On Mon, Jan 5, 2015 at 8:57 AM, Serega Sheypak <serega.sheypak@gmail.com>
> > > > wrote:
> > > >
> > > > > Hi, here is a repost with image links.
> > > > >
> > > > > Hi, I'm still trying to deal with an Apache Tomcat web app and
> > > > > HBase 0.98.6. The root problem is that the number of user threads
> > > > > constantly grows. I get thousands of live threads on the Tomcat
> > > > > instance; then it dies, of course.
> > > > >
> > > > > Please see the visualVM thread count dynamics:
> > > > > http://bigdatapath.com/wp-content/uploads/2015/01/01_threads_count-grow.png
> > > > >
> > > > > Please see the selected thread. It should be related to zookeeper
> > > > > (because of the thread-name suffix SendThread):
> > > > > http://bigdatapath.com/wp-content/uploads/2015/01/011_long_running_threads.png
> > > > >
> > > > > The thread dump for this thread is:
> > > > >
> > > > > "visit-thread-27799752116280271-EventThread" - Thread t@75
> > > > >   java.lang.Thread.State: WAITING
> > > > >     at sun.misc.Unsafe.park(Native Method)
> > > > >     - parking to wait for <34671cea> (a
> > > > > java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> > > > >     at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
> > > > >     at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
> > > > >     at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> > > > >     at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
> > > > >
> > > > >   Locked ownable synchronizers:
> > > > >     - None
> > > > >
> > > > > Why does it live "forever"? In the next 24 hours I would get ~1200
> > > > > live threads.
> > > > >
> > > > > The "visit thread" does a simple put/get by key; New Relic says it
> > > > > takes 30-40 ms to respond. I just set a name for the thread inside
> > > > > the servlet method.
> > > > >
> > > > > Here is the CPU profiling result:
> > > > > http://bigdatapath.com/wp-content/uploads/2015/01/03_cpu_prifling.png
> > > > >
> > > > > Here is the zookeeper status:
> > > > > http://bigdatapath.com/wp-content/uploads/2015/01/022_zookeeper_metrics.png
> > > > >
> > > > > How can I debug and find the root cause of the long-living threads?
> > > > > It looks like I have threads leaking, but I have no idea why...
> > > > >
> > > > > 2015-01-05 17:57 GMT+03:00 Ted Yu:
> > > > >
> > > > > > I used gmail.
> > > > > >
> > > > > > Please consider using a third-party site where you can upload images.
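
For the "how can I debug the long-living threads" question above: besides the
jvisualvm screenshots, taking a couple of jstack dumps a few minutes apart and
diffing them usually shows which thread names keep accumulating. The sketch
below is an in-JVM alternative that counts live threads grouped by name
suffix; the grouping rule (everything after the last '-') is just an
assumption that fits names like "visit-thread-...-EventThread", and the main
method is only there so it compiles standalone. In practice you would call
census() from a debug endpoint inside the webapp.

import java.util.Map;
import java.util.TreeMap;

// Counts the live threads in this JVM, grouped by the part of the thread name
// after the last '-', so e.g. every "...-EventThread" lands in one bucket.
public class ThreadCensus {

    public static Map<String, Integer> census() {
        Map<String, Integer> counts = new TreeMap<String, Integer>();
        for (Thread t : Thread.getAllStackTraces().keySet()) {
            String name = t.getName();
            int cut = name.lastIndexOf('-');
            String bucket = (cut >= 0) ? name.substring(cut + 1) : name;
            Integer old = counts.get(bucket);
            counts.put(bucket, old == null ? 1 : old + 1);
        }
        return counts;
    }

    public static void main(String[] args) {
        for (Map.Entry<String, Integer> e : census().entrySet()) {
            System.out.println(e.getValue() + "\t" + e.getKey());
        }
    }
}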