tomcat-users mailing list archives

From Jeffrey Janner <Jeffrey.Jan...@PolyDyne.com>
Subject RE: help with a thread dump
Date Fri, 10 Apr 2015 16:53:46 GMT


> -----Original Message-----
> From: Jeffrey Janner [mailto:Jeffrey.Janner@PolyDyne.com]
> Sent: Friday, April 10, 2015 11:22 AM
> To: 'Tomcat Users List'
> Subject: RE: help with a thread dump
> 
> 
> 
> > -----Original Message-----
> > From: Aurélien Terrestris [mailto:aterrestris@gmail.com]
> > Sent: Friday, April 10, 2015 11:08 AM
> > To: Tomcat Users List
> > Subject: Re: help with a thread dump
> >
> > When something is going wrong, you can use jstat -gcutil to monitor
> > the garbage collector; if the process looks hung, it can be because
> > it's doing a lot of garbage collection. You would see 100% CPU at that
> > moment.
> >
> > Also, always enable verbose GC logging, so you have information
> > for later debugging.
> >
> > A.T.
> >
> >
> > 2015-04-10 17:40 GMT+02:00 Jeffrey Janner <Jeffrey.Janner@polydyne.com>:
> > > Thanks, Filip.  That's what I was wondering, whether those threads were
> > > normal.
> > > There was nothing in the rest of the thread dump to indicate a
> > > problem either.
> > > At the time, I wasn't able to get into the manager app, or use
> > > jconsole to see what was going on. It's possible that I was out of
> > > HTTP/processor threads for some reason, but I couldn't check at the
> > > time.
> > > Jeff
> > >
> > >> -----Original Message-----
> > >> From: Filip Hanik [mailto:filip@hanik.com]
> > >> Sent: Friday, April 10, 2015 10:23 AM
> > >> To: Tomcat Users List
> > >> Subject: Re: help with a thread dump
> > >>
> > >> You would need a thread dump of all the threads. What you are
> > >> seeing here seems completely normal
> > >>
> > >> On Fri, Apr 10, 2015 at 9:10 AM, Jeffrey Janner
> > >> <Jeffrey.Janner@polydyne.com> wrote:
> > >>
> > >> > We had a Tomcat instance just appear to hang, and the tech took a
> > >> > thread dump prior to restarting the service.
> > >> > I am still looking through it to try to find a root cause, but
> > >> > came across some entries that looked strange to me:
> > >> >
> > >> > "http-apr-10.3.1.36-80-Sendfile" daemon prio=6
> > tid=0x000000000d4b0000
> > >> > nid=0x3490 in Object.wait() [0x000000001b90f000]
> > >> >    java.lang.Thread.State: WAITING (on object monitor)
> > >> >                 at java.lang.Object.wait(Native Method)
> > >> >                 - waiting on <0x000000078576a940> (a
> > >> > org.apache.tomcat.util.net.AprEndpoint$Sendfile)
> > >> >                 at java.lang.Object.wait(Object.java:503)
> > >> >                 at
> > >> > org.apache.tomcat.util.net.AprEndpoint$Sendfile.run(AprEndpoint.java:2213)
> > >> >                 - locked <0x000000078576a940> (a
> > >> > org.apache.tomcat.util.net.AprEndpoint$Sendfile)
> > >> >                 at java.lang.Thread.run(Thread.java:744)
> > >> >
> > >> > "http-apr-10.3.1.36-80-Poller" daemon prio=6
> tid=0x000000000d4af000
> > >> > nid=0x31fc in Object.wait() [0x000000001b80f000]
> > >> >    java.lang.Thread.State: TIMED_WAITING (on object monitor)
> > >> >                 at java.lang.Object.wait(Native Method)
> > >> >                 - waiting on <0x000000078576ab30> (a
> > >> > org.apache.tomcat.util.net.AprEndpoint$Poller)
> > >> >                 at
> > >> > org.apache.tomcat.util.net.AprEndpoint$Poller.run(AprEndpoint.java:1702)
> > >> >                 - locked <0x000000078576ab30> (a
> > >> > org.apache.tomcat.util.net.AprEndpoint$Poller)
> > >> >                 at java.lang.Thread.run(Thread.java:744)
> > >> >
> > >> > The second entry is less strange, since it is a timed_waiting,
> > >> > but both look to me like self-deadlock conditions.
> > >> > I found several Timer-xx threads that look like the second above,
> > >> > so that is probably OK.  But what about the first one?
> > >> >
> > >> > I have several other threads waiting on various
> > >> > java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject
> > >> > objects that I don't see being held by any other thread, so I
> > >> > assume they can be ignored.
> > >> >
> > >> > Windows Server 2008 R2 Datacenter
> > >> > Tomcat 7.0.57
> > >> > Java 7u51 server.dll
> > >> >
> > >> >
> > >> > Jeffrey Janner
> > >> > Sr. Network Administrator
> > >> >
> > >
> 
> Sorry for the top posts.
> 
> Thanks A.T.
> I've actually had the issue on two separate Tomcat instances on different
> servers in the past few days.  The other one is logging
> "java.lang.OutOfMemoryError: GC overhead limit exceeded" in its stderr
> log, and its thread dump shows 99% PSPermGen usage. It eventually
> recovered on its own after the last round of GC overhead spikes, with
> memory showing normal.  No indication that anything restarted.
> 
> I'm still looking at the other (original) system; so far I haven't
> found any GC issues logged there, but its thread dump did show a 99%
> used PSPermGen space as well.
> 
> Both services have -XX:MaxPermSize set; the one showing the overhead-limit
> issue is set to 1024M out of a 5120M max memory pool setting.  The other
> is set to 512M out of 2048M.  The first has more copies of the app
> running than the latter.
> 
> Still investigating causes.
> 
> Jeff
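
A.T. - per your jstat suggestion, this is roughly what I plan to run against the hung
instance the next time it happens (the PID below is a placeholder, and the 5-second
sampling interval is just my own choice):

    jstat -gcutil 5432 5000

That should show the survivor, eden, old, and perm occupancy percentages plus the young
and full GC counts and times, so I can see whether it is stuck doing back-to-back full GCs.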

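And per the verbose GC advice, I'm planning to add something along these lines to the
Java Options on both services (standard Java 7 HotSpot flags; the log path is just an
example for our layout):

    -verbose:gc
    -XX:+PrintGCDetails
    -XX:+PrintGCDateStamps
    -Xloggc:C:\Tomcat\logs\gc.log
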
Wow, while watching the instance with 5 GB allocated using jconsole, I saw something interesting.
Heap memory usage suddenly spiked from 0.5 GB to almost full usage in one go, and then jconsole
lost the connection. It showed OldGen hitting the max just before the connection dropped, so I guess
the JVM is trying to garbage collect its little heart out right now. I got two thread dumps
(which took a while to show up), so hopefully I can figure out what is going on in the application.
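
In case it helps anyone following along: when the GUI tools can't connect, I'd normally grab
the dumps from the command line with jstack instead (PID and output file are placeholders):

    jstack -l 5432 > C:\temp\tomcat-threads-1.txt

The -l option includes the java.util.concurrent lock details, which should help make sense of
those AbstractQueuedSynchronizer$ConditionObject waits.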

Jeff

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org
