tomcat-users mailing list archives

From Brett Delle Grazie <brett.dellegra...@gmail.com>
Subject Re: NIO connector issue: SEVERE: Error processing request
Date Thu, 17 Jan 2013 10:00:25 GMT
On 16 January 2013 22:52, Kevin Priebe <kevin@realtyserver.com> wrote:

> Thanks for the info.  We made some changes to the Linux TCP settings last
> night and haven't noticed the issue yet today, so we are hoping that does the
> trick.  We won't know for sure until there are several days without issues.
> If it continues, we'll try upgrading to the latest Tomcat and will gather a
> bunch more info as Igor and Chris suggested.
>
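For the archives: you didn't say which TCP settings you changed, but the
usual suspects for dropped idle connections are the keepalive/FIN-timeout
sysctls. A purely illustrative /etc/sysctl.conf fragment (the values below
are guesses, not a recommendation):

```
# Illustrative values only -- tune for your own environment
net.ipv4.tcp_keepalive_time = 600    # probe idle connections after 10 min
net.ipv4.tcp_keepalive_intvl = 60
net.ipv4.tcp_keepalive_probes = 5
net.ipv4.tcp_fin_timeout = 30
```

Apply with `sysctl -p` after editing, or set individual values with
`sysctl -w` to test before persisting.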

One more thing you should check: confirm that your network ports
are not flapping due to mismatched speed/duplex configurations.
Either set everything to auto-negotiation or set everything to a fixed
specific rate (include the switches in this check).
I've seen situations where this caused responses to be
terminated prematurely, particularly when the response is larger than
usual.
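To check this, `ethtool <iface>` shows the negotiated speed, duplex, and
auto-negotiation state. A small sketch that flags a suspicious combination;
the sample output and the interface name are made up, and on a real host
you would parse `ethtool eth0` directly instead:

```shell
# Sketch: flag a speed/duplex mismatch from ethtool-style output.
# The sample below is made up; on a real host, parse `ethtool eth0`
# (interface name is an assumption) instead.
sample='Settings for eth0:
	Speed: 100Mb/s
	Duplex: Half
	Auto-negotiation: off'

speed=$(printf '%s\n' "$sample"   | awk -F': ' '/Speed/ {print $2}')
duplex=$(printf '%s\n' "$sample"  | awk -F': ' '/Duplex/ {print $2}')
autoneg=$(printf '%s\n' "$sample" | awk -F': ' '/Auto-negotiation/ {print $2}')

echo "speed=$speed duplex=$duplex autoneg=$autoneg"
# Ports should either all auto-negotiate or all be fixed at the same rate;
# a fixed, non-full-duplex port is a classic cause of flapping/truncation.
if [ "$autoneg" = "off" ] && [ "$duplex" != "Full" ]; then
	echo "WARN: fixed-rate, non-full duplex -- check the switch config too"
fi
```

Run the same check on both servers and compare against the switch port
configuration on the other end of each link.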

Brett

> Kevin
>
>
> -----Original Message-----
> From: Igor Cicimov [mailto:icicimov@gmail.com]
> Sent: Tuesday, January 15, 2013 5:19 PM
> To: Tomcat Users List
> Subject: Re: NIO connector issue: SEVERE: Error processing request
>
> On Wed, Jan 16, 2013 at 9:34 AM, Kevin Priebe <kevin@realtyserver.com
> >wrote:
>
> > Hi,
> >
> >
> >
> > We have a setup with Nginx load balancing between 2 clustered tomcat
> > instances.  One instance is on the same server as Nginx and the other
> > is on a separate physical server (same rackspace).  We're using pretty
> > standard default settings and the NIO tomcat connector.  Tomcat
> > version is 7.0.32, running on Debian.
> >
> >
> >
> > The problem is that the second tomcat instance will, at random times,
> > start showing SEVERE errors in the tomcat logs, which get worse and
> > worse until the instance is unusable and has to be restarted.  At
> > first we thought it was related to high load, but it once happened
> > early in the morning when load was fairly low.  It does seem to happen
> > more often at high-load times though, about once a day, sometimes
> > twice.  AWSTATS says we get just over a million hits per day to the
> > secondary tomcat instance.  Here are the errors:
> >
> >
> >
> > Jan 15, 2013 11:22:21 AM org.apache.coyote.http11.AbstractHttp11Processor process
> > SEVERE: Error processing request
> > java.lang.NullPointerException
> >
> > Jan 15, 2013 11:22:21 AM org.apache.coyote.http11.AbstractHttp11Processor endRequest
> > SEVERE: Error finishing response
> > java.lang.NullPointerException
> >         at org.apache.coyote.http11.InternalNioOutputBuffer.flushBuffer(InternalNioOutputBuffer.java:233)
> >         at org.apache.coyote.http11.InternalNioOutputBuffer.endRequest(InternalNioOutputBuffer.java:121)
> >         at org.apache.coyote.http11.AbstractHttp11Processor.endRequest(AbstractHttp11Processor.java:1653)
> >         at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1046)
> >         at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:585)
> >         at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1653)
> >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
> >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
> >         at java.lang.Thread.run(Thread.java:722)
> >
> >
> >
> >
> >
> > Nothing else helpful seems to show up in the logs before it starts
> > happening.  This ONLY happens on the tomcat instance on a separate
> > machine from Nginx.  Any ideas what might be happening and how it can
> > be resolved?  We're not even sure whether this is related to tomcat or
> > to something in the communication before it reaches tomcat, but we're
> > looking at all options right now.  Thanks,
> >
> >
> >
> > Kevin
> >
> >
> >
> >
> >
> >
> >
> >
> Hi Kevin,
>
> I'm not an nginx or tomcat expert, but it looks like tomcat gets
> interrupted while sending the response back, i.e. the connection gets
> closed while it's still flushing the output buffer.
> Have you done any tuning of the http connections and tcp timeouts in
> nginx, and maybe set a timeout too low? Have you checked for possible
> network latency (I know you said they are in the same rackspace, but it
> doesn't hurt to ask), switch problems, etc.? What else is between nginx
> and tomcat 2? Can you see in the nginx logs how much time the requests
> to instance 1 and instance 2 take? Also, by comparing timestamps you
> should be able to find the failed request in the nginx logs (there must
> be an error on the nginx side too) and see whether it happens on small
> or big data streams (check the data size in the log line), etc.
>
> So my point is: start troubleshooting on the nginx side until you get a
> response from some of the more experienced tomcat users/developers here
> :) And get ready to send your NIO connector and related nginx settings
> too, I would say :)
>
> Igor
>
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
> For additional commands, e-mail: users-help@tomcat.apache.org
>
>
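
P.S. On Igor's point about nginx timeouts: the directives involved live in
nginx's proxy module. A sketch of the relevant fragment (the upstream name
is made up; 60s is the documented default for each directive):

```nginx
# Hypothetical location block -- only the timeout directives are the point.
location / {
    proxy_pass http://tomcat_backend;  # made-up upstream name
    proxy_connect_timeout 60s;  # time to establish the backend connection
    proxy_send_timeout    60s;  # timeout between two writes to the backend
    proxy_read_timeout    60s;  # timeout between two reads from the backend;
                                # too low a value cuts off slow/large responses
}
```

If any of these were lowered well below the default, that would fit the
symptom of responses being cut off mid-flush.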
