tomcat-users mailing list archives

From André Warnier (tomcat) ...@ice-sa.com>
Subject Re: 21 second pause that randomly happens
Date Tue, 17 Jul 2018 14:22:15 GMT
Some additional comments in the text below.

But as a general comment: neither Tomcat nor your application seems to log any error. This
suggests that when a connection is established by the client, and it sends a request to
Tomcat on that connection, the request does get processed without error (and apparently
without an extraordinarily long delay). If there was a problem at the Tomcat level, either
reading the request, or processing it, or sending the response to the wire (from the Tomcat
point of view), then you would see errors in those logs
(such as "client broke connection" or "timeout while reading request" or similar).

This all suggests that the problem is indeed lower down, either at the TCP/socket level on
the Tomcat server, or at some other intermediate agent. If the socket Tomcat is using is a
Java socket, then it is at the JVM level that you need to look for logging capabilities. If
it is a "Tomcat native" socket, then it would be at that level (because that is native
code, not Java).
(Put differently: Tomcat will never be aware of, or have access to, data related to TCP
packet transmission/retransmission issues; and even less your application.)
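
(As an illustration only: on a Linux server, the kind of thing one can look at, at the OS
level, is for example the following. The port 8080 below is just a placeholder for whatever
port your NIO connector actually listens on.)

    # State of the listening socket for the Tomcat port; a Recv-Q close to the
    # configured backlog would mean that incoming SYNs can be dropped.
    ss -ltn 'sport = :8080'

    # Kernel counters for dropped/overflowed connection attempts
    # (counter names vary between kernel versions).
    netstat -s | egrep -i 'listen|overflow'

    # Current connections to/from the Tomcat port, with their TCP states.
    ss -tan '( sport = :8080 or dport = :8080 )'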


On 16.07.2018 20:48, David Cleary wrote:
> On 16.07.2018 16:35, David Cleary wrote:
>> 2018-07-16 15:55 GMT+03:00 David Cleary <davec@progress.com>:
>>> We have a customer who is experiencing a random, 21 second pause when using our
>>> Tomcat based application server. We believe this may be during a TCP connect and
>>> timeout. Logging indicates the pause happens before the request makes it to our
>>> back end.
>
>> Logging where then ?
>
> Sorry for any formatting issues. I have a digest subscription which doesn't lend itself
> well to interactivity.
>
> Clients are running on Windows machines. Server is running on AWS and Linux. There is a
> cloud firewall in between (pfSense). I do not have the details of whether they are
> running the cloud version available on AWS.
>
> Client logging shows this:
>
> [18/05/16@12:12:48.822+1000] P-006760 T-002372 1 4GL REV            Trying Connection
> [18/05/16@12:13:09.925+1000] P-006760 T-002372 1 4GL REV            Connect Complete 21102
> [18/05/16@12:13:09.925+1000] P-006760 T-002372 1 4GL REV            WARNING: LONG CONNECTION

From the point of view of the client (low-level), the TCP connection is with the front-end
firewall/load balancer. The firewall has a separate TCP connection with the back-end
server, and copies packets between these two connections, changing addresses/ports as
required.
I do not know the client, and I guess that it is possible that these "connection" messages
relate to the logical connection with the application, rather than purely to the TCP level.
But it sounds somewhat unlikely.
Do you have any way to re-configure this (for testing) in such a way that the client would
bypass the firewall/load-balancer and connect directly to your application server?
(and see if the issue happens also then)
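
(If such a direct path can be arranged, even a crude timed request, repeated in a loop from
a machine that bypasses pfSense, would show whether the 21-second connect still occurs
there. The host name, port and URI below are placeholders, not your real configuration.)

    # Report the TCP connect time and total time for each request; an occasional
    # 21 s outlier should eventually show up if the problem is on this path too.
    while true; do
      curl -o /dev/null -s \
           -w 'connect=%{time_connect}s total=%{time_total}s\n' \
           http://app-server.example:8080/apsv
      sleep 5
    done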


> [18/05/16@12:13:09.925+1000] P-006760 T-002372 1 4GL REV            A4DC513EA548E24508E1E90AA9EA61DD9386DDB475AD.clintons connected 21102
>
> Localhost access log shows this
>
> localhost_access_log.2018-05-16.txt:10.255.11.250 - - [16/May/2018:12:13:16 +1000] "POST /apsv?CONNHDL=A4DC513EA548E24508E1E90AA9EA61DD9386DDB475AD.clintons HTTP/1.1" 200 253 1
>

The access log line is written when the request is in effect terminated (processed) and the
result has already been sent to the wire (that is e.g. how it can log the size of the
response). I think that if you look at the AccessLogValve documentation, you will find that
you can log additional details, such as how much time it took to process the request.
But so far, that does not seem to be relevant to the problem at hand.
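
(For reference, the pattern code %D logs the time taken to process the request in
milliseconds, and %T the same in seconds. A sketch only, to be merged with whatever
AccessLogValve definition you already have in server.xml:)

    <Valve className="org.apache.catalina.valves.AccessLogValve"
           directory="logs" prefix="localhost_access_log" suffix=".txt"
           pattern="%h %l %u %t &quot;%r&quot; %s %b %D" />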

> And our back end agent log shows this:
>
> clintons.agent.log:[18/05/16@12:13:16.294+1000] P-019364 T-2819262208 2 AS-19 AS Application Server connected with connection id: A4DC513EA548E24508E1E90AA9EA61DD9386DDB475AD.clintons. (8358)
> clintons.agent.log:[18/05/16@12:13:16.299+1000] P-019364 T-3688318720 2 AS-19 AS Application Server disconnected with connection id: A4DC513EA548E24508E1E90AA9EA61DD9386DDB475AD.clintons. (8359)
>

So again, no problem visible at that level.

> Customer had some weird reconnection logic that was part of their application. After
> removing the code so the logical connection would be kept open, we saw this pause happen
> on a standard request. I do not know how long this logical connection was idle before
> running. I also do not know if Tomcat closed the underlying socket, either due to
> resources or a keep-alive timeout. I was hoping logging could tell me when Tomcat binds
> to an incoming socket and releases it. I was hoping to show that in the above example, as
> far as Tomcat is concerned, the 21 second delay happened outside of the server. Scouring
> the source code and trying some experimentation, it does not appear there is logging
> available at the socket level.
>
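
If the suspicion is that Tomcat itself closed an idle keep-alive connection, note that this
is governed by attributes on the Connector, rather than being logged per socket. The values
below are only illustrative, not a recommendation:

    <Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"
               connectionTimeout="20000"
               keepAliveTimeout="60000"
               maxKeepAliveRequests="100" />

But even when Tomcat does close such an idle socket, the client's next request would simply
open a new TCP connection; the 21 seconds would then be spent in that new connect, which is
again below Tomcat.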
>> It mostly happens when we create an initial logical connection, but we have also seen it
>> elsewhere where we believe the TCP keep-alive was expired and a new socket had to be
>> established. However, I do not know this and am hoping there is some logging I can turn
>> on in the NIO connector to collect more data. I tried turning on logging in the Endpoint
>> class, but that did not provide anything useful.
>
>> If the connection request does not even reach the Tomcat back-end, that is also unlikely
>> to provide much information. (Not being facetious here, just stating a fact).
>> Can you do a "netstat" command on your Tomcat server when this happens ?
>> If yes, maybe some part of the output would provide some information from the TCP level
>> (such as a high number of connections, to the Tomcat NIO port, in some specific TCP
>> state e.g.)
>
> Customer did some probing with Wireshark and said they were seeing a TCP retransmission
> (sorry, I do not have many more details). In investigating this, we discovered this info
> on TCP timeouts:
>
> "There's probably a million reasons why the client may never receive a SYN-ACK. The one
> I've seen more often is packet loss, which in turn can have lots of reasons, for example
> a malfunctioning or misconfigured network switch.
> However, you can immediately spot if your timeout/hang problems are caused by TCP
> retransmission, because they happen to cause response times that are unusually frequently
> distributed around 3, 9 and 21 seconds (and on, of course).
> In fact, the TCP retransmission timeout starts at 3 seconds, but if the client tries to
> resend after a timeout and still receives no answer, it doubles the wait to 6 s, so the
> total response time will be 9 seconds, assuming that the client now finally receives the
> SYN-ACK. Otherwise, 3 + 6 + 12 = 21, then 3 + 6 + 12 + 24 = 45 s and so on and so forth."
>
> This is why we are focusing on the TCP layer.
>
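
One way to see where a retransmitted SYN (or the missing SYN-ACK) gets lost, without any
Tomcat involvement, is to capture the handshake on the Tomcat host and, if possible, on
both sides of the pfSense box at the same time, and compare. A sketch (port and interface
name are placeholders):

    # Capture only packets with the SYN flag set (i.e. SYN and SYN-ACK)
    # for the Tomcat port, for later comparison of the captures.
    tcpdump -i eth0 -n -w /tmp/handshake.pcap \
        'tcp port 8080 and tcp[tcpflags] & tcp-syn != 0'
    # A SYN visible on the client side of the firewall but never arriving on the
    # Tomcat host (or a SYN-ACK that never comes back) points at the hop in between.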
>>   There is a NAT firewall between the client and server, so I'm looking for
>> some TCP level logging that could point me in the proper direction.
>>
>>> Tomcat version = ?
>>
>> Sorry. Tomcat 8.5.27.
>>
>
>> And on which kind of O.S. is this happening ?
>
>> Also maybe another question : is this happening on a Tomcat server which is dedicated to
>> that particular customer ? or is the Tomcat server shared between different customers,
>> and only that particular customer experiences these delays ?
>
> We sell an application server that customers create their own applications on. This
> particular customer has many customers themselves. The customer's application does not
> exhibit this running on our older, non-Tomcat based AppServer. Since this happens
> randomly, only a couple of times a day, it is difficult to diagnose. Since the customer
> doesn't see this issue, running the same exact client, on our older AppServer, they
> believe it is the new one. However, the older one isn't HTTP-based, and they had a bunch
> of hacks related to connections where they would recycle a connection after 5 minutes of
> inactivity, or after a 30 minute lifespan. This says to me the issue clearly isn't our
> appserver, but I can't prove it at this point. The firewall is my likely culprit, but
> without logging at the Tomcat endpoint, I can't definitively say where the pause is.


---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org

