hc-dev mailing list archives

From: Hiranya Jayathilaka <hiranya...@gmail.com>
Subject: Possible Race Condition Due to NHTTP Connection Pooling
Date: Wed, 11 Apr 2012 08:06:51 GMT
Hi Devs,

I have identified a possible race condition in the Synapse NHTTP transport.
This happens because Synapse pools NHttpClientConnection instances for easy
reuse. Imagine the following scenario:

1. Synapse receives a message which should be forwarded to X
2. Synapse detects that a previously established connection to X already
exists in the NHTTP connection pool
3. Synapse attempts to forward the message over the above connection

Now if the connection has been sitting idle in the pool for long enough, the
ClientHandler#timeout event may fire on it just as Synapse attempts to send
the new request over it. In that case the Synapse ClientHandler simply closes
the connection, which causes the request invocation to fail. I'm able to
reproduce this issue consistently with a simple mutation test:

[2012-04-11 13:10:34,927] DEBUG - ConnectionPool A connection to host : localhost on port : 9000 is available in the pool, and will be reused
[2012-04-11 13:10:34,929] DEBUG - ClientHandler Connection timeout For : 127.0.0.1:9000
[2012-04-11 13:10:34,930] DEBUG - headers >> POST /services/SimpleStockQuoteService HTTP/1.1
[2012-04-11 13:10:34,931] DEBUG - headers >> Content-Type: text/xml; charset=UTF-8
[2012-04-11 13:10:34,931] DEBUG - headers >> SOAPAction: "urn:getQuote"
[2012-04-11 13:10:34,931] DEBUG - headers >> Transfer-Encoding: chunked
[2012-04-11 13:10:34,931] DEBUG - headers >> Host: localhost:9000
[2012-04-11 13:10:34,931] DEBUG - headers >> Connection: Keep-Alive
[2012-04-11 13:10:34,932] DEBUG - headers >> User-Agent: Synapse-HttpComponents-NIO
[2012-04-11 13:10:34,932] DEBUG - ClientHandler Connection to remote address : localhost/127.0.0.1:9000 from local address : /127.0.0.1:35208 is closed!
[2012-04-11 13:10:34,933] DEBUG - HttpCoreNIOSender An existing connection reused to : localhost:9000
[2012-04-11 13:10:34,933] DEBUG - ClientHandler HTTP connection 127.0.0.1:35208->127.0.0.1:9000: Closed
[2012-04-11 13:10:34,933] DEBUG - Axis2HttpRequest Start streaming outgoing http request : [Message ID : urn:uuid:ec5957cc-73a6-4877-b76f-a26b72b2edd3]
[2012-04-11 13:10:34,934] DEBUG - ClientHandler Keep-alive connection closed For : 127.0.0.1:9000 For Request : Axis2Request [Message ID : urn:uuid:ec5957cc-73a6-4877-b76f-a26b72b2edd3] [Status Completed : true] [Status SendingCompleted : false]
[2012-04-11 13:10:34,934] DEBUG - ClientHandler Connection to remote address : localhost/127.0.0.1:9000 from local address : /127.0.0.1:35208 is closed!
[2012-04-11 13:10:34,952] DEBUG - ClientHandler Sending Fault for Request with Message ID : urn:uuid:ec5957cc-73a6-4877-b76f-a26b72b2edd3
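
To make the interleaving concrete, here is a minimal, self-contained sketch
of the race. The Connection and pool below are hypothetical stand-ins for
NHttpClientConnection and the ConnectionPool, not the real transport classes;
one thread plays the I/O reactor firing the idle timeout, the other plays the
sender leasing the pooled connection:

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical stand-ins; this models only the interleaving, not the real
// NHTTP transport classes.
public class PoolTimeoutRace {

    static class Connection {
        volatile boolean closed;

        void send(String request) {
            if (closed) {
                throw new IllegalStateException("connection closed mid-send");
            }
            System.out.println("sent: " + request);
        }
    }

    static final Queue<Connection> pool = new ConcurrentLinkedQueue<>();

    public static void main(String[] args) throws InterruptedException {
        Connection conn = new Connection();
        pool.add(conn); // connection released back to the pool, now idle

        // Plays the I/O reactor: the idle timeout fires and, as in
        // ClientHandler#timeout, the connection is simply closed.
        Thread reactor = new Thread(() -> conn.closed = true);

        // Plays the sender: leases the same connection from the pool,
        // believing it is still usable, and starts writing the request.
        Thread sender = new Thread(() -> {
            Connection leased = pool.poll();
            try {
                leased.send("POST /services/SimpleStockQuoteService HTTP/1.1");
            } catch (IllegalStateException e) {
                System.out.println("request failed: " + e.getMessage());
            }
        });

        reactor.start();
        sender.start();
        reactor.join();
        sender.join();
    }
}

Depending on which thread wins, the send either succeeds or fails, which is
exactly the non-determinism visible in the log above.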


I believe that ideally we should "reset the clock" for connections returned
from the ConnectionPool, so that they behave like newly established
connections regardless of how long they have sat idle in the pool. Is there a
way to achieve this?
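
Failing that, one workaround I can think of at the pool level (a sketch only;
RELEASE_TIME and the helper methods below are hypothetical, not existing
Synapse code) is to stamp each connection when it is returned to the pool and
refuse to lease it again once it has been idle for close to the socket
timeout:

import org.apache.http.nio.NHttpClientConnection;

// Sketch only: RELEASE_TIME and these helpers are hypothetical, not part of
// the existing ConnectionPool. The idea is to stamp a connection on release
// and avoid reusing it once the reactor may be about to time it out.
class StaleConnectionCheck {

    static final String RELEASE_TIME = "synapse.conn.release-time";

    static void onRelease(NHttpClientConnection conn) {
        conn.getContext().setAttribute(RELEASE_TIME, System.currentTimeMillis());
        // ... return conn to the pool's internal queue ...
    }

    static boolean isSafeToReuse(NHttpClientConnection conn, long socketTimeoutMillis) {
        Long released = (Long) conn.getContext().getAttribute(RELEASE_TIME);
        if (released == null) {
            return true; // never pooled before; nothing to check
        }
        long idleMillis = System.currentTimeMillis() - released;
        return idleMillis < socketTimeoutMillis;
    }
}

The caller would close and discard any connection for which isSafeToReuse()
returns false and open a fresh one, instead of racing the timeout. Truly
resetting the reactor's idle clock would presumably need support from
HttpCore itself.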

Thanks
-- 
Hiranya Jayathilaka
Associate Technical Lead;
WSO2 Inc.;  http://wso2.org
E-mail: hiranya@wso2.com;  Mobile: +94 77 633 3491
Blog: http://techfeast-hiranya.blogspot.com
