activemq-users mailing list archives

From "Hendley, Sam" <>
Subject RE: Faster detection of failure on network connections in "cable pull" scenario
Date Mon, 23 Feb 2015 17:45:04 GMT
Hi, thanks for looking at this. I don't think I was clear enough in my description of the issue.

Each of our brokers has a network connector configured to every other broker, which means
there are two connections between each pair of brokers. We have found that the outbound
direction (to a remote broker) always logs with port 61616, while the inbound direction uses
an ephemeral port. Since the port in this log message is ephemeral, it is from the inbound (read) side.

> 2015-02-20 14:20:41,798 | WARN  | Transport Connection to: tcp:// failed: org.apache.activemq.transport.InactivityIOException: Channel was inactive for too (>3000) long: tcp:// | | ActiveMQ InactivityMonitor Worker

So we find that the "inbound" connection drops at the expected time, but the outbound
connection stays alive for a long time.

By "successful completion of a message" I was hoping that the inactivity timer was tied to
the "ACKs" from the remote broker. I know the responses are being tracked, since all of the
normal "bookkeeping" on the piped messages works (they get dispatched but not dequeued
from their queues). I had been hoping that the inactivity monitor would watch for the
responses from the remote side to determine whether the connection was inactive. However,
it seems to treat every attempted write we do as proof that the connection is "active";
I would have thought we would wait for the responses before concluding that.

In the meantime we found a workaround: setting "socketBufferSize" to a very small value
(2048). The socket buffer then fills up very quickly, and the detection time went from
5 minutes to around 10 seconds. This obviously has throughput ramifications, so we are
still looking for a better solution.
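For reference, the workaround looks like this on the connector URI. This is an illustrative fragment only, combining socketBufferSize with the options from our existing connector config; socketBufferSize is in bytes:

```xml
<!-- Illustrative: workaround config, not a recommended production value.
     A 2048-byte socket buffer makes writes block (and soWriteTimeout fire)
     much sooner after the remote side stops draining the connection. -->
<networkConnector name="producer-to-consumer1"
  uri="static:tcp://consumer1:61616?socketBufferSize=2048&amp;soTimeout=2000&amp;soWriteTimeout=2000&amp;wireFormat.maxInactivityDuration=3000"
/>
```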

Thanks for your input, hopefully this is a bit clearer.

Sam Hendley

-----Original Message-----
From: [] On Behalf Of Tim Bain
Sent: Sunday, February 22, 2015 6:17 PM
To: ActiveMQ Users
Subject: Re: Faster detection of failure on network connections in "cable pull" scenario

It sounds to me like what's going on is that the inactivity interval is tripping but either
1) the inactivity exception doesn't trigger the bridge to try to re-establish the connection,
or 2) it tries to re-establish the connection but the connection establishment attempt send()
succeeds (going into the OS socket buffer but not onto the network card) and nothing notices
that it's been more than an acceptable amount of time without a response from the remote broker.
 Can you try to determine which of those it is by stepping through the code with a debugger?
org.apache.activemq.transport.AbstractInactivityMonitor.readCheck() is where you would want
the breakpoint, in the Runnable's call to onException().
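For context, the read-check pattern is roughly the following. This is a minimal sketch of the general technique with made-up names, not ActiveMQ's actual code: only data arriving on the socket resets the timer, and writes never do.

```python
import time

class ReadInactivityMonitor:
    """Sketch of a read-based inactivity check: only received data
    counts as liveness; writes never reset the timer."""

    def __init__(self, max_inactivity_secs, on_exception):
        self.max_inactivity = max_inactivity_secs
        self.on_exception = on_exception
        self.last_read = time.monotonic()

    def data_received(self, _payload):
        # Called from the transport's read path whenever bytes arrive.
        self.last_read = time.monotonic()

    def read_check(self):
        # Called periodically by a timer thread.
        idle = time.monotonic() - self.last_read
        if idle > self.max_inactivity:
            self.on_exception(
                IOError("Channel was inactive for too long: %.1fs" % idle))
```

The point for this thread is what is absent: there is no hook on the write path, so a bridge that only ever writes (because the remote side vanished) should still trip the read check.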

Also, as far as I can tell, the inactivity interval algorithm correctly considers only reads
to be an indication of connection liveness, so unless I'm missing something in the code,
I don't think your theory that the inactivity monitor isn't firing is right.  (After all,
the log you posted shows that it does in fact fire as expected at 2015-02-20 14:20:41,798;
the problem is simply what happens after that.)

I'm not at all clear what you meant by "I was expecting that the inactivity timer was looking
the successful completion of a message"; if what you meant wasn't addressed by what I wrote
above and is still relevant, please clarify the question.


On Fri, Feb 20, 2015 at 1:43 PM, Hendley, Sam <> wrote:

> We are doing "break testing" on our clustered deployment and are
> running into an issue with ActiveMQ being slow to notice a node that
> has been "cable pulled". We are seeking advice on how best to
> configure the connections between the brokers to notice this error quickly.
> We have a producer box and two consumer boxes connected with network
> connectors using the options below. Our producer is enqueuing messages to
> a distributed queue that is serviced by both of the consumer boxes.
> Normally traffic is round-robined between the two consumer boxes
> correctly. In all of these tests we are producing messages at a constant rate.
> Producer configuration:
> <networkConnector name="producer-to-consumer1"
> uri="static:tcp://consumer1:61616?soTimeout=2000&amp;soWriteTimeout=2000&amp;wireFormat.maxInactivityDuration=3000"
> />
> <networkConnector name="producer-to-consumer2"
> uri="static:tcp://consumer2:61616?soTimeout=2000&amp;soWriteTimeout=2000&amp;wireFormat.maxInactivityDuration=3000"
> />
> Consumer configuration:
> <networkConnector name="consumer-to-produce"
> uri="static:tcp://producerbox:61616?soTimeout=2000&amp;soWriteTimeout=2000&amp;wireFormat.maxInactivityDuration=3000"
> />
> If we do a "cable pull" on one of the consumer boxes, it can take
> many minutes before the broker notices that the connection is down.
> Eventually we did get a failure from the WriteTimeoutFilter, but
> instead of arriving after two seconds as expected, the failure didn't
> come for nearly 5 minutes. When it finally does trip, all of the
> messages that had been enqueued for the bad consumer are correctly
> resent to the good consumer, and all future traffic is switched over
> to the good consumer.
> Below is the log from the producer side. We pulled the cable at
> 14:20:36, and the expected inactivity failure error on the "reverse bridge"
> from the consumer broker came a few seconds later. Our "forward bridge"
> doesn't fail for around 5 minutes.
> 2015-02-20 14:20:41,798 | WARN  | Transport Connection to: tcp:// failed: org.apache.activemq.transport.InactivityIOException: Channel was inactive for too (>3000) long: tcp:// | | ActiveMQ InactivityMonitor Worker
> 2015-02-20 14:25:15,276 | WARN  | Forced write timeout for: tcp:// | org.apache.activemq.transport.WriteTimeoutFilter | WriteTimeoutFilter-Timeout-1
> 2015-02-20 14:25:15,278 | WARN  | Caught an exception processing local command | | ActiveMQ BrokerService[scaleha-gw2] Task-37
> Socket closed
>         at
>         at
>         at org.apache.activemq.transport.tcp.TcpBufferedOutputStream.flush(
>         at
>         at org.apache.activemq.transport.tcp.TcpTransport.oneway(
>         at org.apache.activemq.transport.AbstractInactivityMonitor.doOnewaySend(
>         at org.apache.activemq.transport.AbstractInactivityMonitor.oneway(
>         at org.apache.activemq.transport.TransportFilter.oneway(
>         at org.apache.activemq.transport.WireFormatNegotiator.oneway(
>         at org.apache.activemq.transport.TransportFilter.oneway(
>         at org.apache.activemq.transport.WriteTimeoutFilter.oneway(
>         at org.apache.activemq.transport.MutexTransport.oneway(
>         at org.apache.activemq.transport.ResponseCorrelator.asyncRequest(
>         at
>         at
>         at org.apache.activemq.transport.ResponseCorrelator.onCommand(
>         at org.apache.activemq.transport.MutexTransport.onCommand(
>         at org.apache.activemq.transport.vm.VMTransport.doDispatch(
>         at org.apache.activemq.transport.vm.VMTransport.dispatch(
>         at org.apache.activemq.transport.vm.VMTransport.oneway(
>         at org.apache.activemq.transport.MutexTransport.oneway(
>         at org.apache.activemq.transport.ResponseCorrelator.oneway(
>         at
>         at
>         at
>         at org.apache.activemq.thread.PooledTaskRunner.runTask(
>         at org.apache.activemq.thread.PooledTaskRunner$
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(
>         at java.util.concurrent.ThreadPoolExecutor$
>         at
> I believe I know what is occurring, but we don't know how to fix it.
> We believe the soWriteTimeout filter isn't firing because each socket
> write is actually succeeding and returning quickly; the data is being
> queued into the Linux socket buffers, and a write doesn't block until
> we fill those buffers. We validated this by watching netstat as we ran
> the test: the send-q slowly filled up as we ran traffic, and once it
> stopped growing, the socket write timeout tripped a few seconds later.
> tcp        0        0     ESTABLISHED   - // normal
> tcp        0     7653    ESTABLISHED   - // early after cable pull
> tcp        0    53400    ESTABLISHED   - // two minutes after cable pull
> tcp        0   129489    FIN_WAIT1     - // final state after writeTimeout fires
> I tried to rerun this same experiment without any messages flowing and 
> the detection took 15 minutes. I think this took much longer because 
> the number of bytes being written to the socket was much lower so we 
> didn't fill the buffer as quickly.
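The arithmetic backs this up: since write() only blocks once the kernel send queue is full, the write-timeout detection time is roughly buffer capacity divided by write rate. A back-of-the-envelope sketch using the send-q numbers above (the 64 KB default buffer size is an assumption, and Linux also autotunes send buffers larger, which would stretch detection time further):

```python
# Rough model: soWriteTimeout can't fire until the socket send buffer
# is full, so detection_time ~= buffer_bytes / write_rate.
bytes_queued = 53400          # send-q two minutes after the cable pull (netstat above)
elapsed_secs = 120
write_rate = bytes_queued / elapsed_secs   # ~445 bytes/s

default_buffer = 64 * 1024    # assumed default socket buffer size
small_buffer = 2048           # the workaround value from this thread

print(default_buffer / write_rate)  # minutes-scale fill time at this rate
print(small_buffer / write_rate)    # seconds-scale fill time with the small buffer
```

The small buffer fills in a few seconds at this write rate, consistent with the ~10-second detection observed; the default buffer takes minutes, consistent with the ~5-minute failure.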
> I was expecting the InactivityMonitor timeout to fire to protect us
> from this case, but I think it considers the connection to be active
> every time it dispatches a message to that consumer, so that timer never fires either.
> Normally this is handled with an application-level timeout.
> Presumably in this case we should be waiting for the acknowledgement
> of receipt of the message from the other broker. The stats appear to
> show the messages as dispatched but not dequeued while we keep the consumer box off the network.
> I was expecting that the inactivity timer was looking for the successful
> completion of a message, not just its send.
> Is there a setting somewhere I am missing? It seems like this should
> be a relatively common failure mode; maybe other people have enough
> data flowing that they fill those buffers incredibly quickly? We have
> investigated using keepAlive, but in general those timeouts are still too slow for our needs.
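For the archive: the reason OS-level TCP keepalive is so slow by default is that the standard Linux sysctl values are very conservative. The typical defaults are:

```
net.ipv4.tcp_keepalive_time = 7200   # idle seconds before the first probe (2 hours)
net.ipv4.tcp_keepalive_intvl = 75    # seconds between probes
net.ipv4.tcp_keepalive_probes = 9    # unanswered probes before the kernel drops the connection
```

With those values, a dead peer on an otherwise idle connection can take over two hours to be noticed unless the sysctls are tuned system-wide.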
> Sam Hendley