activemq-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (ARTEMIS-450) Deadlocked broker over addHead and Rollback with AMQP
Date Wed, 18 Oct 2017 15:58:00 GMT

    [ https://issues.apache.org/jira/browse/ARTEMIS-450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16209576#comment-16209576 ]

ASF GitHub Bot commented on ARTEMIS-450:
----------------------------------------

Github user franz1981 commented on the issue:

    https://github.com/apache/activemq-artemis/pull/1596
  
    @clebertsuconic It is not following the [principle of least astonishment](https://en.wikipedia.org/wiki/Principle_of_least_astonishment) from the API (and doc) perspective: if you removed `getInitialDelay` you could probably use that change safely, but considering that all the other properties are exposed, it is probably not the right choice.
    That's my two cents on it: in any case, I'm happy that the functionality isn't broken (for my usage, I mean) :+1:
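
As a hypothetical illustration of the asymmetry being objected to (the actual API changed in PR 1596 is not shown in this message, so the class and method names below are stand-ins), consider a settings class whose other scheduling properties keep their getters while only the initial delay loses its accessor:

{noformat}
// Hypothetical sketch -- not the actual class from PR 1596.
public class ScheduleSettings {
    private final long initialDelay;
    private final long period;

    public ScheduleSettings(long initialDelay, long period) {
        this.initialDelay = initialDelay;
        this.period = period;
    }

    // Exposed like every other property...
    public long getPeriod() { return period; }

    // ...but getInitialDelay() is gone: the value is still configurable,
    // yet can no longer be inspected. That one-off gap in an otherwise
    // fully-exposed API is what surprises callers.
}
{noformat}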


> Deadlocked broker over addHead and Rollback with AMQP
> -----------------------------------------------------
>
>                 Key: ARTEMIS-450
>                 URL: https://issues.apache.org/jira/browse/ARTEMIS-450
>             Project: ActiveMQ Artemis
>          Issue Type: Bug
>          Components: AMQP, Broker
>    Affects Versions: 1.2.0
>            Reporter: Gordon Sim
>            Assignee: clebert suconic
>             Fix For: 2.4.0
>
>         Attachments: stack-dump.txt, thread-dump-1.3.txt
>
>
> Not sure exactly how it came about; I noticed it when trying to shut down the broker. The log has:
> {noformat}
> 21:43:17,985 WARN  [org.apache.activemq.artemis.core.server] AMQ222174: Queue examples, on address=myqueue, is taking too long to flush deliveries. Watch out for frozen clients.
> 21:43:18,986 WARN  [org.apache.activemq.artemis.core.server] AMQ222174: Queue examples, on address=myqueue, is taking too long to flush deliveries. Watch out for frozen clients.
> 21:43:19,986 WARN  [org.apache.activemq.artemis.core.server] AMQ222174: Queue examples, on address=myqueue, is taking too long to flush deliveries. Watch out for frozen clients.
> 21:43:20,986 WARN  [org.apache.activemq.artemis.core.server] AMQ222174: Queue examples, on address=myqueue, is taking too long to flush deliveries. Watch out for frozen clients.
> 21:43:28,928 WARN  [org.apache.activemq.artemis.core.server] AMQ222174: Queue examples, on address=myqueue, is taking too long to flush deliveries. Watch out for frozen clients.
> 21:43:45,937 WARN  [org.apache.activemq.artemis.core.server] AMQ222174: Queue examples, on address=myqueue, is taking too long to flush deliveries. Watch out for frozen clients.
> 21:44:18,698 WARN  [org.apache.activemq.artemis.core.client] AMQ212037: Connection failure has been detected: AMQ119014: Did not receive data from /127.0.0.1:51232. It is likely the client has exited or crashed without closing its connection, or the network between the server and client has failed. You also might have configured connection-ttl and client-failure-check-period incorrectly. Please check user manual for more information. The connection will now be closed. [code=CONNECTION_TIMEDOUT]
> 21:44:18,698 WARN  [org.apache.activemq.artemis.core.server] AMQ222061: Client connection failed, clearing up resources for session ebd714e5-efad-11e5-83fc-fe540024bf8d
> Exception in thread "Thread-0 (ActiveMQ-AIO-poller-pool2081191879-2061347276)" java.lang.Error: java.io.IOException: Error while submitting IO: Interrupted system call
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1148)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 	at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: Error while submitting IO: Interrupted system call
> 	at org.apache.activemq.artemis.jlibaio.LibaioContext.blockedPoll(Native Method)
> 	at org.apache.activemq.artemis.jlibaio.LibaioContext.poll(LibaioContext.java:360)
> 	at org.apache.activemq.artemis.core.io.aio.AIOSequentialFileFactory$PollerRunnable.run(AIOSequentialFileFactory.java:355)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> 	... 2 more
> {noformat}
> I'll attach a thread dump in which you will see that Thread-10 has locked the handler lock
> in AbstractConnectionContext (part of the 'proton plug'), and is itself blocked on the lock
> in ServerConsumerImpl, which is held by Thread-21. Thread-21 is waiting for a write lock on
> the deliveryLock in ServerConsumerImpl. However, Thread-20 already has a read lock on this,
> and is blocked (while holding the read lock) on the same handler lock within the proton plug
> (object 0x00000000f3d2bd90) that Thread-10 has locked.
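
For readers following along, here is a minimal, self-contained Java sketch of the three-way lock cycle described above. The field names only mirror the report (handler lock, ServerConsumerImpl intrinsic lock, deliveryLock read/write lock); this is an illustration of the lock ordering, not actual Artemis code:

{noformat}
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of the deadlock cycle from the thread dump:
//   Thread-10 holds handlerLock, waits for consumerLock
//   Thread-21 holds consumerLock, waits for deliveryLock.writeLock()
//   Thread-20 holds deliveryLock.readLock(), waits for handlerLock
public class DeadlockSketch {
    static final Object handlerLock = new Object();   // AbstractConnectionContext handler lock
    static final Object consumerLock = new Object();  // ServerConsumerImpl intrinsic lock
    static final ReentrantReadWriteLock deliveryLock = new ReentrantReadWriteLock();

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (handlerLock) {
                pause();
                synchronized (consumerLock) { /* never reached */ }
            }
        }, "Thread-10").start();

        new Thread(() -> {
            synchronized (consumerLock) {
                pause();
                deliveryLock.writeLock().lock();      // blocks: Thread-20 holds a read lock
                deliveryLock.writeLock().unlock();
            }
        }, "Thread-21").start();

        new Thread(() -> {
            deliveryLock.readLock().lock();
            try {
                pause();
                synchronized (handlerLock) { /* never reached */ }
            } finally {
                deliveryLock.readLock().unlock();
            }
        }, "Thread-20").start();
    }

    // Short sleep so each thread acquires its first lock before trying its second.
    static void pause() {
        try { Thread.sleep(100); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
{noformat}

Run as-is, the three threads block one another permanently, matching the dump: each holds one lock in the cycle and waits on the next, so no thread can make progress.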



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
