Date: Sun, 4 Sep 2016 21:43:21 +0000 (UTC)
From: "John Leach (JIRA)"
To: issues@activemq.apache.org
Reply-To: dev@activemq.apache.org
Subject: [jira] [Created] (AMQ-6415) LevelDB IOException

John Leach created AMQ-6415:
-------------------------------

             Summary: LevelDB IOException
                 Key: AMQ-6415
                 URL: https://issues.apache.org/jira/browse/AMQ-6415
             Project: ActiveMQ
          Issue Type: Bug
          Components: Message Store
    Affects Versions: 5.14.0
         Environment: Ubuntu 14.04, 32bit, OpenJDK 7u111
            Reporter: John Leach

Two brokers are bridged together, with producers and
consumers spread across the two. After about 8 hours of processing ~30 messages/second, ActiveMQ logs a java.io.IOException, shuts down, and, once restarted, seems to continue on its way:

{quote}
2016-09-04 19:19:01,315 | INFO | Stopping BrokerService[a.staging_queue.example.com] due to exception, java.io.IOException | org.apache.activemq.util.DefaultIOExceptionHandler | LevelDB IOException handler.
java.io.IOException
	at org.apache.activemq.util.IOExceptionSupport.create(IOExceptionSupport.java:40)[activemq-client-5.14.0.jar:5.14.0]
	at org.apache.activemq.leveldb.LevelDBClient.might_fail(LevelDBClient.scala:552)[activemq-leveldb-store-5.14.0.jar:5.14.0]
	at org.apache.activemq.leveldb.LevelDBClient.might_fail_using_index(LevelDBClient.scala:1044)[activemq-leveldb-store-5.14.0.jar:5.14.0]
	at org.apache.activemq.leveldb.LevelDBClient.store(LevelDBClient.scala:1390)[activemq-leveldb-store-5.14.0.jar:5.14.0]
	at org.apache.activemq.leveldb.DBManager$$anonfun$drainFlushes$1.apply$mcV$sp(DBManager.scala:627)[activemq-leveldb-store-5.14.0.jar:5.14.0]
	at org.fusesource.hawtdispatch.package$$anon$4.run(hawtdispatch.scala:330)[hawtdispatch-scala-2.11-1.22.jar:1.22]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)[:1.7.0_111]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)[:1.7.0_111]
	at java.lang.Thread.run(Thread.java:745)[:1.7.0_111]
2016-09-04 19:19:01,321 | INFO | Apache ActiveMQ 5.14.0 (a.staging_queue.example.com,, ID:srv-2uv19-35903-1472985997586-1:1) is shutting down | org.apache.activemq.broker.BrokerService | IOExceptionHandler: stopping BrokerService[a.staging_queue.example.com]
...
2016-09-04 19:19:01,414 | ERROR | Failed to remove expired Message from the store | org.apache.activemq.broker.region.Queue | ActiveMQ Broker[a.staging_queue.example.com] Scheduler
java.io.IOException: org.apache.activemq.broker.SuppressReplyException: ShutdownBrokerInitiated
	at org.apache.activemq.util.IOExceptionSupport.create(IOExceptionSupport.java:40)[activemq-client-5.14.0.jar:5.14.0]
	at org.apache.activemq.leveldb.LevelDBStore$.toIOException(LevelDBStore.scala:65)[activemq-leveldb-store-5.14.0.jar:5.14.0]
	at org.apache.activemq.leveldb.LevelDBStore$.waitOn(LevelDBStore.scala:74)[activemq-leveldb-store-5.14.0.jar:5.14.0]
	at org.apache.activemq.leveldb.LevelDBStore$LevelDBMessageStore.removeAsyncMessage(LevelDBStore.scala:811)[activemq-leveldb-store-5.14.0.jar:5.14.0]
	at org.apache.activemq.broker.region.Queue.acknowledge(Queue.java:922)[activemq-broker-5.14.0.jar:5.14.0]
	at org.apache.activemq.broker.region.Queue.removeMessage(Queue.java:1737)[activemq-broker-5.14.0.jar:5.14.0]
	at org.apache.activemq.broker.region.Queue.removeMessage(Queue.java:1729)[activemq-broker-5.14.0.jar:5.14.0]
	at org.apache.activemq.broker.region.Queue.messageExpired(Queue.java:1799)[activemq-broker-5.14.0.jar:5.14.0]
	at org.apache.activemq.broker.region.Queue.messageExpired(Queue.java:1790)[activemq-broker-5.14.0.jar:5.14.0]
	at org.apache.activemq.broker.region.Queue.doBrowseList(Queue.java:1153)[activemq-broker-5.14.0.jar:5.14.0]
	at org.apache.activemq.broker.region.Queue.doBrowse(Queue.java:1131)[activemq-broker-5.14.0.jar:5.14.0]
	at org.apache.activemq.broker.region.Queue.expireMessages(Queue.java:908)[activemq-broker-5.14.0.jar:5.14.0]
	at org.apache.activemq.broker.region.Queue.access$100(Queue.java:103)[activemq-broker-5.14.0.jar:5.14.0]
	at org.apache.activemq.broker.region.Queue$2.run(Queue.java:146)[activemq-broker-5.14.0.jar:5.14.0]
	at org.apache.activemq.thread.SchedulerTimerTask.run(SchedulerTimerTask.java:33)[activemq-client-5.14.0.jar:5.14.0]
	at java.util.TimerThread.mainLoop(Timer.java:555)[:1.7.0_111]
	at java.util.TimerThread.run(Timer.java:505)[:1.7.0_111]
2016-09-04 19:19:01,465 | INFO | Stopped LevelDB[/path/to/data/leveldb] | org.apache.activemq.leveldb.LevelDBStore | LevelDB IOException handler.
{quote}

We had been running 5.9.1 on these exact same servers for many months using LevelDB without this problem. We upgraded to 5.14.0 yesterday and have already hit it twice. As a test, I tried wiping out the LevelDB database before starting, and it still recurred. Interestingly, it happened on both servers within a couple of minutes of each other, even though they had quite different uptimes.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
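For readers trying to reproduce the setup described above, a minimal broker configuration for one side of the bridge might look like the sketch below. This is an assumed configuration, not the reporter's actual activemq.xml: the broker name and data directory are taken from the log lines, while the peer address, ports, and duplex setting are placeholders. The defaultIOExceptionHandler element corresponds to org.apache.activemq.util.DefaultIOExceptionHandler named in the log, whose default behavior is to stop the broker on a store IOException, matching the observed shutdown.

```xml
<!-- Hypothetical activemq.xml sketch for broker "a"; peer address and ports
     are placeholders, not taken from the report. -->
<broker xmlns="http://activemq.apache.org/schema/core"
        brokerName="a.staging_queue.example.com"
        dataDirectory="/path/to/data">

  <!-- The LevelDB message store implicated in the stack traces -->
  <persistenceAdapter>
    <levelDB directory="/path/to/data/leveldb"/>
  </persistenceAdapter>

  <!-- Static bridge to the second broker (hostname is a placeholder) -->
  <networkConnectors>
    <networkConnector uri="static:(tcp://broker-b.example.com:61616)"/>
  </networkConnectors>

  <!-- The handler named in the log; by default it stops the broker
       when the persistence store throws an IOException -->
  <ioExceptionHandler>
    <defaultIOExceptionHandler/>
  </ioExceptionHandler>

  <transportConnectors>
    <transportConnector name="openwire" uri="tcp://0.0.0.0:61616"/>
  </transportConnectors>
</broker>
```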