activemq-dev mailing list archives

From "Maxim Fateev (JIRA)" <j...@apache.org>
Subject [jira] Commented: (AMQ-732) Infinite recovery loop.
Date Tue, 06 Jun 2006 19:18:51 GMT
    [ https://issues.apache.org/activemq/browse/AMQ-732?page=comments#action_36245 ] 

Maxim Fateev commented on AMQ-732:
----------------------------------

Also note that my first entry shows how to reproduce the problem without ANY consumers or producers.

> Infinite recovery loop.
> -----------------------
>
>          Key: AMQ-732
>          URL: https://issues.apache.org/activemq/browse/AMQ-732
>      Project: ActiveMQ
>         Type: Bug
>   Components: Broker
>     Versions: 4.0
>  Environment: Linux RHEL 3
>     Reporter: Maxim Fateev
>     Assignee: Hiram Chirino
>
>
> The simplest way to reproduce the problem:
> 1. Remove the storage directory.
> 2. Start the broker using the following code:
> import org.apache.activemq.broker.BrokerService;
> import org.apache.activemq.store.DefaultPersistenceAdapterFactory;
>
> public class Main {
>     public static void main(String[] args) throws Exception {
>         BrokerService broker = new BrokerService();
>         broker.setPersistent(true);
>         // A single journal log file of only 2048 bytes makes the journal
>         // wrap around almost immediately.
>         DefaultPersistenceAdapterFactory pFactory = new DefaultPersistenceAdapterFactory();
>         pFactory.setJournalLogFiles(1);
>         pFactory.setJournalLogFileSize(2048);
>         broker.setPersistenceFactory(pFactory);
>         broker.setUseJmx(false);
>         broker.addConnector("tcp://localhost:61616");
>         broker.start();
>         // Keep the JVM alive so the broker stays up.
>         Thread.sleep(1000000000000L);
>     }
> }
> 3. Shut down the broker.
> 4. Start the broker.
> It enters an infinite loop:
> [                          main] BrokerService                  INFO  ActiveMQ null JMS Message Broker (localhost) is starting
> [                          main] BrokerService                  INFO  For help or more information please see: http://incubator.apache.org/activemq/
> [                          main] JDBCPersistenceAdapter         INFO  Database driver recognized: [apache_derby_embedded_jdbc_driver]
> [                          main] DefaultJDBCAdapter             DEBUG Executing SQL: CREATE TABLE ACTIVEMQ_MSGS(ID INTEGER NOT NULL, CONTAINER VARCHAR(250), MSGID_PROD VARCHAR(250), MSGID_SEQ INTEGER, EXPIRATION BIGINT, MSG BLOB, PRIMARY KEY ( ID ) )
> [                          main] DefaultJDBCAdapter             DEBUG Could not create JDBC tables; The message table already existed. Failure was: CREATE TABLE ACTIVEMQ_MSGS(ID INTEGER NOT NULL, CONTAINER VARCHAR(250), MSGID_PROD VARCHAR(250), MSGID_SEQ INTEGER, EXPIRATION BIGINT, MSG BLOB, PRIMARY KEY ( ID ) ) Message: Table/View 'ACTIVEMQ_MSGS' already exists in Schema 'APP'. SQLState: X0Y32 Vendor code: 20000
> [                          main] DefaultJDBCAdapter             DEBUG Executing SQL: CREATE INDEX ACTIVEMQ_MSGS_MIDX ON ACTIVEMQ_MSGS (MSGID_PROD,MSGID_SEQ)
> [                          main] DefaultJDBCAdapter             DEBUG Executing SQL: CREATE INDEX ACTIVEMQ_MSGS_CIDX ON ACTIVEMQ_MSGS (CONTAINER)
> [                          main] DefaultJDBCAdapter             DEBUG Executing SQL: CREATE INDEX ACTIVEMQ_MSGS_EIDX ON ACTIVEMQ_MSGS (EXPIRATION)
> [                          main] DefaultJDBCAdapter             DEBUG Executing SQL: CREATE TABLE ACTIVEMQ_ACKS(CONTAINER VARCHAR(250) NOT NULL, CLIENT_ID VARCHAR(250) NOT NULL, SUB_NAME VARCHAR(250) NOT NULL, SELECTOR VARCHAR(250), LAST_ACKED_ID INTEGER, PRIMARY KEY ( CONTAINER, CLIENT_ID, SUB_NAME))
> [                          main] DefaultJDBCAdapter             DEBUG Could not create JDBC tables; The message table already existed. Failure was: CREATE TABLE ACTIVEMQ_ACKS(CONTAINER VARCHAR(250) NOT NULL, CLIENT_ID VARCHAR(250) NOT NULL, SUB_NAME VARCHAR(250) NOT NULL, SELECTOR VARCHAR(250), LAST_ACKED_ID INTEGER, PRIMARY KEY ( CONTAINER, CLIENT_ID, SUB_NAME)) Message: Table/View 'ACTIVEMQ_ACKS' already exists in Schema 'APP'. SQLState: X0Y32 Vendor code: 20000
> [                          main] JDBCPersistenceAdapter         DEBUG Cleaning up old messages.
> [                          main] DefaultJDBCAdapter             DEBUG Executing SQL: DELETE FROM ACTIVEMQ_MSGS WHERE ( EXPIRATION<>0 AND EXPIRATION<?) OR ID <= ( SELECT min(ACTIVEMQ_ACKS.LAST_ACKED_ID) FROM ACTIVEMQ_ACKS WHERE ACTIVEMQ_ACKS.CONTAINER=ACTIVEMQ_MSGS.CONTAINER)
> [                          main] DefaultJDBCAdapter             DEBUG Deleted 0 old message(s).
> [                          main] JDBCPersistenceAdapter         DEBUG Cleanup done.
> [                          main] JournalPersistenceAdapter      INFO  Journal Recovery Started from: Active Journal: using 1 x 0.001953125 Megs at: /workplace/fateev/install/activemq-4.0-SNAPSHOT/activemq-core/activemq-data/journal
> [                          main] JournalPersistenceAdapter      DEBUG TRACE Entry: RECOVERED
> [                Journal Writer] LogFileManager                 DEBUG getNextDataRecordLocation offset=53
> [                Journal Writer] LogFileManager                 DEBUG getNextDataRecordLocation offset=97
> [                Journal Writer] LogFileManager                 DEBUG getNextDataRecordLocation overflowing into next logFile=0
> [                          main] JournalPersistenceAdapter      DEBUG TRACE Entry: RECOVERED
> [                Journal Writer] LogFileManager                 DEBUG getNextDataRecordLocation offset=53
> [                Journal Writer] LogFileManager                 DEBUG getNextDataRecordLocation offset=97
> [                Journal Writer] LogFileManager                 DEBUG getNextDataRecordLocation overflowing into next logFile=0
> [                          main] JournalPersistenceAdapter      DEBUG TRACE Entry: RECOVERED
> [                Journal Writer] LogFileManager                 DEBUG getNextDataRecordLocation offset=53
> [                Journal Writer] LogFileManager                 DEBUG getNextDataRecordLocation offset=97
> [                Journal Writer] LogFileManager                 DEBUG getNextDataRecordLocation overflowing into next logFile=0
> The getNextDataRecordLocation log entries come from a log statement I've added to LogFileManager.getNextDataRecordLocation:
>     public Location getNextDataRecordLocation(Location lastLocation) throws IOException, InvalidRecordLocationException {
>         RecordInfo ri = readRecordInfo(lastLocation);
>         while (true) {
>             int logFileId = ri.getLocation().getLogFileId();
>             int offset = ri.getNextLocation();
>             log.debug("getNextDataRecordLocation offset=" + offset);
>             // Are we overflowing into next logFile?
>             if (offset >= ri.getLogFileState().getAppendOffset()) {
>                 LogFileNode nextActive = ri.getLogFileState().getNextActive();
>                 log.debug("getNextDataRecordLocation overflowing into next logFile=" + (nextActive == null ? "null" : String.valueOf(nextActive.getId())));
>                 if (nextActive == null) {
>                     return null;
>                 }
>                 logFileId = nextActive.getId();
>                 offset = 0;
>             }
>             try {
>                 ri = readRecordInfo(new Location(logFileId, offset));
>             } catch (InvalidRecordLocationException e) {
>                 return null;
>             }
>             // Is the next record the right record type?
>             if (ri.getHeader().getRecordType() == DATA_RECORD_TYPE) {
>                 return ri.getLocation();
>             }
>             // No? go onto the next record.
>         }
>     }
> It looks like recovery doesn't take into account that the journal storage has to end somewhere: judging from the trace, with a single log file the "next active" log file wraps back to file 0, so the scan restarts from offset 0 and never terminates.
> A similar problem happens when multiple, larger files are used: it occurs every time one of the log files grows bigger than the size specified in the pFactory.setJournalLogFileSize(...) call.
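
For illustration only, here is a minimal sketch of the kind of termination guard the scan appears to be missing. This is not the actual fix for AMQ-732; it assumes the same LogFileManager context as the method quoted above, and the visited-file set (and the method name) are hypothetical additions of mine:

    // Sketch: identical to getNextDataRecordLocation above, except that the
    // scan remembers which log file ids it has already walked and stops when
    // the "next active" file wraps back to one of them.
    public Location getNextDataRecordLocationGuarded(Location lastLocation)
            throws IOException, InvalidRecordLocationException {
        RecordInfo ri = readRecordInfo(lastLocation);
        // Hypothetical addition: track log file ids the scan has visited.
        java.util.Set<Integer> visited = new java.util.HashSet<Integer>();
        visited.add(ri.getLocation().getLogFileId());
        while (true) {
            int logFileId = ri.getLocation().getLogFileId();
            int offset = ri.getNextLocation();
            if (offset >= ri.getLogFileState().getAppendOffset()) {
                LogFileNode nextActive = ri.getLogFileState().getNextActive();
                // Stop if there is no next file, or if the "next" file has
                // already been scanned (the single-file wrap-around case).
                if (nextActive == null || !visited.add(nextActive.getId())) {
                    return null;
                }
                logFileId = nextActive.getId();
                offset = 0;
            }
            try {
                ri = readRecordInfo(new Location(logFileId, offset));
            } catch (InvalidRecordLocationException e) {
                return null;
            }
            if (ri.getHeader().getRecordType() == DATA_RECORD_TYPE) {
                return ri.getLocation();
            }
        }
    }

With a single 2 KB journal file, visited already contains file 0, so the "overflowing into next logFile=0" step would return null instead of rescanning the same file forever.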
