db-derby-dev mailing list archives

From "Kristian Waagan (JIRA)" <j...@apache.org>
Subject [jira] Commented: (DERBY-4119) Compress on a large table fails with IllegalArgumentException - Illegal Capacity
Date Mon, 30 Mar 2009 10:04:50 GMT

    [ https://issues.apache.org/jira/browse/DERBY-4119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12693711#action_12693711 ]

Kristian Waagan commented on DERBY-4119:
----------------------------------------

I managed to reproduce the error and get the stack trace, but this time it happened during
the creation of an index. Larry also reported seeing the error during index creation.
It happens in the same code area as Knut Anders suggested. I also observed the error once
during compress in a different run, but a reboot of the database had overwritten the log before
I could look at it.

I believe Knut's suggested fix will allow the vector to grow until it reaches Integer.MAX_VALUE,
provided there is enough heap memory to do so.
I have only tested the calculation itself, since each of my test runs with the repro takes many
hours.
If we get this into the next release candidate, I can do some more test runs with it.
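
For illustration, here is a minimal sketch of how such a negative capacity can arise and the kind of guard discussed above. This is not the MergeSort code; the class name, method names and inputs are made up.

import java.util.Vector;

// Sketch only, not the Derby implementation.
public class CapacityOverflowSketch {

    // An int product can exceed Integer.MAX_VALUE and wrap to a negative value.
    static int estimateCapacity(int mergeRuns, int rowsPerRun) {
        return mergeRuns * rowsPerRun; // may overflow
    }

    // Hedged variant of the kind of fix discussed above: compute in long and
    // cap at Integer.MAX_VALUE, so the vector can still grow that far if
    // there is enough heap memory.
    static int estimateCapacityCapped(int mergeRuns, int rowsPerRun) {
        long capacity = (long) mergeRuns * rowsPerRun;
        return (int) Math.min(capacity, Integer.MAX_VALUE);
    }

    public static void main(String[] args) {
        int bad = estimateCapacity(100000, 30000);
        System.out.println("overflowed: " + bad);  // prints a negative value
        try {
            new Vector<Object>(bad);               // rejects negative capacity
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());    // "Illegal Capacity: -1294967296"
        }
        System.out.println("capped: " + estimateCapacityCapped(100000, 30000));
    }
}

The log entry and stack trace from the failed index creation follow: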


2009-03-30 01:23:21.340 GMT Thread[DRDAConnThread_8,5,main] (XID = 12059858), (SESSIONID = 421), (DATABASE = db), (DRDAID = NF000001.F2EB-4196790664386933061{211}), Failed Statement is: CREATE INDEX my_index ON my_table(my_col)
java.lang.IllegalArgumentException: Illegal Capacity: -18547753
        at java.util.Vector.<init>(Vector.java:109)
        at java.util.Vector.<init>(Vector.java:124)
        at org.apache.derby.impl.store.access.sort.MergeSort.multiStageMerge(Unknown Source)
        at org.apache.derby.impl.store.access.sort.MergeSort.openSortRowSource(Unknown Source)
        at org.apache.derby.impl.store.access.RAMTransaction.openSortRowSource(Unknown Source)
        at org.apache.derby.impl.sql.execute.CreateIndexConstantAction.loadSorter(Unknown Source)
        at org.apache.derby.impl.sql.execute.CreateIndexConstantAction.executeConstantAction(Unknown Source)
        at org.apache.derby.impl.sql.execute.MiscResultSet.open(Unknown Source)
        at org.apache.derby.impl.sql.GenericPreparedStatement.executeStmt(Unknown Source)
        at org.apache.derby.impl.sql.GenericPreparedStatement.execute(Unknown Source)
        at org.apache.derby.impl.jdbc.EmbedStatement.executeStatement(Unknown Source)
        at org.apache.derby.impl.jdbc.EmbedStatement.execute(Unknown Source)
        at org.apache.derby.impl.jdbc.EmbedStatement.executeUpdate(Unknown Source)
        at org.apache.derby.impl.drda.DRDAConnThread.parseEXCSQLIMM(Unknown Source)
        at org.apache.derby.impl.drda.DRDAConnThread.processCommands(Unknown Source)
        at org.apache.derby.impl.drda.DRDAConnThread.run(Unknown Source)
Cleanup action completed


> Compress on a large table fails with IllegalArgumentException - Illegal Capacity
> --------------------------------------------------------------------------------
>
>                 Key: DERBY-4119
>                 URL: https://issues.apache.org/jira/browse/DERBY-4119
>             Project: Derby
>          Issue Type: Bug
>          Components: Store
>    Affects Versions: 10.5.1.0
>            Reporter: Kristian Waagan
>         Attachments: overflow.diff
>
>
> When compressing a large table, Derby failed with the following exception:
> IllegalArgumentException: Illegal Capacity: -X
> I was able to access the database afterwards, but haven't yet checked whether all the data is still available.
> The compress was started with CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE('schema', 'table', 1) from ij (see the JDBC sketch after this quoted description).
> The data in the table was inserted with 25 concurrent threads. This seems to cause excessive table growth: the inserted data should weigh in at around 2 GB, but the table size after the insert is ten times bigger, 20 GB.
> I have been able to generate the table and compress it earlier, but then I was using fewer insert threads.
> I have also been able to compress the table successfully when retrying after the failure occurred (shut the database down, booted it again, and compressed).
> I'm trying to reproduce, and will post more information (like the stack trace) later.
> So far my attempts at reproducing have failed. Normally the data is generated and the compress is started without shutting down the database. My attempts so far have consisted of doing the compress on the existing database (where the failure was first seen).
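
For reference, a hedged JDBC equivalent of the compress call described above; the connection URL and the schema and table names are placeholders, not values from the report.

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;

// Sketch only: URL, schema and table names are placeholders.
public class CompressTableSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:derby:db");
             CallableStatement cs = conn.prepareCall(
                     "CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE(?, ?, ?)")) {
            cs.setString(1, "MYSCHEMA");  // schema name, typically upper case in Derby
            cs.setString(2, "MYTABLE");   // table name
            cs.setShort(3, (short) 1);    // non-zero = sequential compress, as in the report
            cs.execute();
        }
    }
}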

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

