db-derby-dev mailing list archives

From "Kristian Waagan (JIRA)" <j...@apache.org>
Subject [jira] Commented: (DERBY-4119) Compress on a large table fails with IllegalArgumentException - Illegal Capacity
Date Tue, 31 Mar 2009 08:07:50 GMT

    [ https://issues.apache.org/jira/browse/DERBY-4119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12694007#action_12694007 ]
Kristian Waagan commented on DERBY-4119:

Regarding your first fix, I thought Derby ended up casting a float to an int, which would yield Integer.MAX_VALUE if the float was bigger than that.
In any case, the way you solved it in the second patch is more readable.
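For reference, this is the saturating behavior I had in mind (a minimal sketch, not Derby's actual code path): the JLS defines narrowing from float to int so that a value above Integer.MAX_VALUE clamps to Integer.MAX_VALUE rather than wrapping around.

```java
public class FloatCastDemo {
    public static void main(String[] args) {
        // 3.0e9 is well above Integer.MAX_VALUE (~2.147e9).
        float tooBig = 3.0e9f;

        // Per JLS 5.1.3, narrowing a float larger than Integer.MAX_VALUE
        // to int yields Integer.MAX_VALUE, not a negative wrapped value.
        int clamped = (int) tooBig;
        System.out.println(clamped == Integer.MAX_VALUE);  // true
    }
}
```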

When it comes to the second patch, I haven't had time to review it, but I do have one question: does the JVM really handle a Vector of size Integer.MAX_VALUE? At least the maximum array size used to be Integer.MAX_VALUE - X.
I tested quickly with a byte array, and for various JVMs I found X to be 0, 18 and 39. Would it make sense to subtract a small value from Integer.MAX_VALUE?
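The quick test I ran looked roughly like this (a hypothetical probe; the result is VM- and heap-dependent, and a plain heap-space OutOfMemoryError is indistinguishable from the VM's hard array-size limit unless you inspect the message):

```java
public class MaxArrayProbe {
    // Returns true if this VM can allocate a byte[] of the given length.
    static boolean canAllocate(int length) {
        try {
            return new byte[length].length == length;
        } catch (OutOfMemoryError e) {
            // Either the heap is too small ("Java heap space") or the VM
            // refuses the length outright ("Requested array size exceeds
            // VM limit"); only the latter reflects the hard limit X.
            return false;
        }
    }

    public static void main(String[] args) {
        // Walk down from Integer.MAX_VALUE to find X. You need roughly
        // 2 GB of heap (e.g. -Xmx3g) for the answer to reflect the VM
        // limit rather than the heap size.
        int x = 0;
        while (x < 64 && !canAllocate(Integer.MAX_VALUE - x)) {
            x++;
        }
        System.out.println("X = " + x);
    }
}
```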

This is an edge case, as you need quite a few gigs of heap to support an Object array with
a capacity close to Integer.MAX_VALUE, but many machines these days do have enough memory
for this. It is not clear to me what it takes for Derby to actually grow the vector to such
sizes though.
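For context on the failure mode itself (a sketch, not Derby's actual code): if a capacity calculation overflows int, the result goes negative, and java.util.Vector rejects it with exactly the "Illegal Capacity" IllegalArgumentException seen in the report.

```java
import java.util.Vector;

public class OverflowDemo {
    public static void main(String[] args) {
        // A typical 1.5x growth computation overflows once the current
        // capacity is large enough, yielding a negative int.
        int capacity = Integer.MAX_VALUE - 10;
        int grown = capacity + (capacity >> 1);  // overflows to a negative value

        try {
            new Vector<Object>(grown);
        } catch (IllegalArgumentException e) {
            // Prints "Illegal Capacity: -1073741841"
            System.out.println(e.getMessage());
        }
    }
}
```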

> Compress on a large table fails with IllegalArgumentException - Illegal Capacity
> --------------------------------------------------------------------------------
>                 Key: DERBY-4119
>                 URL: https://issues.apache.org/jira/browse/DERBY-4119
>             Project: Derby
>          Issue Type: Bug
>          Components: Store
>    Affects Versions:
>            Reporter: Kristian Waagan
>            Assignee: Knut Anders Hatlen
>         Attachments: overflow.diff, overflow2.diff
> When compressing a large table, Derby failed with the following exception:
> IllegalArgumentException; Illegal Capacity: -X
> I was able to access the database afterwards, but haven't yet checked if all the data is still available.
> The compress was started with CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE('schema', 'table', 1) from ij.
> The data in the table was inserted with 25 concurrent threads. This seems to cause excessive table growth, as the data inserted should weigh in at around 2 GB. The table size after the insert is ten times bigger, 20 GB.
> I have been able to generate the table and do a compress earlier, but then I have been using fewer insert threads.
> I have also been able to successfully compress the table when retrying after the failure occurred (shut down the database, then booted again and compressed).
> I'm trying to reproduce, and will post more information (like the stack trace) later.
> So far my attempts at reproducing have failed. Normally the data is generated and the compress is started without shutting down the database. My attempts this far have consisted of doing compress on the existing database (where the failure was first seen).

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
