db-derby-dev mailing list archives

From "Knut Anders Hatlen (JIRA)" <j...@apache.org>
Subject [jira] Updated: (DERBY-4119) Compress on a large table fails with IllegalArgumentException - Illegal Capacity
Date Mon, 30 Mar 2009 15:20:50 GMT

     [ https://issues.apache.org/jira/browse/DERBY-4119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Knut Anders Hatlen updated DERBY-4119:
--------------------------------------

    Attachment: overflow2.diff

Thanks for testing. I'll run the regression tests and check in a fix if they pass.

The suggested fix only reduces the chance of an overflow, since it only prevents overflow in the
intermediate result. It does not prevent an overflow if the final result is too large. This new patch
(overflow2.diff) also addresses that problem by calculating the new max size as a long and setting it
to Integer.MAX_VALUE if the value doesn't fit in an int. A similar fix is applied to the newNode()
method, because it may currently allocate an array larger than maxSize and can therefore overflow
even if maxSize itself is kept under control.

> Compress on a large table fails with IllegalArgumentException - Illegal Capacity
> --------------------------------------------------------------------------------
>
>                 Key: DERBY-4119
>                 URL: https://issues.apache.org/jira/browse/DERBY-4119
>             Project: Derby
>          Issue Type: Bug
>          Components: Store
>    Affects Versions: 10.5.1.0
>            Reporter: Kristian Waagan
>            Assignee: Knut Anders Hatlen
>         Attachments: overflow.diff, overflow2.diff
>
>
> When compressing a large table, Derby failed with the following exception:
> IllegalArgumentException: Illegal Capacity: -X
> I was able to access the database afterwards, but haven't yet checked if all the data is still available.
> The compress was started with CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE('schema', 'table', 1) from ij.
> The data in the table was inserted with 25 concurrent threads. This seems to cause excessive table growth, as the data inserted should weigh in at around 2 GB. The table size after the insert is ten times bigger, 20 GB.
> I have been able to generate the table and do a compress earlier, but then I was using fewer insert threads.
> I have also been able to successfully compress the table when retrying after the failure occurred (shut down the database, then booted again and compressed).
> I'm trying to reproduce, and will post more information (like the stack trace) later.
> So far my attempts at reproducing have failed. Normally the data is generated and the compress is started without shutting down the database. My attempts so far have consisted of doing compress on the existing database (where the failure was first seen).

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

