db-derby-dev mailing list archives

From "Mayuresh Nirhali (JIRA)" <j...@apache.org>
Subject [jira] Commented: (DERBY-606) SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE fails on (very) large tables
Date Fri, 17 Nov 2006 16:43:42 GMT
    [ http://issues.apache.org/jira/browse/DERBY-606?page=comments#action_12450785 ] 
Mayuresh Nirhali commented on DERBY-606:

I looked at the OnlineCompressTest and realized that the simplest way to reproduce this case
is to increase the number of rows added to the table in one of the existing testcases.
However, I see the following comment in the testcase:

     * 4000 rows  - reasonable number of pages to test out, still 1 alloc page
     * note that row numbers greater than 4000 may lead to lock escalation
     * issues, if queries like "delete from x" are used to delete all the 
     * rows.

This is very relevant to the testcase I would like to add, so I would like to understand
the lock escalation issue mentioned here. Has anyone seen this kind of issue before? Any pointers?
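For context on the warning in that comment: Derby escalates individual row locks to a single table lock once a transaction holds more locks than the derby.locks.escalationThreshold property allows (5000 by default), so a single-statement "delete from x" over enough rows can trigger escalation. A minimal sketch of the pattern being warned about (table name and row count are illustrative, not taken from the actual test):

```sql
-- Illustrative sketch only; "x" and the row count are hypothetical.
CREATE TABLE x (id INT PRIMARY KEY, payload VARCHAR(200));
-- ... insert well over 5000 rows ...

-- A single statement deleting every row takes one row lock per row.
-- Past derby.locks.escalationThreshold (default 5000), Derby attempts
-- to escalate to a table lock, which can interfere with other access.
DELETE FROM x;

-- One common workaround in tests is to delete in bounded batches,
-- committing between them, e.g.:
--   DELETE FROM x WHERE id < 1000;
```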

The repro attached to the bug has an almost identical testcase, and I have not seen any
problems with it so far. So it might be that the lock escalation issue has already been
fixed (I did not find any related JIRA issue for it, though). Can someone please confirm
this? I can update the comments if that problem has been fixed.
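For reference, the procedure under discussion is invoked from JDBC roughly as below. This is a hedged sketch: the schema and table names are placeholders, not taken from the attached A606Test.java, and it assumes an open Connection `conn` to a Derby database.

```java
// Sketch of calling the in-place compress procedure; "APP"/"BIGTABLE"
// are placeholder names, and `conn` is an existing Derby connection.
CallableStatement cs = conn.prepareCall(
    "CALL SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE(?, ?, ?, ?, ?)");
cs.setString(1, "APP");      // schema name
cs.setString(2, "BIGTABLE"); // table name
cs.setShort(3, (short) 1);   // purge deleted rows
cs.setShort(4, (short) 1);   // defragment rows
cs.setShort(5, (short) 1);   // truncate end of file
cs.execute();
cs.close();
```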


> SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE fails on (very) large tables
> --------------------------------------------------------------------
>                 Key: DERBY-606
>                 URL: http://issues.apache.org/jira/browse/DERBY-606
>             Project: Derby
>          Issue Type: Bug
>          Components: Store
>    Affects Versions:
>         Environment: Java 1.5.0_04 on Windows Server 2003 Web Edition
>            Reporter: Jeffrey Aguilera
>         Assigned To: Mayuresh Nirhali
>             Fix For:
>         Attachments: A606Test.java, derby606_v1.diff
> SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE fails with one of the following error messages
> when applied to a very large table (>2GB):
> Log operation null encounters error writing itself out to the log stream, this could
> be caused by an errant log operation or internal log buffer full due to excessively large
> log operation. SQLSTATE: XJ001: Java exception: ': java.io.IOException'.
> or
> The exception 'java.lang.ArrayIndexOutOfBoundsException' was thrown while evaluating
> an expression. SQLSTATE: XJ001: Java exception: ': java.lang.ArrayIndexOutOfBoundsException'.
> In either case, no entry is written to the console log or to derby.log.

This message is automatically generated by JIRA.
If you think it was sent incorrectly contact one of the administrators: http://issues.apache.org/jira/secure/Administrators.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

