db-derby-dev mailing list archives

From "Mike Matrigali (JIRA)" <j...@apache.org>
Subject [jira] Updated: (DERBY-606) SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE fails on (very) large tables
Date Fri, 10 Nov 2006 18:09:39 GMT
     [ http://issues.apache.org/jira/browse/DERBY-606?page=all ]

Mike Matrigali updated DERBY-606:
---------------------------------


Do you know how you are going to fix this issue? Definitely consider the upgrade implications
of any change made.

I think any change to allow CompressedNumber to write a negative number is likely not going
to be backward compatible - though maybe that does not matter, as any attempt to write a
negative number with it is already a bug. The bit patterns for the compressed number
representation are pretty carefully chosen assuming non-negative numbers. This code is used
extensively in every row/column on disk in the database, so requiring a hard upgrade of the
format is a large issue.
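To make the collision concrete, here is a minimal sketch of a length-flagged compressed
integer format in the same spirit (this is NOT Derby's actual CompressedNumber layout; the
class and method names are made up for illustration): the top bits of the first byte encode
how many bytes follow, so a negative value's sign bit lands on the length flag and simply
has no representation.

```java
public class CompressedIntSketch {
    // Encode a non-negative int using the top two bits of the first
    // byte as a length flag (hypothetical layout, for illustration only).
    public static byte[] writeCompressed(int value) {
        if (value < 0)
            throw new IllegalArgumentException(
                "negative value has no compressed representation: " + value);
        if (value <= 0x3F) {
            // top bits 00 -> one byte, 6 bits of payload
            return new byte[] { (byte) value };
        }
        if (value <= 0x3FFF) {
            // top bits 10 -> two bytes, 14 bits of payload
            return new byte[] { (byte) (0x80 | (value >>> 8)), (byte) value };
        }
        if (value <= 0x3FFFFFFF) {
            // top bits 11 -> four bytes, 30 bits of payload
            return new byte[] {
                (byte) (0xC0 | (value >>> 24)), (byte) (value >>> 16),
                (byte) (value >>> 8), (byte) value };
        }
        throw new IllegalArgumentException("out of range for this sketch: " + value);
    }

    // Decode by inspecting the length flag in the first byte.
    public static int readCompressed(byte[] buf) {
        int b0 = buf[0] & 0xFF;
        if ((b0 & 0x80) == 0) return b0;                      // one-byte form
        if ((b0 & 0x40) == 0)                                 // two-byte form
            return ((b0 & 0x3F) << 8) | (buf[1] & 0xFF);
        return ((b0 & 0x3F) << 24) | ((buf[1] & 0xFF) << 16)  // four-byte form
             | ((buf[2] & 0xFF) << 8) | (buf[3] & 0xFF);
    }
}
```

Changing such a scheme to admit negative numbers means stealing a payload bit or adding a
new flag pattern, and either way old readers misparse the new bytes - which is why the fix
is not backward compatible.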

A few easier paths:
1) Rewrite the log format for the record to not compress the numbers. The space doesn't
really matter, as this doesn't happen very much. As this would be a new log format, it
should be handled under hard upgrade; the bug would not be fixed under soft upgrade. We
should probably throw an error earlier if a negative number shows up under soft upgrade.
If we could set back time on the db, this would seem the natural fix. Or maybe under soft
upgrade you change the -1 to be the last page in the conglomerate, which would mean that
in this special case the code would think there was a free page when we know there isn't
one, but this is only a hint anyway.
If you go this route, look at the upgrade tests and add a case for both soft and hard
upgrade in 10.3.
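The soft-upgrade fallback could look something like the following sketch (all names here
are hypothetical, chosen for illustration - this is not existing Derby code): before
logging, a -1 "no free page" hint is replaced with the last page number of the
conglomerate, which the old log format can encode.

```java
public class FreePageHint {
    // Return a hint value that the log format in effect can encode.
    // lastFreePageHint: -1 means "no known free page" in the in-memory code.
    // lastPageNumber: highest page number in the conglomerate.
    public static long hintForLog(long lastFreePageHint,
                                  long lastPageNumber,
                                  boolean softUpgrade) {
        if (lastFreePageHint >= 0)
            return lastFreePageHint;        // non-negative: always encodable
        if (softUpgrade) {
            // Old log format cannot encode a negative number, so fall back
            // to the last page in the conglomerate. Readers treat the field
            // as a hint only, so pointing at a non-free page is safe - the
            // scan just finds no free space there and moves on.
            return lastPageNumber;
        }
        // Under hard upgrade, a new log format could carry -1 directly.
        return lastFreePageHint;
    }
}
```

The safety argument rests entirely on the "only a hint" property noted above: a wrong hint
costs a wasted page probe, never correctness.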

2) Change the conglomerate code to not generate negative numbers; maybe use something like
maxint or maxlong as the sentinel. This will of course require looking at all the code that
currently checks for -1.
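A sketch of option 2 (hypothetical names, not existing Derby code): reserve a large
non-negative value as the "no free page" sentinel so it stays writable by the
compressed-number format, and route every check that used to compare against -1 through
one predicate. Note the sentinel chosen would have to fit within the compressed format's
actual representable range.

```java
public class FreePageSentinel {
    // Sentinel meaning "no free page known". Non-negative, so the
    // compressed on-disk format can write it. (Assumption: the value
    // fits the compressed format's range; pick a smaller constant if not.)
    public static final long NO_FREE_PAGE = Long.MAX_VALUE;

    // Replaces the scattered `hint != -1` checks with one predicate.
    public static boolean hasFreePage(long hint) {
        return hint != NO_FREE_PAGE;
    }
}
```

The risk with this path is exactly the one named above: every comparison against -1, in
store code and in any diagnostics, has to be found and converted, or a stale check silently
misreads the sentinel as a real page number.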

> SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE fails on (very) large tables
> --------------------------------------------------------------------
>
>                 Key: DERBY-606
>                 URL: http://issues.apache.org/jira/browse/DERBY-606
>             Project: Derby
>          Issue Type: Bug
>          Components: Store
>    Affects Versions: 10.1.1.0
>         Environment: Java 1.5.0_04 on Windows Server 2003 Web Edition
>            Reporter: Jeffrey Aguilera
>         Assigned To: Mayuresh Nirhali
>         Attachments: A606Test.java
>
>
> SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE fails with one of the following error messages when applied to a very large table (>2GB):
> Log operation null encounters error writing itself out to the log stream, this could be caused by an errant log operation or internal log buffer full due to excessively large log operation. SQLSTATE: XJ001: Java exception: ': java.io.IOException'.
> or
> The exception 'java.lang.ArrayIndexOutOfBoundsException' was thrown while evaluating an expression. SQLSTATE: XJ001: Java exception: ': java.lang.ArrayIndexOutOfBoundsException'.
> In either case, no entry is written to the console log or to derby.log.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
