db-derby-dev mailing list archives

From "Knut Anders Hatlen (JIRA)" <j...@apache.org>
Subject [jira] Commented: (DERBY-4348) Copy table content with "INSERT INTO table SELECT FROM (...)" statement leads to corrupt data
Date Fri, 21 Aug 2009 15:00:15 GMT

    [ https://issues.apache.org/jira/browse/DERBY-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12746003#action_12746003 ]

Knut Anders Hatlen commented on DERBY-4348:

I've had a look at the repro in a debugger. It looks like the rows returned by the BulkTableScanResultSet
that reads the source table are correct. The corruption seems to happen somewhere in the normalization
after a chunk of 16 rows has been fetched from the BTSRS (BTSRS has an internal fetch buffer
which holds 16 rows) and before the rows are inserted into the destination table. When the
(16N+1)th row is normalized (N>0), the contents of the LONG VARCHAR column are actually
copied into the SQLLongVarchar that holds the value for the (16N+16)th row. Why that happens
is still not clear to me, but I suspect there's some kind of aliasing problem where two SQLLongVarchar
references point to the same underlying instance.
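The suspected aliasing can be sketched with a toy model. The Holder class and the two normalize methods below are hypothetical stand-ins for SQLLongVarchar and DataTypeDescriptor.normalize(), not Derby's actual code; the sketch only shows how returning the source holder instead of copying lets a stored row alias a slot of the 16-row fetch buffer, so refilling the buffer for the next chunk overwrites a value that was supposedly already normalized:

```java
import java.util.ArrayList;
import java.util.List;

public class AliasingDemo {
    // Hypothetical stand-in for SQLLongVarchar: a mutable value holder.
    static final class Holder {
        String value;
        Holder(String v) { value = v; }
    }

    // Broken "normalize": hands back the shared source holder,
    // so the caller's row now aliases the scan buffer slot.
    static Holder normalizeBroken(Holder source, Holder dest) {
        return source;                 // no copy made: aliasing
    }

    // Safe "normalize": copies the value into the row's own holder.
    static Holder normalizeSafe(Holder source, Holder dest) {
        dest.value = source.value;     // defensive copy breaks the alias
        return dest;
    }

    public static void main(String[] args) {
        Holder bufferSlot = new Holder("row-1");   // one slot of the 16-row fetch buffer
        List<Holder> rows = new ArrayList<>();

        // Broken path: the stored row aliases the buffer slot...
        rows.add(normalizeBroken(bufferSlot, new Holder(null)));
        bufferSlot.value = "row-17";               // ...so refilling the buffer corrupts it.
        System.out.println(rows.get(0).value);     // prints "row-17", not "row-1"

        // Safe path: the stored row owns its value.
        bufferSlot.value = "row-1";
        rows.set(0, normalizeSafe(bufferSlot, new Holder(null)));
        bufferSlot.value = "row-17";
        System.out.println(rows.get(0).value);     // prints "row-1"
    }
}
```

Under this model the corruption pattern in the repro follows directly: whichever row's reference survives until the buffer is refilled ends up showing the value written for a row in the next chunk.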

DataTypeDescriptor.normalize() has a special case for LONG VARCHAR (see below). If I comment
out the special case, the repro doesn't produce a corrupt table, which is also an indication
that the problem is related to the normalization.

			// doing the following check after normalize so that normalize method would
			// get called on long varchars and long varbinary
			// Need normalize to be called on long varchar for bug 5592 where we need to
			// enforce a length limit in db2 mode
			if ((jdbcId == Types.LONGVARCHAR) || (jdbcId == Types.LONGVARBINARY)) {
				// special case for possible streams
				if (source.getClass() == cachedDest.getClass())
					return source;

> Copy table content with "INSERT INTO table SELECT FROM (...)" statement leads to corrupt data
> ---------------------------------------------------------------------------------------------
>                 Key: DERBY-4348
>                 URL: https://issues.apache.org/jira/browse/DERBY-4348
>             Project: Derby
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions:,,,,,,,
>         Environment: Derby: embedded driver - (764942); tested with 10.4 and client driver also
> OS: Windows XP
> SQL statements executed using SQuirrel SQL Client, but behavior is the same with ij
>            Reporter: Stefan Huber
>            Assignee: Knut Anders Hatlen
>            Priority: Critical
>         Attachments: d4348-import.sql, D4348.java, d4348.sql, out.dat, rmdb.zip
> I had to change the primary key of a table and used ddlutils to do so. Ddlutils recreated the table to perform this task.
> After the schema conversion the row data of the changed table were corrupted. 
> The values of the last table column were filled with values from other rows!
> After performing a few tests I could break down the problem to the SQL statement "INSERT INTO table SELECT FROM (...)"
> To reproduce the effect do the following:
> 1. unpack attached database 'rmdb.zip'
> 2. connect to the database with embedded driver 
>     User: IGEL
>     Password: test 
> 3. read data of an example row from database
>     select * from stringrangenew where classname = 'x.xserver%.colordepth';
> result is 
>     x.xserver%.colordepth	2	[16] [24] [32]
> 4. now copy the data to the second table (column INSTANCENR has been added to this table)
> 5. select data of example row from second table
>     select * from stringrangenew_ where classname = 'x.xserver%.colordepth';
> result is 
>     x.xserver%.colordepth	2	-1	[CCW][CW][XX]
> -> value of last column is not the same as in original table!
> Here is some additional information I worked out during my tests:
> * if you change the copy statement to include the additional column INSTANCENR, the copied data are correct.
>     delete from STRINGRANGENEW_;
>     the result shows correct data
> Note: 
> The effect is not restricted to this row but also applies to other rows. But it is always the same rows that get corrupted.

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
