db-derby-dev mailing list archives

From Army <a...@golux.com>
Subject Truncation error w/ INSERT stmt using blob concatenation...
Date Mon, 17 Jan 2005 18:21:54 GMT

I've come across the following behavior with the Derby engine.  It seems like a bug to me, but I thought I'd check to make sure it's not "working as designed" before filing a JIRA entry.

If I create a table with a large blob column (say 100K), and then I try to insert a CONCATENATED blob value into the table, where one of the blobs to be concatenated is larger than 32672 bytes, I get a truncation error, even though the blob column is large enough to hold the value without truncation.

Basically, what I'm doing is:

create table bt (b blob(100K));
insert into bt values (cast (x'0101' as blob) || ?);

I'm using a prepared statement to bind the parameter to a blob that is larger than 32672 bytes.  Here's a snippet of the code I wrote to reproduce the problem:

----

	// Create an array of bytes to be used as the input parameter.
	// NOTE: Size of the parameter is greater than 32672.
	byte [] bData = new byte[32700];
	for (int i = 0; i < bData.length; i++)
		bData[i] = (byte)(i % 10);

	Statement st = conn1.createStatement();
	try {

		// Create table with a BLOB column.
		st.execute("CREATE TABLE bt (b blob(100K))");

		// Now, prepare a statement to execute an INSERT command that uses
		// blob concatenation.
		PreparedStatement pSt = conn1.prepareStatement(
			"insert into bt values (cast (x'1010' as blob) || ?)");
		pSt.setBytes(1, bData);

		// And now try to execute.  This will throw the truncation error
		// seen below.
		pSt.execute();

	} catch (SQLException se) {
		se.printStackTrace();
	}

----

ERROR 22001: A truncation error was encountered trying to shrink VARCHAR () FOR BIT DATA 'XX-RESOLVE-XX' to length 32672.
         at org.apache.derby.iapi.error.StandardException.newException(StandardException.java:333)
         at org.apache.derby.iapi.types.SQLBinary.checkHostVariable(SQLBinary.java:982)
         at org.apache.derby.exe.acdcd58064x0101x81d4x2a4cx00000019b5c03.e0(Unknown Source)
         at org.apache.derby.impl.services.reflect.DirectCall.invoke(ReflectGeneratedClass.java:138)

----

 From the stack trace, it can be seen that this error is thrown from a method called "checkHostVariable", inside of which is the following comment:

	/**
		Host variables are rejected if their length is
		bigger than the declared length, regardless of
		if the trailing bytes are the pad character.
	*/

This is where my uncertainty begins.  It seems to me that, in the above reproduction, "the declared length" would be 100K and the length of the host variable would be 32700.  In that case, since 32700 < 100K, this should be a valid insertion.  But since the variable is rejected, either 1) I'm misinterpreting what the "declared length" is, or 2) the declared length is not being calculated correctly -- it's being set to 32672 (which is what the error message reports, and which looks like the maximum length of a VARCHAR () FOR BIT DATA) when it _should_ be 100K.

Note that this "checkHostVariable" method is called for Blobs, but is NOT called for Clobs.  Thus, if I try to do the exact same thing using clobs instead of blobs (with characters instead of bytes, of course), everything works fine.
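
For reference, the clob version of the test I ran looks like this (table and column names here are just illustrative; the parameter is bound with setString to a String of 32700 characters, analogous to the byte array above):

----

	create table ct (c clob(100K));
	insert into ct values (cast ('11' as clob) || ?);

----

That insertion completes without any truncation error, which is what makes me suspect the blob path specifically.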

Anyone have any input/feedback?  Is this a bug?  My guess is "Yes", but as I could be misunderstanding the comment in the code, I'm not sure...

Thanks,
Army
