portals-jetspeed-dev mailing list archives

From "Jan Goyvaerts (jgoyvaer)" <jgoyv...@cisco.com>
Subject RE: serious db psml oracle problem
Date Tue, 05 Nov 2002 08:59:39 GMT
Hi Glenn,

What version of the village code has this bug ? 1.5.3 ?

Regards,

Jan.

-----Original Message-----
From: Glenn Golden [mailto:ggolden@umich.edu] 
Sent: Monday, November 04, 2002 16:45
To: Jetspeed-Dev (jetspeed-dev@jakarta.apache.org)
Subject: serious db psml oracle problem


(Note: I'm still using code from the Oct 8 CVS, not the CVS trunk. I'm not
sure if this is any different in the trunk.)
 
When pointing the db psml manager at Oracle, the field used to store the
psml is a "long raw". Once the profile gets above about 4k in size, it
cannot be written to the database any more. I'm using the Oracle thin
JDBC driver.
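 
In case anyone wants to reproduce this outside of Jetspeed, a minimal
sketch of the failing path looks roughly like the code below. The table
and column names ("PSML_PROFILE", "PROFILE", "ID") are made up for
illustration and are not the real db psml schema:
 
     import java.sql.Connection;
     import java.sql.PreparedStatement;
     import java.sql.SQLException;
 
     // Sketch only: the names are invented, but the failure mode is the
     // same one we hit through the db psml manager.
     void writeProfile(Connection conn, int profileId, byte[] psml)
         throws SQLException
     {
         PreparedStatement stmt = conn.prepareStatement(
             "UPDATE PSML_PROFILE SET PROFILE = ? WHERE ID = ?");
         stmt.setBytes(1, psml);      // against a "long raw" column this
         stmt.setInt(2, profileId);   // fails once psml.length passes ~4k
         stmt.executeUpdate();
         stmt.close();
     }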
 
I've tracked this down to the village code that sets the value into the
JDBC prepared statement
(com.workingdogs.village.Value.setPreparedStatementValue(), line 260).
For the types LONGVARBINARY, VARBINARY and BINARY, which cover our
"long raw", it uses this code:
 
     stmt.setBytes(stmtNumber, this.asBytes());
 
This has the 4k limit.
 
I've successfully changed the village code to use this instead:
 
     byte[] value = this.asBytes();
     stmt.setBinaryStream(stmtNumber,
         new java.io.ByteArrayInputStream(value), value.length);
 
using "setBinaryStream" instead of "setBytes", which doesn't have the
limit.
 
This works.
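 
For context, the patched case in Value.setPreparedStatementValue() ends
up looking roughly like this. I'm sketching the surrounding case
structure from memory, so don't take it as an exact quote of the village
source; only the setBinaryStream lines are the actual change:
 
     case Types.LONGVARBINARY:   // Types here is java.sql.Types
     case Types.VARBINARY:
     case Types.BINARY:
     {
         byte[] value = this.asBytes();
         // stream the bytes instead of calling setBytes(), which the
         // oracle thin driver limits to about 4k for "long raw" columns
         stmt.setBinaryStream(stmtNumber,
             new java.io.ByteArrayInputStream(value), value.length);
         break;
     }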
 
I wonder if there have been any changes in the trunk, specifically in the
upgrade to new libraries, that would make this go away.
 
I also wonder if there's a better way to fix this, something about
changing the schema?
 
If not, we can patch our village code or get this fix to workingdogs.
 
- Glenn
 

--
To unsubscribe, e-mail:   <mailto:jetspeed-dev-unsubscribe@jakarta.apache.org>
For additional commands, e-mail: <mailto:jetspeed-dev-help@jakarta.apache.org>

