db-derby-dev mailing list archives

From "Mike Matrigali (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (DERBY-5235) Remove the artificial limit on the length of VARCHAR values, allowing them to be java.lang.Integer.MAX_VALUE long
Date Tue, 17 May 2011 17:40:47 GMT

    [ https://issues.apache.org/jira/browse/DERBY-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13034913#comment-13034913
] 

Mike Matrigali commented on DERBY-5235:
---------------------------------------

I definitely do not agree with the statement "Nothing in Derby should break if we restore
the original limit."  The Derby code has changed a lot since the original change, so the fact
that the codeline supported a longer limit in the past does not mean the current code is ready
to accept the longer limit again.  It also does not mean there were no problems with very long
varchars in the original Cloudscape code.

I would suggest that anyone working on this scan all the code that looks at CLOBs and see if
that code now has to change to handle varchars.  For instance, the DDL code likely assumes
that varchars are smaller than a page and bases some default page-size decisions on that,
while it assumes CLOBs are bigger.  This may not actually cause a bug, but it is an example
of an implicit assumption in the code.

Unfortunately, the hardest places to find, and the ones most likely to lead to bugs, are cases
where the code just implicitly "knows" that a varchar can't be bigger than a page and therefore
never codes for the bigger case.  The problem here is not finding buggy code but finding code
that is "missing" because varchars can now be bigger than before.

I vaguely remember that making varchars less than 32k solved some problems in the code, but
at this point it is a distant memory.  It could be the network issues already mentioned.

I would definitely suggest adding additional sort-based tests of very big varchars if this
issue is worked on.  Make sure to test both in-memory sorts and sorts that spill to disk.

Another issue that will surface if users use varchars instead of clobs is that varchars are
read entirely into memory, while clobs are optimized to stream rather than being read into
memory.  The implicit assumption for a varchar is that its maximum size is OK to read into
memory, while for a clob that is not the case.  This assumption has kept the varchar code
simpler, but it could of course be enhanced in the future.
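The trade-off above can be illustrated outside of Derby with plain java.io.  The following is a minimal, hypothetical sketch (not Derby code; the class and method names are made up for illustration) contrasting materializing a whole character value in memory with streaming it through a small fixed-size buffer, which is roughly the VARCHAR-versus-CLOB difference described:

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;

public class StreamVsMaterialize {

    // VARCHAR-style access: the whole value is held in memory at once,
    // so memory use grows with the value's length.
    static int lengthMaterialized(String value) {
        return value.length();
    }

    // CLOB-style access: the value is consumed through a fixed-size
    // buffer, so memory use stays constant regardless of length.
    static int lengthStreamed(Reader reader) throws IOException {
        char[] buf = new char[8192];
        int total = 0;
        int n;
        while ((n = reader.read(buf)) != -1) {
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for a character value longer than the old 32672 limit.
        String big = "x".repeat(100_000);
        System.out.println(lengthMaterialized(big));
        System.out.println(lengthStreamed(new StringReader(big)));
    }
}
```

Both calls report the same length; the difference is only in how much of the value must be resident in memory at once, which is the cost users would take on by choosing very long varchars over clobs.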


> Remove the artificial limit on the length of VARCHAR values, allowing them to be java.lang.Integer.MAX_VALUE long
> -----------------------------------------------------------------------------------------------------------------
>
>                 Key: DERBY-5235
>                 URL: https://issues.apache.org/jira/browse/DERBY-5235
>             Project: Derby
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 10.9.0.0
>            Reporter: Rick Hillegas
>
> The original Cloudscape limit for the length of VARCHAR values was java.lang.Integer.MAX_VALUE.
> That is the limit in Cloudscape 5.1. Nothing in Derby should break if we restore the original
> limit. The current limit is an artificial bound introduced to make Derby agree with DB2. 32672
> is the upper bound on the length of a DB2 VARCHAR: http://publib.boulder.ibm.com/infocenter/db2luw/v8/index.jsp?topic=/com.ibm.db2.udb.doc/admin/r0001029.htm

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
