db-derby-dev mailing list archives

From "Mike Matrigali (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (DERBY-5235) Remove the artificial limit on the length of VARCHAR values, allowing them to be java.lang.Integer.MAX_VALUE long
Date Wed, 18 May 2011 20:50:47 GMT

    [ https://issues.apache.org/jira/browse/DERBY-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13035660#comment-13035660 ]

Mike Matrigali commented on DERBY-5235:
---------------------------------------

Thanks, Rick, for the clob = clob example; I assume it applies to any CLOB comparison. Is there
anything else about CLOB, not already mentioned, that doesn't meet the same needs as VARCHAR?

Could a standards expert comment? I believe Derby disallows clob = clob in order to be compliant
with the standard. Is that true? If the standard defines different behaviors for VARCHAR, LONG
VARCHAR, and CLOB, then I don't think Derby should ever make them synonyms.

But the suggestion made me think. My biggest issue with this is actually one of implementation:
giving up the simplicity and efficiency of the current VARCHAR implementation. I think there is
still value in a simple, fast implementation of small character strings versus a slower but more
memory-efficient implementation of larger strings. If we decide to support very large VARCHARs,
I actually think we should just use the existing CLOB code and improve/extend it as necessary.
I am thinking that a VARCHAR within the current size limit gets the current VARCHAR
implementation, while a VARCHAR longer than that gets a new implementation that is really the
CLOB code, extended to do the comparisons expected by VARCHAR, and probably some other things to
make it look like a VARCHAR for purposes such as type resolution. I don't know how hard it will
be to make two different internal implementations return the same SQL datatype.
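A minimal sketch of that DDL-time dispatch, under the assumption that the class names are made up for illustration (SQLVarcharLarge is hypothetical; 32672 is Derby's current VARCHAR limit):

```java
// Illustrative sketch only: the class and method names below are invented
// and are not claimed to match Derby's actual type system.
public class VarcharDispatch {
    // Today's VARCHAR upper bound, kept as the cutover point.
    public static final int CURRENT_LIMIT = 32672;

    // At CREATE TABLE time, the declared length picks the internal
    // implementation; both present the same SQL VARCHAR type to users.
    public static String implementationFor(int declaredLength) {
        if (declaredLength <= CURRENT_LIMIT) {
            // Small values keep the current simple, fast in-memory code.
            return "SQLVarchar";
        }
        // Larger values would get a streaming, CLOB-derived implementation
        // extended with VARCHAR comparison semantics.
        return "SQLVarcharLarge";
    }
}
```

The point of the sketch is that the branch happens once, at DDL time, so users only ever see one VARCHAR type.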

The goal would be that users never see any of this. The choice of datatype implementation would
be made at DDL create time, based on the size the user gives us, and we would deliver standard
VARCHAR behaviour. For indexes we wouldn't change current support, which is based on the length
of the VARCHAR, though we should check whether the documentation needs to change in light of
Knut's observation that an N-character VARCHAR may take up 3N bytes on disk. Longer term, we
should probably change all VARCHAR indexes to index only the first N characters, where N is
something reasonable for good index fan-out, such as 256.
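The prefix-indexing idea above can be sketched as follows; PrefixKey and its 256-character limit are hypothetical, not existing Derby code:

```java
// Hypothetical sketch of prefix indexing for long VARCHARs: only the first
// PREFIX_LENGTH characters of a value go into the index key, so even very
// long values keep index pages small and fan-out good.
public class PrefixKey {
    // A prefix length "reasonable for good index fan-out", per the comment.
    public static final int PREFIX_LENGTH = 256;

    // Only this truncated key is stored in the index entry.
    public static String indexKeyFor(String value) {
        return value.length() <= PREFIX_LENGTH
                ? value
                : value.substring(0, PREFIX_LENGTH);
    }
}
```

The trade-off: two distinct values can share a full 256-character prefix, so an index probe that matches on the whole prefix would still have to fetch the base row to compare the remaining characters.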

Can anyone who has worked on the CLOB code recently comment on how hard it would be to make it
support VARCHAR-like comparisons in a subclass?
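For concreteness, the comparison such a subclass would need might look like the following sketch, which compares two character streams without materializing them. It assumes NO PAD semantics (a shorter prefix sorts first); whether Derby's VARCHAR comparison actually uses PAD SPACE or NO PAD would need to be confirmed, and the class name is invented:

```java
import java.io.IOException;
import java.io.Reader;

public class ComparableClobSketch {
    // Compare two character streams lexicographically, one character at a
    // time, without materializing either value in memory. End-of-stream
    // (read() == -1) sorts before any character, so a shorter string that
    // is a prefix of a longer one compares as smaller (NO PAD semantics).
    public static int compareStreams(Reader a, Reader b) throws IOException {
        int ca, cb;
        do {
            ca = a.read();
            cb = b.read();
            if (ca != cb) {
                return ca < cb ? -1 : 1;
            }
        } while (ca != -1);
        return 0; // both streams ended together: equal
    }
}
```

A real implementation would read in buffered chunks rather than a character at a time, and would have to respect the collation in effect.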

> Remove the artificial limit on the length of VARCHAR values, allowing them to be java.lang.Integer.MAX_VALUE long
> -----------------------------------------------------------------------------------------------------------------
>
>                 Key: DERBY-5235
>                 URL: https://issues.apache.org/jira/browse/DERBY-5235
>             Project: Derby
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 10.9.0.0
>            Reporter: Rick Hillegas
>
> The original Cloudscape limit for the length of VARCHAR values was java.lang.Integer.MAX_VALUE.
> That is the limit in Cloudscape 5.1. Nothing in Derby should break if we restore the original
> limit. The current limit is an artificial bound introduced to make Derby agree with DB2. 32672
> is the upper bound on the length of a DB2 VARCHAR: http://publib.boulder.ibm.com/infocenter/db2luw/v8/index.jsp?topic=/com.ibm.db2.udb.doc/admin/r0001029.htm

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
