From: Kathey Marsden
Date: Mon, 24 Jan 2005 12:47:00 -0800
To: Derby Development
Subject: Re: On DERBY-107 : ODBC Metadata functions

Option I:

> Add new SQL statements, such as "getColumnsForODBC", to the existing
> metadata.properties file

[snip]

-1, because we had so many problems maintaining two sets of almost
identical queries when we tried that before, and if we need ODBC 2 and
ODBC 3 it gets really messy.

Option II:

[snip]

-1, for the same reason as Option I.

Army wrote:

> Option III:
>
> Create some kind of internal VTI for ODBC metadata and use that. I have
> to admit that I don't know too much about how VTIs work, but Kathey gave
> me some places to look, so I'm going to read up. Apparently, we can
> execute the same metadata SQL statements that already exist for JDBC,
> then use a VTI to "massage" the result set into something that complies
> with ODBC specifications. This might be a good choice given that most
> of the differences between ODBC and JDBC are in the types of the columns
> returned. For example, JDBC might say that any String will do, whereas
> ODBC will say it has to be VARCHAR. In that case, a literal value ' '
> is fine for JDBC, but since it is treated as a CHAR value by Derby, it
> would break the ODBC standard.
> With a VTI, we could theoretically accomplish the same things that we'd
> be doing with new SQL statements--such as casting ' ' to VARCHAR in this
> particular case. Other modifications we'd have to implement include
> casting certain integer columns to smallints, and replacing null values
> in JDBC (such as for "sql_data_type" and "buffer_length" columns) with
> legal values (neither column is supposed to be null for ODBC).
>
> Upside to this is that we still only have a single metadata.properties
> file, which (theoretically) makes maintenance of metadata procedures
> easier. As I don't know much about VTIs, I can't say what else this
> approach would require, but it seems safe to say that it would at least
> require another class to serve as the ODBC VTI. How that would tie into
> the SystemProcedures and EmbedDatabaseMetadata classes, I don't know yet...
>
> So...
>
> [ #2 **** COMMUNITY INPUT? **** ]
>
> What are people's feelings/reactions/comments in terms of which of the
> above three options is most appropriate for the Derby codeline? In a
> nutshell, it comes down to 1) new public methods on EmbedDatabaseMetadata,
> 2) a new subclass of EmbedDatabaseMetadata that sets ODBC-related state
> (with or without creating a new odbc_metadata.properties file), and 3)
> creating an ODBC VTI to manipulate existing JDBC result sets into
> ODBC-compliant ones.
>
> NOTE: another thing we have to decide is if we're going to support
> different levels of ODBC metadata--for example, ODBC 1.0 metadata
> differs from ODBC 2.0 in some respects, and likewise for ODBC 3.0. Do
> we want to support all of the different levels? If so, which of the
> above approaches is going to make that most feasible? My guess is that
> option III would be the best choice in that case...

After talking to Army some more about this, it seems there are three
different types of difference between JDBC and ODBC:

1) Changes that could be made to our metadata.properties that would keep
it JDBC compliant but would also satisfy the ODBC requirements. For
example, fields where we return a CHAR column now but need a VARCHAR for
ODBC: the JDBC spec just says String, so I think a VARCHAR would be OK.
SQL_DATA_TYPE, which is unused in JDBC but required in ODBC, can be
filled in. I also think it would be OK to change our INTEGER columns to
SMALLINT where required, since getObject would still return an Integer,
but that might be more controversial. For these I think we change
metadata.properties as much as possible to conform to both.

2) Column name changes and rearranged columns. For example, JDBC returns
RADIX for getProcedureColumns whereas ODBC uses NUM_PREC_RADIX. For
these we could either make a VTI, or perhaps there is some more direct
internal way to hack at the metadata; I don't know. If done with a VTI,
I would see it happening like this (rough sketch just below): make a new
class which is a wrapper for the DatabaseMetaData object, i.e.
ODBCMetaData implements java.sql.DatabaseMetaData. The constructor takes
a Connection object and initializes dbmd = conn.getMetaData(). Where
there is no difference between ODBC and JDBC it just has something like
"return dbmd.getTypeInfo()". Where ODBC is different we call the VTI.
Maybe there could be statements in metadata.properties, but just to call
the VTIs. If there are new VTIs, they could take the options passed to
the stored procedures. Later the class could be extended for
ODBC3MetaData etc.
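Something like the following, just to make the shape concrete. This is
only a sketch: the real class would have to implement all of
java.sql.DatabaseMetaData (delegating everything it does not override to
dbmd), and the VTI class name and the query that invokes it are made up
here.

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Sketch of the wrapper.  In reality it would implement
// java.sql.DatabaseMetaData and delegate every method it does not
// override to the wrapped dbmd.
public class ODBCMetaData {

    private final Connection conn;
    private final DatabaseMetaData dbmd;

    public ODBCMetaData(Connection conn) throws SQLException {
        this.conn = conn;
        this.dbmd = conn.getMetaData();
    }

    // Where ODBC and JDBC agree, just delegate to the JDBC metadata.
    public ResultSet getTypeInfo() throws SQLException {
        return dbmd.getTypeInfo();
    }

    // Where ODBC differs, run a query that goes through a VTI which
    // reshapes the JDBC result set.  The VTI class name, the NEW ...
    // invocation syntax and the way the arguments are passed are all
    // hypothetical; the query text could live in metadata.properties
    // like the other metadata queries.
    public ResultSet getProcedureColumns(String catalog,
            String schemaPattern, String procedureNamePattern,
            String columnNamePattern) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
            "SELECT * FROM NEW org.apache.derby.impl.jdbc.ODBCProcedureColumnsVTI(?, ?, ?, ?) AS PC");
        ps.setString(1, catalog);
        ps.setString(2, schemaPattern);
        ps.setString(3, procedureNamePattern);
        ps.setString(4, columnNamePattern);
        return ps.executeQuery();
    }
}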
In SystemProcedures the getDMD() method could be changed to take the
options and, for ODBC, create the wrapper instead of getting the
DatabaseMetaData directly, e.g.

  return new ODBCMetaData(conn);

The VTI itself would use the metadata call as its underlying result set,
and then for the column info it would have something like

  EmbedResultSetMetaData.getResultColumnDescriptor("NUM_PREC_RADIX", Types.SMALLINT, false),

(a rough sketch of this is in the P.S. below). Again, maybe all of this
could be avoided if we could somehow internally change the column name
of the result set.

3) Extra columns in ODBC. For getProcedureColumns, the spec explicitly
says it's OK to have extra columns. I wonder if that would be OK for
other methods too.

> [ #2 **** End **** ]
>
> The final part of Kathey's and my discussion involved the issue of Derby
> upgrade. While it IS true that, as Dan said, "no upgrade code has been
> added to support upgrading databases from 10.0 to 10.1" (see
> http://nagoya.apache.org/eyebrowse/ReadMsg?listName=derby-dev@db.apache.org&msgNo=1703),
> it nonetheless seems like a good idea to consider what effects on
> upgrade the changes to Derby metadata might have.

In DD_Version.upgradeIfNeeded() the JDBC metadata SPSs get dropped and
recreated on any version change, up or down, so changes to the stored
prepared statements should be OK, since the version changes with this
change; for instance, if this change went to the maintenance branch, the
version would need to be bumped from 10.0.2.1 to 10.0.2.2.

Army, could you verify that these are the only three types of changes?
If we could find an easier way to change the names, maybe this isn't all
that hard, but I really think copying the queries, especially the really
complex ones, is a bad idea, especially if we start having to support
multiple ODBC versions.

Kathey
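P.S. To make the column info part of the VTI idea a little more concrete,
here is roughly what I am picturing for the getProcedureColumns case.
Only the EmbedResultSetMetaData.getResultColumnDescriptor() call is the
one mentioned above; the VTI class name and the EmbedResultSetMetaData
constructor used here are assumptions, and I have not tried to compile
this.

import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Types;

import org.apache.derby.iapi.sql.ResultColumnDescriptor;
import org.apache.derby.impl.jdbc.EmbedResultSetMetaData;

// Skeleton of a hypothetical VTI that presents the JDBC
// getProcedureColumns result set with the ODBC column names and types.
// The real VTI would also have to behave as a ResultSet over the
// underlying JDBC metadata result set; only the column description part
// is sketched here.
public class ODBCProcedureColumnsVTI {

    // ODBC wants NUM_PREC_RADIX (a SMALLINT) where JDBC has RADIX.
    // Only the one interesting column is shown; the rest of the
    // getProcedureColumns columns would be listed here as well.
    private static final ResultColumnDescriptor[] columnInfo = {
        EmbedResultSetMetaData.getResultColumnDescriptor(
            "NUM_PREC_RADIX", Types.SMALLINT, false),
    };

    // Assumes EmbedResultSetMetaData has a constructor that takes the
    // descriptor array.
    public ResultSetMetaData getMetaData() throws SQLException {
        return new EmbedResultSetMetaData(columnInfo);
    }
}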