BTW, here are the ODBC docs for the PRECISION column, which as of ODBC 3.0 is known as COLUMN_SIZE:

The maximum column size that the server supports for this data type. For numeric data, this is the maximum precision. For string data, this is the length in characters. For datetime data types, this is the length in characters of the string representation (assuming the maximum allowed precision of the fractional seconds component). NULL is returned for data types where column size is not applicable. For interval data types, this is the number of characters in the character representation of the interval literal (as defined by the interval leading precision; see "Interval Data Type Length" in Appendix D: Data Types).

For more information on column size, see "Column Size, Decimal Digits, Transfer Octet Length, and Display Size" in Appendix D: Data Types.
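Concretely, the ODBC definition above makes COLUMN_SIZE for datetime types a simple character count of the literal at maximum fractional-seconds precision. A minimal sketch (the specific literal formats below are illustrative assumptions, not quotes from the spec):

```java
public class ColumnSizeDemo {
    public static void main(String[] args) {
        // ODBC COLUMN_SIZE for datetime types is the character length of the
        // string representation, assuming maximum fractional-seconds precision.
        String date = "2005-04-20";                       // yyyy-mm-dd
        String time = "13:45:30";                         // hh:mm:ss (no fraction)
        String timestamp = "2005-04-20 13:45:30.123456";  // yyyy-mm-dd hh:mm:ss.ffffff

        System.out.println(date.length());       // 10
        System.out.println(time.length());       // 8
        System.out.println(timestamp.length());  // 26
    }
}
```

Under these assumed formats, a DATE column would report a column size of 10, TIME 8, and TIMESTAMP 26.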

Daniel John Debrunner (JIRA) wrote:
Daniel John Debrunner commented on DERBY-194:

While the JDBC spec does say 'length', it does not explicitly say what length is being referred to. Length of the object as a String, length of the stored form of the value, maximum length of the Java serialized form of getObject, or something else?
Is there any clarification in the JDBC tutorial book, or is returning NULL a better option here?

getPrecision() on TIME and TIMESTAMP is zero

         Key: DERBY-194
     Project: Derby
        Type: Bug
  Components: JDBC
 Environment: Windows XP SP1 Professional
    Reporter: George Baklarz
    Priority: Minor

Sun's JDBC documentation defines getPrecision() as returning either the maximum length or maximum number of digits of the column, or zero on failure (such as when the precision is unknown).
A DATE column returns 10 characters on a getPrecision() call, so why don't TIME and TIMESTAMP give a precision equal to their display length? It just seems inconsistent that DATE (along with all the other data types) would return a precision but TIME and TIMESTAMP would not.
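The display lengths the report refers to can be seen from the string forms of the java.sql wrapper classes themselves, which is one way to sanity-check what a non-zero getPrecision() might plausibly return (a sketch only; Derby's actual internal representations may differ):

```java
import java.sql.Time;
import java.sql.Timestamp;

public class DisplayLengthDemo {
    public static void main(String[] args) {
        // java.sql.Time renders as hh:mm:ss -> 8 characters.
        Time t = Time.valueOf("13:45:30");
        System.out.println(t.toString().length());   // 8

        // java.sql.Timestamp renders as yyyy-mm-dd hh:mm:ss.ffffff
        // (trailing zero nanoseconds are trimmed) -> 26 characters here.
        Timestamp ts = Timestamp.valueOf("2005-04-20 13:45:30.123456");
        System.out.println(ts.toString().length());  // 26
    }
}
```

By the same counting, a DATE (yyyy-mm-dd) is 10 characters, which matches the value getPrecision() already returns for DATE columns.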