openjpa-dev mailing list archives

From "Pinaki Poddar (JIRA)" <>
Subject [jira] [Commented] (OPENJPA-2280) MappingTool ignores column precision / scale for some databases
Date Mon, 04 Feb 2013 23:14:13 GMT


Pinaki Poddar commented on OPENJPA-2280:

From the responses, I gather that we agree on the primary use case, i.e.:

Rule 1: If both precision and scale are specified in a @Column annotation, we must define
a database column of an appropriate type, and that column must honor the specified precision
and scale. If the settings are such that the specified precision and scale cannot be honored,
that raises a schema definition error.
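As an illustration of Rule 1, a minimal mapping sketch (the entity name is made up for this example, and the DDL in the comment is what one would expect under the rule, not verified output from any particular dictionary):

```java
import java.math.BigDecimal;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class Invoice { // hypothetical entity, for illustration only
    @Id
    private long id;

    // Rule 1: both precision and scale are specified, so the mapping
    // tool should emit something like NUMERIC(10,2) (the exact type
    // name varies by DBDictionary), and the schema must honor both.
    @Column(precision = 10, scale = 2)
    private BigDecimal amount;
}
```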

Rule 2: When scale is specified, the user may set any BigDecimal value on their in-memory
object, but when such an instance is stored to or retrieved from the database, the returned
value always has the scale specified in the @Column. This process may lose some accuracy
(scale): for a scale=2 field, the user might have set 1234.56789, but the value returned from
the database is always 1234.56.
But what if the user had set a value of lower scale? Let us say s/he sets 1234. In this case,
will the returned value be 1234 or 1234.00?
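A small sketch of Rule 2 in plain Java, assuming the truncation can be modeled with BigDecimal.setScale (the helper name and the rounding mode are assumptions, not what OpenJPA actually does internally):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class ScaleDemo {
    // Hypothetical helper mirroring what Rule 2 describes for a
    // @Column(scale = 2) field; the rounding mode is an assumption.
    static BigDecimal applyColumnScale(BigDecimal value, int scale) {
        return value.setScale(scale, RoundingMode.DOWN);
    }

    public static void main(String[] args) {
        // Digits beyond the column scale are lost:
        System.out.println(applyColumnScale(new BigDecimal("1234.56789"), 2)); // 1234.56
        // A value of lower scale is padded with trailing zeros, which is
        // one possible answer to the open question above:
        System.out.println(applyColumnScale(new BigDecimal("1234"), 2)); // 1234.00
    }
}
```

Note that with setScale semantics, the lower-scale value comes back as 1234.00 rather than 1234; whether that padding is the desired behavior is exactly the open question.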


I think these ground rules will give us a good start for rationalizing the special-case
behavior when not all settings are explicitly stated.

But before we lay down those special cases, let us clarify another DBDictionary setting,
namely StoreLargeNumbersAsString. The default value for this switch is false. But what if it
is turned on?
It is true that we define the database column as a VARCHAR when StoreLargeNumbersAsString=true.
We should maintain that.
But I believe that even in this case, Rule 2 above must remain the same. That is, if the user
sets a value of 1234.56789 on a @Column(scale=2) field, the value will be stored in the
VARCHAR column as 1234.56789, but when accessed via that field the value will be 1234.56.
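To make the proposed Rule 3 concrete, here is a minimal sketch of the asymmetry in plain Java (not OpenJPA internals): the VARCHAR column keeps the full textual value, while field access applies the declared scale. The rounding mode is an assumption.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class StoreAsStringDemo {
    public static void main(String[] args) {
        BigDecimal userValue = new BigDecimal("1234.56789");

        // With StoreLargeNumbersAsString=true, the column is a VARCHAR
        // and (per the proposal) keeps the full textual value:
        String stored = userValue.toPlainString();
        System.out.println(stored); // 1234.56789

        // But reading back through a @Column(scale = 2) field still
        // applies the declared scale (rounding mode assumed here):
        BigDecimal loaded = new BigDecimal(stored).setScale(2, RoundingMode.DOWN);
        System.out.println(loaded); // 1234.56
    }
}
```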

Do you agree on this proposed rule (say Rule 3)?

Of course, the above rules do not cover the scenarios where precision, scale, or both are
unspecified in @Column.

What are the default values of precision and scale? How do they impact the schema? How is
that impacted by StoreLargeNumbersAsString=true? Given that StoreLargeNumbersAsString is false
by default, and is specific to OpenJPA, let us skip it for now.

According to the spec, precision and scale in the @Column annotation default to zero.

In OpenJPA, if neither precision nor scale is specified in @Column, the database column is of
the type defined by NumericTypeName. That is NUMERIC for the default dictionary, DOUBLE in
Derby and DB2, DECIMAL for Ingres, NUMBER for Oracle, etc. Even if NumericTypeName=DECIMAL,
which can take precision and scale arguments, the DECIMAL column in the database will be
defined without those arguments, as simply DECIMAL; i.e., the database's own rules will
control what can be stored in the column and how.
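For reference, both settings discussed above are configured through the DBDictionary plugin property; a sketch (the property names are real DBDictionary fields, but the values here are illustrative):

```properties
# In persistence.xml, e.g.:
#   <property name="openjpa.jdbc.DBDictionary"
#             value="NumericTypeName=DECIMAL,StoreLargeNumbersAsString=false"/>
openjpa.jdbc.DBDictionary=NumericTypeName=DECIMAL,StoreLargeNumbersAsString=false
```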
I have not yet analyzed or experimented with the other variations of such special cases,
where one of the two but not both is specified, or both are specified but inconsistent
(say, precision < scale, etc.).
> MappingTool ignores column precision / scale for some databases
> ---------------------------------------------------------------
>                 Key: OPENJPA-2280
>                 URL:
>             Project: OpenJPA
>          Issue Type: Bug
>          Components: tooling
>    Affects Versions: 1.2.3, 2.3.0, 2.2.1
>            Reporter: Rick Curtis
> This JIRA is the same issue as reported by OPENJPA-1224.

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators