empire-commits mailing list archives

From "Shaun Forbes (JIRA)" <empire-db-...@incubator.apache.org>
Subject [jira] [Updated] (EMPIREDB-208) Decimal fields don't support scales above 9
Date Wed, 20 Aug 2014 10:17:26 GMT

     [ https://issues.apache.org/jira/browse/EMPIREDB-208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shaun Forbes updated EMPIREDB-208:
----------------------------------

    Description: 
The code for determining a decimal/numeric column's scale from its size assumes that 9
is the maximum value for the scale, when in fact the scale can be as large as the column's
precision (up to 38, I believe).

Precision and scale for a decimal field are encoded in a double as "precision.scale", so the
size of a numeric(10,3) field is encoded as the double value 10.3.
numeric(17,12) is also a valid numeric field, but because the code assumes the scale is less
than 10, it causes problems even though it is stored as 17.12 in the column's size field.
Even worse is numeric(17,10), which is interpreted as having a scale of 1 rather than 10.
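
To illustrate why numeric(17,10) is the worst case, here is a minimal standalone sketch
(demo code, not Empire-DB source): the literals 17.10 and 17.1 are the very same double,
so the encoding itself already loses the trailing zero of the scale.
{code}
public class SizeEncodingDemo {
    public static void main(String[] args) {
        double sizeFor17_10 = 17.10; // meant to encode numeric(17,10)
        double sizeFor17_1  = 17.1;  // meant to encode numeric(17,1)
        // prints true: the two sizes are indistinguishable once stored as a double
        System.out.println(sizeFor17_10 == sizeFor17_1);
    }
}
{code}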

This manifests itself in two locations that I've found so far: when validating a decimal field
and when generating a DDL statement.

The offending code in DBTableColumn.validateNumber(DataType type, Number n) is:
{code}
// assumes the scale is a single digit, so any scale of 10 or more is truncated
int reqScale = ((int)(size * 10) - (reqPrec * 10));
{code}

and in DBDDLGenerator.appendColumnDataType(DataType type, double size, DBTableColumn c, StringBuilder sql) is:
{code}
// shifting by a single decimal digit likewise caps the recoverable scale at 9
int scale = (int) ((size - prec) * 10 + 0.5);
{code}
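
The failure is easy to reproduce; this standalone sketch (demo code, not Empire-DB source)
runs both decodings against a numeric(17,12) column, i.e. size = 17.12 and precision 17,
and both report a scale of 1 instead of 12:
{code}
public class ScaleDecodingDemo {
    public static void main(String[] args) {
        double size = 17.12; // encodes numeric(17,12)
        int prec = 17;

        // DBTableColumn.validateNumber-style decoding
        int reqScale = ((int) (size * 10) - (prec * 10));
        System.out.println(reqScale); // prints 1, expected 12

        // DBDDLGenerator.appendColumnDataType-style decoding
        int scale = (int) ((size - prec) * 10 + 0.5);
        System.out.println(scale);    // prints 1, expected 12
    }
}
{code}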

My first suggestion for a fix would be to alter the two pieces of code above to perform the
correct conversion from double to precision/scale, using something like BigDecimal. The
problem is that by then it's too late: numeric(17,12) would be fine, but it would not fix
numeric(17,10), as its size would already be stored as 17.1.
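
For what it's worth, a BigDecimal-based decoding might look like the sketch below
(decodeScale is a hypothetical helper, not part of the Empire-DB API). It recovers
numeric(17,12) correctly but still reads numeric(17,10) as scale 1, which is exactly the
residual problem described above:
{code}
import java.math.BigDecimal;

public class BigDecimalDecodeDemo {
    // Decode the scale from the shortest decimal rendering of the size,
    // avoiding the multiply-and-truncate arithmetic on raw doubles.
    static int decodeScale(double size) {
        String s = BigDecimal.valueOf(size).toPlainString(); // e.g. "17.12"
        int dot = s.indexOf('.');
        return dot < 0 ? 0 : Integer.parseInt(s.substring(dot + 1));
    }

    public static void main(String[] args) {
        System.out.println(decodeScale(17.12)); // 12 - now correct
        System.out.println(decodeScale(17.10)); // 1  - unfixable: 17.10 is stored as 17.1
    }
}
{code}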

I don't see any option but to either move the scale into a separate value and stop encoding
it in the size, or change the column size field to a BigDecimal (or some other type that can
represent both the precision and the scale correctly).
The former feels more correct to me, but the latter seems easier to implement; unfortunately,
both options are API-breaking changes.
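
As a rough illustration of the first option (all names here are hypothetical, not the actual
Empire-DB API), the size could be carried as two separate ints, which makes collisions like
17.10 vs 17.1 impossible by construction:
{code}
// Hypothetical value type; illustrative only, not Empire-DB code.
class DecimalSize {
    final int precision; // total digits, e.g. 17
    final int scale;     // digits after the decimal point, e.g. 10

    DecimalSize(int precision, int scale) {
        if (scale < 0 || scale > precision)
            throw new IllegalArgumentException("scale must be in [0, precision]");
        this.precision = precision;
        this.scale = scale;
    }
}
{code}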

Thanks,
Shaun


> Decimal fields don't support scales above 9
> -------------------------------------------
>
>                 Key: EMPIREDB-208
>                 URL: https://issues.apache.org/jira/browse/EMPIREDB-208
>             Project: Empire-DB
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: empire-db-2.4.2
>            Reporter: Shaun Forbes
>              Labels: newbie
>



--
This message was sent by Atlassian JIRA
(v6.2#6252)
