avro-dev mailing list archives

From "Ryan Blue (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (AVRO-1864) Decimal conversion should convert values with scale less than intended scale instead of erroring.
Date Mon, 20 Jun 2016 17:26:05 GMT

    [ https://issues.apache.org/jira/browse/AVRO-1864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15339964#comment-15339964 ]

Ryan Blue commented on AVRO-1864:

[~tscott@cloudera.com], implementing what you suggest here would silently modify values, which
is not something that a storage format should do. If you store "2.15" then you should get
"2.15" back, not "2.150". With the semantics of decimal, these conversions are significant
and must be explicit.
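A quick illustration of why the scale is significant (plain java.math.BigDecimal, nothing Avro-specific): "2.15" and "2.150" are numerically equal but are not the same decimal value, and their unscaled representations, which is what Avro actually stores, differ.

```java
import java.math.BigDecimal;

public class ScaleDemo {
    public static void main(String[] args) {
        BigDecimal a = new BigDecimal("2.15");   // scale 2, unscaled value 215
        BigDecimal b = new BigDecimal("2.150");  // scale 3, unscaled value 2150

        // Numerically equal...
        System.out.println(a.compareTo(b) == 0);  // true
        // ...but not the same decimal value: scale differs
        System.out.println(a.equals(b));          // false
        // ...and the bytes a decimal encoding would store differ too
        System.out.println(a.unscaledValue());    // 215
        System.out.println(b.unscaledValue());    // 2150
    }
}
```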

The right place to modify these values is in Sqoop, if and only if the user instructs Sqoop
to modify the value to fit. Another option Avro could support is to store the
scale as a single byte with each value. We decided that this feature wasn't needed when we first
added the decimal spec because the SQL spec requires a fixed scale for a column. If you think
you have a use case that requires a per-value scale, then let's discuss it. [~marcelk] may
also want to weigh in on that use case.
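For completeness, here is a minimal sketch of what that explicit, caller-side rescale could look like (illustrative code, not Sqoop's actual implementation): BigDecimal.setScale with RoundingMode.UNNECESSARY widens the value when it fits and throws when digits would be lost.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class RescaleDemo {
    public static void main(String[] args) {
        // Hypothetical caller-side adjustment, done before handing the
        // value to Avro's decimal conversion.
        BigDecimal incoming = new BigDecimal("3.1");  // scale 1
        int columnScale = 3;                          // scale declared by the schema

        // UNNECESSARY refuses any rescale that would drop digits:
        // widening 3.1 -> 3.100 succeeds, but narrowing e.g. 3.1416
        // to scale 3 would throw ArithmeticException.
        BigDecimal adjusted = incoming.setScale(columnScale, RoundingMode.UNNECESSARY);
        System.out.println(adjusted);  // prints 3.100
    }
}
```

That keeps the conversion layer strict while letting the tool that owns the data decide whether padding the scale is acceptable.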

For now, I'm going to mark this as "won't fix" because the current behavior is correct.

> Decimal conversion should convert values with scale less than intended scale instead of erroring.
> -------------------------------------------------------------------------------------------------
>                 Key: AVRO-1864
>                 URL: https://issues.apache.org/jira/browse/AVRO-1864
>             Project: Avro
>          Issue Type: Bug
>    Affects Versions: 1.7.6
>            Reporter: Thomas Scott
>            Priority: Minor
> Using Sqoop to import data to Avro can mean that decimal scales in the incoming values
> do not match the scales expected in Avro. In this situation Avro file creation fails. However,
> in some cases this is not the correct behaviour: given a value of 3.1 and a scale of 3, the
> value fits within the scale requirements and so should be adjusted.
> Looking through the code this seems to be enforced here:
> src/main/java/org/apache/avro/Conversions.java
>
>   public ByteBuffer toBytes(BigDecimal value, Schema schema, LogicalType type) {
>     int scale = ((LogicalTypes.Decimal) type).getScale();
>     if (scale != value.scale()) {
>       throw new AvroTypeException("Cannot encode decimal with scale " +
>           value.scale() + " as scale " + scale);
>     }
>
> Should this not be:
>
>     if (scale < value.scale()) {
>
> The same applies in: toFixed()

This message was sent by Atlassian JIRA
