nifi-issues mailing list archives

From GitBox <>
Subject [GitHub] [nifi] simonbence commented on a change in pull request #4223: NIFI-7369 Adding big decimal support for record handling in order to avoid missing precision when reading in records
Date Mon, 11 May 2020 11:38:33 GMT

simonbence commented on a change in pull request #4223:

File path: nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-avro-record-utils/src/main/java/org/apache/nifi/avro/
@@ -256,6 +256,12 @@ private static Schema buildAvroSchema(final DataType dataType, final String fieldName
             case LONG:
                 schema = Schema.create(Type.LONG);
+            case BIGDECIMAL:
+                // One more byte than below to allow the dot in the string representation
+                schema = Schema.createFixed(fieldName + "Type", null, "org.apache.nifi", 39);

Review comment:
       Based on what I saw within org.apache.avro.Schema, other than creating a schema of a
composite type or one based on the Type enum within (which does not support BigDecimal),
creating a fixed one is supported by it. As we already use it in [convertToAvroObject](
as well, it seems like a working approach.
   As for the 38, the idea was to be consistent with ORC, but you are right: we do not need
to bind ourselves to the smallest common set, and in the case of Avro we might pick a
bigger limit.
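
   To illustrate the sizing rationale from the inline comment (38 digits of precision, the ORC maximum, plus one byte for the decimal point), here is a small self-contained Java sketch. It is not part of the PR; the example value is illustrative, and note the caveat that a sign or leading zero can exceed the precision-plus-one estimate:

```java
import java.math.BigDecimal;

public class FixedSizeRationale {
    public static void main(String[] args) {
        // A value using the full 38 digits of precision (ORC's maximum),
        // split across the decimal point.
        BigDecimal d = new BigDecimal("12345678901234567890.123456789012345678");

        // precision() counts significant digits only; the dot is extra.
        System.out.println(d.precision());              // 38
        System.out.println(d.toPlainString().length()); // 39: 38 digits + 1 dot

        // Caveat: a sign or a leading "0." can push the string past
        // precision + 1, so the fixed size is a heuristic, not a hard bound.
        System.out.println(new BigDecimal("-0.12345").precision());              // 5
        System.out.println(new BigDecimal("-0.12345").toPlainString().length()); // 8
    }
}
```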

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
