nifi-issues mailing list archives

From GitBox <>
Subject [GitHub] [nifi] simonbence commented on a change in pull request #4223: NIFI-7369 Adding big decimal support for record handling in order to avoid missing precision when reading in records
Date Mon, 11 May 2020 11:24:17 GMT

simonbence commented on a change in pull request #4223:

File path: nifi-nar-bundles/nifi-hive-bundle/nifi-hive3-processors/src/main/java/org/apache/hadoop/hive/ql/io/orc/
@@ -284,6 +290,11 @@ public static TypeInfo getOrcField(DataType dataType, boolean hiveFieldNames)
                 || RecordFieldType.STRING.equals(fieldType)) {
             return getPrimitiveOrcTypeFromPrimitiveFieldType(dataType);
+        if (RecordFieldType.BIGDECIMAL.equals(fieldType)) {
+            // 38 is the maximum allowed precision, and 19 digits are needed to represent a long

Review comment:
       The Hive libraries that NiFi currently depends on limit the precision to 38. Hive also
expects the user to specify the precision and scale, which is where the idea of using fixed
numbers came from. As for the 19 digits, the aim was to be able to represent longs without
information loss, but I am open to any opinion; it was an arbitrary choice on my side that
looked like the best fit.
   Also, passing down this extra type information via DataType within RecordField looks possible,
which would avoid the predefined values here, but it would come with a lot of other changes.
In the longer term that might be better, but for initial support I think this covers a lot of
possible use cases even with the limited representation capability.
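   To illustrate the reasoning (this is a hedged sketch, not code from the PR): Hive caps
DECIMAL precision at 38 digits, and Long.MAX_VALUE is 9223372036854775807, which is exactly
19 digits, so a fixed DECIMAL(38, 19) leaves enough integer digits to hold any Java long
without loss.

```java
import java.math.BigDecimal;

public class DecimalPrecisionSketch {
    public static void main(String[] args) {
        // The largest long value has 19 decimal digits.
        BigDecimal maxLong = BigDecimal.valueOf(Long.MAX_VALUE);
        System.out.println(maxLong.precision()); // 19

        // With Hive's maximum precision of 38 and a scale of 19,
        // 38 - 19 = 19 digits remain for the integer part,
        // which is sufficient for any long.
        int hiveMaxPrecision = 38;
        int chosenScale = 19;
        int integerDigits = hiveMaxPrecision - chosenScale;
        System.out.println(integerDigits >= maxLong.precision()); // true
    }
}
```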
   Other than the code of TypeInfoFactory, I found the [following](
documentation about Hive's decimal support and the precision limit.

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
