kylin-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (KYLIN-3644) NumberFormatException on null values when building cube with Spark
Date Thu, 01 Nov 2018 10:29:00 GMT

    [ https://issues.apache.org/jira/browse/KYLIN-3644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16671426#comment-16671426
] 

ASF GitHub Bot commented on KYLIN-3644:
---------------------------------------

shaofengshi closed pull request #322: KYLIN-3644 Fix SparkFactDistinct step NumberFormatException
URL: https://github.com/apache/kylin/pull/322
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:


diff --git a/core-cube/src/main/java/org/apache/kylin/cube/util/KeyValueBuilder.java b/core-cube/src/main/java/org/apache/kylin/cube/util/KeyValueBuilder.java
index 0ba4fd8ed1..0636a5c343 100644
--- a/core-cube/src/main/java/org/apache/kylin/cube/util/KeyValueBuilder.java
+++ b/core-cube/src/main/java/org/apache/kylin/cube/util/KeyValueBuilder.java
@@ -56,7 +56,7 @@ private void initNullStrings() {
         }
     }
 
-    protected boolean isNull(String v) {
+    public boolean isNull(String v) {
         return nullStrs.contains(v);
     }
 
diff --git a/engine-spark/src/main/java/org/apache/kylin/engine/spark/SparkFactDistinct.java b/engine-spark/src/main/java/org/apache/kylin/engine/spark/SparkFactDistinct.java
index 5cfd2d7ccb..cdd0ac22d7 100644
--- a/engine-spark/src/main/java/org/apache/kylin/engine/spark/SparkFactDistinct.java
+++ b/engine-spark/src/main/java/org/apache/kylin/engine/spark/SparkFactDistinct.java
@@ -63,6 +63,7 @@
 import org.apache.kylin.cube.cuboid.CuboidUtil;
 import org.apache.kylin.cube.model.CubeDesc;
 import org.apache.kylin.cube.model.CubeJoinedFlatTableEnrich;
+import org.apache.kylin.cube.util.KeyValueBuilder;
 import org.apache.kylin.dict.DictionaryGenerator;
 import org.apache.kylin.dict.IDictionaryBuilder;
 import org.apache.kylin.engine.EngineFactory;
@@ -250,6 +251,7 @@ protected void execute(OptionsHelper optionsHelper) throws Exception {
         private Map<Integer, DimensionRangeInfo> dimensionRangeInfoMap;
         private transient ByteBuffer tmpbuf;
         private LongAccumulator bytesWritten;
+        private KeyValueBuilder keyValueBuilder;
 
         public FlatOutputFucntion(String cubeName, String segmentId, String metaurl, SerializableConfiguration conf,
                 int samplingPercent, LongAccumulator bytesWritten) {
@@ -272,6 +274,7 @@ private void init() {
                 CubeJoinedFlatTableEnrich intermediateTableDesc = new CubeJoinedFlatTableEnrich(
                         EngineFactory.getJoinedFlatTableDesc(cubeSegment), cubeDesc);
 
+                keyValueBuilder = new KeyValueBuilder(intermediateTableDesc);
                 reducerMapping = new FactDistinctColumnsReducerMapping(cubeInstance);
                 tmpbuf = ByteBuffer.allocate(4096);
 
@@ -317,7 +320,7 @@ private void init() {
 
                 for (int i = 0; i < allCols.size(); i++) {
                     String fieldValue = row[columnIndex[i]];
-                    if (fieldValue == null)
+                    if (fieldValue == null || keyValueBuilder.isNull(fieldValue))
                         continue;
 
                     final DataType type = allCols.get(i).getType();
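
The guard the patch adds can be sketched in isolation. The following is a hypothetical, self-contained reconstruction (the class name `NullGuardSketch`, the method `parseMeasure`, and the contents of the null-string set are illustrative; in Kylin the set comes from the cube's configured null strings, with Hive's "\N" being the common default):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of the KYLIN-3644 fix: treat configured null
// placeholders as nulls before numeric parsing, instead of letting
// Double.parseDouble throw NumberFormatException on them.
public class NullGuardSketch {

    // Mirrors KeyValueBuilder.isNull: the real set is built from the
    // cube's configured null strings; shown here with common defaults.
    static final Set<String> NULL_STRS = new HashSet<>(Arrays.asList("\\N", "", "NULL"));

    static boolean isNull(String v) {
        return v == null || NULL_STRS.contains(v);
    }

    // Parse a measure field, skipping null markers the way the patched
    // loop in SparkFactDistinct does with `continue`.
    static Double parseMeasure(String fieldValue) {
        if (isNull(fieldValue)) {
            return null; // skip this value rather than fail the task
        }
        return Double.parseDouble(fieldValue);
    }

    public static void main(String[] args) {
        System.out.println(parseMeasure("\\N"));       // Hive null marker: skipped
        System.out.println(parseMeasure("838.47034")); // normal value: parsed
    }
}
```

The diff's two parts correspond directly: widening `isNull` from protected to public makes the existing null-string check reachable from the Spark engine, and the extra condition in the column loop applies it before any type-specific parsing.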


 

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


> NumberFormatException on null values when building cube with Spark
> ------------------------------------------------------------------
>
>                 Key: KYLIN-3644
>                 URL: https://issues.apache.org/jira/browse/KYLIN-3644
>             Project: Kylin
>          Issue Type: Bug
>          Components: Spark Engine
>    Affects Versions: v2.5.0
>            Reporter: Hubert STEFANI
>            Assignee: Chao Long
>            Priority: Major
>             Fix For: v2.5.1
>
>         Attachments: 00_zeppelin_notebook.jpg, 01_overview_table.jpg, 02_dimension_cube.jpg, 03_measure_cube.jpg, sortieData.csv
>
>
> We encounter an error any time we try to build a cube with the following steps:
>  * upload a csv on AWS S3 with the following characteristics: the column on which the measure will be defined has some null values (cf. attachment)
>  * create a hive table with spark
>  * create a model on top of this table
>  * create a cube with a SUM measure
>  * choose Spark as the engine
>  * launch the build
> Result: the build process fails at '#7 Step Name: Build Cube with Spark' with the following error:
>  
> """"""
> 18/10/23 09:25:39 INFO scheduler.DAGScheduler: Job 0 failed: saveAsNewAPIHadoopDataset
at SparkCubingByLayer.java:253, took 7,277136 s
> Exception in thread "main" java.lang.RuntimeException: error execute org.apache.kylin.engine.spark.SparkCubingByLayer.
Root cause: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent
failure: Lost task 0.3 in stage 0.0 (TID 4, ip-172-31-35-113.eu-west-1.compute.internal, executor
4): java.lang.NumberFormatException: For input string: "\N"
>     at sun.misc.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:2043)
>     at sun.misc.FloatingDecimal.parseDouble(FloatingDecimal.java:110)
>     at java.lang.Double.parseDouble(Double.java:538)
>     at org.apache.kylin.measure.basic.DoubleIngester.valueOf(DoubleIngester.java:38)
>     at org.apache.kylin.measure.basic.DoubleIngester.valueOf(DoubleIngester.java:28)
>     at org.apache.kylin.engine.mr.common.BaseCuboidBuilder.buildValueOf(BaseCuboidBuilder.java:162)
>     at org.apache.kylin.engine.mr.common.BaseCuboidBuilder.buildValueObjects(BaseCuboidBuilder.java:127)
>     at org.apache.kylin.engine.spark.SparkCubingByLayer$EncodeBaseCuboid.call(SparkCubingByLayer.java:297)
>     at org.apache.kylin.engine.spark.SparkCubingByLayer$EncodeBaseCuboid.call(SparkCubingByLayer.java:257)
>     at org.apache.spark.api.java.JavaPairRDD$$anonfun$pairFunToScalaFun$1.apply(JavaPairRDD.scala:1043)
>     at org.apache.spark.api.java.JavaPairRDD$$anonfun$pairFunToScalaFun$1.apply(JavaPairRDD.scala:1043)
> """
> Note 1: the build process succeeds when run with the Map/Reduce engine.
> Note 2: the error doesn't seem to be related to the AWS environment.
>  
> Sample of csv :
> ID;CATEGORIE;TEL;MONTANT;MAGASIN;MATRICULE;VILLE;
> 970;161;6-98-6-6-42;838.47034;Magasin_19;Client_Matricule_28;MARSEILLE;
> 971;89;62-15-2-64-86;;;Client_Matricule_1;LYON;
> 972;87;17-64-97-74-42;;;Client_Matricule_105;ORBEC;
> 973;174;79-33-90-0-55;;Magasin_7;Client_Matricule_55;AJACCIO;
> 974;172;89-95-71-6-49;141.64174;Magasin_9;Client_Matricule_105;BASTIA;
> 975;83;7-27-95-28-7;897.28204;;Client_Matricule_199;AJACCIO;
> 976;170;67-72-18-29-34;164.07967;Magasin_3;Client_Matricule_137;LILLE;
> 977;130;14-69-4-23-27;1928.9557;Magasin_1;Client_Matricule_17;NOMNOM;
> 978;43;55-91-84-98-49;891.2691;Magasin_0;Client_Matricule_22;NOMNOM;
> 979;117;98-96-0-54-39;1636.3994;Magasin_9;Client_Matricule_142;MARSEILLE;
> 980;163;37-55-76-53-38;;;Client_Matricule_64;NEWYORK;
> 981;106;32-40-6-46-15;;Magasin_2;Client_Matricule_158;NOMNOM;
> 982;56;95-60-83-89-90;;;Client_Matricule_102;NOMNOM;
> 983;168;21-56-62-0-58;;;Client_Matricule_160;NOMNOM;
> 984;154;92-67-37-94-60;;;Client_Matricule_137;PARIS;
>  
>  
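
The stack trace in the description can be reproduced in isolation: Hive serializes a SQL NULL as the literal two-character string "\N" (its default null marker), and `Double.parseDouble` rejects that string. A minimal sketch (the class name is illustrative):

```java
// Minimal reproduction of the failure mode in the report: Hive emits
// SQL NULL as the literal string "\N", which Double.parseDouble rejects
// with exactly the exception seen in the Spark executor log.
public class NullMarkerRepro {
    public static void main(String[] args) {
        String fieldValue = "\\N"; // the two characters: backslash, N
        try {
            double d = Double.parseDouble(fieldValue);
            System.out.println("parsed: " + d);
        } catch (NumberFormatException e) {
            System.out.println("NumberFormatException: " + e.getMessage());
        }
    }
}
```

This also explains why the empty MONTANT fields in the csv sample above trigger the failure: once loaded into a Hive table, those empty cells become NULLs, which the flat table step then hands to the Spark engine as "\N" strings.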



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
