kylin-issues mailing list archives

From "Hubert STEFANI (JIRA)" <j...@apache.org>
Subject [jira] [Created] (KYLIN-3667) ArrayIndexOutOfBoundsException in NDCuboidBuilder
Date Mon, 05 Nov 2018 16:46:00 GMT
Hubert STEFANI created KYLIN-3667:
-------------------------------------

             Summary: ArrayIndexOutOfBoundsException in NDCuboidBuilder
                 Key: KYLIN-3667
                 URL: https://issues.apache.org/jira/browse/KYLIN-3667
             Project: Kylin
          Issue Type: Bug
          Components: Spark Engine
    Affects Versions: v2.5.0
         Environment: AWS EMR 
            Reporter: Hubert STEFANI


The errors previously reported in

https://issues.apache.org/jira/browse/KYLIN-3115

and

https://issues.apache.org/jira/browse/KYLIN-1768

still occur in the SparkCubingByLayer step.

We encounter the following error:

java.lang.ArrayIndexOutOfBoundsException
 at java.lang.System.arraycopy(Native Method)
 at org.apache.kylin.engine.mr.common.NDCuboidBuilder.buildKeyInternal(NDCuboidBuilder.java:106)
 at org.apache.kylin.engine.mr.common.NDCuboidBuilder.buildKey2(NDCuboidBuilder.java:87)
 at org.apache.kylin.engine.spark.SparkCubingByLayer$CuboidFlatMap.call(SparkCubingByLayer.java:425)
 at org.apache.kylin.engine.spark.SparkCubingByLayer$CuboidFlatMap.call(SparkCubingByLayer.java:370)
 at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$3$1.apply(JavaRDDLike.scala:143)
 at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$3$1.apply(JavaRDDLike.scala:143)
 at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
 at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
 at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:191)
 at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)
 at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
 at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
 at org.apache.spark.scheduler.Task.run(Task.scala:99)
 at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:325)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
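For context, the failing frame is a plain `System.arraycopy` call inside `NDCuboidBuilder.buildKeyInternal`, which copies a dimension's bytes from the parent cuboid's rowkey into a child rowkey buffer. `System.arraycopy` throws `ArrayIndexOutOfBoundsException` whenever the destination offset plus the copy length exceeds the destination array's size. A minimal sketch of that failure mode, with entirely hypothetical buffer sizes and offsets (not taken from Kylin's actual rowkey layout):

```java
// Sketch of the ArrayIndexOutOfBoundsException seen in buildKeyInternal:
// System.arraycopy fails when destPos + length exceeds the destination buffer.
// All sizes below are illustrative, not Kylin's real rowkey layout.
public class ArrayCopyRepro {
    public static void main(String[] args) {
        byte[] parentKey = new byte[16]; // source: parent cuboid rowkey (hypothetical size)
        byte[] childKey = new byte[8];   // destination buffer allocated too small
        int destOffset = 4;              // bytes already written (e.g. header/cuboid id)
        int dimLength = 8;               // length of the dimension value being copied

        try {
            // destOffset + dimLength = 12 > childKey.length = 8 -> AIOOBE
            System.arraycopy(parentKey, 0, childKey, destOffset, dimLength);
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("ArrayIndexOutOfBoundsException: destination buffer too small");
        }
    }
}
```

If the fix from KYLIN-3115/KYLIN-1768 covered only the MapReduce path, the Spark path (`SparkCubingByLayer$CuboidFlatMap`) could still size the buffer incorrectly and hit the same bounds check.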

 

Do we have to (painfully) change the dimension sizes, or can this be fixed through a patch?

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
