Date: Thu, 24 Aug 2017 06:56:00 +0000 (UTC)
From: "liyunzhang_intel (JIRA)"
To: issues@hive.apache.org
Subject: [jira] [Commented] (HIVE-16823) "ArrayIndexOutOfBoundsException" in spark_vectorized_dynamic_partition_pruning.q

    [ https://issues.apache.org/jira/browse/HIVE-16823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16139655#comment-16139655 ]

liyunzhang_intel commented on HIVE-16823:
-----------------------------------------

[~lirui]: Although ConstantPropagate influences the logical plan, Hive on Tez does not throw the exception:
{code}
set hive.optimize.ppd=true;
set hive.ppd.remove.duplicatefilters=true;
set hive.tez.dynamic.partition.pruning=true;
set hive.optimize.metadataonly=false;
set hive.optimize.index.filter=true;
set hive.vectorized.execution.enabled=true;
set hive.strict.checks.cartesian.product=false;
set hive.cbo.enable=false;
set hive.user.install.directory=file:///tmp;
set fs.default.name=file:///;
set fs.defaultFS=file:///;
set tez.staging-dir=/tmp;
set tez.ignore.lib.uris=true;
set tez.runtime.optimize.local.fetch=true;
set tez.local.mode=true;
set hive.explain.user=false;

select count(*) from srcpart join (select ds as ds, ds as `date` from srcpart group by ds) s on (srcpart.ds = s.ds) where s.`date` = '2008-04-08';
{code}
Here is the explain output (it seems the key of the GroupByOperator is not right; it has been folded to the constant '2008-04-08'):
{code}
        Reducer 2
            Execution mode: vectorized
            Reduce Operator Tree:
              Group By Operator
                keys: '2008-04-08' (type: string)
                mode: mergepartial
                outputColumnNames: _col0
                Statistics: Num rows: 1 Data size: 11624 Basic stats: COMPLETE Column stats: NONE
                Select Operator
                  Statistics: Num rows: 1 Data size: 11624 Basic stats: COMPLETE Column stats: NONE
                  Map Join Operator
{code}
I need more time to investigate why Tez is not affected when CBO is disabled, but I guess this is a separate problem. Any suggestions?
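My current reading of the mismatch, as a minimal hand-written sketch (the class and method names below are hypothetical stand-ins, not Hive source): the reduce side receives batches with two BYTES key columns (ds and `date`, both '2008-04-08' according to the log), while after constant folding the GroupByOperator's output schema has a single key column (_col0). If the key-copy loop is driven by the input column count, index 1 on the output batch is out of bounds, which would match the "ArrayIndexOutOfBoundsException: 1" in the stack trace below:
{code}
// Hypothetical sketch of the suspected failure mode; plain arrays stand in
// for Hive's column vectors, and GroupKeyMismatchSketch/copyGroupKeys are
// made-up names for illustration only.
public class GroupKeyMismatchSketch {

  // Mimics copying group-key columns from an input batch into an output
  // batch, the way a reduce-side merge-partial group-by would.
  static void copyGroupKeys(String[][] inputCols, String[][] outputCols) {
    // The loop is driven by the input batch, which still carries two key
    // columns (ds, `date`) from the map side ...
    for (int i = 0; i < inputCols.length; i++) {
      // ... but the output batch was sized from the post-ConstantPropagate
      // key list, which has only one column (_col0), so index 1 is out of
      // bounds: ArrayIndexOutOfBoundsException for index 1 when i == 1.
      String[] out = outputCols[i];
      System.arraycopy(inputCols[i], 0, out, 0, inputCols[i].length);
    }
  }

  public static void main(String[] args) {
    String[][] input = { {"2008-04-08"}, {"2008-04-08"} }; // 2 BYTES key columns
    String[][] output = { new String[1] };                 // 1 key column after folding
    copyGroupKeys(input, output); // throws ArrayIndexOutOfBoundsException: 1
  }
}
{code}
If this reading is right, the interesting question is why the Tez plan ends up with batch and output schema in agreement while the Spark plan does not.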

> "ArrayIndexOutOfBoundsException" in spark_vectorized_dynamic_partition_pruning.q
> --------------------------------------------------------------------------------
>
>                 Key: HIVE-16823
>                 URL: https://issues.apache.org/jira/browse/HIVE-16823
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Jianguo Tian
>            Assignee: liyunzhang_intel
>         Attachments: explain.spark, explain.tez, HIVE-16823.1.patch, HIVE-16823.patch
>
>
> spark_vectorized_dynamic_partition_pruning.q
> {code}
> set hive.optimize.ppd=true;
> set hive.ppd.remove.duplicatefilters=true;
> set hive.spark.dynamic.partition.pruning=true;
> set hive.optimize.metadataonly=false;
> set hive.optimize.index.filter=true;
> set hive.vectorized.execution.enabled=true;
> set hive.strict.checks.cartesian.product=false;
> -- parent is reduce tasks
> select count(*) from srcpart join (select ds as ds, ds as `date` from srcpart group by ds) s on (srcpart.ds = s.ds) where s.`date` = '2008-04-08';
> {code}
> The exceptions are as follows:
> {code}
> 2017-06-05T09:20:31,468 ERROR [Executor task launch worker-0] spark.SparkReduceRecordHandler: Fatal error: org.apache.hadoop.hive.ql.metadata.HiveException: Error while processing vector batch (tag=0) Column vector types: 0:BYTES, 1:BYTES
> ["2008-04-08", "2008-04-08"]
> org.apache.hadoop.hive.ql.metadata.HiveException: Error while processing vector batch (tag=0) Column vector types: 0:BYTES, 1:BYTES
> ["2008-04-08", "2008-04-08"]
> 	at org.apache.hadoop.hive.ql.exec.spark.SparkReduceRecordHandler.processVectors(SparkReduceRecordHandler.java:413) ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> 	at org.apache.hadoop.hive.ql.exec.spark.SparkReduceRecordHandler.processRow(SparkReduceRecordHandler.java:301) ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> 	at org.apache.hadoop.hive.ql.exec.spark.HiveReduceFunctionResultList.processNextRecord(HiveReduceFunctionResultList.java:54) ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> 	at org.apache.hadoop.hive.ql.exec.spark.HiveReduceFunctionResultList.processNextRecord(HiveReduceFunctionResultList.java:28) ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> 	at org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList.hasNext(HiveBaseFunctionResultList.java:85) ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> 	at scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:42) ~[scala-library-2.11.8.jar:?]
> 	at scala.collection.Iterator$class.foreach(Iterator.scala:893) ~[scala-library-2.11.8.jar:?]
> 	at scala.collection.AbstractIterator.foreach(Iterator.scala:1336) ~[scala-library-2.11.8.jar:?]
> 	at org.apache.spark.rdd.AsyncRDDActions$$anonfun$foreachAsync$1$$anonfun$apply$12.apply(AsyncRDDActions.scala:127) ~[spark-core_2.11-2.0.0.jar:2.0.0]
> 	at org.apache.spark.rdd.AsyncRDDActions$$anonfun$foreachAsync$1$$anonfun$apply$12.apply(AsyncRDDActions.scala:127) ~[spark-core_2.11-2.0.0.jar:2.0.0]
> 	at org.apache.spark.SparkContext$$anonfun$33.apply(SparkContext.scala:1974) ~[spark-core_2.11-2.0.0.jar:2.0.0]
> 	at org.apache.spark.SparkContext$$anonfun$33.apply(SparkContext.scala:1974) ~[spark-core_2.11-2.0.0.jar:2.0.0]
> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70) ~[spark-core_2.11-2.0.0.jar:2.0.0]
> 	at org.apache.spark.scheduler.Task.run(Task.scala:85) ~[spark-core_2.11-2.0.0.jar:2.0.0]
> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274) ~[spark-core_2.11-2.0.0.jar:2.0.0]
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_112]
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_112]
> 	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
> 	at org.apache.hadoop.hive.ql.exec.vector.VectorGroupKeyHelper.copyGroupKey(VectorGroupKeyHelper.java:107) ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> 	at org.apache.hadoop.hive.ql.exec.vector.VectorGroupByOperator$ProcessingModeReduceMergePartial.doProcessBatch(VectorGroupByOperator.java:832) ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> 	at org.apache.hadoop.hive.ql.exec.vector.VectorGroupByOperator$ProcessingModeBase.processBatch(VectorGroupByOperator.java:179) ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> 	at org.apache.hadoop.hive.ql.exec.vector.VectorGroupByOperator.process(VectorGroupByOperator.java:1035) ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> 	at org.apache.hadoop.hive.ql.exec.spark.SparkReduceRecordHandler.processVectors(SparkReduceRecordHandler.java:400) ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> 	... 17 more
> 2017-06-05T09:20:31,472 ERROR [Executor task launch worker-0] executor.Executor: Exception in task 2.0 in stage 1.0 (TID 8)
> java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Error while processing vector batch (tag=0) Column vector types: 0:BYTES, 1:BYTES
> ["2008-04-08", "2008-04-08"]
> 	at org.apache.hadoop.hive.ql.exec.spark.SparkReduceRecordHandler.processRow(SparkReduceRecordHandler.java:315) ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> 	at org.apache.hadoop.hive.ql.exec.spark.HiveReduceFunctionResultList.processNextRecord(HiveReduceFunctionResultList.java:54) ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> 	at org.apache.hadoop.hive.ql.exec.spark.HiveReduceFunctionResultList.processNextRecord(HiveReduceFunctionResultList.java:28) ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> 	at org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList.hasNext(HiveBaseFunctionResultList.java:85) ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> 	at scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:42) ~[scala-library-2.11.8.jar:?]
> 	at scala.collection.Iterator$class.foreach(Iterator.scala:893) ~[scala-library-2.11.8.jar:?]
> 	at scala.collection.AbstractIterator.foreach(Iterator.scala:1336) ~[scala-library-2.11.8.jar:?]
> 	at org.apache.spark.rdd.AsyncRDDActions$$anonfun$foreachAsync$1$$anonfun$apply$12.apply(AsyncRDDActions.scala:127) ~[spark-core_2.11-2.0.0.jar:2.0.0]
> 	at org.apache.spark.rdd.AsyncRDDActions$$anonfun$foreachAsync$1$$anonfun$apply$12.apply(AsyncRDDActions.scala:127) ~[spark-core_2.11-2.0.0.jar:2.0.0]
> 	at org.apache.spark.SparkContext$$anonfun$33.apply(SparkContext.scala:1974) ~[spark-core_2.11-2.0.0.jar:2.0.0]
> 	at org.apache.spark.SparkContext$$anonfun$33.apply(SparkContext.scala:1974) ~[spark-core_2.11-2.0.0.jar:2.0.0]
> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70) ~[spark-core_2.11-2.0.0.jar:2.0.0]
> 	at org.apache.spark.scheduler.Task.run(Task.scala:85) ~[spark-core_2.11-2.0.0.jar:2.0.0]
> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274) ~[spark-core_2.11-2.0.0.jar:2.0.0]
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_112]
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_112]
> 	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Error while processing vector batch (tag=0) Column vector types: 0:BYTES, 1:BYTES
> ["2008-04-08", "2008-04-08"]
> 	at org.apache.hadoop.hive.ql.exec.spark.SparkReduceRecordHandler.processVectors(SparkReduceRecordHandler.java:413) ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> 	at org.apache.hadoop.hive.ql.exec.spark.SparkReduceRecordHandler.processRow(SparkReduceRecordHandler.java:301) ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> 	... 16 more
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
> 	at org.apache.hadoop.hive.ql.exec.vector.VectorGroupKeyHelper.copyGroupKey(VectorGroupKeyHelper.java:107) ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> 	at org.apache.hadoop.hive.ql.exec.vector.VectorGroupByOperator$ProcessingModeReduceMergePartial.doProcessBatch(VectorGroupByOperator.java:832) ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> 	at org.apache.hadoop.hive.ql.exec.vector.VectorGroupByOperator$ProcessingModeBase.processBatch(VectorGroupByOperator.java:179) ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> 	at org.apache.hadoop.hive.ql.exec.vector.VectorGroupByOperator.process(VectorGroupByOperator.java:1035) ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> 	at org.apache.hadoop.hive.ql.exec.spark.SparkReduceRecordHandler.processVectors(SparkReduceRecordHandler.java:400) ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> 	at org.apache.hadoop.hive.ql.exec.spark.SparkReduceRecordHandler.processRow(SparkReduceRecordHandler.java:301) ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> 	... 16 more
> 2017-06-05T09:20:31,488 DEBUG [dispatcher-event-loop-2] scheduler.TaskSchedulerImpl: parentName: , name: TaskSet_1, runningTasks: 0
> 2017-06-05T09:20:31,493 WARN  [task-result-getter-0] scheduler.TaskSetManager: Lost task 2.0 in stage 1.0 (TID 8, localhost): java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Error while processing vector batch (tag=0) Column vector types: 0:BYTES, 1:BYTES
> ["2008-04-08", "2008-04-08"]
> 	at org.apache.hadoop.hive.ql.exec.spark.SparkReduceRecordHandler.processRow(SparkReduceRecordHandler.java:315)
> 	at org.apache.hadoop.hive.ql.exec.spark.HiveReduceFunctionResultList.processNextRecord(HiveReduceFunctionResultList.java:54)
> 	at org.apache.hadoop.hive.ql.exec.spark.HiveReduceFunctionResultList.processNextRecord(HiveReduceFunctionResultList.java:28)
> 	at org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList.hasNext(HiveBaseFunctionResultList.java:85)
> 	at scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:42)
> 	at scala.collection.Iterator$class.foreach(Iterator.scala:893)
> 	at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
> 	at org.apache.spark.rdd.AsyncRDDActions$$anonfun$foreachAsync$1$$anonfun$apply$12.apply(AsyncRDDActions.scala:127)
> 	at org.apache.spark.rdd.AsyncRDDActions$$anonfun$foreachAsync$1$$anonfun$apply$12.apply(AsyncRDDActions.scala:127)
> 	at org.apache.spark.SparkContext$$anonfun$33.apply(SparkContext.scala:1974)
> 	at org.apache.spark.SparkContext$$anonfun$33.apply(SparkContext.scala:1974)
> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
> 	at org.apache.spark.scheduler.Task.run(Task.scala:85)
> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 	at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Error while processing vector batch (tag=0) Column vector types: 0:BYTES, 1:BYTES
> ["2008-04-08", "2008-04-08"]
> 	at org.apache.hadoop.hive.ql.exec.spark.SparkReduceRecordHandler.processVectors(SparkReduceRecordHandler.java:413)
> 	at org.apache.hadoop.hive.ql.exec.spark.SparkReduceRecordHandler.processRow(SparkReduceRecordHandler.java:301)
> 	... 16 more
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
> 	at org.apache.hadoop.hive.ql.exec.vector.VectorGroupKeyHelper.copyGroupKey(VectorGroupKeyHelper.java:107)
> 	at org.apache.hadoop.hive.ql.exec.vector.VectorGroupByOperator$ProcessingModeReduceMergePartial.doProcessBatch(VectorGroupByOperator.java:832)
> 	at org.apache.hadoop.hive.ql.exec.vector.VectorGroupByOperator$ProcessingModeBase.processBatch(VectorGroupByOperator.java:179)
> 	at org.apache.hadoop.hive.ql.exec.vector.VectorGroupByOperator.process(VectorGroupByOperator.java:1035)
> 	at org.apache.hadoop.hive.ql.exec.spark.SparkReduceRecordHandler.processVectors(SparkReduceRecordHandler.java:400)
> 	... 17 more
> 2017-06-05T09:20:31,495 ERROR [task-result-getter-0] scheduler.TaskSetManager: Task 2 in stage 1.0 failed 1 times; aborting job
> {code}
> This exception happens in this line of VectorGroupKeyHelper.java:
> {code}
> BytesColumnVector outputColumnVector = (BytesColumnVector) outputBatch.cols[columnIndex];
> {code}
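To make the invariant behind that quoted line explicit, here is a hand-written sketch; it is not the fix in the attached patch, and CopyGroupKeyGuardSketch/keyColumn are hypothetical names. VectorizedRowBatch and BytesColumnVector are the real Hive classes, but the setup below is an artificial stand-alone reproduction of the index mismatch, assuming one output key column against a copy loop that still asks for column 1:
{code}
import org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector;
import org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch;

// Hypothetical guard around the access pattern from the quoted line. It
// replaces the bare ArrayIndexOutOfBoundsException with a message that
// names the mismatch between the requested key column and the batch width.
public class CopyGroupKeyGuardSketch {

  static BytesColumnVector keyColumn(VectorizedRowBatch outputBatch, int columnIndex) {
    if (columnIndex >= outputBatch.cols.length) {
      throw new IllegalStateException(
          "Group-by key column " + columnIndex + " does not exist in the output batch ("
          + outputBatch.cols.length + " columns); the key count disagrees with the"
          + " output batch width.");
    }
    return (BytesColumnVector) outputBatch.cols[columnIndex];
  }

  public static void main(String[] args) {
    // One output column (_col0, the folded constant key), while the copy
    // loop still asks for key column 1, mirroring the AIOOBE in the log.
    VectorizedRowBatch outputBatch = new VectorizedRowBatch(1);
    outputBatch.cols[0] = new BytesColumnVector();
    keyColumn(outputBatch, 1); // throws IllegalStateException instead of AIOOBE
  }
}
{code}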