Date: Tue, 10 Feb 2015 00:37:34 +0000 (UTC)
From: "Victoria Markman (JIRA)"
To: issues@drill.apache.org
Subject: [jira] [Updated] (DRILL-2196) Assert when . notation is used in union all query

    [ https://issues.apache.org/jira/browse/DRILL-2196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Victoria Markman updated DRILL-2196:
------------------------------------
    Attachment: t3.csv

> Assert when . notation is used in union all query
> --------------------------------------------------
>
>                 Key: DRILL-2196
>                 URL: https://issues.apache.org/jira/browse/DRILL-2196
>             Project: Apache Drill
>          Issue Type: Bug
>            Reporter: Victoria Markman
>         Attachments: t1.csv, t2.csv, t3.csv
>
>
> It seems to happen when we have a join in one of the legs of the union all.
> Both legs have an inner join:
> {code}
> 0: jdbc:drill:schema=dfs> select t1.*, t2.* from t1, t2 where t1.c1 = t2.c2 union all select t1.*, t3.* from t1, t3 where t1.c1 = t3.c3;
> +------------+------------+------------+------------+------------+------------+
> |     a1     |     b1     |     c1     |     a2     |     b2     |     c2     |
> +------------+------------+------------+------------+------------+------------+
> | 1          | aaaaa      | 2015-01-01 | 1          | aaaaa      | 2015-01-01 |
> | 2          | bbbbb      | 2015-01-02 | 2          | bbbbb      | 2015-01-02 |
> | 2          | bbbbb      | 2015-01-02 | 2          | bbbbb      | 2015-01-02 |
> | 2          | bbbbb      | 2015-01-02 | 2          | bbbbb      | 2015-01-02 |
> | 3          | ccccc      | 2015-01-03 | 3          | ccccc      | 2015-01-03 |
> | 4          | null       | 2015-01-04 | 4          | ddddd      | 2015-01-04 |
> | 5          | eeeee      | 2015-01-05 | 5          | eeeee      | 2015-01-05 |
> | 6          | fffff      | 2015-01-06 | 6          | fffff      | 2015-01-06 |
> | 7          | ggggg      | 2015-01-07 | 7          | ggggg      | 2015-01-07 |
> | 7          | ggggg      | 2015-01-07 | 7          | ggggg      | 2015-01-07 |
> | null       | hhhhh      | 2015-01-08 | 8          | hhhhh      | 2015-01-08 |
> java.lang.IndexOutOfBoundsException: index: 0, length: 1 (expected: range(0, 0))
>         at io.netty.buffer.DrillBuf.checkIndexD(DrillBuf.java:156)
>         at io.netty.buffer.DrillBuf.chk(DrillBuf.java:178)
>         at io.netty.buffer.DrillBuf.getByte(DrillBuf.java:673)
>         at org.apache.drill.exec.vector.UInt1Vector$Accessor.get(UInt1Vector.java:309)
>         at org.apache.drill.exec.vector.NullableIntVector$Accessor.isSet(NullableIntVector.java:342)
>         at org.apache.drill.exec.vector.NullableIntVector$Accessor.isNull(NullableIntVector.java:338)
>         at org.apache.drill.exec.vector.NullableIntVector$Accessor.getObject(NullableIntVector.java:359)
>         at org.apache.drill.exec.vector.accessor.NullableIntAccessor.getObject(NullableIntAccessor.java:98)
>         at org.apache.drill.jdbc.AvaticaDrillSqlAccessor.getObject(AvaticaDrillSqlAccessor.java:136)
>         at net.hydromatic.avatica.AvaticaResultSet.getObject(AvaticaResultSet.java:351)
>         at sqlline.SqlLine$Rows$Row.<init>(SqlLine.java:2388)
>         at sqlline.SqlLine$IncrementalRows.hasNext(SqlLine.java:2504)
>         at sqlline.SqlLine$TableOutputFormat.print(SqlLine.java:2148)
>         at sqlline.SqlLine.print(SqlLine.java:1809)
>         at sqlline.SqlLine$Commands.execute(SqlLine.java:3766)
>         at sqlline.SqlLine$Commands.sql(SqlLine.java:3663)
>         at sqlline.SqlLine.dispatch(SqlLine.java:889)
>         at sqlline.SqlLine.begin(SqlLine.java:763)
>         at sqlline.SqlLine.start(SqlLine.java:498)
>         at sqlline.SqlLine.main(SqlLine.java:460)
> {code}
> One leg has an inner join:
> {code}
> 0: jdbc:drill:schema=dfs> select t1.*, t2.* from t1, t2 where t1.c1 = t2.c2 union all select t3.*, t3.* from t3;
> +------------+------------+------------+------------+------------+------------+
> |     a1     |     b1     |     c1     |     a2     |     b2     |     c2     |
> +------------+------------+------------+------------+------------+------------+
> | 1          | aaaaa      | 2015-01-01 | 1          | aaaaa      | 2015-01-01 |
> | 2          | bbbbb      | 2015-01-02 | 2          | bbbbb      | 2015-01-02 |
> | 2          | bbbbb      | 2015-01-02 | 2          | bbbbb      | 2015-01-02 |
> | 2          | bbbbb      | 2015-01-02 | 2          | bbbbb      | 2015-01-02 |
> | 3          | ccccc      | 2015-01-03 | 3          | ccccc      | 2015-01-03 |
> | 4          | null       | 2015-01-04 | 4          | ddddd      | 2015-01-04 |
> | 5          | eeeee      | 2015-01-05 | 5          | eeeee      | 2015-01-05 |
> | 6          | fffff      | 2015-01-06 | 6          | fffff      | 2015-01-06 |
> | 7          | ggggg      | 2015-01-07 | 7          | ggggg      | 2015-01-07 |
> | 7          | ggggg      | 2015-01-07 | 7          | ggggg      | 2015-01-07 |
> | null       | hhhhh      | 2015-01-08 | 8          | hhhhh      | 2015-01-08 |
> java.lang.IndexOutOfBoundsException: index: 0, length: 1 (expected: range(0, 0))
>         at io.netty.buffer.DrillBuf.checkIndexD(DrillBuf.java:156)
>         at io.netty.buffer.DrillBuf.chk(DrillBuf.java:178)
>         at io.netty.buffer.DrillBuf.getByte(DrillBuf.java:673)
>         at org.apache.drill.exec.vector.UInt1Vector$Accessor.get(UInt1Vector.java:309)
>         at org.apache.drill.exec.vector.NullableIntVector$Accessor.isSet(NullableIntVector.java:342)
>         at org.apache.drill.exec.vector.NullableIntVector$Accessor.isNull(NullableIntVector.java:338)
>         at org.apache.drill.exec.vector.NullableIntVector$Accessor.getObject(NullableIntVector.java:359)
>         at org.apache.drill.exec.vector.accessor.NullableIntAccessor.getObject(NullableIntAccessor.java:98)
>         at org.apache.drill.jdbc.AvaticaDrillSqlAccessor.getObject(AvaticaDrillSqlAccessor.java:136)
>         at net.hydromatic.avatica.AvaticaResultSet.getObject(AvaticaResultSet.java:351)
>         at sqlline.SqlLine$Rows$Row.<init>(SqlLine.java:2388)
>         at sqlline.SqlLine$IncrementalRows.hasNext(SqlLine.java:2504)
>         at sqlline.SqlLine$TableOutputFormat.print(SqlLine.java:2148)
>         at sqlline.SqlLine.print(SqlLine.java:1809)
>         at sqlline.SqlLine$Commands.execute(SqlLine.java:3766)
>         at sqlline.SqlLine$Commands.sql(SqlLine.java:3663)
>         at sqlline.SqlLine.dispatch(SqlLine.java:889)
>         at sqlline.SqlLine.begin(SqlLine.java:763)
>         at sqlline.SqlLine.start(SqlLine.java:498)
>         at sqlline.SqlLine.main(SqlLine.java:460)
> {code}
> Query plan:
> {code}
> 00-01      ProjectAllowDup(*=[$0], *0=[$1])
> 00-02        UnionAll(all=[true])
> 00-04          Project(T29¦¦*=[$0], T30¦¦*=[$2])
> 00-06            HashJoin(condition=[=($1, $3)], joinType=[inner])
> 00-09              Project(T29¦¦*=[$0], c1=[$1])
> 00-11                Scan(groupscan=[ParquetGroupScan [entries=[ReadEntryWithPath [path=maprfs:/aggregation/sanity/t1]], selectionRoot=/aggregation/sanity/t1, numFiles=1, columns=[`*`]]])
> 00-08              Project(T30¦¦*=[$0], c2=[$1])
> 00-10                Scan(groupscan=[ParquetGroupScan [entries=[ReadEntryWithPath [path=maprfs:/aggregation/sanity/t2]], selectionRoot=/aggregation/sanity/t2, numFiles=1, columns=[`*`]]])
> 00-03          Project(T31¦¦*=[$0], T31¦¦*0=[$0])
> 00-05            Project(T31¦¦*=[$0])
> 00-07              Scan(groupscan=[ParquetGroupScan [entries=[ReadEntryWithPath [path=maprfs:/aggregation/sanity/t3]], selectionRoot=/aggregation/sanity/t3, numFiles=1, columns=[`*`]]])
> {code}
> This is how the tables were created:
> {code}
> create table t1(a1, b1, c1) as
> select
>     case when columns[0] = '' then cast(null as integer)     else cast(columns[0] as integer)     end,
>     case when columns[1] = '' then cast(null as varchar(10)) else cast(columns[1] as varchar(10)) end,
>     case when columns[2] = '' then cast(null as date)        else cast(columns[2] as date)        end
> from `t1.csv`;
>
> create table t2(a2, b2, c2) as
> select
>     case when columns[0] = '' then cast(null as integer)     else cast(columns[0] as integer)     end,
>     case when columns[1] = '' then cast(null as varchar(10)) else cast(columns[1] as varchar(10)) end,
>     case when columns[2] = '' then cast(null as date)        else cast(columns[2] as date)        end
> from `t2.csv`;
>
> create table t3(a3, b3, c3) as
> select
>     case when columns[0] = '' then cast(null as integer)     else cast(columns[0] as integer)     end,
>     case when columns[1] = '' then cast(null as varchar(10)) else cast(columns[1] as varchar(10)) end,
>     case when columns[2] = '' then cast(null as date)        else cast(columns[2] as date)        end
> from `t3.csv`;
> {code}
> The tables' data types are compatible, but their column names are different.
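> A sketch of a possible workaround, not verified against this build and assuming only the column sets from the create table statements above (t1(a1, b1, c1), t2(a2, b2, c2), t3(a3, b3, c3)): list the columns explicitly instead of using the "." star notation that triggers the assert:
> {code}
> -- Untested workaround sketch: spell out the columns rather than t1.* / t2.* / t3.*
> select t1.a1, t1.b1, t1.c1, t2.a2, t2.b2, t2.c2
> from t1, t2
> where t1.c1 = t2.c2
> union all
> select t1.a1, t1.b1, t1.c1, t3.a3, t3.b3, t3.c3
> from t1, t3
> where t1.c1 = t3.c3;
> {code}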