Date: Fri, 31 Mar 2017 14:48:41 +0000 (UTC)
From: "Aihua Xu (JIRA)"
To: issues@hive.apache.org
Subject: [jira] [Commented] (HIVE-16291) Hive fails when unions a parquet table with itself

[ https://issues.apache.org/jira/browse/HIVE-16291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15951046#comment-15951046 ]

Aihua Xu commented on HIVE-16291:
---------------------------------

The patch looks good to me. +1.

> Hive fails when unions a parquet table with itself
> --------------------------------------------------
>
>                 Key: HIVE-16291
>                 URL: https://issues.apache.org/jira/browse/HIVE-16291
>             Project: Hive
>          Issue Type: Bug
>          Components: Hive
>            Reporter: Yibing Shi
>            Assignee: Yibing Shi
>        Attachments: HIVE-16291.1.patch
>
>
> Reproduce commands:
> {code:sql}
> create table tst_unin (col1 int) partitioned by (p_tdate int) stored as parquet;
> insert into tst_unin partition (p_tdate=201603) values (20160312), (20160310);
> insert into tst_unin partition (p_tdate=201604) values (20160412), (20160410);
> select count(*) from (select tst_unin.p_tdate from tst_unin where tst_unin.col1=20160302 union all select tst_unin.p_tdate from tst_unin) t1;
> {code}
> The table is stored in Parquet format, which is a columnar file format.
> Hive pushes column projection down to the table scan operators so that only the needed columns are read from the Parquet file. This is done by adding the needed column IDs to the job configuration under the property "hive.io.file.readcolumn.ids".
> In the above case, the query unions the results of two subqueries that select from the same table. The first subquery doesn't need any column from the Parquet file, while the second subquery needs the column "col1". Hive has a bug here: it ends up setting "hive.io.file.readcolumn.ids" to a value like "0,,0", which the method ColumnProjectionUtils.getReadColumnIDs cannot parse.

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
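The failure mode described above can be sketched outside Hive. The class and method names below are illustrative only (this is not the actual ColumnProjectionUtils source): a parser that splits the comma-separated column ID string and converts each token to an integer fails on "0,,0", because the empty token between the two commas cannot be parsed as a number.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of parsing "hive.io.file.readcolumn.ids" values.
// Names are illustrative; this is not the real Hive implementation.
public class ReadColumnIds {

    // Naive parse: split on commas and parse each token as an integer.
    // An empty token (the "" between two commas in "0,,0") makes
    // Integer.parseInt throw NumberFormatException.
    static List<Integer> parseNaive(String ids) {
        List<Integer> result = new ArrayList<>();
        for (String token : ids.split(",")) {
            result.add(Integer.parseInt(token)); // fails on ""
        }
        return result;
    }

    // Defensive variant: skip empty tokens, so "0,,0" parses as [0, 0].
    static List<Integer> parseSkippingEmpty(String ids) {
        List<Integer> result = new ArrayList<>();
        for (String token : ids.split(",")) {
            if (!token.isEmpty()) {
                result.add(Integer.parseInt(token));
            }
        }
        return result;
    }

    public static void main(String[] args) {
        String ids = "0,,0"; // the malformed value from the bug report
        try {
            parseNaive(ids);
            System.out.println("naive parse succeeded");
        } catch (NumberFormatException e) {
            System.out.println("naive parse failed on empty token");
        }
        System.out.println("defensive parse: " + parseSkippingEmpty(ids));
    }
}
```

The empty entry arises because one UNION branch contributes no columns, so an empty column list is concatenated between two non-empty ones; either preventing the empty entry when the property is written, or tolerating it when it is read, avoids the failure.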