From: "Dilly King (Jira)"
To: issues@spark.apache.org
Date: Wed, 11 Dec 2019 03:02:00 +0000 (UTC)
Subject: [jira] [Created] (SPARK-30212) Could not use COUNT(DISTINCT) window function in SparkSQL

Dilly King created SPARK-30212:
----------------------------------

Summary: Could not use COUNT(DISTINCT) window function in SparkSQL
Key: SPARK-30212
URL: https://issues.apache.org/jira/browse/SPARK-30212
Project: Spark
Issue Type: Bug
Components: SQL
Affects Versions: 2.4.4
Environment: Spark 2.4.4, Scala 2.11.12, Hive 2.3.6
Reporter: Dilly King

Suppose we have a typical table in Hive like the one below:

{code:sql}
CREATE TABLE DEMO_COUNT_DISTINCT (
  demo_date string,
  demo_id string
);
{code}

{noformat}
+--------------------------------+------------------------------+
| demo_count_distinct.demo_date  | demo_count_distinct.demo_id  |
+--------------------------------+------------------------------+
| 20180301                       | 101                          |
| 20180301                       | 102                          |
| 20180301                       | 103                          |
| 20180401                       | 201                          |
| 20180401                       | 202                          |
+--------------------------------+------------------------------+
{noformat}

Now I want to count the number of distinct DEMO_DATE values while preserving every column's data in each row, so I use the COUNT(DISTINCT) window function as below. In Hive Beeline it works:

{code:sql}
SELECT T.*,
       COUNT(DISTINCT T.DEMO_DATE) OVER (PARTITION BY NULL) UNIQ_DATES
FROM DEMO_COUNT_DISTINCT T;
{code}

{noformat}
+--------------+------------+-------------+
| t.demo_date  | t.demo_id  | uniq_dates  |
+--------------+------------+-------------+
| 20180401     | 202        | 2           |
| 20180401     | 201        | 2           |
| 20180301     | 103        | 2           |
| 20180301     | 102        | 2           |
| 20180301     | 101        | 2           |
+--------------+------------+-------------+
{noformat}

But when I ran the same SQL in SparkSQL, it threw an exception.
{code:sql}
spark.sql("""
  SELECT T.*,
         COUNT(DISTINCT T.DEMO_DATE) OVER (PARTITION BY NULL) UNIQ_DATES
  FROM DEMO_COUNT_DISTINCT T
""").show
{code}

{noformat}
org.apache.spark.sql.AnalysisException: Distinct window functions are not supported: count(distinct DEMO_DATE#1) windowspecdefinition(null, specifiedwindowframe(RowFrame, unboundedpreceding$(), unboundedfollowing$()));;
Project [demo_date#1, demo_id#2, UNIQ_DATES#0L]
+- Project [demo_date#1, demo_id#2, UNIQ_DATES#0L, UNIQ_DATES#0L]
   +- Window [count(distinct DEMO_DATE#1) windowspecdefinition(null, specifiedwindowframe(RowFrame, unboundedpreceding$(), unboundedfollowing$())) AS UNIQ_DATES#0L], [null]
      +- Project [demo_date#1, demo_id#2]
         +- SubqueryAlias `T`
            +- SubqueryAlias `default`.`demo_count_distinct`
               +- HiveTableRelation `default`.`demo_count_distinct`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, [demo_date#1, demo_id#2]
{noformat}

Then I tried the countDistinct function, but it also raised an exception:

{code:sql}
spark.sql("""
  SELECT T.*,
         countDistinct(T.DEMO_DATE) OVER (PARTITION BY NULL) UNIQ_DATES
  FROM DEMO_COUNT_DISTINCT T
""").show
{code}

{noformat}
org.apache.spark.sql.AnalysisException: Undefined function: 'countDistinct'. This function is neither a registered temporary function nor a permanent function registered in the database 'default'.; line 2 pos 12
  at org.apache.spark.sql.catalyst.analysis.Analyzer$LookupFunctions$$anonfun$apply$15$$anonfun$applyOrElse$49.apply(Analyzer.scala:1279)
  at org.apache.spark.sql.catalyst.analysis.Analyzer$LookupFunctions$$anonfun$apply$15$$anonfun$applyOrElse$49.apply(Analyzer.scala:1279)
  at org.apache.spark.sql.catalyst.analysis.package$.withPosition(package.scala:53)
  ......
{noformat}

-- This message was sent by Atlassian Jira (v8.3.4#803005)
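[Editor's note] A workaround commonly suggested for this Spark limitation, sketched here but not verified against 2.4.4 specifically: since {{collect_set}} deduplicates values within the window, wrapping it in {{size}} emulates a distinct count over a window:

{code:sql}
spark.sql("""
  SELECT T.*,
         SIZE(COLLECT_SET(T.DEMO_DATE) OVER (PARTITION BY NULL)) UNIQ_DATES
  FROM DEMO_COUNT_DISTINCT T
""").show
{code}

Note that {{countDistinct}} is a function in the Scala/Python DataFrame API ({{org.apache.spark.sql.functions.countDistinct}}), not a SQL built-in, which is why the second query above fails with "Undefined function" rather than the distinct-window error.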