Date: Fri, 22 Jun 2018 15:51:00 +0000 (UTC)
From: "Marcelo Vanzin (JIRA)"
To: issues@spark.apache.org
Subject: [jira] [Commented] (SPARK-23710) Upgrade Hive to 2.3.2

    [ https://issues.apache.org/jira/browse/SPARK-23710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16520522#comment-16520522 ]

Marcelo Vanzin commented on SPARK-23710:
----------------------------------------

There are a few places in Spark that are affected by a Hive upgrade:
- Hive serde support
- Hive UD(*)F support
- The thrift server

The first two are for supporting Hive's API in Spark so people can keep using their serdes and UDFs. The risk here is that we're crossing a Hive major version boundary: things in the API may have been broken, and that would transitively affect Spark's API.

In the real world that's already sort of a risk, though, because people might be running Hive 2 and thus have Hive 2 serdes in their tables, and Spark trying to read or write data to that table with an old version of the same serde could cause issues.
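To make the serde/UDF surfaces concrete, here is a minimal sketch of how users typically exercise them through Spark SQL; the serde/UDF class names and the jar path are hypothetical placeholders, not anything from this ticket. These user-supplied classes are compiled against some Hive version's API, which is why an API break across the major-version boundary ripples into Spark.

{code:scala}
// Minimal sketch of the two user-facing compatibility surfaces discussed above.
// The serde/UDF class names and jar path are hypothetical placeholders.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("hive-compat-sketch")
  .enableHiveSupport()   // Hive serde tables and Hive UDF registration go through this integration
  .getOrCreate()

// Hive serde support: the serde class is user-supplied and compiled against some Hive serde API.
spark.sql("""
  CREATE TABLE IF NOT EXISTS events (id BIGINT, payload STRING)
  ROW FORMAT SERDE 'com.example.serde.MyCustomSerDe'
  STORED AS TEXTFILE
""")

// Hive UD(*)F support: UDF/UDAF/UDTF classes are likewise compiled against Hive's UDF API.
spark.sql("""
  CREATE TEMPORARY FUNCTION my_udf AS 'com.example.udf.MyHiveUdf'
  USING JAR '/tmp/user-udfs.jar'
""")

spark.sql("SELECT my_udf(payload) FROM events").show()
{code}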
I think switching to the Hive mainline is a good medium- or long-term goal, but that probably would require a major Spark version to be more palatable - and perhaps should be coupled with deprecation of some features so that we can isolate ourselves from Hive more. It's a bit risky in a minor version.

In the short term my preference would be to either fix the fork, or go with Saisai's patch in HIVE-16391, which requires collaboration from the Hive side...


> Upgrade Hive to 2.3.2
> ---------------------
>
> Key: SPARK-23710
> URL: https://issues.apache.org/jira/browse/SPARK-23710
> Project: Spark
> Issue Type: Improvement
> Components: SQL
> Affects Versions: 2.4.0
> Reporter: Yuming Wang
> Priority: Critical
>
> h1. Main changes
> * Maven dependency changes:
> hive.version from {{1.2.1.spark2}} to {{2.3.2}}, and change {{hive.classifier}} to {{core}}
> calcite.version from {{1.2.0-incubating}} to {{1.10.0}}
> datanucleus-core.version from {{3.2.10}} to {{4.1.17}}
> remove {{orc.classifier}}, which means ORC uses {{hive.storage.api}}; see ORC-174
> add new dependencies {{avatica}} and {{hive.storage.api}}
> * ORC compatibility changes:
> OrcColumnVector.java, OrcColumnarBatchReader.java, OrcDeserializer.scala, OrcFilters.scala, OrcSerializer.scala, OrcFilterSuite.scala
> * hive-thriftserver Java file updates:
> update {{sql/hive-thriftserver/if/TCLIService.thrift}} to Hive 2.3.2
> update {{sql/hive-thriftserver/src/main/java/org/apache/hive/service/*}} to Hive 2.3.2
> * Test suites that need updating:
> ||TestSuite||Reason||
> |StatisticsSuite|HIVE-16098|
> |SessionCatalogSuite|Similar to [VersionsSuite.scala#L427|#L427]|
> |CliSuite, HiveThriftServer2Suites, HiveSparkSubmitSuite, HiveQuerySuite, SQLQuerySuite|Update hive-hcatalog-core-0.13.1.jar to hive-hcatalog-core-2.3.2.jar|
> |SparkExecuteStatementOperationSuite|Interface changed from org.apache.hive.service.cli.Type.NULL_TYPE to org.apache.hadoop.hive.serde2.thrift.Type.NULL_TYPE|
> |ClasspathDependenciesSuite|org.apache.hive.com.esotericsoftware.kryo.Kryo changed to com.esotericsoftware.kryo.Kryo|
> |HiveMetastoreCatalogSuite|Result format changed from Seq("1.1\t1", "2.1\t2") to Seq("1.100\t1", "2.100\t2")|
> |HiveOrcFilterSuite|Result format changed|
> |HiveDDLSuite|Remove $ (this change needs to be reconsidered)|
> |HiveExternalCatalogVersionsSuite|java.lang.ClassCastException: org.datanucleus.identity.DatastoreIdImpl cannot be cast to org.datanucleus.identity.OID|
> * Other changes:
> Disable Hive schema verification: [HiveClientImpl.scala#L251|https://github.com/wangyum/spark/blob/75e4cc9e80f85517889e87a35da117bc361f2ff3/sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala#L251] and [HiveExternalCatalog.scala#L58|https://github.com/wangyum/spark/blob/75e4cc9e80f85517889e87a35da117bc361f2ff3/sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala#L58]
> Update [IsolatedClientLoader.scala#L189-L192|https://github.com/wangyum/spark/blob/75e4cc9e80f85517889e87a35da117bc361f2ff3/sql/hive/src/main/scala/org/apache/spark/sql/hive/client/IsolatedClientLoader.scala#L189-L192]
> Because Hive 2.3.2's {{org.apache.hadoop.hive.ql.metadata.Hive}} can't connect to a Hive 1.x metastore, we should use {{HiveMetaStoreClient.getDelegationToken}} instead of {{Hive.getDelegationToken}} and update {{HiveClientImpl.toHiveTable}}.
> All changes can be found at [PR-20659|https://github.com/apache/spark/pull/20659].
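On the "Disable Hive schema verification" item in the quoted description: a minimal sketch of what that amounts to, assuming it means turning off {{hive.metastore.schema.verification}} on the HiveConf handed to the client. The helper below is illustrative only; the actual change lives in the HiveClientImpl.scala and HiveExternalCatalog.scala lines linked above.

{code:scala}
import org.apache.hadoop.hive.conf.HiveConf

// Illustrative only: turn off metastore schema verification so a Hive 2.3.x
// client does not reject metadata written under an older metastore schema.
// The real change is in the Spark files linked in the description above.
def withoutSchemaVerification(conf: HiveConf): HiveConf = {
  conf.setBoolVar(HiveConf.ConfVars.METASTORE_SCHEMA_VERIFICATION, false)
  conf
}
{code}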
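And on the delegation-token point in the quoted description: a minimal sketch, assuming the intent is to fetch the token directly from the metastore client API rather than through the ql-layer {{org.apache.hadoop.hive.ql.metadata.Hive}} facade. The owner/renewer arguments are placeholders and error handling is omitted; this is not the PR's actual diff.

{code:scala}
import org.apache.hadoop.hive.conf.HiveConf
import org.apache.hadoop.hive.metastore.HiveMetaStoreClient

// Sketch: obtain a metastore delegation token via HiveMetaStoreClient rather
// than org.apache.hadoop.hive.ql.metadata.Hive, since the latter (at 2.3.x)
// cannot talk to a Hive 1.x metastore. Owner/renewer are placeholders.
def fetchDelegationToken(conf: HiveConf, owner: String, renewer: String): String = {
  val client = new HiveMetaStoreClient(conf)
  try {
    client.getDelegationToken(owner, renewer)
  } finally {
    client.close()
  }
}
{code}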