spark-issues mailing list archives

From "Marcelo Vanzin (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SPARK-23710) Upgrade Hive to 2.3.2
Date Fri, 22 Jun 2018 15:51:00 GMT

    [ https://issues.apache.org/jira/browse/SPARK-23710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16520522#comment-16520522 ]

Marcelo Vanzin commented on SPARK-23710:
----------------------------------------

There are a few places in Spark that are affected by a Hive upgrade:
- Hive serde support
- Hive UD(*)F support
- The thrift server

The first two exist to support Hive's API in Spark so that people can keep using their serdes
and UDFs. The risk here is that we're crossing a Hive major version boundary: things in
the API may have been broken, and that would transitively affect Spark's API.

In the real world that's already something of a risk, though, because people might be running Hive
2 and thus have Hive 2 serdes in their tables, and Spark trying to read from or write to such a
table with an old version of the same serde could cause issues.
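To make the cross-version risk concrete, here is a minimal sketch (not Spark's actual loading code) of the failure mode: code compiled against one serde API can probe, via reflection, whether a class on the classpath still exposes the method signature it expects; when a Hive major version changes that signature, the lookup fails at runtime. Class and method names below are illustrative.

```scala
// Hedged sketch: reflection-based check for whether a class still exposes
// the public method signature the caller was compiled against.
object SerdeCompatCheck {
  // Returns true if `className` has a public method `method` taking
  // exactly `paramTypes`; false if the class or signature is missing.
  def hasMethod(className: String, method: String, paramTypes: Class[_]*): Boolean =
    try {
      Class.forName(className).getMethod(method, paramTypes: _*)
      true
    } catch {
      case _: ClassNotFoundException | _: NoSuchMethodException => false
    }
}
```

A serde built for Hive 2 loaded into a Spark linked against Hive 1.2 (or vice versa) fails in exactly this way: the class resolves, but the expected method signature does not.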

I think switching to the Hive mainline is a good medium or long term goal, but that probably
would require a major Spark version to be more palatable - and perhaps should be coupled with
deprecation of some features so that we can isolate ourselves from Hive more. It's a bit risky
in a minor version.

In the short term my preference would be to either fix the fork, or go with Saisai's patch
in HIVE-16391, which requires collaboration from the Hive side...


> Upgrade Hive to 2.3.2
> ---------------------
>
>                 Key: SPARK-23710
>                 URL: https://issues.apache.org/jira/browse/SPARK-23710
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 2.4.0
>            Reporter: Yuming Wang
>            Priority: Critical
>
> h1. Main changes
>  * Maven dependency changes:
>  bump {{hive.version}} from {{1.2.1.spark2}} to {{2.3.2}} and change {{hive.classifier}} to {{core}}
>  bump {{calcite.version}} from {{1.2.0-incubating}} to {{1.10.0}}
>  bump {{datanucleus-core.version}} from {{3.2.10}} to {{4.1.17}}
>  remove {{orc.classifier}}, which means ORC uses {{hive.storage.api}}; see ORC-174
>  add new dependencies {{avatica}} and {{hive.storage.api}}
>  * ORC compatibility changes:
>  OrcColumnVector.java, OrcColumnarBatchReader.java, OrcDeserializer.scala, OrcFilters.scala, OrcSerializer.scala, OrcFilterSuite.scala
>  * hive-thriftserver Java file updates:
>  update {{sql/hive-thriftserver/if/TCLIService.thrift}} to Hive 2.3.2
>  update {{sql/hive-thriftserver/src/main/java/org/apache/hive/service/*}} to Hive 2.3.2
>  * Test suites to update:
> ||Test suite||Reason||
> |StatisticsSuite|HIVE-16098|
> |SessionCatalogSuite|Similar to [VersionsSuite.scala#L427|#L427]|
> |CliSuite, HiveThriftServer2Suites, HiveSparkSubmitSuite, HiveQuerySuite, SQLQuerySuite|Update hive-hcatalog-core-0.13.1.jar to hive-hcatalog-core-2.3.2.jar|
> |SparkExecuteStatementOperationSuite|Interface changed from org.apache.hive.service.cli.Type.NULL_TYPE to org.apache.hadoop.hive.serde2.thrift.Type.NULL_TYPE|
> |ClasspathDependenciesSuite|org.apache.hive.com.esotericsoftware.kryo.Kryo changed to com.esotericsoftware.kryo.Kryo|
> |HiveMetastoreCatalogSuite|Result format changed from Seq("1.1\t1", "2.1\t2") to Seq("1.100\t1", "2.100\t2")|
> |HiveOrcFilterSuite|Result format changed|
> |HiveDDLSuite|Remove $ (this change needs to be reconsidered)|
> |HiveExternalCatalogVersionsSuite|java.lang.ClassCastException: org.datanucleus.identity.DatastoreIdImpl cannot be cast to org.datanucleus.identity.OID|
>  * Other changes:
> Disable Hive schema verification: [HiveClientImpl.scala#L251|https://github.com/wangyum/spark/blob/75e4cc9e80f85517889e87a35da117bc361f2ff3/sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala#L251] and [HiveExternalCatalog.scala#L58|https://github.com/wangyum/spark/blob/75e4cc9e80f85517889e87a35da117bc361f2ff3/sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala#L58]
> Update [IsolatedClientLoader.scala#L189-L192|https://github.com/wangyum/spark/blob/75e4cc9e80f85517889e87a35da117bc361f2ff3/sql/hive/src/main/scala/org/apache/spark/sql/hive/client/IsolatedClientLoader.scala#L189-L192]
> Because Hive 2.3.2's {{org.apache.hadoop.hive.ql.metadata.Hive}} can't connect to a Hive 1.x metastore, we should use {{HiveMetaStoreClient.getDelegationToken}} instead of {{Hive.getDelegationToken}} and update {{HiveClientImpl.toHiveTable}}.
> All changes can be found at [PR-20659|https://github.com/apache/spark/pull/20659].
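The IsolatedClientLoader update referenced in the description boils down to teaching Spark which Hive artifact version corresponds to a user-supplied metastore version string. A minimal sketch of that idea, using only version strings mentioned in this issue (the real IsolatedClientLoader resolution logic is considerably more involved, and this function is illustrative, not Spark's actual code):

```scala
// Hedged sketch: map a metastore version string to the Hive artifact
// version to load. Version strings are taken from this issue; the function
// name and mapping are illustrative assumptions.
def hiveArtifactVersion(metastoreVersion: String): String =
  metastoreVersion match {
    case "1.2" | "1.2.1" => "1.2.1.spark2" // the forked Hive 1.2 line
    case "2.3" | "2.3.2" => "2.3.2"        // mainline Hive after this upgrade
    case other =>
      throw new IllegalArgumentException(s"Unsupported Hive version: $other")
  }
```

The point of the isolation layer is that the version a user configures for the metastore need not be the version Spark itself was built against, which is exactly why the delegation-token code path above has to go through {{HiveMetaStoreClient}} rather than the version-specific {{Hive}} facade.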



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

