spark-reviews mailing list archives

From mallman <>
Subject [GitHub] spark pull request #15998: [SPARK-18572][SQL] Add a method `listPartitionNam...
Date Sun, 27 Nov 2016 00:12:59 GMT
Github user mallman commented on a diff in the pull request:
    --- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
    @@ -922,6 +923,29 @@ private[spark] class HiveExternalCatalog(conf: SparkConf, hadoopConf: Configuration)
         * Returns the partition names from hive metastore for a given table in a database.
    +  override def listPartitionNames(
    +      db: String,
    +      table: String,
    +      partialSpec: Option[TablePartitionSpec] = None): Seq[String] = withClient {
    +    val actualPartColNames = getTable(db, table).partitionColumnNames
    +    val clientPartitionNames =
    +      client.getPartitionNames(db, table, partialSpec.map(lowerCasePartitionSpec))
    +    if (actualPartColNames.exists(partColName => partColName != partColName.toLowerCase)) {
    +      clientPartitionNames.map { partName =>
    +        val partSpec = PartitioningUtils.parsePathFragmentAsSeq(partName)
    --- End diff ---
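    For context, the shape of the mapping this diff performs — parse each metastore partition name into (column, value) pairs, then map each (possibly lowercased) name back to the table's case-preserved column name — can be sketched with no Spark or Hive dependency. This is a minimal illustrative sketch, not Spark's actual API: `parseFragment` stands in for `PartitioningUtils.parsePathFragmentAsSeq`, and `restoreCase` for the case-restoring `map` in the diff.

    ```scala
    // Standalone sketch of the case-restoring mapping performed in the diff above.
    object RestoreCaseSketch {
      // Parse "col1=v1/col2=v2" into a sequence of (column, value) pairs.
      // Stand-in for PartitioningUtils.parsePathFragmentAsSeq (no escaping handled here).
      def parseFragment(partName: String): Seq[(String, String)] =
        partName.split("/").toSeq.map { kv =>
          val Array(k, v) = kv.split("=", 2)
          (k, v)
        }

      // Map metastore-lowercased column names back to the catalog's actual names.
      def restoreCase(partName: String, actualPartColNames: Seq[String]): String =
        parseFragment(partName).map { case (lowerName, value) =>
          val actual = actualPartColNames
            .find(_.equalsIgnoreCase(lowerName))
            .getOrElse(lowerName) // fall back if the column is unknown
          s"$actual=$value"
        }.mkString("/")

      def main(args: Array[String]): Unit = {
        // "partcol"/"other" come back from the metastore lowercased;
        // the catalog knows them as "partCol"/"Other".
        println(restoreCase("partcol=0/other=x", Seq("partCol", "Other")))
        // partCol=0/Other=x
      }
    }
    ```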
    I've run some tests to compare behavior between Hive and Spark in handling gnarly partition
column names, and I found some disparities. We've spent a considerable amount of time wrangling
with partition column name handling recently, and I'm not sure what semantics we've decided
on. To ensure the behavior I'm seeing is what we're expecting, I want to describe a scenario
I ran.
    In my test scenario, I created a table named `test` with the stock Hive 2.1.0 distribution.
(I simply downloaded it from its download page and initialized an empty Derby schema store.)
The exact DDL I used to create this table is as follows:
    ```create table test(a string) partitioned by (`P``Дr t` int);```
    When I do a `describe test` with `hive` it shows the column name as ``p`дr t``. It appears
to lowercase the P and the cyrillic Д before storing the table schema in the metastore.
I then run
    ```alter table test add partition(`P``Дr t`=0);```
    When I run `show partitions test` in `hive` it gives me ``p`дr t=0``. Additionally, when
I list the contents of the `test` table's base directory in HDFS, the partition directory
entry is
    ```/user/hive/warehouse/test/p`дr t=0```
    If I drop the table, create it with `spark-sql` using the same DDL as before and do a
`describe test`, the partition column is given as ``P`Дr t``. Spark has preserved the case
of the partition column name. If I then do
    ```alter table test add partition(`P``Дr t`=0);```
    in `spark-sql` and `show partitions test` I get ``P`Дr t=0``. When I list the directory
contents in HDFS, I get
    ```/user/hive/warehouse/test/P`Дr t=0```
    The upshot is that Hive lowercases the partition column name while Spark leaves it unaltered.
Is this correct?
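    If Hive's lowercasing is indeed the expected metastore behavior, the converse mapping is also needed when passing a user-supplied partial spec down to the client: lowercase the spec's keys (but not its values) before querying, since the metastore only knows the lowercased names. A minimal sketch under that assumption — plain Scala, standing in for the `lowerCasePartitionSpec` helper referenced in the diff:

    ```scala
    // Sketch: normalize a partial partition spec's keys to the metastore's
    // lowercased column names before handing it to the Hive client.
    object LowerCaseSpecSketch {
      type TablePartitionSpec = Map[String, String]

      // Lowercase only the keys: column names are lowercased by the metastore,
      // but partition *values* must be preserved exactly as given.
      def lowerCasePartitionSpec(spec: TablePartitionSpec): TablePartitionSpec =
        spec.map { case (k, v) => (k.toLowerCase, v) }

      def main(args: Array[String]): Unit = {
        // The gnarly column name from the scenario above: "P`Дr t" -> "p`дr t".
        println(lowerCasePartitionSpec(Map("P`Дr t" -> "0")))
      }
    }
    ```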
