Date: Thu, 21 May 2015 23:13:18 +0000 (UTC)
From: "Cheng Lian (JIRA)"
To: issues@spark.apache.org
Subject: [jira] [Resolved] (SPARK-7737) parquet schema discovery should not fail because of empty _temporary dir
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

     [ https://issues.apache.org/jira/browse/SPARK-7737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Cheng Lian resolved SPARK-7737.
-------------------------------
       Resolution: Fixed
    Fix Version/s: 1.4.0

Issue resolved by pull request 6329
[https://github.com/apache/spark/pull/6329]

> parquet schema discovery should not fail because of empty _temporary dir
> -------------------------------------------------------------------------
>
>                 Key: SPARK-7737
>                 URL: https://issues.apache.org/jira/browse/SPARK-7737
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.4.0
>            Reporter: Yin Huai
>            Assignee: Yin Huai
>            Priority: Blocker
>             Fix For: 1.4.0
>
>
> Parquet schema discovery will fail when the dir is like
> {code}
> /partitions5k/i=2/_SUCCESS
> /partitions5k/i=2/_temporary/
> /partitions5k/i=2/part-r-00001.gz.parquet
> /partitions5k/i=2/part-r-00002.gz.parquet
> /partitions5k/i=2/part-r-00003.gz.parquet
> /partitions5k/i=2/part-r-00004.gz.parquet
> {code}
> {code}
> java.lang.AssertionError: assertion failed: Conflicting partition column names detected:
> 
> 	at scala.Predef$.assert(Predef.scala:179)
> 	at org.apache.spark.sql.sources.PartitioningUtils$.resolvePartitions(PartitioningUtils.scala:159)
> 	at org.apache.spark.sql.sources.PartitioningUtils$.parsePartitions(PartitioningUtils.scala:71)
> 	at org.apache.spark.sql.sources.HadoopFsRelation.org$apache$spark$sql$sources$HadoopFsRelation$$discoverPartitions(interfaces.scala:468)
> 	at org.apache.spark.sql.sources.HadoopFsRelation$$anonfun$partitionSpec$3.apply(interfaces.scala:424)
> 	at org.apache.spark.sql.sources.HadoopFsRelation$$anonfun$partitionSpec$3.apply(interfaces.scala:423)
> 	at scala.Option.getOrElse(Option.scala:120)
> 	at org.apache.spark.sql.sources.HadoopFsRelation.partitionSpec(interfaces.scala:422)
> 	at org.apache.spark.sql.sources.HadoopFsRelation.schema$lzycompute(interfaces.scala:482)
> 	at org.apache.spark.sql.sources.HadoopFsRelation.schema(interfaces.scala:480)
> 	at org.apache.spark.sql.sources.LogicalRelation.<init>(LogicalRelation.scala:30)
> 	at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:134)
> 	at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:118)
> 	at org.apache.spark.sql.SQLContext.load(SQLContext.scala:1135)
> {code}
> 1.3 works fine.
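For illustration only (this is not the code from pull request 6329, and the names below are hypothetical): the failure happens because the empty {{_temporary}} directory and the {{_SUCCESS}} marker are treated as leaf entries with no {{i=...}} partition component, which then conflicts with the real data files. A minimal sketch, assuming Hadoop's {{Path}} API, of the general idea of skipping underscore- or dot-prefixed entries before partition values are parsed:

{code}
import org.apache.hadoop.fs.Path

// Hypothetical helper: treat any entry whose name starts with "_" or "."
// (e.g. _temporary, _SUCCESS, .crc files) as metadata rather than data.
def isDataPath(path: Path): Boolean = {
  val name = path.getName
  !name.startsWith("_") && !name.startsWith(".")
}

// With the layout from this issue, only the part-r-*.gz.parquet files survive
// the filter, so the empty _temporary dir can no longer contribute an empty,
// conflicting partition spec.
val leaves = Seq(
  new Path("/partitions5k/i=2/_SUCCESS"),
  new Path("/partitions5k/i=2/_temporary"),
  new Path("/partitions5k/i=2/part-r-00001.gz.parquet")
)
val dataPaths = leaves.filter(isDataPath)
{code}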
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org