spark-issues mailing list archives

From "Andre Schumacher (JIRA)" <>
Subject [jira] [Commented] (SPARK-2112) ParquetTypesConverter should not create its own conf
Date Fri, 20 Jun 2014 10:36:24 GMT


Andre Schumacher commented on SPARK-2112:

Since commit
the SparkContext's Hadoop configuration should be used when reading metadata from the file source. I wasn't yet able to test this with, say, S3 bucket names.

Are the S3 credentials copied from the SparkConf to its Hadoop configuration? If someone could confirm this works, we could close this issue.
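For context, the propagation in question is that keys prefixed with "spark.hadoop." in the SparkConf are expected to be forwarded (with the prefix stripped) into the Hadoop Configuration that the Parquet metadata reader receives. A minimal sketch of that mechanism, in Python rather than Spark's actual Scala code, with illustrative key names (the function name and the conf dicts here are hypothetical, not Spark's API):

```python
# Hypothetical sketch: forward "spark.hadoop."-prefixed entries
# (e.g. S3 credentials) from a SparkConf-style map into a
# Hadoop-Configuration-style map. Illustrative only.

def spark_conf_to_hadoop_conf(spark_conf: dict) -> dict:
    """Return a Hadoop-style conf built from SparkConf-style entries."""
    prefix = "spark.hadoop."
    hadoop_conf = {}
    for key, value in spark_conf.items():
        # Only prefixed keys are forwarded; the prefix is stripped,
        # so "spark.hadoop.fs.s3n.awsAccessKeyId" becomes
        # "fs.s3n.awsAccessKeyId" in the Hadoop configuration.
        if key.startswith(prefix):
            hadoop_conf[key[len(prefix):]] = value
    return hadoop_conf

conf = {
    "spark.app.name": "parquet-test",
    "spark.hadoop.fs.s3n.awsAccessKeyId": "AKIA...",
    "spark.hadoop.fs.s3n.awsSecretAccessKey": "secret",
}
hconf = spark_conf_to_hadoop_conf(conf)
# hconf contains only the stripped fs.s3n.* credential keys
```

If the ParquetTypesConverter builds a fresh Configuration instead of receiving the one derived this way, any credentials set only on the SparkConf are lost, which is exactly the reported bug.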

> ParquetTypesConverter should not create its own conf
> ----------------------------------------------------
>                 Key: SPARK-2112
>                 URL:
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.0.0
>            Reporter: Michael Armbrust
> [~adav]: "this actually makes it so that we can't use S3 credentials set in the SparkContext,
or add new FileSystems at runtime, for instance."

This message was sent by Atlassian JIRA
