spark-issues mailing list archives

From "Andre Schumacher (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SPARK-2112) ParquetTypesConverter should not create its own conf
Date Fri, 20 Jun 2014 10:36:24 GMT

    [ https://issues.apache.org/jira/browse/SPARK-2112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14038666#comment-14038666 ]

Andre Schumacher commented on SPARK-2112:
-----------------------------------------

Since commit
https://github.com/apache/spark/commit/f479cf3743e416ee08e62806e1b34aff5998ac22
the SparkContext's Hadoop configuration should be used when reading metadata from the file
source. I have not yet been able to test this with, say, S3 bucket names.

Are the S3 credentials copied from the SparkConf to its Hadoop configuration?  If someone
could confirm this is working, we could close this issue.
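For reference, Spark copies any SparkConf entry prefixed with "spark.hadoop." into the Hadoop Configuration it hands to file sources, which is the mechanism the question above hinges on. A minimal sketch of that propagation logic (simplified; the object and method names here are illustrative, not Spark's actual internals, and the property keys shown are the classic s3n credential keys):

```scala
// Sketch of how "spark.hadoop.*" entries in a SparkConf end up in the
// Hadoop Configuration. In real Spark this happens inside
// SparkHadoopUtil; this standalone version uses plain Maps.
object HadoopConfPropagation {
  // Keep only keys with the "spark.hadoop." prefix, stripping the prefix.
  def propagate(sparkConf: Map[String, String]): Map[String, String] =
    sparkConf.collect {
      case (key, value) if key.startsWith("spark.hadoop.") =>
        key.stripPrefix("spark.hadoop.") -> value
    }

  def main(args: Array[String]): Unit = {
    val conf = Map(
      "spark.app.name" -> "example",                          // not propagated
      "spark.hadoop.fs.s3n.awsAccessKeyId" -> "MYACCESSKEY",  // propagated
      "spark.hadoop.fs.s3n.awsSecretAccessKey" -> "MYSECRET"  // propagated
    )
    val hadoopConf = propagate(conf)
    println(hadoopConf("fs.s3n.awsAccessKeyId")) // prints "MYACCESSKEY"
  }
}
```

So if the credentials were set on the SparkConf with the "spark.hadoop." prefix, the fix in the commit above should pick them up; credentials set only via system properties or a separately constructed Configuration would be the case to verify.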

> ParquetTypesConverter should not create its own conf
> ----------------------------------------------------
>
>                 Key: SPARK-2112
>                 URL: https://issues.apache.org/jira/browse/SPARK-2112
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.0.0
>            Reporter: Michael Armbrust
>
> [~adav]: "this actually makes it so that we can't use S3 credentials set in the SparkContext,
or add new FileSystems at runtime, for instance."



--
This message was sent by Atlassian JIRA
(v6.2#6252)
