spark-issues mailing list archives

From "Marcelo Vanzin (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (SPARK-24271) sc.hadoopConfigurations can not be overwritten in the same spark context
Date Mon, 14 May 2018 17:25:00 GMT

    [ https://issues.apache.org/jira/browse/SPARK-24271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16474495#comment-16474495 ]

Marcelo Vanzin edited comment on SPARK-24271 at 5/14/18 5:24 PM:
-----------------------------------------------------------------

[FileSystem.get()|https://hadoop.apache.org/docs/stable2/api/org/apache/hadoop/fs/FileSystem.html#get(java.net.URI,%20org.apache.hadoop.conf.Configuration)]
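The javadoc linked above points at the likely cause: FileSystem.get() serves instances from a JVM-wide cache keyed by the URI scheme and authority (and the current user), not by the contents of the Configuration passed in. A sketch of why the second read keeps using the first credentials (assumes a live spark-shell with the S3A connector on the classpath; the bucket name is hypothetical):

```scala
import java.net.URI
import org.apache.hadoop.fs.FileSystem

// First access: an S3AFileSystem is created from the credentials
// currently in sc.hadoopConfiguration and placed in FileSystem's cache.
val fs1 = FileSystem.get(new URI("s3a://some-bucket/"), sc.hadoopConfiguration)

// Changing the configuration afterwards does not invalidate that cache entry...
sc.hadoopConfiguration.set("fs.s3a.access.key", "differentAK")

// ...so any later access to the same scheme/authority returns the same
// cached instance, still holding the credentials it was created with.
val fs2 = FileSystem.get(new URI("s3a://some-bucket/"), sc.hadoopConfiguration)
// fs1 and fs2 are the same object
```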


was (Author: vanzin):
[[https://hadoop.apache.org/docs/stable2/api/org/apache/hadoop/fs/FileSystem.html#get(java.net.URI,%20org.apache.hadoop.conf.Configuration])|[https://hadoop.apache.org/docs/stable2/api/org/apache/hadoop/fs/FileSystem.html#get(java.net.URI,%20org.apache.hadoop.conf.Configuration])]

> sc.hadoopConfigurations can not be overwritten in the same spark context
> ------------------------------------------------------------------------
>
>                 Key: SPARK-24271
>                 URL: https://issues.apache.org/jira/browse/SPARK-24271
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Shell
>    Affects Versions: 2.3.0
>            Reporter: Jami Malikzade
>            Priority: Major
>
> If, for example, we pass the following configuration to the Spark context:
> sc.hadoopConfiguration.set("fs.s3a.access.key", "correctAK")
> sc.hadoopConfiguration.set("fs.s3a.secret.key", "correctSK")
> sc.hadoopConfiguration.set("fs.s3a.endpoint", "objectstorage:8773")
> sc.hadoopConfiguration.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
> sc.hadoopConfiguration.set("fs.s3a.connection.ssl.enabled", "false")
> we can then read from the bucket, which is the expected behavior.
> However, if in the same SparkContext we change the credentials to wrong ones and try to read from the bucket again, the read still succeeds:
> sc.hadoopConfiguration.set("fs.s3a.access.key", "wrongAK")
> sc.hadoopConfiguration.set("fs.s3a.secret.key", "wrongSK")
> Conversely, if the context started with wrong credentials, changing them to the correct ones does not make reads work.
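The symptom is consistent with Hadoop's FileSystem cache reusing the first S3A client created for the bucket rather than the configuration being lost. Two possible workarounds in the same spark-shell session, sketched here rather than offered as an official fix (`fs.s3a.impl.disable.cache` is the standard `fs.<scheme>.impl.disable.cache` toggle):

```scala
// Option 1: disable the FileSystem cache for the s3a scheme, so every
// lookup builds a fresh client from the current sc.hadoopConfiguration.
sc.hadoopConfiguration.set("fs.s3a.impl.disable.cache", "true")

// Option 2: close all cached FileSystem instances so the next access
// re-creates them with the updated credentials. Note this closes every
// cached filesystem in the JVM, which can disrupt in-flight work.
org.apache.hadoop.fs.FileSystem.closeAll()
sc.hadoopConfiguration.set("fs.s3a.access.key", "correctAK")
sc.hadoopConfiguration.set("fs.s3a.secret.key", "correctSK")
```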



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org

