hadoop-common-issues mailing list archives

From "Farshid (JIRA)" <j...@apache.org>
Subject [jira] [Created] (HADOOP-15248) 400 Bad Request while trying to access S3 through Spark
Date Wed, 21 Feb 2018 03:45:00 GMT
Farshid created HADOOP-15248:
--------------------------------

             Summary: 400 Bad Request while trying to access S3 through Spark
                 Key: HADOOP-15248
                 URL: https://issues.apache.org/jira/browse/HADOOP-15248
             Project: Hadoop Common
          Issue Type: Bug
          Components: fs/s3
    Affects Versions: 2.7.3
         Environment: macOS 10.13.3 (17D47)

Spark 2.2.1

Hadoop 2.7.3
            Reporter: Farshid


 

I'm trying to read a file through {{s3a}} from a bucket in us-east-2 (Ohio) and I'm getting
a 400 Bad Request response:

{{com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS Service: Amazon
S3, AWS Request ID: [removed], AWS Error Code: null, AWS Error Message: Bad Request, S3 Extended
Request ID: [removed] }}

Since the same code works with another bucket in Sydney, it looks like a signing-version issue
(Ohio supports only Signature Version 4, while Sydney supports both 2 and 4). So I tried setting
the endpoint by adding this to {{spark-submit}}, as suggested in other posts:

{{--conf "spark.hadoop.fs.s3a.endpoint=s3.us-east-2.amazonaws.com" }}
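A minimal sketch of such an invocation, for context (the bucket and application jar names below are placeholders, not the values from this report):

```shell
# Sketch: pin s3a to the region-specific us-east-2 endpoint via spark-submit.
# "my-app.jar" and "my-bucket" are hypothetical placeholder names.
spark-submit \
  --conf "spark.hadoop.fs.s3a.endpoint=s3.us-east-2.amazonaws.com" \
  my-app.jar s3a://my-bucket/path/to/file
```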

But that didn't make any difference. I also tried adding the same setting to a properties file
and passing it with {{--properties-file [file_path]}}:

{{spark.hadoop.fs.s3a.endpoint s3.us-east-2.amazonaws.com }}

No difference: I still get the same error for Ohio (and it no longer works with Sydney, for
obvious reasons).
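One workaround often suggested for Hadoop 2.7.x against V4-only regions is to also enable V4 signing in the bundled AWS SDK through a JVM system property on both the driver and the executors, alongside the endpoint override. A hedged sketch, not verified against this exact setup (the jar name is a placeholder):

```shell
# Untested sketch: endpoint override plus the AWS SDK's enableV4 system
# property, set on both the driver and the executor JVMs.
spark-submit \
  --conf "spark.hadoop.fs.s3a.endpoint=s3.us-east-2.amazonaws.com" \
  --conf "spark.driver.extraJavaOptions=-Dcom.amazonaws.services.s3.enableV4=true" \
  --conf "spark.executor.extraJavaOptions=-Dcom.amazonaws.services.s3.enableV4=true" \
  my-app.jar
```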



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


