hadoop-common-issues mailing list archives

From "Steve Loughran (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-13058) S3A FS fails during init against a read-only FS if multipart purge is enabled
Date Mon, 25 Apr 2016 13:27:12 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-13058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15256332#comment-15256332 ]

Steve Loughran commented on HADOOP-13058:
-----------------------------------------

{code}
, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 7.281 sec <<< FAILURE! - in org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance
testTimeToOpenAndReadWholeFileByByte(org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance)
 Time elapsed: 6.973 sec  <<< ERROR!
com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status
Code: 403; Error Code: AccessDenied; Request ID: 8ECDC7355F5EFCCC)
	at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1182)
	at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:770)
	at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
	at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:310)
	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3785)
	at com.amazonaws.services.s3.AmazonS3Client.abortMultipartUpload(AmazonS3Client.java:2664)
	at com.amazonaws.services.s3.transfer.TransferManager.abortMultipartUploads(TransferManager.java:1222)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.initMultipartUploads(S3AFileSystem.java:349)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:244)
	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2786)
	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:99)
	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2823)
	at org.apache.hadoop.fs.FileSystem$Cache.getUnique(FileSystem.java:2811)
	at org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:428)
	at org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance.openFS(TestS3AInputStreamPerformance.java:52)
{code}
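The trace shows the purge happening inside {{S3AFileSystem.initialize()}}: {{initMultipartUploads()}} calls {{TransferManager.abortMultipartUploads()}}, which a caller with read-only credentials cannot perform. Until init tolerates that, the purge can simply be switched off. A minimal workaround sketch, assuming the {{fs.s3a.multipart.purge}} configuration key and a hypothetical bucket URI:

{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class ReadOnlyS3AExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // S3AFileSystem.initialize() only purges outstanding multipart uploads
    // when this flag is set; with read-only credentials that purge fails
    // with 403 AccessDenied, as in the stack trace above.
    conf.setBoolean("fs.s3a.multipart.purge", false);
    // Hypothetical bucket URI.
    FileSystem fs = FileSystem.newInstance(URI.create("s3a://example-bucket/"), conf);
    System.out.println("Opened " + fs.getUri());
    fs.close();
  }
}
{code}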

> S3A FS fails during init against a read-only FS if multipart purge is enabled
> ------------------------------------------------------------------------------
>
>                 Key: HADOOP-13058
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13058
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 2.8.0
>            Reporter: Steve Loughran
>
> If you try to open a read-only filesystem, and the multipart upload option is set to
> purge existing uploads, then the FS fails to load with an access-denied exception.
> It should catch the exception, downgrade it to a debug-level log, and defer the
> access-rights failure until a file write operation is attempted.
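
A hedged sketch of the behaviour the description proposes, assuming the {{transfers}}, {{bucket}} and {{LOG}} fields of {{S3AFileSystem}} plus the {{fs.s3a.multipart.purge}} and {{fs.s3a.multipart.purge.age}} keys; an illustration, not the committed patch:

{code}
// Sketch only: catch access-denied during the init-time purge, log at
// debug, and let the first write operation surface any real permission
// problem. Assumes this method lives in S3AFileSystem.
private void initMultipartUploads(Configuration conf) {
  boolean purgeExistingMultipart =
      conf.getBoolean("fs.s3a.multipart.purge", false);
  long purgeExistingMultipartAge =
      conf.getLong("fs.s3a.multipart.purge.age", 86400);
  if (purgeExistingMultipart) {
    Date purgeBefore =
        new Date(new Date().getTime() - purgeExistingMultipartAge * 1000);
    try {
      transfers.abortMultipartUploads(bucket, purgeBefore);
    } catch (AmazonServiceException e) {
      if (e.getStatusCode() == 403) {
        // Read-only caller: downgrade to debug and continue; writes will
        // still be rejected on access rights when they are attempted.
        LOG.debug("Failed to purge multipart uploads against {}," +
            " FS may be read only", bucket, e);
      } else {
        throw e;
      }
    }
  }
}
{code}

Rethrowing anything other than a 403 keeps genuine service failures visible at startup.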



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
