hadoop-common-issues mailing list archives

From "Steve Loughran (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-11670) Fix IAM instance profile auth for s3a
Date Fri, 06 Mar 2015 20:54:39 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-11670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14350877#comment-14350877 ]

Steve Loughran commented on HADOOP-11670:
-----------------------------------------

All the S3a tests are now failing for me:
{code}
testOutputStreamClosedTwice(org.apache.hadoop.fs.s3a.TestS3AFileSystemContract)  Time elapsed: 0.01 sec  <<< ERROR!
com.amazonaws.AmazonClientException: Unable to load AWS credentials from any provider in the chain
	at com.amazonaws.auth.AWSCredentialsProviderChain.getCredentials(AWSCredentialsProviderChain.java:117)
	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3521)
	at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1031)
	at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:994)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:297)
	at org.apache.hadoop.fs.s3a.S3ATestUtils.createTestFileSystem(S3ATestUtils.java:51)
	at org.apache.hadoop.fs.s3a.TestS3AFileSystemContract.setUp(TestS3AFileSystemContract.java:46)
{code}
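
For context, the exception above is thrown by the AWS SDK's AWSCredentialsProviderChain, which asks each configured provider for credentials in turn and only fails once every provider has given up. A minimal standalone sketch of that lookup, using AWS SDK for Java 1.x class names (the ChainDemo class itself is hypothetical):
{code}
import com.amazonaws.AmazonClientException;
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSCredentialsProviderChain;
import com.amazonaws.auth.InstanceProfileCredentialsProvider;

// Hypothetical demo of the chain lookup that fails in the tests above.
public class ChainDemo {
  public static void main(String[] args) {
    // A chain with only the instance-profile provider: outside EC2 the
    // instance metadata service is unreachable, so the provider refuses.
    AWSCredentialsProviderChain chain = new AWSCredentialsProviderChain(
        new InstanceProfileCredentialsProvider());
    try {
      AWSCredentials creds = chain.getCredentials();
      System.out.println("Got credentials for " + creds.getAWSAccessKeyId());
    } catch (AmazonClientException e) {
      // Same "Unable to load AWS credentials from any provider in the
      // chain" exception as in the stack trace above.
      System.out.println("Chain exhausted: " + e.getMessage());
    }
  }
}
{code}
Run outside EC2, or with the fs.s3a keys unset as in the failing tests, the chain has nothing to fall back on and throws the same AmazonClientException seen in the stack trace.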

> Fix IAM instance profile auth for s3a
> -------------------------------------
>
>                 Key: HADOOP-11670
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11670
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 2.7.0
>            Reporter: Adam Budde
>            Assignee: Adam Budde
>             Fix For: 2.7.0
>
>         Attachments: HADOOP-11670-001.patch, HADOOP-11670.002.patch
>
>
> One big advantage provided by the s3a filesystem is the ability to use an IAM instance
> profile in order to authenticate when attempting to access an S3 bucket from an EC2 instance.
> This eliminates the need to deploy AWS account credentials to the instance or to provide them
> to Hadoop via the fs.s3a.awsAccessKeyId and fs.s3a.awsSecretAccessKey params.
> The patch submitted to resolve HADOOP-10714 breaks this behavior by using the S3Credentials
> class to read the values of these two params. The change in question is presented below:
> S3AFileSystem.java, lines 161-170:
> {code}
>     // Try to get our credentials or just connect anonymously
>     S3Credentials s3Credentials = new S3Credentials();
>     s3Credentials.initialize(name, conf);
>     AWSCredentialsProviderChain credentials = new AWSCredentialsProviderChain(
>         new BasicAWSCredentialsProvider(s3Credentials.getAccessKey(),
>                                         s3Credentials.getSecretAccessKey()),
>         new InstanceProfileCredentialsProvider(),
>         new AnonymousAWSCredentialsProvider()
>     );
> {code}
> As you can see, the getAccessKey() and getSecretAccessKey() methods from the S3Credentials
> class are now used to provide constructor arguments to BasicAWSCredentialsProvider. These
> methods raise an exception if the fs.s3a.awsAccessKeyId or fs.s3a.awsSecretAccessKey param,
> respectively, is missing. If a user relies on an IAM instance profile to authenticate to an
> S3 bucket and therefore doesn't supply values for these params, they will receive an
> exception and won't be able to access the bucket.
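
A sketch of one way the fallback could be restored (illustrative only, not necessarily the approach taken in the attached patches): read the two params directly and defer the failure to getCredentials() time, so the AWSCredentialsProviderChain can move on to InstanceProfileCredentialsProvider when the keys are absent instead of aborting during filesystem initialization. The class name below mirrors the BasicAWSCredentialsProvider referenced above; the body is a sketch.
{code}
import com.amazonaws.AmazonClientException;
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;

// Illustrative provider: signals "no static keys configured" by throwing
// from getCredentials() rather than from the constructor, which lets the
// surrounding AWSCredentialsProviderChain try the next provider.
class BasicAWSCredentialsProvider implements AWSCredentialsProvider {
  private final String accessKey;
  private final String secretKey;

  BasicAWSCredentialsProvider(String accessKey, String secretKey) {
    this.accessKey = accessKey;   // e.g. conf.get("fs.s3a.awsAccessKeyId")
    this.secretKey = secretKey;   // e.g. conf.get("fs.s3a.awsSecretAccessKey")
  }

  public AWSCredentials getCredentials() {
    if (accessKey != null && secretKey != null) {
      return new BasicAWSCredentials(accessKey, secretKey);
    }
    // The chain catches this and falls through to the next provider,
    // e.g. InstanceProfileCredentialsProvider on EC2.
    throw new AmazonClientException("Access key or secret key is null");
  }

  public void refresh() {}
}
{code}
With a provider shaped like this, S3AFileSystem.initialize() could pass the raw (possibly null) config values straight through instead of going via S3Credentials, whose getters are what raise the exception today.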



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
