flink-user mailing list archives

From Robert Metzger <rmetz...@apache.org>
Subject Re: S3 Input/Output with temporary credentials (IAM Roles)
Date Sat, 12 Dec 2015 20:00:07 GMT
Hi Vladimir,

Flink uses Hadoop's S3 file system implementation, and it seems that this
feature is not supported there:
https://issues.apache.org/jira/browse/HADOOP-9680
This issue contains some more information:
https://issues.apache.org/jira/browse/HADOOP-9384
It seems that the s3a implementation is the one where the feature would have
to be added.
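For what it's worth, with the current s3n implementation the static keys can
only be supplied through the Hadoop configuration, e.g. in core-site.xml
(property names as in Hadoop's s3n documentation; the values here are
placeholders):

```xml
<configuration>
  <!-- Static s3n credentials; placeholders, replace with your own. -->
  <property>
    <name>fs.s3n.awsAccessKeyId</name>
    <value>YOUR_ACCESS_KEY_ID</value>
  </property>
  <property>
    <name>fs.s3n.awsSecretAccessKey</name>
    <value>YOUR_SECRET_ACCESS_KEY</value>
  </property>
</configuration>
```

Note that there is no property for a session token, which is the gap the JIRA
issues above describe for temporary credentials.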

And realistically, I don't see Hadoop fixing this in the foreseeable future
:)

I see the following options:
- We could try adding the feature to Hadoop's s3a implementation. It'll
probably take a few months until the fix is reviewed, merged, and released
(but you could extract the relevant code into your own project and run it
from there in the meantime).
- You could implement an S3 file system for Flink yourself with the required
features (that is not as hard as it sounds).
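If you go the second route, the session-expiry part Vladimir asks about can be
sketched independently of Hadoop: keep the temporary credentials behind a small
provider that re-fetches them shortly before they expire (the fetcher would
typically call AWS STS AssumeRole). All class and method names below are
hypothetical illustrations, not Flink or Hadoop APIs:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.function.Supplier;

// Hypothetical value object for temporary credentials (illustrative only).
final class TemporaryCredentials {
    final String accessKeyId;
    final String secretAccessKey;
    final String sessionToken;
    final Instant expiration;

    TemporaryCredentials(String accessKeyId, String secretAccessKey,
                         String sessionToken, Instant expiration) {
        this.accessKeyId = accessKeyId;
        this.secretAccessKey = secretAccessKey;
        this.sessionToken = sessionToken;
        this.expiration = expiration;
    }
}

// Re-fetches credentials once they are within a safety margin of expiry.
// The fetcher is a plain Supplier here so the refresh logic stays
// self-contained; in practice it would wrap an STS AssumeRole call.
final class RefreshingCredentialsProvider {
    private final Supplier<TemporaryCredentials> fetcher;
    private final Duration margin;
    private TemporaryCredentials current;

    RefreshingCredentialsProvider(Supplier<TemporaryCredentials> fetcher,
                                  Duration margin) {
        this.fetcher = fetcher;
        this.margin = margin;
    }

    synchronized TemporaryCredentials get() {
        if (current == null
                || Instant.now().plus(margin).isAfter(current.expiration)) {
            current = fetcher.get();  // refresh before the session expires
        }
        return current;
    }
}

public class RefreshDemo {
    public static void main(String[] args) {
        // Fake fetcher standing in for an STS AssumeRole call.
        Supplier<TemporaryCredentials> fake = () -> new TemporaryCredentials(
                "exampleKeyId", "exampleSecret", "exampleToken",
                Instant.now().plus(Duration.ofHours(1)));
        RefreshingCredentialsProvider provider =
                new RefreshingCredentialsProvider(fake, Duration.ofMinutes(5));
        System.out.println(provider.get().expiration.isAfter(Instant.now()));
    }
}
```

Every file system call would go through provider.get() so that a long-running
job transparently picks up fresh credentials.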

Sorry that I cannot give you a better solution for this.

Regards,
Robert


On Fri, Dec 11, 2015 at 3:42 PM, Vladimir Stoyak <vstoyak@yahoo.com> wrote:

> Our setup involves AWS IAM roles: starting from a permanent access_key and
> access_secret, we need to assume a specific role (i.e., obtain temporary
> credentials to use AWS resources).
>
> I was wondering what the best way of handling this would be, i.e., how to
> set fs.s3n.awsAccessKeyId and fs.s3n.awsSecretAccessKey programmatically,
> and also how to handle expired sessions.
>
> Thanks,
> Vladimir
>
