hadoop-common-issues mailing list archives

From "Steve Loughran (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-13336) S3A to support per-bucket configuration
Date Mon, 09 Jan 2017 18:51:58 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15812523#comment-15812523 ]

Steve Loughran commented on HADOOP-13336:
-----------------------------------------

Larry, there's a scan for a forbidden prefix (currently {{"bucket."}}) and then for the actual
unmodifiable values. I actually think I could cut this down to just the simple checks, and
not over-engineer it.

w.r.t. {{Configuration.getPassword()}}, I see the problem you are alluding to. Even though we
are remapping {{fs.s3a.bucket.*}} options to {{fs.s3a.*}}, that does nothing for the credential
providers, as they have hard-coded keys in their key:value mappings; this patch isn't changing
any of that.
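
For anyone following along, here's a sketch of the remapping being discussed; the bucket
name and the endpoint value are invented for the example:

{code:xml}
<!-- illustrative only: bucket name "frankfurt-data" and the endpoint are made up -->
<property>
  <name>fs.s3a.bucket.frankfurt-data.endpoint</name>
  <value>s3.eu-central-1.amazonaws.com</value>
</property>
<!-- when the filesystem for s3a://frankfurt-data/ is created, this value is
     copied over the base fs.s3a.endpoint; the rest of the code is untouched -->
{code}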

hmmm.

Would it be possible for us to update {{hadoop.security.credential.provider.path}} at the
same time? That is, add a special property to s3a, say {{fs.s3a.security.credential.provider.path}},
whose contents would be _prepended_ to those of the base one. You could then specify different
providers for the different buckets. By prepending the values, we can ensure that secrets
declared for a bucket take priority over any in the default provider path.
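
A sketch of what that could look like; {{fs.s3a.security.credential.provider.path}} is the
property proposed above (it doesn't exist yet), and the jceks:// paths are invented:

{code:xml}
<!-- base path: secrets shared by everything -->
<property>
  <name>hadoop.security.credential.provider.path</name>
  <value>jceks://hdfs@nn1/secrets/default.jceks</value>
</property>
<!-- per-bucket form: would be remapped to fs.s3a.security.credential.provider.path
     and prepended to the base path, so its entries take priority -->
<property>
  <name>fs.s3a.bucket.frankfurt-data.security.credential.provider.path</name>
  <value>jceks://hdfs@nn1/secrets/frankfurt.jceks</value>
</property>
{code}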

We'd need to document this, especially the fact that once there's a secret in a JCEKS file,
you must override those secrets with new files: you can't move back to a cleartext password
from a credentials file. You can't downgrade security.
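
To make that documentation point concrete: because {{Configuration.getPassword()}} checks
the credential providers before falling back to the configuration, something like this
(illustrative names) has no effect while a provider holds the key:

{code:xml}
<!-- ignored while any provider on the path has an entry for fs.s3a.secret.key;
     to rotate the secret you publish a new credential file, you don't fall
     back to cleartext -->
<property>
  <name>fs.s3a.secret.key</name>
  <value>cleartext-value-that-will-not-be-read</value>
</property>
{code}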

Would that work? If so, I can include it in this patch, as it's related to the per-bucket
config, isn't it?


> S3A to support per-bucket configuration
> ---------------------------------------
>
>                 Key: HADOOP-13336
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13336
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 2.8.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>         Attachments: HADOOP-13336-006.patch, HADOOP-13336-007.patch, HADOOP-13336-HADOOP-13345-001.patch,
HADOOP-13336-HADOOP-13345-002.patch, HADOOP-13336-HADOOP-13345-003.patch, HADOOP-13336-HADOOP-13345-004.patch,
HADOOP-13336-HADOOP-13345-005.patch, HADOOP-13336-HADOOP-13345-006.patch
>
>
> S3a now supports different regions, by way of declaring the endpoint, but you can't
do things like read in one region and write back in another (e.g. a distcp backup), because
only one region can be specified in a configuration.
> If s3a supported region declaration in the URL, e.g. s3a://b1.frankfurt, s3a://b2.seoul,
then this would be possible.
> Swift does this with a full filesystem binding/config: endpoints, username, etc., in the
XML file. Would we need to do that much? It'd be simpler initially to use a domain suffix
of the URL to set the region of a bucket, and have the AWS library sort the details out itself,
maybe with some config options for working with non-AWS infrastructure.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


