hadoop-common-issues mailing list archives

From "Larry McCay (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (HADOOP-14507) extend per-bucket secret key config with explicit getPassword() on fs.s3a.$bucket.secret.key
Date Thu, 15 Feb 2018 21:44:00 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-14507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16366289#comment-16366289 ]

Larry McCay edited comment on HADOOP-14507 at 2/15/18 9:43 PM:
---------------------------------------------------------------

{quote}Note that the value of {{fs.s3a.server-side-encryption.key}} can be a simple
string to a KMS key, which is the best way to manage keys. The only time you'd have a number
in there is for SSE-C encryption, where every client must supply a key. That's not an easy
one to work with...my stance is "just use AWS KMS". If there's an API explicitly for managing
the real encryption keys, that's only relevant for SSE-C and future client-side encryption.
In which case, we could treat those keys differently.
{quote}
Ahh, I think that is where my confusion was.

If I understand correctly that {{fs.s3a.server-side-encryption.key}} is a name referring into another
KMS for the actual key, then I am fine with it.

Any future management of actual encryption keys should definitely consider the Key Provider
API at that time.
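For illustration, the arrangement described in the quote could look something like the following Hadoop configuration sketch; the KMS key ARN is a placeholder, not a real key:

```xml
<!-- Sketch only: SSE-KMS selected globally, with the key named by a
     reference into AWS KMS. The ARN value is a made-up placeholder. -->
<property>
  <name>fs.s3a.server-side-encryption-algorithm</name>
  <value>SSE-KMS</value>
</property>
<property>
  <name>fs.s3a.server-side-encryption.key</name>
  <value>arn:aws:kms:us-west-2:111122223333:key/example-key-id</value>
</property>
```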

LGTM.

+1

 

 


> extend per-bucket secret key config with explicit getPassword() on fs.s3a.$bucket.secret.key
> --------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-14507
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14507
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 2.8.1
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Critical
>         Attachments: HADOOP-14507-001.patch, HADOOP-14507-002.patch, HADOOP-14507-003.patch,
HADOOP-14507-004.patch, HADOOP-14507-005.patch, HADOOP-14507-006.patch, HADOOP-14507-006.patch,
HADOOP-14507-007.patch
>
>
> Per-bucket jceks support turns out to be complex, as you have to manage multiple jceks
files & configure the client to ask for the right one. This is because we're calling {{Configuration.getPassword("fs.s3a.secret.key")}}.

> If, before that, we do a check for the explicit id, key, and session key in the properties
{{fs.s3a.$bucket.secret}} (&c), we could have a single JCEKS file with all the secrets
for different buckets. You would only need to explicitly point the base config at the secrets
file, and the right credentials would be picked up, if set.
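The per-bucket-then-global fallback described above could be sketched roughly as below. This is not the actual S3A implementation: a plain {{Map}} stands in for Hadoop's {{Configuration}}, and the method and bucket names are invented for illustration.

```java
import java.util.HashMap;
import java.util.Map;

/** Sketch of per-bucket secret resolution: prefer fs.s3a.$bucket.SUFFIX,
 *  fall back to the global fs.s3a.SUFFIX. Names are illustrative only. */
public class PerBucketSecretSketch {

    static String lookupSecret(Map<String, String> conf, String bucket, String suffix) {
        // Per-bucket option wins if present...
        String perBucket = conf.get("fs.s3a." + bucket + "." + suffix);
        if (perBucket != null) {
            return perBucket;
        }
        // ...otherwise fall back to the base (global) option.
        return conf.get("fs.s3a." + suffix);
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("fs.s3a.secret.key", "global-secret");
        conf.put("fs.s3a.example-bucket.secret.key", "bucket-secret");

        System.out.println(lookupSecret(conf, "example-bucket", "secret.key")); // bucket-secret
        System.out.println(lookupSecret(conf, "other-bucket", "secret.key"));   // global-secret
    }
}
```

In the real patch the same idea applies on top of {{Configuration.getPassword()}}, so the per-bucket entries can live alongside the global ones in a single JCEKS credential file.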



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org

