hadoop-common-issues mailing list archives

From "Steve Loughran (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-15576) S3A Multipart Uploader to work with S3Guard and encryption
Date Tue, 07 Aug 2018 00:56:00 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-15576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16570980#comment-16570980

Steve Loughran commented on HADOOP-15576:

I think this is good to go in, if you are happy with my changes.
And there's no need for protobuf; we aren't worrying about wire compat over time, are we?

But there's still some ambiguity here, something which surfaces precisely because
the spec of HDFS-13713 doesn't exist, and we are left looking at the behaviours of the two
existing implementations and guessing which are "the reference" behaviours vs "implementation
details":

* what the policy for 0-entry commits MUST be (here: fail)
* what happens if, when you make the MPU complete call, there are uploaded parts which are not listed
* what happens if a part is listed twice in the completion call
* what happens if you try to upload a part after the MPU has completed
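To make the first two completion-time questions concrete, here's a minimal sketch of the kind of precondition check a spec could pin down. This is hypothetical code, not the actual Hadoop MultipartUploader API: it only models validating the part list handed to a "complete" call (empty list fails, duplicate part numbers fail).

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch (not the real Hadoop API): models the completion-time
// precondition checks the bullet points above ask the spec to define.
public class MpuCompletionChecks {

    /**
     * Validate a part-number list passed to an MPU "complete" call.
     * @return null if the list is acceptable, otherwise a description
     *         of the violated precondition.
     */
    static String validatePartList(List<Integer> partNumbers) {
        if (partNumbers.isEmpty()) {
            // the 0-entry commit case: here, MUST fail
            return "empty part list: 0-entry commits MUST fail";
        }
        Set<Integer> seen = new HashSet<>();
        for (int p : partNumbers) {
            if (!seen.add(p)) {
                // same part number listed twice in the completion call
                return "part " + p + " listed twice in completion";
            }
        }
        return null;
    }
}
```

A contract test built from a spec like this would invoke each violation and assert the implementation rejects it, which is exactly the "break the preconditions, see what happens" loop described below.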

If we had the time, I'd say "pull that specification task into this one and define things
alongside the tests", that being how I like to do tests and reverse-engineer a spec from behaviours:
build the spec, come up with ways to break the preconditions, see what happens when you try
that in tests, fix code/refine spec.

But...we are approaching the cutoff for 3.2, and ideally I'd like this in ASAP, along with
the other 3.2 features. Getting this in gives us time to finish & review those.


I'm +1 for this patch as is. It is working for me against us-west-1 + s3guard, where it wasn't
before. (I'm not reliably testing encryption BTW, as there's no test actually verifying the
object has the encryption header.)
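For the missing encryption test: a HEAD on the uploaded object returns the server-side-encryption header, and an assertion on that would close the gap. The sketch below is hedged; it doesn't call a real S3 client, it only models the assertion on the returned headers (the `x-amz-server-side-encryption` header is what S3 sets for SSE-S3/SSE-KMS).

```java
import java.util.Map;

// Hedged sketch: a real test would issue a HEAD request on the uploaded
// object via the S3 client and inspect the response headers; here we only
// model the check itself against a header map.
public class EncryptionHeaderCheck {

    /** True iff the object's headers show server-side encryption was applied. */
    static boolean hasSseHeader(Map<String, String> objectHeaders) {
        // S3 reports SSE-S3 ("AES256") or SSE-KMS ("aws:kms") in this header
        String v = objectHeaders.get("x-amz-server-side-encryption");
        return v != null && !v.isEmpty();
    }
}
```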

If you are happy with the patch-as-modified, it's good to go. But we do need that spec still,
which I'd like before the actual apps using this stuff come together.

> S3A  Multipart Uploader to work with S3Guard and encryption
> -----------------------------------------------------------
>                 Key: HADOOP-15576
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15576
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.2
>            Reporter: Steve Loughran
>            Assignee: Ewan Higgs
>            Priority: Blocker
>         Attachments: HADOOP-15576-005.patch, HADOOP-15576-007.patch, HADOOP-15576-008.patch,
HADOOP-15576.001.patch, HADOOP-15576.002.patch, HADOOP-15576.003.patch, HADOOP-15576.004.patch
> The new Multipart Uploader API of HDFS-13186 needs to work with S3Guard, with the tests
to demonstrate this
> # move from low-level calls of S3A client to calls of WriteOperationHelper; adding any
new methods needed there.
> # Tests: the tests of HDFS-13713.
> # test execution, with -DS3Guard, -DAuth
> There isn't an S3A version of {{AbstractSystemMultipartUploaderTest}}, and even if there
was, it might not show that S3Guard was bypassed, because there are no checks that listFiles/listStatus
shows the newly committed files.
> Similarly, because MPU requests are initiated in S3AMultipartUploader, encryption settings
aren't picked up. Files being uploaded this way *are not being encrypted*.

This message was sent by Atlassian JIRA

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org
