hadoop-common-issues mailing list archives

From "ASF GitHub Bot (Jira)" <j...@apache.org>
Subject [jira] [Work logged] (HADOOP-17261) s3a rename() now requires s3:deleteObjectVersion permission
Date Wed, 16 Sep 2020 13:31:00 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-17261?focusedWorklogId=485147&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-485147 ]

ASF GitHub Bot logged work on HADOOP-17261:

                Author: ASF GitHub Bot
            Created on: 16/Sep/20 13:30
            Start Date: 16/Sep/20 13:30
    Worklog Time Spent: 10m 
      Work Description: steveloughran commented on pull request #2303:
URL: https://github.com/apache/hadoop/pull/2303#issuecomment-693406274

   Tested with:
   mvit -Dparallel-tests -DtestsThreadCount=6 -Dmarkers=keep -Ds3guard -Ddynamo -Dfs.s3a.directory.marker.audit=true
   https://issues.apache.org/jira/browse/HADOOP-17263 : read() didn't get as much back as expected (unrelated)
   https://issues.apache.org/jira/browse/HADOOP-17226 : failure of ITestAssumeRole.testRestrictedCommitActions
   That -Dscale option kicks off some MR jobs which spawn extra processes beyond the Maven
JUnit runners, so the system may be overloading. I'm surprised at how it surfaces, though. I can
see how overload amplifies a race condition in a non-atomic ++ call (more chance of pre-emption,
longer delay before rescheduling), but not how it would lead to read() receiving less data than expected.
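
The non-atomic ++ race mentioned above can be shown with a small, self-contained sketch (plain Java, no Hadoop dependencies; the class and field names here are hypothetical, not taken from the Hadoop test code):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of the kind of race a non-atomic ++ invites: two threads
// each increment a shared int. The unsynchronized total can come up short
// under contention, while AtomicInteger always reaches the expected count.
public class PlusPlusRace {
    static int unsafeCount = 0;                          // target of non-atomic ++
    static final AtomicInteger safeCount = new AtomicInteger();
    static final int PER_THREAD = 1_000_000;

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < PER_THREAD; i++) {
                unsafeCount++;                           // read-modify-write: not atomic
                safeCount.incrementAndGet();             // atomic increment
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // safeCount is always 2 * PER_THREAD; unsafeCount is often less.
        System.out.println("unsafe=" + unsafeCount + " safe=" + safeCount.get());
    }
}
```

More pre-emption and longer rescheduling delays (as on an overloaded box) widen the window between the read and the write of `unsafeCount++`, which is why load makes such a race more likely to fire.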

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:

Issue Time Tracking

    Worklog Id:     (was: 485147)
    Time Spent: 50m  (was: 40m)

> s3a rename() now requires s3:deleteObjectVersion permission
> -----------------------------------------------------------
>                 Key: HADOOP-17261
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17261
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.4.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 50m
>  Remaining Estimate: 0h
> With the directory marker change (HADOOP-13230) you need the s3:deleteObjectVersion permission
> in your role, else the operation will fail in the bulk delete, *if S3Guard is in use*.
> Root cause:
> - if a fileStatus has a versionId, we pass that in to the delete KeyVersion pair
> - an unguarded listing doesn't get that versionId, so this is not an issue there
> - but if files in a directory were previously created such that S3Guard has their versionId
> in its tables, that versionId is used in the request
> - which then fails if the caller doesn't have the permission
> Although we say "you need s3:delete*", this is a regression: any IAM role without the
> permission will now see rename() fail during the delete phase.
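
A minimal sketch of the root cause described above (plain Java; `DeleteEntry` and `buildDeleteEntry` are hypothetical names, not the actual S3A code, which builds the AWS SDK's `DeleteObjectsRequest.KeyVersion` objects):

```java
import java.util.Objects;

// Sketch of the logic described in the issue: when the file status carries a
// versionId (e.g. recorded in S3Guard's table), the bulk-delete entry is a
// (key, versionId) pair, and deleting a specific object version requires the
// s3:DeleteObjectVersion permission. An unguarded listing yields no versionId,
// so the entry is key-only and plain s3:DeleteObject suffices.
public class VersionedDeleteSketch {

    /** Hypothetical stand-in for the SDK's DeleteObjectsRequest.KeyVersion. */
    static final class DeleteEntry {
        final String key;
        final String versionId;   // null => unversioned delete
        DeleteEntry(String key, String versionId) {
            this.key = Objects.requireNonNull(key);
            this.versionId = versionId;
        }
        boolean needsDeleteObjectVersionPermission() {
            return versionId != null;
        }
    }

    /** Build a bulk-delete entry from what the listing knew about the file. */
    static DeleteEntry buildDeleteEntry(String key, String versionIdFromS3Guard) {
        return new DeleteEntry(key, versionIdFromS3Guard);
    }

    public static void main(String[] args) {
        // Guarded path: S3Guard supplied a versionId, so a versioned delete is issued.
        DeleteEntry guarded = buildDeleteEntry("dir/file1", "3HL4kqtJlcpXrof3");
        // Unguarded path: no versionId known, key-only delete.
        DeleteEntry unguarded = buildDeleteEntry("dir/file2", null);
        System.out.println("guarded needs s3:DeleteObjectVersion: "
            + guarded.needsDeleteObjectVersionPermission());
        System.out.println("unguarded needs it: "
            + unguarded.needsDeleteObjectVersionPermission());
    }
}
```

This is why the failure only shows up when S3Guard is in use: only then does the delete request carry versionIds, turning each entry into a versioned delete that the restricted IAM role is not allowed to perform.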

This message was sent by Atlassian Jira

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org
