hadoop-common-issues mailing list archives

From "Steve Loughran (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-11572) s3a delete() operation fails during a concurrent delete of child entries
Date Wed, 22 Feb 2017 13:33:44 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-11572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15878208#comment-15878208
] 

Steve Loughran commented on HADOOP-11572:
-----------------------------------------

I now propose scanning the failed objects to see whether they still exist:

# if the HEAD check fails: ignore, as the object has already been deleted.
# if any object still exists, the rejection is more serious: it could be a permissions problem or something similar. Fail.

Make no attempt to retry; simply decide whether to ignore or reject the failure.
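A minimal sketch of that triage logic, not the actual patch: the {{existsCheck}} predicate and class names below are hypothetical stand-ins for a HEAD request against S3. Any failed key that still exists means the delete was genuinely rejected, so the operation must fail; keys whose HEAD check fails are treated as already deleted and ignored.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Hedged sketch: after a bulk delete reports per-key failures, probe
// each failed key with a HEAD-style existence check and decide whether
// the failure can be ignored or must be raised.
public class MultiDeleteTriage {

    /**
     * Returns the subset of failed keys that still exist.
     * An empty result means every failure was a concurrent delete
     * (object already gone) and can be ignored; a non-empty result
     * means the delete must be reported as a failure.
     */
    public static List<String> keysStillPresent(List<String> failedKeys,
                                                Predicate<String> existsCheck) {
        List<String> stillPresent = new ArrayList<>();
        for (String key : failedKeys) {
            boolean exists;
            try {
                exists = existsCheck.test(key);   // a HEAD request in real code
            } catch (RuntimeException e) {
                // The HEAD check itself failed: per the proposal, ignore.
                continue;
            }
            if (exists) {
                stillPresent.add(key);            // serious: fail the delete
            }
        }
        return stillPresent;
    }
}
```

Note there is deliberately no retry path: the method only classifies each failure as ignorable or fatal.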

> s3a delete() operation fails during a concurrent delete of child entries
> ------------------------------------------------------------------------
>
>                 Key: HADOOP-11572
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11572
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 2.6.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>         Attachments: HADOOP-11572-001.patch
>
>
> Reviewing the code, s3a has the problem raised in HADOOP-6688: deletion of a child entry
> during a recursive directory delete is propagated as an exception, rather than ignored as
> a detail which idempotent operations should just ignore.
> The exception should be caught and, if it is a file-not-found problem, logged rather than propagated.
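The catch-and-log pattern the issue description asks for can be illustrated as follows. This is a hedged sketch, not the s3a code: the {{Store}} interface and class names are hypothetical, standing in for the object-store client.

```java
import java.io.FileNotFoundException;
import java.io.IOException;

// Hedged illustration of the requested behaviour: treat a
// FileNotFoundException raised while deleting a child entry as a
// benign race (another client already deleted it), log it, and
// continue; any other IOException still propagates.
public class ChildDeleteExample {

    /** Hypothetical stand-in for the object-store client. */
    interface Store {
        void deleteObject(String key) throws IOException;
    }

    static void deleteChild(Store store, String key) throws IOException {
        try {
            store.deleteObject(key);
        } catch (FileNotFoundException e) {
            // Already deleted by a concurrent caller: the delete is
            // idempotent, so log and carry on rather than fail.
            System.out.println("child already deleted: " + key);
        }
    }
}
```

The key point is that only the file-not-found case is swallowed; permission errors and other failures still surface to the caller.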



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org

