hadoop-common-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-14971) Merge S3A committers into trunk
Date Thu, 26 Oct 2017 20:13:00 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-14971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16221137#comment-16221137 ]

ASF GitHub Bot commented on HADOOP-14971:
-----------------------------------------

Github user ajfabbri commented on a diff in the pull request:

    https://github.com/apache/hadoop/pull/282#discussion_r147253647
  
    --- Diff: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
---
    @@ -1130,17 +1359,23 @@ private void blockRootDelete(String key) throws InvalidRequestException
{
        * Perform a bulk object delete operation.
        * Increments the {@code OBJECT_DELETE_REQUESTS} and write
        * operation statistics.
    +   * Retry policy: retry untranslated; delete considered idempotent.
        * @param deleteRequest keys to delete on the s3-backend
        * @throws MultiObjectDeleteException one or more of the keys could not
        * be deleted.
        * @throws AmazonClientException amazon-layer failure.
        */
    +  @Retries.RetryRaw
       private void deleteObjects(DeleteObjectsRequest deleteRequest)
    -      throws MultiObjectDeleteException, AmazonClientException {
    +      throws MultiObjectDeleteException, AmazonClientException, IOException {
         incrementWriteOperations();
    -    incrementStatistic(OBJECT_DELETE_REQUESTS, 1);
         try {
    -      s3.deleteObjects(deleteRequest);
    +      invoker.retryUntranslated("delete",
    --- End diff --
    
    I'm not quite following your comment here. One question on my mind is the behavior of recursive
delete: should it fail if the set of objects changes underneath it during execution? For
example, I'm executing a recursive delete, and an object disappears due to concurrent modification
or eventual consistency. Should the delete fail, or succeed because the contract of delete(path)
was satisfied (stuff was deleted)? Based on my reading of your RetryPolicy, you don't retry
on object-not-found (good), but we still fail. We may want to actually throw to the top-level
delete() and restart the recursive delete. A bit orthogonal, but I want to understand the
big picture to feel confident this code is right, and where it is headed in the future.
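To make the question above concrete, here is a minimal, hypothetical sketch of the retry semantics under discussion (this is not the actual S3A `Invoker.retryUntranslated` API; the helper name, attempt count, and exception choices are illustrative assumptions): transient I/O failures are retried because the delete is treated as idempotent, while object-not-found is deliberately not retried and propagates to the caller, who must then decide whether that counts as failure or as the delete contract being satisfied.

```java
import java.io.FileNotFoundException;
import java.io.IOException;
import java.util.concurrent.Callable;

// Hypothetical sketch of idempotent-delete retry semantics.
// Not the real S3A Invoker: names and limits here are made up for illustration.
public class RetrySketch {
    static final int MAX_ATTEMPTS = 3;  // assumed limit, for the sketch only

    public static <T> T retryIdempotent(Callable<T> op) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            try {
                return op.call();
            } catch (FileNotFoundException e) {
                // Not retried: the object is already gone. The open question in
                // this review is whether the caller should fail here or treat
                // "already deleted" as satisfying delete()'s contract.
                throw e;
            } catch (IOException e) {
                // Transient network/service error: retry, since delete is idempotent.
                last = e;
            }
        }
        throw last;
    }
}
```

Under this sketch, a recursive delete that hits eventual consistency would see the `FileNotFoundException` surface immediately rather than being masked by retries, matching the "don't retry on object not found" reading of the RetryPolicy above.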


> Merge S3A committers into trunk
> -------------------------------
>
>                 Key: HADOOP-14971
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14971
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.0.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>
> Merge the HADOOP-13786 committer into trunk. This branch is being set up as a github
PR for review there & to keep it out of the mailboxes of the watchers on the main JIRA



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

