From: "ASF GitHub Bot (JIRA)"
To: common-issues@hadoop.apache.org
Date: Thu, 26 Oct 2017 20:13:00 +0000 (UTC)
Subject: [jira] [Commented] (HADOOP-14971) Merge S3A committers into trunk

    [ https://issues.apache.org/jira/browse/HADOOP-14971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16221137#comment-16221137 ]

ASF GitHub Bot commented on HADOOP-14971:
-----------------------------------------

Github user ajfabbri commented on a diff in the pull request:

    https://github.com/apache/hadoop/pull/282#discussion_r147253647

    --- Diff: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java ---
    @@ -1130,17 +1359,23 @@ private void blockRootDelete(String key) throws InvalidRequestException {
         * Perform a bulk object delete operation.
         * Increments the {@code OBJECT_DELETE_REQUESTS} and write
         * operation statistics.
    +    * Retry policy: retry untranslated; delete considered idempotent.
         * @param deleteRequest keys to delete on the s3-backend
         * @throws MultiObjectDeleteException one or more of the keys could not
         * be deleted.
         * @throws AmazonClientException amazon-layer failure.
         */
    +  @Retries.RetryRaw
       private void deleteObjects(DeleteObjectsRequest deleteRequest)
    -      throws MultiObjectDeleteException, AmazonClientException {
    +      throws MultiObjectDeleteException, AmazonClientException, IOException {
         incrementWriteOperations();
    -    incrementStatistic(OBJECT_DELETE_REQUESTS, 1);
         try {
    -      s3.deleteObjects(deleteRequest);
    +      invoker.retryUntranslated("delete",
    --- End diff --

    not quite following your comment here. One question on my mind is the behavior of recursive delete: should it fail if the set of objects changes underneath it during execution? For example, I'm executing a recursive delete and an object disappears due to concurrent modification or eventual consistency. Should the delete fail, or succeed because the contract of delete(path) was satisfied (stuff was deleted)?

    Based on my reading of your RetryPolicy, you don't retry on object not found (good), but we still fail. We may want to actually throw to the top-level delete() and restart the recursive delete. A bit orthogonal, but I want to understand the big picture so I can feel confident this code is right and where it is headed in the future.


> Merge S3A committers into trunk
> -------------------------------
>
>                 Key: HADOOP-14971
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14971
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.0.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>
> Merge the HADOOP-13786 committer into trunk. This branch is being set up as a github PR for review there and to keep it out of the mailboxes of the watchers on the main JIRA.
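
[Editor's note] To make the retry discussion above concrete, the following is a minimal, self-contained sketch of the idea under debate: a delete is treated as idempotent, so transient failures are retried, while a "not found" outcome is treated as success because the desired end state (object absent) already holds. This is illustrative only; the names below (IdempotentDeleteSketch, retryIdempotentDelete, TransientServiceException, NotFoundException) are hypothetical and are not the Hadoop Invoker/RetryPolicy API used in the patch.

    import java.io.IOException;
    import java.util.concurrent.Callable;

    /**
     * Hypothetical sketch: retry a delete-style operation, tolerating
     * "already deleted" as success. Not taken from the Hadoop codebase.
     */
    public class IdempotentDeleteSketch {

        /** Stand-in for a transient, retryable service failure. */
        static class TransientServiceException extends IOException {
            TransientServiceException(String msg) { super(msg); }
        }

        /** Stand-in for "the object no longer exists". */
        static class NotFoundException extends IOException {
            NotFoundException(String msg) { super(msg); }
        }

        /**
         * Retry an idempotent delete: transient failures are retried with a
         * fixed backoff; NotFoundException is swallowed because the desired
         * end state (object absent) already holds.
         */
        static <T> T retryIdempotentDelete(String opName,
                                           int maxAttempts,
                                           long backoffMillis,
                                           Callable<T> operation) throws IOException {
            IOException lastFailure = null;
            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                try {
                    return operation.call();
                } catch (NotFoundException e) {
                    // Object already gone: the contract of delete() is satisfied.
                    return null;
                } catch (TransientServiceException e) {
                    // Retryable: delete is idempotent, so re-issuing it is safe.
                    lastFailure = e;
                    try {
                        Thread.sleep(backoffMillis);
                    } catch (InterruptedException ie) {
                        Thread.currentThread().interrupt();
                        throw new IOException(opName + " interrupted", ie);
                    }
                } catch (Exception e) {
                    // Anything else is treated as a non-retryable failure.
                    throw new IOException(opName + " failed", e);
                }
            }
            throw new IOException(opName + " failed after " + maxAttempts
                + " attempts", lastFailure);
        }

        public static void main(String[] args) throws IOException {
            // Simulated bulk delete: throttled once, then one key is already gone.
            final int[] calls = {0};
            retryIdempotentDelete("delete", 3, 100L, () -> {
                if (calls[0]++ == 0) {
                    throw new TransientServiceException("throttled, please retry");
                }
                throw new NotFoundException("key already deleted");
            });
            System.out.println("delete completed; missing key tolerated");
        }
    }

Whether a top-level recursive delete should restart when the object tree changes underneath it, as the comment suggests, is a separate policy decision; the sketch only shows the per-request behaviour of tolerating an already-deleted key.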