hadoop-common-issues mailing list archives

From "Steve Loughran (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-14239) S3A Retry Multiple S3 Key Deletion
Date Sat, 01 Apr 2017 12:32:42 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-14239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15952198#comment-15952198 ]

Steve Loughran commented on HADOOP-14239:

Assuming it's just race conditions we need to retry on, and we can identify them (how?), then
retry makes sense. In that case, this JIRA should be closed as a duplicate of HADOOP-11572,
and you take on that work.

1. If we can identify 404 failures without making calls on each one, handling is easy: strip
them out.
2. Otherwise, yes, individual failures should be queued for one-by-one retry attempts.
3. If any of the individual attempts fails with something other than a 404, then all queued
work must be halted and the operation raises an exception (see the sketch after this list).
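
Roughly, with the AWS SDK for Java v1 that flow could look like the sketch below. It is only an
illustration, not the actual S3A code path; the class/method names and the "NoSuchKey" check for
already-deleted objects are assumptions.

{code:java}
import java.util.ArrayList;
import java.util.List;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.DeleteObjectsRequest;
import com.amazonaws.services.s3.model.MultiObjectDeleteException;
import com.amazonaws.services.s3.model.MultiObjectDeleteException.DeleteError;

/**
 * Sketch only: bulk delete, then retry just the keys the service
 * reported as failed, one by one. Not the S3A implementation.
 */
public class PartialDeleteRetrySketch {

  static void deleteKeys(AmazonS3 s3, String bucket, List<String> keys) {
    try {
      s3.deleteObjects(new DeleteObjectsRequest(bucket)
          .withKeys(keys.toArray(new String[0])));
    } catch (MultiObjectDeleteException e) {
      // The SDK reports exactly which keys failed and why.
      List<String> retryable = new ArrayList<>();
      for (DeleteError err : e.getErrors()) {
        // Assumption: treat "NoSuchKey" as "already deleted" and strip it
        // out rather than retrying (point 1 above).
        if ("NoSuchKey".equals(err.getCode())) {
          continue;
        }
        retryable.add(err.getKey());
      }
      // Points 2 and 3: retry the remaining keys one by one; a failure
      // here propagates to the caller and halts the operation.
      for (String key : retryable) {
        s3.deleteObject(bucket, key);
      }
    }
  }
}
{code}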

Note that s3guard is going to add complexity here, just because of new codepaths/deployment
situations, as well as enough diffs in the files to make merges hard. I'm not going to be reviewing
any work on this patch until s3guard is merged into trunk and/or branch-2. If you start coding
atop the HADOOP-13345 branch, then you may have less merge pain.

> S3A Retry Multiple S3 Key Deletion
> ----------------------------------
>                 Key: HADOOP-14239
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14239
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/s3
>    Affects Versions: 2.8.0
>         Environment: EC2, AWS
>            Reporter: Kazuyuki Tanimura
> When fs.s3a.multiobjectdelete.enable == true, it tries to delete multiple S3 keys at once.
> Although this is a great feature, it becomes problematic when AWS fails to delete some
> S3 keys out of the deletion list. The aws-java-sdk internally retries the deletion, but it
> does not help because it simply retries the same list of S3 keys, including the successfully
> deleted ones. In that case, all subsequent retries fail on the previously deleted keys since
> they do not exist any more. Eventually it throws an exception and fails the entire job.
> Luckily, the AWS API reports which keys it failed to delete. S3A should retry only the
> keys that failed to be deleted.
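
(For context on that last point: in the AWS SDK for Java v1 the per-key failure report surfaces as
MultiObjectDeleteException; a minimal illustration follows, with placeholder bucket/key/class names,
not the S3A code.)

{code:java}
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.DeleteObjectsRequest;
import com.amazonaws.services.s3.model.MultiObjectDeleteException;

public class PartialDeleteReportSketch {
  static void show(AmazonS3 s3) {
    try {
      s3.deleteObjects(new DeleteObjectsRequest("example-bucket")
          .withKeys("k1", "k2", "k3"));
    } catch (MultiObjectDeleteException e) {
      // Keys removed successfully in the same request.
      System.out.println("deleted: " + e.getDeletedObjects().size());
      // Keys that failed, each with an error code and message; the retry
      // list can be built from these alone.
      for (MultiObjectDeleteException.DeleteError err : e.getErrors()) {
        System.out.println("failed " + err.getKey()
            + ": " + err.getCode() + " " + err.getMessage());
      }
    }
  }
}
{code}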

This message was sent by Atlassian JIRA

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org
