hadoop-common-issues mailing list archives

From "Steve Loughran (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-13904) DynamoDBMetadataStore to handle DDB throttling failures through retry policy
Date Fri, 03 Feb 2017 15:54:51 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-13904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15851644#comment-15851644 ]

Steve Loughran commented on HADOOP-13904:

Yetus is unhappy with it... is it in sync with the branch?

* that fix to line 196 of the pom should go into branch-2... submit a separate patch for that
and I'll get it in

h2. {{retryBackoff}}
* the retry policy should really detect and reject the auth failures as non-retryable. Looking
at the s3a block output stream, we get away with it only because you don't get as far as completing
a multipart write without having the credentials, though I should add the check there too,
to fail fast on situations like session credential expiry during a multi-day streaming app.
* Take a look at {{S3aBlockOutputStream.shouldRetry}} for some things to consider: (a) handle
interruptions by interrupting the thread again, and (b) handle any other exception by just returning
false to the shouldRetry probe. Why? It means the caller can fail with whatever exception
caused the initial problem, which is presumably the most useful one.
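The two points above could be sketched roughly as below. This is a minimal, hypothetical illustration of the shouldRetry semantics being described; the class and method names here are assumptions for the sketch, not the actual {{S3ABlockOutputStream}} code:

```java
// Hypothetical sketch of a shouldRetry probe: re-interrupt the thread on
// InterruptedException, and answer "don't retry" for anything else so the
// caller fails with the exception that caused the initial problem.
public class RetryProbe {

    /** Decide whether to retry; sleeps with exponential backoff before retrying. */
    public static boolean shouldRetry(int attempt, int maxAttempts, long baseDelayMs) {
        if (attempt >= maxAttempts) {
            return false; // ultimate failure: caller rethrows the original exception
        }
        try {
            // exponential backoff: baseDelayMs * 2^attempt
            Thread.sleep(baseDelayMs << attempt);
            return true;
        } catch (InterruptedException e) {
            // (a) restore the interrupt flag so the caller can observe it
            Thread.currentThread().interrupt();
            return false;
        } catch (RuntimeException e) {
            // (b) any other problem: don't retry, don't mask the original failure
            return false;
        }
    }
}
```

Returning false rather than throwing from the probe is the key design point: the original exception stays the one the caller sees.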

other than that, LGTM.

> DynamoDBMetadataStore to handle DDB throttling failures through retry policy
> ----------------------------------------------------------------------------
>                 Key: HADOOP-13904
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13904
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: HADOOP-13345
>            Reporter: Steve Loughran
>            Assignee: Aaron Fabbri
>         Attachments: HADOOP-13904-HADOOP-13345.001.patch, HADOOP-13904-HADOOP-13345.002.patch
> When you overload DDB, you get error messages warning of throttling, [as documented by
> Reduce load on DDB by doing a table lookup before the create, then, in table create/delete
> operations and in get/put actions, recognise the error codes and retry using an appropriate
> retry policy (exponential backoff + ultimate failure)

This message was sent by Atlassian JIRA

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org
