hadoop-common-issues mailing list archives

From "Steve Loughran (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-13904) DynamoDBMetadataStore to handle DDB throttling failures through retry policy
Date Mon, 20 Feb 2017 13:07:44 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-13904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15874507#comment-15874507 ]

Steve Loughran commented on HADOOP-13904:

* Again, there's a TODO in the code. Better to add a comment in the relevant JIRA mentioning
"and patch {{DynamoDBMetadataStore & AbstractITestS3AMetadataStoreScale}}" as one of
the work items. There are 557 TODO entries in branch-2: don't add any more unless you are
prepared to go through the old ones and fix a couple. (To be fair, I think some are mine.)
* {{S3GUARD_DDB_MAX_RETRIES_DEFAULT, MIN_RETRY_SLEEP_MSEC, ...}}: it's always good for the
javadoc of a constant to include the {{@value}} marker, so the javadocs show what the value
actually is.
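For illustration, this is the kind of {{@value}} usage meant (the constant names come from the patch under review; the default values here are made up for the example):

```java
/** Illustrative constants holder; names from the review, values hypothetical. */
public class RetryConstants {
  /**
   * Default maximum number of retries for throttled DynamoDB operations.
   * The current value is {@value}.
   */
  public static final int S3GUARD_DDB_MAX_RETRIES_DEFAULT = 9;

  /**
   * Minimum sleep between retry attempts, in milliseconds.
   * The current value is {@value}.
   */
  public static final long MIN_RETRY_SLEEP_MSEC = 100;
}
```

With {{@value}}, the generated javadoc inlines the literal, so readers don't have to open the source to learn the default.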

I think the {{retryBackoff}} logic should look a bit at what failed. At the very least,
auth failures should be recognised and propagated. It's really annoying when auth problems
trigger failure/retry, and too much of the Hadoop stack gets this wrong (e.g. ZOOKEEPER-2346).
We can handle anything else which happens (assuming all connectivity errors are transient),
but if the user can't log on, we should fail fast.
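A minimal sketch of the fail-fast-on-auth idea (the exception type and method names here are invented for illustration, not the actual patch code):

```java
import java.io.IOException;
import java.util.concurrent.Callable;

/** Sketch of a retry wrapper that fails fast on auth problems. */
public class FailFastRetry {

  /** Hypothetical marker for non-recoverable authentication failures. */
  public static class AuthFailureException extends IOException {
    public AuthFailureException(String msg) { super(msg); }
  }

  /**
   * Run {@code op}, retrying transient failures with exponential backoff,
   * but propagating auth failures immediately.
   */
  public static <T> T withRetries(Callable<T> op, int maxRetries, long baseSleepMsec)
      throws Exception {
    long sleep = baseSleepMsec;
    for (int attempt = 0; ; attempt++) {
      try {
        return op.call();
      } catch (AuthFailureException e) {
        // Auth problems will not fix themselves: fail fast, no retry.
        throw e;
      } catch (IOException e) {
        // Treat everything else as transient (throttling, connectivity).
        if (attempt >= maxRetries) {
          throw e;              // ultimate failure after maxRetries attempts
        }
        Thread.sleep(sleep);
        sleep *= 2;             // exponential backoff
      }
    }
  }
}
```

The key point is the ordering of the catch clauses: the non-recoverable case is recognised first and rethrown, and only the remainder goes through the backoff path.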

> DynamoDBMetadataStore to handle DDB throttling failures through retry policy
> ----------------------------------------------------------------------------
>                 Key: HADOOP-13904
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13904
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: HADOOP-13345
>            Reporter: Steve Loughran
>            Assignee: Aaron Fabbri
>         Attachments: HADOOP-13904-HADOOP-13345.001.patch, HADOOP-13904-HADOOP-13345.002.patch,
HADOOP-13904-HADOOP-13345.003.patch, screenshot-1.png
> When you overload DDB, you get error messages warning of throttling, [as documented by
> Reduce load on DDB by doing a table lookup before the create, then, in table create/delete
operations and in get/put actions, recognise the error codes and retry using an appropriate
retry policy (exponential backoff + ultimate failure) 

This message was sent by Atlassian JIRA

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org
