brooklyn-dev mailing list archives

From "Alex Heneveld (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (BROOKLYN-394) "Request limit exceeded" on Amazon
Date Wed, 23 Nov 2016 12:36:58 GMT

    [ https://issues.apache.org/jira/browse/BROOKLYN-394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15689990#comment-15689990 ]

Alex Heneveld commented on BROOKLYN-394:
----------------------------------------

Has this resolved the issue, or simply improved it a bit?

I tend to think this needs a fix in jclouds or in our use of it: in particular, using the same
rate-limiter instance across all requests to a particular cloud account + endpoint. The comments
here suggest this isn't the case; back-off is per thread/request, which means we'll keep banging
our heads against this with big clusters created in parallel. The parameter tweaks here help us
eke out a somewhat higher success rate but don't solve the problem.

cc [~andreaturli] ?
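
A fix along those lines could share one limiter per cloud account + endpoint, so all threads pace their requests together rather than backing off independently. A minimal sketch in plain Java (the class, keying scheme, and rates are illustrative assumptions, not jclouds API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: one rate limiter shared across all threads that talk to
// the same cloud account + endpoint, instead of per-thread/per-request back-off.
public class SharedCloudRateLimiter {

    // Simple pacing limiter: spaces acquisitions `1/permitsPerSecond` apart.
    static final class Limiter {
        private final long intervalNanos;
        private long nextFreeNanos = System.nanoTime();

        Limiter(double permitsPerSecond) {
            this.intervalNanos = (long) (1_000_000_000L / permitsPerSecond);
        }

        synchronized void acquire() throws InterruptedException {
            long now = System.nanoTime();
            long waitNanos = nextFreeNanos - now;
            // reserve the next slot before sleeping, so callers queue up fairly
            nextFreeNanos = Math.max(now, nextFreeNanos) + intervalNanos;
            if (waitNanos > 0) {
                Thread.sleep(waitNanos / 1_000_000, (int) (waitNanos % 1_000_000));
            }
        }
    }

    private static final Map<String, Limiter> LIMITERS = new ConcurrentHashMap<>();

    // All requests for the same account+endpoint share one Limiter instance.
    public static void throttle(String account, String endpoint, double permitsPerSecond)
            throws InterruptedException {
        LIMITERS.computeIfAbsent(account + "|" + endpoint,
                k -> new Limiter(permitsPerSecond)).acquire();
    }
}
```

With this shape, a big parallel cluster creation still makes its requests, but they are spread out below the provider's rate limit instead of bursting and then all retrying at once.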

> "Request limit exceeded" on Amazon
> ----------------------------------
>
>                 Key: BROOKLYN-394
>                 URL: https://issues.apache.org/jira/browse/BROOKLYN-394
>             Project: Brooklyn
>          Issue Type: Bug
>            Reporter: Svetoslav Neykov
>            Assignee: Aled Sage
>             Fix For: 0.10.0
>
>
> Any moderately sized blueprint (say, Kubernetes) can trigger {{Request limit exceeded}} on Amazon. The only control users have over the request rate is setting {{maxConcurrentMachineCreations}}, with the currently recommended value of 3 (see clocker.io).
> It's bad user experience if one needs to adapt the location based on the blueprint.
> Possible steps to improve:
> * Add to troubleshooting documentation
> * Make maxConcurrentMachineCreations default to 3
> * Check whether we are polling for machine creation too often.
> * Check how many requests we are making to Amazon per created machine.
> * The number of requests per machine could vary from blueprint to blueprint (say, if the blueprint is creating security groups or using other Amazon services). Is there a way to throttle our requests to Amazon and stay below a certain per-second limit?
> * I've hit the error during machine tear-down as well, so {{maxConcurrentMachineCreations}} is not enough to work around it.
> Some docs on rate limits at http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html.
> Related: https://github.com/jclouds/legacy-jclouds/issues/1214
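
Whatever throttling ends up in the location or in jclouds, retries on {{Request limit exceeded}} are commonly paired with exponential back-off plus jitter, so colliding clients do not all retry in lockstep. A minimal sketch in plain Java (the exception matching, base delay, and cap are assumptions for illustration):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ThreadLocalRandom;

// Hypothetical sketch: retry a cloud call with exponential back-off and full
// jitter when the provider reports a rate-limit error.
public class RateLimitBackoff {

    public static <T> T callWithBackoff(Callable<T> call, int maxAttempts) throws Exception {
        final long baseDelayMs = 100;   // assumed starting delay
        final long capMs = 10_000;      // assumed ceiling on any single sleep
        for (int attempt = 1; ; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                boolean rateLimited =
                        String.valueOf(e.getMessage()).contains("Request limit exceeded");
                if (!rateLimited || attempt >= maxAttempts) {
                    throw e;  // not a rate-limit error, or out of retries
                }
                // full jitter: sleep a random duration in [0, min(cap, base * 2^attempt)]
                long ceiling = Math.min(capMs, baseDelayMs << attempt);
                Thread.sleep(ThreadLocalRandom.current().nextLong(ceiling + 1));
            }
        }
    }
}
```

The randomised sleep is what matters here: with per-thread deterministic back-off, all the threads that failed together retry together and hit the limit again.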



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
