cloudstack-dev mailing list archives

From Sowmya Krishnan <>
Subject RE: [DISCUSS]API request throttling
Date Wed, 30 Jan 2013 11:17:34 GMT
Min, I have a few questions on this feature that came up while I was putting together the test plan -

1. Do we allow specifying multiple limits based on different intervals - for example: 10
requests for interval = 5 sec, and 100 for interval = 60 sec? Essentially multiple time
slices for better granularity and control. If yes, how do I set this up?
2. What is the purpose of resetApiLimitCmd being exposed to the user? Can't a user keep
invoking this API and reset his counter every time he's about to exceed his limit? This
should be available only to the admin, shouldn't it?
3. Can we have a "negative list" (or a better name) of APIs which shouldn't be counted
toward throttling? For example, queryAsyncJob could be one candidate, since a user cannot
really control that.

4. The FS states the back-off algorithm is TBD. I am assuming it's manual for now, at least
for the 4.1 release?
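To make question 1 concrete, here is a minimal sketch of enforcing several limit/interval pairs at once with fixed counting windows. This is purely illustrative - it is not CloudStack's actual implementation, and the class and parameter names are invented:

```python
import time

class MultiIntervalLimiter:
    """Illustrative sketch only -- not CloudStack's implementation.

    Enforces several (max_requests, interval_seconds) pairs at once,
    e.g. 10 requests per 5 s AND 100 requests per 60 s, using fixed
    counting windows.
    """

    def __init__(self, limits, now=None):
        start = time.time() if now is None else now
        self.limits = limits
        # one [window_start, count] pair per configured limit
        self.windows = [[start, 0] for _ in limits]

    def allow(self, now=None):
        now = time.time() if now is None else now
        # roll over any windows whose interval has expired
        for (maximum, interval), win in zip(self.limits, self.windows):
            if now - win[0] >= interval:
                win[0], win[1] = now, 0
        # the request is allowed only if every window has headroom
        if any(win[1] >= maximum
               for (maximum, _), win in zip(self.limits, self.windows)):
            return False
        for win in self.windows:
            win[1] += 1
        return True
```

In this sketch a request is rejected as soon as any one of the configured windows is full, which is the "multiple time slices" behavior question 1 asks about.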


-----Original Message-----
From: Pranav Saxena [] 
Sent: Saturday, December 22, 2012 5:20 AM
Subject: RE: [DISCUSS]API request throttling

A proper error code certainly seems to be the standard. Just as an example, Twitter
uses the same approach for handling their API throttling response errors as well (
). The back-off algorithm discussion I was referring to was for handling automatic triggering
of blocked requests, but I could not think of a scenario where such functionality would be
useful for CloudStack. Any ideas/suggestions?


-----Original Message-----
From: Alex Huang []
Sent: Saturday, December 22, 2012 12:51 AM
Subject: RE: [DISCUSS]API request throttling

> Which brings me to another question: what is the response: is it a 
> HTTP error code or a normal response that has to be parsed?
> The reaction of most users to an error from the cloud is to re-try -- 
> thereby making the problem worse.

A proper error code is the right way to do it.  It only makes the problem worse if it causes
the system to behave poorly, so we have to design this feature such that processing throttled
requests doesn't cause a considerable performance/scale problem in the system.  One possibility
is a back-off algorithm (I saw some discussion about it but wasn't sure if it was for this),
where we hold off the response if the client continues to send requests, in effect choking
the client.
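One way to read "holding off the response" is that the server delays its replies to a client that keeps sending over-limit requests, with the delay doubling per consecutive violation. A purely illustrative sketch of that idea, with all names invented and no claim about how CloudStack would implement it:

```python
class ChokingThrottle:
    """Illustrative server-side sketch: instead of (or in addition to)
    returning an error, delay the response to a client that keeps
    exceeding its limit, doubling the delay per consecutive violation."""

    def __init__(self, limit, interval, base_delay=0.5, max_delay=8.0):
        self.limit = limit          # allowed requests per interval
        self.interval = interval    # window length in seconds
        self.base_delay = base_delay
        self.max_delay = max_delay
        self.counts = {}            # client -> [window_start, request_count]
        self.violations = {}        # client -> consecutive over-limit requests

    def penalty(self, client, now):
        """Return how long to hold this client's response, in seconds."""
        win = self.counts.setdefault(client, [now, 0])
        if now - win[0] >= self.interval:
            win[0], win[1] = now, 0
            self.violations[client] = 0   # client backed off; forgive it
        win[1] += 1
        if win[1] <= self.limit:
            return 0.0                    # within limits: respond immediately
        v = self.violations.get(client, 0)
        self.violations[client] = v + 1
        # growing delay chokes a client that ignores the throttle
        return min(self.base_delay * (2 ** v), self.max_delay)
```

The delay is capped so a misbehaving client cannot pin server resources indefinitely, and it resets once the client stays within its limit for a full interval.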

