kafka-dev mailing list archives

From "Jay Kreps (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (KAFKA-656) Add Quotas to Kafka
Date Wed, 27 Feb 2013 04:48:13 GMT

    [ https://issues.apache.org/jira/browse/KAFKA-656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13587997#comment-13587997 ]

Jay Kreps commented on KAFKA-656:
---------------------------------

Yes, it would definitely make sense to use a metric for it since that will make it easier
to monitor how close you are to the limit.

One related patch is the dynamic per-topic config patch. I have a feeling that these quotas
would definitely be the kind of thing you would want to update dynamically. See KAFKA-554.

If you want to take a stab at it, that would be fantastic and I would be happy to help however
I can. It would probably be good to start with a simple wiki of how it would work and get
consensus on that.

Here is what I was thinking: we could add a class something like
  class Quotas {
    def record(client: String, topic: String, bytesToRead: Long, bytesToWrite: Long)
  }
The record() method would record the work done and, if we are over quota for that topic or
client, throw a QuotaExceededException. (We might need to split the record and the check; not
sure.)
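
For illustration, here is one way the split record/check version might look (a toy Java sketch with a single fixed window and per-client byte counters only; the class shape, counter semantics, and exception type are assumptions, not from any actual patch):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Toy sketch: per-client byte counters with a separate check step,
// so a caller can decide whether to reject before doing the work.
class Quotas {
    private final long bytesPerWindow;
    private final Map<String, AtomicLong> usage = new ConcurrentHashMap<>();

    Quotas(long bytesPerWindow) {
        this.bytesPerWindow = bytesPerWindow;
    }

    // Returns true if the client is still under its quota.
    boolean check(String client) {
        AtomicLong used = usage.get(client);
        return used == null || used.get() < bytesPerWindow;
    }

    // Records work done; throws if this pushes the client over quota.
    void record(String client, long bytesToRead, long bytesToWrite) {
        AtomicLong used = usage.computeIfAbsent(client, k -> new AtomicLong());
        long total = used.addAndGet(bytesToRead + bytesToWrite);
        if (total > bytesPerWindow) {
            // Stand-in for the QuotaExceededException mentioned above.
            throw new IllegalStateException("Quota exceeded for client " + client);
        }
    }
}
```

Note that the attempt is charged before the exception is thrown, which matches the idea below of counting attempts rather than just successes.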

This class can be integrated into KafkaApis to do the appropriate checks for each API. We should
probably apply the quota to all client-facing APIs, even things like metadata fetch which
do no real read or write. These would just count against your total request counter and could
have the bytes arguments both set to 0.
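As an illustration only, a toy dispatch along those lines might look like this (the request limit, method names, and rejection path are all hypothetical, not from KafkaApis):

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch: every client-facing API charges the request quota, with
// metadata-style requests passing 0 for both byte counts. Names are
// placeholders, not actual Kafka code.
class QuotaDispatch {
    static final long MAX_REQUESTS = 3;  // toy per-window request limit
    static final Map<String, Long> requestCounts = new HashMap<>();

    // Returns true if the request was admitted, false if quota-rejected.
    static boolean handle(String client, String api, long bytesToRead, long bytesToWrite) {
        long count = requestCounts.merge(client, 1L, Long::sum);
        if (count > MAX_REQUESTS) {
            return false;  // would map to a QUOTA_EXCEEDED error response
        }
        // ...dispatch to the real handler for `api`, charging the byte
        // arguments against the read/write quotas as appropriate...
        return true;
    }
}
```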
                
> Add Quotas to Kafka
> -------------------
>
>                 Key: KAFKA-656
>                 URL: https://issues.apache.org/jira/browse/KAFKA-656
>             Project: Kafka
>          Issue Type: New Feature
>          Components: core
>    Affects Versions: 0.8.1
>            Reporter: Jay Kreps
>              Labels: project
>
> It would be nice to implement a quota system in Kafka to improve our support for highly
> multi-tenant usage. The goal of this system would be to prevent one naughty user from
> accidentally overloading the whole cluster.
> There are several quantities we would want to track:
> 1. Requests per second
> 2. Bytes written per second
> 3. Bytes read per second
> There are two reasonable groupings at which we would want to aggregate and enforce these
> thresholds:
> 1. Topic level
> 2. Client level (e.g. by client id from the request)
> When a request hits one of these limits we will simply reject it with a QUOTA_EXCEEDED
> exception.
> To avoid suddenly breaking things without warning, we should ideally support two thresholds:
> a soft threshold at which we produce some kind of warning and a hard threshold at which we
> give the error. The soft threshold could just be defined as 80% (or whatever) of the hard
> threshold.
> There are nuances to getting this right. If you measure second-by-second a single burst
> may exceed the threshold, so we need a sustained measurement over a period of time.
> Likewise, when do we stop giving this error? To make this work right we likely need to
> charge against the quota for request *attempts*, not just successful requests. Otherwise a
> client that is overloading the server will just flap on and off--i.e. we would disable them
> for a period of time but when we re-enabled them they would likely still be abusing us.
> It would be good to have a wiki design of how this would all work as a starting point for
> discussion.
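
The sustained-measurement and soft/hard threshold ideas in the quoted description could be sketched as a toy sliding window (Java for illustration; the one-second bucket granularity and the 80% soft ratio follow the description, everything else is assumed):

```java
// Toy sliding-window tracker: usage is summed over several one-second
// buckets, so a single burst does not immediately trip the hard limit
// and the rate is judged over a sustained period.
class WindowedQuota {
    enum Status { OK, SOFT_WARN, HARD_LIMIT }

    private final long[] buckets;   // one bucket per second of window
    private final long hardLimit;   // max units per full window
    private final long softLimit;   // warn at 80% of the hard limit
    private long currentSecond = -1;
    private int index = 0;

    WindowedQuota(int windowSeconds, long hardLimit) {
        this.buckets = new long[windowSeconds];
        this.hardLimit = hardLimit;
        this.softLimit = (long) (hardLimit * 0.8);
    }

    // Records usage at the given second and reports the window status.
    // Attempts are charged even when over the limit, to avoid flapping.
    Status record(long nowSeconds, long amount) {
        if (nowSeconds != currentSecond) {
            // Advance the ring, zeroing any buckets we skipped over.
            long steps = currentSecond < 0 ? buckets.length : nowSeconds - currentSecond;
            for (long i = 0; i < Math.min(steps, buckets.length); i++) {
                index = (index + 1) % buckets.length;
                buckets[index] = 0;
            }
            currentSecond = nowSeconds;
        }
        buckets[index] += amount;
        long total = 0;
        for (long b : buckets) total += b;
        if (total > hardLimit) return Status.HARD_LIMIT;
        if (total > softLimit) return Status.SOFT_WARN;
        return Status.OK;
    }
}
```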

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
