kafka-jira mailing list archives

From "Jason Gustafson (Jira)" <j...@apache.org>
Subject [jira] [Updated] (KAFKA-12351) Fix misleading max.request.size behavior
Date Sat, 20 Feb 2021 18:45:00 GMT

     [ https://issues.apache.org/jira/browse/KAFKA-12351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]

Jason Gustafson updated KAFKA-12351:
------------------------------------
    Description: 
The producer has a configuration called `max.request.size`. It is documented as follows:
{code}
        "The maximum size of a request in bytes. This setting will limit the number of record " +
        "batches the producer will send in a single request to avoid sending huge requests. " +
        "This is also effectively a cap on the maximum uncompressed record batch size. Note that the server " +
        "has its own cap on the record batch size (after compression if compression is enabled) which may be different from this.";
{code}
So the intent is to limit the overall size of the request, but the documentation says that it also serves as a cap on the maximum uncompressed batch size.

In the implementation, however, we use it as a cap on individual uncompressed record sizes, not on batch sizes. Additionally, we treat it as a soft limit when applied to requests. Both of these differences are worth pointing out in the documentation. 
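To make the distinction concrete, here is a minimal sketch (not Kafka's actual code) of the check as the implementation behaves: the limit is enforced against each serialized, uncompressed record, so a single oversized record is rejected even though a batch of many small records can legitimately add up to a similar total. The class and method names are hypothetical, invented for illustration.

{code}
// Hypothetical illustration of the behavior described above.
public class MaxRequestSizeCheck {
    // Stand-in for the configured max.request.size (producer default: 1 MiB).
    static final int MAX_REQUEST_SIZE = 1048576;

    // The cap is applied per record, on the uncompressed serialized size,
    // not per batch; the request-level use of the value is only a soft limit.
    static boolean recordTooLarge(int serializedRecordSize) {
        return serializedRecordSize > MAX_REQUEST_SIZE;
    }

    public static void main(String[] args) {
        // A single 2 MiB record is rejected outright...
        System.out.println(recordTooLarge(2 * 1024 * 1024)); // true
        // ...while each 512 KiB record passes, even if a batch of them
        // approaches the same total size.
        System.out.println(recordTooLarge(512 * 1024)); // false
    }
}
{code}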

  was:
The producer has a configuration called `max.request.size`. It is documented as follows:
{code}
        "The maximum size of a request in bytes. This setting will limit the number of record " +
        "batches the producer will send in a single request to avoid sending huge requests. " +
        "This is also effectively a cap on the maximum uncompressed record batch size. Note that the server " +
        "has its own cap on the record batch size (after compression if compression is enabled) which may be different from this.";
{code}
So basically the intent is to limit the overall size of the request, but the documentation
says that it is also serves as a maximum cap on the uncompressed batch size.

In the implementation, however, we use it as a maximum cap on uncompressed record sizes, not
batches. Additionally, we treat this as a soft limit when applied to requests. Both of these
differences are worth pointing out in the documentation. 


> Fix misleading max.request.size behavior
> ----------------------------------------
>
>                 Key: KAFKA-12351
>                 URL: https://issues.apache.org/jira/browse/KAFKA-12351
>             Project: Kafka
>          Issue Type: Improvement
>            Reporter: Jason Gustafson
>            Assignee: Jason Gustafson
>            Priority: Major
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
