kafka-dev mailing list archives

From "Jun Rao (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (KAFKA-5062) Kafka brokers can accept malformed requests which allocate gigabytes of memory
Date Thu, 13 Apr 2017 21:09:41 GMT

    https://issues.apache.org/jira/browse/KAFKA-5062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15968209#comment-15968209

Jun Rao commented on KAFKA-5062:

One far-fetched scenario is the following. A misbehaving application sends the broker some data that happens to match the layout of a produce request. The data records in that request may appear to be compressed. The broker will then try to decompress those records, which could lead to the allocation of an arbitrarily large byte array.
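As a rough illustration (not Kafka code), the sketch below shows why decompressing untrusted records is risky: with deflate, a few kilobytes of highly repetitive input can expand to many megabytes on the heap, so the allocation is controlled by the sender rather than by the bytes actually received.

```java
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class DecompressionSketch {
    public static void main(String[] args) throws Exception {
        // Compress 10 MB of zeros; highly repetitive data compresses extremely well.
        byte[] original = new byte[10 * 1024 * 1024];
        Deflater deflater = new Deflater();
        deflater.setInput(original);
        deflater.finish();
        byte[] compressed = new byte[1024 * 1024];
        int compressedLen = deflater.deflate(compressed);
        deflater.end();

        // Broker-side analogue: a small payload on the wire forces a
        // multi-megabyte allocation once decompressed.
        Inflater inflater = new Inflater();
        inflater.setInput(compressed, 0, compressedLen);
        byte[] inflated = new byte[original.length];
        int inflatedLen = 0;
        while (!inflater.finished()) {
            inflatedLen += inflater.inflate(inflated, inflatedLen, inflated.length - inflatedLen);
        }
        inflater.end();

        System.out.println("compressed bytes: " + compressedLen);
        System.out.println("decompressed bytes: " + inflatedLen);
    }
}
```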

One improvement that we could make is to tighten up the parsing of a request on the server side. Currently, Schema.read() can succeed even when a struct is completely constructed without consuming all bytes in the input byte buffer. It's probably safer to throw an exception when the struct is constructed but there are remaining bytes in the buffer.
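A minimal sketch of that tightening, using a hypothetical length-prefixed parser in place of Kafka's actual Schema.read(): after the struct is fully constructed, any unconsumed bytes mean the input was not a well-formed request and parsing should fail rather than silently succeed.

```java
import java.nio.ByteBuffer;

public class StrictParse {
    // Hypothetical stand-in for Schema.read(): parses one
    // length-prefixed byte array from the buffer.
    static byte[] readStruct(ByteBuffer buffer) {
        int len = buffer.getInt();
        byte[] payload = new byte[len];
        buffer.get(payload);
        return payload;
    }

    // The proposed tightening: reject the request if bytes remain
    // after the struct has been fully constructed.
    static byte[] readStrict(ByteBuffer buffer) {
        byte[] struct = readStruct(buffer);
        if (buffer.hasRemaining()) {
            throw new IllegalArgumentException(
                "Malformed request: " + buffer.remaining() + " trailing bytes after parsing");
        }
        return struct;
    }

    public static void main(String[] args) {
        // Well-formed input: 4-byte length prefix plus exactly 3 payload bytes.
        ByteBuffer exact = ByteBuffer.allocate(7).putInt(3).put(new byte[] {1, 2, 3});
        exact.flip();
        System.out.println("exact parse ok: " + readStrict(exact).length);

        // Malformed input: the struct parses, but 2 bytes are left over.
        ByteBuffer trailing = ByteBuffer.allocate(9).putInt(3).put(new byte[] {1, 2, 3, 4, 5});
        trailing.flip();
        try {
            readStrict(trailing);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```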

> Kafka brokers can accept malformed requests which allocate gigabytes of memory
> ------------------------------------------------------------------------------
>                 Key: KAFKA-5062
>                 URL: https://issues.apache.org/jira/browse/KAFKA-5062
>             Project: Kafka
>          Issue Type: Bug
>            Reporter: Apurva Mehta
> In some circumstances, it is possible to cause a Kafka broker to allocate massive amounts
of memory by writing malformed bytes to the broker's port.
> In investigating an issue, we saw byte arrays on the Kafka heap of up to 1.8 gigabytes, the
first 360 bytes of which were non-Kafka requests -- an application was writing the wrong data
to Kafka, causing the broker to interpret the request size as 1.8GB and then allocate that
amount. Apart from the first 360 bytes, the rest of the 1.8GB byte array was null.
> We have socket.request.max.bytes set at 100MB to protect against this kind of thing,
but somehow that limit is not always respected. We need to investigate why and fix it.
> cc [~rnpridgeon], [~ijuma], [~gwenshap], [~cmccabe]
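The failure mode described in the report can be sketched as follows. This is not the broker's actual request-handling code; the method name and structure are illustrative, and the limit simply mirrors the 100MB socket.request.max.bytes value mentioned above. The point is that the 4-byte size prefix must be validated before any buffer is allocated from it.

```java
import java.nio.ByteBuffer;

public class RequestSizeCheck {
    // Assumed limit mirroring socket.request.max.bytes (100 MB in the report).
    static final int MAX_REQUEST_BYTES = 100 * 1024 * 1024;

    // Hypothetical helper: validate the size prefix BEFORE allocating
    // a buffer for the request body.
    static ByteBuffer allocateForRequest(ByteBuffer sizeHeader) {
        int size = sizeHeader.getInt();
        if (size < 0 || size > MAX_REQUEST_BYTES) {
            throw new IllegalArgumentException(
                "Request size " + size + " exceeds limit " + MAX_REQUEST_BYTES);
        }
        return ByteBuffer.allocate(size);
    }

    public static void main(String[] args) {
        // Garbage bytes that happen to decode as a ~1.8 GB size, as in the report.
        ByteBuffer bogus = ByteBuffer.allocate(4).putInt(1_800_000_000);
        bogus.flip();
        try {
            allocateForRequest(bogus);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }

        // A sane size passes the check and is allocated normally.
        ByteBuffer ok = ByteBuffer.allocate(4).putInt(1024);
        ok.flip();
        System.out.println("allocated: " + allocateForRequest(ok).capacity());
    }
}
```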

This message was sent by Atlassian JIRA
