kafka-dev mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (KAFKA-3169) Kafka broker throws OutOfMemory error with invalid SASL packet
Date Sat, 30 Jan 2016 11:35:39 GMT

    [ https://issues.apache.org/jira/browse/KAFKA-3169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124854#comment-15124854 ]

ASF GitHub Bot commented on KAFKA-3169:
---------------------------------------

Github user asfgit closed the pull request at:

    https://github.com/apache/kafka/pull/831


> Kafka broker throws OutOfMemory error with invalid SASL packet
> --------------------------------------------------------------
>
>                 Key: KAFKA-3169
>                 URL: https://issues.apache.org/jira/browse/KAFKA-3169
>             Project: Kafka
>          Issue Type: Bug
>          Components: security
>    Affects Versions: 0.9.0.0
>            Reporter: Rajini Sivaram
>            Assignee: Rajini Sivaram
>            Priority: Critical
>             Fix For: 0.9.0.1
>
>
> The receive buffer used in Kafka brokers to process SASL packets is unbounded. This can result
> in brokers crashing with an OutOfMemory error when an invalid SASL packet is received.
> There is a standard SASL property in Java, _javax.security.sasl.maxbuffer_, that can be
> used to specify the buffer size. When properties are added to the Sasl implementation in KAFKA-3149,
> we can use that standard property to limit the receive buffer size.
> But since this is a potential DoS issue, we should set a reasonable limit in 0.9.0.1.
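
For illustration only (this is not the actual KAFKA-3169 patch): a minimal Java sketch of the kind of bound described above, where the size prefix of an incoming SASL packet is validated against a hypothetical MAX_RECEIVE_SIZE limit before the receive buffer is allocated, so a bogus length field cannot force an OutOfMemory allocation.

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.ReadableByteChannel;

    /**
     * Sketch of a bounded, size-prefixed read for SASL packets.
     * The limit and method names here are illustrative, not Kafka's.
     */
    public class BoundedSaslReceive {

        // Hypothetical limit; a real broker would make this configurable
        // (e.g. via javax.security.sasl.maxbuffer once KAFKA-3149 lands).
        private static final int MAX_RECEIVE_SIZE = 512 * 1024;

        public static ByteBuffer readSaslPacket(ReadableByteChannel channel) throws IOException {
            // Read the 4-byte size prefix.
            ByteBuffer sizeBuffer = ByteBuffer.allocate(4);
            while (sizeBuffer.hasRemaining()) {
                if (channel.read(sizeBuffer) < 0)
                    throw new IOException("Connection closed while reading SASL packet size");
            }
            sizeBuffer.flip();
            int size = sizeBuffer.getInt();

            // Reject invalid or oversized packets instead of blindly allocating.
            if (size <= 0 || size > MAX_RECEIVE_SIZE)
                throw new IOException("Invalid SASL packet size " + size
                        + " (must be between 1 and " + MAX_RECEIVE_SIZE + ")");

            // Only now allocate the receive buffer and read the payload.
            ByteBuffer payload = ByteBuffer.allocate(size);
            while (payload.hasRemaining()) {
                if (channel.read(payload) < 0)
                    throw new IOException("Connection closed while reading SASL packet");
            }
            payload.flip();
            return payload;
        }
    }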




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
