kafka-dev mailing list archives

From "Brice Dutheil (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (KAFKA-3990) Kafka New Producer may raise an OutOfMemoryError
Date Fri, 29 Jul 2016 16:03:21 GMT

    [ https://issues.apache.org/jira/browse/KAFKA-3990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15399572#comment-15399572 ]

Brice Dutheil commented on KAFKA-3990:
--------------------------------------

Hi, after further investigation we found out that the issue arose because we switched from bamboo
to marathon-lb: marathon-lb opens HTTP port 9091 (https://github.com/mesosphere/marathon-lb#operational-best-practices),
and we missed that during the upgrade, so the producer was talking the Kafka protocol to an HTTP
endpoint. Note that the four ASCII bytes {{HTTP}} that start an HTTP response line, read as a
big-endian 32-bit size prefix, are exactly {{1213486160}}, the allocation size reported in the issue.

{code}
> curl -v dockerhost:9091
* About to connect() to dockerhost port 9091 (#0)
*   Trying 172.17.42.1...
* Connected to dockerhost (172.17.42.1) port 9091 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: dockerhost:9091
> Accept: */*
>
* Empty reply from server
* Connection #0 to host dockerhost left intact
curl: (52) Empty reply from server
{code}

However, I'm surprised that the Kafka clients don't check the validity of the payload.
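
As a quick sanity check, the bogus allocation size can be reproduced by decoding the first four
bytes of an HTTP response the way the client decodes its size prefix. A minimal demo (the class
name is hypothetical):

{code}
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class HttpSizePrefixDemo {
    public static void main(String[] args) {
        // An HTTP listener answers with a status line starting "HTTP/1.1 ...".
        // A size-prefixed binary client reads the first 4 bytes of that reply
        // as a big-endian length header.
        ByteBuffer sizeField = ByteBuffer.wrap("HTTP".getBytes(StandardCharsets.US_ASCII));
        int receiveSize = sizeField.getInt();
        System.out.println(receiveSize); // prints 1213486160, ~1.2 GB
        // The client then attempts ByteBuffer.allocate(receiveSize) -> OOME
    }
}
{code}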

> Kafka New Producer may raise an OutOfMemoryError
> ------------------------------------------------
>
>                 Key: KAFKA-3990
>                 URL: https://issues.apache.org/jira/browse/KAFKA-3990
>             Project: Kafka
>          Issue Type: Bug
>          Components: clients
>    Affects Versions: 0.9.0.1
>         Environment: Docker, base image: CentOS
> Java 8u77
>            Reporter: Brice Dutheil
>         Attachments: app-producer-config.log, kafka-broker-logs.zip
>
>
> We are regularly seeing OOMEs on a Kafka producer; we first saw:
> {code}
> java.lang.OutOfMemoryError: Java heap space
>     at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57) ~[na:1.8.0_77]
>     at java.nio.ByteBuffer.allocate(ByteBuffer.java:335) ~[na:1.8.0_77]
>     at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:93) ~[kafka-clients-0.9.0.1.jar:na]
>     at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71) ~[kafka-clients-0.9.0.1.jar:na]
>     at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:153) ~[kafka-clients-0.9.0.1.jar:na]
>     at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:134) ~[kafka-clients-0.9.0.1.jar:na]
>     at org.apache.kafka.common.network.Selector.poll(Selector.java:286) ~[kafka-clients-0.9.0.1.jar:na]
>     at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:256) ~[kafka-clients-0.9.0.1.jar:na]
>     at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:216) ~[kafka-clients-0.9.0.1.jar:na]
>     at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:128) ~[kafka-clients-0.9.0.1.jar:na]
>     at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_77]
> {code}
> This line refers to a buffer allocation {{ByteBuffer.allocate(receiveSize)}} (see https://github.com/apache/kafka/blob/0.9.0.1/clients/src/main/java/org/apache/kafka/common/network/NetworkReceive.java#L93).
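> In other words, whatever 4 bytes arrive first on the wire are trusted as the body length. A
> simplified sketch of such a size-prefixed read path (illustration only, not the actual Kafka
> source):
> {code}
> import java.io.EOFException;
> import java.io.IOException;
> import java.nio.ByteBuffer;
> import java.nio.channels.ReadableByteChannel;
> 
> public class SizePrefixedReceive {
>     private final ByteBuffer size = ByteBuffer.allocate(4);
>     private ByteBuffer buffer;
> 
>     public void readFrom(ReadableByteChannel channel) throws IOException {
>         if (size.hasRemaining()) {
>             if (channel.read(size) < 0)
>                 throw new EOFException();
>             if (!size.hasRemaining()) {
>                 size.rewind();
>                 // Whatever the peer claimed is trusted: the 4 bytes "HTTP"
>                 // decode to 1213486160, so we try to allocate ~1.2 GB.
>                 int receiveSize = size.getInt();
>                 buffer = ByteBuffer.allocate(receiveSize);
>             }
>         }
>         if (buffer != null && channel.read(buffer) < 0)
>             throw new EOFException();
>     }
> }
> {code}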
> Usually the app runs fine with a 200–400 MB heap and a 64 MB Metaspace, and we are producing
small messages, 500 B at most.
> Also, the error doesn't appear in the development environment. In order to identify the issue
we tweaked the code to log the actual allocation size, and got this stack:
> {code}
> 09:55:49.484 [auth] [kafka-producer-network-thread | producer-1] WARN  o.a.k.c.n.NetworkReceive HEAP-ISSUE: constructor : Integer='-1', String='-1'
> 09:55:49.485 [auth] [kafka-producer-network-thread | producer-1] WARN  o.a.k.c.n.NetworkReceive HEAP-ISSUE: method : NetworkReceive.readFromReadableChannel.receiveSize=1213486160
> java.lang.OutOfMemoryError: Java heap space
> Dumping heap to /tmp/tomcat.hprof ...
> Heap dump file created [69583827 bytes in 0.365 secs]
> 09:55:50.324 [auth] [kafka-producer-network-thread | producer-1] ERROR o.a.k.c.utils.KafkaThread Uncaught exception in kafka-producer-network-thread | producer-1: 
> java.lang.OutOfMemoryError: Java heap space
>   at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57) ~[na:1.8.0_77]
>   at java.nio.ByteBuffer.allocate(ByteBuffer.java:335) ~[na:1.8.0_77]
>   at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:93) ~[kafka-clients-0.9.0.1.jar:na]
>   at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71) ~[kafka-clients-0.9.0.1.jar:na]
>   at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:153) ~[kafka-clients-0.9.0.1.jar:na]
>   at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:134) ~[kafka-clients-0.9.0.1.jar:na]
>   at org.apache.kafka.common.network.Selector.poll(Selector.java:286) ~[kafka-clients-0.9.0.1.jar:na]
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:256) ~[kafka-clients-0.9.0.1.jar:na]
>   at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:216) ~[kafka-clients-0.9.0.1.jar:na]
>   at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:128) ~[kafka-clients-0.9.0.1.jar:na]
>   at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_77]
> {code}
> Notice the size to allocate: {{1213486160}}, i.e. ~1.2 GB. I'm not yet sure how this size is
initialised.
> Notice as well that every time this OOME appears, the {{NetworkReceive}} constructor at
https://github.com/apache/kafka/blob/0.9.0.1/clients/src/main/java/org/apache/kafka/common/network/NetworkReceive.java#L49
receives the parameters {{maxSize=-1}}, {{source="-1"}}.
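> A bound on the size prefix would catch this early; a hypothetical guard (assuming {{-1}} means
unlimited, as the constructor parameters above suggest):
> {code}
> public final class ReceiveSizeGuard {
>     private ReceiveSizeGuard() {}
> 
>     // Hypothetical guard: with maxSize == -1 (unlimited) the upper bound
>     // is never enforced, so any claimed size reaches ByteBuffer.allocate().
>     public static void check(int receiveSize, int maxSize) {
>         if (receiveSize < 0)
>             throw new IllegalArgumentException("Invalid receive size: " + receiveSize);
>         if (maxSize != -1 && receiveSize > maxSize)
>             throw new IllegalArgumentException("Receive size " + receiveSize
>                     + " exceeds configured maximum " + maxSize);
>     }
> }
> {code}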
> We may have missed configuration in our setup, but the Kafka clients shouldn't raise an OOME.
For reference, the producer is initialised with:
> {code}
>         Properties props = new Properties();
>         props.put(BOOTSTRAP_SERVERS_CONFIG, properties.bootstrapServers);
>         props.put(ACKS_CONFIG, "ONE");
>         props.put(RETRIES_CONFIG, 0);
>         props.put(BATCH_SIZE_CONFIG, 16384);
>         props.put(LINGER_MS_CONFIG, 0);
>         props.put(BUFFER_MEMORY_CONFIG, 33554432);
>         props.put(REQUEST_TIMEOUT_MS_CONFIG, 1000);
>         props.put(MAX_BLOCK_MS_CONFIG, 1000);
>         props.put(KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
>         props.put(VALUE_SERIALIZER_CLASS_CONFIG, JSONSerializer.class.getName());
> {code}
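> For completeness, a minimal self-contained usage sketch of such a producer ({{localhost:9092}}
and {{demo-topic}} are placeholders, and {{StringSerializer}} stands in for our custom
{{JSONSerializer}}):
> {code}
> import java.util.Properties;
> import org.apache.kafka.clients.producer.KafkaProducer;
> import org.apache.kafka.clients.producer.Producer;
> import org.apache.kafka.clients.producer.ProducerRecord;
> import org.apache.kafka.common.serialization.StringSerializer;
> 
> public class ProducerSketch {
>     public static void main(String[] args) {
>         Properties props = new Properties();
>         props.put("bootstrap.servers", "localhost:9092"); // placeholder
>         props.put("acks", "1");
>         props.put("key.serializer", StringSerializer.class.getName());
>         props.put("value.serializer", StringSerializer.class.getName());
>         // Producer extends Closeable, so try-with-resources flushes and closes it.
>         try (Producer<String, String> producer = new KafkaProducer<>(props)) {
>             producer.send(new ProducerRecord<>("demo-topic", "key", "{\"hello\":\"world\"}"));
>         }
>     }
> }
> {code}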
> For reference, while googling for the issue we found a similar stack trace involving the same
class with the new consumer API, on the ATLAS project: https://issues.apache.org/jira/browse/ATLAS-665
> If anything is missing, please reach out.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
