kafka-dev mailing list archives

From "Dawid Kulig (JIRA)" <j...@apache.org>
Subject [jira] [Resolved] (KAFKA-6892) Kafka Streams memory usage grows over the time till OOM
Date Tue, 22 May 2018 09:49:00 GMT

     [ https://issues.apache.org/jira/browse/KAFKA-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dawid Kulig resolved KAFKA-6892.
--------------------------------
    Resolution: Cannot Reproduce

> Kafka Streams memory usage grows over the time till OOM
> -------------------------------------------------------
>
>                 Key: KAFKA-6892
>                 URL: https://issues.apache.org/jira/browse/KAFKA-6892
>             Project: Kafka
>          Issue Type: Bug
>          Components: streams
>    Affects Versions: 1.1.0
>            Reporter: Dawid Kulig
>            Priority: Minor
>         Attachments: kafka-streams-per-pod-resources-usage.png
>
>
> Hi. I am observing indefinite memory growth in my kafka-streams application; it gets killed by the OS when it reaches the memory limit (10 GB).
> It runs two unrelated pipelines (reading from 4 source topics, 100 partitions each, aggregating the data, and writing to two destination topics).
> My environment: 
>  * Kubernetes cluster
>  * 4 app instances
>  * 10GB memory limit per pod (instance)
>  * JRE 8
> JVM / Streams app (see the sketch after this list):
>  * -Xms2g
>  * -Xmx4g
>  * num.stream.threads = 4
>  * commit.interval.ms = 1000
>  * linger.ms = 1000
>  
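> For context, a minimal sketch (application id and bootstrap servers are placeholders) of how the settings above are typically passed to the app:
> {code:java}
> import java.util.Properties;
> import org.apache.kafka.clients.producer.ProducerConfig;
> import org.apache.kafka.streams.StreamsConfig;
>
> Properties props = new Properties();
> props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app"); // placeholder
> props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");  // placeholder
> props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 4);            // num.stream.threads = 4
> props.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 1000);         // commit.interval.ms = 1000
> // linger.ms is passed through to the embedded producer
> props.put(StreamsConfig.producerPrefix(ProducerConfig.LINGER_MS_CONFIG), 1000);
> {code}
>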
> After running for 24 hours the app reaches the 10 GB memory limit. Heap and GC look good; average non-heap memory usage is 120 MB. I've read this might be related to RocksDB, which Kafka Streams uses underneath, so I tried to tune it using the [confluent doc|https://docs.confluent.io/current/streams/developer-guide/config-streams.html#streams-developer-guide-rocksdb-config], unfortunately with no luck. The two configurations I tried are below.
> RocksDB config #1:
> {code:java}
> // Applied inside a custom RocksDBConfigSetter#setConfig(storeName, options, configs)
> BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
> tableConfig.setBlockCacheSize(16 * 1024 * 1024L); // 16 MB block cache
> tableConfig.setBlockSize(16 * 1024L);             // 16 KB blocks
> tableConfig.setCacheIndexAndFilterBlocks(true);   // index/filter blocks count against the cache
> options.setTableFormatConfig(tableConfig);
> options.setMaxWriteBufferNumber(2);               // at most 2 memtables per store
> {code}
> RocksDB config #2:
> {code:java}
> // Same setter, with a much smaller block cache and an explicit (tiny) write buffer
> BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
> tableConfig.setBlockCacheSize(1024 * 1024L);      // 1 MB block cache
> tableConfig.setBlockSize(16 * 1024L);
> tableConfig.setCacheIndexAndFilterBlocks(true);
> options.setTableFormatConfig(tableConfig);
> options.setMaxWriteBufferNumber(2);
> options.setWriteBufferSize(8 * 1024L);            // 8 KB memtable
> {code}
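> For completeness, a minimal sketch of how these snippets are wired in, following the Confluent doc linked above (the class name is a placeholder):
> {code:java}
> import java.util.Map;
> import org.apache.kafka.streams.state.RocksDBConfigSetter;
> import org.rocksdb.BlockBasedTableConfig;
> import org.rocksdb.Options;
>
> public class CustomRocksDBConfig implements RocksDBConfigSetter {
>     @Override
>     public void setConfig(final String storeName, final Options options,
>                           final Map<String, Object> configs) {
>         // config #1 or #2 from above goes here
>         final BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
>         tableConfig.setBlockCacheSize(16 * 1024 * 1024L);
>         tableConfig.setBlockSize(16 * 1024L);
>         tableConfig.setCacheIndexAndFilterBlocks(true);
>         options.setTableFormatConfig(tableConfig);
>         options.setMaxWriteBufferNumber(2);
>     }
> }
> {code}
> and registered with:
> {code:java}
> props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, CustomRocksDBConfig.class);
> {code}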
>  
> This behavior has only been observed with our production traffic, where the per-topic input message rate is 10 msg/sec and pretty much constant (no peaks). I am attaching the cluster resource usage from the last 24 hours.
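> As a rough, purely illustrative back-of-envelope (store counts and buffer sizes below are assumptions; the topology isn't shown here), RocksDB's off-heap footprint alone can add up well beyond the 4 GB heap:
> {code:java}
> // All numbers are illustrative assumptions, not measurements.
> long blockCache   = 16 * 1024 * 1024L;       // per store, as in config #1
> long memtables    = 2 * 16 * 1024 * 1024L;   // 2 write buffers x assumed 16 MB each
> long perStore     = blockCache + memtables;  // ~48 MB, ignoring index/filter overhead
> long storesPerPod = 2 * 100 / 4;             // assume 2 aggregations x 100 partitions over 4 pods
> long totalBytes   = perStore * storesPerPod; // ~2.4 GB of native memory per pod
> System.out.printf("approx. RocksDB off-heap per pod: %d MB%n", totalBytes >> 20);
> {code}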
> Any help or advice would be much appreciated. 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
