kafka-dev mailing list archives

From "John Lu (JIRA)" <j...@apache.org>
Subject [jira] [Created] (KAFKA-6980) Recommended MaxDirectMemorySize for consumers
Date Fri, 01 Jun 2018 13:39:00 GMT
John Lu created KAFKA-6980:

             Summary: Recommended MaxDirectMemorySize for consumers
                 Key: KAFKA-6980
                 URL: https://issues.apache.org/jira/browse/KAFKA-6980
             Project: Kafka
          Issue Type: Wish
          Components: consumer, documentation
    Affects Versions:
         Environment: CloudFoundry
            Reporter: John Lu

We are observing that when MaxDirectMemorySize is set too low, our Kafka consumer threads
fail with the following exception:

{{java.lang.OutOfMemoryError: Direct buffer memory}}

Is there a way to estimate how much direct memory is required for optimal performance? In
the documentation, it is suggested that the amount of memory required is [Number of Partitions
* max.partition.fetch.bytes].

When we pick a value slightly above that estimate, we no longer encounter the error, but when
we double or triple it, our throughput improves drastically. Is there another setting or
parameter we should consider?
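The sizing rule quoted above can be sketched as a quick back-of-the-envelope calculation (a minimal sketch for illustration only; the 50-partition count is an assumed example, and 1 MiB is the documented default for max.partition.fetch.bytes):

```java
// Hypothetical helper illustrating the documented sizing rule:
//   direct memory >= number of partitions * max.partition.fetch.bytes
public class DirectMemoryEstimate {

    // Default value of max.partition.fetch.bytes (1 MiB).
    static final long MAX_PARTITION_FETCH_BYTES = 1_048_576L;

    // Baseline direct-memory requirement in bytes for a consumer
    // assigned the given number of partitions.
    static long estimateBytes(int numPartitions, long maxPartitionFetchBytes) {
        return (long) numPartitions * maxPartitionFetchBytes;
    }

    public static void main(String[] args) {
        int partitions = 50; // assumed example assignment
        long baseline = estimateBytes(partitions, MAX_PARTITION_FETCH_BYTES);
        long mib = 1024L * 1024L;
        System.out.println("Baseline:      -XX:MaxDirectMemorySize=" + (baseline / mib) + "m");
        // Per the observation in this report, 2-3x the baseline may be
        // needed before throughput stops improving.
        System.out.println("With headroom: -XX:MaxDirectMemorySize=" + (3 * baseline / mib) + "m");
    }
}
```

Running this prints a 50m baseline and a 150m value with 3x headroom, matching the pattern described above where a value only slightly above the baseline avoids the OutOfMemoryError but leaves throughput on the table.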

This message was sent by Atlassian JIRA
