kafka-jira mailing list archives

From "Robin Tweedie (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (KAFKA-6199) Single broker with fast growing heap usage
Date Mon, 20 Nov 2017 13:00:00 GMT

    [ https://issues.apache.org/jira/browse/KAFKA-6199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16259208#comment-16259208 ]

Robin Tweedie commented on KAFKA-6199:
--------------------------------------

I think I found the corresponding "old client" causing this problem this morning, which might
help with reproducing the issue. It is logging a similar error at roughly the same rate as the
WARN messages we see on the Kafka broker:

{noformat}
[2017-11-20 10:20:17,495] ERROR kafka:102 Unable to receive data from Kafka
Traceback (most recent call last):
  File "/opt/kafka_offset_manager/venv/lib/python2.7/site-packages/kafka/conn.py", line 99,
in _read_bytes
    raise socket.error("Not enough data to read message -- did server kill socket?")
error: Not enough data to read message -- did server kill socket?
{noformat}
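
To line the two rates up, something along these lines can bucket both logs by minute. This is only a sketch: the log file paths and the broker log's timestamp prefix are assumptions on my part, not taken from this issue.

{code}
# Rough sketch: count matching log lines per minute in the client log and the
# broker log, so the two error rates can be compared side by side.
# File paths and the broker log format are assumptions, not from this issue.
import re
from collections import Counter

def rate_per_minute(path, needle):
    counts = Counter()
    with open(path) as fh:
        for line in fh:
            if needle in line:
                # Timestamps look like "[2017-11-20 10:20:17,495]"; bucket by minute.
                m = re.match(r"\[(\d{4}-\d{2}-\d{2} \d{2}:\d{2})", line)
                if m:
                    counts[m.group(1)] += 1
    return counts

client = rate_per_minute("offset_manager.log", "Unable to receive data from Kafka")
broker = rate_per_minute("server.log", "no open connection")
for minute in sorted(set(client) | set(broker)):
    print("%s  client=%d  broker=%d"
          % (minute, client.get(minute, 0), broker.get(minute, 0)))
{code}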

We have a Python 2.7 process that checks topic and consumer offsets in order to report metrics;
a rough sketch of what it does is included below. It was running {{kafka-python==0.9.3}} (the
current release is 1.3.5). We are going to run some experiments to confirm that this client is
the culprit of the heap growth.
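
For context, here is a minimal sketch of the kind of offset check that process performs. It is written against the newer kafka-python 1.x API rather than the 0.9.3 client we actually run, and the broker address, topic, and group name are placeholders rather than values from our environment.

{code}
# Hedged sketch of the offset/metrics checker, using kafka-python 1.x.
# Broker address, topic, and group_id below are examples only.
from kafka import KafkaConsumer, TopicPartition

consumer = KafkaConsumer(
    bootstrap_servers="broker1:9092",
    group_id="offset-manager-metrics",   # hypothetical consumer group
    enable_auto_commit=False,
)

topic = "example-topic"
partitions = [TopicPartition(topic, p) for p in consumer.partitions_for_topic(topic)]

# Log-end offset for each partition of the topic.
end_offsets = consumer.end_offsets(partitions)

for tp in partitions:
    committed = consumer.committed(tp)  # last committed offset for the group, or None
    lag = end_offsets[tp] - committed if committed is not None else None
    print("%s[%d] end=%d committed=%s lag=%s"
          % (tp.topic, tp.partition, end_offsets[tp], committed, lag))

consumer.close()
{code}

The 0.9.3 client uses an older, lower-level API, so this only approximates the behaviour; the point is that the checker connects to the brokers, fetches end and committed offsets for each partition, and presumably repeats that on a schedule, given the steady error rate.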

> Single broker with fast growing heap usage
> ------------------------------------------
>
>                 Key: KAFKA-6199
>                 URL: https://issues.apache.org/jira/browse/KAFKA-6199
>             Project: Kafka
>          Issue Type: Bug
>    Affects Versions: 0.10.2.1
>         Environment: Amazon Linux
>            Reporter: Robin Tweedie
>         Attachments: Screen Shot 2017-11-10 at 1.55.33 PM.png, Screen Shot 2017-11-10 at 11.59.06 AM.png, dominator_tree.png, merge_shortest_paths.png, path2gc.png
>
>
> We have a single broker in our cluster of 25 with fast-growing heap usage, which forces us to restart it every 12 hours. If we don't restart the broker, it becomes very slow from long GC pauses and eventually hits {{OutOfMemory}} errors.
> See {{Screen Shot 2017-11-10 at 11.59.06 AM.png}} for a graph of heap usage percentage on the broker. A "normal" broker in the same cluster stays below 50% (averaged) over the same time period.
> We have taken heap dumps when the broker's heap usage is getting dangerously high, and there are a lot of retained {{NetworkSend}} objects referencing byte buffers.
> We also noticed that the single affected broker logs a lot more of this kind of warning than any other broker:
> {noformat}
> WARN Attempting to send response via channel for which there is no open connection, connection id 13 (kafka.network.Processor)
> {noformat}
> See {{Screen Shot 2017-11-10 at 1.55.33 PM.png}} for counts of that WARN log message visualized across all the brokers (it happens occasionally on other brokers, but not nearly as much as on the "bad" broker).
> I can't make the heap dumps public, but would appreciate advice on how to pin down the problem better. We're currently trying to narrow it down to a particular client, but without much success so far.
> Let me know what else I could investigate or share to track down the source of this leak.



