Date: Thu, 7 Dec 2017 16:33:00 +0000 (UTC)
From: "Robin Tweedie (JIRA)"
To: jira@kafka.apache.org
Subject: [jira] [Commented] (KAFKA-6199) Single broker with fast growing heap usage

    [ https://issues.apache.org/jira/browse/KAFKA-6199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16282111#comment-16282111 ]

Robin Tweedie commented on KAFKA-6199:
--------------------------------------

[~omkreddy] Nothing obvious beyond the warnings I shared further up. I'll have another look. When you say thread dump, just the output of {{jstack PID}}?
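To make sure I capture the right thing next time the heap spikes, this is roughly what I plan to run on the affected broker. The broker PID and the {{server.log}} path below are placeholders for our setup, not anything taken from the ticket:

{noformat}
# Thread dump (with lock/synchronizer detail) from the broker JVM
jstack -l <broker-pid> > jstack_$(date +%Y%m%d_%H%M%S).txt

# Live-object histogram; presumably how the attached histo_live*.txt files were produced
jmap -histo:live <broker-pid> > histo_live_$(date +%Y%m%d).txt

# Count the "no open connection" warnings so we can compare this broker against the others
grep -c "Attempting to send response via channel for which there is no open connection" /path/to/kafka/logs/server.log
{noformat}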
> Single broker with fast growing heap usage
> ------------------------------------------
>
>                 Key: KAFKA-6199
>                 URL: https://issues.apache.org/jira/browse/KAFKA-6199
>             Project: Kafka
>          Issue Type: Bug
>    Affects Versions: 0.10.2.1
>         Environment: Amazon Linux
>            Reporter: Robin Tweedie
>         Attachments: Screen Shot 2017-11-10 at 1.55.33 PM.png, Screen Shot 2017-11-10 at 11.59.06 AM.png, dominator_tree.png, histo_live.txt, histo_live_20171206.txt, histo_live_80.txt, merge_shortest_paths.png, path2gc.png
>
>
> We have a single broker in our cluster of 25 with fast growing heap usage which necessitates us restarting it every 12 hours. If we don't restart the broker, it becomes very slow from long GC pauses and eventually has {{OutOfMemory}} errors.
> See {{Screen Shot 2017-11-10 at 11.59.06 AM.png}} for a graph of heap usage percentage on the broker. A "normal" broker in the same cluster stays below 50% (averaged) over the same time period.
> We have taken heap dumps when the broker's heap usage is getting dangerously high, and there are a lot of retained {{NetworkSend}} objects referencing byte buffers.
> We also noticed that the single affected broker logs a lot more of this kind of warning than any other broker:
> {noformat}
> WARN Attempting to send response via channel for which there is no open connection, connection id 13 (kafka.network.Processor)
> {noformat}
> See {{Screen Shot 2017-11-10 at 1.55.33 PM.png}} for counts of that WARN log message visualized across all the brokers (to show it happens a bit on other brokers, but not nearly as much as it does on the "bad" broker).
> I can't make the heap dumps public, but would appreciate advice on how to pin down the problem better. We're currently trying to narrow it down to a particular client, but without much success so far.
> Let me know what else I could investigate or share to track down the source of this leak.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)