From: "Damian Guy (JIRA)"
To: jira@kafka.apache.org
Date: Thu, 1 Feb 2018 11:58:08 +0000 (UTC)
Subject: [jira] [Updated] (KAFKA-5857) Excessive heap usage on controller node during reassignment

     [ https://issues.apache.org/jira/browse/KAFKA-5857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Damian Guy updated KAFKA-5857:
------------------------------
    Fix Version/s:     (was: 1.1.0)
                       1.2.0

> Excessive heap usage on controller node during reassignment
> -----------------------------------------------------------
>
>                 Key: KAFKA-5857
>                 URL: https://issues.apache.org/jira/browse/KAFKA-5857
>             Project: Kafka
>          Issue Type: Bug
>          Components: controller
>    Affects Versions: 0.11.0.0
>         Environment: CentOS 7, Java 1.8
>            Reporter: Raoufeh Hashemian
>            Priority: Major
>              Labels: reliability
>             Fix For: 1.2.0
>
>         Attachments: CPU.png, disk_write_x.png, memory.png, reassignment_plan.txt
>
>
> I was trying to expand our Kafka cluster from 6 broker nodes to 12 broker nodes.
> Before the expansion, we had a single topic with 960 partitions and a replication factor of 3, so each node held 480 partitions. The size of the data on each node was 3 TB.
> To do the expansion, I submitted a partition reassignment plan (see the attached file for the current/new assignments). The plan was optimized to minimize data movement and to be rack aware.
> After I submitted the plan, it took approximately 3 hours to move the data from the old nodes to the new nodes. The brokers then started deleting the source partitions (I say this based on the number of file descriptors) and rebalancing leaders, which did not succeed. Meanwhile, heap usage on the controller node started to climb steeply (along with long GC times); after 5 hours the controller ran out of memory, and another controller then showed the same behaviour for another 4 hours. At that point ZooKeeper ran out of disk and the service stopped.
> To recover from this condition:
> 1) Removed the ZooKeeper logs to free up disk and restarted all 3 ZooKeeper nodes.
> 2) Deleted the /kafka/admin/reassign_partitions node from ZooKeeper.
> 3) Did unclean restarts of the Kafka service on the OOM controller nodes, which took 3 hours to complete. After this stage there were still 676 under-replicated partitions.
> 4) Did a clean restart on all 12 broker nodes.
> After step 4, the number of under-replicated partitions went to 0.
> So I was wondering: is this memory footprint on the controller expected for ~1k partitions? Did we do something wrong, or is it a bug?
> Attached are some resource usage graphs from this 30-hour event and the reassignment plan. I'll try to add log files as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
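
For context on the reassignment step described in the report above: the plan in reassignment_plan.txt is not reproduced here, but the sketch below shows the general shape of a reassignment plan and how it is typically submitted with the stock tooling that ships with Kafka 0.11.x/1.0.x. The topic name, partition numbers, broker IDs, ZooKeeper address, and chroot are illustrative assumptions, not values taken from the attachment.

    # reassignment_plan.json -- illustrative shape only; topic, partitions and
    # broker IDs are made up for this sketch
    {
      "version": 1,
      "partitions": [
        {"topic": "my-topic", "partition": 0, "replicas": [1, 7, 12]},
        {"topic": "my-topic", "partition": 1, "replicas": [2, 8, 11]}
      ]
    }

    # Submit the plan; the 0.11.x/1.0.x tool talks to ZooKeeper directly
    # (zk1:2181/kafka is an assumed connect string and chroot)
    bin/kafka-reassign-partitions.sh --zookeeper zk1:2181/kafka \
        --reassignment-json-file reassignment_plan.json --execute

    # Check progress of the same plan
    bin/kafka-reassign-partitions.sh --zookeeper zk1:2181/kafka \
        --reassignment-json-file reassignment_plan.json --verify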
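
Recovery step 2 in the report (removing the in-flight reassignment marker) is normally done with the ZooKeeper CLI. A minimal sketch follows, assuming the cluster uses a /kafka chroot, as the znode path mentioned in the report suggests; the server address is a placeholder.

    # Connect to one of the ZooKeeper nodes (address is an assumption)
    bin/zkCli.sh -server zk1:2181

    # Inside the zkCli shell: recursively delete the pending reassignment znode.
    # 'rmr' is the ZooKeeper 3.4 command; newer releases use 'deleteall'.
    rmr /kafka/admin/reassign_partitions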