kafka-jira mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (KAFKA-6134) High memory usage on controller during partition reassignment
Date Fri, 27 Oct 2017 05:11:00 GMT

    [ https://issues.apache.org/jira/browse/KAFKA-6134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16221718#comment-16221718 ]

ASF GitHub Bot commented on KAFKA-6134:

Github user hachikuji closed the pull request at:


> High memory usage on controller during partition reassignment
> -------------------------------------------------------------
>                 Key: KAFKA-6134
>                 URL: https://issues.apache.org/jira/browse/KAFKA-6134
>             Project: Kafka
>          Issue Type: Bug
>          Components: controller
>    Affects Versions:,
>            Reporter: Jason Gustafson
>            Assignee: Jason Gustafson
>            Priority: Critical
>              Labels: regression
>             Fix For: 1.0.0,
>         Attachments: Screen Shot 2017-10-26 at 3.05.40 PM.png
> We've had a couple of users reporting spikes in memory usage when the controller is performing
> partition reassignment in 0.11. After investigation, we found that the controller event queue
> was using most of the retained memory. In particular, we found several thousand {{PartitionReassignment}}
> objects, each one containing one fewer partition than the previous one (see the attached image).
> From the code, it seems clear why this is happening. We have a watch on the partition
> reassignment path which adds the {{PartitionReassignment}} object to the event queue:
> {code}
>   override def handleDataChange(dataPath: String, data: Any): Unit = {
>     val partitionReassignment = ZkUtils.parsePartitionReassignmentData(data.toString)
>     eventManager.put(controller.PartitionReassignment(partitionReassignment))
>   }
> {code}
> In the {{PartitionReassignment}} event handler, we iterate through all of the partitions
> in the reassignment. After we complete reassignment for each partition, we remove that partition
> and update the node in ZooKeeper.
> {code}
>     // remove this partition from that list
>     val updatedPartitionsBeingReassigned = partitionsBeingReassigned - topicAndPartition
>     // write the new list to zookeeper
>     zkUtils.updatePartitionReassignmentData(updatedPartitionsBeingReassigned.mapValues(_.newReplicas))
> {code}
> This triggers the handler above, which adds a new event to the queue. So what you get
> is an O(n^2) increase in retained memory, where n is the number of partitions being reassigned.
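To make the quadratic growth concrete, here is a minimal Scala sketch (not controller code; `ReassignmentQueueSketch` and `retainedEntries` are hypothetical names) that simulates the feedback loop described above: every ZooKeeper update re-fires the watch, which enqueues the remaining reassignment set, so the queue retains n + (n-1) + ... + 1 = n(n+1)/2 partition entries in total.

```scala
// Hypothetical simulation of the watch feedback loop, assuming each ZK update
// re-fires the data-change watch and the resulting event holds the full
// remaining reassignment set (as in the handler above).
object ReassignmentQueueSketch {
  def retainedEntries(n: Int): Int = {
    val queue = scala.collection.mutable.Queue.empty[Set[Int]]
    var remaining = (1 to n).toSet
    queue.enqueue(remaining)                 // initial watch fire with all n partitions
    while (remaining.nonEmpty) {
      remaining = remaining - remaining.head // complete reassignment for one partition
      if (remaining.nonEmpty)
        queue.enqueue(remaining)             // ZK update re-fires the watch
    }
    queue.iterator.map(_.size).sum           // total partition entries retained by the queue
  }

  def main(args: Array[String]): Unit = {
    // 1000 partitions leave 1000 * 1001 / 2 = 500500 entries retained
    println(retainedEntries(1000))
  }
}
```

For 1000 reassigned partitions the queue ends up retaining about half a million partition entries, which matches the several-thousand-object heap dumps described above at smaller scale.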

This message was sent by Atlassian JIRA
