flink-issues mailing list archives

From "Aljoscha Krettek (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (FLINK-7143) Partition assignment for Kafka consumer is not stable
Date Tue, 11 Jul 2017 14:46:00 GMT

    [ https://issues.apache.org/jira/browse/FLINK-7143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16082306#comment-16082306 ]

Aljoscha Krettek commented on FLINK-7143:
-----------------------------------------

IMHO, if we used {{partitionId % parallelism}} in the multi-topic case we could get bad
utilisation. For example, assume we have 10 parallel source instances and we read from two topics,
each with 5 partitions. Now, if we used {{partitionId % parallelism}}, each of the first
5 source instances would read two partitions (one from each topic) while the last 5 source
instances would not read any partition. Does that make sense?
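To make that concrete, here is a minimal, self-contained sketch (plain Java, not Flink connector code; the topic names and counts are made up to match the example above) that simulates {{partitionId % parallelism}} for two topics with 5 partitions each and 10 subtasks. Subtasks 5 through 9 end up with no partitions.

{code}
import java.util.*;

public class PartitionModuloSketch {
    public static void main(String[] args) {
        int parallelism = 10;
        String[] topics = {"topic-a", "topic-b"};   // hypothetical topic names
        int partitionsPerTopic = 5;

        // subtask index -> partitions it would own under partitionId % parallelism
        Map<Integer, List<String>> assignment = new TreeMap<>();
        for (int subtask = 0; subtask < parallelism; subtask++) {
            assignment.put(subtask, new ArrayList<>());
        }
        for (String topic : topics) {
            for (int partitionId = 0; partitionId < partitionsPerTopic; partitionId++) {
                int owner = partitionId % parallelism;   // partition-id based assignment
                assignment.get(owner).add(topic + "-" + partitionId);
            }
        }
        // Prints: subtasks 0..4 each own two partitions, subtasks 5..9 own none.
        assignment.forEach((subtask, parts) ->
                System.out.println("subtask " + subtask + " -> " + parts));
    }
}
{code}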

> Partition assignment for Kafka consumer is not stable
> -----------------------------------------------------
>
>                 Key: FLINK-7143
>                 URL: https://issues.apache.org/jira/browse/FLINK-7143
>             Project: Flink
>          Issue Type: Bug
>          Components: Kafka Connector
>    Affects Versions: 1.3.1
>            Reporter: Steven Zhen Wu
>            Assignee: Tzu-Li (Gordon) Tai
>            Priority: Blocker
>             Fix For: 1.3.2
>
>
> While deploying the Flink 1.3 release to hundreds of routing jobs, we found some issues with
partition assignment for the Kafka consumer: some partitions weren't assigned and some partitions
got assigned more than once.
> Here is the bug introduced in Flink 1.3. 
> {code}
> protected static void initializeSubscribedPartitionsToStartOffsets(...) {
>     ...
>     for (int i = 0; i < kafkaTopicPartitions.size(); i++) {
>         if (i % numParallelSubtasks == indexOfThisSubtask) {
>             if (startupMode != StartupMode.SPECIFIC_OFFSETS) {
>                 subscribedPartitionsToStartOffsets.put(kafkaTopicPartitions.get(i), startupMode.getStateSentinel());
>             }
>     ...
> }
> {code}
> The bug is using the array index {{i}} to mod against {{numParallelSubtasks}}. If {{kafkaTopicPartitions}}
has a different order among different subtasks, the assignment is not stable across subtasks and
creates the assignment issue mentioned earlier.
> The fix is also very simple: we should use the partition id to do the mod, i.e. {{if (kafkaTopicPartitions.get\(i\).getPartition()
% numParallelSubtasks == indexOfThisSubtask)}}. That would result in stable assignment across
subtasks that is independent of the ordering in the array.
> Marking it as a blocker because of its impact.
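To illustrate the instability described in the report, here is a minimal sketch (plain Java, not the actual Flink connector code; the partition ids and two-subtask setup are made up) contrasting index-based assignment with the partition-id-based assignment proposed above. When two subtasks enumerate the same partitions in different orders, the index-based rule produces duplicates and gaps, while the partition-id rule does not.

{code}
import java.util.*;

public class AssignmentStabilitySketch {

    // Buggy rule: the position in the (possibly differently ordered) list decides ownership.
    static List<Integer> assignByIndex(List<Integer> partitions, int parallelism, int subtask) {
        List<Integer> owned = new ArrayList<>();
        for (int i = 0; i < partitions.size(); i++) {
            if (i % parallelism == subtask) {
                owned.add(partitions.get(i));
            }
        }
        return owned;
    }

    // Fixed rule: the partition id itself decides ownership, regardless of list order.
    static List<Integer> assignByPartitionId(List<Integer> partitions, int parallelism, int subtask) {
        List<Integer> owned = new ArrayList<>();
        for (int partitionId : partitions) {
            if (partitionId % parallelism == subtask) {
                owned.add(partitionId);
            }
        }
        return owned;
    }

    public static void main(String[] args) {
        int parallelism = 2;
        List<Integer> orderSeenBySubtask0 = Arrays.asList(0, 1, 2, 3);
        List<Integer> orderSeenBySubtask1 = Arrays.asList(3, 2, 1, 0); // same partitions, different order

        // Index-based: partitions 0 and 2 are read twice, partitions 1 and 3 not at all.
        System.out.println(assignByIndex(orderSeenBySubtask0, parallelism, 0)); // [0, 2]
        System.out.println(assignByIndex(orderSeenBySubtask1, parallelism, 1)); // [2, 0]

        // Partition-id based: disjoint and complete, independent of ordering.
        System.out.println(assignByPartitionId(orderSeenBySubtask0, parallelism, 0)); // [0, 2]
        System.out.println(assignByPartitionId(orderSeenBySubtask1, parallelism, 1)); // [3, 1]
    }
}
{code}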



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
