kafka-dev mailing list archives

From "Guozhang Wang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (KAFKA-2017) Persist Coordinator State for Coordinator Failover
Date Wed, 14 Oct 2015 20:16:05 GMT

    [ https://issues.apache.org/jira/browse/KAFKA-2017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14957665#comment-14957665 ]

Guozhang Wang commented on KAFKA-2017:
--------------------------------------

Here are my two cents comparing the two approaches (a rough sketch of each option follows the comparison below):

1. ZK-based approach:

Pros: simple implementation, and easy tooling for querying consumer group state.
Cons: ZK write load (and perhaps also ZK reads, though those could be optimized further in the
loading process).

2. Kafka-based approach:

Pros: reuses the offsets topic, no ZK burden.
Cons: less simple implementation, and admin tools may need to send the consumer-group-metadata
request only to the coordinator.
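
To make the comparison concrete, here is a rough sketch of what each option could look like. The znode path, topic name, and JSON payload are assumptions made up for illustration, not the actual implementation; in practice the coordinator would append to the topic internally on the broker side rather than through a producer.

    // Hypothetical sketch only: the path, topic name, and payload format are assumptions.
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;
    import java.nio.charset.StandardCharsets;

    public class GroupStatePersistenceSketch {

        // Option 1: ZK-based -- every membership/generation change becomes a ZK write.
        static void persistToZk(ZooKeeper zk, String group, String stateJson) throws Exception {
            String path = "/consumers/" + group + "/coordinator_state";   // hypothetical path
            byte[] data = stateJson.getBytes(StandardCharsets.UTF_8);
            if (zk.exists(path, false) == null)
                zk.create(path, data, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
            else
                zk.setData(path, data, -1);
        }

        // Option 2: Kafka-based -- append the latest state to a compacted internal topic,
        // keyed by group so that log compaction retains only the most recent record.
        static void persistToKafka(KafkaProducer<String, String> producer,
                                   String group, String stateJson) {
            producer.send(new ProducerRecord<>("__consumer_offsets", group, stateJson)); // or a separate topic
        }
    }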

Now about the tradeoffs: I personally think the conceptual cleanness of "putting broker metadata
in ZK and consumer metadata in Kafka" should not carry much weight in the design, since what
really matters is just the read / write workload. For example, we decided to move consumer
offsets from ZK to Kafka not primarily because we wanted to separate them from the broker registry,
but because their write frequency is too high for ZK (of course, later there were other
motivations such as security / multi-tenancy that made us want a ZK-free consumer), while
broker registry changes are infrequent enough to live in ZK. Consumer group changes fall
somewhere between these two workloads, but I assume they would still be closer to broker
registry changes on that spectrum than to consumer offset changes.

In addition, I feel it is generally not the best solution to persist all data in a log
format: it gives you better write performance at the cost of worse read performance. In
our case, it could unnecessarily increase the loading time upon coordinator migration, whether
we piggy-back the data on the offsets topic or put it in another topic (BTW, I agree with [~jjkoshy]
that piggy-backing on the offsets topic is a bit tricky). If we agree that the consumer membership
change workload is write-light rather than write-heavy, then this trade-off may not be worthwhile.
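
As an illustration of that read cost, here is a rough sketch of what a newly elected coordinator would have to do: scan the log from the beginning to rebuild the latest state per group before it can serve requests. The topic name, record format, and the "caught up" heuristic below are assumptions for this example, and it assumes auto.offset.reset=earliest so the scan starts at the head of the log.

    // Hypothetical sketch: a new coordinator replays the whole (compacted) log to
    // rebuild in-memory group state; the loading time grows with the log size.
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;

    public class GroupStateLoaderSketch {
        static Map<String, String> loadGroupState(KafkaConsumer<String, String> consumer) {
            Map<String, String> latestStatePerGroup = new HashMap<>();
            consumer.subscribe(Collections.singletonList("__consumer_group_state")); // hypothetical topic
            int emptyPolls = 0;
            while (emptyPolls < 3) {                            // crude "caught up" check for the sketch
                ConsumerRecords<String, String> records = consumer.poll(100);
                if (records.count() == 0) { emptyPolls++; continue; }
                emptyPolls = 0;
                for (ConsumerRecord<String, String> record : records)
                    latestStatePerGroup.put(record.key(), record.value()); // last write per group wins
            }
            return latestStatePerGroup;    // only now can the coordinator answer requests
        }
    }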

I also second [~hachikuji]'s point about the ops benefits of storing membership in ZK: it allows
all brokers to handle consumer group metadata requests, and in addition lets the ops team get
around admin requests (KIP-4) by querying ZK directly.
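
For example, if membership lived in ZK, an ops tool could answer "who is in group X" with a direct read, without routing anything to the group's coordinator. The znode layout below is purely an assumption for the sketch.

    // Hypothetical sketch: assumes one child znode per member under the group path.
    import org.apache.zookeeper.ZooKeeper;
    import java.nio.charset.StandardCharsets;
    import java.util.List;

    public class GroupMembershipLookupSketch {
        public static void main(String[] args) throws Exception {
            ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event -> { });
            String group = args[0];
            String membersPath = "/consumers/" + group + "/members";      // hypothetical layout
            List<String> members = zk.getChildren(membersPath, false);
            for (String memberId : members) {
                byte[] data = zk.getData(membersPath + "/" + memberId, false, null);
                System.out.println(memberId + " -> " + new String(data, StandardCharsets.UTF_8));
            }
            zk.close();
        }
    }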

> Persist Coordinator State for Coordinator Failover
> --------------------------------------------------
>
>                 Key: KAFKA-2017
>                 URL: https://issues.apache.org/jira/browse/KAFKA-2017
>             Project: Kafka
>          Issue Type: Sub-task
>          Components: consumer
>    Affects Versions: 0.9.0.0
>            Reporter: Onur Karaman
>            Assignee: Guozhang Wang
>             Fix For: 0.9.0.0
>
>         Attachments: KAFKA-2017.patch, KAFKA-2017_2015-05-20_09:13:39.patch, KAFKA-2017_2015-05-21_19:02:47.patch
>
>
> When a coordinator fails, the group membership protocol tries to fail over to a new coordinator
> without forcing all the consumers to rejoin their groups. This is possible if the coordinator
> persists its state so that the state can be transferred during coordinator failover. This
> state consists of most of the information in GroupRegistry and ConsumerRegistry.



