kafka-dev mailing list archives

From Ewen Cheslack-Postava <e...@confluent.io>
Subject Re: [DISCUSS] KIP-97: Improved Kafka Client RPC Compatibility Policy
Date Sun, 11 Dec 2016 05:21:19 GMT
On Sat, Dec 10, 2016 at 5:23 PM, Sven Ludwig <s_ludwig@gmx.de> wrote:

> Hi,
> as a user I find this concept interesting, but not important enough. I
> just want to share my personal thoughts, going somewhat out on a limb,
> that came to mind while reading:
> - I think it is of utmost importance for the Kafka project to continue the
> good work on easing and really securing the upgrading of the broker version
> via rolling upgrade. It is really cool for users when they can rely on
> that, allowing for continuous tech refresh, even on brokers with really
> large and/or really many partitions. I would always prefer this to stay in
> front.
> - Secondly, like most users, I want a clear, easy to understand, yet
> powerful API contract together with producer/consumer configuration,
> including also aspects such as load-balancing and availability (consumer
> groups and alternative solutions via external partitioning and
> schedulers/supervisors respectively etc.).
> - Third, I find it important for the Kafka community to continue support
> for Scala-based APIs on the client side with the same devotion as seen in
> the support for the Java-based Kafka Streams API.
> - Fourth, I need production readiness features.
> Only after all of these would I look at the possibility of using a new
> client together with an old broker. Even though the concept seems sound on
> first read, I am somewhat afraid of the additional complexity, the
> resulting complications, and thus the additional work for people. Think of
> database drivers: it can become really complicated to support feature
> handshaking. Also, runtime exceptions that could have been avoided by a
> clear and simple policy would not be appealing. I really wonder if there
> are actually enough users who need the ability to use a newer client with
> an older broker to justify the increase in complexity.

This is something we see come up very regularly on the mailing list. It's
almost always a problem when you have one team that manages the broker
infrastructure and other teams that write apps against it -- the former
tend to be conservative with upgrades, the latter want new features even if
they are client-side only. There are a ton of folks looking for a) bug
fixes in the producer/consumer that are newer than their broker (but
unrelated to broker-side fixes), b) support for connectors written against
newer versions of the Connect API than their broker (which rarely involve
coupled changes), and c) support for running Streams apps against older
brokers (which also rarely involve coupled changes).

This KIP covers the core clients, which actually solves the vast majority
of problems for Connect & Streams as well.

In terms of effort, the delta isn't exactly the same because librdkafka was
already set up for some compatibility, but KIP-35 support there was a
relatively small patch given what it was adding, and there have been
minimal follow-ups required after that patch. Kafka also has the benefit of
substantial cross-version system testing already being in place, so getting
decent, realistic validation of compatibility will actually be pretty low
cost (as described in the KIP) and will provide ongoing validation.
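For context, the feature handshaking Sven worries about is what KIP-35 standardized: the client asks the broker which versions of each API it supports, then picks the highest version both sides implement. A minimal sketch of that negotiation logic (the API names, version ranges, and function names here are illustrative, not the actual client code or real broker data):

```python
# Hedged sketch of KIP-35-style version negotiation: the broker
# advertises a (min, max) supported version range per API, the client
# intersects each range with its own and uses the highest overlap.
# All names and numbers below are made up for illustration.

CLIENT_SUPPORTED = {  # api name -> (min, max) versions this client implements
    "Produce": (0, 3),
    "Fetch": (0, 4),
    "Metadata": (0, 2),
}

def negotiate(broker_ranges):
    """Return api -> version to use; raise if an API has no overlap."""
    chosen = {}
    for api, (cmin, cmax) in CLIENT_SUPPORTED.items():
        if api not in broker_ranges:
            continue  # broker predates this API; client must avoid it
        bmin, bmax = broker_ranges[api]
        lo, hi = max(cmin, bmin), min(cmax, bmax)
        if lo > hi:
            raise RuntimeError(f"no compatible version for {api}")
        chosen[api] = hi  # highest mutually supported version wins
    return chosen

# An older broker advertising lower max versions than the client:
old_broker = {"Produce": (0, 2), "Fetch": (0, 3), "Metadata": (0, 1)}
print(negotiate(old_broker))  # {'Produce': 2, 'Fetch': 3, 'Metadata': 1}
```

The point of the KIP is that this check happens once per connection, so one new client binary can adapt itself to whatever each broker supports, rather than failing at runtime mid-request.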

> I would agree, however, if someone said it should instead be possible to
> use two different client versions within one JVM, in order to talk to
> several brokers of different versions, which could become easier to achieve
> on the client side under Java 9 (Jigsaw).

With this KIP you don't need to support that, because a single newer client
will be able to talk to a variety of broker versions anyway.


> Kind Regards
> Sven
