Date: Thu, 18 May 2017 15:38:00 +0800
From: "Tzu-Li (Gordon) Tai"
To: user@flink.apache.org
Mailing-List: contact user-help@flink.apache.org; run by ezmlm
Subject: Re: FlinkKafkaConsumer using Kafka-GroupID?
Hi Valentin!

Your understanding is correct: the Kafka connectors do not use Kafka's consumer group functionality to distribute messages across multiple instances of a FlinkKafkaConsumer source. Instead, the connector itself determines which instances should be assigned which Kafka partitions, based on a simple round-robin distribution.

> Is there any chance to run 2 different Flink (standalone) apps consuming messages from a single Kafka topic only once? This is what I could do by using 2 native Kafka consumers within the same consumer group.

No, I don't think this is possible with the FlinkKafkaConsumers. However, this is exactly what Flink's checkpointing and savepoints are designed for.
If your single app fails, the consumer can simply restart from the offsets stored in that checkpoint / savepoint.
In other words, with Flink's streaming fault tolerance mechanics, you get exactly-once guarantees across 2 different runs of the app.

The FlinkKafkaConsumer docs explain this thoroughly [1].

Does this address your concerns?

Cheers,
Gordon

[1] https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/connectors/kafka.html#kafka-consumers-and-fault-tolerance

On 18 May 2017 at 1:35:35 AM, Valentin (valentin@aseno.de) wrote:

Hi there,

As far as I understood, Flink Kafka Connectors don't use the consumer group management feature from Kafka.
Here is the post I got the info from:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-kafka-group-question-td8185.html#none

For several reasons we cannot set up a Flink cluster environment, but we still need to assure high availability, e.g. in case one node goes down, the second should keep on running.

My question:
- Is there any chance to run 2 different Flink (standalone) apps consuming messages from a single Kafka topic only once? This is what I could do by using 2 native Kafka consumers within the same consumer group.

Many thanks in advance
Valentin
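
---

To illustrate the round-robin partition distribution Gordon describes, here is a minimal sketch of the idea (the function name and signature are made up for illustration; this is not Flink's actual assignment code): each of the n parallel consumer subtasks takes every partition p where p modulo n equals its own index, so each job covers every partition exactly once.

```python
# Minimal sketch of round-robin partition assignment (hypothetical helper,
# not Flink's real API): subtask i of n takes every partition p with p % n == i.
def assign_partitions(num_partitions: int, num_subtasks: int, subtask_index: int) -> list:
    return [p for p in range(num_partitions)
            if p % num_subtasks == subtask_index]

# With 5 partitions and 2 parallel consumer subtasks, every partition is
# read by exactly one subtask, so each message is consumed once per job:
print(assign_partitions(5, 2, 0))  # -> [0, 2, 4]
print(assign_partitions(5, 2, 1))  # -> [1, 3]
```

This also shows why two separate Flink jobs each consume all partitions: the assignment is computed independently inside each job rather than coordinated through Kafka's group coordinator, which is exactly the behavior Valentin observed.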