From: Gary Yao <gary@data-artisans.com>
To: "r. r." <robert@abv.bg>
Cc: user <user@flink.apache.org>
Date: Sat, 18 Nov 2017 10:28:39 +0100
Subject: Re: all task managers reading from all kafka partitions

Hi Robert,

Running a single job does not mean that you are limited to a single JVM.

For example, a job with parallelism 4 by default requires 4 task slots to
run. You can provision 4 single-slot TaskManagers on different hosts that
connect to the same JobManager. The JobManager can then take your job and
distribute its execution across the 4 slots. To learn more about the
distributed runtime environment, see:
https://ci.apache.org/projects/flink/flink-docs-release-1.4/concepts/runtime.html
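
For illustration, a single-job setup could look roughly like this. This is
an untested sketch: the topic, broker, and group id are taken from your
logs further down in this thread; the string schema and the rest are
assumptions you would replace with your own code.

import java.util.Properties;

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class TopicConsumerJob {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // One job with 4 parallel source subtasks; Flink splits the 10
        // partitions of the topic among these subtasks.
        env.setParallelism(4);

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "kafka:9092");
        props.setProperty("group.id", "TopicConsumers");

        env.addSource(new FlinkKafkaConsumer09<String>(
                        "TopicToConsume", new SimpleStringSchema(), props))
           .print(); // replace with your actual processing

        env.execute("TopicConsumers");
    }
}

If you submit this once with ./flink run -p4 x.jar (instead of four times
with -p1), the 4 source subtasks can be scheduled onto your 4 single-slot
TaskManagers, and each subtask reads only its share of the partitions.
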
Regarding your concerns about job failures, a failure in the JobManager or
one of the TaskManagers can bring your job down, but Flink has built-in
fault tolerance on different levels. You may want to read up on the
following topics (an example sketch follows this list):

- Data Streaming Fault Tolerance:
https://ci.apache.org/projects/flink/flink-docs-release-1.3/internals/stream_checkpointing.html
- Restart Strategies:
https://ci.apache.org/projects/flink/flink-docs-release-1.3/dev/restart_strategies.html
- JobManager High Availability:
https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/jobmanager_high_availability.html
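
For example, checkpointing and a restart strategy are both configured on
the execution environment. A minimal sketch with example values (tune the
intervals and attempt counts for your job; the placeholder pipeline is only
there so the sketch runs):

import java.util.concurrent.TimeUnit;

import org.apache.flink.api.common.restartstrategy.RestartStrategies;
import org.apache.flink.api.common.time.Time;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class FaultTolerantJob {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Draw a checkpoint every 10 seconds so a restarted job can
        // resume from the last completed checkpoint.
        env.enableCheckpointing(10000);

        // Retry the job up to 3 times, waiting 10 seconds between
        // attempts, instead of failing terminally on the first error.
        env.setRestartStrategy(RestartStrategies.fixedDelayRestart(
                3, Time.of(10, TimeUnit.SECONDS)));

        // Placeholder pipeline; replace with your Kafka source and
        // processing.
        env.fromElements(1, 2, 3).print();

        env.execute("FaultTolerantJob");
    }
}

With this in place, a lost TaskManager does not terminate the job
permanently; provided replacement slots are available, Flink restarts the
job and resumes from the last completed checkpoint.
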
Let me know if you have further questions.

Best,
Gary

On Fri, Nov 17, 2017 at 11:11 PM, r. r. <robert@abv.bg> wrote:
> Hmm, but I want single-slot task managers and multiple jobs so that if one
> job fails it doesn't bring the whole setup (for example 30+ parallel
> consumers) down.
> What setup would you advise? The job is quite heavy and might bring the VM
> down if run with such concurrency in one JVM.
>
> Thanks!
>
> > -------- Original message --------
> > From: Gary Yao <gary@data-artisans.com>
> > Subject: Re: all task managers reading from all kafka partitions
> > To: "r. r." <robert@abv.bg>
> > Sent on: 17.11.2017 22:58
> >
> > Forgot to hit "reply all" in my last email.
> >
> > On Fri, Nov 17, 2017 at 8:26 PM, Gary Yao <gary@data-artisans.com> wrote:
> > > Hi Robert,
> > >
> > > To get your desired behavior, you should start a single job with
> > > parallelism set to 4.
> > >
> > > Flink does not rely on Kafka's consumer groups to distribute the
> > > partitions to the parallel subtasks.
> > > Instead, Flink does the assignment of partitions itself and also
> > > tracks and checkpoints the offsets internally.
> > > This is needed to achieve exactly-once semantics.
> > >
> > > The group.id that you are setting is used for different purposes,
> > > e.g., to track the consumer lag of a job.
> > >
> > > Best,
> > > Gary
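
To make the partition assignment above concrete: roughly speaking, the 10
partitions are spread evenly over the parallel source subtasks. A toy
illustration (illustrative only, not Flink's actual assignment code):

import java.util.ArrayList;
import java.util.List;

public class AssignmentSketch {

    public static void main(String[] args) {
        int numPartitions = 10; // partitions of TopicToConsume
        int numSubtasks = 4;    // parallelism of the source

        // Spread partitions evenly over subtasks, round-robin style.
        for (int subtask = 0; subtask < numSubtasks; subtask++) {
            List<Integer> assigned = new ArrayList<>();
            for (int partition = 0; partition < numPartitions; partition++) {
                if (partition % numSubtasks == subtask) {
                    assigned.add(partition);
                }
            }
            System.out.println("subtask " + subtask
                    + " -> partitions " + assigned);
        }
    }
}

With a single job at parallelism 4, each source subtask then reads only 2
or 3 of the 10 partitions, while each of your four independent -p1 jobs
considers itself the only consumer and reads all 10.
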
> > > On Fri, Nov 17, 2017 at 7:54 PM, r. r. <robert@abv.bg> wrote:
> > > > Hi,
> > > > it's Flink 1.3.2, Kafka 0.10.2.0. I am starting 1 JM and 4 TMs (with
> > > > 1 task slot each). Then I deploy 4 times (via ./flink run -p1 x.jar);
> > > > job parallelism is set to 1.
> > > > A new thing I just noticed: if I start, in parallel to the Flink
> > > > jobs, two kafka-console-consumer instances (with --consumer-property
> > > > group.id=TopicConsumers) and write a msg to Kafka, then one of the
> > > > console consumers receives the msg together with both Flink jobs.
> > > > I thought maybe the Flink consumers didn't receive the group property
> > > > passed via "flink run .. --group.id TopicConsumers", but no - they do
> > > > belong to the group as well:
> > > >
> > > > taskmanager_3 | 2017-11-17 18:29:00,750 INFO
> > > > org.apache.kafka.clients.consumer.ConsumerConfig - ConsumerConfig values:
> > > > taskmanager_3 |     auto.commit.interval.ms = 5000
> > > > taskmanager_3 |     auto.offset.reset = latest
> > > > taskmanager_3 |     bootstrap.servers = [kafka:9092]
> > > > taskmanager_3 |     check.crcs = true
> > > > taskmanager_3 |     client.id =
> > > > taskmanager_3 |     connections.max.idle.ms = 540000
> > > > taskmanager_3 |     enable.auto.commit = true
> > > > taskmanager_3 |     exclude.internal.topics = true
> > > > taskmanager_3 |     fetch.max.bytes = 52428800
> > > > taskmanager_3 |     fetch.max.wait.ms = 500
> > > > taskmanager_3 |     fetch.min.bytes = 1
> > > > taskmanager_3 |     group.id = TopicConsumers
> > > > taskmanager_3 |     heartbeat.interval.ms = 3000
> > > > taskmanager_3 |     interceptor.classes = null
> > > > taskmanager_3 |     key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
> > > > taskmanager_3 |     max.partition.fetch.bytes = 1048576
> > > > taskmanager_3 |     max.poll.interval.ms = 300000
> > > > taskmanager_3 |     max.poll.records = 500
> > > > taskmanager_3 |     metadata.max.age.ms = 300000
> > > > taskmanager_3 |     metric.reporters = []
> > > > taskmanager_3 |     metrics.num.samples = 2
> > > > taskmanager_3 |     metrics.recording.level = INFO
> > > > taskmanager_3 |     metrics.sample.window.ms = 30000
> > > > taskmanager_3 |     partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
> > > > taskmanager_3 |     receive.buffer.bytes = 65536
> > > > taskmanager_3 |     reconnect.backoff.ms = 50
> > > > taskmanager_3 |     request.timeout.ms = 305000
> > > > taskmanager_3 |     retry.backoff.ms = 100
> > > > taskmanager_3 |     sasl.jaas.config = null
> > > > taskmanager_3 |     sasl.kerberos.kinit.cmd = /usr/bin/kinit
> > > > taskmanager_3 |     sasl.kerberos.min.time.before.relogin = 60000
> > > > taskmanager_3 |     sasl.kerberos.service.name = null
> > > > taskmanager_3 |     sasl.kerberos.ticket.renew.jitter = 0.05
> > > > taskmanager_3 |     sasl.kerberos.ticket.renew.window.factor = 0.8
> > > > taskmanager_3 |     sasl.mechanism = GSSAPI
> > > > taskmanager_3 |     security.protocol = PLAINTEXT
> > > > taskmanager_3 |     send.buffer.bytes = 131072
> > > > taskmanager_3 |     session.timeout.ms = 10000
> > > > taskmanager_3 |     ssl.cipher.suites = null
> > > > taskmanager_3 |     ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
> > > > taskmanager_3 |     ssl.endpoint.identification.algorithm = null
> > > > taskmanager_3 |     ssl.key.password = null
> > > > taskmanager_3 |     ssl.keymanager.algorithm = SunX509
> > > > taskmanager_3 |     ssl.keystore.location = null
> > > > taskmanager_3 |     ssl.keystore.password = null
> > > > taskmanager_3 |     ssl.keystore.type = JKS
> > > > taskmanager_3 |     ssl.protocol = TLS
> > > > taskmanager_3 |     ssl.provider = null
> > > > taskmanager_3 |     ssl.secure.random.implementation = null
> > > > taskmanager_3 |     ssl.trustmanager.algorithm = PKIX
> > > > taskmanager_3 |     ssl.truststore.location = null
> > > > taskmanager_3 |     ssl.truststore.password = null
> > > > taskmanager_3 |     ssl.truststore.type = JKS
> > > > taskmanager_3 |     value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
> > > > taskmanager_3 |
> > > > taskmanager_3 | 2017-11-17 18:29:00,765 WARN
> > > > org.apache.kafka.clients.consumer.ConsumerConfig - The
> > > > configuration 'topic' was supplied but isn't a known config.
> > > > taskmanager_3 | 2017-11-17 18:29:00,765 INFO
> > > > org.apache.kafka.common.utils.AppInfoParser - Kafka
> > > > version : 0.10.2.1
> > > > taskmanager_3 | 2017-11-17 18:29:00,770 INFO
> > > > org.apache.kafka.common.utils.AppInfoParser - Kafka
> > > > commitId : e89bffd6b2eff799
> > > > taskmanager_3 | 2017-11-17 18:29:00,791 INFO
> > > > org.apache.kafka.clients.consumer.internals.AbstractCoordinator -
> > > > Discovered coordinator kafka:9092 (id: 2147482646 rack: null) for
> > > > group TopicConsumers.
> > > >
> > > > I'm running Kafka and the Flink jobs in Docker containers, the
> > > > console consumers from localhost.
> > > >
> > > > -------- Original message --------
> > > > From: Gary Yao <gary@data-artisans.com>
> > > > Subject: Re: all task managers reading from all kafka partitions
> > > > To: "r. r." <robert@abv.bg>
> > > > Sent on: 17.11.2017 20:02
> > > >
> > > > > Hi Robert,
> > > > >
> > > > > Can you tell us which Flink version you are using?
> > > > > Also, are you starting a single job with parallelism 4 or are you
> > > > > starting several jobs?
> > > > >
> > > > > Thanks!
> > > > > Gary
> > > > > On Fri, Nov 17, 2017 at 4:41 PM, r. r. <robert@abv.bg> wrote:
> > > > > > Hi,
> > > > > > I have this strange problem: 4 task managers, each with one task
> > > > > > slot, attaching to the same Kafka topic, which has 10 partitions.
> > > > > > When I post a single message to the Kafka topic, it seems that
> > > > > > all 4 consumers fetch the message and start processing (confirmed
> > > > > > by TM logs).
> > > > > > If I run kafka-consumer-groups.sh --describe --group
> > > > > > TopicConsumers, it says that only one message was posted to a
> > > > > > single partition. The next message would generally go to another
> > > > > > partition.
> > > > > > In addition, while the Flink jobs are running on the message, I
> > > > > > start two kafka-console-consumer.sh instances, and each would get
> > > > > > only one message, as expected.
> > > > > > On start, each of the Flink TMs would post something that to me
> > > > > > reads as if it would read from all partitions:
> > > > > >
> > > > > > 2017-11-17 15:03:38,688 INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09 - Got 10 partitions from these topics: [TopicToConsume]
> > > > > > 2017-11-17 15:03:38,689 INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09 - Consumer is going to read the following topics (with number of partitions): TopicToConsume (10),
> > > > > > 2017-11-17 15:03:38,689 INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase - Consumer subtask 0 will start reading the following 10 partitions from the committed group offsets in Kafka: [KafkaTopicPartition{topic='TopicToConsume', partition=8}, KafkaTopicPartition{topic='TopicToConsume', partition=9}, KafkaTopicPartition{topic='TopicToConsume', partition=6}, KafkaTopicPartition{topic='TopicToConsume', partition=7}, KafkaTopicPartition{topic='TopicToConsume', partition=4}, KafkaTopicPartition{topic='TopicToConsume', partition=5}, KafkaTopicPartition{topic='TopicToConsume', partition=2}, KafkaTopicPartition{topic='TopicToConsume', partition=3}, KafkaTopicPartition{topic='TopicToConsume', partition=0}, KafkaTopicPartition{topic='TopicToConsume', partition=1}]
> > > > > > 2017-11-17 15:03:38,699 INFO org.apache.kafka.clients.consumer.ConsumerConfig - ConsumerConfig values:
> > > > > >     auto.commit.interval.ms = 5000
> > > > > >     auto.offset.reset = latest
> > > > > >
> > > > > > Any hints?