Subject: Re: Nifi 0.50 and GetKafka Issues
From: Michael Dyer
To: users@nifi.apache.org
Date: Tue, 15 Mar 2016 13:21:19 -0400

Oleg,

It's part of the Cloudera distro. Not sure of the lineage beyond that. Here
are a couple of links.

https://community.cloudera.com/t5/Data-Ingestion-Integration/New-Kafka-0-8-2-0-1-kafka1-3-1-p0-9-Parcel-What-are-the-changes/td-p/30506
http://archive.cloudera.com/kafka/parcels/1.3.1/

Michael

On Tue, Mar 15, 2016 at 12:01 PM, Oleg Zhurakousky <ozhurakousky@hortonworks.com> wrote:

> Michael
>
> What is KAFKA-0.8.2.0-1.kafka1.3.1.p0.9? I mean, where can I get that
> build?
> I guess, based on the previous email, we've tested our code with 3 versions
> of the ASF distribution of Kafka, and the above version tells me that it
> may be some kind of fork.
>
> Also, we are considering downgrading the Kafka dependencies back to 0.8
> and, as of 0.7, providing a new version of the Kafka processors that
> utilize the Kafka 0.9 producer/consumer API.
>
> Thanks
> Oleg
>
> On Mar 15, 2016, at 11:46 AM, Michael Dyer wrote:
>
> Joe,
>
> I'm seeing a similar issue moving from 0.3.0 to 0.5.1 with
> KAFKA-0.8.2.0-1.kafka1.3.1.p0.9.
>
> I can see the tasks/time counter increment on the processor, but no flow
> data ever leaves the processor. There are no errors shown in the bulletin
> board. The app log output is shown below (repeating).
>
> The rename-the-0.4.1-nar-to-0.5.1-nar trick (restart NiFi) works, except
> that the 'batch size' value does not seem to be honored. I have my batch
> size set to 10000, but I'm seeing files written continually (every few
> seconds) with much smaller sizes. I suspect this has to do with the
> `auto.offset.reset` value, which defaults to `largest`. From what I have
> read, 'smallest' causes the client to start at the beginning, which sounds
> like I would be retrieving duplicates.
>
> Renaming the 0.3.0 nar to 0.5.1 (restart NiFi) restores the original
> behavior.
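For reference, here is a minimal, illustrative sketch of how `auto.offset.reset`
is interpreted by the high-level (ZookeeperConsumerConnector-based) consumer API
that appears in the log below. This is not NiFi's actual code; the ZooKeeper
quorum, group id, and class name are placeholders:

import java.util.Properties;

import kafka.consumer.ConsumerConfig;

public class OffsetResetSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "zk1:2181,zk2:2181,zk3:2181"); // placeholder quorum
        props.put("group.id", "nifi-getkafka-example");               // placeholder group id

        // auto.offset.reset only applies when the consumer group has no committed
        // offset in ZooKeeper, or the committed offset is out of range:
        //   "largest"  (default) -> start from the end of the topic
        //   "smallest"           -> start from the earliest available offset,
        //                           which is what can replay already-seen data
        props.put("auto.offset.reset", "largest");

        // Constructing the config checks that required properties such as
        // group.id and zookeeper.connect are present.
        new ConsumerConfig(props);
    }
}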
> yBuffer([[netflow5,0], initOffset 297426 to broker BrokerEndPoint(176,n2.foo.bar.com,9092)] )
> 2016-03-15 07:45:17,237 WARN [ConsumerFetcherThread-b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0-0-176] kafka.consumer.ConsumerFetcherThread [ConsumerFetcherThread-b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0-0-176], Error in fetch kafka.consumer.ConsumerFetcherThread$FetchRequest@11bd00f6. Possible cause: java.lang.IllegalArgumentException
> 2016-03-15 07:45:17,443 INFO [b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0-leader-finder-thread] kafka.utils.VerifiableProperties Verifying properties
> 2016-03-15 07:45:17,443 INFO [b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0-leader-finder-thread] kafka.utils.VerifiableProperties Property client.id is overridden to NiFi-b6c67ee3-aa9e-419d-8a57-84ab5e76c017
> 2016-03-15 07:45:17,443 INFO [b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0-leader-finder-thread] kafka.utils.VerifiableProperties Property metadata.broker.list is overridden to n3.foo.bar.com:9092,n2.foo.bar.com:9092,n4.foo.bar.com:9092
> 2016-03-15 07:45:17,443 INFO [b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0-leader-finder-thread] kafka.utils.VerifiableProperties Property request.timeout.ms is overridden to 30000
> 2016-03-15 07:45:17,443 INFO [b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0-leader-finder-thread] kafka.client.ClientUtils$ Fetching metadata from broker BrokerEndPoint(196,n4.foo.bar.com,9092) with correlation id 14 for 1 topic(s) Set(netflow5)
> 2016-03-15 07:45:17,443 INFO [b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0-leader-finder-thread] kafka.producer.SyncProducer Connected to n4.foo.bar.com:9092 for producing
> 2016-03-15 07:45:17,444 INFO [b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0-leader-finder-thread] kafka.producer.SyncProducer Disconnecting from n4.foo.bar.com:9092
> 2016-03-15 07:45:17,444 INFO [b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0-leader-finder-thread] kafka.consumer.ConsumerFetcherManager [ConsumerFetcherManager-1458053114395] Added fetcher for partitions ArrayBuffer([[netflow5,0], initOffset 297426 to broker BrokerEndPoint(176,n2.foo.bar.com,9092)] )
> 2016-03-15 07:45:17,449 WARN [ConsumerFetcherThread-b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0-0-176] kafka.consumer.ConsumerFetcherThread [ConsumerFetcherThread-b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0-0-176], Error in fetch kafka.consumer.ConsumerFetcherThread$FetchRequest@3e69ef73. Possible cause: java.lang.IllegalArgumentException
> 2016-03-15 07:45:17,626 INFO [NiFi Web Server-259] o.a.n.c.s.TimerDrivenSchedulingAgent Stopped scheduling GetKafka[id=4943a24e-af5c-4392-bc45-7008f30674bb] to run
> 2016-03-15 07:45:17,626 INFO [Timer-Driven Process Thread-3] k.consumer.ZookeeperConsumerConnector [b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0], ZKConsumerConnector shutting down
> 2016-03-15 07:45:17,632 INFO [Timer-Driven Process Thread-3] kafka.consumer.ConsumerFetcherManager [ConsumerFetcherManager-1458053114395] Stopping leader finder thread
> 2016-03-15 07:45:17,633 INFO [Timer-Driven Process Thread-3] k.c.ConsumerFetcherManager$LeaderFinderThread [b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0-leader-finder-thread], Shutting down
> 2016-03-15 07:45:17,634 INFO [b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0-leader-finder-thread] k.c.ConsumerFetcherManager$LeaderFinderThread [b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0-leader-finder-thread], Stopped
> 2016-03-15 07:45:17,634 INFO [Timer-Driven Process Thread-3] k.c.ConsumerFetcherManager$LeaderFinderThread [b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0-leader-finder-thread], Shutdown completed
> 2016-03-15 07:45:17,634 INFO [Timer-Driven Process Thread-3] kafka.consumer.ConsumerFetcherManager [ConsumerFetcherManager-1458053114395] Stopping all fetchers
> 2016-03-15 07:45:17,634 INFO [Timer-Driven Process Thread-3] kafka.consumer.ConsumerFetcherThread [ConsumerFetcherThread-b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0-0-176], Shutting down
> 2016-03-15 07:45:17,634 INFO [ConsumerFetcherThread-b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0-0-176] kafka.consumer.ConsumerFetcherThread [ConsumerFetcherThread-b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0-0-176], Stopped
> 2016-03-15 07:45:17,635 INFO [Timer-Driven Process Thread-3] kafka.consumer.ConsumerFetcherThread [ConsumerFetcherThread-b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0-0-176], Shutdown completed
> 2016-03-15 07:45:17,635 INFO [Timer-Driven Process Thread-3] kafka.consumer.ConsumerFetcherManager [ConsumerFetcherManager-1458053114395] All connections stopped
> 2016-03-15 07:45:17,635 INFO [ZkClient-EventThread-302-192.168.1.1:2181] org.I0Itec.zkclient.ZkEventThread Terminate ZkClient event thread.
> 2016-03-15 07:45:17,638 INFO [Timer-Driven Process Thread-3] org.apache.zookeeper.ZooKeeper Session: 0x1535e2aa53b3f61 closed
> 2016-03-15 07:45:17,638 INFO [Timer-Driven Process Thread-3] k.consumer.ZookeeperConsumerConnector [b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0], ZKConsumerConnector shutdown completed in 11 ms
> 2016-03-15 07:45:17,639 INFO [Timer-Driven Process Thread-4-EventThread] org.apache.zookeeper.ClientCnxn EventThread shut down
> 2016-03-15 07:45:17,745 INFO [Flow Service Tasks Thread-2] o.a.nifi.controller.StandardFlowService Saved flow controller org.apache.nifi.controller.FlowController@45d22bd5 // Another save pending = false
> 2016-03-15 07:45:18,414 INFO [b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0_watcher_executor] k.consumer.ZookeeperConsumerConnector [b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0], stopping watcher executor thread for consumer b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0
>
> Hope this helps...
>
> Michael
>
> On Mon, Feb 22, 2016 at 4:12 PM, Joe Witt wrote:
>
>> Sorry, to clarify: it is working against all three of these at once:
>> - Kafka 0.8.1.1
>> - Kafka 0.8.2.1
>> - Kafka 0.9.0.1
>>
>> Thanks
>> Joe
>>
>> On Mon, Feb 22, 2016 at 4:12 PM, Joe Witt wrote:
>> > All, just as a point of reference, we now have a live system running
>> > that is on NiFi 0.5.0 and feeding three versions of Kafka at once:
>> > - 0.8.1
>> > - 0.8.2.0
>> > - 0.9.0.1
>> >
>> > So perhaps there are some particular configurations that cause issues.
>> > Can you share more details about your configuration of Kafka/NiFi and
>> > what sort of security is enabled?
>> >
>> > Thanks
>> > Joe
>> >
>> > On Mon, Feb 22, 2016 at 1:01 PM, Kyle Burke wrote:
>> >> I replaced my 0.5.0 Kafka nar with the 0.4.1 Kafka nar and it fixed
>> >> my Kafka issue. I renamed the 0.4.1 nar to be 0.5.0.nar, restarted
>> >> NiFi, and my Kafka processor started reading my 0.8.2.1 stream. Not
>> >> elegant, but glad it worked.
>> >>
>> >> Respectfully,
>> >>
>> >> Kyle Burke | Data Science Engineer
>> >> IgnitionOne - Marketing Technology. Simplified.
>> >> Office: 1545 Peachtree St NE, Suite 500 | Atlanta, GA | 30309
>> >> Direct: 404.961.3918
>> >>
>> >> From: Joe Witt
>> >> Reply-To: "users@nifi.apache.org"
>> >> Date: Sunday, February 21, 2016 at 5:23 PM
>> >> To: "users@nifi.apache.org"
>> >> Subject: Re: Nifi 0.50 and GetKafka Issues
>> >>
>> >> Yeah, the intent is to support 0.8 and 0.9. Will figure something out.
>> >>
>> >> Thanks
>> >> Joe
>> >>
>> >> On Feb 21, 2016 4:47 PM, "West, Joshua" wrote:
>> >>>
>> >>> Hi Oleg,
>> >>>
>> >>> Hmm -- from what I can tell, this isn't a Zookeeper communication issue.
>> >>> NiFi is able to connect to the Kafka brokers' Zookeeper cluster and
>> >>> retrieve the list of Kafka brokers to connect to. From the logs, it
>> >>> seems to be a problem when attempting to consume from Kafka itself.
>> >>>
>> >>> I'm guessing that the Kafka 0.9.0 client libraries just are not
>> >>> compatible with Kafka 0.8.2.1, so in order to use NiFi 0.5.0 with
>> >>> Kafka, the Kafka version must be >= 0.9.0.
>> >>>
>> >>> Any chance NiFi could add backwards-compatible support for Kafka
>> >>> 0.8.2.1 too? Let you choose which client library version when setting
>> >>> up the GetKafka processor?
>> >>>
>> >>> --
>> >>> Josh West
>> >>> Bose Corporation
>> >>>
>> >>> On Sun, 2016-02-21 at 15:02 +0000, Oleg Zhurakousky wrote:
>> >>>
>> >>> Josh
>> >>>
>> >>> Also, keep in mind that there are incompatible property names in Kafka
>> >>> between the 0.7 and 0.8 releases. One of the changes that went in was
>> >>> replacing "zk.connectiontimeout.ms" with "zookeeper.connection.timeout.ms".
>> >>> Not sure if it's related, but note that 0.4.1 was relying on this
>> >>> property: its value was completely ignored with the 0.8 client
>> >>> libraries (you could actually see the WARN message to that effect),
>> >>> and now it is not ignored, so take a look and see if tinkering with
>> >>> its value changes something.
>> >>>
>> >>> Cheers
>> >>> Oleg
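For reference, here is a minimal, illustrative sketch of the ZooKeeper timeout
property rename Oleg describes above, in 0.8-style consumer configuration. It
is not NiFi's code; the quorum, group id, timeout value, and class name are
placeholders:

import java.util.Properties;

import kafka.consumer.ConsumerConfig;

public class ZkTimeoutRenameSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "zk1:2181,zk2:2181,zk3:2181"); // placeholder quorum
        props.put("group.id", "nifi-getkafka-example");               // placeholder group id

        // Kafka 0.7-era name; the 0.8+ client libraries treat it as an unknown
        // property and ignore its value (logging a WARN, as noted above):
        // props.put("zk.connectiontimeout.ms", "6000");

        // 0.8+ name, which the newer client libraries actually honor:
        props.put("zookeeper.connection.timeout.ms", "6000");          // placeholder timeout

        // Constructing the config checks the recognized properties.
        new ConsumerConfig(props);
    }
}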
>> >>>
>> >>> On Feb 20, 2016, at 6:47 PM, Oleg Zhurakousky wrote:
>> >>>
>> >>> Josh
>> >>>
>> >>> The only change that went in and is relevant to your issue is that
>> >>> we've upgraded the client libraries to Kafka 0.9, and between 0.8 and
>> >>> 0.9 Kafka introduced wire protocol changes that break compatibility.
>> >>> I am still digging, so stay tuned.
>> >>>
>> >>> Oleg
>> >>>
>> >>> On Feb 20, 2016, at 4:10 PM, West, Joshua wrote:
>> >>>
>> >>> Hi Oleg and Joe,
>> >>>
>> >>> Kafka 0.8.2.1
>> >>>
>> >>> Attached is the app log with hostnames scrubbed.
>> >>>
>> >>> Thanks for your help. Much appreciated.
>> >>>
>> >>> --
>> >>> Josh West
>> >>> Bose Corporation
>> >>>
>> >>> On Sat, 2016-02-20 at 15:46 -0500, Joe Witt wrote:
>> >>>
>> >>> And also, what version of Kafka are you using?
>> >>>
>> >>> On Feb 20, 2016 3:37 PM, "Oleg Zhurakousky" <ozhurakousky@hortonworks.com> wrote:
>> >>>
>> >>> Josh
>> >>>
>> >>> Any chance to attach the app log or a relevant stack trace?
>> >>>
>> >>> Thanks
>> >>> Oleg
>> >>>
>> >>> On Feb 20, 2016, at 3:30 PM, West, Joshua wrote:
>> >>>
>> >>> Hi folks,
>> >>>
>> >>> I've upgraded from NiFi 0.4.1 to 0.5.0 and I am no longer able to use
>> >>> the GetKafka processor. I'm seeing errors like so:
>> >>>
>> >>> 2016-02-20 20:10:14,953 WARN [ConsumerFetcherThread-NiFi-sldjflkdsjflksjf_**SCRUBBED**-1455999008728-5b8c7108-0-0] kafka.consumer.ConsumerFetcherThread [ConsumerFetcherThread-NiFi-sldjflkdsjflksjf_**SCRUBBED**-1455999008728-5b8c7108-0-0], Error in fetch kafka.consumer.ConsumerFetcherThread$FetchRequest@7b49a642. Possible cause: java.lang.IllegalArgumentException
>> >>>
>> >>> ^ Note the hostname of the server has been scrubbed.
>> >>>
>> >>> My configuration is pretty generic, except that with Zookeeper we use
>> >>> a different root path, so our Zookeeper connect string looks like so:
>> >>>
>> >>> zookeeper-node1:2181,zookeeper-node2:2181,zookeeper-node3:2181/kafka
>> >>>
>> >>> Is anybody else experiencing issues?
>> >>>
>> >>> Thanks.
>> >>>
>> >>> --
>> >>> Josh West
>> >>>
>> >>> Cloud Architect
>> >>> Bose Corporation
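For reference, here is a minimal, illustrative sketch of how a chrooted connect
string like the one above is handed to the high-level
(ZookeeperConsumerConnector-based) consumer API seen in the logs earlier in
this thread. It is not NiFi's implementation; the group id and class name are
placeholders, and the topic name is borrowed from the log excerpt above:

import java.util.Collections;
import java.util.List;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class ChrootConnectSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // The trailing "/kafka" is a ZooKeeper chroot: broker and consumer
        // metadata are looked up under /kafka/... instead of at the ZooKeeper root.
        props.put("zookeeper.connect",
                "zookeeper-node1:2181,zookeeper-node2:2181,zookeeper-node3:2181/kafka");
        props.put("group.id", "nifi-getkafka-example"); // placeholder group id

        ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        // One stream for the "netflow5" topic from the log excerpt above.
        List<KafkaStream<byte[], byte[]>> streams = connector
                .createMessageStreams(Collections.singletonMap("netflow5", 1))
                .get("netflow5");

        // ... iterate streams.get(0) to consume messages, then shut down cleanly.
        connector.shutdown();
    }
}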