From: Robert Metzger
Date: Thu, 10 Sep 2015 16:48:48 +0200
Subject: Re: Flink 0.9.1 Kafka 0.8.1
To: user@flink.apache.org

Hi Gwen,

sorry that you ran into this issue. The implementation of the Kafka Consumer was changed completely in 0.9.1 because there were some corner-case issues with the exactly-once guarantees in 0.9.0.

I'll look into the issue immediately.

On Thu, Sep 10, 2015 at 4:26 PM, Gwenhael Pasquiers <gwenhael.pasquiers@ericsson.com> wrote:

> Hi everyone,
>
> We're trying to consume from a Kafka 0.8.1 broker on Flink 0.9.1 and we've run into the following issue:
>
> My offset became OutOfRange; however, now when I start my job, it loops on the OutOfRangeException, no matter what the value of auto.offset.reset is (earliest, latest, largest, smallest).
>
> It looks like it doesn't fix the invalid offset and immediately goes into error... Then Flink restarts the job, and it fails again, etc.
>
> Do you have an idea of what is wrong, or could it be an issue in Flink?
>
> B.R.
>
> Gwenhaël PASQUIERS
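For reference, here is a minimal sketch of how the consumer described in the thread might be wired up against a 0.8.1 broker in Flink 0.9.1. It assumes the FlinkKafkaConsumer081 and SimpleStringSchema classes from the flink-connector-kafka module; the hosts, topic and group id are placeholders, not values from Gwen's setup, and the exact set of properties the 0.9.1 connector requires may differ slightly:

import java.util.Properties;

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer081;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class KafkaOffsetResetSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("zookeeper.connect", "zookeeper-host:2181"); // placeholder host
        props.setProperty("bootstrap.servers", "kafka-broker:9092");   // placeholder host
        props.setProperty("group.id", "flink-test-group");             // placeholder group id
        // The Kafka 0.8 consumer config recognizes "smallest" / "largest";
        // "earliest" / "latest" belong to the newer 0.9+ consumer config.
        props.setProperty("auto.offset.reset", "smallest");

        env.addSource(new FlinkKafkaConsumer081<String>(
                "my-topic",                 // placeholder topic
                new SimpleStringSchema(),   // read records as plain strings
                props))
           .print();

        env.execute("Kafka 0.8.1 consumer sketch");
    }
}

If the offset stored for the consumer group is already out of range, this setting only takes effect once the consumer actually applies the reset instead of re-throwing, which is the behavior in question in this thread.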