flink-user mailing list archives

From Nishu <nishuta...@gmail.com>
Subject Re: Data loss in Flink Kafka Pipeline
Date Fri, 08 Dec 2017 12:52:07 GMT
Hi Fabian,

Actually, I found a JIRA issue describing a similar problem:
https://issues.apache.org/jira/browse/BEAM-3225 . This is very close to what
I am facing too.

I have 4 Kafka topics as input sources. They are read using a GlobalWindow
with processing-time triggers, and then joined on common keys.
There are multiple GroupByKey transformations in the pipeline. After reading
BEAM-3225, I assume this is a bug in the runner.
Thanks for connecting me with Aljoscha.   :)

Hi Aljoscha,
I will share the code with you in another mail thread.

Thanks & regards,
Nishu



On Fri, Dec 8, 2017 at 1:04 PM, Aljoscha Krettek <aljoscha@apache.org>
wrote:

> Hi,
>
> Could you maybe post your pipeline code? That way I could have a look.
>
> Best,
> Aljoscha
>
> > On 8. Dec 2017, at 12:31, Fabian Hueske <fhueske@gmail.com> wrote:
> >
> > Hmm, I see...
> > I'm running out of ideas.
> >
> > You might be right with your assumption about a bug in the Beam Flink
> runner. In that case, this would be an issue for the Beam project, which
> hosts the Flink runner.
> > But it might also be an issue on the Flink side.
> >
> > Maybe Aljoscha (in CC), one of the authors of the Flink runner and a
> Beam+Flink committer, can help to identify the issue.
> >
> > Best, Fabian
> >
> >
>
>


-- 
Thanks & Regards,
Nishu Tayal
