flink-user mailing list archives

From Ufuk Celebi <...@apache.org>
Subject Re: Kakfa batches
Date Wed, 03 Aug 2016 13:19:07 GMT
On Wed, Aug 3, 2016 at 2:07 PM, Prabhu V <vprabhu@gmail.com> wrote:
> Observations with Streaming.
>
> 1) A long-running Kerberos-secured job fails after 7 days (the data held in
> the window buffer is lost, and a restart results in event loss)

This is a known issue, I think. Looping in Max, who knows the details.

> 2) I hold on to the resources/containers in the cluster at all times,
> irrespective of the volume of events

Correct. There are plans for Flink 1.2 to make this dynamic.

> Is there a way the Kafka connector can take start and stop values for
> offsets? That would be ideal for my scenario. The design in this scenario
> would be to...

This is not possible at the moment. What do you mean by "3) commit
the offsets after job is successful"? Do you want to do this manually?
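For illustration, here is a minimal sketch of the bounded-read idea being asked about: consume records from a start offset up to (but not including) a stop offset, and only commit once the whole range has been processed. This is plain Python with hypothetical (offset, value) tuples standing in for a real Kafka poll loop; `consume_range` and the `commit` callback are made-up names, not Flink or Kafka APIs.

```python
def consume_range(records, start_offset, stop_offset, commit):
    """Process records with start_offset <= offset < stop_offset,
    then commit the stop offset once the whole batch succeeded.

    records: iterable of (offset, value) pairs, a stand-in for
    repeated poll() calls against a real Kafka partition.
    commit: callback taking the offset to commit (hypothetical).
    """
    processed = []
    for offset, value in records:
        if offset < start_offset:
            continue  # skip records before the requested range
        if offset >= stop_offset:
            break     # stop value reached; end of the "batch"
        processed.append(value)
    # Commit only after the full range was handled, so a failed run
    # restarts from start_offset instead of silently losing events.
    commit(stop_offset)
    return processed


committed = []
batch = consume_range([(i, f"e{i}") for i in range(10)], 3, 7,
                      committed.append)
# batch == ["e3", "e4", "e5", "e6"]; committed == [7]
```

Committing after the batch, rather than per record, is what gives at-least-once behavior on restart: a crash mid-range re-reads from start_offset rather than from an offset past unprocessed events.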
