flink-user mailing list archives

From miki haiat <miko5...@gmail.com>
Subject Re: Writing stream to Hadoop
Date Tue, 05 Jun 2018 10:37:19 GMT
OMG, I missed it ...

Thanks,

Miki

On Tue, Jun 5, 2018 at 1:30 PM Chesnay Schepler <chesnay@apache.org> wrote:

> This particular version of the method is deprecated; use
> enableCheckpointing(long checkpointingInterval) instead.
>
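
A minimal sketch of the non-deprecated call, assuming a 60-second interval (the
value is just an example, not something from the thread):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// The deprecated no-arg form hard-codes a 500 ms interval; pass the interval explicitly instead.
env.enableCheckpointing(60000L);  // checkpoint every 60 s (example value)
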
> On 05.06.2018 12:19, miki haiat wrote:
>
> I saw the option of enabling checkpointing:
> enabling-and-configuring-checkpointing
> <https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/stream/checkpointing.html#enabling-and-configuring-checkpointing>
>
> But on 1.5 it says that the method is deprecated, so I'm a bit confused.
>
> /** @deprecated */
> @Deprecated
> @PublicEvolving
> public StreamExecutionEnvironment enableCheckpointing() {
>     this.checkpointCfg.setCheckpointInterval(500L);
>     return this;
> }
>
>
>
>
> On Tue, Jun 5, 2018 at 1:11 PM Kostas Kloudas <k.kloudas@data-artisans.com>
> wrote:
>
>> Hi Miki,
>>
>> Have you enabled checkpointing?
>>
>> Kostas
>>
>> On Jun 5, 2018, at 11:14 AM, miki haiat <miko5054@gmail.com> wrote:
>>
>> I'm trying to write some data to Hadoop using this code.
>>
>> The state backend is set without a time interval:
>>
>> StateBackend sb = new FsStateBackend("hdfs://***:9000/flink/my_city/checkpoints");
>> env.setStateBackend(sb);
>>
>> BucketingSink<Tuple2<IntWritable, Text>> sink =
>>         new BucketingSink<>("hdfs://****:9000/mycity/raw");
>> sink.setBucketer(new DateTimeBucketer("yyyy-MM-dd--HHmm"));
>> sink.setInactiveBucketCheckInterval(120000);
>> sink.setInactiveBucketThreshold(120000);
>>
>> The result is that all the files are stuck in *in-progress* status and
>> never closed.
>> Is it related to the state backend configuration?
>>
>> thanks,
>>
>> Miki
>>
>>
>>
>
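
Tying the thread together: the BucketingSink only finalizes its in-progress/pending
part files when a checkpoint completes, so the setup quoted above also needs
checkpointing enabled on the environment. A rough sketch under that assumption
(hostnames, paths, and the interval are placeholders, not the original values):

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink;
import org.apache.flink.streaming.connectors.fs.bucketing.DateTimeBucketer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// Without this, no checkpoints run and finished part files stay in the
// in-progress/pending state instead of being finalized.
env.enableCheckpointing(60000L);
env.setStateBackend(new FsStateBackend("hdfs://namenode:9000/flink/my_city/checkpoints"));

BucketingSink<Tuple2<IntWritable, Text>> sink =
        new BucketingSink<>("hdfs://namenode:9000/mycity/raw");
sink.setBucketer(new DateTimeBucketer<>("yyyy-MM-dd--HHmm"));
sink.setInactiveBucketCheckInterval(120000);  // look for inactive buckets every 2 minutes
sink.setInactiveBucketThreshold(120000);      // roll buckets that have been idle for 2 minutes
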
