flink-user mailing list archives

From Juho Autio <juho.au...@rovio.com>
Subject Re: writeAsCSV with partitionBy
Date Wed, 25 May 2016 06:35:02 GMT
RollingSink is part of the Flink Streaming API. Can it be used in Flink batch
jobs, too?

As implied in FLINK-2672, RollingSink doesn't support dynamic bucket paths
based on the tuple fields. The path must be given when creating the
RollingSink instance, i.e., before deploying the job. Yes, a custom Bucketer
can be provided, but with the current method signatures, the tuple is not
passed to the Bucketer.
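To illustrate the limitation: the sketch below paraphrases the Bucketer contract from the Flink 1.x streaming connector (org.apache.flink.streaming.connectors.fs.Bucketer), simplifying Flink's Path to String so it compiles standalone. Note that neither method receives the element being written, so a bucket path can depend on time or external state, but never on tuple fields.

```java
import java.io.Serializable;

public class BucketerSketch {

    // Paraphrase of the RollingSink Bucketer contract: no element parameter,
    // only paths — this is why field-based partitioning isn't possible.
    interface Bucketer extends Serializable {
        boolean shouldStartNewBucket(String basePath, String currentBucketPath);
        String getNextBucketPath(String basePath);
    }

    // A time-based bucketer IS expressible (similar in spirit to Flink's
    // DateTimeBucketer): the path is derived from the clock, not the record.
    static class HourBucketer implements Bucketer {
        @Override
        public boolean shouldStartNewBucket(String basePath, String currentBucketPath) {
            // Roll over whenever the computed bucket path changes.
            return !getNextBucketPath(basePath).equals(currentBucketPath);
        }

        @Override
        public String getNextBucketPath(String basePath) {
            long hour = System.currentTimeMillis() / 3_600_000L;
            return basePath + "/hour=" + hour;
        }
    }

    public static void main(String[] args) {
        Bucketer b = new HourBucketer();
        String bucket = b.getNextBucketPath("/out");
        // Prints a time-derived bucket like /out/hour=...; there is no way
        // to fold a tuple field into this path, hence FLINK-3961.
        System.out.println(bucket);
    }
}
```

Class and method names here other than the Bucketer interface methods are hypothetical; the point is only that the interface sees paths, not records.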

On Tue, May 24, 2016 at 4:45 PM, Srikanth <srikanth.ht@gmail.com> wrote:

> Isn't this related to -- https://issues.apache.org/jira/browse/FLINK-2672
> ??
>
> This can be achieved with a RollingSink[1] & custom Bucketer probably.
>
> [1]
> https://ci.apache.org/projects/flink/flink-docs-master/api/java/org/apache/flink/streaming/connectors/fs/RollingSink.html
>
> Srikanth
>
> On Tue, May 24, 2016 at 1:07 AM, KirstiLaurila <kirsti.laurila@rovio.com>
> wrote:
>
>> Yeah, created this one: https://issues.apache.org/jira/browse/FLINK-3961
>>
>>
>>
>>
>>
>
