flink-user mailing list archives

From Josh <jof...@gmail.com>
Subject Re: External DB as sink - with processing guarantees
Date Sat, 12 Mar 2016 08:57:03 GMT
Thanks Nick, that sounds good. I'd still like to understand what determines the processing
guarantee, though. Say I use a DynamoDB Hadoop OutputFormat with Flink: how do I know what
guarantee I have? And if it's at-least-once, is there a way to adapt it to achieve
exactly-once?
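One common way to get effectively-exactly-once semantics on top of an at-least-once sink is to make every write idempotent: key each write by a deterministic record id, so that a replayed record overwrites the same item instead of creating a duplicate. Below is a minimal, self-contained sketch of that idea; the class name, the `HashMap` standing in for a DynamoDB table, and the `invoke` method are all hypothetical illustrations, not Flink or DynamoDB API.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: an at-least-once sink becomes effectively exactly-once
// when every write is idempotent, i.e. keyed by a deterministic record id so
// that replaying a record overwrites state rather than duplicating it.
class IdempotentSink {
    // Stand-in for a DynamoDB table: item key -> item value.
    private final Map<String, Long> table = new HashMap<>();

    // Under at-least-once delivery this may be called more than once for the
    // same record; the keyed put makes the duplicate call a no-op.
    void invoke(String recordId, long value) {
        table.put(recordId, value); // analogous to a keyed PutItem
    }

    int size() { return table.size(); }
    long get(String recordId) { return table.get(recordId); }
}
```

The caveat is that this only works for writes that are naturally keyed (upserts); non-idempotent operations such as unconditional counter increments would still double-count on replay.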


> On 12 Mar 2016, at 02:46, Nick Dimiduk <ndimiduk@gmail.com> wrote:
> Pretty much anything you can write to from a Hadoop MapReduce program can be a Flink destination. Just plug in the OutputFormat and go.
> Re: output semantics, your mileage may vary. Flink should do you fine for at-least-once.
>> On Friday, March 11, 2016, Josh <jofo90@gmail.com> wrote:
>> Hi all,
>> I want to use an external data store (DynamoDB) as a sink with Flink. It looks like there's no connector for Dynamo at the moment, so I have two questions:
>> 1. Is it easy to write my own sink for Flink, and are there any docs on how to do this?
>> 2. If I do this, will I still have Flink's processing guarantees? I.e. can I be sure that every tuple has contributed to the DynamoDB state either at-least-once or exactly-once?
>> Thanks for any advice,
>> Josh
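On writing a custom sink: Flink's streaming API exposes a `SinkFunction` interface (in `org.apache.flink.streaming.api.functions.sink`) whose `invoke` method is called once per record, so a custom sink is essentially one class implementing that method. The sketch below is self-contained rather than a real Flink job: the `SinkFunction` interface here is a stand-in mirroring the shape of Flink's, and `DynamoDbSink` with its batching and `flush` are hypothetical, loosely modeled on how a sink might batch records for DynamoDB's `BatchWriteItem`.

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in mirroring the shape of Flink's SinkFunction interface
// (the real one lives in org.apache.flink.streaming.api.functions.sink).
interface SinkFunction<IN> {
    void invoke(IN value) throws Exception;
}

// Hypothetical DynamoDB sink sketch: buffer records and flush in batches,
// the way a real sink might issue BatchWriteItem requests.
class DynamoDbSink implements SinkFunction<String> {
    private final List<String> buffer = new ArrayList<>();
    private final List<List<String>> flushedBatches = new ArrayList<>();
    private final int batchSize;

    DynamoDbSink(int batchSize) { this.batchSize = batchSize; }

    @Override
    public void invoke(String value) {
        buffer.add(value);
        if (buffer.size() >= batchSize) {
            flush();
        }
    }

    // In a real sink this would call DynamoDB; a failure before the flush
    // means the buffered records get replayed, hence at-least-once.
    void flush() {
        flushedBatches.add(new ArrayList<>(buffer));
        buffer.clear();
    }

    int flushCount() { return flushedBatches.size(); }
}
```

Note the guarantee such a sink gives: records buffered but not yet flushed when a failure occurs are re-emitted on recovery, which is exactly the at-least-once behavior discussed above.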
