flink-user mailing list archives

From Nick Dimiduk <ndimi...@gmail.com>
Subject Re: External DB as sink - with processing guarantees
Date Sat, 12 Mar 2016 02:46:00 GMT
Pretty much anything you can write to from a Hadoop MapReduce program can
be a Flink destination. Just plug in the OutputFormat and go.
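To make that concrete: Flink's `OutputFormat` contract is a small lifecycle of `open`/`writeRecord`/`close` (plus `configure`), and the Hadoop-compatibility layer wraps Hadoop OutputFormats into it. A minimal self-contained sketch of that lifecycle, assuming an in-memory `InMemoryStore` class as a hypothetical stand-in for a real client like the DynamoDB SDK (the `MiniOutputFormat` interface here only mirrors the shape of Flink's real one):

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in mirroring the open/writeRecord/close lifecycle of
// Flink's OutputFormat. InMemoryStore is a hypothetical placeholder for
// a real external client (e.g. the DynamoDB SDK), not a Flink API.
public class SinkLifecycleSketch {

    interface MiniOutputFormat<T> {
        void open(int taskNumber, int numTasks) throws Exception;
        void writeRecord(T record) throws Exception;
        void close() throws Exception;
    }

    static class InMemoryStore {
        final Map<String, String> table = new HashMap<>();
        void put(String key, String value) { table.put(key, value); }
    }

    static class KvOutputFormat implements MiniOutputFormat<String[]> {
        final InMemoryStore store;
        KvOutputFormat(InMemoryStore store) { this.store = store; }
        // In a real format, open() would create one connection per parallel task.
        public void open(int taskNumber, int numTasks) { }
        // Each incoming record is written as a key/value pair.
        public void writeRecord(String[] kv) { store.put(kv[0], kv[1]); }
        // In a real format, close() would flush buffers and release the connection.
        public void close() { }
    }

    public static void main(String[] args) throws Exception {
        InMemoryStore store = new InMemoryStore();
        KvOutputFormat fmt = new KvOutputFormat(store);
        fmt.open(0, 1);
        fmt.writeRecord(new String[]{"user-1", "Josh"});
        fmt.writeRecord(new String[]{"user-2", "Nick"});
        fmt.close();
        System.out.println(store.table.size()); // 2
    }
}
```

The same shape carries over to a custom streaming sink: implement the lifecycle, open the client once per parallel instance, and write each record as it arrives.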

Re: output semantics, your mileage may vary. Flink should do you fine for
at least once.
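At-least-once means the sink may see the same record more than once after a failure and replay. One common way to make that harmless is to write idempotent upserts keyed by a stable record id, so a duplicate delivery overwrites the row with the same value rather than adding to it (DynamoDB's PutItem behaves this way, replacing the item with the given primary key). A tiny sketch of the idea, using an in-memory map as a stand-in for the table:

```java
import java.util.HashMap;
import java.util.Map;

// Why at-least-once can be enough: if every write is an idempotent
// upsert keyed by a stable record id, replaying a record after a
// failure leaves the store in the same state as writing it once.
public class IdempotentUpsert {
    // Stand-in for the external table, keyed by record id.
    static final Map<String, Integer> table = new HashMap<>();

    static void upsert(String id, int value) {
        table.put(id, value); // overwrite-by-key: safe to repeat
    }

    public static void main(String[] args) {
        upsert("order-42", 100);
        upsert("order-42", 100); // duplicate delivery after a replay
        System.out.println(table.size()); // 1 -- the duplicate is absorbed
    }
}
```

Non-idempotent operations (counters, appends) are the case where duplicates actually corrupt state; those need either deduplication by id or a deterministic key derived from the record.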

On Friday, March 11, 2016, Josh <jofo90@gmail.com> wrote:

> Hi all,
>
> I want to use an external data store (DynamoDB) as a sink with Flink. It
> looks like there's no connector for Dynamo at the moment, so I have two
> questions:
>
> 1. Is it easy to write my own sink for Flink and are there any docs around
> how to do this?
> 2. If I do this, will I still be able to have Flink's processing
> guarantees? I.e. Can I be sure that every tuple has contributed to the
> DynamoDB state either at-least-once or exactly-once?
>
> Thanks for any advice,
> Josh
