flink-issues mailing list archives

From fhueske <...@git.apache.org>
Subject [GitHub] flink issue #3712: [FLINK-6281] Create TableSink for JDBC.
Date Fri, 21 Apr 2017 09:09:22 GMT
Github user fhueske commented on the issue:

    https://github.com/apache/flink/pull/3712
  
    Hi @haohui, I think a JdbcTableSink would be a great feature! 
    
    However, there is a big issue with wrapping the `JdbcOutputFormat`. OutputFormats are
not integrated with Flink's checkpointing mechanism. The `JdbcOutputFormat` buffers rows and
writes them out in batches. Records that are still in the buffer but arrived before the last
completed checkpoint will be lost in case of a failure, because they will not be replayed on
recovery.
    
    The JdbcTableSink should be integrated with Flink's checkpointing mechanism. In a nutshell,
it should buffer records and commit them to the database when a checkpoint is taken. I think
we need to give the proper design of this feature a bit more thought. @zentol and @aljoscha
might have some thoughts on this as well, as they are more familiar with the implementation
of checkpoint-aware sinks.
    
    What do you think?


