flink-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (FLINK-6281) Create TableSink for JDBC
Date Fri, 21 Apr 2017 09:10:04 GMT

    [ https://issues.apache.org/jira/browse/FLINK-6281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15978358#comment-15978358

ASF GitHub Bot commented on FLINK-6281:

Github user fhueske commented on the issue:

    Hi @haohui, I think a JdbcTableSink would be a great feature! 
    However, there is a big issue with wrapping the `JdbcOutputFormat`. OutputFormats are not integrated with Flink's checkpointing mechanism. The `JdbcOutputFormat` buffers rows to write them out in batches. Records that arrived before the last checkpoint but are still sitting in the buffer will be lost in case of a failure, because they will not be replayed.
    The JdbcTableSink should be integrated with Flink's checkpointing mechanism. In a nutshell, it should buffer records and commit them to the database when a checkpoint is taken. I think we need to think a bit more about a proper design for this feature. @zentol and @aljoscha might have some thoughts on this as well, as they are more familiar with the implementation of checkpoint-aware sinks.
    What do you think?
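To illustrate the failure mode described above, here is a minimal self-contained sketch. It does not use Flink's actual sink interfaces; `BufferingSink`, `invoke`, `snapshotState`, and `fail` are hypothetical names that simulate the interaction between batch buffering, checkpointing, and recovery. The key point: after a checkpoint completes, Flink only replays records that arrived after it, so any rows still buffered at checkpoint time are lost on failure unless the sink flushes them when the checkpoint is taken.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical simulation (not Flink's real API): contrasts a sink that only
// flushes when its batch fills up against one that also flushes whenever a
// checkpoint is taken.
class BufferingSink {
    private final List<String> buffer = new ArrayList<>();
    private final List<String> database = new ArrayList<>();
    private final int batchSize;
    private final boolean flushOnCheckpoint;

    BufferingSink(int batchSize, boolean flushOnCheckpoint) {
        this.batchSize = batchSize;
        this.flushOnCheckpoint = flushOnCheckpoint;
    }

    // Called once per incoming record; flushes only when the batch is full.
    void invoke(String row) {
        buffer.add(row);
        if (buffer.size() >= batchSize) {
            flush();
        }
    }

    // Called when a checkpoint is taken. A checkpoint-aware sink commits its
    // buffered rows here; a naive wrapper around an OutputFormat does nothing.
    void snapshotState() {
        if (flushOnCheckpoint) {
            flush();
        }
    }

    // Simulates a crash: rows buffered but not yet committed are gone, and
    // records from before the last checkpoint will not be replayed.
    void fail() {
        buffer.clear();
    }

    private void flush() {
        database.addAll(buffer);
        buffer.clear();
    }

    List<String> committedRows() {
        return database;
    }
}

public class CheckpointDemo {
    public static void main(String[] args) {
        // Batch size larger than the input, so size-based flushing never fires.
        BufferingSink naive = new BufferingSink(100, false);
        BufferingSink aware = new BufferingSink(100, true);

        for (BufferingSink sink : new BufferingSink[] {naive, aware}) {
            sink.invoke("row-1");
            sink.invoke("row-2");
            sink.snapshotState(); // checkpoint completes here
            sink.fail();          // failure strikes after the checkpoint
        }
        // The naive sink lost row-1 and row-2; the checkpoint-aware sink
        // committed them before the failure.
        System.out.println("naive committed: " + naive.committedRows());
        System.out.println("aware committed: " + aware.committedRows());
    }
}
```

Running this prints an empty list for the naive sink and both rows for the checkpoint-aware one, which is exactly why the JdbcTableSink needs to hook its flush into the checkpoint lifecycle rather than rely on batch-size thresholds alone.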

> Create TableSink for JDBC
> -------------------------
>                 Key: FLINK-6281
>                 URL: https://issues.apache.org/jira/browse/FLINK-6281
>             Project: Flink
>          Issue Type: Improvement
>            Reporter: Haohui Mai
>            Assignee: Haohui Mai
> It would be nice to integrate the table APIs with the JDBC connectors so that the rows
> in the tables can be directly pushed into JDBC.

This message was sent by Atlassian JIRA
