flink-issues mailing list archives

From "Ken Geis (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (FLINK-7221) JDBCOutputFormat swallows errors on last batch
Date Thu, 20 Jul 2017 16:29:00 GMT

    [ https://issues.apache.org/jira/browse/FLINK-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094924#comment-16094924 ]

Ken Geis commented on FLINK-7221:
---------------------------------

[~fhueske], I'd be glad to submit a patch. I'm currently on vacation, and it could be a month
or more before I can get to it.

> JDBCOutputFormat swallows errors on last batch
> ----------------------------------------------
>
>                 Key: FLINK-7221
>                 URL: https://issues.apache.org/jira/browse/FLINK-7221
>             Project: Flink
>          Issue Type: Bug
>          Components: Batch Connectors and Input/Output Formats
>    Affects Versions: 1.3.1
>         Environment: Java 1.8.0_131, PostgreSQL driver 42.1.3
>            Reporter: Ken Geis
>
> I have a data set with ~17000 rows that I was trying to write to a PostgreSQL table that
> I did not (yet) have permission on. No data was loaded, and Flink did not report any
> problem outputting the data set. The only indication I found of my problem was in the
> PostgreSQL log.
> With the default parallelism (8) and the default batch interval (5000), each subtask's
> batch held ~2000 rows (~17000 rows / 8 subtasks), so the interval was never reached and
> {{upload.executeBatch()}} was never called in {{JDBCOutputFormat.writeRecord(..)}}.
> {{JDBCOutputFormat.close()}} makes a final call to {{upload.executeBatch()}}, but if that
> call fails, the error is logged at INFO level and not rethrown.
> If I decrease the batch interval to 100 or 1000, then the error is properly reported.
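
For reference, a minimal sketch of the behavior described above and one possible fix in
{{JDBCOutputFormat.close()}}. The {{upload}} {{PreparedStatement}} field matches the name
used in the description; the exception wrapping and cleanup order here are illustrative
assumptions, not the actual patch:

{code:java}
// Sketch only: a close() that propagates a failure from the final
// executeBatch() instead of logging it at INFO and swallowing it.
@Override
public void close() throws IOException {
    try {
        if (upload != null) {
            // Flush rows still buffered since the last full batch interval.
            upload.executeBatch();
        }
    } catch (SQLException e) {
        // Rethrow so the job fails visibly instead of silently dropping
        // the last (or, for small inputs, the only) batch.
        throw new IOException("Error flushing final JDBC batch", e);
    } finally {
        // Best-effort cleanup; exact resource handling is an assumption.
        try {
            if (upload != null) {
                upload.close();
            }
        } catch (SQLException ignored) {
            // Suppressed: a failure while closing should not mask the batch error.
        }
        upload = null;
    }
}
{code}

With a change along these lines, the PostgreSQL permission error would surface as a job
failure rather than appearing only in the database server's log.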



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
