flink-dev mailing list archives

From Flavio Pompermaier <pomperma...@okkam.it>
Subject FLINK-3750 (JDBCInputFormat)
Date Thu, 14 Apr 2016 14:59:11 GMT
Hi guys,

I'm integrating Chesnay's comments into my PR, but there are a couple of
things I'd like to discuss with the core developers.


   1. About the JDBC type mapping (the addValue() method at [1]): at the
   moment, if a Double column contains a null value, JDBC's getDouble()
   returns 0.0. Is that really the correct behaviour? Wouldn't it be better
   to use a POJO, or the Row type, which can represent null? Moreover, the
   mapping between SQL types and Java types varies considerably between
   JDBC implementations. Wouldn't it be better to derive the mapping from
   the Java type returned by resultSet.getObject() rather than from the
   ResultSetMetaData types?
   2. I'd like to handle connections very efficiently, because we have a
   use case with billions of records and thus millions of splits, and
   establishing a new connection for each split could be expensive. Would
   it be a problem to add the Apache Commons Pool dependency to the JDBC
   batch connector in order to reuse the created connections?
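To make point 1 concrete, here is a minimal sketch of the problem (the two
methods merely simulate the contracts of ResultSet.getDouble() and
ResultSet.getObject(); they are illustrative, not Flink code): a primitive
read collapses SQL NULL into 0.0, so it becomes indistinguishable from a
genuine zero, while an Object read preserves it as a Java null.

```java
// Illustrative simulation of the two JDBC read styles for a nullable
// DOUBLE column. The class and method names are hypothetical.
public class JdbcNullSketch {

    // Simulates ResultSet.getDouble(): the return type is a primitive,
    // so SQL NULL has no representation and comes back as 0.0.
    static double primitiveRead(Double columnValue) {
        return columnValue == null ? 0.0 : columnValue;
    }

    // Simulates ResultSet.getObject(): SQL NULL stays null, so a
    // null-aware container (POJO, Row) can carry the distinction.
    static Double objectRead(Double columnValue) {
        return columnValue;
    }

    public static void main(String[] args) {
        System.out.println(primitiveRead(null)); // prints 0.0
        System.out.println(objectRead(null));    // prints null
    }
}
```

(With the real ResultSet, the primitive read can be disambiguated only by
checking wasNull() immediately afterwards.)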


[1]
https://github.com/fpompermaier/flink/blob/FLINK-3750/flink-batch-connectors/flink-jdbc/src/main/java/org/apache/flink/api/java/io/jdbc/JDBCInputFormat.java
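Regarding point 2, the reuse idea can be sketched with nothing but the JDK
(Apache Commons Pool would supply the production version of this contract,
with eviction, validation, and sizing on top). All names here are
hypothetical; the Supplier would be something like
() -> DriverManager.getConnection(url) in the connector:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Supplier;

// Minimal borrow/return pool sketch: each split borrows a connection,
// and returning it makes it available to the next split instead of
// opening a fresh one.
public class SimplePool<T> {
    private final BlockingQueue<T> idle;
    private final Supplier<T> factory; // creates a new object when none is idle

    public SimplePool(int capacity, Supplier<T> factory) {
        this.idle = new ArrayBlockingQueue<>(capacity);
        this.factory = factory;
    }

    public T borrow() {
        T t = idle.poll();               // reuse an idle object if one exists
        return t != null ? t : factory.get();
    }

    public void giveBack(T t) {
        idle.offer(t);                   // silently dropped if the pool is full
    }
}
```

With millions of splits, the factory would be invoked only as often as the
peak concurrency requires, rather than once per split.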
