flink-issues mailing list archives

From suez1224 <...@git.apache.org>
Subject [GitHub] flink pull request #5272: [Flink-8397][Connectors]Support Row type for Cassa...
Date Wed, 14 Feb 2018 23:02:15 GMT
Github user suez1224 commented on a diff in the pull request:

    https://github.com/apache/flink/pull/5272#discussion_r168338536
  
    --- Diff: flink-connectors/flink-connector-cassandra/src/main/java/org/apache/flink/batch/connectors/cassandra/CassandraOutputFormat.java
---
    @@ -95,15 +94,14 @@ public void writeRecord(OUT record) throws IOException {
     			throw new IOException("write record failed", exception);
     		}
     
    -		Object[] fields = new Object[record.getArity()];
    -		for (int i = 0; i < record.getArity(); i++) {
    -			fields[i] = record.getField(i);
    -		}
    +		Object[] fields = extractFields(record);
     		ResultSetFuture result = session.executeAsync(prepared.bind(fields));
     		Futures.addCallback(result, callback);
     	}
     
    -	/**
    +	protected abstract Object[] extractFields(OUT record);
    --- End diff --
    
    I am hesitant to make this change. There is no documentation in the Cassandra code on what
PreparedStatement.bind does with the input fields. Although the current implementation serializes
the field values and does not keep a reference, so reusing the fields array across invocations
would be OK even when executeAsync is used, I would still be hesitant to do so because this
behavior is not stated in the Cassandra client documentation and might be subject to future
changes. I agree that performance is a concern here; one way to improve it is to add a getFields()
method to Row, so CassandraRowOutputFormat can reuse it instead of creating a new Object[].
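
    For illustration, a minimal sketch of the two options above (the getFields() accessor on Row
is hypothetical and is not part of the Row API here; this is not the PR's actual code):

    import org.apache.flink.types.Row;

    // Sketch only; illustrates the trade-off discussed above.
    class FieldExtractionSketch {

        // Option 1: copy the Row's values into a fresh array per record,
        // as the extractFields() method proposed in the diff would do.
        static Object[] copyFields(Row record) {
            Object[] fields = new Object[record.getArity()];
            for (int i = 0; i < record.getArity(); i++) {
                fields[i] = record.getField(i);
            }
            return fields;
        }

        // Option 2 (hypothetical): if Row exposed its backing array via a
        // getFields() accessor, the format could return it directly and skip
        // the per-record allocation:
        //
        //     static Object[] reuseFields(Row record) {
        //         return record.getFields(); // getFields() does not exist on Row yet
        //     }
        //
        // That is only safe if PreparedStatement.bind serializes the values
        // eagerly and keeps no reference to the array, which the Cassandra
        // client documentation does not guarantee.
    }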


---
