flink-issues mailing list archives

From "Elias Levy (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (FLINK-4498) Better Cassandra sink documentation
Date Thu, 25 Aug 2016 16:49:20 GMT

     [ https://issues.apache.org/jira/browse/FLINK-4498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Elias Levy updated FLINK-4498:
------------------------------
    Description: 
The Cassandra sink documentation is somewhat muddled and could be improved.  For instance,
the fact that it supports only tuples and POJOs that use DataStax Mapper annotations is
mentioned only in passing, and it is not clear that the reference to tuples applies only to
Flink Java tuples and not Scala tuples.
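
For reference, a POJO the sink could persist via the DataStax object mapper might look like
the following sketch (the keyspace, table, and field names are hypothetical, chosen only for
illustration):

```java
import com.datastax.driver.mapping.annotations.Column;
import com.datastax.driver.mapping.annotations.Table;

// Hypothetical POJO: the Flink Cassandra sink hands instances to the
// DataStax object mapper, so these annotations determine the target
// keyspace, table, and column names.
@Table(keyspace = "example_ks", name = "sensor_readings")
public class SensorReading {

    @Column(name = "id")
    private String id;

    @Column(name = "value")
    private double value;

    // The mapper requires a no-arg constructor and accessors.
    public SensorReading() {}

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
    public double getValue() { return value; }
    public void setValue(double value) { this.value = value; }
}
```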

The documentation also does not mention that setQuery() is necessary only for tuple streams.
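
A sketch of the distinction, based on the connector's builder API (stream construction
elided; the keyspace and table names are made up):

```java
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.connectors.cassandra.CassandraSink;
import org.apache.flink.streaming.connectors.cassandra.ClusterBuilder;
import com.datastax.driver.core.Cluster;

DataStream<Tuple2<String, Double>> tupleStream = ...;  // elided
DataStream<SensorReading> pojoStream = ...;            // elided

ClusterBuilder cluster = new ClusterBuilder() {
    @Override
    protected Cluster buildCluster(Cluster.Builder builder) {
        return builder.addContactPoint("127.0.0.1").build();
    }
};

// Tuple streams need an explicit INSERT statement via setQuery();
// the '?' placeholders are bound positionally from the tuple fields.
CassandraSink.addSink(tupleStream)
    .setQuery("INSERT INTO example_ks.readings (id, value) VALUES (?, ?);")
    .setClusterBuilder(cluster)
    .build();

// POJO streams omit setQuery(); the DataStax mapper derives the
// statement from the POJO's @Table/@Column annotations.
CassandraSink.addSink(pojoStream)
    .setClusterBuilder(cluster)
    .build();
```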


The explanation of the write ahead log could use some cleaning up to clarify when it is appropriate
to use, ideally with an example.  Maybe this would be best as a blog post to expand on the
type of non-deterministic streams this applies to.

It would also be useful to mention that tuple elements will be mapped to Cassandra columns
using the DataStax Java driver's default encoders, which are somewhat limited (e.g., to write
to a blob column, the tuple field must be a java.nio.ByteBuffer and not just a byte[]).
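
A minimal, self-contained illustration of that restriction (the helper name is
hypothetical):

```java
import java.nio.ByteBuffer;

public class BlobFieldExample {
    // The driver's default codec for Cassandra's blob type expects a
    // java.nio.ByteBuffer, so a raw byte[] must be wrapped before it is
    // placed in the tuple field destined for the blob column.
    static ByteBuffer toBlobValue(byte[] raw) {
        return ByteBuffer.wrap(raw);
    }

    public static void main(String[] args) {
        byte[] payload = {0x01, 0x02, 0x03};
        ByteBuffer blob = toBlobValue(payload);
        System.out.println(blob.remaining()); // prints 3
    }
}
```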

  was:
The Cassandra sink documentation is somewhat muddled and could be improved.  For instance,
the fact that it supports only tuples and POJOs that use DataStax Mapper annotations is
mentioned only in passing, and it is not clear that the reference to tuples applies only to
Flink Java tuples and not Scala tuples.

The documentation also does not mention that setQuery() is only necessary for tuple streams.
 It would be good to have an example of a POJO stream with the DataStax annotations.

The explanation of the write ahead log could use some cleaning up to clarify when it is appropriate
to use, ideally with an example.  Maybe this would be best as a blog post to expand on the
type of non-deterministic streams this applies to.

It would also be useful to mention that tuple elements will be mapped to Cassandra columns
using the DataStax Java driver's default encoders, which are somewhat limited (e.g., to write
to a blob column, the tuple field must be a java.nio.ByteBuffer and not just a byte[]).


> Better Cassandra sink documentation
> -----------------------------------
>
>                 Key: FLINK-4498
>                 URL: https://issues.apache.org/jira/browse/FLINK-4498
>             Project: Flink
>          Issue Type: Improvement
>          Components: Documentation
>    Affects Versions: 1.1.0
>            Reporter: Elias Levy
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
