flink-issues mailing list archives

From "Flink Jira Bot (Jira)" <j...@apache.org>
Subject [jira] [Commented] (FLINK-4498) Better Cassandra sink documentation
Date Tue, 27 Apr 2021 23:25:12 GMT

    [ https://issues.apache.org/jira/browse/FLINK-4498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17334317#comment-17334317 ]

Flink Jira Bot commented on FLINK-4498:

This issue was marked "stale-assigned" and has not received an update in 7 days. It is now
automatically unassigned. If you are still working on it, you can assign it to yourself again.
Please also give an update about the status of the work.

> Better Cassandra sink documentation
> -----------------------------------
>                 Key: FLINK-4498
>                 URL: https://issues.apache.org/jira/browse/FLINK-4498
>             Project: Flink
>          Issue Type: Improvement
>          Components: Connectors / Cassandra, Documentation
>    Affects Versions: 1.1.0
>            Reporter: Elias Levy
>            Assignee: Michael Fong
>            Priority: Major
>              Labels: stale-assigned
> The Cassandra sink documentation is somewhat muddled and could be improved.  For instance,
the fact that it only supports tuples and POJOs that use DataStax Mapper annotations is only
mentioned in passing, and it is not clear that the reference to tuples applies only to Flink
Java tuples and not Scala tuples.
> The documentation also does not mention that setQuery() is only necessary for tuple streams.

> The explanation of the write-ahead log could use some cleaning up to clarify when it
is appropriate to use it, ideally with an example.  Maybe this would be best as a blog post that
expands on the type of non-deterministic streams this applies to.
> It would also be useful to mention that tuple elements will be mapped to Cassandra columns
using the DataStax Java driver's default encoders, which are somewhat limited (e.g. to write
to a blob column, the type in the tuple must be a java.nio.ByteBuffer and not just a byte[]).
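The blob-column constraint mentioned in the issue can be illustrated with plain Java. This is a minimal sketch using only the standard library; the payload is hypothetical, and the point is simply that the raw byte[] must be wrapped in a java.nio.ByteBuffer before it can be placed in a tuple field bound to a blob column:

```java
import java.nio.ByteBuffer;

public class BlobFieldExample {
    public static void main(String[] args) {
        // Hypothetical payload destined for a Cassandra blob column.
        byte[] payload = {0x01, 0x02, 0x03};

        // The driver's default encoders do not accept a raw byte[] for a
        // blob column; wrap it in a java.nio.ByteBuffer instead.
        ByteBuffer blobValue = ByteBuffer.wrap(payload);

        // The buffer is a view over the same bytes and can now be used as
        // the tuple element mapped to the blob column.
        System.out.println(blobValue.remaining());
    }
}
```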

This message was sent by Atlassian Jira
