flink-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (FLINK-4035) Bump Kafka producer in Kafka sink to Kafka 0.10.0.0
Date Mon, 18 Jul 2016 13:04:20 GMT

    [ https://issues.apache.org/jira/browse/FLINK-4035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382242#comment-15382242 ]

ASF GitHub Bot commented on FLINK-4035:
---------------------------------------

Github user aljoscha commented on the issue:

    https://github.com/apache/flink/pull/2231
  
    I agree with @tzulitai that it would be nice to have minimal code duplication in the
    long run, but it might not be possible with the current design of the consumers.
    
    What about the new timestamps that were introduced in Kafka 0.10? This is also
    something that wouldn't work with the 0.9 consumer and could only be implemented
    for the 0.10-specific consumer, correct?
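
For context, these timestamps surface in Kafka 0.10's own consumer API: ConsumerRecord
gained timestamp() and timestampType() accessors in 0.10, which a 0.9-based client simply
does not have. A minimal sketch of reading them with the plain Kafka consumer, assuming a
locally reachable broker and a hypothetical topic name ("demo-topic"):

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class TimestampConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
            props.put("group.id", "timestamp-demo");          // placeholder group id
            props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("demo-topic"));
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    // timestamp() and timestampType() exist only from the 0.10
                    // client on; the 0.9 consumer has no access to this field.
                    System.out.printf("offset=%d timestamp=%d type=%s%n",
                        record.offset(), record.timestamp(), record.timestampType());
                }
            }
        }
    }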


> Bump Kafka producer in Kafka sink to Kafka 0.10.0.0
> ---------------------------------------------------
>
>                 Key: FLINK-4035
>                 URL: https://issues.apache.org/jira/browse/FLINK-4035
>             Project: Flink
>          Issue Type: Bug
>          Components: Kafka Connector
>    Affects Versions: 1.0.3
>            Reporter: Elias Levy
>            Priority: Minor
>
> Kafka 0.10.0.0 introduced protocol changes related to the producer. Published messages
> now include timestamps, and compressed messages now include relative offsets. As it stands,
> brokers must decompress publisher-compressed messages, assign offsets to them, and recompress
> them, which is wasteful and makes it less likely that compression will be used at all.
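
On the producer side, the timestamp travels with each record: Kafka 0.10 added a
ProducerRecord constructor that carries an explicit timestamp, and the relative offsets in
0.10 let the broker keep a publisher-compressed batch intact instead of recompressing it. A
minimal sketch of producing with both features, assuming a locally reachable broker and a
hypothetical topic name ("demo-topic"):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class TimestampProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
            props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
            // Client-side compression; with 0.10 relative offsets the broker can
            // store this batch without decompressing and recompressing it.
            props.put("compression.type", "gzip");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                long eventTime = System.currentTimeMillis();
                // Five-argument constructor added in 0.10:
                // topic, partition (null = default partitioner), timestamp, key, value
                producer.send(new ProducerRecord<>("demo-topic", null, eventTime,
                    "key", "value"));
            }
        }
    }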



