flink-user-zh mailing list archives

From Wesley Peng <weslep...@gmail.com>
Subject Re: FlinkKafkaProducer: transaction-state initialization timeout after enabling Exactly Once
Date Mon, 02 Sep 2019 04:11:22 GMT
Hi

on 2019/9/2 11:49, 陈赋赟 wrote:
> 2019-09-02 10:24:28,599 INFO  org.apache.flink.runtime.taskmanager.Task             
       - Interval Join -> Sink: Unnamed (1/4) (e8b85b6f144879efbb0b4209f226c69b) switched
from RUNNING to FAILED.
> org.apache.kafka.common.errors.TimeoutException: Timeout expired while initializing transactional
state in 60000ms.

You may find this reference helpful:

https://stackoverflow.com/questions/54295588/kafka-streams-failed-to-rebalance-error

Possible options:

- As that answer suggests, switch off Exactly Once for your streamer. It
  then doesn't use transactions and everything seems to work. This is not
  helpful if you require EOS or if other client code requires transactions.
- Restart any brokers that are reporting warnings, to force them to
  re-resolve the IP address. They would need to be restarted in a way
  that doesn't change their own IP addresses. This is not usually
  possible in Kubernetes.
- A defect has been raised: KAFKA-7958 - Transactions are broken with
  Kubernetes-hosted brokers.
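Besides the options above, the 60000 ms in the exception matches the Kafka producer's default transaction.timeout.ms, so raising that value on the Flink side sometimes avoids the initialization timeout. A minimal sketch, assuming a placeholder broker address and an illustrative timeout value (neither is from this thread):

```java
import java.util.Properties;

public class ProducerConfigSketch {
    public static Properties buildProps() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "kafka:9092"); // placeholder address, an assumption
        // Raise the producer-side transaction timeout above the 60 s default
        // that produced the TimeoutException. The broker's
        // transaction.max.timeout.ms (default 15 min) must be at least this
        // large, or the broker will reject the producer.
        props.setProperty("transaction.timeout.ms", "300000");
        return props;
    }

    public static void main(String[] args) {
        Properties props = buildProps();
        // These properties would then be passed to the FlinkKafkaProducer
        // constructor together with the desired Semantic, e.g. EXACTLY_ONCE,
        // or AT_LEAST_ONCE to drop transactions (the first option above).
        System.out.println(props.getProperty("transaction.timeout.ms"));
    }
}
```

If the broker's transaction.max.timeout.ms is lower than the producer value, raise the broker setting as well; otherwise the producer is rejected at initialization.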

Update 2019-02-20: This may have been resolved in Kafka 2.1.1 (Confluent
5.1.2), released today. See the linked issue.
