kafka-jira mailing list archives

From "Manikumar (JIRA)" <j...@apache.org>
Subject [jira] [Resolved] (KAFKA-6337) Error for partition [__consumer_offsets,15] to broker
Date Mon, 12 Mar 2018 16:23:00 GMT

     [ https://issues.apache.org/jira/browse/KAFKA-6337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]

Manikumar resolved KAFKA-6337.
------------------------------
    Resolution: Cannot Reproduce

Please reopen if you think the issue still exists

> Error for partition [__consumer_offsets,15] to broker
> -----------------------------------------------------
>
>                 Key: KAFKA-6337
>                 URL: https://issues.apache.org/jira/browse/KAFKA-6337
>             Project: Kafka
>          Issue Type: Bug
>    Affects Versions: 0.10.2.0
>         Environment: Windows, running Kafka (0.10.2.0)
> 3 ZK instances running on 3 different Windows servers, 7 Kafka broker nodes running on a
> single Windows machine with a different disk for the logs directory.
>            Reporter: Abhi
>            Priority: Blocker
>              Labels: windows
>
> Hello *
> I have been running Kafka (0.10.2.0) on Windows for the past year.
> But of late there has been a unique broker issue that I have observed 4-5 times in
> the last 4 months.
> Kafka setup config:
> 3 ZK instances running on 3 different Windows servers, 7 Kafka broker nodes running on a
> single Windows machine with a different disk for the logs directory.
> My Kafka has 2 topics with 50 partitions each, and a replication factor of 3.
> My partition-selection logic: each message has a unique ID, the partition is chosen
> as (unique ID % 50), and the Kafka producer API is then called to route that message to
> the selected partition of the target topic.
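> The partition-selection rule described above can be sketched as follows (a minimal
> illustration, not the reporter's actual code; the class and method names are made up,
> and in the real setup the computed partition would be passed as the explicit
> partition argument of the producer's send call, e.g. via ProducerRecord):

```java
// Hypothetical sketch of the (unique ID % 50) partition selection described
// in the report. NUM_PARTITIONS mirrors the topic's 50 partitions.
public class PartitionSelector {
    static final int NUM_PARTITIONS = 50;

    // Math.floorMod keeps the result in [0, 50) even if an ID were negative,
    // unlike the % operator, which can return negative values in Java.
    static int partitionFor(long uniqueId) {
        return (int) Math.floorMod(uniqueId, NUM_PARTITIONS);
    }

    public static void main(String[] args) {
        // A message with ID 103 maps to partition 3
        System.out.println(partitionFor(103L));
    }
}
```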
> Each of my brokers' properties looks like this:
> {{broker.id=0
> port:9093
> num.network.threads=3
> num.io.threads=8
> socket.send.buffer.bytes=102400
> socket.receive.buffer.bytes=102400
> socket.request.max.bytes=104857600
> offsets.retention.minutes=360
> advertised.host.name=1.1.1.2
> advertised.port:9093
> # A comma separated list of directories under which to store log files
> log.dirs=C:\\kafka_2.10-0.10.2.0-SNAPSHOT\\data\\kafka-logs
> num.partitions=1
> num.recovery.threads.per.data.dir=1
> log.retention.minutes=360
> log.segment.bytes=52428800
> log.retention.check.interval.ms=300000
> log.cleaner.enable=true
> log.cleanup.policy=delete
> log.cleaner.min.cleanable.ratio=0.5
> log.cleaner.backoff.ms=15000
> log.segment.delete.delay.ms=6000
> auto.create.topics.enable=false
> zookeeper.connect=1.1.1.2:2181,1.1.1.3:2182,1.1.1.4:2183
> zookeeper.connection.timeout.ms=6000
> }}
> But of late there has been a unique case cropping up on the Kafka broker nodes:
> _[2017-12-02 02:47:40,024] ERROR [ReplicaFetcherThread-0-4], Error for partition [__consumer_offsets,15]
> to broker 4:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is
> not the leader for that topic-partition. (kafka.server.ReplicaFetcherThread)_
> The entire server.log is filled with these lines, and it is very large too. Please help
> me understand under what circumstances these can occur, and what measures I need to
> take.
> Please help; this is the third time in the last three Saturdays that I have faced a similar issue.

> Courtesy
> Abhi



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
