kafka-dev mailing list archives

From "wanzi.zhao (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (KAFKA-3951) kafka.common.KafkaStorageException: I/O exception in append to log
Date Tue, 12 Jul 2016 01:26:11 GMT

     [ https://issues.apache.org/jira/browse/KAFKA-3951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

wanzi.zhao updated KAFKA-3951:
------------------------------
    Attachment:     (was: server-1.properties)

> kafka.common.KafkaStorageException: I/O exception in append to log
> ------------------------------------------------------------------
>
>                 Key: KAFKA-3951
>                 URL: https://issues.apache.org/jira/browse/KAFKA-3951
>             Project: Kafka
>          Issue Type: Bug
>          Components: log
>    Affects Versions: 0.9.0.1
>            Reporter: wanzi.zhao
>
> I have two brokers on the same server, listening on two ports: 10.45.33.195:9092 and 10.45.33.195:9093. They use separate log directories, "log.dirs=/tmp/kafka-logs" and "log.dirs=/tmp/kafka-logs-1". When I shut down my consumer application (Java API), change its groupId, and restart it, my Kafka brokers stop working. This is the stack trace I get:
> [2016-07-11 17:02:47,314] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,0] (kafka.coordinator.GroupMetadataManager)
> [2016-07-11 17:02:47,955] FATAL [Replica Manager on Broker 0]: Halting due to unrecoverable I/O error while handling produce request:  (kafka.server.ReplicaManager)
> kafka.common.KafkaStorageException: I/O exception in append to log '__consumer_offsets-38'
>         at kafka.log.Log.append(Log.scala:318)
>         at kafka.cluster.Partition$$anonfun$9.apply(Partition.scala:442)
>         at kafka.cluster.Partition$$anonfun$9.apply(Partition.scala:428)
>         at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
>         at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:268)
>         at kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:428)
>         at kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:401)
>         at kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:386)
>         at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>         at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>         at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
>         at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
>         at scala.collection.AbstractTraversable.map(Traversable.scala:105)
>         at kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:386)
>         at kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:322)
>         at kafka.coordinator.GroupMetadataManager.store(GroupMetadataManager.scala:228)
>         at kafka.coordinator.GroupCoordinator$$anonfun$handleCommitOffsets$9.apply(GroupCoordinator.scala:429)
>         at kafka.coordinator.GroupCoordinator$$anonfun$handleCommitOffsets$9.apply(GroupCoordinator.scala:429)
>         at scala.Option.foreach(Option.scala:236)
>         at kafka.coordinator.GroupCoordinator.handleCommitOffsets(GroupCoordinator.scala:429)
>         at kafka.server.KafkaApis.handleOffsetCommitRequest(KafkaApis.scala:280)
>         at kafka.server.KafkaApis.handle(KafkaApis.scala:76)
>         at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.FileNotFoundException: /tmp/kafka-logs/__consumer_offsets-38/00000000000000000000.index (No such file or directory)
>         at java.io.RandomAccessFile.open0(Native Method)
>         at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
>         at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
>         at kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:277)
>         at kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:276)
>         at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
>         at kafka.log.OffsetIndex.resize(OffsetIndex.scala:276)
>         at kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply$mcV$sp(OffsetIndex.scala:265)
>         at kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply(OffsetIndex.scala:265)
>         at kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply(OffsetIndex.scala:265)
>         at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
>         at kafka.log.OffsetIndex.trimToValidSize(OffsetIndex.scala:264)
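The two-broker setup described in the report can be sketched as a pair of broker config files. The ports and log.dirs values are taken from the issue text; the broker.id values and the listeners key are illustrative assumptions, not from the report:

```properties
# server.properties — first broker (broker.id assumed)
broker.id=0
listeners=PLAINTEXT://10.45.33.195:9092
log.dirs=/tmp/kafka-logs

# server-1.properties — second broker (broker.id assumed)
broker.id=1
listeners=PLAINTEXT://10.45.33.195:9093
log.dirs=/tmp/kafka-logs-1
```

Worth noting for anyone reproducing this: /tmp is commonly cleaned by the operating system (e.g. tmpwatch or systemd-tmpfiles), which can delete segment and index files out from under a running broker, so the FileNotFoundException above may be environmental rather than a Kafka defect.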



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
