kafka-dev mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (KAFKA-4103) DumpLogSegments cannot print data from offsets topic
Date Wed, 31 Aug 2016 20:41:20 GMT

    [ https://issues.apache.org/jira/browse/KAFKA-4103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15453298#comment-15453298 ]

ASF GitHub Bot commented on KAFKA-4103:
---------------------------------------

Github user asfgit closed the pull request at:

    https://github.com/apache/kafka/pull/1807


> DumpLogSegments cannot print data from offsets topic
> ----------------------------------------------------
>
>                 Key: KAFKA-4103
>                 URL: https://issues.apache.org/jira/browse/KAFKA-4103
>             Project: Kafka
>          Issue Type: Bug
>          Components: tools
>            Reporter: Ewen Cheslack-Postava
>            Assignee: Jason Gustafson
>            Priority: Blocker
>             Fix For: 0.10.1.0
>
>
> It looks like there's been a regression in the DumpLogSegments tool. I'm marking it a blocker since it appears we can no longer dump offset information from this tool, which makes it really hard to debug anything related to __consumer_offsets.
> The 0.10.0 branch seems to work fine, but even with offsets log files generated using only old formats (0.10.0 branch), the DumpLogSegments tool from trunk (i.e. 0.10.1.0-SNAPSHOT with latest githash b91eeac9438b8718c410045b0e9191296ebb536d as of reporting this) will cause the exception below. This was found while doing some basic testing of KAFKA-4062.
> {quote}
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> offset: 0 position: 0 CreateTime: 1472615183913 isvalid: true payloadsize: 199 magic: 1 compresscodec: NoCompressionCodec crc: 2036280914 keysize: 26
> Exception in thread "main" java.util.IllegalFormatConversionException: x != scala.math.BigInt
>        	at java.util.Formatter$FormatSpecifier.failConversion(Formatter.java:4045)
>        	at java.util.Formatter$FormatSpecifier.printInteger(Formatter.java:2748)
>        	at java.util.Formatter$FormatSpecifier.print(Formatter.java:2702)
>        	at java.util.Formatter.format(Formatter.java:2488)
>        	at java.util.Formatter.format(Formatter.java:2423)
>        	at java.lang.String.format(String.java:2792)
>        	at kafka.tools.DumpLogSegments$OffsetsMessageParser.kafka$tools$DumpLogSegments$OffsetsMessageParser$$hex(DumpLogSegments.scala:240)
>        	at kafka.tools.DumpLogSegments$OffsetsMessageParser$$anonfun$3.apply(DumpLogSegments.scala:272)
>        	at kafka.tools.DumpLogSegments$OffsetsMessageParser$$anonfun$3.apply(DumpLogSegments.scala:262)
>        	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>        	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>        	at scala.collection.immutable.List.foreach(List.scala:318)
>        	at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
>        	at scala.collection.AbstractTraversable.map(Traversable.scala:105)
>        	at kafka.tools.DumpLogSegments$OffsetsMessageParser.parseGroupMetadata(DumpLogSegments.scala:262)
>        	at kafka.tools.DumpLogSegments$OffsetsMessageParser.parse(DumpLogSegments.scala:290)
>        	at kafka.tools.DumpLogSegments$$anonfun$kafka$tools$DumpLogSegments$$dumpLog$1$$anonfun$apply$3.apply(DumpLogSegments.scala:332)
>        	at kafka.tools.DumpLogSegments$$anonfun$kafka$tools$DumpLogSegments$$dumpLog$1$$anonfun$apply$3.apply(DumpLogSegments.scala:312)
>        	at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>        	at kafka.utils.IteratorTemplate.foreach(IteratorTemplate.scala:30)
>        	at kafka.tools.DumpLogSegments$$anonfun$kafka$tools$DumpLogSegments$$dumpLog$1.apply(DumpLogSegments.scala:312)
>        	at kafka.tools.DumpLogSegments$$anonfun$kafka$tools$DumpLogSegments$$dumpLog$1.apply(DumpLogSegments.scala:310)
>        	at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>        	at kafka.utils.IteratorTemplate.foreach(IteratorTemplate.scala:30)
>        	at kafka.tools.DumpLogSegments$.kafka$tools$DumpLogSegments$$dumpLog(DumpLogSegments.scala:310)
>        	at kafka.tools.DumpLogSegments$$anonfun$main$1.apply(DumpLogSegments.scala:96)
>        	at kafka.tools.DumpLogSegments$$anonfun$main$1.apply(DumpLogSegments.scala:92)
>        	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
>        	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
>        	at kafka.tools.DumpLogSegments$.main(DumpLogSegments.scala:92)
>        	at kafka.tools.DumpLogSegments.main(DumpLogSegments.scala)
> {quote}
> I haven't really dug in, but the source of the error is confusing since the relevant string formatting code doesn't seem to have changed recently. It seems it might be related to changes in the group metadata code (see the sketch after this quoted description). I did a git bisect and this seems to be the bad commit:
> {quote}
> 8c551675adb11947e9f27b20a9195c9c4a20b432 is the first bad commit
> commit 8c551675adb11947e9f27b20a9195c9c4a20b432
> Author: Jason Gustafson <jason@confluent.io>
> Date:   Wed Jun 15 19:46:42 2016 -0700
>     KAFKA-2720: expire group metadata when all offsets have expired
>     Author: Jason Gustafson <jason@confluent.io>
>     Reviewers: Liquan Pei, Onur Karaman, Guozhang Wang
>     Closes #1427 from hachikuji/KAFKA-2720
> :040000 040000 0da885a8896f0894940cc1b002160ca4e7176905 eb5a672ae09159264993bc61b6a18a2f19de804e M	core
> {quote}
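
The exception message ("x != scala.math.BigInt") points at java.util.Formatter: its %x conversion accepts Byte, Short, Integer, Long and java.math.BigInteger, but not scala.math.BigInt, so the hex helper in OffsetsMessageParser fails as soon as the parsed group metadata hands it a BigInt. The snippet below is a minimal standalone sketch of that failure mode, not code from the Kafka sources; the object name and the sample value are made up for illustration.

{code:scala}
// Hypothetical standalone repro (not Kafka code): java.util.Formatter rejects
// scala.math.BigInt for the %x conversion, producing the same
// IllegalFormatConversionException seen in the stack trace above.
object BigIntHexRepro {
  def main(args: Array[String]): Unit = {
    val key: Any = BigInt(26)          // stand-in for a value pulled out of parsed group metadata
    try {
      println("%x".format(key))        // throws IllegalFormatConversionException: x != scala.math.BigInt
    } catch {
      case e: java.util.IllegalFormatConversionException =>
        println("caught: " + e.getMessage)
    }
    // Converting to a plain Long (or a java.math.BigInteger) first formats cleanly:
    println("%x".format(BigInt(26).toLong))   // prints "1a"
  }
}
{code}

Whether the actual fix in https://github.com/apache/kafka/pull/1807 converts the value, changes the format call, or restructures the parser is not visible from this thread, so the Long conversion above is only an illustration of why the format call succeeds once the argument is a type the Formatter accepts.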



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
