kafka-dev mailing list archives

From "Jun Rao (Commented) (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (KAFKA-273) Occasional GZIP errors on the server while writing compressed data to disk
Date Wed, 07 Mar 2012 17:42:56 GMT

    [ https://issues.apache.org/jira/browse/KAFKA-273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13224538#comment-13224538
] 

Jun Rao commented on KAFKA-273:
-------------------------------

The patch for EOF looks fine. We probably need to run some system tests to make sure it doesn't
introduce new problems, especially when the compressed size is relatively large. Once that
testing is done, we can commit the patch.
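For context on the EOF handling being discussed, the failure in the trace below is the classic java.util.zip one: a GZIP stream is either written without flushing its trailer, or read past its actual end, which raises "Unexpected end of ZLIB input stream". The sketch below is not the attached kafka-273.patch; it is a minimal, self-contained illustration (class name `GzipRoundTrip` is hypothetical) of the safe pattern: close the `GZIPOutputStream` so the trailer is written, and loop on `read()` until it returns -1 rather than assuming a fixed amount of data.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipRoundTrip {
    // Compress a byte array with GZIP. Closing the stream is essential:
    // close() flushes the deflater and writes the GZIP trailer; a stream
    // missing its trailer triggers EOFException on the reader side.
    static byte[] compress(byte[] data) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(data);
        }
        return bos.toByteArray();
    }

    // Decompress by looping until read() returns -1 (true end of stream),
    // never reading beyond it -- attempting another read on an already
    // exhausted or truncated stream is what surfaces the EOFException.
    static byte[] decompress(byte[] gzipped) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPInputStream gz =
                new GZIPInputStream(new ByteArrayInputStream(gzipped))) {
            byte[] buf = new byte[1024];
            int n;
            while ((n = gz.read(buf)) != -1) {
                bos.write(buf, 0, n);
            }
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] original = "hello kafka".getBytes("UTF-8");
        byte[] roundTripped = decompress(compress(original));
        System.out.println(new String(roundTripped, "UTF-8"));
    }
}
```

The Scala code in the trace drives this same read loop through `Stream.continually(...).takeWhile(...)`, so the fix under discussion amounts to treating end-of-stream as a normal termination condition rather than letting the EOFException propagate.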
                
> Occasional GZIP errors on the server while writing compressed data to disk
> ---------------------------------------------------------------------------
>
>                 Key: KAFKA-273
>                 URL: https://issues.apache.org/jira/browse/KAFKA-273
>             Project: Kafka
>          Issue Type: Bug
>          Components: core
>    Affects Versions: 0.7
>            Reporter: Neha Narkhede
>            Assignee: Neha Narkhede
>         Attachments: kafka-273.patch
>
>
> Occasionally, we see the following errors on the Kafka server -
> 2012/02/08 14:58:21.832 ERROR [KafkaRequestHandlers] [kafka-processor-6] [kafka] Error processing MultiProducerRequest on NusImpressionSetEvent:0
> java.io.EOFException: Unexpected end of ZLIB input stream
>         at java.util.zip.InflaterInputStream.fill(InflaterInputStream.java:223)
>         at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:141)
>         at java.util.zip.GZIPInputStream.read(GZIPInputStream.java:92)
>         at java.io.FilterInputStream.read(FilterInputStream.java:90)
>         at kafka.message.GZIPCompression.read(CompressionUtils.scala:52)
>         at kafka.message.CompressionUtils$$anonfun$decompress$1.apply$mcI$sp(CompressionUtils.scala:143)
>         at kafka.message.CompressionUtils$$anonfun$decompress$1.apply(CompressionUtils.scala:143)
>         at kafka.message.CompressionUtils$$anonfun$decompress$1.apply(CompressionUtils.scala:143)
>         at scala.collection.immutable.Stream$$anonfun$continually$1.apply(Stream.scala:598)
>         at scala.collection.immutable.Stream$$anonfun$continually$1.apply(Stream.scala:598)
>         at scala.collection.immutable.Stream$Cons.tail(Stream.scala:555)
>         at scala.collection.immutable.Stream$Cons.tail(Stream.scala:549)
>         at scala.collection.immutable.Stream$$anonfun$takeWhile$1.apply(Stream.scala:394)
>         at scala.collection.immutable.Stream$$anonfun$takeWhile$1.apply(Stream.scala:394)
>         at scala.collection.immutable.Stream$Cons.tail(Stream.scala:555)
>         at scala.collection.immutable.Stream$Cons.tail(Stream.scala:549)
>         at scala.collection.immutable.Stream.foreach(Stream.scala:255)
>         at kafka.message.CompressionUtils$.decompress(CompressionUtils.scala:143)
>         at kafka.message.ByteBufferMessageSet$$anon$1.makeNextOuter(ByteBufferMessageSet.scala:119)
>         at kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:132)
>         at kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:81)
>         at kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:59)
>         at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:51)
>         at scala.collection.Iterator$class.foreach(Iterator.scala:631)
>         at kafka.utils.IteratorTemplate.foreach(IteratorTemplate.scala:30)
>         at scala.collection.IterableLike$class.foreach(IterableLike.scala:79)
>         at kafka.message.MessageSet.foreach(MessageSet.scala:87)
>         at kafka.log.Log.append(Log.scala:204)
>         at kafka.server.KafkaRequestHandlers.kafka$server$KafkaRequestHandlers$$handleProducerRequest(KafkaRequestHandlers.scala:70)
>         at kafka.server.KafkaRequestHandlers$$anonfun$handleMultiProducerRequest$1.apply(KafkaRequestHandlers.scala:63)
>         at kafka.server.KafkaRequestHandlers$$anonfun$handleMultiProducerRequest$1.apply(KafkaRequestHandlers.scala:63)
>         at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
>         at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
>         at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
>         at scala.collection.mutable.ArrayOps.foreach(ArrayOps.scala:34)
>         at scala.collection.TraversableLike$class.map(TraversableLike.scala:206)
>         at scala.collection.mutable.ArrayOps.map(ArrayOps.scala:34)
>         at kafka.server.KafkaRequestHandlers.handleMultiProducerRequest(KafkaRequestHandlers.scala:63)
>         at kafka.server.KafkaRequestHandlers$$anonfun$handlerFor$4.apply(KafkaRequestHandlers.scala:42)
>         at kafka.server.KafkaRequestHandlers$$anonfun$handlerFor$4.apply(KafkaRequestHandlers.scala:42)
>         at kafka.network.Processor.handle(SocketServer.scala:297)
>         at kafka.network.Processor.read(SocketServer.scala:320)
>         at kafka.network.Processor.run(SocketServer.scala:215)
>         at java.lang.Thread.run(Thread.java:619)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
