kafka-jira mailing list archives

From "Nick Dimiduk (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (KAFKA-1980) Console consumer throws OutOfMemoryError with large max-messages
Date Wed, 30 Aug 2017 17:38:03 GMT

    [ https://issues.apache.org/jira/browse/KAFKA-1980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16147659#comment-16147659 ]

Nick Dimiduk commented on KAFKA-1980:
-------------------------------------

Thanks for the explanation [~omkreddy]. Better to resolve this as 'won't fix', not 'fixed',
and to link this issue to the new one, so as to avoid repeating this mistake in the new implementation,
right?

> Console consumer throws OutOfMemoryError with large max-messages
> ----------------------------------------------------------------
>
>                 Key: KAFKA-1980
>                 URL: https://issues.apache.org/jira/browse/KAFKA-1980
>             Project: Kafka
>          Issue Type: Bug
>          Components: tools
>    Affects Versions: 0.8.1.1, 0.8.2.0
>            Reporter: Håkon Hitland
>            Priority: Minor
>         Attachments: kafka-1980.patch
>
>
> Tested on kafka_2.11-0.8.2.0
> Steps to reproduce:
> - Have any topic with at least 1 GB of data.
> - Use kafka-console-consumer.sh on the topic, passing a large number to --max-messages, e.g.:
> $ bin/kafka-console-consumer.sh --zookeeper localhost --topic test.large --from-beginning --max-messages 99999999 | head -n 40
> Expected result:
> Should stream messages up to max-messages
> Result:
> Out of memory error:
> [2015-02-23 19:41:35,006] ERROR OOME with size 1048618 (kafka.network.BoundedByteBufferReceive)
> java.lang.OutOfMemoryError: Java heap space
> 	at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
> 	at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
> 	at kafka.network.BoundedByteBufferReceive.byteBufferAllocate(BoundedByteBufferReceive.scala:80)
> 	at kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:63)
> 	at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
> 	at kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
> 	at kafka.network.BlockingChannel.receive(BlockingChannel.scala:111)
> 	at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:71)
> 	at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:68)
> 	at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:112)
> 	at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
> 	at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
> 	at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
> 	at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:111)
> 	at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
> 	at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
> 	at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
> 	at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:110)
> 	at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:94)
> 	at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:86)
> 	at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
> As a first guess I'd say that this is caused by slice() taking more memory than expected. Perhaps because it is called on an Iterable and not an Iterator?
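
The distinction the reporter is guessing at can be illustrated outside of Kafka. A minimal Python sketch (an analogy, not the actual Scala consumer code): slicing a strict collection materializes everything first, while slicing an iterator lazily produces only the items actually consumed, which is the behavior a console consumer piped into `head` needs.

```python
# Hypothetical sketch: lazy vs eager slicing of a message stream.
# None of these names come from Kafka; they only model the reported symptom.
from itertools import islice

def messages(n):
    """Generator standing in for a stream of n messages."""
    for i in range(n):
        yield i

# Lazy: islice over an iterator never materializes the whole stream;
# only the 40 consumed items are ever produced, so memory stays O(1).
first_40 = list(islice(messages(99_999_999), 40))

# Eager (the failure mode): materializing the full stream before slicing
# allocates all 99,999,999 items up front -- the analogue of the OOM above.
# first_40 = list(messages(99_999_999))[:40]   # don't do this
```

If the Scala code called `slice()` on an `Iterable` that buffered its contents rather than on an `Iterator`, it would hit the eager case above, which is consistent with the heap exhaustion in the stack trace.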



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
