kafka-dev mailing list archives

From "Onur Karaman (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (KAFKA-1895) Investigate moving deserialization and decompression out of KafkaConsumer
Date Thu, 23 Feb 2017 00:28:44 GMT

    [ https://issues.apache.org/jira/browse/KAFKA-1895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15879561#comment-15879561 ]

Onur Karaman commented on KAFKA-1895:
-------------------------------------

I think it's worth defining the relation between the two problems mentioned earlier:
# no means of access to raw FetchResponse data
# lack of a separate IO thread

I think Problem 1 is primarily a performance problem, while Problem 2 is both a performance and a usability
problem (KAFKA-4753 shows that it can lead to starvation).

Addressing Problem 1 doesn't solve Problem 2.

Addressing Problem 2 partially solves Problem 1. With a solution to Problem 2, we could also
do the decompression/deserialization in the separate IO thread, removing the
decompression-in-user-thread performance concern. But this wouldn't address the decompression-then-recompression
performance concern in MirrorMaker or perhaps some stream processing use cases.

I think we need to solve both problems.
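
For concreteness, here is a minimal sketch of the Problem 2 shape, assuming nothing beyond today's consumer API: a dedicated IO thread owns the (non-thread-safe) KafkaConsumer and polls continuously, handing batches to the application thread over a bounded queue. The class and all names in it are hypothetical, not part of any proposal here.

{code:java}
import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Hypothetical sketch: a dedicated IO thread owns the KafkaConsumer
// (which is not thread-safe) and keeps fetching, so the application
// thread never performs network IO itself.
public class BackgroundPollingConsumer implements Runnable {
    private final KafkaConsumer<byte[], byte[]> consumer;
    // Bounded queue applies back-pressure when the application falls behind.
    private final BlockingQueue<ConsumerRecords<byte[], byte[]>> queue =
        new ArrayBlockingQueue<>(2);

    public BackgroundPollingConsumer(Properties config, String topic) {
        // config is assumed to use ByteArrayDeserializer for keys and values.
        this.consumer = new KafkaConsumer<>(config);
        this.consumer.subscribe(Collections.singletonList(topic));
    }

    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                ConsumerRecords<byte[], byte[]> records = consumer.poll(100);
                if (!records.isEmpty()) {
                    // Caveat: put() blocks when the queue is full, which stalls
                    // the poll loop; a real implementation would pause() the
                    // assigned partitions instead and keep polling.
                    queue.put(records);
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            consumer.close();
        }
    }

    // Called from the application thread in place of poll().
    public ConsumerRecords<byte[], byte[]> take() throws InterruptedException {
        return queue.take();
    }
}
{code}

Note that the batches handed over the queue are already-deserialized ConsumerRecords, so a sketch like this only relocates the decompression/deserialization work; it does nothing for the raw-access side of Problem 1, which is the point above.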

> Investigate moving deserialization and decompression out of KafkaConsumer
> -------------------------------------------------------------------------
>
>                 Key: KAFKA-1895
>                 URL: https://issues.apache.org/jira/browse/KAFKA-1895
>             Project: Kafka
>          Issue Type: Sub-task
>          Components: consumer
>            Reporter: Jay Kreps
>
> The consumer implementation in KAFKA-1760 decompresses fetch responses and deserializes
> them into ConsumerRecords, which are then handed back as the result of poll().
> There are several downsides to this:
> 1. It is impossible to scale serialization and decompression work beyond the single thread
> running the KafkaConsumer.
> 2. The results can come back during the processing of other calls such as commit(), which
> can result in caching these records a little longer.
> An alternative would be to have ConsumerRecords wrap the actual compressed serialized
> MemoryRecords chunks and do the deserialization during iteration. This way you could scale
> this over a thread pool if needed.
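
A minimal sketch of that lazy-iteration idea, assuming decompression has already yielded the serialized values as byte arrays (the class and its fields are hypothetical, not Kafka API; only the Deserializer interface is real):

{code:java}
import java.util.Iterator;
import java.util.List;

import org.apache.kafka.common.serialization.Deserializer;

// Hypothetical wrapper: holds values exactly as fetched (still serialized)
// and defers deserialization until iteration.
public class LazyRecords<V> implements Iterable<V> {
    private final String topic;
    private final List<byte[]> rawValues;
    private final Deserializer<V> valueDeserializer;

    public LazyRecords(String topic, List<byte[]> rawValues, Deserializer<V> valueDeserializer) {
        this.topic = topic;
        this.rawValues = rawValues;
        this.valueDeserializer = valueDeserializer;
    }

    @Override
    public Iterator<V> iterator() {
        final Iterator<byte[]> raw = rawValues.iterator();
        return new Iterator<V>() {
            @Override
            public boolean hasNext() {
                return raw.hasNext();
            }

            @Override
            public V next() {
                // Deserialization cost is paid here, on whichever thread
                // iterates, so slices of the records can be handed to a
                // thread pool and deserialized in parallel.
                return valueDeserializer.deserialize(topic, raw.next());
            }
        };
    }
}
{code}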



