kafka-jira mailing list archives

From "Guozhang Wang (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (KAFKA-1895) Investigate moving deserialization and decompression out of KafkaConsumer
Date Mon, 06 Nov 2017 17:42:02 GMT

     [ https://issues.apache.org/jira/browse/KAFKA-1895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Guozhang Wang updated KAFKA-1895:
    Issue Type: Improvement  (was: Sub-task)
        Parent:     (was: KAFKA-1326)

> Investigate moving deserialization and decompression out of KafkaConsumer
> -------------------------------------------------------------------------
>                 Key: KAFKA-1895
>                 URL: https://issues.apache.org/jira/browse/KAFKA-1895
>             Project: Kafka
>          Issue Type: Improvement
>          Components: consumer
>            Reporter: Jay Kreps
>            Assignee: Jason Gustafson
> The consumer implementation in KAFKA-1760 decompresses fetch responses and deserializes
them into ConsumerRecords which are then handed back as the result of poll().
> There are several downsides to this:
> 1. It is impossible to scale deserialization and decompression work beyond the single
thread running the KafkaConsumer.
> 2. The results can come back during the processing of other calls such as commit(),
which means these records may end up being cached a little longer than necessary.
> An alternative would be to have ConsumerRecords wrap the actual compressed, serialized
MemoryRecords chunks and perform decompression and deserialization during iteration. This
way the work could be scaled across a thread pool if needed.
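The lazy approach described above can be sketched roughly as follows. This is an illustrative sketch only, not the actual Kafka API: the class names (LazyRecords), the use of raw byte arrays as a stand-in for compressed MemoryRecords chunks, and the UTF-8 string deserializer are all assumptions made for the example.

```java
import java.nio.charset.StandardCharsets;
import java.util.Iterator;
import java.util.List;

// Hypothetical sketch of the proposal: hold raw serialized bytes and
// deserialize only when the caller iterates, rather than eagerly in poll().
public class LazyRecords implements Iterable<String> {
    // Stand-in for the compressed/serialized MemoryRecords chunks; real
    // chunks would also need decompressing before deserialization.
    private final List<byte[]> rawChunks;

    public LazyRecords(List<byte[]> rawChunks) {
        this.rawChunks = rawChunks;
    }

    @Override
    public Iterator<String> iterator() {
        Iterator<byte[]> raw = rawChunks.iterator();
        return new Iterator<String>() {
            public boolean hasNext() { return raw.hasNext(); }

            // Deserialization happens here, on whichever thread iterates.
            // Splitting rawChunks across several LazyRecords instances would
            // let a thread pool share the work.
            public String next() {
                return new String(raw.next(), StandardCharsets.UTF_8);
            }
        };
    }

    public static void main(String[] args) {
        LazyRecords records = new LazyRecords(List.of(
                "a".getBytes(StandardCharsets.UTF_8),
                "b".getBytes(StandardCharsets.UTF_8)));
        for (String r : records) {
            System.out.println(r);
        }
    }
}
```

Because no bytes are touched until iteration, poll() itself stays cheap, and results arriving during an unrelated call such as commit() cost nothing until the application actually consumes them.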

This message was sent by Atlassian JIRA
