kafka-jira mailing list archives

From "Guozhang Wang (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (KAFKA-3999) Consumer bytes-fetched metric uses decompressed message size
Date Sat, 23 Sep 2017 04:49:05 GMT

     [ https://issues.apache.org/jira/browse/KAFKA-3999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Guozhang Wang updated KAFKA-3999:

*Reminder to the contributor / reviewer of the PR*: please note that the code deadline for
1.0.0 is less than 2 weeks away (Oct. 4th). Please re-evaluate your JIRA and decide whether
it should still be merged into 1.0.0, be pushed out to 1.1.0, or be closed directly if the
JIRA is no longer valid; re-assign the contributor / committer if you are no longer working
on the JIRA.

> Consumer bytes-fetched metric uses decompressed message size
> ------------------------------------------------------------
>                 Key: KAFKA-3999
>                 URL: https://issues.apache.org/jira/browse/KAFKA-3999
>             Project: Kafka
>          Issue Type: Bug
>          Components: consumer
>    Affects Versions:
>            Reporter: Jason Gustafson
>            Assignee: Vahid Hashemian
>            Priority: Minor
>             Fix For: 1.0.0
> It looks like the computation for the bytes-fetched metric uses the size of the decompressed
> message set. I would have expected it to be based on the raw size of the fetch responses.
> Perhaps it would be helpful to expose both the raw and decompressed fetch sizes?
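The gap the report describes is easy to demonstrate outside Kafka. The sketch below (plain Java, not Kafka consumer code; the class name and sample payload are invented for illustration) gzips a compressible batch of records and compares the wire size of the compressed bytes, which is what a fetch response actually transfers, with the decompressed size, which is what the metric was reporting.

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class FetchBytesDemo {
    public static void main(String[] args) throws Exception {
        // Stand-in for a batch of records: highly repetitive, so it compresses well.
        byte[] decompressed = "some repetitive record value "
                .repeat(200)
                .getBytes(StandardCharsets.UTF_8);

        // Compress it, as a producer configured with gzip compression would.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(buf)) {
            gz.write(decompressed);
        }
        byte[] raw = buf.toByteArray(); // the bytes a fetch response actually carries

        System.out.println("raw (wire) bytes:   " + raw.length);
        System.out.println("decompressed bytes: " + decompressed.length);
        // bytes-fetched was recording the decompressed size, which for a
        // compressible batch like this is many times the bytes actually fetched.
    }
}
```

For well-compressing data the two numbers differ by an order of magnitude or more, which is why reporting only the decompressed size makes the metric misleading for network-throughput monitoring.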

This message was sent by Atlassian JIRA
