Date: Mon, 12 Sep 2016 21:38:22 +0000 (UTC)
From: "Alexey Ozeritskiy (JIRA)"
To: dev@kafka.apache.org
Subject: [jira] [Commented] (KAFKA-2063) Bound fetch response size (KIP-74)

    [ https://issues.apache.org/jira/browse/KAFKA-2063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485357#comment-15485357 ]

Alexey Ozeritskiy commented on KAFKA-2063:
------------------------------------------

The approach in KAFKA-3979 is better for configurations with a very large message.max.bytes. For example, we have the following config on a cluster of 77 hosts:
{code}
replica.fetch.max.bytes=135000000
replica.socket.receive.buffer.bytes=135000000
message.max.bytes=134217728
socket.request.max.bytes=135000000
{code}
With KIP-74 the process always needs 77*135000000 bytes of memory; with KAFKA-3979 it needs only 77*1000000 bytes on average. I think we should do something about that.

> Bound fetch response size (KIP-74)
> ----------------------------------
>
>                 Key: KAFKA-2063
>                 URL: https://issues.apache.org/jira/browse/KAFKA-2063
>             Project: Kafka
>          Issue Type: Improvement
>            Reporter: Jay Kreps
>            Assignee: Alexey Ozeritskiy
>             Fix For: 0.10.1.0
>
>
> Currently the only bound on the fetch response size is max.partition.fetch.bytes * num_partitions. There are two problems:
> 1. This bound is often large. You may choose max.partition.fetch.bytes=1MB to enable messages of up to 1MB, but if you also need to consume 1k partitions you may receive a 1GB response in the worst case!
> 2. The actual memory usage is unpredictable. Partition assignment changes, and you only actually get the full fetch amount when you are behind and there is a full chunk of data ready. This means an application that seems to work fine will suddenly OOM when partitions shift or when the application falls behind.
> We need to decouple the fetch response size from the number of partitions.
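
To make these numbers concrete (both the 1 MB * 1k-partition worst case above and the 77-host figures from the comment), here is a small back-of-the-envelope sketch; the class name is made up for illustration and the figures are simply copied from the text:
{code:java}
// Back-of-the-envelope sketch of the memory figures quoted above.
// The 1 MB / 1000-partition worst case comes from the issue description,
// the 77-host numbers from the comment; nothing here is Kafka code.
public class FetchMemoryEstimate {
    public static void main(String[] args) {
        // Worst case from the issue description: 1 MB per partition, 1k partitions.
        long worstCaseFetch = 1_000_000L * 1_000L;
        System.out.printf("Worst-case fetch response:    ~%.1f GB%n", worstCaseFetch / 1e9);

        // Figures from the comment: 77 hosts, replica.fetch.max.bytes=135000000.
        long kip74Bytes     = 77L * 135_000_000L;  // what the comment says KIP-74 still requires
        long kafka3979Bytes = 77L * 1_000_000L;    // average the comment attributes to KAFKA-3979
        System.out.printf("KIP-74 (per the comment):     ~%.1f GB%n", kip74Bytes / 1e9);
        System.out.printf("KAFKA-3979 (per the comment): ~%.1f MB%n", kafka3979Bytes / 1e6);
    }
}
{code}
That works out to roughly 1 GB for the worst-case consumer fetch, about 10.4 GB across the 77 replica fetchers under the comment's reading of KIP-74, and about 77 MB on average under KAFKA-3979.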
> The proposal for doing this would be to add a new field to the fetch request, max_bytes, which would control the maximum number of data bytes we would include in the response.
> The implementation on the server side would grab data from each partition in the fetch request until it hit this limit, then send back just the data for the partitions that fit in the response. The implementation would need to start from a random position in the list of topics included in the fetch request to ensure that in a case of backlog we fairly balance between partitions (to avoid giving just the first partition until that is exhausted, then the next partition, etc.).
> This setting will make the max.partition.fetch.bytes field in the fetch request much less useful and we should discuss just getting rid of it.
> I believe this also solves the same thing we were trying to address in KAFKA-598. The max_bytes setting now becomes the new limit that would need to be compared to the maximum message size. This can be much larger--e.g. a 50MB max_bytes setting would be okay, whereas now if you set 50MB you may need to allocate 50MB*num_partitions.
> This will require evolving the fetch request protocol version to add the new field, and we should do a KIP for it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
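
Read as a whole, the quoted proposal amounts to: take the partitions listed in the fetch request, start at a random position, and keep adding partition data until the response-level max_bytes budget is spent. Below is a minimal sketch of that loop; every name in it (FetchResponseBuilder, PartitionData, collectFetchData) is hypothetical and not taken from the Kafka code base.
{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

// Hypothetical sketch of the server-side limit/fairness logic described above;
// none of these names come from the actual Kafka implementation.
final class FetchResponseBuilder {

    // Stand-in for the data available for one requested partition.
    record PartitionData(String topicPartition, byte[] bytes) {}

    /**
     * Accumulates partition data until the response-level max_bytes budget is
     * spent. Iteration starts at a random index so that a backlogged consumer
     * does not always drain the first partition before the others.
     */
    static List<PartitionData> collectFetchData(List<PartitionData> requested, long maxBytes) {
        List<PartitionData> response = new ArrayList<>();
        if (requested.isEmpty()) {
            return response;
        }
        int start = ThreadLocalRandom.current().nextInt(requested.size());
        long remaining = maxBytes;
        for (int i = 0; i < requested.size(); i++) {
            PartitionData p = requested.get((start + i) % requested.size());
            if (p.bytes().length > remaining) {
                break; // budget exhausted; the remaining partitions wait for a later fetch
            }
            response.add(p);
            remaining -= p.bytes().length;
        }
        // A real implementation would also need to guarantee progress when a
        // single message is larger than max_bytes; that detail is omitted here.
        return response;
    }
}
{code}
Starting at a random index is the simplest way to get the fairness the description asks for without keeping per-partition round-robin state on the broker; the actual KIP-74 implementation may well handle this differently.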