Date: Wed, 30 Jan 2013 05:43:12 +0000 (UTC)
From: "Jay Kreps (JIRA)" 
To: dev@kafka.apache.org
Subject: [jira] [Commented] (KAFKA-671) DelayedProduce requests should not hold full producer request data

    [ https://issues.apache.org/jira/browse/KAFKA-671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13566188#comment-13566188 ]

Jay Kreps commented on KAFKA-671:
---------------------------------

Took a look at this. Looks reasonable. Other atrocities have occurred inside RequestQueue, but they aren't from this patch. Re-opened KAFKA-683. :-)

For maps it is nicer to import scala.collection.mutable and then refer to mutable.Map, rather than importing scala.collection.mutable.Map directly.

I think the question is why we need to hang onto the ProduceRequest object at all. We are doing work to null it out, but why can't we just take the one or two fields we need from it in the DelayedProduce?
If we do that, then won't the ProduceRequest be out of scope after handleProduce and get GC'd? Is the root cause of this the fact that we moved deserialization into the network thread and shoved the api object into the request?

> DelayedProduce requests should not hold full producer request data
> ------------------------------------------------------------------
>
>                 Key: KAFKA-671
>                 URL: https://issues.apache.org/jira/browse/KAFKA-671
>             Project: Kafka
>          Issue Type: Bug
>    Affects Versions: 0.8
>            Reporter: Joel Koshy
>            Assignee: Sriram Subramanian
>            Priority: Blocker
>              Labels: bugs, p1
>             Fix For: 0.8.1
>
>         Attachments: outOfMemFix-v1.patch, outOfMemFix-v2.patch, outOfMemFix-v2-rebase.patch, outOfMemFix-v3.patch
>
> Per summary, this leads to unnecessary memory usage.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
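[Editor's sketch] The import style mentioned in the comment above can be illustrated as follows. This is a generic Scala idiom, not Kafka code; the map contents are made up for the example:

```scala
// Preferred: import the mutable package and qualify at the use site.
import scala.collection.mutable

object ImportStyleExample {
  def main(args: Array[String]): Unit = {
    // Writing `mutable.Map` makes the mutability visible wherever the
    // map is used, instead of silently shadowing the immutable default
    // (which is what `import scala.collection.mutable.Map` would do).
    val acks = mutable.Map[String, Int]()
    acks("topic-0") = 1
    assert(acks("topic-0") == 1)
  }
}
```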
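[Editor's sketch] The "take the one or two fields we need" suggestion amounts to copying scalars out of the request instead of retaining a reference to it. The types and field names below are hypothetical stand-ins, not Kafka's actual classes; the point is only the ownership pattern:

```scala
// Hypothetical stand-in for the full producer request, including the
// large message payload that KAFKA-671 is trying not to retain.
case class ProduceRequest(correlationId: Int, requiredAcks: Short, payload: Array[Byte])

// The delayed operation copies out only the scalar fields it needs.
// Because it holds no reference to the ProduceRequest, the request
// (and its payload) becomes unreachable once handleProduce returns
// and can be garbage-collected, with no need to null anything out.
class DelayedProduce(val correlationId: Int, val requiredAcks: Short)

object Sketch {
  def handleProduce(req: ProduceRequest): DelayedProduce =
    new DelayedProduce(req.correlationId, req.requiredAcks)

  def main(args: Array[String]): Unit = {
    val req = ProduceRequest(42, 1, new Array[Byte](1024 * 1024))
    val delayed = handleProduce(req)
    assert(delayed.correlationId == 42)
    assert(delayed.requiredAcks == 1)
  }
}
```

This is the alternative to the patch's approach of keeping the request and nulling it out later: if nothing retains the reference, there is nothing to null.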