Date: Thu, 10 Mar 2016 00:16:40 +0000 (UTC)
From: "Benedict (JIRA)"
To: commits@cassandra.apache.org
Subject: [jira] [Commented] (CASSANDRA-11327) Maintain a histogram of times when writes are blocked due to no available memory

    [ https://issues.apache.org/jira/browse/CASSANDRA-11327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15188351#comment-15188351 ]

Benedict commented on CASSANDRA-11327:
--------------------------------------

No; they're about actually freeing the memory.

The point of memtables is that they completely mask latency until the incoming write rate exceeds the flush rate for long enough to exhaust the total system buffer capacity. The idea is that the cluster should always be provisioned above that level, since it's for real-time service provision. Any rate limit of the kind you describe would artificially introduce latency at all other times, i.e. when a healthy cluster would have none.

Certainly some schemes are better than others, such as calculating the difference between the allocation rate and the flush rate and applying a rate limit when the former exceeds the latter, by an amount inversely proportional to the amount of free space (i.e. so that the latency adulteration only occurs as you approach overload). Actually reclaiming space as flush progresses has the advantage of introducing latency only when absolutely necessary, while also ensuring progress for queries at the disk throughput limit of the cluster.
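A rough sketch of the proportional throttle described above (illustrative only; FreeSpaceThrottle and its parameters are not part of Cassandra): no delay while flushing keeps pace with allocation, and a delay that grows in inverse proportion to the remaining buffer space once it does not.

{code:java}
// Illustrative only; not Cassandra code. Computes an artificial write delay that is
// zero while flushing keeps up with allocation, and that grows as free memtable
// space shrinks once allocation outruns flushing.
public final class FreeSpaceThrottle
{
    private final long capacityBytes;

    public FreeSpaceThrottle(long capacityBytes)
    {
        this.capacityBytes = capacityBytes;
    }

    /** Pause, in microseconds, to impose on an allocating writer for this sample. */
    public long pauseMicros(double allocationBytesPerSec, double flushBytesPerSec, long freeBytes)
    {
        double deficit = allocationBytesPerSec - flushBytesPerSec;
        if (deficit <= 0)
            return 0; // flushing keeps pace: a healthy cluster sees no artificial latency

        // Throttle harder the closer the pool is to exhaustion: ample headroom keeps
        // the delay negligible, a nearly full pool amplifies it sharply.
        double scarcity = (double) capacityBytes / Math.max(freeBytes, 1);
        double deficitFraction = deficit / allocationBytesPerSec;
        return (long) (1_000 * deficitFraction * scarcity); // arbitrary 1 ms base delay
    }
}
{code}

With ample headroom the computed pause stays near zero, matching the point that throttling should only adulterate latency as the node approaches overload.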
> Maintain a histogram of times when writes are blocked due to no available memory
> --------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-11327
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11327
>             Project: Cassandra
>          Issue Type: New Feature
>          Components: Core
>            Reporter: Ariel Weisberg
>
> I have a theory that part of the reason C* is so sensitive to timeouts during saturating write load is that throughput is basically a sawtooth with valleys at zero. This is something I have observed, and it gets worse as you add 2i to a table or do anything else that decreases the throughput of flushing.
> I think the fix for this is to incrementally release memory pinned by memtables and 2i during flushing instead of releasing it all at once. I know that's not really possible, but we can fake it with memory accounting that tracks how close to completion flushing is and releases permits for additional memory. This will lead to a bit of a sawtooth in real memory usage, but we can account for that so the peak footprint stays the same.
> I think the end result of this change will still be a sawtooth, but the valley of the sawtooth will not be zero; it will be the rate at which flushing progresses. Optimizing the rate at which flushing progresses, and its fairness with other work, can then be tackled separately.
> Before we do this I think we should demonstrate that pinned memory due to flushing is actually the issue, by getting better visibility into how often, and for how long, no memory is available: maintain a histogram of the spans of time during which no memory is available and a thread is blocked.
> [MemtableAllocator$SubPool.allocate(long)|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/utils/memory/MemtableAllocator.java#L186] should be a relatively straightforward entry point for this. The first thread to block can mark the start of memory starvation and the last thread out can mark the end. Have a periodic task that tracks the amount of time spent blocked per interval of time, and if it is greater than some threshold, log with more details, possibly at debug.
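A minimal sketch of the proposed bookkeeping, assuming hypothetical names (MemoryStarvationTracker is not an existing Cassandra class): the first thread to block marks the start of a starvation span, the last thread out closes it, and a periodic task reports the blocked time per interval when it crosses a threshold.

{code:java}
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative only; not part of the Cassandra codebase. Shows the shape of the
// instrumentation around the blocking allocate() path described above.
public final class MemoryStarvationTracker
{
    private final AtomicInteger blockedThreads = new AtomicInteger();
    private final AtomicLong spanStartNanos = new AtomicLong();
    private final AtomicLong blockedNanosThisInterval = new AtomicLong();

    /** Called just before a writer parks waiting for memtable memory. */
    public void enterBlocked()
    {
        if (blockedThreads.getAndIncrement() == 0)
            spanStartNanos.set(System.nanoTime()); // first thread in marks the start of starvation
    }

    /** Called once the writer has obtained its memory and is no longer blocked. */
    public void exitBlocked()
    {
        if (blockedThreads.decrementAndGet() == 0)
        {
            // last thread out closes the span; a real integration would also feed a histogram here
            long span = Math.max(0, System.nanoTime() - spanStartNanos.get());
            blockedNanosThisInterval.addAndGet(span);
        }
    }

    /** Periodic task: report if more than thresholdMillis was spent blocked since the last call. */
    public void reportAndReset(long thresholdMillis)
    {
        long blockedMillis = TimeUnit.NANOSECONDS.toMillis(blockedNanosThisInterval.getAndSet(0));
        if (blockedMillis > thresholdMillis)
            System.out.printf("Writes blocked on memtable memory for %d ms in the last interval%n", blockedMillis);
    }
}
{code}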