From: "Benjamin Coverston (JIRA)"
To: commits@cassandra.apache.org
Date: Wed, 29 Jun 2011 22:45:29 +0000 (UTC)
Subject: [jira] [Updated] (CASSANDRA-1608) Redesigned Compaction

     [ https://issues.apache.org/jira/browse/CASSANDRA-1608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Benjamin Coverston updated CASSANDRA-1608:
------------------------------------------

    Attachment: 1608-v8.txt

First, the good:

1. Modified the code such that tombstone purging during minor compactions uses the interval tree to prune the list of SSTables, speeding up compactions by at least an order of magnitude when the number of SSTables in a column family exceeds ~500.

2. Tested reads and writes. Write speeds (unsurprisingly) are not affected by this compaction strategy. Reads seem to keep up as well. The interval tree does a good job here of making sure that bloom filters are queried only for those SSTables that fall into the queried range.

3. Three successive runs of stress inserting 10M keys resulted in ~3 GB of data stored under the leveled (leveldb-style) strategy. By comparison, the same run using the tiered (default) strategy resulted in ~8 GB of data.

The Meh:

Compactions do back up when setting the flush size to 64 MB and the leveled SSTable size to anywhere between 5 and 10 MB. On the upside, if your load has peaks and quieter times, this compaction strategy will trigger a periodic check to "catch up" once all event-scheduled compactions complete.

Interestingly, this extra I/O has an upside. For datasets that frequently overwrite old data that has already been flushed to disk, there is the potential for substantial de-duplication of data. Further, during reads the number of rows that must be merged for a single row is bounded by the number of levels plus the number of un-leveled SSTables.
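For readers unfamiliar with the pruning step in point 1, here is a minimal sketch of interval-tree pruning over SSTable key ranges. It is illustrative only and assumes its own names (Interval, SSTableIntervalTree, tokens represented as longs); it is not the code in the attached patch and not Cassandra's internal API.

import java.util.ArrayList;
import java.util.List;

// Minimal sketch: prune compaction candidates with an augmented interval tree.
// All names here are hypothetical stand-ins, not Cassandra classes.
public final class SSTableIntervalTree
{
    // Closed key range [min, max] covered by one SSTable (tokens as longs for simplicity).
    public static final class Interval
    {
        final long min, max;
        final String sstable;        // stand-in for a reference to the SSTable itself

        Interval(long min, long max, String sstable)
        {
            this.min = min; this.max = max; this.sstable = sstable;
        }
    }

    private static final class Node
    {
        final Interval interval;
        final long maxInSubtree;     // classic augmentation: largest 'max' anywhere below this node
        final Node left, right;

        Node(Interval interval, Node left, Node right)
        {
            this.interval = interval;
            this.left = left;
            this.right = right;
            long m = interval.max;
            if (left != null)  m = Math.max(m, left.maxInSubtree);
            if (right != null) m = Math.max(m, right.maxInSubtree);
            this.maxInSubtree = m;
        }
    }

    private final Node root;

    // Build from intervals sorted by 'min'; midpoint recursion keeps the tree balanced.
    public SSTableIntervalTree(List<Interval> sortedByMin)
    {
        this.root = build(sortedByMin, 0, sortedByMin.size());
    }

    private static Node build(List<Interval> xs, int lo, int hi)
    {
        if (lo >= hi)
            return null;
        int mid = (lo + hi) >>> 1;
        return new Node(xs.get(mid), build(xs, lo, mid), build(xs, mid + 1, hi));
    }

    // All SSTables whose key range overlaps [min, max] -- the only ones a compaction needs to look at.
    public List<Interval> search(long min, long max)
    {
        List<Interval> hits = new ArrayList<>();
        search(root, min, max, hits);
        return hits;
    }

    private static void search(Node node, long min, long max, List<Interval> hits)
    {
        if (node == null || node.maxInSubtree < min)
            return;                                   // nothing in this subtree can overlap
        search(node.left, min, max, hits);
        if (node.interval.min <= max && node.interval.max >= min)
            hits.add(node.interval);
        if (node.interval.min <= max)                 // right subtree only if its mins can still overlap
            search(node.right, min, max, hits);
    }
}

Given the SSTables of a column family indexed this way, a tombstone purge check only has to consult search(min, max) for the range it touches instead of examining every SSTable in the column family, which is the effect described in point 1.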
> Redesigned Compaction
> ---------------------
>
>                 Key: CASSANDRA-1608
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-1608
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core
>            Reporter: Chris Goffinet
>            Assignee: Benjamin Coverston
>         Attachments: 0001-leveldb-style-compaction.patch, 1608-v2.txt, 1608-v3.txt, 1608-v4.txt, 1608-v5.txt, 1608-v7.txt, 1608-v8.txt
>
>
> After seeing the I/O issues in CASSANDRA-1470, I've been doing some more thinking on this subject that I wanted to lay out.
> I propose we redo the concept of how compaction works in Cassandra. At the moment, compaction is kicked off based on a write access pattern, not a read access pattern. In most cases, you want the opposite. You want to be able to track how well each SSTable is performing in the system. If we were to keep in-memory statistics for each SSTable and prioritize them based on how often they are accessed and on bloom filter hit/miss ratios, we could intelligently group the SSTables that are being read most often and schedule them for compaction. We could also schedule lower-priority maintenance on SSTables that are not often accessed.
> I also propose we limit each SSTable to a fixed size, which gives us the ability to better utilize our bloom filters in a predictable manner. At the moment, after a certain size the bloom filters become less reliable. This would also allow us to group the most-accessed data. Currently the size of an SSTable can grow to a point where large portions of the data might not actually be accessed as often.
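As a rough illustration of the read-driven prioritization proposed in the description above, the sketch below keeps in-memory read counts and bloom filter hit/miss figures per SSTable and orders compaction candidates by a simple "heat" score. All names (ReadStats, heat, hottestFirst) and the scoring formula are hypothetical; they are not taken from any of the attached patches.

import java.util.Comparator;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical per-SSTable read statistics used to prioritize compaction candidates.
public final class ReadStats
{
    private final AtomicLong reads = new AtomicLong();               // rows actually served from this SSTable
    private final AtomicLong bloomChecks = new AtomicLong();         // bloom filter probes against this SSTable
    private final AtomicLong bloomFalsePositives = new AtomicLong(); // probes that passed but found no row

    public void recordRead()               { reads.incrementAndGet(); }
    public void recordBloomCheck()         { bloomChecks.incrementAndGet(); }
    public void recordBloomFalsePositive() { bloomFalsePositives.incrementAndGet(); }

    // Crude "heat" score: frequently read tables, and tables whose bloom filter misbehaves, sort first.
    public double heat()
    {
        long checks = Math.max(1, bloomChecks.get());
        double falsePositiveRatio = (double) bloomFalsePositives.get() / checks;
        return reads.get() * (1.0 + falsePositiveRatio);
    }

    // Order SSTables (paired with their stats via statsOf) so the hottest are compacted first.
    public static <T> void hottestFirst(List<T> sstables, java.util.function.Function<T, ReadStats> statsOf)
    {
        sstables.sort(Comparator.comparingDouble((T s) -> statsOf.apply(s).heat()).reversed());
    }
}

A real implementation would likely want decayed counters rather than raw totals so that old hot spots age out, but the idea is the same: spend compaction I/O where the reads actually go.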