From: "Stu Hood (JIRA)"
To: commits@cassandra.apache.org
Reply-To: dev@cassandra.apache.org
Date: Mon, 11 Apr 2011 22:17:06 +0000 (UTC)
Message-ID: <102745785.50807.1302560226020.JavaMail.tomcat@hel.zones.apache.org>
In-Reply-To: <1722538396.50797.1302560106027.JavaMail.tomcat@hel.zones.apache.org>
Subject: [jira] [Updated] (CASSANDRA-2455) Improve counter disk usage

     [ https://issues.apache.org/jira/browse/CASSANDRA-2455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Stu Hood updated CASSANDRA-2455:
--------------------------------

    Description: 
Counter values currently use a huge amount of space on disk:

{code}
(header + length + RF * (nodeid + count + clock))
==
(2 + 2 + RF * (16 + 8 + 8)) bytes
{code}

Type-specific compression (as on CASSANDRA-2398) is a long-term solution to this problem, but we need a short-term fix to make a large volume of counters practical.

The largest and most redundant part of the counter is the nodeid, which is now 16 bytes per replica. One proposed improvement would be to keep a per-sstable dictionary of all replica sets, and to assume the replicas are sorted by nodeid in the counter value. This would allow us to encode the replica set as a single integer in the counter value, and to use it to look up the replica set in the dictionary. Assuming an integer replica set id, you could allow for 2^32 replica-set changes with 4 total bytes of overhead in each counter:

{code}
(header + length + replicasetid + RF * (count + clock))
==
(2 + 2 + 4 + RF * (8 + 8)) bytes
{code}
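For a concrete sense of the savings, here is a small sketch that simply evaluates the two formulas above; it is not part of the ticket, and the class name and RF=3 are illustrative assumptions:

{code}
// Back-of-the-envelope comparison of the current and proposed counter layouts.
// Illustrative only: CounterSizeComparison and RF=3 are assumptions.
public class CounterSizeComparison
{
    // current layout: header + length + RF * (nodeid + count + clock)
    static int currentSize(int rf)
    {
        return 2 + 2 + rf * (16 + 8 + 8);
    }

    // proposed layout: header + length + replicasetid + RF * (count + clock)
    static int proposedSize(int rf)
    {
        return 2 + 2 + 4 + rf * (8 + 8);
    }

    public static void main(String[] args)
    {
        int rf = 3;
        System.out.println("current : " + currentSize(rf) + " bytes");  // 100 bytes
        System.out.println("proposed: " + proposedSize(rf) + " bytes"); //  56 bytes
    }
}
{code}

With RF=3 this works out to 100 bytes per counter value today versus 56 bytes with the proposed layout, roughly a 44% reduction before any further compression.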
  was:
Counter values currently use a huge amount of space on disk:

{{(header + length + RF * (nodeid + count + clock)) bytes}} or {{(2 + 2 + RF * (16 + 8 + 8)) bytes}}

Type-specific compression (as on CASSANDRA-2398) is a long-term solution to this problem, but we need a short-term fix to make a large volume of counters practical.

The largest and most redundant part of the counter is the nodeid, which is now 16 bytes per replica.

One proposed fix would be to keep a per-sstable dictionary of all replica sets, and to assume the replicas are sorted by nodeid in the counter value. This would allow us to encode the replica set as a single integer in the counter value, and to use it to look up the replica set in the dictionary. Assuming an integer replica set id, you could allow for 2^32 replica-set changes with 4 total bytes of overhead in each counter:

{{(header + length + replicasetid + RF * (count + clock)) bytes}} or {{(2 + 2 + 4 + RF * (8 + 8)) bytes}}


> Improve counter disk usage
> --------------------------
>
>                 Key: CASSANDRA-2455
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-2455
>             Project: Cassandra
>          Issue Type: Improvement
>            Reporter: Stu Hood
>
> Counter values currently use a huge amount of space on disk:
> {code}
> (header + length + RF * (nodeid + count + clock))
> ==
> (2 + 2 + RF * (16 + 8 + 8)) bytes
> {code}
> Type-specific compression (as on CASSANDRA-2398) is a long-term solution to this problem, but we need a short-term fix to make a large volume of counters practical.
> The largest and most redundant part of the counter is the nodeid, which is now 16 bytes per replica. One proposed improvement would be to keep a per-sstable dictionary of all replica sets, and to assume the replicas are sorted by nodeid in the counter value. This would allow us to encode the replica set as a single integer in the counter value, and to use it to look up the replica set in the dictionary. Assuming an integer replica set id, you could allow for 2^32 replica-set changes with 4 total bytes of overhead in each counter:
> {code}
> (header + length + replicasetid + RF * (count + clock))
> ==
> (2 + 2 + 4 + RF * (8 + 8)) bytes
> {code}

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
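To make the proposed layout a bit more concrete, here is a minimal sketch of a per-sstable replica-set dictionary and the proposed value encoding. All names here (ReplicaSetDictionarySketch, idFor, encode) are hypothetical and do not exist in the Cassandra codebase, and the 2-byte length field is assumed to hold the replica count:

{code}
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.UUID;

// Hypothetical sketch of the proposal, not Cassandra code. The per-sstable
// dictionary maps each replica set (nodeids sorted ascending) to a small
// integer id, and the counter value stores that id instead of the nodeids.
public class ReplicaSetDictionarySketch
{
    private final Map<List<UUID>, Integer> idByReplicaSet = new HashMap<List<UUID>, Integer>();
    private final List<List<UUID>> replicaSetById = new ArrayList<List<UUID>>();

    // Returns the dictionary id for a replica set, adding it on first use.
    // Callers pass the nodeids already sorted, as the proposal assumes.
    public int idFor(List<UUID> sortedReplicas)
    {
        Integer id = idByReplicaSet.get(sortedReplicas);
        if (id != null)
            return id;
        int newId = replicaSetById.size();
        idByReplicaSet.put(new ArrayList<UUID>(sortedReplicas), newId);
        replicaSetById.add(new ArrayList<UUID>(sortedReplicas));
        return newId;
    }

    public List<UUID> replicaSetFor(int id)
    {
        return replicaSetById.get(id);
    }

    // Encodes one counter value as: 2-byte header, 2-byte length (assumed here
    // to be the replica count), 4-byte replica set id, then an 8-byte count and
    // 8-byte clock per replica, in the same order as the sorted nodeids.
    public ByteBuffer encode(short header, List<UUID> sortedReplicas, long[] counts, long[] clocks)
    {
        int rf = sortedReplicas.size();
        ByteBuffer out = ByteBuffer.allocate(2 + 2 + 4 + rf * (8 + 8));
        out.putShort(header);
        out.putShort((short) rf);
        out.putInt(idFor(sortedReplicas));
        for (int i = 0; i < rf; i++)
        {
            out.putLong(counts[i]);
            out.putLong(clocks[i]);
        }
        out.flip();
        return out;
    }
}
{code}

Reading would go the other way: decode the 4-byte id, look the replica set up in the sstable's dictionary, and pair each nodeid with the (count, clock) at the same position.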