Date: Thu, 26 Jun 2014 01:13:24 +0000 (UTC)
From: "graham sanderson (JIRA)"
To: commits@cassandra.apache.org
Subject: [jira] [Created] (CASSANDRA-7449) Variation of SELECT DISTINCT to find clustering keys with only static columns

graham sanderson created CASSANDRA-7449:
-------------------------------------------

             Summary: Variation of SELECT DISTINCT to find clustering keys with only static columns
                 Key: CASSANDRA-7449
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-7449
             Project: Cassandra
          Issue Type: Improvement
          Components: API, Core
            Reporter: graham sanderson

A possible use case for static columns involves (per partition) multiple small TTL time series data values combined with a potentially much larger static piece of data.
While the TTL time series data will go away on its own, there is no way to TTL the static data (and keep it updated with the latest TTL) without re-inserting it every time to reset the TTL, which is undesirable since it is large and unchanged.

The use case looks something like this:

{code}
CREATE KEYSPACE test WITH replication = { 'class': 'SimpleStrategy', 'replication_factor': '1' };

USE test;

CREATE TABLE expiring_series (
    id text,
    series_order int,
    small_data text,
    large_data text static,
    PRIMARY KEY (id, series_order)
);

INSERT INTO expiring_series (id, large_data) VALUES ('123', 'this is large and should not be inserted every time');

INSERT INTO expiring_series (id, series_order, small_data) VALUES ('123', 1, 'antelope') USING TTL 120;
// time passes (point A)
INSERT INTO expiring_series (id, series_order, small_data) VALUES ('123', 2, 'gibbon') USING TTL 120;
// time passes (point B)
INSERT INTO expiring_series (id, series_order, small_data) VALUES ('123', 3, 'firebucket') USING TTL 120;
// time passes (point C)
// time passes and the first row expires (point D)
// more time passes and eventually all the "rows" expire (point E)
{code}

At each point, the following queries are run:

{code}
SELECT id, series_order, small_data, large_data, ttl(small_data) from expiring_series;
SELECT id FROM expiring_series;
SELECT DISTINCT id FROM expiring_series;
SELECT STATICONLY id FROM expiring_series;
{code}

Given the way the storage engine works, there is no trivial way to make the static column expire when the last row expires. However, if there were an easy way to find partitions with no regular rows (just static columns), that would make manual cleanup easy.

The possible implementation of such a feature is very similar to SELECT DISTINCT, so I'm suggesting SELECT STATICONLY.

Looking at the points again:

h4. Point A

{code}
cqlsh:test> SELECT id, series_order, small_data, large_data, ttl(small_data) from expiring_series;

 id  | series_order | small_data | large_data                                          | ttl(small_data)
-----+--------------+------------+-----------------------------------------------------+-----------------
 123 |            1 |   antelope | this is large and should not be inserted every time |             108

(1 rows)

cqlsh:test> SELECT id FROM expiring_series;

 id
-----
 123

(1 rows)

cqlsh:test> SELECT DISTINCT id FROM expiring_series;

 id
-----
 123

(1 rows)

cqlsh:test> SELECT STATICONLY id FROM expiring_series;

(0 rows)
{code}

h4. Point B

{code}
cqlsh:test> SELECT id, series_order, small_data, large_data, ttl(small_data) from expiring_series;

 id  | series_order | small_data | large_data                                          | ttl(small_data)
-----+--------------+------------+-----------------------------------------------------+-----------------
 123 |            1 |   antelope | this is large and should not be inserted every time |              87
 123 |            2 |     gibbon | this is large and should not be inserted every time |             111

(2 rows)

cqlsh:test> SELECT id FROM expiring_series;

 id
-----
 123
 123

(2 rows)

cqlsh:test> SELECT DISTINCT id FROM expiring_series;

 id
-----
 123

(1 rows)

cqlsh:test> SELECT STATICONLY id FROM expiring_series;

(0 rows)
{code}

h4. Point C

{code}
cqlsh:test> SELECT id, series_order, small_data, large_data, ttl(small_data) from expiring_series;

 id  | series_order | small_data | large_data                                          | ttl(small_data)
-----+--------------+------------+-----------------------------------------------------+-----------------
 123 |            1 |   antelope | this is large and should not be inserted every time |              67
 123 |            2 |     gibbon | this is large and should not be inserted every time |              91
 123 |            3 | firebucket | this is large and should not be inserted every time |             110

(3 rows)

cqlsh:test> SELECT id FROM expiring_series;

 id
-----
 123
 123
 123

(3 rows)

cqlsh:test> SELECT DISTINCT id FROM expiring_series;

 id
-----
 123

(1 rows)

cqlsh:test> SELECT STATICONLY id FROM expiring_series;

(0 rows)
{code}

h4. Point D

{code}
cqlsh:test> SELECT id, series_order, small_data, large_data, ttl(small_data) from expiring_series;

 id  | series_order | small_data | large_data                                          | ttl(small_data)
-----+--------------+------------+-----------------------------------------------------+-----------------
 123 |            2 |     gibbon | this is large and should not be inserted every time |              22
 123 |            3 | firebucket | this is large and should not be inserted every time |              41

(2 rows)

cqlsh:test> SELECT id FROM expiring_series;

 id
-----
 123
 123

(2 rows)

cqlsh:test> SELECT DISTINCT id FROM expiring_series;

 id
-----
 123

(1 rows)

cqlsh:test> SELECT STATICONLY id FROM expiring_series;

(0 rows)
{code}

h4. Point E

{code}
cqlsh:test> SELECT id, series_order, small_data, large_data, ttl(small_data) from expiring_series;

 id  | series_order | small_data | large_data                                          | ttl(small_data)
-----+--------------+------------+-----------------------------------------------------+-----------------
 123 |         null |       null | this is large and should not be inserted every time |            null

(1 rows)

cqlsh:test> SELECT id FROM expiring_series;

(0 rows)

cqlsh:test> SELECT DISTINCT id FROM expiring_series;

 id
-----
 123

(1 rows)

cqlsh:test> SELECT STATICONLY id FROM expiring_series;

 id
-----
 123

(1 rows)
{code}

Notice that after the last row has expired, SELECT STATICONLY id returns the partition, and it can then be deleted (under whatever concurrency rules the application needs):

{code}
cqlsh:test> DELETE FROM expiring_series where id = '123';
cqlsh:test> SELECT id, series_order, small_data, large_data, ttl(small_data) from expiring_series;

(0 rows)

cqlsh:test> SELECT id FROM expiring_series;

(0 rows)

cqlsh:test> SELECT DISTINCT id FROM expiring_series;

(0 rows)

cqlsh:test> SELECT STATICONLY id FROM expiring_series;

(0 rows)
{code}

--
This message was sent by Atlassian JIRA
(v6.2#6252)
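Since SELECT STATICONLY does not yet exist (it is what this ticket proposes), the intended semantics can be sketched with a small in-memory model in Python. The Partition class and the select_distinct / select_staticonly functions below are hypothetical illustrations of the proposed behavior, not Cassandra driver code:

```python
# Hypothetical in-memory stand-in for the expiring_series table:
# each partition holds one optional static value plus TTL'd clustered rows.
class Partition:
    def __init__(self):
        self.static_large_data = None   # static column: never expires on its own
        self.rows = {}                  # series_order -> (small_data, expiry_time)

    def insert_static(self, value):
        self.static_large_data = value

    def insert_row(self, series_order, small_data, ttl, now):
        self.rows[series_order] = (small_data, now + ttl)

    def live_rows(self, now):
        # rows whose TTL has not yet elapsed
        return {k: v for k, v in self.rows.items() if v[1] > now}

def select_distinct(table, now):
    # SELECT DISTINCT id: a partition shows up if it has live rows
    # OR surviving static data (this is why point E still returns '123')
    return [pid for pid, p in table.items()
            if p.live_rows(now) or p.static_large_data is not None]

def select_staticonly(table, now):
    # Proposed SELECT STATICONLY id: only partitions where the static
    # column is the sole surviving data, i.e. no live regular rows
    return [pid for pid, p in table.items()
            if p.static_large_data is not None and not p.live_rows(now)]

table = {}
p = table.setdefault('123', Partition())
p.insert_static('this is large and should not be inserted every time')
p.insert_row(1, 'antelope', ttl=120, now=0)

print(select_staticonly(table, now=10))    # [] -- a row is still live (point A)
print(select_staticonly(table, now=200))   # ['123'] -- all rows expired (point E)

# manual cleanup, as the ticket suggests:
for pid in select_staticonly(table, now=200):
    del table[pid]
print(select_distinct(table, now=200))     # [] -- partition fully gone
```

The key design point the sketch captures: DISTINCT answers "which partitions exist at all", while STATICONLY answers the narrower "which partitions have nothing left but static data" — the set an application would want to sweep and delete.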