From: "Hiller, Dean"
To: "user@cassandra.apache.org"
Date: Wed, 29 May 2013 10:33:20 -0600
Subject: Re: Is there anyone who implemented time range partitions with column families?
Something we just ran into with compaction and time-series data: we have 60,000 virtual tables (PlayOrm virtual tables) inside ONE CF. This unfortunately hurt our compaction with LCS, since compaction can't be parallelized within a single tier. We should have used 10 CFs named data0, data1, data2 … data9, so that we could run 10 compactions in parallel.

QUESTION: I am assuming 10 parallel compactions should be enough to put enough load on the disk/CPU/RAM, or do you think I should go with 100 CFs? 98% of our data is in this one CF.

Thanks,
Dean

On 5/29/13 10:06 AM, "Hiller, Dean" wrote:

>Nope, partitioning is done per CF in PlayOrm.
>
>Dean
>
>From: cem
>Reply-To: "user@cassandra.apache.org"
>Date: Wednesday, May 29, 2013 10:01 AM
>To: "user@cassandra.apache.org"
>Subject: Re: Is there anyone who implemented time range partitions with column families?
>
>Thank you very much for the fast answer.
>
>Does playORM use different column families for each partition in Cassandra?
>
>Cem
>
>On Wed, May 29, 2013 at 5:30 PM, Jeremy Powell wrote:
>Cem, yes, you can do this with C*, though you have to handle the logic yourself (other libraries might do this for you; I've seen the dev of playORM discuss some things which might be similar). We use Astyanax and programmatically create CFs based on a time period of our choosing that makes sense for our system, programmatically drop CFs if/when they are outside a certain time period (rather than using C*'s TTL), and write data to the different CFs as needed.
>
>~Jeremy
>
>On Wed, May 29, 2013 at 8:36 AM, cem wrote:
>Hi All,
>
>I used time range partitions 5 years ago with MySQL to clean up data much faster.
>
>I had a big FACT table with time range partitions, and it was very easy to drop old partitions (with archiving) and save disk space.
>
>Has anyone implemented such a thing in Cassandra? It would be great if we had that in Cassandra.
>
>Best Regards,
>Cem.
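Dean's idea of splitting one big CF into data0 … data9 so compactions can run in parallel boils down to a deterministic key-to-CF router. A minimal sketch (the class name, CF naming, and hash choice are assumptions for illustration, not PlayOrm's actual scheme):

```java
// Sketch: spread one logical time-series table across N column families
// ("data0".."data9") so LCS compaction can proceed in parallel, one per CF.
// The same row key must always map to the same CF, so reads know where to look.
public class CfRouter {
    private final int buckets;

    public CfRouter(int buckets) {
        this.buckets = buckets;
    }

    // Pick a CF for a row key; floorMod keeps the bucket non-negative
    // even when hashCode() is negative.
    public String cfFor(String rowKey) {
        int bucket = Math.floorMod(rowKey.hashCode(), buckets);
        return "data" + bucket;
    }

    public static void main(String[] args) {
        CfRouter router = new CfRouter(10);
        System.out.println(router.cfFor("sensor-42/2013-05-29"));
    }
}
```

Writes and reads for a given key both go through cfFor, so the split is invisible to callers; picking 10 vs. 100 buckets then only changes how many compactions can run concurrently.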
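The MySQL-style partition drop Cem describes, and the per-time-period CFs Jeremy creates and drops via Astyanax, amount to naming one CF per time window and dropping whole CFs as they age out, which is far cheaper than per-row deletes or TTLs. A minimal sketch of that bookkeeping, assuming 7-day windows and an "events_" naming convention (both are illustrative choices, not anything the thread specifies):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: one CF per fixed time window, so expiring old data is a cheap
// CF drop instead of per-row deletes/TTLs. Window size and names are assumed.
public class TimePartitions {
    static final long WEEK_MS = 7L * 24 * 60 * 60 * 1000;

    // CF name for the 7-day window containing the timestamp, e.g. "events_2265".
    public static String cfFor(long epochMillis) {
        return "events_" + (epochMillis / WEEK_MS);
    }

    // CFs strictly older than the retention horizon: candidates to drop
    // (and archive first, as Cem did with MySQL partitions).
    // We only look back a few windows past the horizon, since older ones
    // were dropped on previous runs.
    public static List<String> expired(long nowMillis, int retentionWindows) {
        long current = nowMillis / WEEK_MS;
        long cutoff = current - retentionWindows;
        List<String> old = new ArrayList<>();
        for (long w = Math.max(0, cutoff - 3); w < cutoff; w++) {
            old.add("events_" + w);
        }
        return old;
    }
}
```

A periodic job would create the CF for the upcoming window ahead of time, route writes with cfFor, and drop everything returned by expired; range reads that span a window boundary simply query the two or three CFs covering the requested interval.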