From: Russell Bradberry
To: Redmumba, user@cassandra.apache.org
Date: Wed, 4 Jun 2014 13:56:28 -0400
Subject: Re: Customized Compaction Strategy: Dev Questions

Well, DELETE will not free up disk space until after GC grace has passed and the next major compaction has run. So in essence, if you need to free up space right away, then creating daily/monthly tables would be one way to go. Just remember to clear your snapshots after dropping, though.
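For what it's worth, here is a rough sketch of the table-per-day idea, assuming the 2.0-era DataStax Java driver; the keyspace, table, and column names are made up for illustration, and the loop at the bottom shows the per-table query cost raised further down the thread:

import java.text.SimpleDateFormat;
import java.util.Date;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class DailyAuditTables {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("audit");

        // Writes always go to "today's" table, e.g. audit_20140604.
        String today = new SimpleDateFormat("yyyyMMdd").format(new Date());
        session.execute("CREATE TABLE IF NOT EXISTS audit_" + today + " ("
                + "item_id text, event_time timestamp, details text, "
                + "PRIMARY KEY (item_id, event_time))");

        // Reclaiming space becomes a whole-table DROP instead of
        // DELETE + gc_grace + compaction. The auto-snapshot taken on DROP
        // still pins the data on disk until it is cleared
        // (nodetool clearsnapshot).
        session.execute("DROP TABLE IF EXISTS audit_20140401");

        // The trade-off: reading a date range means one query per day-table.
        for (String day : new String[] { "20140602", "20140603", today }) {
            session.execute("SELECT * FROM audit_" + day
                    + " WHERE item_id = 'foo'");
        }

        cluster.close();
    }
}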
On June 4, 2014 at 1:54:05 PM, Redmumba (redmumba@gmail.com) wrote:

That still involves quite a bit of infrastructure work--it also means that to query the data, I would have to make N queries, one per table, to query for audit information (audit information is sorted by a key identifying the item, and then the date). I don't think this would yield any benefit (to me) over simply tombstoning the values or creating a secondary index on date and simply doing a DELETE, right?

Is there something internally preventing me from implementing this as a separate Strategy?

On Wed, Jun 4, 2014 at 10:47 AM, Jonathan Haddad wrote:

I'd suggest creating 1 table per day, and dropping the tables you don't need once you're done.

On Wed, Jun 4, 2014 at 10:44 AM, Redmumba wrote:

Sorry, yes, that is what I was looking to do--i.e., create a "TopologicalCompactionStrategy" or similar.

On Wed, Jun 4, 2014 at 10:40 AM, Russell Bradberry wrote:

Maybe I'm misunderstanding something, but what makes you think that running a major compaction every day will cause the data from January 1st to exist in only one SSTable and not have data from other days in that SSTable as well? Are you talking about making a new compaction strategy that creates SSTables by day?

On June 4, 2014 at 1:36:10 PM, Redmumba (redmumba@gmail.com) wrote:

Let's say I run a major compaction every day, so that the "oldest" sstable contains only the data for January 1st. Assuming all the nodes are in sync and have had at least one repair run before the table is dropped (so that all information for that time period is "the same"), wouldn't it be safe to assume that the same data would be dropped on all nodes? There might be a period while the compaction is running where different nodes have an inconsistent view of just that day's data (in that some would have it and others would not), but the cluster would still function and become eventually consistent, correct?

Also, if the entirety of the sstable is being dropped, wouldn't the tombstones be removed with it? I wouldn't be concerned with individual rows and columns, and this is a write-only table, more or less--the only deletes that occur in the current system are to delete the old data.

On Wed, Jun 4, 2014 at 10:24 AM, Russell Bradberry wrote:

I'm not sure what you want to do is feasible. At a high level, I can see you running into issues with RF, etc. The SSTables are not identical from node to node, so if you drop a full SSTable on one node there is no single corresponding SSTable on the adjacent nodes to drop. You would need to choose data to compact out, and ensure it is removed on all replicas as well. But if your problem is that you're low on disk space, then you probably won't be able to write out a new SSTable with the older information compacted out.
Also, there is more to an SSTable than just data; the SSTable could have tombstones and other relics that haven't been cleaned up from nodes coming or going.

On June 4, 2014 at 1:10:58 PM, Redmumba (redmumba@gmail.com) wrote:

Thanks, Russell--yes, a similar concept, just applied to sstables. I'm assuming this would require changes to both major compactions and probably GC (to remove the old tables), but since I'm not super-familiar with the C* internals, I wanted to make sure it was feasible with the current toolset before I actually dived in and started tinkering.

Andrew

On Wed, Jun 4, 2014 at 10:04 AM, Russell Bradberry wrote:

Hmm, I see. So something similar to Capped Collections in MongoDB.

On June 4, 2014 at 1:03:46 PM, Redmumba (redmumba@gmail.com) wrote:

Not quite; if I'm at, say, 90% disk usage, I'd like to drop the oldest sstable rather than simply run out of space.

The problem with using TTLs is that I have to try and guess how much data is being put in--since this is auditing data, the usage can vary wildly depending on time of year, verbosity of auditing, etc. I'd like to maximize the disk space--not optimize the cleanup process.

Andrew

On Wed, Jun 4, 2014 at 9:47 AM, Russell Bradberry wrote:

You mean this: https://issues.apache.org/jira/browse/CASSANDRA-5228 ?

On June 4, 2014 at 12:42:33 PM, Redmumba (redmumba@gmail.com) wrote:

Good morning!

I've asked (and seen other people ask) about the ability to drop old sstables, basically creating a FIFO-like clean-up process. Since we're using Cassandra as an auditing system, this is particularly appealing to us because it means we can maximize the amount of auditing data we keep while still allowing Cassandra to clear old data automatically.

My idea is this: perform compaction based on the range of dates available in the sstable (or just metadata about when it was created). For example, a major compaction could create a combined sstable per day--so that, say, 60 days of data would end up in 60 sstables after a major compaction.

My question then is, will this be possible by simply implementing a separate AbstractCompactionStrategy? Does this sound feasible at all? Based on the implementations of the Size-Tiered and Leveled strategies, it looks like I would have the ability to control what and how things get compacted, but I wanted to verify before putting time into it.

Thank you so much for your time!

Andrew

--
Jon Haddad
http://www.rustyrazorblade.com
skype: rustyrazorblade
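Regarding the AbstractCompactionStrategy question above: a compaction strategy mostly decides which SSTables go into the next compaction task, so the FIFO idea largely boils down to a selection rule like the rough sketch below. This is a standalone illustration with made-up class and field names, not Cassandra's internal API; the actual work would be subclassing AbstractCompactionStrategy and implementing its version-specific hooks.

import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class FifoSelectionSketch {

    // Stand-in for the per-SSTable metadata such a strategy would consult
    // (Cassandra tracks a maximum timestamp per SSTable).
    static class SSTableInfo {
        final String name;
        final long maxTimestampMillis;
        final long sizeBytes;

        SSTableInfo(String name, long maxTimestampMillis, long sizeBytes) {
            this.name = name;
            this.maxTimestampMillis = maxTimestampMillis;
            this.sizeBytes = sizeBytes;
        }
    }

    // Keep selecting the oldest SSTables until projected disk usage falls
    // below the target, e.g. the "90% full" trigger mentioned above.
    static List<SSTableInfo> selectForRemoval(List<SSTableInfo> sstables,
                                              long usedBytes,
                                              long capacityBytes,
                                              double targetRatio) {
        List<SSTableInfo> oldestFirst = new ArrayList<SSTableInfo>(sstables);
        Collections.sort(oldestFirst, new Comparator<SSTableInfo>() {
            public int compare(SSTableInfo a, SSTableInfo b) {
                return Long.compare(a.maxTimestampMillis, b.maxTimestampMillis);
            }
        });

        List<SSTableInfo> toRemove = new ArrayList<SSTableInfo>();
        long projected = usedBytes;
        for (SSTableInfo s : oldestFirst) {
            if ((double) projected / capacityBytes <= targetRatio) {
                break;
            }
            toRemove.add(s);
            projected -= s.sizeBytes;
        }
        return toRemove;
    }
}

The hard part, as pointed out earlier in the thread, is not this selection step: SSTable boundaries differ from replica to replica, so "drop the oldest SSTable" is not the same operation on every node.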