Subject: Re: Pyramid Organization of Data
From: Patrick Julien
To: user@cassandra.apache.org
Cc: Adrian Cockcroft
Date: Thu, 14 Apr 2011 13:18:07 -0400

Thanks for your input, Adrian, we've pretty much settled on this too.
What I'm trying to figure out is how we do deletes. We want to do
deletes in the satellites because:

a) we'll run out of disk space very quickly with the amount of data we
   have
b) we don't need more than 3 days' worth of history in the satellites;
   we're currently planning for 7 days of capacity

However, the deletes will get replicated back to NY. In NY, we don't
want that: we want to run hadoop/pig over all that data dating back
several months/years.

Even if we set the replication factor of the satellites to 1 and NY to
3, we'll run out of space very quickly in the satellites.
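For reference, the kind of keyspace definition I have in mind for one
satellite (cassandra-cli syntax from memory; I'm not sure the
strategy_options syntax is exactly right for the version we're running,
and the keyspace and data center names are just placeholders that would
have to match what our snitch reports):

    create keyspace TokyoData
      with placement_strategy = 'org.apache.cassandra.locator.NetworkTopologyStrategy'
      and strategy_options = [{Tokyo: 1, NY: 3}];

That is, one local copy to cover the satellite retention window and
three copies in NY for the archive that hadoop/pig runs over.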
On Thu, Apr 14, 2011 at 11:23 AM, Adrian Cockcroft wrote:
> We have similar requirements for wide area backup/archive at Netflix.
> I think what you want is a replica with RF of at least 3 in NY for all
> the satellites, then each satellite could have a lower RF, but if you
> want safe local quorum I would use 3 everywhere.
> Then NY is the sum of all the satellites, so that makes most use of
> the disk space.
> For archival storage I suggest you use snapshots in NY and save
> compressed tar files of each keyspace in NY. We've been working on
> this to allow full and incremental backup and restore from our EC2
> hosted Cassandra clusters to/from S3. Full backup/restore works fine,
> incremental and per-keyspace restore is being worked on.
> Adrian
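For the archival side in NY, I'm picturing something along these lines
on each NY node (the paths, keyspace placeholder and snapshot handling
are just examples, and I haven't checked the exact nodetool options on
our version, so treat this as a sketch):

    # flush memtables and hard-link the current SSTables into a snapshot
    nodetool -h localhost snapshot

    # archive the snapshot directory for the keyspace we care about
    tar czf /backups/ny-archive-20110414.tar.gz \
        /var/lib/cassandra/data/<keyspace>/snapshots/

    # clear the snapshot once the tarball has been shipped off
    nodetool -h localhost clearsnapshot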
> From: Patrick Julien
> Reply-To: "user@cassandra.apache.org"
> Date: Thu, 14 Apr 2011 05:38:54 -0700
> To: "user@cassandra.apache.org"
> Subject: Re: Pyramid Organization of Data
>
> Thanks, I'm still working the problem so anything I find out I will
> post here.
>
> Yes, you're right, that is the question I am asking.
>
> No, adding more storage is not a solution since New York would have
> several hundred times more storage.
>
> On Apr 14, 2011 6:38 AM, "aaron morton" wrote:
>> I think your question is "NY is the archive, after a certain amount
>> of time we want to delete the row from the original DC but keep it
>> in the archive in NY."
>>
>> Once you delete a row, it's deleted as far as the client is
>> concerned. GCGraceSeconds is only concerned with when the tombstone
>> marker can be removed. If NY has a replica of a row from Tokyo and
>> the row is deleted in either DC, it will be deleted in the other DC
>> as well.
>>
>> Some thoughts...
>> 1) Add more storage in the satellite DCs, then tilt your chair to
>> celebrate a job well done :)
>> 2) Run two clusters as you say.
>> 3) Just thinking out loud, and I know this does not work now. Would
>> it be possible to support per-CF strategy options, so an archive CF
>> only replicates to NY? Can think of possible problems with repair
>> and LOCAL_QUORUM; out of interest, what else would it break?
>>
>> Hope that helps.
>> Aaron
>>
>> On 14 Apr 2011, at 10:17, Patrick Julien wrote:
>>
>>> We have been successful in implementing, at scale, the comments you
>>> posted here. I'm wondering what we can do about deleting data,
>>> however.
>>>
>>> The way I see it, we have considerably more storage capacity in NY,
>>> but not in the other sites. Using this technique here, it occurs to
>>> me that we would replicate non-NY deleted rows back to NY. Is there
>>> a way to tell NY not to tombstone rows?
>>>
>>> The ideas I have so far:
>>>
>>> - Set GCGracePeriod to be much higher in NY than in the other
>>> sites. This way we can get to tombstoned rows well beyond their
>>> disk life in other sites.
>>> - A variant on this solution is to set the TTL on rows in non-NY
>>> sites and, again, set the GCGracePeriod to be considerably higher
>>> in NY.
>>> - Break this up into multiple clusters and do one write from the
>>> client to its 'local' cluster and one write to the NY cluster.
>>>
>>> On Fri, Apr 8, 2011 at 7:15 PM, Jonathan Ellis wrote:
>>>> No, I'm suggesting you have a Tokyo keyspace that gets replicated
>>>> as {Tokyo: 2, NYC: 1}, a London keyspace that gets replicated to
>>>> {London: 2, NYC: 1}, for example.
>>>>
>>>> On Fri, Apr 8, 2011 at 5:59 PM, Patrick Julien wrote:
>>>>> I'm familiar with this material. I hadn't thought of it from this
>>>>> angle, but I believe what you're suggesting is that the different
>>>>> data centers would hold a different properties file for node
>>>>> discovery instead of using auto-discovery.
>>>>>
>>>>> So Tokyo, and the others, would have a configuration that makes
>>>>> them oblivious to the non-New York data centers. New York would
>>>>> have a configuration that would give it knowledge of no other
>>>>> data center.
>>>>>
>>>>> Would that work? Wouldn't the NY data center wonder where these
>>>>> other writes are coming from?
>>>>>
>>>>> On Fri, Apr 8, 2011 at 6:38 PM, Jonathan Ellis wrote:
>>>>>> On Fri, Apr 8, 2011 at 12:17 PM, Patrick Julien wrote:
>>>>>>> The problem is this: we would like the historical data from
>>>>>>> Tokyo to stay in Tokyo and only be replicated to New York. The
>>>>>>> one in London to be in London and only be replicated to New
>>>>>>> York, and so on for all data centers.
>>>>>>>
>>>>>>> Is this currently possible with Cassandra? I believe we would
>>>>>>> need to run multiple clusters and migrate data manually from
>>>>>>> the data centers to North America to achieve this. Also, any
>>>>>>> suggestions would be welcomed.
>>>>>>
>>>>>> NetworkTopologyStrategy allows configuring replicas
>>>>>> per-keyspace, per-datacenter:
>>>>>>
>>>>>> http://www.datastax.com/dev/blog/deploying-cassandra-across-multiple-data-centers
>>>>>>
>>>>>> --
>>>>>> Jonathan Ellis
>>>>>> Project Chair, Apache Cassandra
>>>>>> co-founder of DataStax, the source for professional Cassandra
>>>>>> support
>>>>>> http://www.datastax.com
>>>>>
>>>>
>>>> --
>>>> Jonathan Ellis
>>>> Project Chair, Apache Cassandra
>>>> co-founder of DataStax, the source for professional Cassandra
>>>> support
>>>> http://www.datastax.com
>>
>
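One more thought on the "properties file for node discovery" I asked
about above: I assume NetworkTopologyStrategy would be paired with
PropertyFileSnitch here, which, as I understand it, means every node in
every data center carries the same cassandra-topology.properties
mapping nodes to data centers and racks. The IPs and DC/rack names
below are made up, but the format would be roughly:

    # node IP = data center : rack
    192.168.10.1=Tokyo:RAC1
    192.168.10.2=Tokyo:RAC1
    10.20.30.1=NY:RAC1
    10.20.30.2=NY:RAC1
    10.20.30.3=NY:RAC2

    # nodes not listed above fall back to this
    default=NY:RAC1

The per-keyspace strategy options would then control how many replicas
of each keyspace land in each of those data centers.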