From: Dominic Williams
Date: Mon, 12 Dec 2011 20:55:59 +0000
Subject: Re: best practices for simulating transactions in Cassandra
To: user@cassandra.apache.org

Hi John,

On 12 December 2011 19:35, John Laban wrote:

> So I responded to your algorithm in another part of this thread (very
> interesting) but this part of the paper caught my attention:
>
> > When client application code releases a lock, that lock must not actually
> > be released for a period equal to one millisecond plus twice the maximum
> > possible drift of the clocks in the client computers accessing the
> > Cassandra databases
>
> I've been worried about this, and added some arbitrary delay in the
> releasing of my locks. But I don't like it as it's (A) an arbitrary value
> and (B) it will - perhaps greatly - reduce the throughput of the more
> high-contention areas of my system.
>
> To fix (B) I'll probably just have to try to get rid of locks altogether in
> these high-contention areas.
>
> To fix (A), I'd need to know what the maximum possible drift of my clocks
> will be. How did you determine this? What value do you use, out of
> curiosity? What does the network layout of your client machines look like?
> (Are any of your hosts geographically separated, or are they all running in
> the same DC? What's the maximum latency between hosts? etc.) Do you monitor
> the clock skew on an ongoing basis? Am I worrying too much?

If you set up NTP carefully, no machine should drift by more than, say, 4ms.
I forget where, but you'll find the best documentation on how to make a
bullet-proof NTP setup on the vendor sites for virtualization software
(because virtualization can cause drift, so the NTP setup has to be just so).

What this means is that, for example, to be really safe when a thread
releases a lock you should wait, say, 9ms (one millisecond plus twice the 4ms
maximum drift). Some points:

-- since the sleep is performed before release, an isolated operation should
not be delayed at all

-- only a waiting thread, or a thread requesting the lock immediately after
it is released, will be delayed, and no extra CPU or memory load is involved

-- in practice, for the vast majority of "application layer" data operations
this restriction will have no effect on overall performance as experienced by
a user, because such operations nearly always read and write data with
limited scope, for example the data of the two users involved in some
transaction

-- the clocks issue does mean that you can't really serialize access to more
broadly shared data where more than 5 or 10 such requests are made a second,
say, but in reality, even if the extra 9ms sleep on release weren't
necessary, the variability in database operation execution time (say under
load, or when something goes wrong) means trouble might occur serializing at
that level of contention
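To make the release delay concrete, here is a minimal sketch (not from the
paper) of a wrapper that sleeps out the safety margin before releasing. The
backend_lock object is hypothetical; it stands in for whatever
Cages/ZooKeeper or Cassandra-based lock your client actually uses:

    import time

    # Assumed worst-case clock drift with a careful NTP setup (4ms), plus
    # the 1ms from the paper: sleep 1 + 2*4 = 9ms before actually releasing.
    MAX_CLOCK_DRIFT_MS = 4
    RELEASE_DELAY_SECONDS = (1 + 2 * MAX_CLOCK_DRIFT_MS) / 1000.0

    class DelayedReleaseLock(object):
        """Wraps a distributed lock so release() waits out clock drift."""

        def __init__(self, backend_lock):
            self._lock = backend_lock  # hypothetical acquire()/release() object

        def acquire(self):
            self._lock.acquire()

        def release(self):
            # Sleep *before* releasing: an uncontended operation is not
            # slowed in any way that matters; only a thread that wants the
            # lock immediately after us pays the 9ms.
            time.sleep(RELEASE_DELAY_SECONDS)
            self._lock.release()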
So in summary, although this drift thing seems bad at first, partly because
it is a new consideration, in practice it's no big deal so long as you look
after your clocks. The main issue to watch out for is application nodes
running on virtualization software, hypervisors et al. that have setup issues
which make their clocks drift under load, and it is a good idea to be wary of
that.

Best, Dominic

> Sorry for all the questions but I'm very concerned about this particular
> problem :)
>
> Thanks,
> John
>
> On Mon, Dec 12, 2011 at 4:36 AM, Dominic Williams
> <dwilliams@fightmymonster.com> wrote:
>
>> Hi guys, just thought I'd chip in...
>>
>> Fight My Monster is still using Cages, which is working fine, but...
>>
>> I'm looking at using Cassandra to replace Cages/ZooKeeper(!) There are 2
>> main reasons:
>>
>> 1. Although a fast ZooKeeper cluster can handle a lot of load (we aren't
>> getting anywhere near to capacity, and we do a *lot* of serialisation), at
>> some point it will be necessary to start hashing lock paths onto separate
>> ZooKeeper clusters, and I tend to believe that these days you should
>> choose platforms that handle sharding themselves (e.g. choose Cassandra
>> rather than MySQL)
>>
>> 2. Why have more components in your system when you can have fewer!!! KISS
>>
>> Recently I therefore tried to devise an algorithm which can be used to add
>> a distributed locking layer to clients such as Pelops, Hector, Pycassa
>> etc.
>>
>> There is a doc describing the algorithm, to which may be added an appendix
>> describing a protocol so that locking can be interoperable between the
>> clients. That could be extended to describe a protocol for transactions.
>> Word of warning: this is a *beta* algorithm that has only been seen by a
>> select group so far, so I'm not even 100% sure it works, but there is a
>> useful general discussion regarding serialization of reads/writes, so I
>> include it anyway (and since this algorithm is going to be out there now,
>> if there's anyone out there who fancies doing a Z proof or disproof, that
>> would be fantastic).
>> http://media.fightmymonster.com/Shared/docs/Wait%20Chain%20Algorithm.pdf
>>
>> Final word on this re transactions: if/when transactions are added to the
>> locking system in Pelops/Hector/Pycassa, Cassandra will provide better
>> performance than ZooKeeper for storing snapshots, especially as
>> transaction size increases.
>>
>> Best, Dominic
>>
>> On 11 December 2011 01:53, Guy Incognito wrote:
>>
>>> you could try writing with the clock of the initial replay entry?
>>>
>>> On 06/12/2011 20:26, John Laban wrote:
>>>
>>> Ah, neat. It is similar to what was proposed in (4) above with adding
>>> transactions to Cages, but instead of snapshotting the data to be rolled
>>> back (the "before" data), you snapshot the data to be replayed (the
>>> "after" data). And then later, if you find that the transaction didn't
>>> complete, you just keep replaying the transaction until it takes.
>>>
>>> The part I don't understand with this approach though: how do you ensure
>>> that someone else didn't change the data between your initial failed
>>> transaction and the later replaying of the transaction? You could get
>>> lost writes in that situation.
>>>
>>> Dominic (in the Cages blog post) explained a workaround for that in his
>>> rollback proposal: all subsequent readers or writers of that data would
>>> have to check for abandoned transactions and roll them back themselves
>>> before they could read the data. I don't think this is possible with the
>>> XACT_LOG "replay" approach in these slides though, based on how the data
>>> is indexed (cassandra node token + timeUUID).
>>>
>>> PS: How are you liking Cages?
>>>
>>> 2011/12/6 Jérémy SEVELLEC
>>>
>>>> Hi John,
>>>>
>>>> I had exactly the same reflections.
>>>>
>>>> I'm using ZooKeeper and Cages to lock and isolate.
>>>>
>>>> But how to roll back? It's impossible, so try replay instead!
>>>>
>>>> The idea is explained in this presentation
>>>> http://www.slideshare.net/mattdennis/cassandra-data-modeling (starting
>>>> from slide 24):
>>>>
>>>> - insert your whole data into one column
>>>> - do the job
>>>> - remove (or expire) your column.
>>>>
>>>> If there is a problem during "doing the job", you keep the possibility
>>>> to replay, and replay, and replay (synchronously or in a batch).
>>>>
>>>> Regards,
>>>>
>>>> Jérémy
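As an aside, here is a minimal sketch of the replay pattern Jérémy describes
above, assuming pycassa-style calls; the keyspace and the XactLog column
family are made up, and do_the_job must be idempotent since a failed run may
be replayed:

    import json
    import uuid

    import pycassa

    pool = pycassa.ConnectionPool('MyKeyspace', ['localhost:9160'])
    xact_log = pycassa.ColumnFamily(pool, 'XactLog')  # hypothetical CF

    def run_with_replay(batch_key, payload, do_the_job):
        """Record the whole intended change, do it, then clear the record."""
        xact_id = str(uuid.uuid1())
        # 1. Insert the whole data needed to redo the work into one column.
        xact_log.insert(batch_key, {xact_id: json.dumps(payload)})
        # 2. Do the job (the real writes to your data column families).
        do_the_job(payload)
        # 3. Remove (or let a TTL expire) the replay column on success.
        xact_log.remove(batch_key, columns=[xact_id])

    def replay_pending(batch_key, do_the_job):
        """Sweeper: replay anything that was never cleaned up."""
        try:
            pending = xact_log.get(batch_key)
        except pycassa.NotFoundException:
            return
        for xact_id, payload in pending.items():
            do_the_job(json.loads(payload))
            xact_log.remove(batch_key, columns=[xact_id])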
>>>> 2011/12/5 John Laban
>>>>
>>>>> Hello,
>>>>>
>>>>> I'm building a system using Cassandra as a datastore and I have a few
>>>>> places where I am in need of transactions.
>>>>>
>>>>> I'm using ZooKeeper to provide locking when I'm in need of some
>>>>> concurrency control or isolation, so that solves that half of the
>>>>> puzzle.
>>>>>
>>>>> What I need now is to sometimes be able to get atomicity across
>>>>> multiple writes by simulating the "begin/rollback/commit" abilities of
>>>>> a relational DB. In other words, there are places where I need to
>>>>> perform multiple updates/inserts, and if I fail partway through, I
>>>>> would ideally be able to roll back the partially-applied updates.
>>>>>
>>>>> Now, I *know* this isn't possible with Cassandra. What I'm looking for
>>>>> are all the best practices, or at least tips and tricks, so that I can
>>>>> get around this limitation in Cassandra and still maintain a consistent
>>>>> datastore. (I am using quorum reads/writes so that eventual consistency
>>>>> doesn't kick my ass here as well.)
>>>>>
>>>>> Below are some ideas I've been able to dig up. Please let me know if
>>>>> any of them don't make sense, or if there are better approaches:
>>>>>
>>>>> 1) Updates to a row in a column family are atomic. So try to model your
>>>>> data so that you would only ever need to update a single row in a
>>>>> single CF at once. Essentially, you model your data around
>>>>> transactions. This is tricky but can certainly be done in some
>>>>> situations.
>>>>>
>>>>> 2) If you are only dealing with multiple row *inserts* (and not
>>>>> updates), have one of the rows act as a 'commit' by essentially
>>>>> validating the presence of the other rows. For example, say you were
>>>>> performing an operation where you wanted to create an Account row and 5
>>>>> User rows all at once (this is an unlikely example, but bear with me).
>>>>> You could insert the 5 rows into the Users CF, and then the 1 row into
>>>>> the Accounts CF, which acts as the commit. If something went wrong
>>>>> before the Account could be created, any Users that had been created so
>>>>> far would be orphaned and unusable, as your business logic can ensure
>>>>> that they can't exist without an Account. You could also have an
>>>>> offline cleanup process that swept away orphans.
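A minimal sketch of the "commit row" idea in (2), assuming pycassa-style
calls; the keyspace, the Users/Accounts column family layout and the
user_ids column are all made up for illustration:

    import uuid

    import pycassa

    pool = pycassa.ConnectionPool('MyKeyspace', ['localhost:9160'])
    users = pycassa.ColumnFamily(pool, 'Users')
    accounts = pycassa.ColumnFamily(pool, 'Accounts')

    def create_account_with_users(account_cols, user_col_maps):
        """Insert the User rows first, then the Account row as the commit."""
        account_id = str(uuid.uuid4())
        user_ids = []
        for cols in user_col_maps:
            user_id = str(uuid.uuid4())
            users.insert(user_id, dict(cols, account_id=account_id))
            user_ids.append(user_id)     # orphaned if we die before the commit
        # Writing the Account row is the commit point: until it exists, the
        # Users above are unusable orphans that a cleanup job can sweep away.
        accounts.insert(account_id,
                        dict(account_cols, user_ids=','.join(user_ids)))
        return account_id

    def is_user_committed(user_id):
        """A User only 'exists' if its Account (commit) row also exists."""
        try:
            user = users.get(user_id, columns=['account_id'])
            accounts.get(user['account_id'], column_count=1)
            return True
        except pycassa.NotFoundException:
            return False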

>>>>> 3) Try to model your updates as idempotent column inserts instead. How
>>>>> do you model updates as inserts? Instead of munging the value directly,
>>>>> you could insert a column containing the operation you want to perform
>>>>> (like "+5"). It would work kind of like the Consistent Vote Counting
>>>>> implementation: ( https://gist.github.com/416666 ). How do you make the
>>>>> inserts idempotent? Make sure the column names correspond to a request
>>>>> ID or some other identifier that would be identical across re-drives of
>>>>> a given (perhaps originally failed) request. This could leave your
>>>>> datastore in a temporarily inconsistent state, but would eventually
>>>>> become consistent after a successful re-drive of the original request.
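A minimal sketch of the idempotent-insert idea in (3), again assuming
pycassa-style calls; the CounterOps column family is made up, and operation
columns are named by the request ID so a re-driven request overwrites its
own column instead of double-counting:

    import uuid

    import pycassa

    pool = pycassa.ConnectionPool('MyKeyspace', ['localhost:9160'])
    ops = pycassa.ColumnFamily(pool, 'CounterOps')  # made-up CF of operations

    def apply_increment(entity_key, amount, request_id=None):
        """Record "+amount" under a column named by the request ID."""
        request_id = request_id or str(uuid.uuid4())
        # Re-driving the same request rewrites the same column, so the
        # update is idempotent rather than applied twice.
        ops.insert(entity_key, {request_id: '%+d' % amount})
        return request_id

    def current_value(entity_key, base=0):
        """Fold all recorded operations into the current value."""
        try:
            # For brevity this ignores paging past pycassa's default column
            # count; a real implementation would page through all columns.
            columns = ops.get(entity_key)
        except pycassa.NotFoundException:
            return base
        return base + sum(int(op) for op in columns.values())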

>>>>> 4) You could take an approach like Dominic Williams proposed with
>>>>> Cages:
>>>>> http://ria101.wordpress.com/2010/05/12/locking-and-transactions-over-cassandra-using-cages/
>>>>> The gist is that you snapshot all the original values that you're about
>>>>> to munge somewhere else (in his case, ZooKeeper), make your updates,
>>>>> and then delete the snapshot (and that delete needs to be atomic). If
>>>>> the snapshot data was never deleted, then subsequent accessors (even
>>>>> readers) of the data rows need to do the rollback of the previous
>>>>> transaction themselves before they can read/write this data. They do
>>>>> the rollback by just overwriting the current values with what is in the
>>>>> snapshot. It offloads the work of the rollback to the next worker that
>>>>> accesses the data. This approach probably needs a generic/high-level
>>>>> programming layer to handle all of the details and complexity, and it
>>>>> doesn't seem like it was ever added to Cages.
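A rough sketch of the snapshot/rollback idea in (4), with the snapshot kept
in a made-up Snapshots column family rather than ZooKeeper; it glosses over
the locking you would still need around the whole sequence, and over how
later accessors discover an abandoned transaction id:

    import json

    import pycassa

    pool = pycassa.ConnectionPool('MyKeyspace', ['localhost:9160'])
    data = pycassa.ColumnFamily(pool, 'MyData')      # made-up CFs
    snapshots = pycassa.ColumnFamily(pool, 'Snapshots')

    def update_with_rollback(txn_id, row_keys, apply_updates):
        """Snapshot the rows, apply the updates, then discard the snapshot."""
        before = {}
        for key in row_keys:
            try:
                before[key] = data.get(key)
            except pycassa.NotFoundException:
                before[key] = {}
        # The snapshot lives in a single row, so removing it is atomic.
        snapshots.insert(txn_id, {'rows': json.dumps(before)})
        apply_updates(data)              # the multi-row writes we want atomic
        snapshots.remove(txn_id)         # "commit": drop the snapshot

    def rollback_if_abandoned(txn_id):
        """Called by later readers/writers before touching the same rows."""
        try:
            snap = snapshots.get(txn_id)
        except pycassa.NotFoundException:
            return                       # previous transaction completed fine
        for key, columns in json.loads(snap['rows']).items():
            if columns:
                data.insert(key, columns)   # overwrite with "before" values
        snapshots.remove(txn_id)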


>>>>> Are there other approaches or best practices that I missed? I would be
>>>>> very interested in hearing any opinions from those who have tackled
>>>>> these problems before.
>>>>>
>>>>> Thanks!
>>>>> John
>>>>
>>>> --
>>>> Jérémy