cassandra-user mailing list archives

From Ryan Svihla <>
Subject Re: Isolation in case of Single Partition Writes and Batching with LWT
Date Mon, 12 Sep 2016 12:57:08 GMT
It was just the first place Google turned up; I made the answer late in the evening, trying to
help someone out in my own free time.


Ryan Svihla

> On Sep 12, 2016, at 6:34 AM, Mark Thomas <> wrote:
>> On 11/09/2016 23:07, Ryan Svihla wrote:
>> 1. A batch with updates to a single partition turns into a single
>> mutation, so partial writes aren't possible (so you may as well use
>> unlogged batches)
>> 2. Yes, so use local_serial or serial reads, and all updates that need
>> to honor LWT must be LWTs as well; this way everything is buying into
>> the same protocol and behaving accordingly.
>> 3. LWT works with batches (they have to target the same partition). If
>> the condition doesn't fire, none of the batch will (the same partition
>> means it's the same mutation anyway, so there really isn't any magic
>> going on).
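The three points above can be sketched in CQL. The table and column names here are hypothetical, and this is only a sketch of the semantics described, not code from the thread:

```cql
-- Hypothetical table: all rows for one job live in one partition.
CREATE TABLE jobs (
    job_id int,
    step   int,
    state  text,
    PRIMARY KEY (job_id, step)
);

-- A batch touching only partition job_id = 1 becomes a single mutation.
-- The IF condition makes the whole batch conditional: if it does not
-- fire, none of the statements are applied (point 3 above).
BEGIN BATCH
    UPDATE jobs SET state = 'running' WHERE job_id = 1 AND step = 1 IF state = 'pending';
    UPDATE jobs SET state = 'running' WHERE job_id = 1 AND step = 2;
APPLY BATCH;

-- Reads that need to observe Paxos state should use a serial
-- consistency level (point 2 above); in cqlsh, for example:
-- CONSISTENCY SERIAL;
SELECT state FROM jobs WHERE job_id = 1;
```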
> Is there a good reason for linking to the 3rd party docs rather than the
> official docs in this case? I can't see one at the moment.
> The official docs appear to be:
> It might not matter in this particular instance but it looks as if there
> is a little more to the syntax than the 3rd party docs suggest (even if
> you switch to the latest version of those 3rd party docs).
> Generally, if you are going to point to docs, please point to the
> official Apache Cassandra docs unless there is a very good reason not
> to. (And if the good reason is that there’s a deficiency in the Apache
> Cassandra docs, please make it known on the list or in a Jira so someone
> can write what’s missing)
> Mark
>> Your biggest issue with such a design will be contention (as it would
>> be with an RDBMS using, say, row locking), since by design you're making
>> all reads and writes block until any pending ones are complete. I'm sure
>> there are a couple of things I forgot, but this is the standard wisdom.
>> Regards,
>> Ryan Svihla
>> On Sep 11, 2016, at 3:49 PM, Jens Rantil <> wrote:
>>> Hi,
>>> This might be off-topic, but you could always use Zookeeper locking
>>> and/or Apache Kafka topic keys for doing things like this.
>>> Cheers,
>>> Jens
>>> On Tuesday, September 6, 2016, Bhuvan Rawal <> wrote:
>>>    Hi,
>>>    We are working to solve on a multi threaded distributed design
>>>    which in which a thread reads current state from Cassandra (Single
>>>    partition ~ 20 Rows), does some computation and saves it back in.
>>>    But it needs to be ensured that in between reading and writing by
>>>    that thread any other thread should not have saved any operation
>>>    on that partition.
>>>    We have thought of a solution for this: *having a static write_time
>>>    column* in the schema. Every time a thread picks up a job, the
>>>    read is performed with LOCAL_QUORUM. The write back to Cassandra
>>>    is a batch containing an LWT (IF write_time equals the value that
>>>    was read); if the condition fails, the read and computation are
>>>    performed again, and so on. This ensures that while saving, the
>>>    partition is still in the state it was read in.
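The scheme described above can be sketched in CQL. The table and column names are hypothetical, and the timestamp literal stands in for whatever value the earlier read returned:

```cql
-- Static column: one write_time per partition, shared by all rows.
CREATE TABLE job_state (
    job_id     int,
    row_id     int,
    payload    text,
    write_time timestamp static,
    PRIMARY KEY (job_id, row_id)
);

-- 1. Read the partition (~20 rows) at LOCAL_QUORUM, remembering write_time.
SELECT row_id, payload, write_time FROM job_state WHERE job_id = 42;

-- 2. Write back in a single-partition batch; the batch only applies if
--    nobody else bumped write_time since the read.
BEGIN BATCH
    UPDATE job_state SET write_time = toTimestamp(now())
        WHERE job_id = 42 IF write_time = '2016-09-06 10:00:00+0000';
    UPDATE job_state SET payload = 'new value' WHERE job_id = 42 AND row_id = 1;
    DELETE FROM job_state WHERE job_id = 42 AND row_id = 2;
APPLY BATCH;
-- If the result row shows [applied] = false, re-read, recompute, retry.
```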
>>>    In order to avoid race conditions we need to ensure a couple of things:
>>>    1. While saving data in a batch to a single partition (*rows may
>>>    be updates, deletes, inserts*), are they isolated per replica node
>>>    (not necessarily on the cluster as a whole)? Is there a possibility
>>>    of a client reading partial rows?
>>>    2. If we do LOCAL_QUORUM reads and LOCAL_QUORUM writes, could
>>>    there be a chance of inconsistency (when LWT is being used in
>>>    batches)?
>>>    3. Is it possible to use multiple LWTs in a single batch? In
>>>    general, how does LWT perform with batches, and is Paxos acted on
>>>    before batch execution?
>>>    Can someone help us with this?
>>>    Thanks & Regards,
>>>    Bhuvan
>>> -- 
>>> Jens Rantil
>>> Backend engineer
>>> Tink AB
>>> Email: <>
>>> Phone: +46 708 84 18 32
>>> Web: <>
