cassandra-user mailing list archives

From aaron morton <>
Subject Re: Using Cassandra for transaction logging, good idea?
Date Mon, 01 Aug 2011 01:02:15 GMT
If you are doing inserts only it should be OK. If you want a unique and roughly ordered Tx id,
perhaps consider a TimeUUID in the first case; they are as ordered as the clocks generating
the UUIDs, which is about as good as Snowflake does. Cannot remember what resolution the
two use.
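To make the "roughly ordered" point concrete, here is a minimal sketch using Python's standard-library version-1 (time-based) UUIDs, which is the same idea as Cassandra's TimeUUID; the function name is mine, not from the thread. The UUID's 60-bit `time` field counts 100-nanosecond intervals, so ids generated on one host compare in creation order as long as the clock does not step backwards.

```python
import uuid

def time_ordered_id():
    """Generate a version-1 (time-based) UUID.

    The 60-bit `time` field counts 100 ns intervals since the
    Gregorian epoch (1582-10-15), so ids minted on a single host
    are non-decreasing unless the clock jumps backwards.  Across
    hosts, ordering is only as good as clock synchronization.
    """
    return uuid.uuid1()

a = time_ordered_id()
b = time_ordered_id()
assert a.version == 1
assert a.time <= b.time  # non-decreasing on one machine
```

Across multiple writer nodes the ordering is only approximate, which is exactly the clock-skew caveat raised later in this thread.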

Be aware that writes are not isolated: if your write involves multiple columns, readers may
see some of the columns before all are written. This may be an issue if you are doing reads
around the same rows as the new transactions are being written.
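One workaround (my suggestion, not something proposed in the thread) is to serialize the whole transaction record into a single column value, so a reader sees all of it or none of it; a single-column write is atomic. A minimal sketch, with illustrative field names:

```python
import json

def pack_tx(tx: dict) -> str:
    """Serialize the whole transaction into one value so a reader
    never observes a half-written set of columns.  Hypothetical
    workaround; field names are illustrative only."""
    return json.dumps(tx, sort_keys=True)

def unpack_tx(blob: str) -> dict:
    """Inverse of pack_tx."""
    return json.loads(blob)

record = {"tx_id": "abc-123", "amount": 100, "currency": "EUR"}
assert unpack_tx(pack_tx(record)) == record  # lossless round-trip
```

The trade-off is that individual fields are no longer separately readable or indexable on the server side.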

Aaron Morton
Freelance Cassandra Developer

On 1 Aug 2011, at 10:15, Kent Närling wrote:

> Sounds interesting.
> Reading a bit on Snowflake, it seems a bit uncertain whether it fulfills the A & B criteria?
> ie:
> >     A, eventually return all known transactions 
> >     B, Not return the same transaction more than once 
> Also, any reflections on the general idea to use Cassandra like this?
> It would seem to me that if you set the write consistency very high, then it should be
possible to achieve reliability comparable to a classic transactional database?
> Since we are talking business transactions here it is VERY important that once we write
a transaction we know that it will not be lost or partially written etc.
> On the other hand, we also know that it is insert only (i.e. no updates) and the insert operation
is atomic, so it could fit the Cassandra model quite well?
> Any other people using cassandra to store business data like this?
> On Sun, Jul 31, 2011 at 10:20 PM, Lior Golan [via [hidden email]] <[hidden email]> wrote:
> How about using Snowflake to generate the transaction ids:
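For readers unfamiliar with Snowflake: it packs a millisecond timestamp, a worker id, and a per-millisecond sequence number into a 64-bit integer, so ids are unique per worker and roughly time-ordered across workers. The sketch below follows Twitter's published layout (41/10/12 bits); the class name and epoch constant are assumptions for illustration, and a real implementation must wait for the next millisecond when the sequence wraps.

```python
import time

# Snowflake-style layout: 41 bits of ms timestamp | 10 bits worker | 12 bits sequence.
EPOCH_MS = 1288834974657  # Twitter's custom epoch; an assumption for this sketch

class SnowflakeSketch:
    """Illustrative, single-threaded Snowflake-style id generator."""

    def __init__(self, worker_id: int):
        assert 0 <= worker_id < 1024, "worker id must fit in 10 bits"
        self.worker_id = worker_id
        self.last_ms = -1
        self.seq = 0

    def next_id(self) -> int:
        ms = int(time.time() * 1000)
        if ms == self.last_ms:
            # Same millisecond: bump the 12-bit sequence.
            # (Real implementations block until the next ms on wrap.)
            self.seq = (self.seq + 1) & 0xFFF
        else:
            self.seq = 0
            self.last_ms = ms
        return ((ms - EPOCH_MS) << 22) | (self.worker_id << 12) | self.seq

gen = SnowflakeSketch(worker_id=1)
a, b = gen.next_id(), gen.next_id()
assert b > a  # monotonic within one generator
```

Because the high bits are the timestamp, sorting the integers sorts the ids roughly by creation time, which is what makes them usable as a cursor.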
> From: Kent Narling [mailto:[hidden email]] 
> Sent: Thursday, July 28, 2011 5:46 PM
> To: [hidden email]
> Subject: Using Cassandra for transaction logging, good idea?
> Hi! 
> I am considering using Cassandra for clustered transaction logging in a project. 
> What I need are, in principle, 3 functions: 
> 1 - Log transaction with a unique (but possibly non-sequential) id 
> 2 - Fetch transaction with a specific id 
> 3 - Fetch X new transactions "after" a specific cursor/transaction 
>      This function must be guaranteed to: 
>      A, eventually return all known transactions 
>      B, Not return the same transaction more than once 
>      The order of the transactions fetched does not have to be strictly time-sorted, 
>      but in practice it probably has to be based on some time-oriented order to be able
to support cursors. 
> I can see that 1 & 2 are trivial to solve in Cassandra, but is there any elegant
way to solve 3? 
> Since there might be multiple nodes logging transactions, their clocks might not be perfectly
synchronized (to the millisecond level), so sorting on time is not stable. 
> Possibly creating a synchronized incremental id might be one option but that could create
a cluster bottleneck etc? 
> Another alternative might be to use Cassandra for 1 & 2 and then store an ordered
list of ids in a standard DB. This might be a reasonable compromise, since 3 is less critical
from an HA point of view, but maybe someone can point me to a more elegant solution using Cassandra?
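Requirement 3 can be modeled as a slice over a time-ordered index: keep the ids sorted, and each fetch returns only ids strictly after the caller's cursor. The in-memory sketch below is my illustration, not anything from the thread; in Cassandra the equivalent would be a column slice over TimeUUID column names starting just past the cursor.

```python
import bisect

def fetch_after(index, cursor, limit):
    """Return up to `limit` (id, payload) pairs whose id sorts
    strictly after `cursor`.

    If the caller advances its cursor past everything returned,
    no transaction is returned twice (criterion B), and repeated
    polling eventually sees every id (criterion A), provided ids
    are only ever appended in sorted order.
    """
    ids = [i for i, _ in index]
    start = bisect.bisect_right(ids, cursor)
    return index[start:start + limit]

# Hypothetical time-ordered log of (id, payload) pairs:
log = [(1, "tx-a"), (2, "tx-b"), (3, "tx-c"), (4, "tx-d")]
assert fetch_after(log, cursor=1, limit=2) == [(2, "tx-b"), (3, "tx-c")]
assert fetch_after(log, cursor=4, limit=2) == []  # caught up
```

The "appended in sorted order" proviso is exactly where the clock-skew concern bites: a late-arriving id that sorts before an already-consumed cursor would be skipped, so criterion A only holds approximately unless the reader lags behind the maximum clock skew.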

