Sounds interesting.

Reading a bit about Snowflake, it seems a bit uncertain whether it fulfills criteria A & B?

ie:

>     A, eventually return all known transactions 
>     B, Not return the same transaction more than once 
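
From a quick look at the Snowflake project, an id seems to pack a millisecond timestamp, a worker id and a per-worker sequence number into 64 bits. A minimal Python sketch of that layout (the bit widths and the custom epoch are my reading of the code, so treat them as assumptions):

# Rough layout of a 64-bit Snowflake id, as I read the project:
#   41 bits  milliseconds since a custom epoch
#   10 bits  worker/machine id
#   12 bits  per-millisecond sequence number
TWEPOCH_MS = 1288834974657  # Twitter's custom epoch in ms (assumption from the source)

def decompose(snowflake_id: int) -> dict:
    """Split an id back into its timestamp, worker and sequence parts."""
    return {
        "timestamp_ms": (snowflake_id >> 22) + TWEPOCH_MS,
        "worker_id": (snowflake_id >> 12) & 0x3FF,  # 10 bits
        "sequence": snowflake_id & 0xFFF,           # 12 bits
    }

If that is right, B looks fairly safe: worker id + sequence make every id unique even within the same millisecond. A is the part I am less sure about, since the ids are only ordered by each generator's local clock, so a cursor that pages strictly by id could skip a transaction written by a node whose clock lags behind.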


Also, any reflections on the general idea of using Cassandra like this?

It would seem to me that if you set the write consistency very high, then it should be possible to achieve reliability comparable to a classic transactional database?

Since we are talking business transactions here, it is VERY important that once we write a transaction, we know that it will not be lost or partially written, etc.
On the other hand, we also know that it is insert-only (i.e. no updates) and that the insert operation is atomic, so it could fit the Cassandra model quite well?
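
Just to make "write consistency very high" concrete, this is roughly what I have in mind (a minimal sketch with the Python cassandra-driver; the keyspace, table and column names are invented for illustration):

from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

# Invented schema: tx(id bigint PRIMARY KEY, payload blob)
cluster = Cluster(["127.0.0.1"])
session = cluster.connect("txlog")

insert = SimpleStatement(
    "INSERT INTO tx (id, payload) VALUES (%s, %s)",
    consistency_level=ConsistencyLevel.QUORUM,  # or ALL for the strictest durability
)

# The single-row insert is atomic, so the transaction is either fully
# visible or not visible at all -- never partially written.
session.execute(insert, (1234567890123456789, b"serialized transaction"))

Whether QUORUM or ALL is the right level would then come down to how many node failures the log has to survive versus how available writes need to stay.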

Is anyone else using Cassandra to store business data like this?

On Sun, Jul 31, 2011 at 10:20 PM, Lior Golan [via [hidden email]] <[hidden email]> wrote:

How about using Snowflake to generate the transaction ids: https://github.com/twitter/snowflake

 

From: Kent Narling [mailto:[hidden email]]

Sent: Thursday, July 28, 2011 5:46 PM
To: [hidden email]

Subject: Using Cassandra for transaction logging, good idea?

 

Hi! 



I am considering using Cassandra for clustered transaction logging in a project. 

What I need is, in principle, 3 functions: 

1 - Log transaction with a unique (but possibly non-sequential) id 
2 - Fetch transaction with a specific id 
3 - Fetch X new transactions "after" a specific cursor/transaction 
     This function must be guaranteed to: 
     A, eventually return all known transactions 
     B, Not return the same transaction more than once 
     The order of the transactions fetched does not have to be strictly time-sorted, 
     but in practice it probably has to be based on some time-oriented order to be able to support cursors. 

I can see that 1 & 2 are trivial to solve in Cassandra, but is there any elegant way to solve 3? 
Since there might be multiple nodes logging transactions, their clocks might not be perfectly synchronized (to the millisecond level), etc., so sorting on time is not stable. 
Possibly creating a synchronized incremental id might be one option, but that could create a cluster bottleneck, etc.? 

Another alternative might be to use Cassandra for 1 & 2 and then store an ordered list of ids in a standard DB. This might be a reasonable compromise since 3 is less critical from an HA point of view, but maybe someone can point me to a more elegant solution using Cassandra? 
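
To make 3 a bit more concrete, the kind of time-oriented layout I have in mind is something like the sketch below (all names are invented and this is only an illustration of the requirement, not something I have tested): each transaction is also written into a time-bucketed partition, clustered by a TimeUUID, and a cursor pages forward within (and then across) buckets.

from uuid import UUID
from cassandra.cluster import Cluster

# Invented schema:
#   CREATE TABLE tx_by_bucket (
#       bucket text,        -- e.g. one partition per hour: '2011-07-28T15'
#       id     timeuuid,    -- clustering column, time-ordered within the bucket
#       tx_id  bigint,      -- the "real" transaction id from function 1
#       PRIMARY KEY (bucket, id)
#   );
cluster = Cluster(["127.0.0.1"])
session = cluster.connect("txlog")

page_query = session.prepare(
    "SELECT id, tx_id FROM tx_by_bucket WHERE bucket = ? AND id > ? LIMIT ?"
)

def fetch_after(bucket: str, cursor: UUID, batch_size: int = 100):
    """Fetch up to batch_size transactions after the cursor within one time bucket.

    The caller remembers the last id returned as the new cursor, and moves on
    to the next bucket once this one is exhausted and safely in the past.
    """
    rows = session.execute(page_query, (bucket, cursor, batch_size))
    return [(row.id, row.tx_id) for row in rows]

The obvious caveat is the clock problem again: the TimeUUIDs come from each writer's clock, so the reader would have to stay some safe margin behind "now" before treating a bucket as complete, otherwise requirement A can be violated.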






