From Aaron Morton <aa...@thelastpickle.com>
Subject Re: Consolidating records and TTL
Date Thu, 05 Jun 2014 09:26:42 GMT
As Tyler says, with atomic batches, which are enabled by default, the cluster will keep trying
to replay the insert / deletes.

Every 60 seconds, nodes check their local batch log for failed batches, that is, batches the
coordinator did not acknowledge as successfully completed. So there is a window in which it’s
possible for not all mutations in the batch to have been applied. This can happen when a write
timeout occurs while processing a batch of 2 rows: the requested consistency level (CL) will not
have been achieved on one or more of the rows. The coordinator will leave it to the batch log
to replay the request, and the client driver will (with the default config) not retry.

You can use a model like this. 

create table ledger (
    account    int,
    tx_id      timeuuid,
    sub_total  int,
    primary key (account, tx_id)
);

create table account (
    account     int,
    total       int,
    last_tx_id  timeuuid,
    primary key (account)
);
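
Each change to the running total is then a single insert into the ledger. A minimal sketch,
with hypothetical values, using now() so the server generates the timeuuid:

-- record one change; account 1 and sub_total 250 are made-up examples
insert into ledger (account, tx_id, sub_total)
values (1, now(), 250);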

To get the total:

select * from account where account = X;

Then get the ledger entries you need:

select * from ledger where account = X and tx_id > last_tx_id;
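
Putting the two together, a sketch of the read path (the account id and the timeuuid literal
are hypothetical, and the final sum happens client side):

-- 1. read the consolidated total and the consolidation point
select total, last_tx_id from account where account = 1;

-- 2. read only the changes made since then, substituting the
--    last_tx_id value returned by the first query
select sub_total from ledger
where account = 1
and tx_id > a4c63030-ec9e-11e3-ac10-0800200c9a66;

-- the running total is account.total plus the sum of the
-- returned sub_total values, added up in the client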

The ledger query will degrade as the partition in the ledger table gets bigger, because the
read will need to consult the column index (see column_index_size_in_kb in cassandra.yaml). It
uses the index to find the first page that contains the rows we are interested in and then
reads forward to the end of the row. It’s not the most efficient type of read, but if you are
going to delete ledger entries it *should* be able to skip over the tombstones without reading
them.
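
For reference, the knob mentioned above lives in cassandra.yaml; 64 KB is the shipped default:

# cassandra.yaml: an index entry is written for every 64 KB of
# row data, and reads use these entries to seek within a partition
column_index_size_in_kb: 64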

When you want to update the total in the account, write to the account table and update both
the total and the last_tx_id. You can then delete the consolidated ledger entries if needed.
Don’t forget to ensure that only one client thread is doing this at a time.
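
A sketch of that consolidation step as a single logged batch (all ids and totals here are
hypothetical, and each ledger entry is deleted by its full primary key):

begin batch
  update account
  set total = 1250, last_tx_id = a4c63030-ec9e-11e3-ac10-0800200c9a66
  where account = 1;

  delete from ledger
  where account = 1 and tx_id = 8c6f3f10-ec9d-11e3-ac10-0800200c9a66;

  delete from ledger
  where account = 1 and tx_id = a4c63030-ec9e-11e3-ac10-0800200c9a66;
apply batch;

If the coordinator fails part way through, the batch log replay described above eventually
applies all three mutations, so the account total and the ledger cannot permanently disagree.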

Hope that helps. 
Aaron


-----------------
Aaron Morton
New Zealand
@aaronmorton

Co-Founder & Principal Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com

On 5/06/2014, at 10:37 am, Tyler Hobbs <tyler@datastax.com> wrote:

> Just use an atomic batch that holds both the insert and deletes: http://www.datastax.com/dev/blog/atomic-batches-in-cassandra-1-2
> 
> 
> On Tue, Jun 3, 2014 at 2:13 PM, Charlie Mason <charlie.mas@gmail.com> wrote:
> Hi All.
> 
> I have a system that's going to make possibly several concurrent changes to a running
> total. I know I could use a counter for this. However, I have extra metadata I can store
> with the changes which would allow me to replay the changes. If I use a counter and it
> loses some writes I can't recover it, as I will only have its current total, not the
> extra metadata to know where to replay from.
> 
> What I was planning to do was write each change of the value to a CQL table with a
> TimeUUID as a row-level primary key as well as a partition key. Then, when I need to
> read the running total back, I will do a query for all the changes and add them up to
> get the total.
> 
> As there could be tens of thousands of these, I want to have a period after which they
> are consolidated. Most won't be anywhere near that, but a few will, which I need to be
> able to support. So I was also going to have a consolidated total table which holds the
> UUID of the values consolidated up to. Since I can bound the query for the recent
> updates by that UUID, I should be able to avoid all the tombstones. So if the read
> encounters any changes that can be consolidated, it inserts a new consolidated value
> and deletes the newly consolidated changes.
> 
> What I am slightly worried about is what happens if the consolidated value insert fails
> but the deletes to the change records succeed. I would be left with an inconsistent
> total indefinitely. I have come up with a couple of ideas:
> 
> 
> 1. I could make it require all nodes to acknowledge the write before deleting the
> difference records.
> 
> 2. Maybe I could have another period after it's consolidated but before it's deleted?
> 
> 3. Is there any way I could use a TTL to allow it to be deleted after a period of time?
> Chances are another read would come in and fix the value.
> 
> 
> Anyone got any other suggestions on how I could implement this?
> 
> 
> Thanks,
> 
> Charlie M
> 
> 
> 
> -- 
> Tyler Hobbs
> DataStax

