With an RF and CL of one there is no replication, so there can be no issue with distributed deletes. Writes (and reads) can only go to the one host that has the data, and they will be refused if that node is down. I'd guess that your app isn't deleting records when you think it is, or that the delete is failing but not being detected as failed.

-Bryan

On Fri, Feb 15, 2013 at 10:21 AM, Mike <email@example.com> wrote:
If you increase the number of nodes to 3, with an RF of 3, then you should be able to read and delete using a QUORUM consistency level, which I believe will help here. Also, make sure the clocks on your servers are in sync, using NTP, as drifting time between your client and the servers could cause updates to be mistakenly dropped for being old.
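To make that concrete, here is a minimal sketch of QUORUM reads and deletes using the DataStax Java driver; the driver choice, the keyspace, and the events table with its columns are all assumptions for illustration. Note the catch block: an UnavailableException means the delete was NOT applied, which is also one way to spot the silently-failing deletes Bryan mentions.

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ConsistencyLevel;
    import com.datastax.driver.core.ResultSet;
    import com.datastax.driver.core.Row;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.SimpleStatement;
    import com.datastax.driver.core.exceptions.UnavailableException;

    public class QuorumDelete {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect("my_keyspace"); // hypothetical keyspace

            // Read pending events at QUORUM so the read overlaps any QUORUM
            // write on at least one replica (2 of 3 replicas with RF=3).
            SimpleStatement read = new SimpleStatement(
                    "SELECT event_id FROM events WHERE queue_id = 42");
            read.setConsistencyLevel(ConsistencyLevel.QUORUM);
            ResultSet rows = session.execute(read);

            for (Row row : rows) {
                // ... process the event, then delete it at QUORUM as well.
                SimpleStatement delete = new SimpleStatement(
                        "DELETE FROM events WHERE queue_id = 42 AND event_id = "
                        + row.getUUID("event_id"));
                delete.setConsistencyLevel(ConsistencyLevel.QUORUM);
                try {
                    session.execute(delete);
                } catch (UnavailableException e) {
                    // Not enough replicas were up: the delete did NOT happen.
                    // Surface this and retry instead of moving on silently.
                    System.err.println("Delete failed, will retry: " + e.getMessage());
                }
            }
            cluster.close();
        }
    }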
Also, make sure you are running with a gc_grace_seconds value that is high enough. The default is 10 days (864000 seconds). If a replica misses a delete and isn't repaired within that window, the tombstone can be collected and the deleted data can come back.
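For reference, gc_grace_seconds is set per table; a one-liner against the same hypothetical session and events table as above:

    // Tombstones become eligible for collection after gc_grace_seconds, so
    // every replica must see the delete (e.g. via repair) within that window.
    session.execute("ALTER TABLE events WITH gc_grace_seconds = 864000"); // 10 days, the default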
Hope this helps,
On 2/15/2013 1:13 PM, Víctor Hugo Oliveira Molinar wrote:
I have a column family filled with event objects which need to be processed by query threads.
Once each thread queries for those objects (spread among columns below a row), it performs a delete operation for each object in Cassandra.
This is done to ensure that those events won't be processed again.
Some tests have shown me that it works, but sometimes those events don't get deleted. I checked this through cassandra-cli, etc.
So, after reading http://wiki.apache.org/cassandra/DistributedDeletes, I came to the conclusion that I may be reading old data.
My cluster is currently configured as: 2 nodes, RF 1, CL 1.
In that case, what should I do?
- Increase the consistency level for the write operations (in this case, the deletions), to ensure that those deletions are stored on all nodes.
- Increase the consistency level for the read operations, to ensure that I'm not reading events that were already processed (deleted).
Thanks in advance