cassandra-commits mailing list archives

From "Arya Goudarzi (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (CASSANDRA-5412) Lots of deleted rows came back to life after upgrade from 1.1.6 to 1.1.10
Date Wed, 03 Apr 2013 21:19:16 GMT

     [ https://issues.apache.org/jira/browse/CASSANDRA-5412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arya Goudarzi updated CASSANDRA-5412:
-------------------------------------

    Affects Version/s: 1.1.10
    
> Lots of deleted rows came back to life after upgrade from 1.1.6 to 1.1.10
> -------------------------------------------------------------------------
>
>                 Key: CASSANDRA-5412
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5412
>             Project: Cassandra
>          Issue Type: Bug
>    Affects Versions: 1.1.10
>         Environment: Ubuntu 10.04 LTS
> Sun Java 6 u39
> 1.1.6 and 1.1.10
>            Reporter: Arya Goudarzi
>
> Also per the discussion here: http://www.mail-archive.com/user@cassandra.apache.org/msg28905.html
> I was not able to find any answer as to why a simple upgrade could bring millions of deleted
> rows back to life. We have successful repairs running on our cluster every night (see the
> repair-cadence sketch after the quoted report), so unless repair is not doing its job, it is
> not possible, to the best of my knowledge, for deleted rows to come back unless there is a
> bug. I had previously run into this issue when I upgraded our sandbox cluster. I failed at
> every attempt to reproduce it by restoring a fresh cluster from a snapshot and performing
> the upgrade from 1.1.6 to 1.1.10; I even exercised this with a snapshot of our production
> cluster before upgrading, without success. So I finally made the decision to upgrade, and
> guess what?! Millions of deleted rows came back after the upgrade.
> This time I confirmed the timestamps of the deleted rows that came back: they were from
> before the time they were deleted. So this is just like when tombstones get purged before
> they get propagated (see the resurrection sketch after the quoted report). We use
> nanosecond-precision timestamps (19 digits).
> My discussion on the mailing list did not lead anywhere, though Aaron helped me find another
> possible way this could happen, via Hinted Handoff, for which I filed a separate ticket. I
> don't believe that is an issue for us, as we don't have nodes down for long periods of time.
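
The reporter's reliance on nightly repair rests on an invariant worth making explicit: a
tombstone is protected from resurrection only if every replica sees it before it becomes
eligible for purging, i.e. within gc_grace_seconds of the delete. A minimal sketch, with the
repair cadence and the Cassandra 1.1 default gc_grace_seconds assumed rather than taken from
the ticket:

    // Illustration only; the cadence and gc_grace values below are assumptions,
    // not taken from the ticket. A tombstone is safe from resurrection only if
    // every replica sees it (e.g. via repair) before it becomes eligible for
    // purging, gc_grace_seconds after the delete.
    public class RepairCadenceCheck {
        public static void main(String[] args) {
            long gcGraceSeconds = 864000L;     // Cassandra 1.1 default: 10 days
            long repairInterval = 24L * 3600L; // nightly repair, in seconds

            // If this invariant holds, purge-before-propagation should not
            // occur through normal tombstone garbage collection.
            boolean withinGrace = repairInterval < gcGraceSeconds;
            System.out.println("repair cadence within gc_grace: " + withinGrace);
        }
    }

A nightly cadence sits well inside the 10-day default, which is why the reporter argues the
resurrection points at a bug rather than at missed repairs.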
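
The timestamp observation (resurrected rows carrying timestamps from before their deletion)
is what last-write-wins reconciliation predicts once a tombstone is lost or purged. The
helper below is hypothetical, not Cassandra internals; it only sketches the reconciliation
rule:

    // Hypothetical helper, not Cassandra code: last-write-wins reconciliation.
    public class TombstoneResurrection {
        // A value is shadowed only while a newer tombstone exists for it.
        static boolean isLive(long valueTs, Long tombstoneTs) {
            return tombstoneTs == null || valueTs > tombstoneTs;
        }

        public static void main(String[] args) {
            long writeTs  = 1360000000000000000L; // 19-digit nanosecond write
            long deleteTs = 1361000000000000000L; // later tombstone

            // Replica that saw the delete: the row reads as deleted.
            System.out.println(isLive(writeTs, deleteTs)); // false

            // Tombstone purged before reaching a replica that still holds the
            // old value: the row reads as live again, with its original
            // pre-delete timestamp, exactly as reported above.
            System.out.println(isLive(writeTs, null));     // true
        }
    }

With the tombstone gone, nothing outranks the old value, so it reads as live with its
original pre-delete timestamp.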

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
