Date: Thu, 26 Mar 2015 19:53:53 +0000 (UTC)
From: "Philip Thompson (JIRA)"
To: commits@cassandra.apache.org
Reply-To: dev@cassandra.apache.org
Subject: [jira] [Commented] (CASSANDRA-9045) Deleted columns are resurrected after repair in wide rows

    [ https://issues.apache.org/jira/browse/CASSANDRA-9045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14382532#comment-14382532 ]

Philip Thompson commented on CASSANDRA-9045:
--------------------------------------------

[~thobbs], this will be most meaningful to you. The Digest Mismatch seems interesting to me; how could that happen at CL=ALL for all operations?

> Deleted columns are resurrected after repair in wide rows
> ---------------------------------------------------------
>
>                 Key: CASSANDRA-9045
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-9045
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>            Reporter: Roman Tkachenko
>            Assignee: Marcus Eriksson
>            Priority: Critical
>             Fix For: 2.0.14
>
>         Attachments: cqlsh.txt
>
>
> Hey guys,
> After almost a week of researching the issue and trying out multiple things with (almost) no luck, it was suggested (on the user@cass list) that I file a report here.
> h5. Setup
> Cassandra 2.0.13 (we had the issue with 2.0.10 as well and upgraded to see if it would go away).
> Multi-datacenter cluster of 12+6 nodes.
> h5. Schema
> {code}
> cqlsh> describe keyspace blackbook;
>
> CREATE KEYSPACE blackbook WITH replication = {
>   'class': 'NetworkTopologyStrategy',
>   'IAD': '3',
>   'ORD': '3'
> };
>
> USE blackbook;
>
> CREATE TABLE bounces (
>   domainid text,
>   address text,
>   message text,
>   "timestamp" bigint,
>   PRIMARY KEY (domainid, address)
> ) WITH
>   bloom_filter_fp_chance=0.100000 AND
>   caching='KEYS_ONLY' AND
>   comment='' AND
>   dclocal_read_repair_chance=0.100000 AND
>   gc_grace_seconds=864000 AND
>   index_interval=128 AND
>   read_repair_chance=0.000000 AND
>   populate_io_cache_on_flush='false' AND
>   default_time_to_live=0 AND
>   speculative_retry='99.0PERCENTILE' AND
>   memtable_flush_period_in_ms=0 AND
>   compaction={'class': 'LeveledCompactionStrategy'} AND
>   compression={'sstable_compression': 'LZ4Compressor'};
> {code}
> h5. Use case
> Each row (identified by a domainid) can have a very large number of columns (bounce entries), so rows can get pretty wide. In practice most of the rows are not that big, but some of them contain hundreds of thousands and even millions of columns.
> Columns are not TTL'ed but can be deleted using the following CQL3 statement:
> {code}
> delete from bounces where domainid = 'domain.com' and address = 'alice@example.com';
> {code}
> All queries are performed using LOCAL_QUORUM CL.
> h5. Problem
> We weren't very diligent about running repairs on the cluster initially, but shortly after we started doing so we noticed that some of the previously deleted columns (bounce entries) are there again, as if the tombstones had disappeared.
> I have run this test multiple times via cqlsh, on the row of the customer who originally reported the issue (sketched in the transcript after this list):
> * delete an entry
> * verify it's not returned even with CL=ALL
> * run repair on the nodes that own this row's key
> * the columns reappear and are returned even with CL=ALL
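> For concreteness, a minimal sketch of those steps (an illustrative transcript, not taken from the attached cqlsh.txt; it reuses the example key and address from the delete statement above, and shows repair in its simple per-keyspace/table form):
> {code}
> cqlsh> CONSISTENCY ALL;
> cqlsh> DELETE FROM blackbook.bounces WHERE domainid = 'domain.com' AND address = 'alice@example.com';
> cqlsh> SELECT * FROM blackbook.bounces WHERE domainid = 'domain.com' AND address = 'alice@example.com';
>
> (0 rows)
> {code}
> Then, on each node that owns the row's key:
> {code}
> nodetool repair blackbook bounces
> {code}
> Once the repair finishes, the same SELECT at CL=ALL returns the deleted entry again.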
> I tried the same test on another row with much less data, and everything was correctly deleted and didn't reappear after repair.
> h5. Other steps I've taken so far
> Made sure NTP is running on all servers and the clocks are synchronized.
> Increased gc_grace_seconds to 100 days, ran a full repair (on the affected keyspace) on all nodes, then changed it back to the default 10 days. Didn't help.
> Performed one more test: updated one of the resurrected columns, then deleted it and ran repair again. This time the updated version of the column reappeared.
> Finally, I noticed these log entries for the row in question:
> {code}
> INFO [ValidationExecutor:77] 2015-03-25 20:27:43,936 CompactionController.java (line 192) Compacting large row blackbook/bounces:4ed558feba8a483733001d6a (279067683 bytes) incrementally
> {code}
> Figuring it might be related, I bumped "in_memory_compaction_limit_in_mb" to 512MB so the row fits into it, deleted the entry and ran repair once again. The log entry for this row was gone and the columns didn't reappear.
> We have a lot of rows much larger than 512MB, so we can't keep increasing this parameter forever, if that is indeed the issue.
> Please let me know if you need more information on the case or if I can run more experiments.
> Thanks!
> Roman



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)