Mailing-List: contact commits-help@cassandra.apache.org; run by ezmlm
Precedence: bulk
Reply-To: dev@cassandra.apache.org
Delivered-To: mailing list commits@cassandra.apache.org
Date: Wed, 23 Oct 2013 15:53:44 +0000 (UTC)
From: "Constance Eustace (JIRA)"
To: commits@cassandra.apache.org
Subject: [jira] [Comment Edited] (CASSANDRA-6137) CQL3 SELECT IN CLAUSE inconsistent
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

    [ https://issues.apache.org/jira/browse/CASSANDRA-6137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801014#comment-13801014 ]

Constance Eustace edited comment on CASSANDRA-6137 at 10/23/13 3:53 PM:
------------------------------------------------------------------------

Once the initial data run is done, a nodetool compact fixes the initial corruption; my second run is now about 4x further than it had gone before, and no corruption/bad WHERE IN results have occurred.
This could be some initial confusion in the internal data structures on newly created keyspaces that lack data, which resolves once the compaction thread catches up.

EDIT: I've done runs with 10x the data volume on inserts and 5x the number of update threads, and there have been no issues as long as a proper nodetool flush / nodetool compact / nodetool invalidatekeycache is done.

was (Author: cowardlydragon):
Once the initial data run is done, a nodetool compact fixes the initial corruption; my second run is now about 4x further than it had gone before, and no corruption/bad WHERE IN results have occurred. This could be some initial confusion in the internal data structures on newly created keyspaces that lack data, which resolves once the compaction thread catches up.

> CQL3 SELECT IN CLAUSE inconsistent
> ----------------------------------
>
>                 Key: CASSANDRA-6137
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6137
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>         Environment: Ubuntu AWS Cassandra 2.0.1 SINGLE NODE on EBS RAID storage
>                      OSX Cassandra 1.2.8 on SSD storage
>            Reporter: Constance Eustace
>            Priority: Minor
>
> Possible Resolution:
> What seems to be key is to run a nodetool compact (possibly preceded by a nodetool flush) after schema drops / creations / truncations, and then to invalidate the caches. This seems to align the data for new inserts/updates. In my reproduction tests I have been unable to generate the database corruption when nodetool flush, nodetool compact, and nodetool invalidatekeycache are run (we have turned off the row cache due to other bugs). Even after running a more stressful test with 10x the inserts and five separate concurrent update threads, the corruption did not appear.
> So I believe this is a tentative "fix" for this issue... in general, after any manipulation of the schema, you should run nodetool compact and invalidatekeycache.
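The flush / compact / key-cache-invalidation sequence recommended above can be scripted. The sketch below is an illustration, not part of the original ticket: it assumes `nodetool` is on the PATH, that `flush` and `compact` accept optional keyspace/table arguments, and that `invalidatekeycache` is cluster-wide (no arguments). The keyspace and table names are the ones from this issue.

```python
# Sketch: run the post-schema-change maintenance sequence described in the
# comment above (flush, then compact, then invalidate the key cache).
import subprocess


def maintenance_commands(keyspace=None, table=None):
    """Build the nodetool invocations, in order, optionally scoped to a
    keyspace (and table) so the expensive compact stays narrow."""
    scope = [a for a in (keyspace, table) if a]
    return [
        ["nodetool", "flush"] + scope,
        ["nodetool", "compact"] + scope,
        # Key cache only; the row cache is disabled in this environment.
        ["nodetool", "invalidatekeycache"],
    ]


def run_maintenance(keyspace=None, table=None, dry_run=True):
    """Print (dry run) or execute each nodetool command in sequence."""
    cmds = maintenance_commands(keyspace, table)
    for cmd in cmds:
        if dry_run:
            print(" ".join(cmd))
        else:
            subprocess.check_call(cmd)
    return cmds
```

For the table in this ticket, `run_maintenance("internal_submission", "entity_job")` would print the three commands; pass `dry_run=False` to actually execute them on a node.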
> I have not tested whether a general compact on all keyspaces and tables is necessary, or whether a more specific compact on the affected keyspace and/or its tables is enough (compact can be a very expensive operation).
> ------------------------------------------------------------------
> Problem Encountered:
> We are encountering inconsistent results from CQL3 queries that use an IN clause on column keys in the WHERE clause. This has been reproduced in cqlsh and with the JDBC driver. Specifically, we are doing queries to pull a subset of column keys for a specific row key.
> We detect this corruption by selecting all the column keys for a row and then trying different subsets of those column keys in WHERE ... IN (). Some of these column-key subset queries do not return all the requested column keys, even though the select-all-column-keys query finds them.
> It seems to appear when a large amount of raw insertion work (non-updates / newly ingested data) is combined with simultaneous updates to existing data. EDIT: this also seems to happen only with mass inserts+updates after schema changes / drops / table creation / table truncation. See the Possible Resolution section above.
> ------------------------------------------------------------------
> Details:
> Row key is e_entid
> Column key is p_prop
> This returns roughly 21 rows for 21 column keys that match p_prop:
> cqlsh> SELECT e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars FROM internal_submission.Entity_Job WHERE e_entid = '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB';
>
> These three queries each return one row for the requested single column key in the IN clause:
>
> SELECT e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars FROM internal_submission.Entity_Job WHERE e_entid = '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB' AND p_prop in ('urn:bby:pcm:job:ingest:content:complete:count');
> SELECT e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars FROM internal_submission.Entity_Job WHERE e_entid = '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB' AND p_prop in ('urn:bby:pcm:job:ingest:content:all:count');
> SELECT e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars FROM internal_submission.Entity_Job WHERE e_entid = '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB' AND p_prop in ('urn:bby:pcm:job:ingest:content:fail:count');
>
> This query returns ONLY ONE ROW (one column key), not the three I would expect from the three-column-key IN clause:
>
> cqlsh> SELECT e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars FROM internal_submission.Entity_Job WHERE e_entid = '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB' AND p_prop in ('urn:bby:pcm:job:ingest:content:complete:count','urn:bby:pcm:job:ingest:content:all:count','urn:bby:pcm:job:ingest:content:fail:count');
>
> This query does, however, return two rows for the requested two column keys:
>
> cqlsh> SELECT e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars FROM internal_submission.Entity_Job WHERE e_entid = '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB' AND p_prop in ('urn:bby:pcm:job:ingest:content:all:count','urn:bby:pcm:job:ingest:content:fail:count');
>
> cqlsh> describe table internal_submission.entity_job;
>
> CREATE TABLE entity_job (
>   e_entid text,
>   p_prop text,
>   describes text,
>   dndcondition text,
>   e_entlinks text,
>   e_entname text,
>   e_enttype text,
>   ingeststatus text,
>   ingeststatusdetail text,
>   p_flags text,
>   p_propid text,
>   p_proplinks text,
>   p_storage text,
>   p_subents text,
>   p_val text,
>   p_vallang text,
>   p_vallinks text,
>   p_valtype text,
>   p_valunit text,
>   p_vars text,
>   partnerid text,
>   referenceid text,
>   size int,
>   sourceip text,
>   submitdate bigint,
>   submitevent text,
>   userid text,
>   version text,
>   PRIMARY KEY (e_entid, p_prop)
> ) WITH
>   bloom_filter_fp_chance=0.010000 AND
>   caching='KEYS_ONLY' AND
>   comment='' AND
>   dclocal_read_repair_chance=0.000000 AND
>   gc_grace_seconds=864000 AND
>   index_interval=128 AND
>   read_repair_chance=0.100000 AND
>   replicate_on_write='true' AND
>   populate_io_cache_on_flush='false' AND
>   default_time_to_live=0 AND
>   speculative_retry='NONE' AND
>   memtable_flush_period_in_ms=0 AND
>   compaction={'class': 'SizeTieredCompactionStrategy'} AND
>   compression={'sstable_compression': 'LZ4Compressor'};
>
> CREATE INDEX internal_submission__JobDescribesIDX ON entity_job (describes);
> CREATE INDEX internal_submission__JobDNDConditionIDX ON entity_job (dndcondition);
> CREATE INDEX internal_submission__JobIngestStatusIDX ON entity_job (ingeststatus);
> CREATE INDEX internal_submission__JobIngestStatusDetailIDX ON entity_job (ingeststatusdetail);
> CREATE INDEX internal_submission__JobReferenceIDIDX ON entity_job (referenceid);
> CREATE INDEX internal_submission__JobUserIDX ON entity_job (userid);
> CREATE INDEX internal_submission__JobVersionIDX ON entity_job (version);
> -------------------------------
> My suspicion is that the three-column-key IN clause is translated (improperly or not) into a two-column-key range, with the assumption that the third column key is present in that range, but it isn't...
> -----------------------------------
> We have tried nodetool cache invalidation and a start/stop of Cassandra; those did NOT fix the problem. A table dump (COPY TO) and then a reload (COPY FROM) does fix the rows, but then more corruption creeps in.
> We are using the cassandra-jdbc driver, but I don't see anything wrong with the issued statements when I step through the Cassandra source code.
> With additional writes, it is possible that some rows get fixed; compaction or other jobs may repair this, but over the hours spent debugging, the failures have been consistent.

--
This message was sent by Atlassian JIRA
(v6.1#6144)
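The detection procedure from the Problem Encountered section (select all column keys for a row, then probe subsets with WHERE ... IN and compare) can be sketched as a small checker. This is an editorial illustration, not part of the original report; `run_in_query` is a hypothetical stand-in for the real cqlsh/driver call, modeled here as any callable that takes a list of requested keys and returns the keys actually found.

```python
# Sketch: flag WHERE ... IN subset queries that silently drop column keys,
# the symptom described in CASSANDRA-6137.
from itertools import combinations


def missing_keys(all_keys, run_in_query, subset_size=3):
    """For every subset of `all_keys` of the given size, run the IN query
    and record which requested keys it failed to return.

    Returns a dict mapping each problematic subset (a tuple) to the set of
    keys that went missing; an empty dict means the table looks consistent.
    """
    problems = {}
    for subset in combinations(sorted(all_keys), subset_size):
        found = set(run_in_query(list(subset)))
        missing = set(subset) - found
        if missing:
            problems[subset] = missing
    return problems
```

With a healthy backend (`run_in_query` echoes every requested key) the result is empty; with a backend that drops a key from multi-key IN queries, every subset containing that key is reported, mirroring the three-key query above that returned only one row.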