X-Original-To: apmail-cassandra-commits-archive@www.apache.org
Delivered-To: apmail-cassandra-commits-archive@www.apache.org
Mailing-List: contact commits-help@cassandra.apache.org; run by ezmlm
Reply-To: dev@cassandra.apache.org
Delivered-To: mailing list commits@cassandra.apache.org
Date: Wed, 2 Sep 2015 21:44:45 +0000 (UTC)
From: "Philip Thompson (JIRA)"
To: commits@cassandra.apache.org
Subject: [jira] [Updated] (CASSANDRA-10253) Incremental repairs not working as expected with DTCS

[ https://issues.apache.org/jira/browse/CASSANDRA-10253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Philip Thompson updated CASSANDRA-10253:
----------------------------------------
    Reproduced In: 2.1.8
    Fix Version/s: 2.1.x

> Incremental repairs not working as expected with DTCS
> -----------------------------------------------------
>
>           Key: CASSANDRA-10253
>           URL: https://issues.apache.org/jira/browse/CASSANDRA-10253
>       Project: Cassandra
>    Issue Type: Bug
>    Components: Core
>   Environment: Pre-prod
>      Reporter: vijay
>       Fix For: 2.1.x
>
> Hi,
> We are ingesting 6 million records every 15 minutes into one DTCS table and are relying
> on Cassandra to purge the data. The table schema is given below.
>
> Issue 1: We expected that an sstable created on day d1 would not be compacted after d1, but we are not seeing this; instead, I see some data being purged at random intervals.
>
> Issue 2: When we run an incremental repair with "nodetool repair keyspace table -inc -pr", each sstable splits into multiple smaller sstables, increasing total storage. This behavior is the same when running repairs on any node, any number of times.
>
> There are also mutation drops in the cluster.
>
> Table:
> {code}
> CREATE TABLE TableA (
>     F1 text,
>     F2 int,
>     createts bigint,
>     stats blob,
>     PRIMARY KEY ((F1, F2), createts)
> ) WITH CLUSTERING ORDER BY (createts DESC)
>     AND bloom_filter_fp_chance = 0.01
>     AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
>     AND comment = ''
>     AND compaction = {'min_threshold': '12', 'max_sstable_age_days': '1', 'base_time_seconds': '50', 'class': 'org.apache.cassandra.db.compaction.DateTieredCompactionStrategy'}
>     AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'}
>     AND dclocal_read_repair_chance = 0.0
>     AND default_time_to_live = 93600
>     AND gc_grace_seconds = 3600
>     AND max_index_interval = 2048
>     AND memtable_flush_period_in_ms = 0
>     AND min_index_interval = 128
>     AND read_repair_chance = 0.0
>     AND speculative_retry = '99.0PERCENTILE';
> {code}
> Thanks

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
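A rough sketch of the timing implied by the schema above (plain Python, not Cassandra source; the helper names are illustrative, not Cassandra APIs): with default_time_to_live = 93600 and gc_grace_seconds = 3600, a cell written at time t cannot be wholly dropped before t + 97200 seconds, and with max_sstable_age_days = 1 DTCS stops considering an sstable for compaction once its oldest data is more than a day old. That combination is consistent with purges appearing at irregular intervals rather than on a clean daily boundary.

```python
# Sketch of two timing checks implied by the reported schema.
# Constants are taken from the CREATE TABLE statement in this ticket;
# the function names are illustrative, not Cassandra internals.

TTL_SECONDS = 93600                       # default_time_to_live
GC_GRACE_SECONDS = 3600                   # gc_grace_seconds
MAX_SSTABLE_AGE_SECONDS = 1 * 24 * 3600   # max_sstable_age_days: '1'

def is_compaction_candidate(sstable_min_timestamp: int, now: int) -> bool:
    """DTCS excludes an sstable from further compaction once its oldest
    data is older than max_sstable_age_days."""
    return now - sstable_min_timestamp <= MAX_SSTABLE_AGE_SECONDS

def fully_droppable_at(sstable_max_timestamp: int) -> int:
    """An sstable whose newest cell was written at sstable_max_timestamp
    can be dropped wholesale once every cell's TTL and gc_grace have
    both elapsed."""
    return sstable_max_timestamp + TTL_SECONDS + GC_GRACE_SECONDS

now = 1_000_000
old = now - 2 * 24 * 3600  # an sstable whose data is two days old

print(is_compaction_candidate(old, now))  # False: past the one-day age cutoff
print(fully_droppable_at(old) - now)      # -75600: already droppable
```

Under these assumptions, an sstable frozen by the age cutoff on day d1 is never recompacted, but it can still disappear later when its TTL + gc_grace window closes, which would look like purging "at random intervals" from the outside.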