From: "Christian Esken (JIRA)"
To: commits@cassandra.apache.org
Date: Thu, 19 Jan 2017 09:36:26 +0000 (UTC)
Subject: [jira] [Updated] (CASSANDRA-13005) Cassandra TWCS is not removing fully expired tables

     [ https://issues.apache.org/jira/browse/CASSANDRA-13005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Christian Esken updated CASSANDRA-13005:
----------------------------------------

    Description:

I have a table where all columns are stored with a TTL of at most 4 hours. Usually TWCS compaction properly removes expired data via tombstone compaction and also removes fully expired SSTables. The number of SSTables has been nearly constant for weeks. Good.
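For context, a minimal sketch of the kind of schema this describes, assuming shell access to a node with cqlsh. The keyspace and table names (ks, events), the columns, and the one-hour compaction window are illustrative assumptions; only the 4-hour TTL and the use of TimeWindowCompactionStrategy come from this report.

{code}
# Hypothetical schema matching the described setup: every write expires
# within at most 4 hours, and TWCS groups SSTables into hourly windows
# so that whole windows can be dropped once fully expired.
cqlsh <<'CQL'
CREATE TABLE ks.events (
    id      uuid,
    ts      timestamp,
    payload text,
    PRIMARY KEY (id, ts)
) WITH default_time_to_live = 14400  -- 4 hours, in seconds
  AND compaction = {
      'class': 'TimeWindowCompactionStrategy',
      'compaction_window_unit': 'HOURS',
      'compaction_window_size': '1'
  };
CQL
{code}

With such a setup, an SSTable in a fully expired time window should eventually be deleted outright rather than rewritten, which is the behavior that stopped working here.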
The problem: suddenly, TWCS no longer removes old SSTables. They are being recreated frequently (judging from the file creation timestamps), and the number of SSTables keeps growing. Analysis and actions taken so far (see the command sketch after the issue details below):

- sstablemetadata shows strange data, as if the SSTable were completely empty.
- sstabledump throws an exception when run on such an SSTable.
- Even triggering a manual major compaction does not remove the old SSTables. To be more precise: they are recreated with a new id and timestamp (I am not sure whether they are identical, as I cannot inspect their content due to the sstabledump crash).

{color:blue}edit 2017-01-19: This ticket may be obsolete. See the later comments for more information.{color}

> Cassandra TWCS is not removing fully expired tables
> ---------------------------------------------------
>
>                 Key: CASSANDRA-13005
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-13005
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Compaction
>        Environment: Cassandra 3.0.9
>                     Java HotSpot(TM) 64-Bit Server VM version 25.112-b15 (Java version 1.8.0_112-b15)
>                     Linux 3.16
>            Reporter: Christian Esken
>            Priority: Minor
>              Labels: twcs
>       Attachments: sstablemetadata-empty-type-that-is-3GB.txt
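The diagnostic steps listed in the description could be reproduced with something like the following, assuming shell access to the node. The data directory path, keyspace/table names, and SSTable generation are placeholders, not taken from the ticket:

{code}
# Inspect the metadata of a suspect SSTable (the tool ships with
# Cassandra 3.0; the path is a placeholder for an affected Data.db file).
sstablemetadata /var/lib/cassandra/data/ks/events-*/mb-1234-big-Data.db

# Dump the SSTable contents as JSON; in the case reported here this
# throws an exception instead of printing rows.
sstabledump /var/lib/cassandra/data/ks/events-*/mb-1234-big-Data.db

# Trigger a manual major compaction of the table; in the case reported
# here, old SSTables reappear afterwards with a new id and timestamp.
nodetool compact ks events
{code}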