From: "James Taylor (JIRA)"
To: dev@tephra.incubator.apache.org
Date: Thu, 7 Jul 2016 08:42:10 +0000 (UTC)
Subject: [jira] [Commented] (TEPHRA-35) Prune invalid transaction set once all data for a given invalid transaction has been dropped

    [ https://issues.apache.org/jira/browse/TEPHRA-35?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365820#comment-15365820 ]

James Taylor commented on TEPHRA-35:
------------------------------------

I still think putting the burden at the application level to either track which tables are involved in a transaction or wait until every single table has been major compacted is too high. Why can't the client send the table names over when a commit occurs? Then, only in the exceptional case of a client that cannot reach the transaction manager would all transactional tables need to be major compacted. Or maybe there is some other alternative? How does Omid deal with this?
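To make the suggestion concrete, here is a minimal sketch of a client that tracks which tables it writes to and ships those names with the commit request. The names (TrackingTransactionClient, recordWrite, sendCommit) are invented for illustration and are not part of the actual Tephra client API:

    import java.util.HashSet;
    import java.util.Set;

    // Hypothetical client-side helper; none of these names come from Tephra.
    public class TrackingTransactionClient {

      // Tables written to in the current transaction, recorded on the client.
      private final Set<String> changedTables = new HashSet<>();

      public void recordWrite(String tableName) {
        changedTables.add(tableName);
      }

      // On commit, send the table names along with the commit request so the
      // transaction manager knows which tables may hold data for this transaction.
      public void commit(long transactionId) {
        sendCommit(transactionId, new HashSet<>(changedTables));
        changedTables.clear();
      }

      private void sendCommit(long transactionId, Set<String> tables) {
        // Placeholder for whatever RPC the client would actually issue to the manager.
        System.out.println("commit " + transactionId + " touching " + tables);
      }
    }

With that information, the transaction manager could prune an invalidated transaction as soon as the tables it actually touched have been major compacted, rather than waiting on every transactional table.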
> Prune invalid transaction set once all data for a given invalid transaction has been dropped
> --------------------------------------------------------------------------------------------
>
>                 Key: TEPHRA-35
>                 URL: https://issues.apache.org/jira/browse/TEPHRA-35
>             Project: Tephra
>          Issue Type: New Feature
>            Reporter: Gary Helmling
>            Assignee: Poorna Chandra
>            Priority: Blocker
>         Attachments: ApacheTephraAutomaticInvalidListPruning-v2.pdf
>
>
> In addition to dropping the data from invalid transactions we need to be able to prune the invalid set of any transactions where data cleanup has been completely performed. Without this, the invalid set will grow indefinitely and become a greater and greater cost to in-progress transactions over time.
> To do this correctly, the TransactionDataJanitor coprocessor will need to maintain some bookkeeping for the transaction data that it removes, so that the transaction manager can reason about when all of a given transaction's data has been removed. Only at this point can the transaction manager safely drop the transaction ID from the invalid set.
> One approach would be for the TransactionDataJanitor to update a table marking when a major compaction was performed on a region and what transaction IDs were filtered out. Once all regions in a table containing the transaction data have been compacted, we can remove the filtered out transaction IDs from the invalid set. However, this will need to cope with changing region names due to splits, etc.
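To illustrate the bookkeeping described in the last paragraph of the quoted issue, here is a minimal in-memory sketch. The names (PruneBookkeeping, recordCompaction, canPrune) and the data layout are invented for illustration; the real implementation would persist this state in a table, as the issue describes, and would also have to handle region splits and renames:

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    // Hypothetical model of the compaction bookkeeping; not from the Tephra code base.
    public class PruneBookkeeping {

      // For each region, the invalid transaction IDs that a major compaction
      // of that region has filtered out.
      private final Map<String, Set<Long>> filteredByRegion = new HashMap<>();

      // Called by the janitor coprocessor after a major compaction of a region.
      public void recordCompaction(String regionName, Set<Long> filteredTxIds) {
        filteredByRegion
            .computeIfAbsent(regionName, r -> new HashSet<>())
            .addAll(filteredTxIds);
      }

      // A transaction can be pruned once every region that may contain its data
      // has reported filtering it out. 'regions' is the current set of regions
      // for all tables the transaction wrote to; keeping that set accurate across
      // splits is the hard part noted in the issue.
      public boolean canPrune(long txId, Set<String> regions) {
        for (String region : regions) {
          Set<Long> filtered = filteredByRegion.get(region);
          if (filtered == null || !filtered.contains(txId)) {
            return false;
          }
        }
        return true;
      }
    }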