Date: Wed, 16 Apr 2014 16:00:30 +0000 (UTC)
From: "Jean-Daniel Cryans (JIRA)"
To: issues@hbase.apache.org
Subject: [jira] [Commented] (HBASE-10958) [dataloss] Bulk loading with seqids can prevent some log entries from being replayed

    [ https://issues.apache.org/jira/browse/HBASE-10958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13971584#comment-13971584 ]

Jean-Daniel Cryans commented on HBASE-10958:
--------------------------------------------

bq. we'd like ADMIN to be granted sparingly to admins or delegates

+1

bq. IIRC enable and disable are ADMIN actions also, since disabling or enabling a 10000 region table has consequences.

This is what I see in the code in trunk:

{code}
requirePermission("preBulkLoadHFile", ....getTableDesc().getTableName(), el.getFirst(), null, Permission.Action.WRITE);
requirePermission("enableTable", tableName, null, null, Action.ADMIN, Action.CREATE);
requirePermission("disableTable", tableName, null, null, Action.ADMIN, Action.CREATE);
requirePermission("compact", getTableName(e.getEnvironment()), null, null, Action.ADMIN);
requirePermission("flush", getTableName(e.getEnvironment()), null, null, Action.ADMIN);
{code}

IMO flush should require the same permissions as disableTable, or lower. So here's the list of changes I believe are needed:

- preBulkLoadHFile goes from WRITE to CREATE (more in line with what's actually needed to bulk load, given the code I posted yesterday)
- compact/flush go from ADMIN to ADMIN or CREATE (sketched below)

This should not have an impact on current users. If we can agree on the changes, I'll open a new jira blocking this one.
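To make the proposal concrete, the affected checks would end up looking roughly like this. This is only a sketch of the two bullets above, not a patch; it mirrors the trunk lines already quoted, changing only the Action arguments, and keeps the "...." elision from that snippet:

{code}
// preBulkLoadHFile: WRITE -> CREATE
requirePermission("preBulkLoadHFile", ....getTableDesc().getTableName(), el.getFirst(), null, Permission.Action.CREATE);
// compact/flush: ADMIN -> ADMIN or CREATE (holding either action passes the check)
requirePermission("compact", getTableName(e.getEnvironment()), null, null, Action.ADMIN, Action.CREATE);
requirePermission("flush", getTableName(e.getEnvironment()), null, null, Action.ADMIN, Action.CREATE);
{code}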
> [dataloss] Bulk loading with seqids can prevent some log entries from being replayed
> ------------------------------------------------------------------------------------
>
>                 Key: HBASE-10958
>                 URL: https://issues.apache.org/jira/browse/HBASE-10958
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 0.96.2, 0.98.1, 0.94.18
>            Reporter: Jean-Daniel Cryans
>            Assignee: Jean-Daniel Cryans
>            Priority: Blocker
>             Fix For: 0.99.0, 0.94.19, 0.98.2, 0.96.3
>
>         Attachments: HBASE-10958-less-intrusive-hack-0.96.patch, HBASE-10958-quick-hack-0.96.patch, HBASE-10958-v2.patch, HBASE-10958.patch
>
>
> We found an issue with bulk loads causing data loss when assigning sequence ids (HBASE-6630) that is triggered when replaying recovered edits. We're nicknaming this issue *Blindspot*.
> The problem is that the sequence id given to a bulk loaded file is higher than those of the edits sitting in the region's memstore. When replaying recovered edits, an edit is skipped if its sequence id is _lower than the highest sequence id_ found in the store files; in other words, edits with a sequence id lower than the highest one in the store files *should* already have been flushed. This doesn't hold for bulk loaded files, since we now have an HFile with a sequence id higher than unflushed edits.
> The log recovery code takes this into account by simply skipping bulk loaded files, but that "bulk loaded" status is *lost* on compaction. Edits in the logs with a sequence id lower than that of a compacted bulk loaded file land in a blind spot and are skipped during replay.
> Here's the easiest way to recreate the issue (a code sketch follows below):
> - Create an empty table.
> - Put one row in it (let's say it gets seqid 1).
> - Bulk load one file (it gets seqid 2). I used ImportTsv and set hbase.mapreduce.bulkload.assign.sequenceNumbers.
> - Bulk load a second file the same way (it gets seqid 3).
> - Major compact the table (the new file has seqid 3 and isn't considered bulk loaded).
> - Kill the region server that holds the table's region.
> - Scan the table once the region is made available again. The first row, at seqid 1, will be missing since the HFile with seqid 3 makes us believe that everything before it was flushed.
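For reference, here is a rough client-side sketch of those repro steps against a 0.96-era API. The class name, table and family names, and HDFS paths are made up for illustration, and the two bulk-load HFile directories are assumed to have been produced beforehand (e.g. via ImportTsv with importtsv.bulk.output); step 6 happens outside the program:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;
import org.apache.hadoop.hbase.util.Bytes;

public class BlindspotRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Have the bulk load assign sequence ids to the loaded HFiles (HBASE-6630).
    conf.setBoolean("hbase.mapreduce.bulkload.assign.sequenceNumbers", true);

    // 1. Create an empty table.
    HBaseAdmin admin = new HBaseAdmin(conf);
    HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("t1"));
    desc.addFamily(new HColumnDescriptor("f"));
    admin.createTable(desc);

    // 2. Put one row; this edit gets seqid 1 and stays in the memstore.
    HTable table = new HTable(conf, "t1");
    Put put = new Put(Bytes.toBytes("row1"));
    put.add(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("v1"));
    table.put(put);

    // 3-4. Bulk load two pre-generated HFile directories; they get seqids 2 and 3.
    LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
    loader.doBulkLoad(new Path("/tmp/bulk1"), table);
    loader.doBulkLoad(new Path("/tmp/bulk2"), table);

    // 5. Major compact: the merged HFile keeps seqid 3 but is no longer
    //    marked as bulk loaded. (Compaction is asynchronous; wait for it
    //    to finish before the next step.)
    admin.majorCompact("t1");

    // 6. Kill -9 the region server hosting t1's region and wait for the
    //    region to be reassigned (done outside this program).

    // 7. Scan: log replay skips every edit with a seqid at or below 3,
    //    so row1 no longer comes back.
    for (Result result : table.getScanner(new Scan())) {
      System.out.println(result);
    }
  }
}
{code}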