Return-Path:
X-Original-To: apmail-hbase-issues-archive@www.apache.org
Delivered-To: apmail-hbase-issues-archive@www.apache.org
Received: from mail.apache.org (hermes.apache.org [140.211.11.3]) by minotaur.apache.org (Postfix) with SMTP id A3D30EE10 for ; Sun, 27 Jan 2013 00:01:13 +0000 (UTC)
Received: (qmail 64249 invoked by uid 500); 27 Jan 2013 00:01:13 -0000
Delivered-To: apmail-hbase-issues-archive@hbase.apache.org
Received: (qmail 64178 invoked by uid 500); 27 Jan 2013 00:01:13 -0000
Mailing-List: contact issues-help@hbase.apache.org; run by ezmlm
Precedence: bulk
List-Help:
List-Unsubscribe:
List-Post:
List-Id:
Delivered-To: mailing list issues@hbase.apache.org
Received: (qmail 64100 invoked by uid 99); 27 Jan 2013 00:01:13 -0000
Received: from arcas.apache.org (HELO arcas.apache.org) (140.211.11.28) by apache.org (qpsmtpd/0.29) with ESMTP; Sun, 27 Jan 2013 00:01:13 +0000
Date: Sun, 27 Jan 2013 00:01:13 +0000 (UTC)
From: "Hudson (JIRA)"
To: issues@hbase.apache.org
Message-ID:
In-Reply-To:
References:
Subject: [jira] [Commented] (HBASE-7643) HFileArchiver.resolveAndArchive() race condition may lead to snapshot data loss
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
X-JIRA-FingerPrint: 30527f35849b9dde25b450d4833f0394

    [ https://issues.apache.org/jira/browse/HBASE-7643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13563691#comment-13563691 ]

Hudson commented on HBASE-7643:
-------------------------------

Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #377 (See [https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/377/])
HBASE-7643 HFileArchiver.resolveAndArchive() race condition may lead to snapshot data loss (Revision 1438972)

Result = FAILURE
mbertozzi :
Files :
* /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/backup/HFileArchiver.java
* /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/backup/TestHFileArchiving.java

>
> HFileArchiver.resolveAndArchive() race condition may lead to snapshot data loss
> -------------------------------------------------------------------------------
>
>                 Key: HBASE-7643
>                 URL: https://issues.apache.org/jira/browse/HBASE-7643
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: hbase-6055, 0.96.0
>            Reporter: Matteo Bertozzi
>            Assignee: Matteo Bertozzi
>            Priority: Blocker
>             Fix For: 0.96.0, 0.94.5
>
>         Attachments: HBASE-7653-p4-v0.patch, HBASE-7653-p4-v1.patch, HBASE-7653-p4-v2.patch, HBASE-7653-p4-v3.patch, HBASE-7653-p4-v4.patch, HBASE-7653-p4-v5.patch, HBASE-7653-p4-v6.patch, HBASE-7653-p4-v7.patch
>
>
> * The master has an hfile cleaner thread (responsible for cleaning the /hbase/.archive dir)
> ** /hbase/.archive/table/region/family/hfile
> ** if the table/region/family directory is empty, the cleaner removes it
> * The master can archive files (from another thread, e.g. DeleteTableHandler)
> * The region server can archive files (from another server/process, e.g. on compaction)
> The simplified file archiving code looks like this:
> {code}
> HFileArchiver.resolveAndArchive(...) {
>   // ensure that the archive dir exists
>   fs.mkdir(archiveDir);
>   // move the file to the archive
>   success = fs.rename(originalPath/fileName, archiveDir/fileName);
>   // if the rename failed, delete the file without archiving
>   if (!success) fs.delete(originalPath/fileName);
> }
> {code}
> Since there's no synchronization between HFileArchiver.resolveAndArchive() and the cleaner run (different process, thread, ...), you can end up moving a file into a directory that no longer exists:
> {code}
> fs.mkdir(archiveDir);
> // The HFileCleaner chore starts at this point,
> // and the archive directory that we just ensured to be present gets removed.
> // The rename below will then fail since the parent directory is missing.
> success = fs.rename(originalPath/fileName, archiveDir/fileName);
> {code}
> The bad thing about deleting the file without archiving is that if a snapshot, or a cloned table, relies on that file being present, you are losing data.
> Possible solutions:
> * Create a ZooKeeper lock, to notify the master ("Hey, I'm archiving something, wait a bit")
> * Add an RS -> Master call, letting the master remove files, to avoid this kind of situation
> * Avoid removing empty directories from the archive if the table exists or is not disabled
> * Add a try/catch retry around the fs.rename
> The last one, the easiest, looks like:
> {code}
> for (int i = 0; i < retries; ++i) {
>   // ensure the archive directory is present
>   fs.mkdir(archiveDir);
>   // ----> possible race <-----
>   // try to archive the file
>   success = fs.rename(originalPath/fileName, archiveDir/fileName);
>   if (success) break;
> }
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
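[Editor's note, not part of the original message] The retry-based fix quoted in the last {code} block above can be sketched as standalone Java against java.nio.file rather than Hadoop's FileSystem API. Everything here (the ArchiveRetryDemo class, archiveWithRetry, and the retry count) is a hypothetical illustration of the technique, not HBase's actual code: re-create the archive directory and retry the rename, so a concurrent cleaner deleting the empty directory cannot force the fallback of deleting the file without archiving it.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;

public class ArchiveRetryDemo {
    // Hypothetical sketch of the "retry around the rename" fix: if the
    // archive directory vanishes between mkdir and move (the race in the
    // issue), re-create it and try again instead of dropping the file.
    static boolean archiveWithRetry(Path source, Path archiveDir, int retries)
            throws IOException {
        for (int i = 0; i < retries; ++i) {
            // ensure the archive directory is present
            Files.createDirectories(archiveDir);
            try {
                // ----> a cleaner may remove archiveDir right here <-----
                Files.move(source, archiveDir.resolve(source.getFileName()));
                return true; // archived successfully
            } catch (NoSuchFileException parentVanished) {
                // the parent was deleted out from under us; loop and retry
            }
        }
        return false; // still failing after all retries; caller decides
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("hfile-archive-demo");
        Path hfile = Files.createFile(tmp.resolve("hfile1"));
        Path archive = tmp.resolve("archive");
        boolean ok = archiveWithRetry(hfile, archive, 3);
        System.out.println(ok && Files.exists(archive.resolve("hfile1")));
    }
}
```

Note the window marked "possible race" still exists; the retry loop only bounds how many times the race can bite, which is why the issue calls this the easiest fix rather than a complete one.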