hbase-issues mailing list archives

From "Jonathan Hsieh (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-7643) HFileArchiver.resolveAndArchive() race condition and snapshot data loss
Date Wed, 23 Jan 2013 12:28:13 GMT

    [ https://issues.apache.org/jira/browse/HBASE-7643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13560637#comment-13560637 ]

Jonathan Hsieh commented on HBASE-7643:
---------------------------------------

Looks good.  Just have a few comment suggestions.

Mention cleaner dir deletion race?

{code}
+      if (i > 0) {
+        // Ensure that the archive directory exists
+        // (we're in a retry loop, so don't worry too much about the exception)
+        try {
{code}
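For example, the comment could call out the race explicitly (suggested wording only):

{code}
+      if (i > 0) {
+        // Ensure that the archive directory exists.
+        // The cleaner chore may have removed the (empty) archive directory since the
+        // last attempt, so re-create it here; we're in a retry loop, so don't worry
+        // too much about the exception.
+        try {
{code}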

Is there corresponding javadoc that needs to be changed?

{code}
  public boolean moveAndClose(Path dest) throws IOException {
       this.close();
       Path p = this.getPath();
-      return !fs.rename(p, dest);
+      return fs.rename(p, dest);
     }
{code}
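If the existing javadoc documents the old, inverted return value, it could be updated along these lines (hypothetical wording, adjust to whatever the method actually documents):

{code}
  /**
   * Close this file and move it to the given destination.
   * @return true if the rename to {@code dest} succeeded, false otherwise
   */
  public boolean moveAndClose(Path dest) throws IOException {
{code}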

nit: add a comment about the 1 ms sleep between cleaner runs.
{code}
+    Stoppable stoppable = new StoppableImplementation();
+    HFileCleaner cleaner = new HFileCleaner(1, stoppable, conf, fs, archiveDir);
+
{code}
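e.g. something like this, assuming the first constructor argument is the chore period in milliseconds:

{code}
+    Stoppable stoppable = new StoppableImplementation();
+    // Run the cleaner chore with a 1 ms period so it races with the archiver
+    // as often as possible during the test.
+    HFileCleaner cleaner = new HFileCleaner(1, stoppable, conf, fs, archiveDir);
+
{code}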

I buy this, but I needed to think a bit to figure out why it is correct.  Add a comment? (The
invariant is that the file is in one place or the other, and if it fails in one we check the
other.)
{code}
+      try {
+        HFileArchiver.archiveRegion(conf, fs, rootDir, sourceRegionDir.getParent(), sourceRegionDir);
+        assertTrue(fs.exists(archiveFile));
+        assertFalse(fs.exists(sourceFile));
+      } catch (IOException e) {
+        assertFalse(fs.exists(archiveFile));
+        assertTrue(fs.exists(sourceFile));
+      }
{code}
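Something along these lines, for example:

{code}
+      try {
+        // Invariant: the file ends up in exactly one of the two places. If the archive
+        // succeeds it must be in the archive dir and gone from the source; if it throws,
+        // it must still be in the source and not in the archive.
+        HFileArchiver.archiveRegion(conf, fs, rootDir, sourceRegionDir.getParent(), sourceRegionDir);
+        assertTrue(fs.exists(archiveFile));
+        assertFalse(fs.exists(sourceFile));
+      } catch (IOException e) {
+        assertFalse(fs.exists(archiveFile));
+        assertTrue(fs.exists(sourceFile));
+      }
{code}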
                
> HFileArchiver.resolveAndArchive() race condition and snapshot data loss
> -----------------------------------------------------------------------
>
>                 Key: HBASE-7643
>                 URL: https://issues.apache.org/jira/browse/HBASE-7643
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: hbase-6055, 0.96.0
>            Reporter: Matteo Bertozzi
>            Assignee: Matteo Bertozzi
>            Priority: Blocker
>             Fix For: 0.96.0, 0.94.5
>
>         Attachments: HBASE-7653-p4-v0.patch, HBASE-7653-p4-v1.patch, HBASE-7653-p4-v2.patch, HBASE-7653-p4-v3.patch
>
>
>  * The master has an HFile cleaner thread (responsible for cleaning the /hbase/.archive dir)
>  ** /hbase/.archive/table/region/family/hfile
>  ** if a table/region/family directory is empty, the cleaner removes it
>  * The master can archive files (from another thread, e.g. DeleteTableHandler)
>  * The region can archive files (from another server/process, e.g. compaction)
> The simplified file archiving code looks like this:
> {code}
> HFileArchiver.resolveAndArchive(...) {
>   // ensure that the archive dir exists
>   fs.mkdir(archiveDir);
>   // move the file to the archive
>   success = fs.rename(originalPath/fileName, archiveDir/fileName)
>   // if the rename failed, delete the file without archiving
>   if (!success) fs.delete(originalPath/fileName);
> }
> {code}
> Since there's no synchronization between HFileArchiver.resolveAndArchive() and the cleaner run (different process, thread, ...), you can end up in a situation where you are moving something into a directory that doesn't exist.
> {code}
> fs.mkdir(archiveDir);
> // HFileCleaner chore starts at this point
> // and the archive directory that we just ensured was present gets removed.
> // The rename at this point will fail since the parent directory is missing.
> success = fs.rename(originalPath/fileName, archiveDir/fileName)
> {code}
> The bad thing about deleting the file without archiving is that if you have a snapshot that relies on the file being present, or a cloned table that relies on that file, you're losing data.
> Possible solutions:
>  * Create a ZooKeeper lock to notify the master ("Hey, I'm archiving something, wait a bit")
>  * Add an RS -> Master call to let the master remove files and avoid this kind of situation
>  * Avoid removing empty directories from the archive if the table exists or is not disabled
>  * Add a try/catch around the fs.rename
> The last one, the easiest one, looks like:
> {code}
> for (int i = 0; i < retries; ++i) {
>   // ensure the archive directory is present
>   fs.mkdir(archiveDir);
>   // ----> possible race <-----
>   // try to archive file
>   success = fs.rename(originalPath/fileName, archiveDir/fileName);
>   if (success) break;
> }
> {code}

