hadoop-hdfs-issues mailing list archives

From "Chris Nauroth (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-4423) Checkpoint exception causes fatal damage to fsimage.
Date Tue, 29 Jan 2013 23:11:13 GMT

    [ https://issues.apache.org/jira/browse/HDFS-4423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565931#comment-13565931 ]

Chris Nauroth commented on HDFS-4423:
-------------------------------------

Here is the output from test-patch.  Regarding the Findbugs warnings, this is the exact same
output I get from applying a no-op patch (a 0-byte file as input to test-patch.sh) to
branch-1.  There are no new warnings related to this patch.  Perhaps we need to investigate
whether a prior patch accidentally introduced new warnings.

     [exec] -1 overall.  
     [exec] 
     [exec]     +1 @author.  The patch does not contain any @author tags.
     [exec] 
     [exec]     +1 tests included.  The patch appears to include 4 new or modified tests.
     [exec] 
     [exec]     +1 javadoc.  The javadoc tool did not generate any warning messages.
     [exec] 
     [exec]     +1 javac.  The applied patch does not increase the total number of javac compiler
warnings.
     [exec] 
     [exec]     -1 findbugs.  The patch appears to introduce 12 new Findbugs (version 1.3.9)
warnings.

                
> Checkpoint exception causes fatal damage to fsimage.
> ----------------------------------------------------
>
>                 Key: HDFS-4423
>                 URL: https://issues.apache.org/jira/browse/HDFS-4423
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 1.0.4, 1.1.1
>         Environment: CentOS 6.2
>            Reporter: ChenFolin
>            Assignee: Chris Nauroth
>            Priority: Blocker
>         Attachments: HDFS-4423-branch-1.1.patch
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> The affected class is org.apache.hadoop.hdfs.server.namenode.FSImage.java
> {code}
> boolean loadFSImage(MetaRecoveryContext recovery) throws IOException {
>   ...
>   latestNameSD.read();
>   needToSave |= loadFSImage(getImageFile(latestNameSD, NameNodeFile.IMAGE));
>   LOG.info("Image file of size " + imageSize + " loaded in "
>       + (FSNamesystem.now() - startTime)/1000 + " seconds.");
>
>   // Load latest edits
>   if (latestNameCheckpointTime > latestEditsCheckpointTime)
>     // the image is already current, discard edits
>     needToSave |= true;
>   else // latestNameCheckpointTime == latestEditsCheckpointTime
>     needToSave |= (loadFSEdits(latestEditsSD, recovery) > 0);
>
>   return needToSave;
> }
> {code}
> In the normal checkpoint flow, the value of latestNameCheckpointTime equals the value of latestEditsCheckpointTime, so the "else" branch executes.
> The problem arises when latestNameCheckpointTime > latestEditsCheckpointTime:
> The SecondaryNameNode starts a checkpoint,
> ...
> NameNode: rollFSImage; the NameNode shuts down after writing latestNameCheckpointTime but before writing latestEditsCheckpointTime.
> On the next NameNode start: because latestNameCheckpointTime > latestEditsCheckpointTime, needToSave is set to true, but "rootDir"'s nsCount (the cluster's file count, normally updated during loadFSEdits via "FSNamesystem.getFSNamesystem().dir.updateCountForINodeWithQuota()") is never refreshed, so "saveNamespace" writes the file count to the fsimage with the default value of 1.
> The next loadFSImage will then fail.
> Perhaps this will work:
> {code}
> boolean loadFSImage(MetaRecoveryContext recovery) throws IOException {
>   ...
>   latestNameSD.read();
>   needToSave |= loadFSImage(getImageFile(latestNameSD, NameNodeFile.IMAGE));
>   LOG.info("Image file of size " + imageSize + " loaded in "
>       + (FSNamesystem.now() - startTime)/1000 + " seconds.");
>
>   // Load latest edits
>   if (latestNameCheckpointTime > latestEditsCheckpointTime) {
>     // the image is already current, discard edits
>     needToSave |= true;
>     FSNamesystem.getFSNamesystem().dir.updateCountForINodeWithQuota();
>   } else { // latestNameCheckpointTime == latestEditsCheckpointTime
>     needToSave |= (loadFSEdits(latestEditsSD, recovery) > 0);
>   }
>
>   return needToSave;
> }
> {code}
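
The failure mode described in the report can be sketched as a standalone simulation. The class, field, and method names below are hypothetical placeholders standing in for the FSImage/FSDirectory behavior, not actual HDFS code; the point is only that when the image checkpoint time is newer than the edits checkpoint time, the edits-replay step that refreshes the namespace file count is skipped, so a later save persists the stale default.

```java
// Minimal sketch (hypothetical names, not HDFS code) of the checkpoint-skew bug:
// skipping the edits replay leaves the "directory" file count at its default,
// which a subsequent saveNamespace-like step would then persist.
public class CheckpointSkewSketch {
    static final int DEFAULT_NS_COUNT = 1; // default count, as in the report
    int nsCount = DEFAULT_NS_COUNT;        // stands in for rootDir's nsCount

    // Stands in for loadFSEdits(): replaying edits recomputes the count.
    void replayEdits(int realFileCount) {
        nsCount = realFileCount;
    }

    // Stands in for loadFSImage()'s branch on the two checkpoint times.
    // Returns the count that would be written out by the next save.
    int load(long nameTime, long editsTime, int realFileCount) {
        if (nameTime > editsTime) {
            // image "already current": edits discarded, count never refreshed
        } else {
            replayEdits(realFileCount);
        }
        return nsCount;
    }

    public static void main(String[] args) {
        // Normal flow: equal checkpoint times, edits replayed, real count kept.
        CheckpointSkewSketch normal = new CheckpointSkewSketch();
        System.out.println(normal.load(100, 100, 5000)); // prints 5000

        // Skewed flow after a mid-checkpoint crash: stale default persisted.
        CheckpointSkewSketch skewed = new CheckpointSkewSketch();
        System.out.println(skewed.load(101, 100, 5000)); // prints 1
    }
}
```

The proposed patch corresponds to calling the count-refresh step inside the skipped branch, which is exactly what the added updateCountForINodeWithQuota() call does.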

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
