hadoop-hdfs-issues mailing list archives

From "Yongjun Zhang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-9406) FSImage corruption after taking snapshot
Date Thu, 28 Jan 2016 08:06:39 GMT

    [ https://issues.apache.org/jira/browse/HDFS-9406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15120981#comment-15120981 ]

Yongjun Zhang commented on HDFS-9406:

Hi Guys,

I just uploaded a prototype patch to address this issue and the one reported in HDFS-9697.
I will add test code soon.

Thanks again Stanislav for the test data. The sequence of events I found in the data that
triggers the issue is roughly:

# snapshot s0 is taken
# file A is created at dir X
# file A is moved from dir X to dir Y
# snapshot s1 is taken
# Y is deleted with trash enabled, thus Y is moved to trash, which is not a snapshottable dir
# snapshot s2 is taken
# Y is deleted when cleaning trash
# delete snapshot s1, which is the last snapshot that has file A (A is already deleted
from the current state)
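The sequence above can be sketched as a toy model that only tracks which views (the current filesystem state plus each snapshot) can still reach file A. This is a simplification for illustration, not Hadoop code:

```python
# Toy model of the reproduction sequence: track which "views"
# (current state + snapshots) can still reach file A.
# This illustrates the bookkeeping only; it is not HDFS code.

views_with_A = set()

# 1. snapshot s0 taken: A does not exist yet, so s0 never sees it.
# 2. file A created at dir X: the current state now has A.
views_with_A.add("current")
# 3. A moved from X to Y: still reachable from the current state.
# 4. snapshot s1 taken: s1 captures A (under Y).
views_with_A.add("s1")
# 5. Y deleted with trash enabled: Y moves to a non-snapshottable
#    trash dir, so the current snapshottable tree loses A,
#    but s1 still references it.
views_with_A.discard("current")
# 6. snapshot s2 taken: A is already gone, so s2 never sees it.
# 7. trash emptied: the trash copy of Y (and A) is destroyed.
# 8. delete snapshot s1: this was the LAST view holding A, so all of
#    A's metadata must now be fully cleaned up -- the step where the
#    current implementation leaves stale entries behind.
views_with_A.discard("s1")

assert views_with_A == set()  # nothing should reference A any more
```

After step 8, no view references A, so every trace of A (snapshot diff entries, children-list entries) must be removed; that is exactly the cleanup that goes wrong.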

Unfortunately I did not manage to create a small testcase that reproduces the symptom, so I
used Stanislav's data to see the problem and verify the fix.

The issue I found is that, when deleting a snapshot that is the last one containing a given
INode, the current implementation fails to clean up:

# the create list of the snapshot diff in the snapshot prior to the snapshot to be deleted
# the parent INodeDirectory's children list.
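As a toy illustration of why such a stale entry matters: if a snapshot diff's created list still holds the id of a fully deleted INode, resolving that id at image-load time yields nothing, and re-linking the child to its parent fails, analogous to the NPE in the report below. This is a simplified model, not the actual loader code:

```python
# Toy illustration: a snapshot diff's "created list" holds INode ids.
# If the id of a fully deleted INode is never removed, resolving it
# at image-load time yields None, and re-adding None as a child fails
# (the analogue of the NullPointerException in the report below).

inode_table = {1: "dirY", 2: "fileA"}
created_list = [1, 2]      # snapshot diff lists both children

del inode_table[2]         # fileA is fully deleted...
# ...but the stale entry 2 is left in created_list (the bug)

def add_to_parent(children, inode):
    if inode is None:
        raise RuntimeError("NPE analogue: child failed to resolve")
    children.append(inode)

children, load_failed = [], False
for inode_id in created_list:
    try:
        add_to_parent(children, inode_table.get(inode_id))
    except RuntimeError:
        load_failed = True  # the real image load aborts here

assert load_failed  # the stale entry 2 breaks the load
```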

The prototype performs two condition checks and does the cleanup accordingly:

# check whether a child to be removed is a reference with a reference count of 1; if so,
clean it up
# check whether a child of the current INodeDirectory is in {{removedINodes}}; if so, remove it
from the children list of the current INodeDirectory. This part needs some optimization, since
{{removedINodes}} may be a long list of INodes that are not children of the current INode.
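A minimal sketch of the two checks, using a toy node model. The names here (`Node`, `ref_count`, `cleanup_children`) are invented for illustration and do not match Hadoop's actual INode/INodeReference APIs; building a set before scanning the children list stands in for the optimization mentioned above:

```python
# Toy model of the two cleanup checks described above; class and
# field names are invented and do not match Hadoop's real classes.

class Node:
    def __init__(self, name, ref_count=0, is_reference=False):
        self.name = name
        self.ref_count = ref_count
        self.is_reference = is_reference

def cleanup_children(children, removed_inodes):
    """Return the children list with stale entries dropped.

    Check 1: a child that is a reference with reference count 1 is
    only kept alive by the snapshot being deleted, so it goes.
    Check 2: a child listed in removed_inodes goes. Building a set
    first avoids rescanning a potentially long removed_inodes list
    for every child (the optimization noted above).
    """
    removed = set(id(n) for n in removed_inodes)
    kept = []
    for child in children:
        if child.is_reference and child.ref_count == 1:
            continue  # check 1: last reference dies with the snapshot
        if id(child) in removed:
            continue  # check 2: already marked as removed
        kept.append(child)
    return kept
```

For example, given children `a` (a reference with count 1), `b`, and `c`, where `c` is in the removed list, only `b` survives the cleanup.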

Though I'm trying to solve these two jiras together here, I will post a testcase scenario
I found that can reproduce HDFS-9697 on that jira.

Hi [~jingzhao] and [~szetszwo],

Would you please help review this prototype patch and share your thoughts?

Thanks a lot.

> FSImage corruption after taking snapshot
> ----------------------------------------
>                 Key: HDFS-9406
>                 URL: https://issues.apache.org/jira/browse/HDFS-9406
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.6.0
>         Environment: CentOS 6 amd64, CDH 5.4.4-1
> 2xCPU: Intel(R) Xeon(R) CPU E5-2640 v3
> Memory: 32GB
> Namenode blocks: ~700_000 blocks, no HA setup
>            Reporter: Stanislav Antic
>            Assignee: Yongjun Zhang
>         Attachments: HDFS-9406.001.patch
> FSImage corruption happened after HDFS snapshots were taken. Cluster was not used
> at that time.
> When namenode restarts it reported NULL pointer exception:
> {code}
> 15/11/07 10:01:15 INFO namenode.FileJournalManager: Recovering unfinalized segments in
> 15/11/07 10:01:15 INFO namenode.FSImage: No edit log streams selected.
> 15/11/07 10:01:18 INFO namenode.FSImageFormatPBINode: Loading 1370277 INodes.
> 15/11/07 10:01:27 ERROR namenode.NameNode: Failed to start namenode.
> java.lang.NullPointerException
>         at org.apache.hadoop.hdfs.server.namenode.INodeDirectory.addChild(INodeDirectory.java:531)
>         at org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.addToParent(FSImageFormatPBINode.java:252)
>         at org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.loadINodeDirectorySection(FSImageFormatPBINode.java:202)
>         at org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.loadInternal(FSImageFormatProtobuf.java:261)
>         at org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.load(FSImageFormatProtobuf.java:180)
>         at org.apache.hadoop.hdfs.server.namenode.FSImageFormat$LoaderDelegator.load(FSImageFormat.java:226)
>         at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:929)
>         at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:913)
>         at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:732)
>         at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:668)
>         at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:281)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1061)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:765)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:584)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:643)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:810)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:794)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1487)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1553)
> 15/11/07 10:01:27 INFO util.ExitUtil: Exiting with status 1
> {code}
> Corruption happened after "07.11.2015 00:15", and after that time ~9300 blocks
> were invalidated that shouldn't have been.
> After recovering the FSImage I discovered that around 9300 blocks were missing.
> -I also attached the namenode log from before and after the corruption happened.-

This message was sent by Atlassian JIRA
