Delivered-To: mailing list hdfs-commits@hadoop.apache.org
Mailing-List: contact hdfs-commits-help@hadoop.apache.org; run by ezmlm
Reply-To: hdfs-dev@hadoop.apache.org
Subject: svn commit: r1568206 - in /hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs: ./ src/main/java/org/apache/hadoop/hdfs/server/namenode/ src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/ src/test/java/org/apache/hadoop/hdfs/ser...
Date: Fri, 14 Feb 2014 07:35:45 -0000
To: hdfs-commits@hadoop.apache.org
From: jing9@apache.org
X-Mailer: svnmailer-1.0.9
Message-Id: <20140214073546.261F22388994@eris.apache.org>

Author: jing9
Date: Fri Feb 14 07:35:44 2014
New Revision: 1568206

URL: http://svn.apache.org/r1568206
Log:
HDFS-5554. Merge change r1548796 from trunk.

Added:
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileWithSnapshotFeature.java
      - copied unchanged from r1548796, hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileWithSnapshotFeature.java
Removed:
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeFileWithSnapshot.java
Modified:
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeReference.java
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiff.java
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiffList.java
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectorySnapshottable.java
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectoryWithSnapshot.java
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotFSImageFormat.java
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSnapshotPathINodes.java
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestINodeFileUnderConstructionWithSnapshot.java
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestRenameWithSnapshots.java
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotBlocksMap.java
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDeletion.java

Modified: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt?rev=1568206&r1=1568205&r2=1568206&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt (original)
+++ hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt Fri Feb 14 07:35:44 2014
@@ -48,6 +48,9 @@ Release 2.4.0 - UNRELEASED
 
     HDFS-5537. Remove FileWithSnapshot interface.  (jing9 via szetszwo)
 
+    HDFS-5554. Flatten INodeFile hierarchy: Replace INodeFileWithSnapshot with
+    FileWithSnapshotFeature.  (jing9 via szetszwo)
+
   OPTIMIZATIONS
 
     HDFS-5790. LeaseManager.findPath is very slow when many leases need recovery

Modified: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java?rev=1568206&r1=1568205&r2=1568206&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java (original)
+++ hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java Fri Feb 14 07:35:44 2014
@@ -60,7 +60,6 @@ import org.apache.hadoop.hdfs.server.com
 import org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiffList;
 import org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectorySnapshottable;
 import org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot;
-import org.apache.hadoop.hdfs.server.namenode.snapshot.INodeFileWithSnapshot;
 import org.apache.hadoop.hdfs.server.namenode.snapshot.Snapshot;
 import org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotFSImageFormat;
 import org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotFSImageFormat.ReferenceMap;
@@ -713,11 +712,9 @@ public class FSImageFormat {
           modificationTime, atime, blocks, replication, blockSize);
       if (underConstruction) {
         file.toUnderConstruction(clientName, clientMachine, null);
-        return fileDiffs == null ? file : new INodeFileWithSnapshot(file,
-            fileDiffs);
+        return fileDiffs == null ? file : new INodeFile(file, fileDiffs);
       } else {
-        return fileDiffs == null ? file :
-            new INodeFileWithSnapshot(file, fileDiffs);
+        return fileDiffs == null ? file : new INodeFile(file, fileDiffs);
       }
     } else if (numBlocks == -1) { //directory

Modified: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java?rev=1568206&r1=1568205&r2=1568206&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java (original)
+++ hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java Fri Feb 14 07:35:44 2014
@@ -201,7 +201,6 @@ import org.apache.hadoop.hdfs.server.nam
 import org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics;
 import org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectorySnapshottable;
 import org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectorySnapshottable.SnapshotDiffInfo;
-import org.apache.hadoop.hdfs.server.namenode.snapshot.INodeFileWithSnapshot;
 import org.apache.hadoop.hdfs.server.namenode.snapshot.Snapshot;
 import org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager;
 import org.apache.hadoop.hdfs.server.namenode.startupprogress.Phase;
@@ -1773,7 +1772,7 @@ public class FSNamesystem implements Nam
       throw new HadoopIllegalArgumentException("concat: target file "
           + target + " is empty");
     }
-    if (trgInode instanceof INodeFileWithSnapshot) {
+    if (trgInode.isWithSnapshot()) {
       throw new HadoopIllegalArgumentException("concat: target file "
           + target + " is in a snapshot");
     }

Modified: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java?rev=1568206&r1=1568205&r2=1568206&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java (original)
+++ hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java Fri Feb 14 07:35:44 2014
@@ -34,7 +34,6 @@ import org.apache.hadoop.hdfs.protocol.S
 import org.apache.hadoop.hdfs.server.namenode.INodeReference.WithCount;
 import org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectorySnapshottable;
 import org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot;
-import org.apache.hadoop.hdfs.server.namenode.snapshot.INodeFileWithSnapshot;
 import org.apache.hadoop.hdfs.server.namenode.snapshot.Snapshot;
 import org.apache.hadoop.hdfs.util.ReadOnlyList;
@@ -322,23 +321,6 @@ public class INodeDirectory extends INod
     replaceChild(oldChild, ref, null);
     return ref;
   }
-
-  private void replaceChildFile(final INodeFile oldChild,
-      final INodeFile newChild, final INodeMap inodeMap) {
-    replaceChild(oldChild, newChild, inodeMap);
-    oldChild.clear();
-    newChild.updateBlockCollection();
-  }
-
-  /** Replace a child {@link INodeFile} with an {@link INodeFileWithSnapshot}. */
-  INodeFileWithSnapshot replaceChild4INodeFileWithSnapshot(
-      final INodeFile child, final INodeMap inodeMap) {
-    Preconditions.checkArgument(!(child instanceof INodeFileWithSnapshot),
-        "Child file is already an INodeFileWithSnapshot, child=" + child);
-    final INodeFileWithSnapshot newChild = new INodeFileWithSnapshot(child);
-    replaceChildFile(child, newChild, inodeMap);
-    return newChild;
-  }
 
   @Override
   public INodeDirectory recordModification(Snapshot latest,

Modified: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java?rev=1568206&r1=1568205&r2=1568206&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java (original)
+++ hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java Fri Feb 14 07:35:44 2014
@@ -35,7 +35,7 @@ import org.apache.hadoop.hdfs.server.blo
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.BlockUCState;
 import org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiff;
 import org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiffList;
-import org.apache.hadoop.hdfs.server.namenode.snapshot.INodeFileWithSnapshot;
+import org.apache.hadoop.hdfs.server.namenode.snapshot.FileWithSnapshotFeature;
 import org.apache.hadoop.hdfs.server.namenode.snapshot.Snapshot;
 import com.google.common.annotations.VisibleForTesting;
@@ -141,24 +141,11 @@ public class INodeFile extends INodeWith
     this.blocks = that.blocks;
     this.headFeature = that.headFeature;
   }
-
-  /**
-   * If the inode contains a {@link FileUnderConstructionFeature}, return it;
-   * otherwise, return null.
-   */
-  public final FileUnderConstructionFeature getFileUnderConstructionFeature() {
-    for (Feature f = this.headFeature; f != null; f = f.nextFeature) {
-      if (f instanceof FileUnderConstructionFeature) {
-        return (FileUnderConstructionFeature) f;
-      }
-    }
-    return null;
-  }
-
-  /** Is this file under construction? */
-  @Override // BlockCollection
-  public boolean isUnderConstruction() {
-    return getFileUnderConstructionFeature() != null;
+
+  public INodeFile(INodeFile that, FileDiffList diffs) {
+    this(that);
+    Preconditions.checkArgument(!that.isWithSnapshot());
+    this.addSnapshotFeature(diffs);
   }
 
   private void addFeature(Feature f) {
@@ -183,6 +170,25 @@ public class INodeFile extends INodeWith
 
   /* Start of Under-Construction Feature */
 
+  /**
+   * If the inode contains a {@link FileUnderConstructionFeature}, return it;
+   * otherwise, return null.
+   */
+  public final FileUnderConstructionFeature getFileUnderConstructionFeature() {
+    for (Feature f = this.headFeature; f != null; f = f.nextFeature) {
+      if (f instanceof FileUnderConstructionFeature) {
+        return (FileUnderConstructionFeature) f;
+      }
+    }
+    return null;
+  }
+
+  /** Is this file under construction? */
+  @Override // BlockCollection
+  public boolean isUnderConstruction() {
+    return getFileUnderConstructionFeature() != null;
+  }
+
   /** Convert this file to an {@link INodeFileUnderConstruction}. */
   INodeFile toUnderConstruction(String clientName, String clientMachine,
       DatanodeDescriptor clientNode) {
@@ -267,24 +273,75 @@ public class INodeFile extends INodeWith
   }
 
   /* End of Under-Construction Feature */
+
+  /* Start of Snapshot Feature */
+
+  private FileWithSnapshotFeature addSnapshotFeature(FileDiffList diffs) {
+    FileWithSnapshotFeature sf = new FileWithSnapshotFeature(diffs);
+    this.addFeature(sf);
+    return sf;
+  }
+
+  /**
+   * If feature list contains a {@link FileWithSnapshotFeature}, return it;
+   * otherwise, return null.
+   */
+  public final FileWithSnapshotFeature getFileWithSnapshotFeature() {
+    for (Feature f = headFeature; f != null; f = f.nextFeature) {
+      if (f instanceof FileWithSnapshotFeature) {
+        return (FileWithSnapshotFeature) f;
+      }
+    }
+    return null;
+  }
+
+  /** Is this file has the snapshot feature? */
+  public final boolean isWithSnapshot() {
+    return getFileWithSnapshotFeature() != null;
+  }
+
+  @Override
+  public String toDetailString() {
+    FileWithSnapshotFeature sf = this.getFileWithSnapshotFeature();
+    return super.toDetailString() + (sf == null ? "" : sf.getDetailedString());
+  }
 
   @Override
   public INodeFileAttributes getSnapshotINode(final Snapshot snapshot) {
-    return this;
+    FileWithSnapshotFeature sf = this.getFileWithSnapshotFeature();
+    if (sf != null) {
+      return sf.getSnapshotINode(this, snapshot);
+    } else {
+      return this;
+    }
   }
 
   @Override
   public INodeFile recordModification(final Snapshot latest,
       final INodeMap inodeMap) throws QuotaExceededException {
     if (isInLatestSnapshot(latest)) {
-      INodeFileWithSnapshot newFile = getParent()
-          .replaceChild4INodeFileWithSnapshot(this, inodeMap)
-          .recordModification(latest, inodeMap);
-      return newFile;
-    } else {
-      return this;
+      // the file is in snapshot, create a snapshot feature if it does not have
+      FileWithSnapshotFeature sf = this.getFileWithSnapshotFeature();
+      if (sf == null) {
+        sf = addSnapshotFeature(null);
+      }
+      // record self in the diff list if necessary
+      if (!shouldRecordInSrcSnapshot(latest)) {
+        sf.getDiffs().saveSelf2Snapshot(latest, this, null);
+      }
     }
+    return this;
+  }
+
+  public FileDiffList getDiffs() {
+    FileWithSnapshotFeature sf = this.getFileWithSnapshotFeature();
+    if (sf != null) {
+      return sf.getDiffs();
+    }
+    return null;
   }
+
+  /* End of Snapshot Feature */
 
   /** @return the replication factor of the file. */
   public final short getFileReplication(Snapshot snapshot) {
@@ -296,14 +353,23 @@ public class INodeFile extends INodeWith
   }
 
   /** The same as getFileReplication(null). */
-  @Override
+  @Override // INodeFileAttributes
   public final short getFileReplication() {
     return getFileReplication(null);
   }
 
-  @Override
+  @Override // BlockCollection
   public short getBlockReplication() {
-    return getFileReplication(null);
+    short max = getFileReplication(null);
+    FileWithSnapshotFeature sf = this.getFileWithSnapshotFeature();
+    if (sf != null) {
+      short maxInSnapshot = sf.getMaxBlockRepInDiffs();
+      if (sf.isCurrentFileDeleted()) {
+        return maxInSnapshot;
+      }
+      max = maxInSnapshot > max ? maxInSnapshot : max;
+    }
+    return max;
   }
 
   /** Set the replication factor of this file. */
@@ -396,12 +462,20 @@ public class INodeFile extends INodeWith
       final BlocksMapUpdateInfo collectedBlocks,
       final List removedINodes, final boolean countDiffChange)
       throws QuotaExceededException {
+    FileWithSnapshotFeature sf = getFileWithSnapshotFeature();
+    if (sf != null) {
+      return sf.cleanFile(this, snapshot, prior, collectedBlocks,
+          removedINodes, countDiffChange);
+    }
     Quota.Counts counts = Quota.Counts.newInstance();
-    if (snapshot == null && prior == null) {
-      // this only happens when deleting the current file
+    if (snapshot == null && prior == null) {
+      // this only happens when deleting the current file and the file is not
+      // in any snapshot
       computeQuotaUsage(counts, false);
       destroyAndCollectBlocks(collectedBlocks, removedINodes);
     } else if (snapshot == null && prior != null) {
+      // when deleting the current file and the file is in snapshot, we should
+      // clean the 0-sized block if the file is UC
      FileUnderConstructionFeature uc = getFileUnderConstructionFeature();
       if (uc != null) {
         uc.cleanZeroSizeBlock(this, collectedBlocks);
@@ -423,8 +497,9 @@ public class INodeFile extends INodeWith
     clear();
     removedINodes.add(this);
-    if (this instanceof INodeFileWithSnapshot) {
-      ((INodeFileWithSnapshot) this).getDiffs().clear();
+    FileWithSnapshotFeature sf = getFileWithSnapshotFeature();
+    if (sf != null) {
+      sf.clearDiffs();
     }
   }
@@ -439,8 +514,9 @@ public class INodeFile extends INodeWith
       boolean useCache, int lastSnapshotId) {
     long nsDelta = 1;
     final long dsDelta;
-    if (this instanceof INodeFileWithSnapshot) {
-      FileDiffList fileDiffList = ((INodeFileWithSnapshot) this).getDiffs();
+    FileWithSnapshotFeature sf = getFileWithSnapshotFeature();
+    if (sf != null) {
+      FileDiffList fileDiffList = sf.getDiffs();
       Snapshot last = fileDiffList.getLastSnapshot();
       List diffs = fileDiffList.asList();
@@ -472,16 +548,16 @@ public class INodeFile extends INodeWith
   private void computeContentSummary4Snapshot(final Content.Counts counts) {
     // file length and diskspace only counted for the latest state of the file
     // i.e. either the current state or the last snapshot
-    if (this instanceof INodeFileWithSnapshot) {
-      final INodeFileWithSnapshot withSnapshot = (INodeFileWithSnapshot) this;
-      final FileDiffList diffs = withSnapshot.getDiffs();
+    FileWithSnapshotFeature sf = getFileWithSnapshotFeature();
+    if (sf != null) {
+      final FileDiffList diffs = sf.getDiffs();
       final int n = diffs.asList().size();
       counts.add(Content.FILE, n);
-      if (n > 0 && withSnapshot.isCurrentFileDeleted()) {
+      if (n > 0 && sf.isCurrentFileDeleted()) {
         counts.add(Content.LENGTH, diffs.getLast().getFileSize());
       }
-      if (withSnapshot.isCurrentFileDeleted()) {
+      if (sf.isCurrentFileDeleted()) {
         final long lastFileSize = diffs.getLast().getFileSize();
         counts.add(Content.DISKSPACE, lastFileSize * getBlockReplication());
       }
@@ -489,8 +565,8 @@ public class INodeFile extends INodeWith
   }
 
   private void computeContentSummary4Current(final Content.Counts counts) {
-    if (this instanceof INodeFileWithSnapshot
-        && ((INodeFileWithSnapshot) this).isCurrentFileDeleted()) {
+    FileWithSnapshotFeature sf = this.getFileWithSnapshotFeature();
+    if (sf != null && sf.isCurrentFileDeleted()) {
       return;
     }
@@ -509,8 +585,9 @@ public class INodeFile extends INodeWith
    * otherwise, get the file size from the given snapshot.
    */
   public final long computeFileSize(Snapshot snapshot) {
-    if (snapshot != null && this instanceof INodeFileWithSnapshot) {
-      final FileDiff d = ((INodeFileWithSnapshot) this).getDiffs().getDiff(
+    FileWithSnapshotFeature sf = this.getFileWithSnapshotFeature();
+    if (snapshot != null && sf != null) {
+      final FileDiff d = sf.getDiffs().getDiff(
           snapshot);
       if (d != null) {
         return d.getFileSize();

Modified: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeReference.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeReference.java?rev=1568206&r1=1568205&r2=1568206&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeReference.java (original)
+++ hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeReference.java Fri Feb 14 07:35:44 2014
@@ -27,7 +27,6 @@ import org.apache.hadoop.fs.permission.F
 import org.apache.hadoop.fs.permission.PermissionStatus;
 import org.apache.hadoop.hdfs.protocol.QuotaExceededException;
 import org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot;
-import org.apache.hadoop.hdfs.server.namenode.snapshot.INodeFileWithSnapshot;
 import org.apache.hadoop.hdfs.server.namenode.snapshot.Snapshot;
 import com.google.common.base.Preconditions;
@@ -102,9 +101,8 @@ public abstract class INodeReference ext
     }
     if (wn != null) {
       INode referred = wc.getReferredINode();
-      if (referred instanceof INodeFileWithSnapshot) {
-        return ((INodeFileWithSnapshot) referred).getDiffs().getPrior(
-            wn.lastSnapshotId);
+      if (referred.isFile() && referred.asFile().isWithSnapshot()) {
+        return referred.asFile().getDiffs().getPrior(wn.lastSnapshotId);
       } else if (referred instanceof INodeDirectoryWithSnapshot) {
         return ((INodeDirectoryWithSnapshot) referred).getDiffs().getPrior(
             wn.lastSnapshotId);
@@ -547,9 +545,8 @@ public abstract class INodeReference ext
     private Snapshot getSelfSnapshot() {
       INode referred = getReferredINode().asReference().getReferredINode();
       Snapshot snapshot = null;
-      if (referred instanceof INodeFileWithSnapshot) {
-        snapshot = ((INodeFileWithSnapshot) referred).getDiffs().getPrior(
-            lastSnapshotId);
+      if (referred.isFile() && referred.asFile().isWithSnapshot()) {
+        snapshot = referred.asFile().getDiffs().getPrior(lastSnapshotId);
       } else if (referred instanceof INodeDirectoryWithSnapshot) {
         snapshot = ((INodeDirectoryWithSnapshot) referred).getDiffs().getPrior(
             lastSnapshotId);
@@ -637,12 +634,12 @@ public abstract class INodeReference ext
       Snapshot snapshot = getSelfSnapshot(prior);
 
       INode referred = getReferredINode().asReference().getReferredINode();
-      if (referred instanceof INodeFileWithSnapshot) {
-        // if referred is a file, it must be a FileWithSnapshot since we did
+      if (referred.isFile() && referred.asFile().isWithSnapshot()) {
+        // if referred is a file, it must be a file with Snapshot since we did
         // recordModification before the rename
-        INodeFileWithSnapshot sfile = (INodeFileWithSnapshot) referred;
+        INodeFile file = referred.asFile();
         // make sure we mark the file as deleted
-        sfile.deleteCurrentFile();
+        file.getFileWithSnapshotFeature().deleteCurrentFile();
         try {
           // when calling cleanSubtree of the referred node, since we
           // compute quota usage updates before calling this destroy
@@ -671,9 +668,8 @@ public abstract class INodeReference ext
       WithCount wc = (WithCount) getReferredINode().asReference();
       INode referred = wc.getReferredINode();
       Snapshot lastSnapshot = null;
-      if (referred instanceof INodeFileWithSnapshot) {
-        lastSnapshot = ((INodeFileWithSnapshot) referred).getDiffs()
-            .getLastSnapshot();
+      if (referred.isFile() && referred.asFile().isWithSnapshot()) {
+        lastSnapshot = referred.asFile().getDiffs().getLastSnapshot();
       } else if (referred instanceof INodeDirectoryWithSnapshot) {
         lastSnapshot = ((INodeDirectoryWithSnapshot) referred)
             .getLastSnapshot();

Modified: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiff.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiff.java?rev=1568206&r1=1568205&r2=1568206&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiff.java (original)
+++ hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiff.java Fri Feb 14 07:35:44 2014
@@ -33,7 +33,7 @@ import org.apache.hadoop.hdfs.server.nam
  * The difference of an {@link INodeFile} between two snapshots.
  */
 public class FileDiff extends
-    AbstractINodeDiff {
+    AbstractINodeDiff {
 
   /** The file size at snapshot creation time. */
   private final long fileSize;
@@ -56,11 +56,12 @@ public class FileDiff extends
   }
 
   @Override
-  Quota.Counts combinePosteriorAndCollectBlocks(
-      INodeFileWithSnapshot currentINode, FileDiff posterior,
-      BlocksMapUpdateInfo collectedBlocks, final List removedINodes) {
-    return currentINode.updateQuotaAndCollectBlocks(posterior, collectedBlocks,
-        removedINodes);
+  Quota.Counts combinePosteriorAndCollectBlocks(INodeFile currentINode,
+      FileDiff posterior, BlocksMapUpdateInfo collectedBlocks,
+      final List removedINodes) {
+    return currentINode.getFileWithSnapshotFeature()
+        .updateQuotaAndCollectBlocks(currentINode, posterior, collectedBlocks,
+            removedINodes);
   }
 
   @Override
@@ -84,9 +85,10 @@ public class FileDiff extends
   }
 
   @Override
-  Quota.Counts destroyDiffAndCollectBlocks(INodeFileWithSnapshot currentINode,
+  Quota.Counts destroyDiffAndCollectBlocks(INodeFile currentINode,
       BlocksMapUpdateInfo collectedBlocks, final List removedINodes) {
-    return currentINode.updateQuotaAndCollectBlocks(this, collectedBlocks,
-        removedINodes);
+    return currentINode.getFileWithSnapshotFeature()
+        .updateQuotaAndCollectBlocks(currentINode, this, collectedBlocks,
+            removedINodes);
   }
 }

Modified: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiffList.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiffList.java?rev=1568206&r1=1568205&r2=1568206&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiffList.java (original)
+++ hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiffList.java Fri Feb 14 07:35:44 2014
@@ -17,19 +17,20 @@
  */
 package org.apache.hadoop.hdfs.server.namenode.snapshot;
 
+import org.apache.hadoop.hdfs.server.namenode.INodeFile;
 import org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes;
 
 /** A list of FileDiffs for storing snapshot data. */
 public class FileDiffList extends
-    AbstractINodeDiffList {
+    AbstractINodeDiffList {
 
   @Override
-  FileDiff createDiff(Snapshot snapshot, INodeFileWithSnapshot file) {
+  FileDiff createDiff(Snapshot snapshot, INodeFile file) {
    return new FileDiff(snapshot, file);
   }
 
   @Override
-  INodeFileAttributes createSnapshotCopy(INodeFileWithSnapshot currentINode) {
+  INodeFileAttributes createSnapshotCopy(INodeFile currentINode) {
     return new INodeFileAttributes.SnapshotCopy(currentINode);
   }
 }

Modified: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectorySnapshottable.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectorySnapshottable.java?rev=1568206&r1=1568205&r2=1568206&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectorySnapshottable.java (original)
+++ hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectorySnapshottable.java Fri Feb 14 07:35:44 2014
@@ -34,9 +34,9 @@ import org.apache.hadoop.classification.
 import org.apache.hadoop.hdfs.DFSUtil;
 import org.apache.hadoop.hdfs.protocol.QuotaExceededException;
 import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport;
-import org.apache.hadoop.hdfs.protocol.SnapshotException;
 import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport.DiffReportEntry;
 import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport.DiffType;
+import org.apache.hadoop.hdfs.protocol.SnapshotException;
 import org.apache.hadoop.hdfs.server.namenode.Content;
 import org.apache.hadoop.hdfs.server.namenode.ContentSummaryComputationContext;
 import org.apache.hadoop.hdfs.server.namenode.INode;
@@ -432,8 +432,8 @@ public class INodeDirectorySnapshottable
           parentPath.remove(parentPath.size() - 1);
         }
       }
-    } else if (node.isFile() && node.asFile() instanceof INodeFileWithSnapshot) {
-      INodeFileWithSnapshot file = (INodeFileWithSnapshot) node.asFile();
+    } else if (node.isFile() && node.asFile().isWithSnapshot()) {
+      INodeFile file = node.asFile();
       Snapshot earlierSnapshot = diffReport.isFromEarlier() ? diffReport.from
           : diffReport.to;
       Snapshot laterSnapshot = diffReport.isFromEarlier() ? diffReport.to

Modified: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectoryWithSnapshot.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectoryWithSnapshot.java?rev=1568206&r1=1568205&r2=1568206&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectoryWithSnapshot.java (original)
+++ hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectoryWithSnapshot.java Fri Feb 14 07:35:44 2014
@@ -37,6 +37,7 @@ import org.apache.hadoop.hdfs.server.nam
 import org.apache.hadoop.hdfs.server.namenode.INode;
 import org.apache.hadoop.hdfs.server.namenode.INodeDirectory;
 import org.apache.hadoop.hdfs.server.namenode.INodeDirectoryAttributes;
+import org.apache.hadoop.hdfs.server.namenode.INodeFile;
 import org.apache.hadoop.hdfs.server.namenode.INodeMap;
 import org.apache.hadoop.hdfs.server.namenode.INodeReference;
 import org.apache.hadoop.hdfs.server.namenode.Quota;
@@ -803,10 +804,9 @@ public class INodeDirectoryWithSnapshot
         }
         // For DstReference node, since the node is not in the created list of
         // prior, we should treat it as regular file/dir
-      } else if (topNode.isFile()
-          && topNode.asFile() instanceof INodeFileWithSnapshot) {
-        INodeFileWithSnapshot fs = (INodeFileWithSnapshot) topNode.asFile();
-        counts.add(fs.getDiffs().deleteSnapshotDiff(post, prior, fs,
+      } else if (topNode.isFile() && topNode.asFile().isWithSnapshot()) {
+        INodeFile file = topNode.asFile();
+        counts.add(file.getDiffs().deleteSnapshotDiff(post, prior, file,
            collectedBlocks, removedINodes, countDiffChange));
       } else if (topNode.isDirectory()) {
         INodeDirectory dir = topNode.asDirectory();

Modified: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotFSImageFormat.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotFSImageFormat.java?rev=1568206&r1=1568205&r2=1568206&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotFSImageFormat.java (original)
+++ hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotFSImageFormat.java Fri Feb 14 07:35:44 2014
@@ -97,8 +97,7 @@ public class SnapshotFSImageFormat {
 
   public static void saveFileDiffList(final INodeFile file,
       final DataOutput out) throws IOException {
-    saveINodeDiffs(file instanceof INodeFileWithSnapshot?
-        ((INodeFileWithSnapshot) file).getDiffs(): null, out, null);
+    saveINodeDiffs(file.getDiffs(), out, null);
   }
 
   public static FileDiffList loadFileDiffList(DataInput in,

Modified: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSnapshotPathINodes.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSnapshotPathINodes.java?rev=1568206&r1=1568205&r2=1568206&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSnapshotPathINodes.java (original)
+++ hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSnapshotPathINodes.java Fri Feb 14 07:35:44 2014
@@ -32,7 +32,6 @@ import org.apache.hadoop.hdfs.Distribute
 import org.apache.hadoop.hdfs.MiniDFSCluster;
 import org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectorySnapshottable;
 import org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot;
-import org.apache.hadoop.hdfs.server.namenode.snapshot.INodeFileWithSnapshot;
 import org.apache.hadoop.hdfs.server.namenode.snapshot.Snapshot;
 import org.junit.AfterClass;
 import org.junit.Assert;
@@ -244,7 +243,7 @@ public class TestSnapshotPathINodes {
     // The last INode should be the INode for sub1
     final INode last = nodesInPath.getLastINode();
     assertEquals(last.getFullPathName(), sub1.toString());
-    assertFalse(last instanceof INodeFileWithSnapshot);
+    assertFalse(last instanceof INodeFile);
 
     String[] invalidPathComponent = {"invalidDir", "foo", ".snapshot", "bar"};
     Path invalidPath = new Path(invalidPathComponent[0]);
@@ -294,7 +293,7 @@ public class TestSnapshotPathINodes {
     // Check the INode for file1 (snapshot file)
     final INode inode =
inodes[inodes.length - 1]; assertEquals(file1.getName(), inode.getLocalName()); - assertEquals(INodeFileWithSnapshot.class, inode.getClass()); + assertTrue(inode.asFile().isWithSnapshot()); } // Check the INodes for path /TestSnapshot/sub1/file1 @@ -398,6 +397,8 @@ public class TestSnapshotPathINodes { // The last INode should be associated with file1 assertEquals(inodes[components.length - 1].getFullPathName(), file1.toString()); + // record the modification time of the inode + final long modTime = inodes[inodes.length - 1].getModificationTime(); // Create a snapshot for the dir, and check the inodes for the path // pointing to a snapshot file @@ -421,10 +422,10 @@ public class TestSnapshotPathINodes { // Check the INode for snapshot of file1 INode snapshotFileNode = ssInodes[ssInodes.length - 1]; assertEquals(snapshotFileNode.getLocalName(), file1.getName()); - assertTrue(snapshotFileNode instanceof INodeFileWithSnapshot); + assertTrue(snapshotFileNode.asFile().isWithSnapshot()); // The modification time of the snapshot INode should be the same with the // original INode before modification - assertEquals(inodes[inodes.length - 1].getModificationTime(), + assertEquals(modTime, snapshotFileNode.getModificationTime(ssNodesInPath.getPathSnapshot())); // Check the INode for /TestSnapshot/sub1/file1 again @@ -439,8 +440,7 @@ public class TestSnapshotPathINodes { final int last = components.length - 1; assertEquals(newInodes[last].getFullPathName(), file1.toString()); // The modification time of the INode for file3 should have been changed - Assert.assertFalse(inodes[last].getModificationTime() - == newInodes[last].getModificationTime()); + Assert.assertFalse(modTime == newInodes[last].getModificationTime()); hdfs.deleteSnapshot(sub1, "s3"); hdfs.disallowSnapshot(sub1); } Modified: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestINodeFileUnderConstructionWithSnapshot.java URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestINodeFileUnderConstructionWithSnapshot.java?rev=1568206&r1=1568205&r2=1568206&view=diff ============================================================================== --- hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestINodeFileUnderConstructionWithSnapshot.java (original) +++ hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestINodeFileUnderConstructionWithSnapshot.java Fri Feb 14 07:35:44 2014 @@ -176,7 +176,7 @@ public class TestINodeFileUnderConstruct dirNode = (INodeDirectorySnapshottable) fsdir.getINode(dir.toString()); last = dirNode.getDiffs().getLast(); Snapshot s1 = last.snapshot; - assertTrue(fileNode instanceof INodeFileWithSnapshot); + assertTrue(fileNode.isWithSnapshot()); assertEquals(BLOCKSIZE * 3, fileNode.computeFileSize(s1)); // 4. 
modify file --> append without closing stream --> take snapshot --> Modified: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestRenameWithSnapshots.java URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestRenameWithSnapshots.java?rev=1568206&r1=1568205&r2=1568206&view=diff ============================================================================== --- hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestRenameWithSnapshots.java (original) +++ hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestRenameWithSnapshots.java Fri Feb 14 07:35:44 2014 @@ -403,8 +403,7 @@ public class TestRenameWithSnapshots { final Path foo_s3 = SnapshotTestHelper.getSnapshotPath(sdir1, "s3", "foo"); assertFalse(hdfs.exists(foo_s3)); - INodeFileWithSnapshot sfoo = (INodeFileWithSnapshot) fsdir.getINode( - newfoo.toString()).asFile(); + INodeFile sfoo = fsdir.getINode(newfoo.toString()).asFile(); assertEquals("s2", sfoo.getDiffs().getLastSnapshot().getRoot() .getLocalName()); } @@ -604,8 +603,7 @@ public class TestRenameWithSnapshots { status = hdfs.getFileStatus(foo_s2); assertEquals(REPL, status.getReplication()); - INodeFileWithSnapshot snode = (INodeFileWithSnapshot) fsdir.getINode( - newfoo.toString()).asFile(); + INodeFile snode = fsdir.getINode(newfoo.toString()).asFile(); assertEquals(1, snode.getDiffs().asList().size()); assertEquals("s2", snode.getDiffs().getLastSnapshot().getRoot() .getLocalName()); @@ -763,8 +761,7 @@ public class TestRenameWithSnapshots { .asDirectory(); assertEquals(1, foo.getDiffs().asList().size()); assertEquals("s1", foo.getLastSnapshot().getRoot().getLocalName()); - INodeFileWithSnapshot bar1 = 
(INodeFileWithSnapshot) fsdir.getINode4Write( - bar1_dir1.toString()).asFile(); + INodeFile bar1 = fsdir.getINode4Write(bar1_dir1.toString()).asFile(); assertEquals(1, bar1.getDiffs().asList().size()); assertEquals("s1", bar1.getDiffs().getLastSnapshot().getRoot() .getLocalName()); @@ -774,7 +771,7 @@ public class TestRenameWithSnapshots { INodeReference.WithCount barWithCount = (WithCount) barRef .getReferredINode(); assertEquals(2, barWithCount.getReferenceCount()); - INodeFileWithSnapshot bar = (INodeFileWithSnapshot) barWithCount.asFile(); + INodeFile bar = barWithCount.asFile(); assertEquals(1, bar.getDiffs().asList().size()); assertEquals("s1", bar.getDiffs().getLastSnapshot().getRoot() .getLocalName()); @@ -984,8 +981,7 @@ public class TestRenameWithSnapshots { assertEquals("s333", fooDiffs.get(2).snapshot.getRoot().getLocalName()); assertEquals("s22", fooDiffs.get(1).snapshot.getRoot().getLocalName()); assertEquals("s1", fooDiffs.get(0).snapshot.getRoot().getLocalName()); - INodeFileWithSnapshot bar1 = (INodeFileWithSnapshot) fsdir.getINode4Write( - bar1_dir1.toString()).asFile(); + INodeFile bar1 = fsdir.getINode4Write(bar1_dir1.toString()).asFile(); List bar1Diffs = bar1.getDiffs().asList(); assertEquals(3, bar1Diffs.size()); assertEquals("s333", bar1Diffs.get(2).snapshot.getRoot().getLocalName()); @@ -997,7 +993,7 @@ public class TestRenameWithSnapshots { INodeReference.WithCount barWithCount = (WithCount) barRef.getReferredINode(); // 5 references: s1, s22, s333, s2222, current tree of sdir1 assertEquals(5, barWithCount.getReferenceCount()); - INodeFileWithSnapshot bar = (INodeFileWithSnapshot) barWithCount.asFile(); + INodeFile bar = barWithCount.asFile(); List barDiffs = bar.getDiffs().asList(); assertEquals(4, barDiffs.size()); assertEquals("s2222", barDiffs.get(3).snapshot.getRoot().getLocalName()); @@ -1047,7 +1043,7 @@ public class TestRenameWithSnapshots { barRef = fsdir.getINode(bar_s2222.toString()).asReference(); barWithCount = (WithCount) 
barRef.getReferredINode(); assertEquals(4, barWithCount.getReferenceCount()); - bar = (INodeFileWithSnapshot) barWithCount.asFile(); + bar = barWithCount.asFile(); barDiffs = bar.getDiffs().asList(); assertEquals(4, barDiffs.size()); assertEquals("s2222", barDiffs.get(3).snapshot.getRoot().getLocalName()); @@ -1229,7 +1225,7 @@ public class TestRenameWithSnapshots { fooRef = fsdir.getINode4Write(foo2.toString()); assertTrue(fooRef instanceof INodeReference.DstReference); INodeFile fooNode = fooRef.asFile(); - assertTrue(fooNode instanceof INodeFileWithSnapshot); + assertTrue(fooNode.isWithSnapshot()); assertTrue(fooNode.isUnderConstruction()); } finally { if (out != null) { @@ -1240,7 +1236,7 @@ public class TestRenameWithSnapshots { fooRef = fsdir.getINode4Write(foo2.toString()); assertTrue(fooRef instanceof INodeReference.DstReference); INodeFile fooNode = fooRef.asFile(); - assertTrue(fooNode instanceof INodeFileWithSnapshot); + assertTrue(fooNode.isWithSnapshot()); assertFalse(fooNode.isUnderConstruction()); restartClusterAndCheckImage(true); @@ -1715,8 +1711,7 @@ public class TestRenameWithSnapshots { assertTrue(diff.getChildrenDiff().getList(ListType.CREATED).isEmpty()); // bar was converted to filewithsnapshot while renaming - INodeFileWithSnapshot barNode = (INodeFileWithSnapshot) fsdir - .getINode4Write(bar.toString()); + INodeFile barNode = fsdir.getINode4Write(bar.toString()).asFile(); assertSame(barNode, children.get(0)); assertSame(fooNode, barNode.getParent()); List barDiffList = barNode.getDiffs().asList(); Modified: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotBlocksMap.java URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotBlocksMap.java?rev=1568206&r1=1568205&r2=1568206&view=diff 
============================================================================== --- hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotBlocksMap.java (original) +++ hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotBlocksMap.java Fri Feb 14 07:35:44 2014 @@ -19,6 +19,7 @@ package org.apache.hadoop.hdfs.server.na import static org.apache.hadoop.test.GenericTestUtils.assertExceptionContains; import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertFalse; import static org.junit.Assert.assertNotNull; import static org.junit.Assert.assertNull; import static org.junit.Assert.assertTrue; @@ -167,7 +168,8 @@ public class TestSnapshotBlocksMap { Assert.assertSame(INodeFile.class, f1.getClass()); hdfs.setReplication(file1, (short)2); f1 = assertBlockCollection(file1.toString(), 2, fsdir, blockmanager); - Assert.assertSame(INodeFileWithSnapshot.class, f1.getClass()); + assertTrue(f1.isWithSnapshot()); + assertFalse(f1.isUnderConstruction()); } // Check the block information for file0 Modified: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDeletion.java URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDeletion.java?rev=1568206&r1=1568205&r2=1568206&view=diff ============================================================================== --- hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDeletion.java (original) +++ hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDeletion.java Fri Feb 14 
07:35:44 2014 @@ -277,10 +277,10 @@ public class TestSnapshotDeletion { TestSnapshotBlocksMap.assertBlockCollection(new Path(snapshotNoChangeDir, noChangeFileSCopy.getLocalName()).toString(), 1, fsdir, blockmanager); - INodeFileWithSnapshot metaChangeFile2SCopy = - (INodeFileWithSnapshot) children.get(0); + INodeFile metaChangeFile2SCopy = children.get(0).asFile(); assertEquals(metaChangeFile2.getName(), metaChangeFile2SCopy.getLocalName()); - assertEquals(INodeFileWithSnapshot.class, metaChangeFile2SCopy.getClass()); + assertTrue(metaChangeFile2SCopy.isWithSnapshot()); + assertFalse(metaChangeFile2SCopy.isUnderConstruction()); TestSnapshotBlocksMap.assertBlockCollection(new Path(snapshotNoChangeDir, metaChangeFile2SCopy.getLocalName()).toString(), 1, fsdir, blockmanager); @@ -338,8 +338,9 @@ public class TestSnapshotDeletion { INode child = children.get(0); assertEquals(child.getLocalName(), metaChangeFile1.getName()); // check snapshot copy of metaChangeFile1 - assertEquals(INodeFileWithSnapshot.class, child.getClass()); - INodeFileWithSnapshot metaChangeFile1SCopy = (INodeFileWithSnapshot) child; + INodeFile metaChangeFile1SCopy = child.asFile(); + assertTrue(metaChangeFile1SCopy.isWithSnapshot()); + assertFalse(metaChangeFile1SCopy.isUnderConstruction()); assertEquals(REPLICATION_1, metaChangeFile1SCopy.getFileReplication(null)); assertEquals(REPLICATION_1,
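The recurring change throughout this commit replaces `instanceof INodeFileWithSnapshot` checks and casts with a query on `INodeFile` itself (`isWithSnapshot()`), moving snapshot state out of a subclass and into a `FileWithSnapshotFeature` attached by composition. A minimal sketch of that composition-over-inheritance pattern, using simplified, hypothetical class shapes rather than the actual HDFS types:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for the feature object that now carries snapshot state.
class FileWithSnapshotFeature {
  final List<String> diffs = new ArrayList<>();
}

// Hypothetical stand-in for the file inode; not the real INodeFile.
class INodeFile {
  private FileWithSnapshotFeature snapshotFeature;

  // Attaching a feature replaces the old "swap the node for a subclass" step.
  FileWithSnapshotFeature addSnapshotFeature() {
    if (snapshotFeature == null) {
      snapshotFeature = new FileWithSnapshotFeature();
    }
    return snapshotFeature;
  }

  // Callers ask the node directly instead of testing instanceof and casting.
  boolean isWithSnapshot() {
    return snapshotFeature != null;
  }

  // Null-safe accessor: lets callers like saveFileDiffList drop their own
  // instanceof guard, as the diff above does.
  List<String> getDiffs() {
    return snapshotFeature == null ? null : snapshotFeature.diffs;
  }
}

public class FeatureSketch {
  public static void main(String[] args) {
    INodeFile file = new INodeFile();
    System.out.println(file.isWithSnapshot());  // false: no feature attached yet
    file.addSnapshotFeature().diffs.add("s1");
    System.out.println(file.isWithSnapshot());  // true: feature attached in place
    System.out.println(file.getDiffs());        // [s1]
  }
}
```

The practical payoff, visible in every hunk above, is that a file can gain or lose snapshot state without changing its runtime class, so code like `barWithCount.asFile()` no longer needs a downcast.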
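The `TestSnapshotPathINodes` hunks also harden a brittle assertion: instead of re-reading `inodes[inodes.length - 1]` after the file has been modified (by which point that slot may hold a different inode object), the test records `modTime` up front and compares against the saved value. A generic sketch of the same testing idea, with hypothetical names rather than HDFS code:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for an inode carrying a modification time.
class FileNode {
  final long modificationTime;
  FileNode(long t) { modificationTime = t; }
}

public class SnapshotTimeSketch {
  public static void main(String[] args) {
    Map<String, FileNode> namespace = new HashMap<>();
    namespace.put("/sub1/file1", new FileNode(1000L));

    // Capture the state *before* any mutation (mirrors the new `modTime`).
    final long modTime = namespace.get("/sub1/file1").modificationTime;

    // Taking a "snapshot" preserves the node's state at capture time.
    FileNode snapshotCopy = new FileNode(modTime);

    // A later modification replaces the live node with a new object.
    namespace.put("/sub1/file1", new FileNode(2000L));

    // Asserting against the captured value stays correct...
    System.out.println(snapshotCopy.modificationTime == modTime);  // true
    // ...while re-reading the live node would now compare against new state.
    System.out.println(
        namespace.get("/sub1/file1").modificationTime == modTime); // false
  }
}
```

Capturing the expected value before the mutation makes the assertion independent of whether the modification replaces the underlying object, which is exactly what the rename/snapshot machinery may do.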