Subject: svn commit: r1335305 - in /hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs: ./ src/main/java/ src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ src/main/java/org/apache/hadoop/hdfs/server/namenode/ src/test/java/org/apache/ha...
Date: Tue, 08 May 2012 00:08:13 -0000
To: hdfs-commits@hadoop.apache.org
From: szetszwo@apache.org
Reply-To: hdfs-dev@hadoop.apache.org

Author: szetszwo
Date: Tue May 8 00:08:12 2012
New Revision: 1335305

URL: http://svn.apache.org/viewvc?rev=1335305&view=rev
Log:
svn merge -c 1335304 from trunk for HDFS-3363. Define BlockCollection and
MutableBlockCollection interfaces so that INodeFile and
INodeFileUnderConstruction do not have to be used in block management.
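The two new interface files are copied unchanged from r1335304, so their bodies do not appear in the diff below. As a rough sketch of what the refactoring introduces, reconstructed only from the call sites visible in this commit (getName, getBlocks, getReplication, getLastBlock, numBlocks, getPreferredBlockSize), the interfaces plausibly look as follows; the two mutators in MutableBlockCollection are an assumption inferred from how completeBlock and convertLastBlockToUnderConstruction treat their argument, not text taken from this mail.

    // BlockCollection.java (sketch; actual file not shown in this diff)
    package org.apache.hadoop.hdfs.server.blockmanagement;

    import java.io.IOException;

    /** What block management needs to know about a collection of blocks
     *  (e.g. a file), with no dependency on namenode INode classes. */
    public interface BlockCollection {
      /** @return the last block of the collection */
      public BlockInfo getLastBlock() throws IOException;

      /** @return the number of blocks in the collection */
      public int numBlocks();

      /** @return all blocks of the collection */
      public BlockInfo[] getBlocks();

      /** @return the preferred block size of the collection */
      public long getPreferredBlockSize();

      /** @return the replication factor of the collection */
      public short getReplication();

      /** @return the name of the collection, e.g. the full file path */
      public String getName();
    }

    // MutableBlockCollection.java (sketch): a collection still being
    // written, standing in for INodeFileUnderConstruction.
    public interface MutableBlockCollection extends BlockCollection {
      /** Replace the block at the given index (assumed mutator). */
      public void setBlock(int index, BlockInfo blk);

      /** Convert the last block to an under-construction block
       *  (assumed mutator). */
      public BlockInfoUnderConstruction setLastBlock(BlockInfo lastBlock,
          DatanodeDescriptor[] locations) throws IOException;
    }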
Added:
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockCollection.java
      - copied unchanged from r1335304, hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockCollection.java
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/MutableBlockCollection.java
      - copied unchanged from r1335304, hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/MutableBlockCollection.java
Modified:
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/   (props changed)
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/   (props changed)
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstruction.java
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicy.java
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlocksMap.java
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSInodeInfo.java
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFileUnderConstruction.java
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeJspHelper.java
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDNFencing.java

Propchange: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs:r1335304

Modified: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt?rev=1335305&r1=1335304&r2=1335305&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt (original)
+++ hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt Tue May 8 00:08:12 2012
@@ -289,6 +289,10 @@ Release 2.0.0 - UNRELEASED
     HDFS-3375. Put client name in DataXceiver thread name for readBlock
     and keepalive (todd)
 
+    HDFS-3363. Define BlockCollection and MutableBlockCollection interfaces
+    so that INodeFile and INodeFileUnderConstruction do not have to be used in
+    block management.  (John George via szetszwo)
+
   OPTIMIZATIONS
 
     HDFS-2477. Optimize computing the diff between a block report and the

Propchange: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java:r1335304

Modified: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java?rev=1335305&r1=1335304&r2=1335305&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java (original)
+++ hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java Tue May 8 00:08:12 2012
@@ -19,14 +19,16 @@ package org.apache.hadoop.hdfs.server.bl
 
 import org.apache.hadoop.hdfs.protocol.Block;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.BlockUCState;
-import org.apache.hadoop.hdfs.server.namenode.INodeFile;
 import org.apache.hadoop.hdfs.util.LightWeightGSet;
 
 /**
  * Internal class for block metadata.
+ * BlockInfo class maintains for a given block
+ * the {@link BlockCollection} it is part of and datanodes where the replicas of
+ * the block are stored.
  */
 public class BlockInfo extends Block implements LightWeightGSet.LinkedElement {
-  private INodeFile inode;
+  private BlockCollection inode;
 
   /** For implementing {@link LightWeightGSet.LinkedElement} interface */
   private LightWeightGSet.LinkedElement nextLinkedElement;
@@ -66,11 +68,11 @@ public class BlockInfo extends Block imp
     this.inode = from.inode;
   }
 
-  public INodeFile getINode() {
+  public BlockCollection getINode() {
     return inode;
   }
 
-  public void setINode(INodeFile inode) {
+  public void setINode(BlockCollection inode) {
     this.inode = inode;
   }
 
Modified: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstruction.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstruction.java?rev=1335305&r1=1335304&r2=1335305&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstruction.java (original)
+++ hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstruction.java Tue May 8 00:08:12 2012
@@ -234,7 +234,7 @@ public class BlockInfoUnderConstruction
     blockRecoveryId = recoveryId;
     if (replicas.size() == 0) {
       NameNode.stateChangeLog.warn("BLOCK*"
-        + " INodeFileUnderConstruction.initLeaseRecovery:"
+        + " BlockInfoUnderConstruction.initLeaseRecovery:"
         + " No blocks found, lease removed.");
     }
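With the BlockInfo back-pointer retyped from INodeFile to BlockCollection above, block-management code can describe a block's owner without importing any namenode class. A minimal sketch of the pattern the BlockManager hunks below now follow (describeOwner is a hypothetical helper, not part of this commit):

    // Hypothetical helper illustrating the new dependency direction.
    static String describeOwner(BlockInfo storedBlock) {
      BlockCollection bc = storedBlock.getINode(); // was typed INodeFile before
      if (bc == null) {
        return "orphaned: " + storedBlock; // block no longer belongs to a file
      }
      // getName() replaces INodeFile.getFullPathName() at these call sites
      return bc.getName() + " (replication=" + bc.getReplication() + ")";
    }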
Modified: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java?rev=1335305&r1=1335304&r2=1335305&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java (original)
+++ hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java Tue May 8 00:08:12 2012
@@ -57,8 +57,6 @@ import org.apache.hadoop.hdfs.server.com
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.ReplicaState;
 import org.apache.hadoop.hdfs.server.common.Util;
 import org.apache.hadoop.hdfs.server.namenode.FSClusterStats;
-import org.apache.hadoop.hdfs.server.namenode.INodeFile;
-import org.apache.hadoop.hdfs.server.namenode.INodeFileUnderConstruction;
 import org.apache.hadoop.hdfs.server.namenode.NameNode;
 import org.apache.hadoop.hdfs.server.namenode.Namesystem;
 import org.apache.hadoop.hdfs.server.protocol.BlockCommand;
@@ -386,7 +384,7 @@ public class BlockManager {
                       numReplicas.decommissionedReplicas();
     if (block instanceof BlockInfo) {
-      String fileName = ((BlockInfo)block).getINode().getFullPathName();
+      String fileName = ((BlockInfo)block).getINode().getName();
       out.print(fileName + ": ");
     }
     // l: == live:, d: == decommissioned c: == corrupt e: == excess
@@ -462,7 +460,7 @@ public class BlockManager {
    * @throws IOException if the block does not have at least a minimal number
    * of replicas reported from data-nodes.
    */
-  public boolean commitOrCompleteLastBlock(INodeFileUnderConstruction fileINode,
+  public boolean commitOrCompleteLastBlock(MutableBlockCollection fileINode,
       Block commitBlock) throws IOException {
     if(commitBlock == null)
       return false; // not committing, this is a block allocation retry
@@ -474,7 +472,7 @@ public class BlockManager {
     final boolean b = commitBlock((BlockInfoUnderConstruction)lastBlock, commitBlock);
     if(countNodes(lastBlock).liveReplicas() >= minReplication)
-      completeBlock(fileINode,fileINode.numBlocks()-1, false);
+      completeBlock(fileINode, fileINode.numBlocks()-1, false);
     return b;
   }
 
@@ -485,7 +483,7 @@ public class BlockManager {
    * @throws IOException if the block does not have at least a minimal number
    * of replicas reported from data-nodes.
    */
-  private BlockInfo completeBlock(final INodeFile fileINode,
+  private BlockInfo completeBlock(final MutableBlockCollection fileINode,
       final int blkIndex, boolean force) throws IOException {
     if(blkIndex < 0)
       return null;
@@ -518,7 +516,7 @@ public class BlockManager {
     return blocksMap.replaceBlock(completeBlock);
   }
 
-  private BlockInfo completeBlock(final INodeFile fileINode,
+  private BlockInfo completeBlock(final MutableBlockCollection fileINode,
       final BlockInfo block, boolean force) throws IOException {
     BlockInfo[] fileBlocks = fileINode.getBlocks();
     for(int idx = 0; idx < fileBlocks.length; idx++)
@@ -533,7 +531,7 @@ public class BlockManager {
    * regardless of whether enough replicas are present. This is necessary
    * when tailing edit logs as a Standby.
    */
-  public BlockInfo forceCompleteBlock(final INodeFile fileINode,
+  public BlockInfo forceCompleteBlock(final MutableBlockCollection fileINode,
       final BlockInfoUnderConstruction block) throws IOException {
     block.commitBlock(block);
     return completeBlock(fileINode, block, true);
   }
 
@@ -554,7 +552,7 @@ public class BlockManager {
    * @return the last block locations if the block is partial or null otherwise
    */
   public LocatedBlock convertLastBlockToUnderConstruction(
-      INodeFileUnderConstruction fileINode) throws IOException {
+      MutableBlockCollection fileINode) throws IOException {
     BlockInfo oldBlock = fileINode.getLastBlock();
     if(oldBlock == null ||
        fileINode.getPreferredBlockSize() == oldBlock.getNumBytes())
@@ -925,7 +923,7 @@ public class BlockManager {
           " does not exist. ");
     }
 
-    INodeFile inode = storedBlock.getINode();
+    BlockCollection inode = storedBlock.getINode();
     if (inode == null) {
       NameNode.stateChangeLog.info("BLOCK markBlockAsCorrupt: " +
                                    "block " + storedBlock +
@@ -1053,7 +1051,7 @@
     int requiredReplication, numEffectiveReplicas;
     List containingNodes, liveReplicaNodes;
     DatanodeDescriptor srcNode;
-    INodeFile fileINode = null;
+    BlockCollection fileINode = null;
     int additionalReplRequired;
 
     int scheduledWork = 0;
@@ -1067,7 +1065,7 @@
           // block should belong to a file
           fileINode = blocksMap.getINode(block);
           // abandoned block or block reopened for append
-          if(fileINode == null || fileINode.isUnderConstruction()) {
+          if(fileINode == null || fileINode instanceof MutableBlockCollection) {
             neededReplications.remove(block, priority); // remove from neededReplications
             neededReplications.decrementReplicationIndex(priority);
             continue;
@@ -1153,7 +1151,7 @@
           // block should belong to a file
           fileINode = blocksMap.getINode(block);
           // abandoned block or block reopened for append
-          if(fileINode == null || fileINode.isUnderConstruction()) {
+          if(fileINode == null || fileINode instanceof MutableBlockCollection) {
            neededReplications.remove(block, priority); // remove from neededReplications
            rw.targets = null;
            neededReplications.decrementReplicationIndex(priority);
@@ -1918,7 +1916,7 @@ assert storedBlock.findDatanode(dn) < 0
     int numCurrentReplica = countLiveNodes(storedBlock);
     if (storedBlock.getBlockUCState() == BlockUCState.COMMITTED
         && numCurrentReplica >= minReplication) {
-      completeBlock(storedBlock.getINode(), storedBlock, false);
+      completeBlock((MutableBlockCollection)storedBlock.getINode(), storedBlock, false);
     } else if (storedBlock.isComplete()) {
       // check whether safe replication is reached for the block
       // only complete blocks are counted towards that.
@@ -1956,7 +1954,7 @@ assert storedBlock.findDatanode(dn) < 0
       return block;
     }
     assert storedBlock != null : "Block must be stored by now";
-    INodeFile fileINode = storedBlock.getINode();
+    BlockCollection fileINode = storedBlock.getINode();
     assert fileINode != null : "Block must belong to a file";
 
     // add block to the datanode
@@ -1983,7 +1981,7 @@ assert storedBlock.findDatanode(dn) < 0
 
     if(storedBlock.getBlockUCState() == BlockUCState.COMMITTED &&
         numLiveReplicas >= minReplication) {
-      storedBlock = completeBlock(fileINode, storedBlock, false);
+      storedBlock = completeBlock((MutableBlockCollection)fileINode, storedBlock, false);
     } else if (storedBlock.isComplete()) {
       // check whether safe replication is reached for the block
       // only complete blocks are counted towards that
@@ -1994,7 +1992,7 @@ assert storedBlock.findDatanode(dn) < 0
     }
 
     // if file is under construction, then done for now
-    if (fileINode.isUnderConstruction()) {
+    if (fileINode instanceof MutableBlockCollection) {
       return storedBlock;
     }
 
@@ -2131,7 +2129,7 @@ assert storedBlock.findDatanode(dn) < 0
    * what happened with it.
    */
   private MisReplicationResult processMisReplicatedBlock(BlockInfo block) {
-    INodeFile fileINode = block.getINode();
+    BlockCollection fileINode = block.getINode();
     if (fileINode == null) {
       // block does not belong to any file
      addToInvalidates(block);
@@ -2260,7 +2258,7 @@ assert storedBlock.findDatanode(dn) < 0
       BlockPlacementPolicy replicator) {
     assert namesystem.hasWriteLock();
     // first form a rack to datanodes map and
-    INodeFile inode = getINode(b);
+    BlockCollection inode = getINode(b);
     final Map<String, List<DatanodeDescriptor>> rackMap
         = new HashMap<String, List<DatanodeDescriptor>>();
     for(final Iterator iter = nonExcess.iterator();
@@ -2381,7 +2379,7 @@ assert storedBlock.findDatanode(dn) < 0
       // necessary. In that case, put block on a possibly-will-
       // be-replicated list.
       //
-      INodeFile fileINode = blocksMap.getINode(block);
+      BlockCollection fileINode = blocksMap.getINode(block);
       if (fileINode != null) {
         namesystem.decrementSafeBlockCount(block);
         updateNeededReplications(block, -1, 0);
@@ -2613,7 +2611,7 @@ assert storedBlock.findDatanode(dn) < 0
                                    NumberReplicas num) {
     int curReplicas = num.liveReplicas();
     int curExpectedReplicas = getReplication(block);
-    INodeFile fileINode = blocksMap.getINode(block);
+    BlockCollection fileINode = blocksMap.getINode(block);
     Iterator nodeIter = blocksMap.nodeIterator(block);
     StringBuilder nodeList = new StringBuilder();
     while (nodeIter.hasNext()) {
@@ -2626,7 +2624,7 @@ assert storedBlock.findDatanode(dn) < 0
                 + ", corrupt replicas: " + num.corruptReplicas()
                 + ", decommissioned replicas: " + num.decommissionedReplicas()
                 + ", excess replicas: " + num.excessReplicas()
-                + ", Is Open File: " + fileINode.isUnderConstruction()
+                + ", Is Open File: " + (fileINode instanceof MutableBlockCollection)
                 + ", Datanodes having this block: " + nodeList
                 + ", Current Datanode: " + srcNode
                 + ", Is current datanode decommissioning: "
                 + srcNode.isDecommissionInProgress());
@@ -2641,7 +2639,7 @@ assert storedBlock.findDatanode(dn) < 0
     final Iterator it = srcNode.getBlockIterator();
     while(it.hasNext()) {
       final Block block = it.next();
-      INodeFile fileINode = blocksMap.getINode(block);
+      BlockCollection fileINode = blocksMap.getINode(block);
       short expectedReplication = fileINode.getReplication();
       NumberReplicas num = countNodes(block);
       int numCurrentReplica = num.liveReplicas();
@@ -2664,7 +2662,7 @@ assert storedBlock.findDatanode(dn) < 0
     final Iterator it = srcNode.getBlockIterator();
     while(it.hasNext()) {
       final Block block = it.next();
-      INodeFile fileINode = blocksMap.getINode(block);
+      BlockCollection fileINode = blocksMap.getINode(block);
 
       if (fileINode != null) {
         NumberReplicas num = countNodes(block);
@@ -2681,7 +2679,7 @@ assert storedBlock.findDatanode(dn) < 0
           if ((curReplicas == 0) && (num.decommissionedReplicas() > 0)) {
             decommissionOnlyReplicas++;
           }
-          if (fileINode.isUnderConstruction()) {
+          if (fileINode instanceof MutableBlockCollection) {
             underReplicatedInOpenFiles++;
           }
         }
@@ -2784,11 +2782,10 @@ assert storedBlock.findDatanode(dn) < 0
 
   /* get replication factor of a block */
   private int getReplication(Block block) {
-    INodeFile fileINode = blocksMap.getINode(block);
+    BlockCollection fileINode = blocksMap.getINode(block);
     if (fileINode == null) { // block does not belong to any file
       return 0;
     }
-    assert !fileINode.isDirectory() : "Block cannot belong to a directory.";
     return fileINode.getReplication();
   }
 
@@ -2861,11 +2858,11 @@ assert storedBlock.findDatanode(dn) < 0
     return this.neededReplications.getCorruptBlockSize();
   }
 
-  public BlockInfo addINode(BlockInfo block, INodeFile iNode) {
+  public BlockInfo addINode(BlockInfo block, BlockCollection iNode) {
     return blocksMap.addINode(block, iNode);
  }
 
-  public INodeFile getINode(Block b) {
+  public BlockCollection getINode(Block b) {
     return blocksMap.getINode(b);
   }
 
@@ -3005,7 +3002,7 @@ assert storedBlock.findDatanode(dn) < 0
   private static class ReplicationWork {
 
     private Block block;
-    private INodeFile fileINode;
+    private BlockCollection fileINode;
 
     private DatanodeDescriptor srcNode;
     private List containingNodes;
@@ -3016,7 +3013,7 @@ assert storedBlock.findDatanode(dn) < 0
     private int priority;
 
     public ReplicationWork(Block block,
-        INodeFile fileINode,
+        BlockCollection fileINode,
         DatanodeDescriptor srcNode,
         List containingNodes,
         List liveReplicaNodes,
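A recurring substitution in the BlockManager hunks above: fileINode.isUnderConstruction() becomes an instanceof MutableBlockCollection test, because "still being written" is now encoded in the type rather than asked of the inode. Condensed from the @@ -1067 and @@ -1153 hunks (surrounding declarations elided):

    // Type test replaces the isUnderConstruction() query.
    BlockCollection fileINode = blocksMap.getINode(block);
    if (fileINode == null || fileINode instanceof MutableBlockCollection) {
      // abandoned block, or block reopened for append: skip replication work
      neededReplications.remove(block, priority);
      neededReplications.decrementReplicationIndex(priority);
    }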
Modified: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicy.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicy.java?rev=1335305&r1=1335304&r2=1335305&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicy.java (original)
+++ hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicy.java Tue May 8 00:08:12 2012
@@ -29,7 +29,6 @@ import org.apache.hadoop.conf.Configurat
 import org.apache.hadoop.hdfs.protocol.Block;
 import org.apache.hadoop.hdfs.protocol.LocatedBlock;
 import org.apache.hadoop.hdfs.server.namenode.FSClusterStats;
-import org.apache.hadoop.hdfs.server.namenode.FSInodeInfo;
 import org.apache.hadoop.net.NetworkTopology;
 import org.apache.hadoop.net.Node;
 import org.apache.hadoop.util.ReflectionUtils;
@@ -123,13 +122,13 @@ public abstract class BlockPlacementPoli
    * @return array of DatanodeDescriptor instances chosen as target
    * and sorted as a pipeline.
    */
-  DatanodeDescriptor[] chooseTarget(FSInodeInfo srcInode,
+  DatanodeDescriptor[] chooseTarget(BlockCollection srcInode,
                                     int numOfReplicas,
                                     DatanodeDescriptor writer,
                                     List chosenNodes,
                                     HashMap excludedNodes,
                                     long blocksize) {
-    return chooseTarget(srcInode.getFullPathName(), numOfReplicas, writer,
+    return chooseTarget(srcInode.getName(), numOfReplicas, writer,
                         chosenNodes, excludedNodes, blocksize);
   }
 
@@ -159,7 +158,7 @@
                    listed in the previous parameter.
    * @return the replica that is the best candidate for deletion
    */
-  abstract public DatanodeDescriptor chooseReplicaToDelete(FSInodeInfo srcInode,
+  abstract public DatanodeDescriptor chooseReplicaToDelete(BlockCollection srcInode,
                                       Block block,
                                       short replicationFactor,
                                       Collection existingReplicas,

Modified: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java?rev=1335305&r1=1335304&r2=1335305&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java (original)
+++ hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java Tue May 8 00:08:12 2012
@@ -33,7 +33,6 @@ import org.apache.hadoop.hdfs.protocol.D
 import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import org.apache.hadoop.hdfs.protocol.LocatedBlock;
 import org.apache.hadoop.hdfs.server.namenode.FSClusterStats;
-import org.apache.hadoop.hdfs.server.namenode.FSInodeInfo;
 import org.apache.hadoop.net.NetworkTopology;
 import org.apache.hadoop.net.Node;
 import org.apache.hadoop.net.NodeBase;
@@ -547,7 +546,7 @@ public class BlockPlacementPolicyDefault
   }
 
   @Override
-  public DatanodeDescriptor chooseReplicaToDelete(FSInodeInfo inode,
+  public DatanodeDescriptor chooseReplicaToDelete(BlockCollection inode,
                                                   Block block,
                                                   short replicationFactor,
                                                   Collection first,

Modified: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlocksMap.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlocksMap.java?rev=1335305&r1=1335304&r2=1335305&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlocksMap.java (original)
+++ hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlocksMap.java Tue May 8 00:08:12 2012
@@ -20,7 +20,6 @@ package org.apache.hadoop.hdfs.server.bl
 import java.util.Iterator;
 
 import org.apache.hadoop.hdfs.protocol.Block;
-import org.apache.hadoop.hdfs.server.namenode.INodeFile;
 import org.apache.hadoop.hdfs.util.GSet;
 import org.apache.hadoop.hdfs.util.LightWeightGSet;
 
@@ -93,7 +92,7 @@ class BlocksMap {
     blocks = null;
   }
 
-  INodeFile getINode(Block b) {
+  BlockCollection getINode(Block b) {
     BlockInfo info = blocks.get(b);
     return (info != null) ? info.getINode() : null;
   }
 
@@ -101,7 +100,7 @@ class BlocksMap {
   /**
    * Add block b belonging to the specified file inode to the map.
    */
-  BlockInfo addINode(BlockInfo b, INodeFile iNode) {
+  BlockInfo addINode(BlockInfo b, BlockCollection iNode) {
     BlockInfo info = blocks.get(b);
     if (info != b) {
       info = b;
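BlocksMap now resolves a block to its owning BlockCollection rather than to an INodeFile, which is what allows FSInodeInfo, deleted below, to disappear entirely. A small usage sketch (hypothetical code, assuming the interface sketch near the top of this mail):

    // Resolving a block's owner through BlocksMap after this change:
    BlockCollection owner = blocksMap.getINode(block); // interface, not INodeFile
    if (owner != null && blockManager.countNodes(block).liveReplicas() == 0) {
      // e.g. report a missing block by name, with no namenode import needed
      System.out.println("missing block of " + owner.getName());
    }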
Modified: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSInodeInfo.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSInodeInfo.java?rev=1335305&r1=1335304&r2=1335305&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSInodeInfo.java (original)
+++ hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSInodeInfo.java Tue May 8 00:08:12 2012
@@ -1,38 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.hdfs.server.namenode;
-
-import org.apache.hadoop.classification.InterfaceAudience;
-
-/**
- * This interface is used used the pluggable block placement policy
- * to expose a few characteristics of an Inode.
- */
-@InterfaceAudience.Private
-public interface FSInodeInfo {
-
-  /**
-   * a string representation of an inode
-   *
-   * @return the full pathname (from root) that this inode represents
-   */
-
-  public String getFullPathName() ;
-}
-
-

Modified: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java?rev=1335305&r1=1335304&r2=1335305&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java (original)
+++ hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java Tue May 8 00:08:12 2012
@@ -2840,7 +2840,7 @@ public class FSNamesystem implements Nam
     if (storedBlock == null) {
       throw new IOException("Block (=" + lastblock + ") not found");
     }
-    INodeFile iFile = storedBlock.getINode();
+    INodeFile iFile = (INodeFile) storedBlock.getINode();
     if (!iFile.isUnderConstruction() || storedBlock.isComplete()) {
       throw new IOException("Unexpected block (=" + lastblock
                             + ") since the file (=" + iFile.getLocalName()
@@ -4394,7 +4394,7 @@ public class FSNamesystem implements Nam
     }
 
     // check file inode
-    INodeFile file = storedBlock.getINode();
+    INodeFile file = (INodeFile) storedBlock.getINode();
     if (file==null || !file.isUnderConstruction()) {
       throw new IOException("The file " + storedBlock +
           " belonged to does not exist or it is not under construction.");
@@ -4706,7 +4706,7 @@ public class FSNamesystem implements Nam
     while (blkIterator.hasNext()) {
       Block blk = blkIterator.next();
-      INode inode = blockManager.getINode(blk);
+      INode inode = (INodeFile) blockManager.getINode(blk);
       skip++;
       if (inode != null && blockManager.countNodes(blk).liveReplicas() == 0) {
         String src = FSDirectory.getFullPathName(inode);

Modified: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java?rev=1335305&r1=1335304&r2=1335305&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java (original)
+++ hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java Tue May 8 00:08:12 2012
@@ -38,7 +38,7 @@ import com.google.common.primitives.Sign
  * directory inodes.
  */
 @InterfaceAudience.Private
-abstract class INode implements Comparable, FSInodeInfo {
+abstract class INode implements Comparable {
   /*
    * The inode name is in java UTF8 encoding;
    * The name in HdfsFileStatus should keep the same encoding as this.
@@ -264,7 +264,6 @@ abstract class INode implements Comparab
     this.name = name;
   }
 
-  @Override
   public String getFullPathName() {
     // Get the full path name of this inode.
     return FSDirectory.getFullPathName(this);

Modified: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java?rev=1335305&r1=1335304&r2=1335305&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java (original)
+++ hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java Tue May 8 00:08:12 2012
@@ -20,15 +20,18 @@ package org.apache.hadoop.hdfs.server.na
 import java.io.IOException;
 import java.util.List;
 
+import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.fs.permission.FsAction;
 import org.apache.hadoop.fs.permission.FsPermission;
 import org.apache.hadoop.fs.permission.PermissionStatus;
 import org.apache.hadoop.hdfs.protocol.Block;
 import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo;
 import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoUnderConstruction;
+import org.apache.hadoop.hdfs.server.blockmanagement.BlockCollection;
 
 /** I-node for closed file. */
-public class INodeFile extends INode {
+@InterfaceAudience.Private
+public class INodeFile extends INode implements BlockCollection {
   static final FsPermission UMASK = FsPermission.createImmutable((short)0111);
 
   //Number of bits for Block size
@@ -167,6 +170,12 @@ public class INodeFile extends INode {
     blocks = null;
     return 1;
   }
+
+  public String getName() {
+    // Get the full path name of this inode.
+    return getFullPathName();
+  }
+
 
   @Override
   long[] computeContentSummary(long[] summary) {

Modified: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFileUnderConstruction.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFileUnderConstruction.java?rev=1335305&r1=1335304&r2=1335305&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFileUnderConstruction.java (original)
+++ hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFileUnderConstruction.java Tue May 8 00:08:12 2012
@@ -25,13 +25,15 @@ import org.apache.hadoop.hdfs.server.blo
 import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoUnderConstruction;
 import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.BlockUCState;
+import org.apache.hadoop.hdfs.server.blockmanagement.MutableBlockCollection;
 
 import com.google.common.base.Joiner;
 
 /**
  * I-node for file being written.
 */
-public class INodeFileUnderConstruction extends INodeFile {
+public class INodeFileUnderConstruction extends INodeFile
+    implements MutableBlockCollection {
   private String clientName;          // lease holder
   private final String clientMachine;
   private final DatanodeDescriptor clientNode; // if client is a cluster node too.
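The namenode side of the contract is small: INodeFile satisfies BlockCollection almost entirely with methods it already had, adding only getName(), and INodeFileUnderConstruction additionally picks up MutableBlockCollection. Condensed from the two hunks above (pre-existing bodies elided; the mutator names are assumptions carried over from the interface sketch at the top of this mail):

    @InterfaceAudience.Private
    public class INodeFile extends INode implements BlockCollection {
      public String getName() {
        // Get the full path name of this inode.
        return getFullPathName();
      }
      // getBlocks(), getReplication(), getPreferredBlockSize(), getLastBlock()
      // and numBlocks() already existed and now implement the interface.
    }

    public class INodeFileUnderConstruction extends INodeFile
        implements MutableBlockCollection {
      // setBlock(...) and setLastBlock(...) were likewise already present.
    }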
Modified: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeJspHelper.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeJspHelper.java?rev=1335305&r1=1335304&r2=1335305&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeJspHelper.java (original)
+++ hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeJspHelper.java Tue May 8 00:08:12 2012
@@ -734,7 +734,7 @@ class NamenodeJspHelper {
         this.inode = null;
       } else {
         this.block = new Block(blockId);
-        this.inode = blockManager.getINode(block);
+        this.inode = (INodeFile) blockManager.getINode(block);
       }
     }

Modified: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDNFencing.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDNFencing.java?rev=1335305&r1=1335304&r2=1335305&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDNFencing.java (original)
+++ hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDNFencing.java Tue May 8 00:08:12 2012
@@ -46,9 +46,9 @@ import org.apache.hadoop.hdfs.server.blo
 import org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy;
 import org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault;
 import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor;
+import org.apache.hadoop.hdfs.server.blockmanagement.BlockCollection;
 import org.apache.hadoop.hdfs.server.datanode.DataNode;
 import org.apache.hadoop.hdfs.server.datanode.DataNodeTestUtils;
-import org.apache.hadoop.hdfs.server.namenode.FSInodeInfo;
 import org.apache.hadoop.hdfs.server.namenode.FSNamesystem;
 import org.apache.hadoop.hdfs.server.namenode.NameNode;
 import org.apache.hadoop.hdfs.server.namenode.NameNodeAdapter;
@@ -585,7 +585,7 @@ public class TestDNFencing {
     }
 
     @Override
-    public DatanodeDescriptor chooseReplicaToDelete(FSInodeInfo inode,
+    public DatanodeDescriptor chooseReplicaToDelete(BlockCollection inode,
         Block block, short replicationFactor,
         Collection first, Collection second) {
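The TestDNFencing hunk shows the knock-on effect for pluggable placement policies: any override of chooseReplicaToDelete now receives a BlockCollection. A minimal sketch of such an override (hypothetical policy class; generic parameters added for compilability where the mail shows raw Collection types):

    import java.util.Collection;

    public class RackAwareDeleterPolicy extends BlockPlacementPolicyDefault {
      @Override
      public DatanodeDescriptor chooseReplicaToDelete(BlockCollection inode,
          Block block, short replicationFactor,
          Collection<DatanodeDescriptor> first,
          Collection<DatanodeDescriptor> second) {
        // Per the BlockPlacementPolicy javadoc above, "first" holds replicas
        // on racks with at least two replicas, so deleting from it preserves
        // rack diversity; callers only invoke this when a replica must go.
        Collection<DatanodeDescriptor> candidates =
            first.isEmpty() ? second : first;
        return candidates.iterator().next();
      }
    }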