Subject: svn commit: r1190492 - in /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs: ./ src/main/java/ src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ src/main/java/org/apache/hadoop/hdfs/server/namenode/ src/test/java/org/apache...
Date: Fri, 28 Oct 2011 18:29:48 -0000
To: hdfs-commits@hadoop.apache.org
From: szetszwo@apache.org

Author: szetszwo
Date: Fri Oct 28 18:29:48 2011
New Revision: 1190492

URL: http://svn.apache.org/viewvc?rev=1190492&view=rev
Log:
svn merge -c 1190491 from trunk for HDFS-2493.
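The merged change, HDFS-2493 ("Remove reference to FSNamesystem in blockmanagement classes"), breaks the compile-time dependency of the blockmanagement package on the concrete FSNamesystem class: BlockManager and its helpers are now constructed against the narrower Namesystem and FSClusterStats abstractions, both of which FSNamesystem already provides, so it simply passes itself twice. The following is an illustrative sketch only, not code from this commit; the interfaces are reduced to a minimum and the real types live in org.apache.hadoop.hdfs.server.namenode:

    class WiringSketch {
      // Simplified stand-ins for the real interfaces; only a method that appears
      // in the diffs below is shown, everything else is elided.
      interface Namesystem { boolean isInSafeMode(); }
      interface FSClusterStats { /* members not shown in this diff */ }

      // Stand-in for BlockManager: it now depends on the two interfaces, not on FSNamesystem.
      static class BlockManager {
        BlockManager(Namesystem namesystem, FSClusterStats stats) { }
      }

      // Stand-in for FSNamesystem: it provides both roles and passes itself for each,
      // which is why the real diff below reads "new BlockManager(this, this, conf)".
      static class FSNamesystem implements Namesystem, FSClusterStats {
        final BlockManager blockManager = new BlockManager(this, this);
        public boolean isInSafeMode() { return false; }
      }

      public static void main(String[] args) {
        System.out.println(new FSNamesystem().blockManager != null);
      }
    }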
Modified:
    hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/   (props changed)
    hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
    hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/   (props changed)
    hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
    hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
    hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStatistics.java
    hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
    hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HeartbeatManager.java
    hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
    hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDatanodeReport.java
    hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java

Propchange: hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Fri Oct 28 18:29:48 2011
@@ -1,4 +1,4 @@
-/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs:1161777,1161781,1162188,1162421,1162491,1162499,1162613,1162928,1162954,1162979,1163050,1163069,1163081,1163490,1163768,1164255,1164301,1164339,1166402,1167383,1167662,1170085,1170379,1170459,1170996,1171136,1171297,1171379,1171611,1172916,1173402,1173468,1175113,1176178,1176550,1176719,1176729,1176733,1177100,1177161,1177487,1177531,1177757,1177859,1177864,1177905,1179169,1179856,1179861,1180757,1183081,1183098,1183175,1183554,1186508,1187140,1189028,1189355,1189360,1189546,1189932,1189982,1190077
+/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs:1161777,1161781,1162188,1162421,1162491,1162499,1162613,1162928,1162954,1162979,1163050,1163069,1163081,1163490,1163768,1164255,1164301,1164339,1166402,1167383,1167662,1170085,1170379,1170459,1170996,1171136,1171297,1171379,1171611,1172916,1173402,1173468,1175113,1176178,1176550,1176719,1176729,1176733,1177100,1177161,1177487,1177531,1177757,1177859,1177864,1177905,1179169,1179856,1179861,1180757,1183081,1183098,1183175,1183554,1186508,1187140,1189028,1189355,1189360,1189546,1189932,1189982,1190077,1190491
 /hadoop/core/branches/branch-0.19/hdfs:713112
 /hadoop/hdfs/branches/HDFS-1052:987665-1095512
 /hadoop/hdfs/branches/HDFS-265:796829-820463

Modified: hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt?rev=1190492&r1=1190491&r2=1190492&view=diff
==============================================================================
--- hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt (original)
+++ hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt Fri Oct 28 18:29:48 2011
@@ -744,6 +744,9 @@ Release 0.23.0 - Unreleased
     HDFS-2371. Refactor BlockSender.java for better readability. (suresh)
 
+    HDFS-2493. Remove reference to FSNamesystem in blockmanagement classes.
+    (szetszwo)
+
   OPTIMIZATIONS
 
     HDFS-1458. Improve checkpoint performance by avoiding unnecessary image

Propchange: hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Fri Oct 28 18:29:48 2011
@@ -1,4 +1,4 @@
-/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java:1161777,1161781,1162188,1162421,1162491,1162499,1162613,1162928,1162954,1162979,1163050,1163069,1163081,1163490,1163768,1164255,1164301,1164339,1166402,1167383,1167662,1170085,1170379,1170459,1170996,1171136,1171297,1171379,1171611,1172916,1173402,1173468,1175113,1176178,1176550,1176719,1176729,1176733,1177100,1177161,1177487,1177531,1177757,1177859,1177864,1177905,1179169,1179856,1179861,1180757,1183081,1183098,1183175,1183554,1186508,1187140,1189028,1189355,1189360,1189546,1189932,1189982,1190077
+/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java:1161777,1161781,1162188,1162421,1162491,1162499,1162613,1162928,1162954,1162979,1163050,1163069,1163081,1163490,1163768,1164255,1164301,1164339,1166402,1167383,1167662,1170085,1170379,1170459,1170996,1171136,1171297,1171379,1171611,1172916,1173402,1173468,1175113,1176178,1176550,1176719,1176729,1176733,1177100,1177161,1177487,1177531,1177757,1177859,1177864,1177905,1179169,1179856,1179861,1180757,1183081,1183098,1183175,1183554,1186508,1187140,1189028,1189355,1189360,1189546,1189932,1189982,1190077,1190491
 /hadoop/core/branches/branch-0.19/hdfs/src/java:713112
 /hadoop/core/trunk/src/hdfs:776175-785643,785929-786278
 /hadoop/hdfs/branches/HDFS-1052/src/java:987665-1095512

Modified: hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java?rev=1190492&r1=1190491&r2=1190492&view=diff
==============================================================================
--- hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java (original)
+++ hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java Fri Oct 28 18:29:48 2011
@@ -53,7 +53,7 @@ import org.apache.hadoop.hdfs.security.t
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.BlockUCState;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.ReplicaState;
 import org.apache.hadoop.hdfs.server.common.Util;
-import org.apache.hadoop.hdfs.server.namenode.FSNamesystem;
+import org.apache.hadoop.hdfs.server.namenode.FSClusterStats;
 import org.apache.hadoop.hdfs.server.namenode.INode;
 import org.apache.hadoop.hdfs.server.namenode.INodeFile;
 import org.apache.hadoop.hdfs.server.namenode.INodeFileUnderConstruction;
@@ -70,8 +70,6 @@ import com.google.common.annotations.Vis
 
 /**
  * Keeps information related to the blocks stored in the Hadoop cluster.
- * This class is a helper class for {@link FSNamesystem} and requires several
- * methods to be called with lock held on {@link FSNamesystem}.
  */
 @InterfaceAudience.Private
 public class BlockManager {
@@ -174,15 +172,16 @@ public class BlockManager {
   /** for block replicas placement */
   private BlockPlacementPolicy blockplacement;
 
-  public BlockManager(FSNamesystem fsn, Configuration conf) throws IOException {
-    namesystem = fsn;
-    datanodeManager = new DatanodeManager(this, fsn, conf);
+  public BlockManager(final Namesystem namesystem, final FSClusterStats stats,
+      final Configuration conf) throws IOException {
+    this.namesystem = namesystem;
+    datanodeManager = new DatanodeManager(this, namesystem, conf);
     heartbeatManager = datanodeManager.getHeartbeatManager();
     invalidateBlocks = new InvalidateBlocks(datanodeManager);
 
     blocksMap = new BlocksMap(DEFAULT_MAP_LOAD_FACTOR);
     blockplacement = BlockPlacementPolicy.getInstance(
-        conf, fsn, datanodeManager.getNetworkTopology());
+        conf, stats, datanodeManager.getNetworkTopology());
     pendingReplications = new PendingReplicationBlocks(conf.getInt(
       DFSConfigKeys.DFS_NAMENODE_REPLICATION_PENDING_TIMEOUT_SEC_KEY,
       DFSConfigKeys.DFS_NAMENODE_REPLICATION_PENDING_TIMEOUT_SEC_DEFAULT) * 1000L);
@@ -2556,7 +2555,7 @@ public class BlockManager {
 
     workFound = this.computeReplicationWork(blocksToProcess);
 
-    // Update FSNamesystemMetrics counters
+    // Update counters
     namesystem.writeLock();
     try {
       this.updateState();
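Note that the placement policy now receives an FSClusterStats rather than the whole namesystem: BlockPlacementPolicy.getInstance(conf, stats, topology) above only needs a source of cluster-load figures. The FSClusterStats interface itself is not part of this diff; as a rough sketch (the method name is an assumption based on trunk at the time, not something shown in this mail), it amounts to:

    // Rough sketch only; the real interface lives in org.apache.hadoop.hdfs.server.namenode.
    interface FSClusterStats {
      // Assumed member: an aggregate load figure the placement policy can consult to avoid
      // busy datanodes. Only this narrow view is needed, not all of FSNamesystem.
      int getTotalLoad();
    }

One practical consequence is that placement decisions can be exercised with a small stub of this interface instead of a mocked FSNamesystem.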
Modified: hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java?rev=1190492&r1=1190491&r2=1190492&view=diff
==============================================================================
--- hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java (original)
+++ hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java Fri Oct 28 18:29:48 2011
@@ -50,8 +50,8 @@ import org.apache.hadoop.hdfs.protocol.L
 import org.apache.hadoop.hdfs.protocol.UnregisteredNodeException;
 import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor.BlockTargetPair;
 import org.apache.hadoop.hdfs.server.common.Util;
-import org.apache.hadoop.hdfs.server.namenode.FSNamesystem;
 import org.apache.hadoop.hdfs.server.namenode.NameNode;
+import org.apache.hadoop.hdfs.server.namenode.Namesystem;
 import org.apache.hadoop.hdfs.server.protocol.BalancerBandwidthCommand;
 import org.apache.hadoop.hdfs.server.protocol.BlockCommand;
 import org.apache.hadoop.hdfs.server.protocol.BlockRecoveryCommand;
@@ -78,7 +78,7 @@ import org.apache.hadoop.util.Reflection
 public class DatanodeManager {
   static final Log LOG = LogFactory.getLog(DatanodeManager.class);
 
-  private final FSNamesystem namesystem;
+  private final Namesystem namesystem;
   private final BlockManager blockManager;
   private final HeartbeatManager heartbeatManager;
 
@@ -124,12 +124,12 @@ public class DatanodeManager {
   final int blockInvalidateLimit;
 
   DatanodeManager(final BlockManager blockManager,
-      final FSNamesystem namesystem, final Configuration conf
+      final Namesystem namesystem, final Configuration conf
       ) throws IOException {
     this.namesystem = namesystem;
     this.blockManager = blockManager;
 
-    this.heartbeatManager = new HeartbeatManager(namesystem, conf);
+    this.heartbeatManager = new HeartbeatManager(namesystem, blockManager, conf);
 
     this.hostsReader = new HostsFileReader(
         conf.get(DFSConfigKeys.DFS_HOSTS, ""),
@@ -163,7 +163,8 @@ public class DatanodeManager {
   private Daemon decommissionthread = null;
 
   void activate(final Configuration conf) {
-    this.decommissionthread = new Daemon(new DecommissionManager(namesystem).new Monitor(
+    final DecommissionManager dm = new DecommissionManager(namesystem, blockManager);
+    this.decommissionthread = new Daemon(dm.new Monitor(
         conf.getInt(DFSConfigKeys.DFS_NAMENODE_DECOMMISSION_INTERVAL_KEY,
                     DFSConfigKeys.DFS_NAMENODE_DECOMMISSION_INTERVAL_DEFAULT),
         conf.getInt(DFSConfigKeys.DFS_NAMENODE_DECOMMISSION_NODES_PER_INTERVAL_KEY,

Modified: hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStatistics.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStatistics.java?rev=1190492&r1=1190491&r2=1190492&view=diff
==============================================================================
--- hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStatistics.java (original)
+++ hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStatistics.java Fri Oct 28 18:29:48 2011
@@ -56,4 +56,7 @@ public interface DatanodeStatistics {
    * The block related entries are set to -1. */
   public long[] getStats();
+
+  /** @return the expired heartbeats */
+  public int getExpiredHeartbeats();
 }
\ No newline at end of file
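The new DatanodeStatistics.getExpiredHeartbeats() obliges every implementation to keep (or derive) a count of heartbeats that expired, i.e. of datanodes declared dead by the heartbeat monitor. HeartbeatManager does this with a plain int guarded by its synchronized Stats object, as its diff further below shows; a minimal stand-alone equivalent, given purely as an illustration and not taken from this commit, looks like:

    import java.util.concurrent.atomic.AtomicInteger;

    // Toy tracker with the same contract as the new interface method.
    class ExpiredHeartbeatTracker {
      private final AtomicInteger expiredHeartbeats = new AtomicInteger();

      // Called whenever a datanode is declared dead because its heartbeat expired.
      void incrExpiredHeartbeats() {
        expiredHeartbeats.incrementAndGet();
      }

      // Mirrors DatanodeStatistics#getExpiredHeartbeats() from the hunk above.
      int getExpiredHeartbeats() {
        return expiredHeartbeats.get();
      }
    }

On the consumer side, FSNamesystem reaches the value through blockManager.getDatanodeManager().getDatanodeStatistics(), as its diff further down shows.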
Modified: hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java?rev=1190492&r1=1190491&r2=1190492&view=diff
==============================================================================
--- hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java (original)
+++ hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java Fri Oct 28 18:29:48 2011
@@ -23,7 +23,7 @@ import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
-import org.apache.hadoop.hdfs.server.namenode.FSNamesystem;
+import org.apache.hadoop.hdfs.server.namenode.Namesystem;
 
 /**
  * Manage node decommissioning.
@@ -33,10 +33,13 @@ import org.apache.hadoop.hdfs.server.nam
 class DecommissionManager {
   static final Log LOG = LogFactory.getLog(DecommissionManager.class);
 
-  private final FSNamesystem fsnamesystem;
+  private final Namesystem namesystem;
+  private final BlockManager blockmanager;
 
-  DecommissionManager(final FSNamesystem namesystem) {
-    this.fsnamesystem = namesystem;
+  DecommissionManager(final Namesystem namesystem,
+      final BlockManager blockmanager) {
+    this.namesystem = namesystem;
+    this.blockmanager = blockmanager;
   }
 
   /** Periodically check decommission status.
   */
@@ -61,12 +64,12 @@ class DecommissionManager {
    */
   @Override
   public void run() {
-    for(; fsnamesystem.isRunning(); ) {
-      fsnamesystem.writeLock();
+    for(; namesystem.isRunning(); ) {
+      namesystem.writeLock();
       try {
         check();
       } finally {
-        fsnamesystem.writeUnlock();
+        namesystem.writeUnlock();
       }
 
       try {
@@ -78,7 +81,7 @@ class DecommissionManager {
   }
 
   private void check() {
-    final DatanodeManager dm = fsnamesystem.getBlockManager().getDatanodeManager();
+    final DatanodeManager dm = blockmanager.getDatanodeManager();
     int count = 0;
     for(Map.Entry entry : dm.getDatanodeCyclicIteration(firstkey)) {
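The Monitor above is one of two background daemons touched by this change (the heartbeat monitor below is the other). Both follow the same pattern: loop while the namesystem is running, take the namesystem write lock around each check, and sleep between rounds; after HDFS-2493 they need only the narrow Namesystem locking surface plus a direct BlockManager reference, rather than all of FSNamesystem. A generic, self-contained sketch of that pattern (not code from this commit; the interface below merely stands in for org.apache.hadoop.hdfs.server.namenode.Namesystem and the sleep interval is arbitrary):

    interface NamesystemView {           // stand-in for the real Namesystem interface
      boolean isRunning();
      void writeLock();
      void writeUnlock();
    }

    class PeriodicCheckSketch implements Runnable {
      private final NamesystemView namesystem;
      private final long sleepMillis;

      PeriodicCheckSketch(NamesystemView namesystem, long sleepMillis) {
        this.namesystem = namesystem;
        this.sleepMillis = sleepMillis;
      }

      @Override
      public void run() {
        while (namesystem.isRunning()) {
          namesystem.writeLock();        // cluster state is mutated only under the write lock
          try {
            check();
          } finally {
            namesystem.writeUnlock();    // released even if check() throws
          }
          try {
            Thread.sleep(sleepMillis);
          } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return;
          }
        }
      }

      void check() { /* decommission or heartbeat work goes here */ }
    }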
Modified: hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HeartbeatManager.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HeartbeatManager.java?rev=1190492&r1=1190491&r2=1190492&view=diff
==============================================================================
--- hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HeartbeatManager.java (original)
+++ hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HeartbeatManager.java Fri Oct 28 18:29:48 2011
@@ -27,7 +27,7 @@ import org.apache.hadoop.hdfs.DFSConfigK
 import org.apache.hadoop.hdfs.DFSUtil;
 import org.apache.hadoop.hdfs.protocol.DatanodeID;
 import org.apache.hadoop.hdfs.server.common.Util;
-import org.apache.hadoop.hdfs.server.namenode.FSNamesystem;
+import org.apache.hadoop.hdfs.server.namenode.Namesystem;
 import org.apache.hadoop.util.Daemon;
 
 /**
@@ -55,14 +55,17 @@ class HeartbeatManager implements Datano
 
   /** Heartbeat monitor thread */
   private final Daemon heartbeatThread = new Daemon(new Monitor());
 
-  final FSNamesystem namesystem;
+  final Namesystem namesystem;
+  final BlockManager blockManager;
 
-  HeartbeatManager(final FSNamesystem namesystem, final Configuration conf) {
+  HeartbeatManager(final Namesystem namesystem, final BlockManager blockManager,
+      final Configuration conf) {
     this.heartbeatRecheckInterval = conf.getInt(
         DFSConfigKeys.DFS_NAMENODE_HEARTBEAT_RECHECK_INTERVAL_KEY,
         DFSConfigKeys.DFS_NAMENODE_HEARTBEAT_RECHECK_INTERVAL_DEFAULT); // 5 minutes
     this.namesystem = namesystem;
+    this.blockManager = blockManager;
   }
 
   void activate(Configuration conf) {
@@ -136,6 +139,11 @@ class HeartbeatManager implements Datano
                        getBlockPoolUsed()};
   }
 
+  @Override
+  public synchronized int getExpiredHeartbeats() {
+    return stats.expiredHeartbeats;
+  }
+
   synchronized void register(final DatanodeDescriptor d) {
     if (!datanodes.contains(d)) {
       addDatanode(d);
@@ -191,7 +199,7 @@ class HeartbeatManager implements Datano
    * effect causes more datanodes to be declared dead. */
   void heartbeatCheck() {
-    final DatanodeManager dm = namesystem.getBlockManager().getDatanodeManager();
+    final DatanodeManager dm = blockManager.getDatanodeManager();
     // It's OK to check safe mode w/o taking the lock here, we re-check
     // for safe mode after taking the lock before removing a datanode.
     if (namesystem.isInSafeMode()) {
@@ -204,7 +212,7 @@ class HeartbeatManager implements Datano
       synchronized(this) {
         for (DatanodeDescriptor d : datanodes) {
           if (dm.isDatanodeDead(d)) {
-            namesystem.incrExpiredHeartbeats();
+            stats.incrExpiredHeartbeats();
             dead = d;
             break;
           }
         }
@@ -244,8 +252,7 @@ class HeartbeatManager implements Datano
           heartbeatCheck();
           lastHeartbeatCheck = now;
         }
-        if (namesystem.getBlockManager().shouldUpdateBlockKey(
-            now - lastBlockKeyUpdate)) {
+        if (blockManager.shouldUpdateBlockKey(now - lastBlockKeyUpdate)) {
          synchronized(HeartbeatManager.this) {
            for(DatanodeDescriptor d : datanodes) {
              d.needKeyUpdate = true;
@@ -274,6 +281,8 @@ class HeartbeatManager implements Datano
     private long blockPoolUsed = 0L;
     private int xceiverCount = 0;
 
+    private int expiredHeartbeats = 0;
+
     private void add(final DatanodeDescriptor node) {
       capacityUsed += node.getDfsUsed();
       blockPoolUsed += node.getBlockPoolUsed();
@@ -297,5 +306,10 @@ class HeartbeatManager implements Datano
         capacityTotal -= node.getDfsUsed();
       }
     }
+
+    /** Increment expired heartbeat counter. */
+    private void incrExpiredHeartbeats() {
+      expiredHeartbeats++;
+    }
   }
 }

Modified: hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java?rev=1190492&r1=1190491&r2=1190492&view=diff
==============================================================================
--- hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java (original)
+++ hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java Fri Oct 28 18:29:48 2011
@@ -17,6 +17,45 @@
  */
 package org.apache.hadoop.hdfs.server.namenode;
 
+import static org.apache.hadoop.fs.CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_DEFAULT;
+import static org.apache.hadoop.fs.CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_BLOCK_SIZE_DEFAULT;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_BLOCK_SIZE_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_BYTES_PER_CHECKSUM_DEFAULT;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_BYTES_PER_CHECKSUM_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_CLIENT_WRITE_PACKET_SIZE_DEFAULT;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_CLIENT_WRITE_PACKET_SIZE_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_ACCESSTIME_PRECISION_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_DELEGATION_KEY_UPDATE_INTERVAL_DEFAULT;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_DELEGATION_KEY_UPDATE_INTERVAL_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_DELEGATION_TOKEN_MAX_LIFETIME_DEFAULT;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_DELEGATION_TOKEN_MAX_LIFETIME_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_DELEGATION_TOKEN_RENEW_INTERVAL_DEFAULT;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_DELEGATION_TOKEN_RENEW_INTERVAL_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_EDITS_DIR_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_MAX_OBJECTS_DEFAULT;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_MAX_OBJECTS_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_NAME_DIR_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_REPLICATION_MIN_DEFAULT;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_REPLICATION_MIN_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_REPL_QUEUE_THRESHOLD_PCT_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_RESOURCE_CHECK_INTERVAL_DEFAULT;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_RESOURCE_CHECK_INTERVAL_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_SAFEMODE_EXTENSION_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_SAFEMODE_MIN_DATANODES_DEFAULT;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_SAFEMODE_MIN_DATANODES_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_SAFEMODE_THRESHOLD_PCT_DEFAULT;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_SAFEMODE_THRESHOLD_PCT_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_UPGRADE_PERMISSION_DEFAULT;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_UPGRADE_PERMISSION_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_PERMISSIONS_ENABLED_DEFAULT;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_PERMISSIONS_ENABLED_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_PERMISSIONS_SUPERUSERGROUP_DEFAULT;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_PERMISSIONS_SUPERUSERGROUP_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_REPLICATION_DEFAULT;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_REPLICATION_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_SUPPORT_APPEND_DEFAULT;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_SUPPORT_APPEND_KEY;
 import static org.apache.hadoop.hdfs.server.common.Util.now;
 
 import java.io.BufferedWriter;
@@ -68,7 +107,6 @@ import org.apache.hadoop.fs.UnresolvedLi
 import org.apache.hadoop.fs.permission.FsAction;
 import org.apache.hadoop.fs.permission.FsPermission;
 import org.apache.hadoop.fs.permission.PermissionStatus;
-import static org.apache.hadoop.hdfs.DFSConfigKeys.*;
 import org.apache.hadoop.hdfs.DFSUtil;
 import org.apache.hadoop.hdfs.HdfsConfiguration;
 import org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException;
@@ -119,7 +157,6 @@ import org.apache.hadoop.ipc.Server;
 import org.apache.hadoop.metrics2.annotation.Metric;
 import org.apache.hadoop.metrics2.annotation.Metrics;
 import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
-import org.apache.hadoop.metrics2.lib.MutableCounterInt;
 import org.apache.hadoop.metrics2.util.MBeans;
 import org.apache.hadoop.net.NetworkTopology;
 import org.apache.hadoop.net.Node;
@@ -202,8 +239,6 @@ public class FSNamesystem implements Nam
   private UserGroupInformation fsOwner;
   private String supergroup;
   private PermissionStatus defaultPermission;
-  // FSNamesystemMetrics counter variables
-  @Metric private MutableCounterInt expiredHeartbeats;
 
   // Scan interval is not configurable.
   private static final long DELEGATION_TOKEN_REMOVER_SCAN_INTERVAL =
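The removed MutableCounterInt is the core of this part of the change: FSNamesystem no longer increments its own expired-heartbeat counter (the old incrExpiredHeartbeats() is removed further down); the count lives in HeartbeatManager's stats, and FSNamesystem publishes it through a @Metric-annotated getter, i.e. as a computed gauge. That is also why TestDatanodeReport switches from assertCounter to assertGauge at the end of this mail. A hedged, stand-alone sketch of that metrics2 idiom (names other than the @Metric annotation text are invented for illustration, and such a class would still need to be registered with the metrics system, as FSNamesystem is):

    import org.apache.hadoop.metrics2.annotation.Metric;
    import org.apache.hadoop.metrics2.annotation.Metrics;

    @Metrics(context = "dfs")
    class ExpiredHeartbeatSource {
      /** Narrow, hypothetical view of DatanodeStatistics used only by this example. */
      interface DatanodeStatisticsView {
        int getExpiredHeartbeats();
      }

      private final DatanodeStatisticsView stats;

      ExpiredHeartbeatSource(DatanodeStatisticsView stats) {
        this.stats = stats;
      }

      /** Exposed as a gauge: the value is recomputed from the block-management layer
       *  on every metrics snapshot instead of being incremented in place. */
      @Metric({"ExpiredHeartbeats", "Number of expired heartbeats"})
      public int getExpiredHeartbeats() {
        return stats.getExpiredHeartbeats();
      }
    }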
@@ -282,7 +317,7 @@ public class FSNamesystem implements Nam
     nnResourceChecker = new NameNodeResourceChecker(conf);
     checkAvailableResources();
     this.systemStart = now();
-    this.blockManager = new BlockManager(this, conf);
+    this.blockManager = new BlockManager(this, this, conf);
     this.datanodeStatistics = blockManager.getDatanodeManager().getDatanodeStatistics();
     this.fsLock = new ReentrantReadWriteLock(true); // fair locking
     setConfigurationParameters(conf);
@@ -2589,9 +2624,9 @@ public class FSNamesystem implements Nam
     return blockManager.getMissingBlocksCount();
   }
 
-  /** Increment expired heartbeat counter. */
-  public void incrExpiredHeartbeats() {
-    expiredHeartbeats.incr();
+  @Metric({"ExpiredHeartbeats", "Number of expired heartbeats"})
+  public int getExpiredHeartbeats() {
+    return datanodeStatistics.getExpiredHeartbeats();
   }
 
   /** @see ClientProtocol#getStats() */

Modified: hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDatanodeReport.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDatanodeReport.java?rev=1190492&r1=1190491&r2=1190492&view=diff
==============================================================================
--- hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDatanodeReport.java (original)
+++ hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDatanodeReport.java Fri Oct 28 18:29:48 2011
@@ -77,7 +77,7 @@ public class TestDatanodeReport extends
                    NUM_OF_DATANODES);
       Thread.sleep(5000);
 
-      assertCounter("ExpiredHeartbeats", 1, getMetrics("FSNamesystem"));
+      assertGauge("ExpiredHeartbeats", 1, getMetrics("FSNamesystem"));
     }finally {
       cluster.shutdown();
     }

Modified: hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java?rev=1190492&r1=1190491&r2=1190492&view=diff
==============================================================================
--- hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java (original)
+++ hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java Fri Oct 28 18:29:48 2011
@@ -80,7 +80,7 @@ public class TestBlockManager {
         "need to set a dummy value here so it assumes a multi-rack cluster");
     fsn = Mockito.mock(FSNamesystem.class);
     Mockito.doReturn(true).when(fsn).hasWriteLock();
-    bm = new BlockManager(fsn, conf);
+    bm = new BlockManager(fsn, fsn, conf);
   }
 
   private void addNodes(Iterable nodesToAdd) {
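A side effect of the new constructor signature, visible in TestBlockManager above, is that a test no longer has to give BlockManager the full FSNamesystem; the mocked FSNamesystem is simply passed for both parameters. Stubbing the two interfaces directly is a possibility the refactoring opens up but is not something this commit does; the untested sketch below only illustrates the shape such a setup could take:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.server.blockmanagement.BlockManager;
    import org.apache.hadoop.hdfs.server.namenode.FSClusterStats;
    import org.apache.hadoop.hdfs.server.namenode.Namesystem;
    import org.mockito.Mockito;

    public class BlockManagerWiringSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Namesystem ns = Mockito.mock(Namesystem.class);            // narrow lock/lifecycle view
        FSClusterStats stats = Mockito.mock(FSClusterStats.class);  // narrow load view
        // The post-HDFS-2493 signature: (Namesystem, FSClusterStats, Configuration).
        BlockManager bm = new BlockManager(ns, stats, conf);
        System.out.println(bm != null);
      }
    }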