Subject: svn commit: r1335791 [1/2] - in /hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs: ./ src/main/java/ src/main/java/org/apache/hadoop/hdfs/ src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ src/main/java/org/apache/hadoop/hdfs...
Date: Tue, 08 May 2012 21:58:05 -0000
To: hdfs-commits@hadoop.apache.org
From: szetszwo@apache.org
Reply-To: hdfs-dev@hadoop.apache.org
Message-Id: <20120508215807.9D6B02388A32@eris.apache.org>

Author: szetszwo
Date: Tue May 8 21:57:58 2012
New Revision: 1335791

URL: http://svn.apache.org/viewvc?rev=1335791&view=rev
Log:
Merge r1334158 through r1335790 from trunk.
Added:
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockCollection.java
      - copied unchanged from r1335790, hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockCollection.java
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/MutableBlockCollection.java
      - copied unchanged from r1335790, hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/MutableBlockCollection.java
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemAtHdfsRoot.java
      - copied unchanged from r1335790, hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemAtHdfsRoot.java
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsAtHdfsRoot.java
      - copied unchanged from r1335790, hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsAtHdfsRoot.java
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestRBWBlockInvalidation.java
      - copied unchanged from r1335790, hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestRBWBlockInvalidation.java

Removed:
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSInodeInfo.java

Modified:
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/   (props changed)
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/   (props changed)
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HdfsConfiguration.java
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HftpFileSystem.java
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/SocketCache.java
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstruction.java
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicy.java
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlocksMap.java
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/GetImageServlet.java
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFileUnderConstruction.java
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeJspHelper.java
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/TransferFsImage.java
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/BootstrapStandby.java
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/StandbyCheckpointer.java
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSck.java
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DelegationTokenFetcher.java
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/native/   (props changed)
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode/   (props changed)
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/   (props changed)
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary/   (props changed)
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs/   (props changed)
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemHdfs.java
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsHdfs.java
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestConnCache.java
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDataTransferKeepalive.java
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/DataNodeTestUtils.java
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetTestUtil.java
    hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDNFencing.java

Propchange: hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs:r1334158-1335790

Modified: hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt?rev=1335791&r1=1335790&r2=1335791&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt (original)
+++ hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt Tue May 8 21:57:58 2012
@@ -422,6 +422,19 @@ Release 2.0.0 - UNRELEASED
     HDFS-3303. Remove Writable implementation from RemoteEditLogManifest.
     (Brandon Li via szetszwo)
 
+    HDFS-2617. Replaced Kerberized SSL for image transfer and fsck
+    with SPNEGO-based solution. (jghoman, tucu, and atm via eli)
+
+    HDFS-3365. Enable users to disable socket caching in DFS client
+    configuration (todd)
+
+    HDFS-3375. Put client name in DataXceiver thread name for readBlock
+    and keepalive (todd)
+
+    HDFS-3363. Define BlockCollection and MutableBlockCollection interfaces
+    so that INodeFile and INodeFileUnderConstruction do not have to be used in
+    block management. (John George via szetszwo)
+
   OPTIMIZATIONS
 
     HDFS-3024. Improve performance of stringification in addStoredBlock (todd)
@@ -435,6 +448,8 @@ Release 2.0.0 - UNRELEASED
     HDFS-2476. More CPU efficient data structure for under-replicated,
     over-replicated, and invalidated blocks. (Tomasz Nykiel via todd)
 
+    HDFS-3378. Remove DFS_NAMENODE_SECONDARY_HTTPS_PORT_KEY and DEFAULT. (eli)
+
   BUG FIXES
 
     HDFS-2481. Unknown protocol: org.apache.hadoop.hdfs.protocol.ClientProtocol.
@@ -603,6 +618,9 @@ Release 2.0.0 - UNRELEASED
     HDFS-3357. DataXceiver reads from client socket with incorrect/no timeout
     (todd)
 
+    HDFS-3376. DFSClient fails to make connection to DN if there are many
+    unusable cached sockets (todd)
+
   BREAKDOWN OF HDFS-1623 SUBTASKS
 
     HDFS-2179. Add fencing framework and mechanisms for NameNode HA.
     (todd)

Propchange: hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java:r1334158-1335790

Modified: hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java?rev=1335791&r1=1335790&r2=1335791&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java (original)
+++ hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java Tue May 8 21:57:58 2012
@@ -99,8 +99,6 @@ public class DFSConfigKeys extends Commo
   public static final int DFS_NAMENODE_SAFEMODE_MIN_DATANODES_DEFAULT = 0;
   public static final String DFS_NAMENODE_SECONDARY_HTTP_ADDRESS_KEY = "dfs.namenode.secondary.http-address";
   public static final String DFS_NAMENODE_SECONDARY_HTTP_ADDRESS_DEFAULT = "0.0.0.0:50090";
-  public static final String DFS_NAMENODE_SECONDARY_HTTPS_PORT_KEY = "dfs.namenode.secondary.https-port";
-  public static final int DFS_NAMENODE_SECONDARY_HTTPS_PORT_DEFAULT = 50490;
   public static final String DFS_NAMENODE_CHECKPOINT_CHECK_PERIOD_KEY = "dfs.namenode.checkpoint.check.period";
   public static final long DFS_NAMENODE_CHECKPOINT_CHECK_PERIOD_DEFAULT = 60;
   public static final String DFS_NAMENODE_CHECKPOINT_PERIOD_KEY = "dfs.namenode.checkpoint.period";
@@ -329,10 +327,10 @@ public class DFSConfigKeys extends Commo
   public static final String DFS_DATANODE_USER_NAME_KEY = "dfs.datanode.kerberos.principal";
   public static final String DFS_NAMENODE_KEYTAB_FILE_KEY = "dfs.namenode.keytab.file";
   public static final String DFS_NAMENODE_USER_NAME_KEY = "dfs.namenode.kerberos.principal";
-  public static final String DFS_NAMENODE_KRB_HTTPS_USER_NAME_KEY = "dfs.namenode.kerberos.https.principal";
+  public static final String DFS_NAMENODE_INTERNAL_SPENGO_USER_NAME_KEY = "dfs.namenode.kerberos.internal.spnego.principal";
   public static final String DFS_SECONDARY_NAMENODE_KEYTAB_FILE_KEY = "dfs.secondary.namenode.keytab.file";
   public static final String DFS_SECONDARY_NAMENODE_USER_NAME_KEY = "dfs.secondary.namenode.kerberos.principal";
-  public static final String DFS_SECONDARY_NAMENODE_KRB_HTTPS_USER_NAME_KEY = "dfs.secondary.namenode.kerberos.https.principal";
+  public static final String DFS_SECONDARY_NAMENODE_INTERNAL_SPENGO_USER_NAME_KEY = "dfs.secondary.namenode.kerberos.internal.spnego.principal";
   public static final String DFS_NAMENODE_NAME_CACHE_THRESHOLD_KEY = "dfs.namenode.name.cache.threshold";
   public static final int DFS_NAMENODE_NAME_CACHE_THRESHOLD_DEFAULT = 10;

Modified: hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java?rev=1335791&r1=1335790&r2=1335791&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java (original)
+++ hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java Tue May 8 21:57:58 2012
@@ -864,7 +864,13 @@ public class DFSInputStream extends FSIn
     // Allow retry since there is no way of knowing whether the cached socket
     // is good until we actually use it.
     for (int retries = 0; retries <= nCachedConnRetry && fromCache; ++retries) {
-      Socket sock = socketCache.get(dnAddr);
+      Socket sock = null;
+      // Don't use the cache on the last attempt - it's possible that there
+      // are arbitrarily many unusable sockets in the cache, but we don't
+      // want to fail the read.
+      if (retries < nCachedConnRetry) {
+        sock = socketCache.get(dnAddr);
+      }
       if (sock == null) {
         fromCache = false;
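The hunk above reserves the final loop iteration for a freshly created socket, so a cache full of stale connections can never exhaust the retry budget. A self-contained sketch of the same pattern outside HDFS; SocketSupplier, openFresh and probe are hypothetical stand-ins, not HDFS API:

import java.net.Socket;

public class LastAttemptFreshSketch {
  /** Hypothetical supplier abstracting either the socket cache or a fresh connect. */
  interface SocketSupplier {
    Socket get() throws Exception;
  }

  static Socket acquire(SocketSupplier cache, SocketSupplier openFresh,
      int nCachedConnRetry) throws Exception {
    for (int retries = 0; retries <= nCachedConnRetry; ++retries) {
      Socket sock = null;
      // Mirror the diff: never consult the cache on the last attempt, since it
      // may hold arbitrarily many unusable sockets.
      if (retries < nCachedConnRetry) {
        sock = cache.get();
      }
      boolean fromCache = (sock != null);
      if (sock == null) {
        sock = openFresh.get();
      }
      if (probe(sock)) {
        return sock;              // usable, whether cached or fresh
      }
      sock.close();
      if (!fromCache) {
        // A fresh connection failed too; more retries will not help.
        throw new Exception("fresh connection unusable");
      }
    }
    throw new Exception("all cached sockets were stale");
  }

  /** Stand-in for "use the socket and find out"; here just a liveness check. */
  static boolean probe(Socket s) {
    return s != null && s.isConnected() && !s.isClosed();
  }
}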
Modified: hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HdfsConfiguration.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HdfsConfiguration.java?rev=1335791&r1=1335790&r2=1335791&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HdfsConfiguration.java (original)
+++ hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HdfsConfiguration.java Tue May 8 21:57:58 2012
@@ -81,7 +81,6 @@ public class HdfsConfiguration extends C
     deprecate("dfs.safemode.extension", DFSConfigKeys.DFS_NAMENODE_SAFEMODE_EXTENSION_KEY);
     deprecate("dfs.safemode.threshold.pct", DFSConfigKeys.DFS_NAMENODE_SAFEMODE_THRESHOLD_PCT_KEY);
     deprecate("dfs.secondary.http.address", DFSConfigKeys.DFS_NAMENODE_SECONDARY_HTTP_ADDRESS_KEY);
-    deprecate("dfs.secondary.https.port", DFSConfigKeys.DFS_NAMENODE_SECONDARY_HTTPS_PORT_KEY);
     deprecate("dfs.socket.timeout", DFSConfigKeys.DFS_CLIENT_SOCKET_TIMEOUT_KEY);
     deprecate("fs.checkpoint.dir", DFSConfigKeys.DFS_NAMENODE_CHECKPOINT_DIR_KEY);
     deprecate("fs.checkpoint.edits.dir", DFSConfigKeys.DFS_NAMENODE_CHECKPOINT_EDITS_DIR_KEY);

Modified: hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HftpFileSystem.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HftpFileSystem.java?rev=1335791&r1=1335790&r2=1335791&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HftpFileSystem.java (original)
+++ hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HftpFileSystem.java Tue May 8 21:57:58 2012
@@ -144,7 +144,7 @@ public class HftpFileSystem extends File
   }
 
   protected URI getNamenodeSecureUri(URI uri) {
-    return DFSUtil.createUri("https", getNamenodeSecureAddr(uri));
+    return DFSUtil.createUri("http", getNamenodeSecureAddr(uri));
   }
 
   @Override
@@ -247,7 +247,7 @@ public class HftpFileSystem extends File
         c = DelegationTokenFetcher.getDTfromRemote(nnHttpUrl, renewer);
       } catch (Exception e) {
         LOG.info("Couldn't get a delegation token from " + nnHttpUrl + 
-                 " using https.");
+                 " using http.");
         if(LOG.isDebugEnabled()) {
           LOG.debug("error was ", e);
         }
@@ -686,11 +686,11 @@ public class HftpFileSystem extends File
                           Configuration conf) throws IOException {
       // update the kerberos credentials, if they are coming from a keytab
       UserGroupInformation.getLoginUser().reloginFromKeytab();
-      // use https to renew the token
+      // use http to renew the token
       InetSocketAddress serviceAddr = SecurityUtil.getTokenServiceAddr(token);
       return 
         DelegationTokenFetcher.renewDelegationToken
-          (DFSUtil.createUri("https", serviceAddr).toString(),
+          (DFSUtil.createUri("http", serviceAddr).toString(),
           (Token<DelegationTokenIdentifier>) token);
     }
 
@@ -700,10 +700,10 @@ public class HftpFileSystem extends File
                           Configuration conf) throws IOException {
       // update the kerberos credentials, if they are coming from a keytab
       UserGroupInformation.getLoginUser().checkTGTAndReloginFromKeytab();
-      // use https to cancel the token
+      // use http to cancel the token
       InetSocketAddress serviceAddr = SecurityUtil.getTokenServiceAddr(token);
       DelegationTokenFetcher.cancelDelegationToken
-        (DFSUtil.createUri("https", serviceAddr).toString(), 
+        (DFSUtil.createUri("http", serviceAddr).toString(), 
         (Token<DelegationTokenIdentifier>) token);
     }
   }

Modified: hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/SocketCache.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/SocketCache.java?rev=1335791&r1=1335790&r2=1335791&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/SocketCache.java (original)
+++ hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/SocketCache.java Tue May 8 21:57:58 2012
@@ -47,6 +47,9 @@ class SocketCache {
   public SocketCache(int capacity) {
     multimap = LinkedListMultimap.create();
     this.capacity = capacity;
+    if (capacity <= 0) {
+      LOG.debug("SocketCache disabled in configuration.");
+    }
   }
 
   /**
@@ -55,6 +58,10 @@ class SocketCache {
    * @return  A socket with unknown state, possibly closed underneath. Or null.
    */
   public synchronized Socket get(SocketAddress remote) {
+    if (capacity <= 0) { // disabled
+      return null;
+    }
+
     List<Socket> socklist = multimap.get(remote);
     if (socklist == null) {
       return null;
@@ -76,6 +83,12 @@ class SocketCache {
    * @param sock socket not used by anyone.
    */
   public synchronized void put(Socket sock) {
+    if (capacity <= 0) {
+      // Cache disabled.
+      IOUtils.closeSocket(sock);
+      return;
+    }
+
     Preconditions.checkNotNull(sock);
 
     SocketAddress remoteAddr = sock.getRemoteSocketAddress();
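HDFS-3365 above keys the disabled state off a non-positive cache capacity: get() short-circuits to null and put() closes the socket instead of caching it. A minimal usage sketch, assuming the capacity is wired to the dfs.client.socketcache.capacity configuration key; that key name is an assumption, it is not shown in this part of the diff:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DisableSocketCacheSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // capacity <= 0 disables the cache: every get() returns null and every
    // put() closes the socket, so each read opens a fresh DataNode connection.
    conf.setInt("dfs.client.socketcache.capacity", 0);  // assumed key name
    FileSystem fs = FileSystem.get(conf);
    fs.open(new Path("/example")).close();              // hypothetical path
  }
}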
Modified: hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java?rev=1335791&r1=1335790&r2=1335791&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java (original)
+++ hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java Tue May 8 21:57:58 2012
@@ -22,18 +22,17 @@ import java.util.LinkedList
 
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.hdfs.protocol.Block;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.BlockUCState;
-import org.apache.hadoop.hdfs.server.namenode.INodeFile;
 import org.apache.hadoop.hdfs.util.LightWeightGSet;
 
 /**
  * BlockInfo class maintains for a given block
- * the {@link INodeFile} it is part of and datanodes where the replicas of
+ * the {@link BlockCollection} it is part of and datanodes where the replicas of
  * the block are stored.
  */
 @InterfaceAudience.Private
 public class BlockInfo extends Block implements
     LightWeightGSet.LinkedElement {
-  private INodeFile inode;
+  private BlockCollection inode;
 
   /** For implementing {@link LightWeightGSet.LinkedElement} interface */
   private LightWeightGSet.LinkedElement nextLinkedElement;
@@ -77,11 +76,11 @@ public class BlockInfo extends Block imp
     this.inode = from.inode;
   }
 
-  public INodeFile getINode() {
+  public BlockCollection getINode() {
     return inode;
   }
 
-  public void setINode(INodeFile inode) {
+  public void setINode(BlockCollection inode) {
     this.inode = inode;
   }

Modified: hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstruction.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstruction.java?rev=1335791&r1=1335790&r2=1335791&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstruction.java (original)
+++ hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstruction.java Tue May 8 21:57:58 2012
@@ -234,7 +234,7 @@ public class BlockInfoUnderConstruction
     blockRecoveryId = recoveryId;
     if (replicas.size() == 0) {
       NameNode.stateChangeLog.warn("BLOCK*"
-        + " INodeFileUnderConstruction.initLeaseRecovery:"
+        + " BlockInfoUnderConstruction.initLeaseRecovery:"
         + " No blocks found, lease removed.");
     }

Modified: hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java?rev=1335791&r1=1335790&r2=1335791&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java (original)
+++ hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java Tue May 8 21:57:58 2012
@@ -55,8 +55,6 @@ import org.apache.hadoop.hdfs.server.com
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.ReplicaState;
 import org.apache.hadoop.hdfs.server.common.Util;
 import org.apache.hadoop.hdfs.server.namenode.FSClusterStats;
-import org.apache.hadoop.hdfs.server.namenode.INodeFile;
-import org.apache.hadoop.hdfs.server.namenode.INodeFileUnderConstruction;
 import org.apache.hadoop.hdfs.server.namenode.NameNode;
 import org.apache.hadoop.hdfs.server.namenode.Namesystem;
 import org.apache.hadoop.hdfs.server.protocol.BlockCommand;
@@ -384,7 +382,7 @@ public class BlockManager {
                numReplicas.decommissionedReplicas();
 
     if (block instanceof BlockInfo) {
-      String fileName = ((BlockInfo)block).getINode().getFullPathName();
+      String fileName = ((BlockInfo)block).getINode().getName();
       out.print(fileName + ": ");
     }
     // l: == live:, d: == decommissioned c: == corrupt e: == excess
@@ -460,7 +458,7 @@ public class BlockManager {
    * @throws IOException if the block does not have at least a minimal number
    * of replicas reported from data-nodes.
    */
-  public boolean commitOrCompleteLastBlock(INodeFileUnderConstruction fileINode, 
+  public boolean commitOrCompleteLastBlock(MutableBlockCollection fileINode, 
       Block commitBlock) throws IOException {
     if(commitBlock == null)
       return false; // not committing, this is a block allocation retry
@@ -472,7 +470,7 @@ public class BlockManager {
     final boolean b = commitBlock((BlockInfoUnderConstruction)lastBlock, commitBlock);
     if(countNodes(lastBlock).liveReplicas() >= minReplication)
-      completeBlock(fileINode,fileINode.numBlocks()-1, false);
+      completeBlock(fileINode, fileINode.numBlocks()-1, false);
     return b;
   }
 
@@ -483,7 +481,7 @@ public class BlockManager {
    * @throws IOException if the block does not have at least a minimal number
    * of replicas reported from data-nodes.
    */
-  private BlockInfo completeBlock(final INodeFile fileINode,
+  private BlockInfo completeBlock(final MutableBlockCollection fileINode,
       final int blkIndex, boolean force) throws IOException {
     if(blkIndex < 0)
       return null;
@@ -516,7 +514,7 @@ public class BlockManager {
     return blocksMap.replaceBlock(completeBlock);
   }
 
-  private BlockInfo completeBlock(final INodeFile fileINode,
+  private BlockInfo completeBlock(final MutableBlockCollection fileINode,
      final BlockInfo block, boolean force) throws IOException {
     BlockInfo[] fileBlocks = fileINode.getBlocks();
     for(int idx = 0; idx < fileBlocks.length; idx++)
@@ -531,7 +529,7 @@ public class BlockManager {
    * regardless of whether enough replicas are present. This is necessary
    * when tailing edit logs as a Standby.
    */
-  public BlockInfo forceCompleteBlock(final INodeFile fileINode,
+  public BlockInfo forceCompleteBlock(final MutableBlockCollection fileINode,
       final BlockInfoUnderConstruction block) throws IOException {
     block.commitBlock(block);
     return completeBlock(fileINode, block, true);
@@ -552,7 +550,7 @@ public class BlockManager {
    * @return the last block locations if the block is partial or null otherwise
    */
   public LocatedBlock convertLastBlockToUnderConstruction(
-      INodeFileUnderConstruction fileINode) throws IOException {
+      MutableBlockCollection fileINode) throws IOException {
     BlockInfo oldBlock = fileINode.getLastBlock();
     if(oldBlock == null ||
        fileINode.getPreferredBlockSize() == oldBlock.getNumBytes())
@@ -923,7 +921,7 @@ public class BlockManager {
                             " does not exist. ");
     }
 
-    INodeFile inode = storedBlock.getINode();
+    BlockCollection inode = storedBlock.getINode();
     if (inode == null) {
       NameNode.stateChangeLog.info("BLOCK markBlockAsCorrupt: " +
                                    "block " + storedBlock +
@@ -1051,7 +1049,7 @@ public class BlockManager {
     int requiredReplication, numEffectiveReplicas;
     List<DatanodeDescriptor> containingNodes, liveReplicaNodes;
     DatanodeDescriptor srcNode;
-    INodeFile fileINode = null;
+    BlockCollection fileINode = null;
     int additionalReplRequired;
 
     int scheduledWork = 0;
@@ -1065,7 +1063,7 @@ public class BlockManager {
           // block should belong to a file
           fileINode = blocksMap.getINode(block);
          // abandoned block or block reopened for append
-          if(fileINode == null || fileINode.isUnderConstruction()) {
+          if(fileINode == null || fileINode instanceof MutableBlockCollection) {
            neededReplications.remove(block, priority); // remove from neededReplications
            neededReplications.decrementReplicationIndex(priority);
            continue;
@@ -1151,7 +1149,7 @@ public class BlockManager {
         // block should belong to a file
         fileINode = blocksMap.getINode(block);
         // abandoned block or block reopened for append
-        if(fileINode == null || fileINode.isUnderConstruction()) {
+        if(fileINode == null || fileINode instanceof MutableBlockCollection) {
           neededReplications.remove(block, priority); // remove from neededReplications
           rw.targets = null;
           neededReplications.decrementReplicationIndex(priority);
@@ -1804,7 +1802,8 @@ assert storedBlock.findDatanode(dn) < 0
     case COMPLETE:
     case COMMITTED:
       if (storedBlock.getGenerationStamp() != iblk.getGenerationStamp()) {
-        return new BlockToMarkCorrupt(storedBlock,
+        return new BlockToMarkCorrupt(new BlockInfo(iblk, storedBlock
+            .getINode().getReplication()),
            "block is " + ucState + " and reported genstamp " +
            iblk.getGenerationStamp() + " does not match " +
            "genstamp in block map " + storedBlock.getGenerationStamp());
@@ -1824,7 +1823,8 @@ assert storedBlock.findDatanode(dn) < 0
       if (!storedBlock.isComplete()) {
         return null; // not corrupt
       } else if (storedBlock.getGenerationStamp() != iblk.getGenerationStamp()) {
-        return new BlockToMarkCorrupt(storedBlock,
+        return new BlockToMarkCorrupt(new BlockInfo(iblk, storedBlock
+            .getINode().getReplication()),
            "reported " + reportedState + " replica with genstamp " +
            iblk.getGenerationStamp() + " does not match COMPLETE block's " +
            "genstamp in block map " + storedBlock.getGenerationStamp());
@@ -1916,7 +1916,7 @@ assert storedBlock.findDatanode(dn) < 0
     int numCurrentReplica = countLiveNodes(storedBlock);
     if (storedBlock.getBlockUCState() == BlockUCState.COMMITTED
         && numCurrentReplica >= minReplication) {
-      completeBlock(storedBlock.getINode(), storedBlock, false);
+      completeBlock((MutableBlockCollection)storedBlock.getINode(), storedBlock, false);
     } else if (storedBlock.isComplete()) {
       // check whether safe replication is reached for the block
       // only complete blocks are counted towards that.
@@ -1954,7 +1954,7 @@ assert storedBlock.findDatanode(dn) < 0
       return block;
     }
     assert storedBlock != null : "Block must be stored by now";
-    INodeFile fileINode = storedBlock.getINode();
+    BlockCollection fileINode = storedBlock.getINode();
     assert fileINode != null : "Block must belong to a file";
 
     // add block to the datanode
@@ -1981,7 +1981,7 @@ assert storedBlock.findDatanode(dn) < 0
     if(storedBlock.getBlockUCState() == BlockUCState.COMMITTED &&
         numLiveReplicas >= minReplication) {
-      storedBlock = completeBlock(fileINode, storedBlock, false);
+      storedBlock = completeBlock((MutableBlockCollection)fileINode, storedBlock, false);
     } else if (storedBlock.isComplete()) {
       // check whether safe replication is reached for the block
       // only complete blocks are counted towards that
@@ -1992,7 +1992,7 @@ assert storedBlock.findDatanode(dn) < 0
     }
 
     // if file is under construction, then done for now
-    if (fileINode.isUnderConstruction()) {
+    if (fileINode instanceof MutableBlockCollection) {
       return storedBlock;
     }
@@ -2129,7 +2129,7 @@ assert storedBlock.findDatanode(dn) < 0
    * what happened with it.
    */
   private MisReplicationResult processMisReplicatedBlock(BlockInfo block) {
-    INodeFile fileINode = block.getINode();
+    BlockCollection fileINode = block.getINode();
     if (fileINode == null) {
       // block does not belong to any file
       addToInvalidates(block);
@@ -2258,7 +2258,7 @@ assert storedBlock.findDatanode(dn) < 0
                               BlockPlacementPolicy replicator) {
     assert namesystem.hasWriteLock();
     // first form a rack to datanodes map and
-    INodeFile inode = getINode(b);
+    BlockCollection inode = getINode(b);
     final Map<String, List<DatanodeDescriptor>> rackMap
         = new HashMap<String, List<DatanodeDescriptor>>();
     for(final Iterator<DatanodeDescriptor> iter = nonExcess.iterator();
@@ -2379,7 +2379,7 @@ assert storedBlock.findDatanode(dn) < 0
       // necessary. In that case, put block on a possibly-will-
       // be-replicated list.
       //
-      INodeFile fileINode = blocksMap.getINode(block);
+      BlockCollection fileINode = blocksMap.getINode(block);
       if (fileINode != null) {
         namesystem.decrementSafeBlockCount(block);
         updateNeededReplications(block, -1, 0);
@@ -2611,7 +2611,7 @@ assert storedBlock.findDatanode(dn) < 0
                                    NumberReplicas num) {
     int curReplicas = num.liveReplicas();
     int curExpectedReplicas = getReplication(block);
-    INodeFile fileINode = blocksMap.getINode(block);
+    BlockCollection fileINode = blocksMap.getINode(block);
     Iterator<DatanodeDescriptor> nodeIter = blocksMap.nodeIterator(block);
     StringBuilder nodeList = new StringBuilder();
     while (nodeIter.hasNext()) {
@@ -2624,7 +2624,7 @@ assert storedBlock.findDatanode(dn) < 0
           + ", corrupt replicas: " + num.corruptReplicas()
           + ", decommissioned replicas: " + num.decommissionedReplicas()
           + ", excess replicas: " + num.excessReplicas()
-          + ", Is Open File: " + fileINode.isUnderConstruction()
+          + ", Is Open File: " + (fileINode instanceof MutableBlockCollection)
           + ", Datanodes having this block: " + nodeList
           + ", Current Datanode: " + srcNode
           + ", Is current datanode decommissioning: " + srcNode.isDecommissionInProgress());
@@ -2639,7 +2639,7 @@ assert storedBlock.findDatanode(dn) < 0
     final Iterator<? extends Block> it = srcNode.getBlockIterator();
     while(it.hasNext()) {
       final Block block = it.next();
-      INodeFile fileINode = blocksMap.getINode(block);
+      BlockCollection fileINode = blocksMap.getINode(block);
       short expectedReplication = fileINode.getReplication();
       NumberReplicas num = countNodes(block);
       int numCurrentReplica = num.liveReplicas();
@@ -2662,7 +2662,7 @@ assert storedBlock.findDatanode(dn) < 0
     final Iterator<? extends Block> it = srcNode.getBlockIterator();
     while(it.hasNext()) {
       final Block block = it.next();
-      INodeFile fileINode = blocksMap.getINode(block);
+      BlockCollection fileINode = blocksMap.getINode(block);
 
       if (fileINode != null) {
         NumberReplicas num = countNodes(block);
@@ -2679,7 +2679,7 @@ assert storedBlock.findDatanode(dn) < 0
         if ((curReplicas == 0) && (num.decommissionedReplicas() > 0)) {
           decommissionOnlyReplicas++;
         }
-        if (fileINode.isUnderConstruction()) {
+        if (fileINode instanceof MutableBlockCollection) {
           underReplicatedInOpenFiles++;
         }
       }
@@ -2782,11 +2782,10 @@ assert storedBlock.findDatanode(dn) < 0
 
   /* get replication factor of a block */
   private int getReplication(Block block) {
-    INodeFile fileINode = blocksMap.getINode(block);
+    BlockCollection fileINode = blocksMap.getINode(block);
     if (fileINode == null) { // block does not belong to any file
       return 0;
     }
-    assert !fileINode.isDirectory() : "Block cannot belong to a directory.";
     return fileINode.getReplication();
   }
@@ -2859,11 +2858,11 @@ assert storedBlock.findDatanode(dn) < 0
     return this.neededReplications.getCorruptBlockSize();
   }
 
-  public BlockInfo addINode(BlockInfo block, INodeFile iNode) {
+  public BlockInfo addINode(BlockInfo block, BlockCollection iNode) {
     return blocksMap.addINode(block, iNode);
   }
 
-  public INodeFile getINode(Block b) {
+  public BlockCollection getINode(Block b) {
     return blocksMap.getINode(b);
   }
@@ -3003,7 +3002,7 @@ assert storedBlock.findDatanode(dn) < 0
   private static class ReplicationWork {
 
     private Block block;
-    private INodeFile fileINode;
+    private BlockCollection fileINode;
 
     private DatanodeDescriptor srcNode;
     private List<DatanodeDescriptor> containingNodes;
@@ -3014,7 +3013,7 @@ assert storedBlock.findDatanode(dn) < 0
     private int priority;
 
     public ReplicationWork(Block block,
-        INodeFile fileINode,
+        BlockCollection fileINode,
         DatanodeDescriptor srcNode,
         List<DatanodeDescriptor> containingNodes,
         List<DatanodeDescriptor> liveReplicaNodes,
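The new BlockCollection.java and MutableBlockCollection.java are copied unchanged from trunk, so this diff does not show them. From the call sites above (getName(), getBlocks(), numBlocks(), getReplication(), getPreferredBlockSize(), getLastBlock(), and the instanceof tests), their shape is roughly the sketch below; the trunk files added by this merge are the authoritative definitions:

import java.io.IOException;
import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo;

// Shape inferred from usage in the BlockManager hunks above (HDFS-3363);
// method set and signatures here are an approximation, not the real files.
interface BlockCollection {
  String getName();              // replaces getFullPathName() at the call sites
  BlockInfo[] getBlocks();
  int numBlocks();
  short getReplication();
  long getPreferredBlockSize();
}

// "Under construction" is modeled by implementing this subtype; the old
// fileINode.isUnderConstruction() checks become instanceof tests.
interface MutableBlockCollection extends BlockCollection {
  // Assumed from convertLastBlockToUnderConstruction() above.
  BlockInfo getLastBlock() throws IOException;
}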
Modified: hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicy.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicy.java?rev=1335791&r1=1335790&r2=1335791&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicy.java (original)
+++ hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicy.java Tue May 8 21:57:58 2012
@@ -29,7 +29,6 @@ import org.apache.hadoop.conf.Configurat
 import org.apache.hadoop.hdfs.protocol.Block;
 import org.apache.hadoop.hdfs.protocol.LocatedBlock;
 import org.apache.hadoop.hdfs.server.namenode.FSClusterStats;
-import org.apache.hadoop.hdfs.server.namenode.FSInodeInfo;
 import org.apache.hadoop.net.NetworkTopology;
 import org.apache.hadoop.net.Node;
 import org.apache.hadoop.util.ReflectionUtils;
@@ -123,13 +122,13 @@ public abstract class BlockPlacementPoli
    * @return array of DatanodeDescriptor instances chosen as target
    * and sorted as a pipeline.
    */
-  DatanodeDescriptor[] chooseTarget(FSInodeInfo srcInode,
+  DatanodeDescriptor[] chooseTarget(BlockCollection srcInode,
                                     int numOfReplicas,
                                     DatanodeDescriptor writer,
                                     List<DatanodeDescriptor> chosenNodes,
                                     HashMap<Node, Node> excludedNodes,
                                     long blocksize) {
-    return chooseTarget(srcInode.getFullPathName(), numOfReplicas, writer,
+    return chooseTarget(srcInode.getName(), numOfReplicas, writer,
                         chosenNodes, excludedNodes, blocksize);
   }
@@ -159,7 +158,7 @@ public abstract class BlockPlacementPoli
    *                          listed in the previous parameter.
    * @return the replica that is the best candidate for deletion
    */
-  abstract public DatanodeDescriptor chooseReplicaToDelete(FSInodeInfo srcInode,
+  abstract public DatanodeDescriptor chooseReplicaToDelete(BlockCollection srcInode,
                                       Block block,
                                       short replicationFactor,
                                       Collection<DatanodeDescriptor> existingReplicas,

Modified: hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java?rev=1335791&r1=1335790&r2=1335791&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java (original)
+++ hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java Tue May 8 21:57:58 2012
@@ -33,7 +33,6 @@ import org.apache.hadoop.hdfs.protocol.D
 import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import org.apache.hadoop.hdfs.protocol.LocatedBlock;
 import org.apache.hadoop.hdfs.server.namenode.FSClusterStats;
-import org.apache.hadoop.hdfs.server.namenode.FSInodeInfo;
 import org.apache.hadoop.net.NetworkTopology;
 import org.apache.hadoop.net.Node;
 import org.apache.hadoop.net.NodeBase;
@@ -547,7 +546,7 @@ public class BlockPlacementPolicyDefault
   }
 
   @Override
-  public DatanodeDescriptor chooseReplicaToDelete(FSInodeInfo inode,
+  public DatanodeDescriptor chooseReplicaToDelete(BlockCollection inode,
                                                   Block block,
                                                   short replicationFactor,
                                                   Collection<DatanodeDescriptor> first,

Modified: hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlocksMap.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlocksMap.java?rev=1335791&r1=1335790&r2=1335791&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlocksMap.java (original)
+++ hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlocksMap.java Tue May 8 21:57:58 2012
@@ -20,7 +20,6 @@ package org.apache.hadoop.hdfs.server.bl
 import java.util.Iterator;
 
 import org.apache.hadoop.hdfs.protocol.Block;
-import org.apache.hadoop.hdfs.server.namenode.INodeFile;
 import org.apache.hadoop.hdfs.util.GSet;
 import org.apache.hadoop.hdfs.util.LightWeightGSet;
@@ -93,7 +92,7 @@ class BlocksMap {
     blocks = null;
   }
 
-  INodeFile getINode(Block b) {
+  BlockCollection getINode(Block b) {
     BlockInfo info = blocks.get(b);
     return (info != null) ? info.getINode() : null;
   }
@@ -101,7 +100,7 @@ class BlocksMap {
   /**
    * Add block b belonging to the specified file inode to the map.
    */
-  BlockInfo addINode(BlockInfo b, INodeFile iNode) {
+  BlockInfo addINode(BlockInfo b, BlockCollection iNode) {
     BlockInfo info = blocks.get(b);
     if (info != b) {
       info = b;

Modified: hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java?rev=1335791&r1=1335790&r2=1335791&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java (original)
+++ hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java Tue May 8 21:57:58 2012
@@ -85,6 +85,12 @@ class DataXceiver extends Receiver imple
   private long opStartTime; //the start time of receiving an Op
   private final SocketInputWrapper socketInputWrapper;
+
+  /**
+   * Client Name used in previous operation. Not available on first request
+   * on the socket.
+   */
+  private String previousOpClientName;
 
   public static DataXceiver create(Socket s, DataNode dn,
       DataXceiverServer dataXceiverServer) throws IOException {
@@ -122,7 +128,11 @@ class DataXceiver extends Receiver imple
    */
   private void updateCurrentThreadName(String status) {
     StringBuilder sb = new StringBuilder();
-    sb.append("DataXceiver for client ").append(remoteAddress);
+    sb.append("DataXceiver for client ");
+    if (previousOpClientName != null) {
+      sb.append(previousOpClientName).append(" at ");
+    }
+    sb.append(remoteAddress);
     if (status != null) {
       sb.append(" [").append(status).append("]");
     }
@@ -202,6 +212,8 @@ class DataXceiver extends Receiver imple
       final String clientName,
       final long blockOffset,
       final long length) throws IOException {
+    previousOpClientName = clientName;
+
     OutputStream baseStream = NetUtils.getOutputStream(s,
         dnConf.socketWriteTimeout);
     DataOutputStream out = new DataOutputStream(new BufferedOutputStream(
@@ -295,7 +307,8 @@ class DataXceiver extends Receiver imple
       final long maxBytesRcvd,
       final long latestGenerationStamp,
       DataChecksum requestedChecksum) throws IOException {
-    updateCurrentThreadName("Receiving block " + block + " client=" + clientname);
+    previousOpClientName = clientname;
+    updateCurrentThreadName("Receiving block " + block);
     final boolean isDatanode = clientname.length() == 0;
     final boolean isClient = !isDatanode;
     final boolean isTransfer = stage == BlockConstructionStage.TRANSFER_RBW
@@ -502,7 +515,7 @@ class DataXceiver extends Receiver imple
       final DatanodeInfo[] targets) throws IOException {
     checkAccess(null, true, blk, blockToken,
         Op.TRANSFER_BLOCK, BlockTokenSecretManager.AccessMode.COPY);
-
+    previousOpClientName = clientName;
     updateCurrentThreadName(Op.TRANSFER_BLOCK + " " + blk);
 
     final DataOutputStream out = new DataOutputStream(
Modified: hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java?rev=1335791&r1=1335790&r2=1335791&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java (original)
+++ hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java Tue May 8 21:57:58 2012
@@ -2840,7 +2840,7 @@ public class FSNamesystem implements Nam
     if (storedBlock == null) {
       throw new IOException("Block (=" + lastblock + ") not found");
     }
-    INodeFile iFile = storedBlock.getINode();
+    INodeFile iFile = (INodeFile) storedBlock.getINode();
     if (!iFile.isUnderConstruction() || storedBlock.isComplete()) {
       throw new IOException("Unexpected block (=" + lastblock
                             + ") since the file (=" + iFile.getLocalName()
@@ -4394,7 +4394,7 @@ public class FSNamesystem implements Nam
     }
 
     // check file inode
-    INodeFile file = storedBlock.getINode();
+    INodeFile file = (INodeFile) storedBlock.getINode();
     if (file==null || !file.isUnderConstruction()) {
       throw new IOException("The file " + storedBlock +
           " belonged to does not exist or it is not under construction.");
@@ -4556,7 +4556,7 @@ public class FSNamesystem implements Nam
     if (destinationExisted && dinfo.isDir()) {
       Path spath = new Path(src);
       Path parent = spath.getParent();
-      if (isRoot(parent)) {
+      if (parent.isRoot()) {
        overwrite = parent.toString();
      } else {
        overwrite = parent.toString() + Path.SEPARATOR;
@@ -4569,10 +4569,6 @@ public class FSNamesystem implements Nam
     leaseManager.changeLease(src, dst, overwrite, replaceBy);
   }
-
-  private boolean isRoot(Path path) {
-    return path.getParent() == null;
-  }
 
   /**
    * Serializes leases.
@@ -4710,7 +4706,7 @@ public class FSNamesystem implements Nam
     while (blkIterator.hasNext()) {
       Block blk = blkIterator.next();
-      INode inode = blockManager.getINode(blk);
+      INode inode = (INodeFile) blockManager.getINode(blk);
       skip++;
       if (inode != null && blockManager.countNodes(blk).liveReplicas() == 0) {
         String src = FSDirectory.getFullPathName(inode);

Modified: hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/GetImageServlet.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/GetImageServlet.java?rev=1335791&r1=1335790&r2=1335791&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/GetImageServlet.java (original)
+++ hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/GetImageServlet.java Tue May 8 21:57:58 2012
@@ -27,6 +27,8 @@
 import javax.servlet.ServletException;
 import javax.servlet.http.HttpServlet;
 import javax.servlet.http.HttpServletRequest;
 import javax.servlet.http.HttpServletResponse;
+
+import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.security.SecurityUtil;
 
 import org.apache.commons.logging.Log;
@@ -34,7 +36,6 @@ import org.apache.commons.logging.LogFac
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.DFSUtil;
 import org.apache.hadoop.hdfs.HAUtil;
 import org.apache.hadoop.hdfs.server.common.JspHelper;
@@ -83,11 +84,11 @@ public class GetImageServlet extends Htt
           (Configuration)getServletContext().getAttribute(JspHelper.CURRENT_CONF);
 
       if(UserGroupInformation.isSecurityEnabled() &&
-          !isValidRequestor(request.getRemoteUser(), conf)) {
+          !isValidRequestor(request.getUserPrincipal().getName(), conf)) {
         response.sendError(HttpServletResponse.SC_FORBIDDEN,
             "Only Namenode and Secondary Namenode may access this servlet");
         LOG.warn("Received non-NN/SNN request for image or edits from "
-            + request.getRemoteHost());
+            + request.getUserPrincipal().getName() + " at " + request.getRemoteHost());
         return;
       }
@@ -156,15 +157,10 @@ public class GetImageServlet extends Htt
           }
 
           // issue a HTTP get request to download the new fsimage
-          MD5Hash downloadImageDigest = reloginIfNecessary().doAs(
-              new PrivilegedExceptionAction<MD5Hash>() {
-              @Override
-              public MD5Hash run() throws Exception {
-                return TransferFsImage.downloadImageToStorage(
+          MD5Hash downloadImageDigest =
+              TransferFsImage.downloadImageToStorage(
                         parsedParams.getInfoServer(), txid,
                         nnImage.getStorage(), true);
-              }
-          });
           nnImage.saveDigestAndRenameCheckpointImage(txid, downloadImageDigest);
 
           // Now that we have a new checkpoint, we might be able to
@@ -176,18 +172,6 @@ public class GetImageServlet extends Htt
         }
         return null;
       }
-
-      // We may have lost our ticket since the last time we tried to open
-      // an http connection, so log in just in case.
-      private UserGroupInformation reloginIfNecessary() throws IOException {
-        // This method is only called on the NN, therefore it is safe to
-        // use these key values.
-        return UserGroupInformation.loginUserFromKeytabAndReturnUGI(
-                SecurityUtil.getServerPrincipal(conf
-                    .get(DFSConfigKeys.DFS_NAMENODE_KRB_HTTPS_USER_NAME_KEY),
-                    NameNode.getAddress(conf).getHostName()),
-                conf.get(DFSConfigKeys.DFS_NAMENODE_KEYTAB_FILE_KEY));
-      }
     });
 
     } catch (Throwable t) {
@@ -234,18 +218,10 @@ public class GetImageServlet extends Htt
     validRequestors.add(
         SecurityUtil.getServerPrincipal(conf
-            .get(DFSConfigKeys.DFS_NAMENODE_KRB_HTTPS_USER_NAME_KEY), NameNode
-            .getAddress(conf).getHostName()));
-    validRequestors.add(
-        SecurityUtil.getServerPrincipal(conf
            .get(DFSConfigKeys.DFS_NAMENODE_USER_NAME_KEY), NameNode
            .getAddress(conf).getHostName()));
     validRequestors.add(
         SecurityUtil.getServerPrincipal(conf
-            .get(DFSConfigKeys.DFS_SECONDARY_NAMENODE_KRB_HTTPS_USER_NAME_KEY),
-            SecondaryNameNode.getHttpAddress(conf).getHostName()));
-    validRequestors.add(
-        SecurityUtil.getServerPrincipal(conf
            .get(DFSConfigKeys.DFS_SECONDARY_NAMENODE_USER_NAME_KEY),
            SecondaryNameNode.getHttpAddress(conf).getHostName()));
@@ -253,21 +229,17 @@ public class GetImageServlet extends Htt
       Configuration otherNnConf = HAUtil.getConfForOtherNode(conf);
       validRequestors.add(
           SecurityUtil.getServerPrincipal(otherNnConf
-              .get(DFSConfigKeys.DFS_NAMENODE_KRB_HTTPS_USER_NAME_KEY),
-              NameNode.getAddress(otherNnConf).getHostName()));
-      validRequestors.add(
-          SecurityUtil.getServerPrincipal(otherNnConf
              .get(DFSConfigKeys.DFS_NAMENODE_USER_NAME_KEY),
              NameNode.getAddress(otherNnConf).getHostName()));
     }
 
     for(String v : validRequestors) {
       if(v != null && v.equals(remoteUser)) {
-        if(LOG.isDebugEnabled()) LOG.debug("isValidRequestor is allowing: " + remoteUser);
+        if(LOG.isInfoEnabled()) LOG.info("GetImageServlet allowing: " + remoteUser);
         return true;
       }
     }
-    if(LOG.isDebugEnabled()) LOG.debug("isValidRequestor is rejecting: " + remoteUser);
+    if(LOG.isInfoEnabled()) LOG.info("GetImageServlet rejecting: " + remoteUser);
     return false;
   }
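HDFS-2617 drops the dfs.*.kerberos.https.principal keys in favor of the internal SPNEGO principals added to DFSConfigKeys above. A hedged configuration sketch; the HTTP/_HOST@EXAMPLE.COM values are illustrative only and do not come from this diff:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;

public class SpnegoConfSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // New keys from HDFS-2617; the constant names keep the "SPENGO" spelling
    // used in the DFSConfigKeys hunk earlier in this commit.
    conf.set(DFSConfigKeys.DFS_NAMENODE_INTERNAL_SPENGO_USER_NAME_KEY,
        "HTTP/_HOST@EXAMPLE.COM");          // example principal, not from the diff
    conf.set(DFSConfigKeys.DFS_SECONDARY_NAMENODE_INTERNAL_SPENGO_USER_NAME_KEY,
        "HTTP/_HOST@EXAMPLE.COM");
    // The old dfs.namenode.kerberos.https.principal and
    // dfs.secondary.namenode.kerberos.https.principal keys are removed.
  }
}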
Modified: hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java?rev=1335791&r1=1335790&r2=1335791&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java (original)
+++ hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java Tue May 8 21:57:58 2012
@@ -38,7 +38,7 @@ import com.google.common.primitives.Sign
  * directory inodes.
  */
 @InterfaceAudience.Private
-abstract class INode implements Comparable<byte[]>, FSInodeInfo {
+abstract class INode implements Comparable<byte[]> {
   /*
    *  The inode name is in java UTF8 encoding;
    *  The name in HdfsFileStatus should keep the same encoding as this.
@@ -264,7 +264,6 @@ abstract class INode implements Comparab
     this.name = name;
   }
 
-  @Override
   public String getFullPathName() {
     // Get the full path name of this inode.
     return FSDirectory.getFullPathName(this);

Modified: hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java?rev=1335791&r1=1335790&r2=1335791&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java (original)
+++ hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java Tue May 8 21:57:58 2012
@@ -20,15 +20,18 @@ package org.apache.hadoop.hdfs.server.na
 import java.io.IOException;
 import java.util.List;
 
+import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.fs.permission.FsAction;
 import org.apache.hadoop.fs.permission.FsPermission;
 import org.apache.hadoop.fs.permission.PermissionStatus;
 import org.apache.hadoop.hdfs.protocol.Block;
 import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo;
 import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoUnderConstruction;
+import org.apache.hadoop.hdfs.server.blockmanagement.BlockCollection;
 
 /** I-node for closed file. */
-public class INodeFile extends INode {
+@InterfaceAudience.Private
+public class INodeFile extends INode implements BlockCollection {
   static final FsPermission UMASK = FsPermission.createImmutable((short)0111);
 
   //Number of bits for Block size
@@ -167,6 +170,12 @@ public class INodeFile extends INode {
     blocks = null;
     return 1;
   }
+
+  public String getName() {
+    // Get the full path name of this inode.
+    return getFullPathName();
+  }
+
 
   @Override
   long[] computeContentSummary(long[] summary) {

Modified: hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFileUnderConstruction.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFileUnderConstruction.java?rev=1335791&r1=1335790&r2=1335791&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFileUnderConstruction.java (original)
+++ hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFileUnderConstruction.java Tue May 8 21:57:58 2012
@@ -25,13 +25,15 @@ import org.apache.hadoop.hdfs.server.blo
 import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoUnderConstruction;
 import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.BlockUCState;
+import org.apache.hadoop.hdfs.server.blockmanagement.MutableBlockCollection;
 
 import com.google.common.base.Joiner;
 
 /**
  * I-node for file being written.
  */
-public class INodeFileUnderConstruction extends INodeFile {
+public class INodeFileUnderConstruction extends INodeFile
+                                  implements MutableBlockCollection {
   private String clientName;         // lease holder
   private final String clientMachine;
   private final DatanodeDescriptor clientNode; // if client is a cluster node too.
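The two hunks above complete the HDFS-3363 picture: a closed INodeFile is a read-only BlockCollection, and INodeFileUnderConstruction additionally implements MutableBlockCollection. A toy demonstration of the resulting dispatch, using the sketch interfaces shown earlier rather than the real HDFS classes:

final class OpenFileCheck {
  private OpenFileCheck() {}

  // Equivalent to the old fileINode.isUnderConstruction() at the BlockManager
  // call sites, assuming (as in the hunks above) that files being written are
  // the only inodes implementing MutableBlockCollection.
  static boolean isOpenForWrite(BlockCollection bc) {
    return bc instanceof MutableBlockCollection;
  }
}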
Modified: hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java?rev=1335791&r1=1335790&r2=1335791&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java (original)
+++ hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java Tue May 8 21:57:58 2012
@@ -164,10 +164,8 @@ public class NameNode {
     DFS_NAMENODE_CHECKPOINT_EDITS_DIR_KEY,
     DFS_NAMENODE_SERVICE_RPC_ADDRESS_KEY,
     DFS_NAMENODE_HTTP_ADDRESS_KEY,
-    DFS_NAMENODE_HTTPS_ADDRESS_KEY,
     DFS_NAMENODE_KEYTAB_FILE_KEY,
     DFS_NAMENODE_SECONDARY_HTTP_ADDRESS_KEY,
-    DFS_NAMENODE_SECONDARY_HTTPS_PORT_KEY,
     DFS_SECONDARY_NAMENODE_KEYTAB_FILE_KEY,
     DFS_NAMENODE_BACKUP_ADDRESS_KEY,
     DFS_NAMENODE_BACKUP_HTTP_ADDRESS_KEY,
@@ -361,8 +359,9 @@ public class NameNode {
   }
 
   protected void setHttpServerAddress(Configuration conf) {
-    conf.set(DFS_NAMENODE_HTTP_ADDRESS_KEY,
-        NetUtils.getHostPortString(getHttpAddress()));
+    String hostPort = NetUtils.getHostPortString(getHttpAddress());
+    conf.set(DFS_NAMENODE_HTTP_ADDRESS_KEY, hostPort);
+    LOG.info("Web-server up at: " + hostPort);
   }
 
   protected void loadNamesystem(Configuration conf) throws IOException {
Modified: hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java?rev=1335791&r1=1335790&r2=1335791&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java (original)
+++ hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java Tue May 8 21:57:58 2012
@@ -17,6 +17,7 @@
  */
 package org.apache.hadoop.hdfs.server.namenode;
 
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_ADMIN;
 import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_HTTPS_ADDRESS_KEY;
 import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_SERVER_HTTPS_KEYSTORE_RESOURCE_KEY;
 import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_SERVER_HTTPS_KEYSTORE_RESOURCE_DEFAULT;
@@ -43,6 +44,7 @@ import org.apache.hadoop.http.HttpServer
 import org.apache.hadoop.net.NetUtils;
 import org.apache.hadoop.security.SecurityUtil;
 import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.security.authentication.server.AuthenticationFilter;
 import org.apache.hadoop.security.authorize.AccessControlList;
 
 /**
@@ -78,127 +80,101 @@ public class NameNodeHttpServer {
         conf.get(DFSConfigKeys.DFS_NAMENODE_USER_NAME_KEY),
         nn.getNameNodeAddress().getHostName());
   }
-  
+
   public void start() throws IOException {
     final String infoHost = bindAddress.getHostName();
-    
-    if(UserGroupInformation.isSecurityEnabled()) {
-      String httpsUser = SecurityUtil.getServerPrincipal(conf
-          .get(DFSConfigKeys.DFS_NAMENODE_KRB_HTTPS_USER_NAME_KEY), infoHost);
-      if (httpsUser == null) {
-        LOG.warn(DFSConfigKeys.DFS_NAMENODE_KRB_HTTPS_USER_NAME_KEY
-            + " not defined in config. Starting http server as "
-            + getDefaultServerPrincipal()
-            + ": Kerberized SSL may be not function correctly.");
-      } else {
-        // Kerberized SSL servers must be run from the host principal...
-        LOG.info("Logging in as " + httpsUser + " to start http server.");
-        SecurityUtil.login(conf, DFSConfigKeys.DFS_NAMENODE_KEYTAB_FILE_KEY,
-            DFSConfigKeys.DFS_NAMENODE_KRB_HTTPS_USER_NAME_KEY, infoHost);
-      }
-    }
+    int infoPort = bindAddress.getPort();
 
-    UserGroupInformation ugi = UserGroupInformation.getLoginUser();
-    try {
-      this.httpServer = ugi.doAs(new PrivilegedExceptionAction<HttpServer>() {
-        @Override
-        public HttpServer run() throws IOException, InterruptedException {
-          int infoPort = bindAddress.getPort();
-          httpServer = new HttpServer("hdfs", infoHost, infoPort,
-              infoPort == 0, conf,
-              new AccessControlList(conf.get(DFSConfigKeys.DFS_ADMIN, " "))) {
-            {
-              if (WebHdfsFileSystem.isEnabled(conf, LOG)) {
-                //add SPNEGO authentication filter for webhdfs
-                final String name = "SPNEGO";
-                final String classname = AuthFilter.class.getName();
-                final String pathSpec = WebHdfsFileSystem.PATH_PREFIX + "/*";
-                Map<String, String> params = getAuthFilterParams(conf);
-                defineFilter(webAppContext, name, classname, params,
-                    new String[]{pathSpec});
-                LOG.info("Added filter '" + name + "' (class=" + classname + ")");
-
-                // add webhdfs packages
-                addJerseyResourcePackage(
-                    NamenodeWebHdfsMethods.class.getPackage().getName()
-                    + ";" + Param.class.getPackage().getName(), pathSpec);
-              }
+    httpServer = new HttpServer("hdfs", infoHost, infoPort,
+        infoPort == 0, conf,
+        new AccessControlList(conf.get(DFS_ADMIN, " "))) {
+      {
+        // Add SPNEGO support to NameNode
+        if (UserGroupInformation.isSecurityEnabled()) {
+          Map<String, String> params = new HashMap<String, String>();
+          String principalInConf = conf.get(
+              DFSConfigKeys.DFS_NAMENODE_INTERNAL_SPENGO_USER_NAME_KEY);
+          if (principalInConf != null && !principalInConf.isEmpty()) {
+            params.put("kerberos.principal",
+                SecurityUtil.getServerPrincipal(principalInConf, infoHost));
+            String httpKeytab = conf.get(DFSConfigKeys.DFS_NAMENODE_KEYTAB_FILE_KEY);
+            if (httpKeytab != null && !httpKeytab.isEmpty()) {
+              params.put("kerberos.keytab", httpKeytab);
             }
-            private Map<String, String> getAuthFilterParams(Configuration conf)
-                throws IOException {
-              Map<String, String> params = new HashMap<String, String>();
-              String principalInConf = conf
-                  .get(DFSConfigKeys.DFS_WEB_AUTHENTICATION_KERBEROS_PRINCIPAL_KEY);
-              if (principalInConf != null && !principalInConf.isEmpty()) {
-                params
-                    .put(
-                        DFSConfigKeys.DFS_WEB_AUTHENTICATION_KERBEROS_PRINCIPAL_KEY,
-                        SecurityUtil.getServerPrincipal(principalInConf,
-                            infoHost));
-              }
-              String httpKeytab = conf
-                  .get(DFSConfigKeys.DFS_WEB_AUTHENTICATION_KERBEROS_KEYTAB_KEY);
-              if (httpKeytab != null && !httpKeytab.isEmpty()) {
-                params.put(
-                    DFSConfigKeys.DFS_WEB_AUTHENTICATION_KERBEROS_KEYTAB_KEY,
-                    httpKeytab);
-              }
-              return params;
-            }
-          };
+            params.put(AuthenticationFilter.AUTH_TYPE, "kerberos");
 
-          boolean certSSL = conf.getBoolean(DFSConfigKeys.DFS_HTTPS_ENABLE_KEY, false);
-          boolean useKrb = UserGroupInformation.isSecurityEnabled();
-          if (certSSL || useKrb) {
-            boolean needClientAuth = conf.getBoolean(
-                DFSConfigKeys.DFS_CLIENT_HTTPS_NEED_AUTH_KEY,
-                DFSConfigKeys.DFS_CLIENT_HTTPS_NEED_AUTH_DEFAULT);
-            InetSocketAddress secInfoSocAddr = NetUtils.createSocketAddr(conf
-                .get(DFSConfigKeys.DFS_NAMENODE_HTTPS_ADDRESS_KEY,
-                    DFSConfigKeys.DFS_NAMENODE_HTTPS_ADDRESS_DEFAULT));
-            Configuration sslConf = new HdfsConfiguration(false);
-            if (certSSL) {
-              sslConf.addResource(conf.get(DFS_SERVER_HTTPS_KEYSTORE_RESOURCE_KEY,
-                  DFS_SERVER_HTTPS_KEYSTORE_RESOURCE_DEFAULT));
-            }
-            httpServer.addSslListener(secInfoSocAddr, sslConf, needClientAuth,
-                useKrb);
-            // assume same ssl port for all datanodes
-            InetSocketAddress datanodeSslPort = NetUtils.createSocketAddr(
-                conf.get(DFS_DATANODE_HTTPS_ADDRESS_KEY,
-                    infoHost + ":" + DFSConfigKeys.DFS_DATANODE_HTTPS_DEFAULT_PORT));
-            httpServer.setAttribute(DFSConfigKeys.DFS_DATANODE_HTTPS_PORT_KEY,
-                datanodeSslPort.getPort());
+            defineFilter(webAppContext, SPNEGO_FILTER,
+                AuthenticationFilter.class.getName(), params, null);
           }
-          httpServer.setAttribute(NAMENODE_ATTRIBUTE_KEY, nn);
-          httpServer.setAttribute(NAMENODE_ADDRESS_ATTRIBUTE_KEY,
-              nn.getNameNodeAddress());
-          httpServer.setAttribute(FSIMAGE_ATTRIBUTE_KEY, nn.getFSImage());
-          httpServer.setAttribute(JspHelper.CURRENT_CONF, conf);
-          setupServlets(httpServer, conf);
-          httpServer.start();
-
-          // The web-server port can be ephemeral... ensure we have the correct
-          // info
-          infoPort = httpServer.getPort();
-          httpAddress = new InetSocketAddress(infoHost, infoPort);
-          LOG.info(nn.getRole() + " Web-server up at: " + httpAddress);
-          return httpServer;
         }
-      });
-    } catch (InterruptedException e) {
-      throw new IOException(e);
-    } finally {
-      if(UserGroupInformation.isSecurityEnabled() &&
-          conf.get(DFSConfigKeys.DFS_NAMENODE_KRB_HTTPS_USER_NAME_KEY) != null) {
-        // Go back to being the correct Namenode principal
-        LOG.info("Logging back in as NameNode user following http server start");
-        nn.loginAsNameNodeUser(conf);
+        if (WebHdfsFileSystem.isEnabled(conf, LOG)) {
+          //add SPNEGO authentication filter for webhdfs
+          final String name = "SPNEGO";
+          final String classname = AuthFilter.class.getName();
+          final String pathSpec = WebHdfsFileSystem.PATH_PREFIX + "/*";
+          Map<String, String> params = getAuthFilterParams(conf);
+          defineFilter(webAppContext, name, classname, params,
+              new String[]{pathSpec});
+          LOG.info("Added filter '" + name + "' (class=" + classname + ")");
+
+          // add webhdfs packages
+          addJerseyResourcePackage(
+              NamenodeWebHdfsMethods.class.getPackage().getName()
+              + ";" + Param.class.getPackage().getName(), pathSpec);
+        }
+      }
+
+      private Map<String, String> getAuthFilterParams(Configuration conf)
+          throws IOException {
+        Map<String, String> params = new HashMap<String, String>();
+        String principalInConf = conf
+            .get(DFSConfigKeys.DFS_WEB_AUTHENTICATION_KERBEROS_PRINCIPAL_KEY);
+        if (principalInConf != null && !principalInConf.isEmpty()) {
+          params
+              .put(
+                  DFSConfigKeys.DFS_WEB_AUTHENTICATION_KERBEROS_PRINCIPAL_KEY,
+                  SecurityUtil.getServerPrincipal(principalInConf,
+                      bindAddress.getHostName()));
+        }
+        String httpKeytab = conf
+            .get(DFSConfigKeys.DFS_WEB_AUTHENTICATION_KERBEROS_KEYTAB_KEY);
+        if (httpKeytab != null && !httpKeytab.isEmpty()) {
+          params.put(
+              DFSConfigKeys.DFS_WEB_AUTHENTICATION_KERBEROS_KEYTAB_KEY,
+              httpKeytab);
+        }
+        return params;
+      }
+    };
+
+    boolean certSSL = conf.getBoolean("dfs.https.enable", false);
+    if (certSSL) {
+      boolean needClientAuth = conf.getBoolean("dfs.https.need.client.auth", false);
+      InetSocketAddress secInfoSocAddr = NetUtils.createSocketAddr(infoHost + ":" + conf.get(
+          "dfs.https.port", infoHost + ":" + 0));
+      Configuration sslConf = new Configuration(false);
+      if (certSSL) {
+        sslConf.addResource(conf.get("dfs.https.server.keystore.resource",
+            "ssl-server.xml"));
      }
+      httpServer.addSslListener(secInfoSocAddr, sslConf, needClientAuth);
+      // assume same ssl port for all datanodes
+      InetSocketAddress datanodeSslPort = NetUtils.createSocketAddr(conf.get(
+          "dfs.datanode.https.address", infoHost + ":" + 50475));
+      httpServer.setAttribute("datanode.https.port", datanodeSslPort
+          .getPort());
     }
+    httpServer.setAttribute("name.node", nn);
+    httpServer.setAttribute("name.node.address", bindAddress);
+    httpServer.setAttribute("name.system.image",
+        nn.getFSImage());
+    httpServer.setAttribute(JspHelper.CURRENT_CONF, conf);
+    setupServlets(httpServer, conf);
+    httpServer.start();
+    httpAddress = new InetSocketAddress(bindAddress.getAddress(), httpServer.getPort());
   }
-  
+
+
   public void stop() throws Exception {
     if (httpServer != null) {
       httpServer.stop();
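Note: the pattern above, registering Hadoop's AuthenticationFilter with kerberos.principal and kerberos.keytab parameters, replaces the old Kerberized-SSL login dance that the removed lines performed. A hedged, self-contained illustration of building that parameter map outside the HttpServer subclass (the helper name buildSpnegoParams is invented for this sketch, and "type" is assumed here to be the value of AuthenticationFilter.AUTH_TYPE):

    import java.util.HashMap;
    import java.util.Map;

    // Sketch of the filter-parameter setup used in the hunk above; only the
    // two kerberos.* keys and the auth type mirror the patch, the rest is
    // illustrative.
    class SpnegoParamsSketch {
      static Map<String, String> buildSpnegoParams(String principal, String keytab) {
        Map<String, String> params = new HashMap<String, String>();
        if (principal != null && !principal.isEmpty()) {
          // e.g. HTTP/_HOST@REALM after per-host substitution
          params.put("kerberos.principal", principal);
          if (keytab != null && !keytab.isEmpty()) {
            // keytab file holding the HTTP service principal
            params.put("kerberos.keytab", keytab);
          }
          // assumed value of AuthenticationFilter.AUTH_TYPE
          params.put("type", "kerberos");
        }
        return params;
      }
    }

Because the filter authenticates HTTP requests directly, the server no longer has to be started under the host principal and then log back in as the NameNode user.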
Modified: hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeJspHelper.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeJspHelper.java?rev=1335791&r1=1335790&r2=1335791&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeJspHelper.java (original)
+++ hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeJspHelper.java Tue May 8 21:57:58 2012
@@ -734,7 +734,7 @@ class NamenodeJspHelper {
         this.inode = null;
       } else {
         this.block = new Block(blockId);
-        this.inode = blockManager.getINode(block);
+        this.inode = (INodeFile) blockManager.getINode(block);
       }
     }
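Note: the cast above is needed because BlockManager.getINode now hands back the block-management abstraction rather than a namenode INodeFile. A self-contained sketch of the shape of that call site (every type here is a stand-in for illustration, not the real class):

    // Minimal sketch: a manager returns an interface, and a presentation
    // layer that needs implementation-specific API downcasts, as the JSP
    // helper does in the hunk above.
    interface BlockCollection { String getName(); }

    class INodeFile implements BlockCollection {
      public String getName() { return "/some/file"; }
      long computeFileSize() { return 42L; } // implementation-only API
    }

    class BlockManagerSketch {
      BlockCollection getINode(long blockId) { return new INodeFile(); }
    }

    public class JspHelperSketch {
      public static void main(String[] args) {
        BlockCollection bc = new BlockManagerSketch().getINode(1L);
        INodeFile inode = (INodeFile) bc; // the cast added above
        System.out.println(inode.getName() + " size=" + inode.computeFileSize());
      }
    }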
Modified: hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java?rev=1335791&r1=1335790&r2=1335791&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java (original)
+++ hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java Tue May 8 21:57:58 2012
@@ -25,8 +25,10 @@ import java.security.PrivilegedAction;
 import java.security.PrivilegedExceptionAction;
 import java.util.Collection;
 import java.util.Date;
+import java.util.HashMap;
 import java.util.Iterator;
 import java.util.List;
+import java.util.Map;
 
 import org.apache.commons.cli.CommandLine;
 import org.apache.commons.cli.CommandLineParser;
@@ -44,6 +46,7 @@ import org.apache.hadoop.conf.Configurat
 import org.apache.hadoop.fs.FileSystem;
 
 import static org.apache.hadoop.hdfs.DFSConfigKeys.*;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.DFSUtil;
 import org.apache.hadoop.hdfs.HAUtil;
 import org.apache.hadoop.hdfs.NameNodeProxies;
@@ -63,9 +66,9 @@ import org.apache.hadoop.ipc.RemoteExcep
 import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
 import org.apache.hadoop.metrics2.source.JvmMetrics;
 import org.apache.hadoop.net.NetUtils;
-import org.apache.hadoop.security.Krb5AndCertsSslSocketConnector;
 import org.apache.hadoop.security.SecurityUtil;
 import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.security.authentication.server.AuthenticationFilter;
 import org.apache.hadoop.security.authorize.AccessControlList;
 import org.apache.hadoop.util.Daemon;
@@ -108,7 +111,6 @@ public class SecondaryNameNode implement
   private volatile boolean shouldRun;
   private HttpServer infoServer;
   private int infoPort;
-  private int imagePort;
 
   private String infoBindAddress;
 
   private Collection<URI> checkpointDirs;
@@ -229,63 +231,47 @@ public class SecondaryNameNode implement
     // Initialize other scheduling parameters from the configuration
     checkpointConf = new CheckpointConf(conf);
-    
+
     // initialize the webserver for uploading files.
-    // Kerberized SSL servers must be run from the host principal...
-    UserGroupInformation httpUGI = 
-        UserGroupInformation.loginUserFromKeytabAndReturnUGI(
-            SecurityUtil.getServerPrincipal(conf
-                .get(DFS_SECONDARY_NAMENODE_KRB_HTTPS_USER_NAME_KEY),
-                infoBindAddress),
-            conf.get(DFS_SECONDARY_NAMENODE_KEYTAB_FILE_KEY));
-    try {
-      infoServer = httpUGI.doAs(new PrivilegedExceptionAction<HttpServer>() {
-        @Override
-        public HttpServer run() throws IOException, InterruptedException {
-          LOG.info("Starting web server as: " +
-              UserGroupInformation.getCurrentUser().getUserName());
-
-          int tmpInfoPort = infoSocAddr.getPort();
-          infoServer = new HttpServer("secondary", infoBindAddress, tmpInfoPort,
-              tmpInfoPort == 0, conf,
-              new AccessControlList(conf.get(DFS_ADMIN, " ")));
-
-          if(UserGroupInformation.isSecurityEnabled()) {
-            SecurityUtil.initKrb5CipherSuites();
-            InetSocketAddress secInfoSocAddr =
-                NetUtils.createSocketAddr(infoBindAddress + ":"+ conf.getInt(
-                    DFS_NAMENODE_SECONDARY_HTTPS_PORT_KEY,
-                    DFS_NAMENODE_SECONDARY_HTTPS_PORT_DEFAULT));
-            imagePort = secInfoSocAddr.getPort();
-            infoServer.addSslListener(secInfoSocAddr, conf, false, true);
+    int tmpInfoPort = infoSocAddr.getPort();
+    infoServer = new HttpServer("secondary", infoBindAddress, tmpInfoPort,
+        tmpInfoPort == 0, conf,
+        new AccessControlList(conf.get(DFS_ADMIN, " "))) {
+      {
+        if (UserGroupInformation.isSecurityEnabled()) {
+          Map<String, String> params = new HashMap<String, String>();
+          String principalInConf = conf.get(DFSConfigKeys.DFS_SECONDARY_NAMENODE_INTERNAL_SPENGO_USER_NAME_KEY);
+          if (principalInConf != null && !principalInConf.isEmpty()) {
+            params.put("kerberos.principal",
+                SecurityUtil.getServerPrincipal(principalInConf, infoSocAddr.getHostName()));
           }
-
-          infoServer.setAttribute("secondary.name.node", SecondaryNameNode.this);
-          infoServer.setAttribute("name.system.image", checkpointImage);
-          infoServer.setAttribute(JspHelper.CURRENT_CONF, conf);
-          infoServer.addInternalServlet("getimage", "/getimage",
-              GetImageServlet.class, true);
-          infoServer.start();
-          return infoServer;
+          String httpKeytab = conf.get(DFSConfigKeys.DFS_SECONDARY_NAMENODE_KEYTAB_FILE_KEY);
+          if (httpKeytab != null && !httpKeytab.isEmpty()) {
+            params.put("kerberos.keytab", httpKeytab);
+          }
+          params.put(AuthenticationFilter.AUTH_TYPE, "kerberos");
+
+          defineFilter(webAppContext, SPNEGO_FILTER, AuthenticationFilter.class.getName(),
+              params, null);
         }
-      });
-    } catch (InterruptedException e) {
-      throw new RuntimeException(e);
-    }
-    
+      }
+    };
+    infoServer.setAttribute("secondary.name.node", this);
+    infoServer.setAttribute("name.system.image", checkpointImage);
+    infoServer.setAttribute(JspHelper.CURRENT_CONF, conf);
+    infoServer.addInternalServlet("getimage", "/getimage",
+        GetImageServlet.class, true);
+    infoServer.start();
+    LOG.info("Web server init done");
 
    // The web-server port can be ephemeral... ensure we have the correct info
    infoPort = infoServer.getPort();
-    if (!UserGroupInformation.isSecurityEnabled()) {
-      imagePort = infoPort;
-    }
-    
-    conf.set(DFS_NAMENODE_SECONDARY_HTTP_ADDRESS_KEY, infoBindAddress + ":" +infoPort); 
-    LOG.info("Secondary Web-server up at: " + infoBindAddress + ":" +infoPort);
-    LOG.info("Secondary image servlet up at: " + infoBindAddress + ":" + imagePort);
+
+    conf.set(DFS_NAMENODE_SECONDARY_HTTP_ADDRESS_KEY, infoBindAddress + ":" + infoPort);
+    LOG.info("Secondary Web-server up at: " + infoBindAddress + ":" + infoPort);
     LOG.info("Checkpoint Period :" + checkpointConf.getPeriod() + " secs " +
-        "(" + checkpointConf.getPeriod()/60 + " min)");
+        "(" + checkpointConf.getPeriod() / 60 + " min)");
     LOG.info("Log Size Trigger :" + checkpointConf.getTxnCount() + " txns");
   }
@@ -434,7 +420,7 @@ public class SecondaryNameNode implement
       throw new IOException("This is not a DFS");
     }
 
-    String configuredAddress = DFSUtil.getInfoServer(null, conf, true);
+    String configuredAddress = DFSUtil.getInfoServer(null, conf, false);
     String address = DFSUtil.substituteForWildcardAddress(configuredAddress,
         fsName.getHost());
     LOG.debug("Will connect to NameNode at HTTP address: " + address);
@@ -446,7 +432,7 @@ public class SecondaryNameNode implement
    * for image transfers
    */
   private InetSocketAddress getImageListenAddress() {
-    return new InetSocketAddress(infoBindAddress, imagePort);
+    return new InetSocketAddress(infoBindAddress, infoPort);
   }
 
   /**
@@ -507,7 +493,7 @@ public class SecondaryNameNode implement
 
   /**
-   * @param argv The parameters passed to this program.
+   * @param opts The parameters passed to this program.
    * @exception Exception if the filesystem does not exist.
    * @return 0 on success, non zero on error.
    */
@@ -709,7 +695,7 @@ public class SecondaryNameNode implement
    * Construct a checkpoint image.
    * @param conf Node configuration.
    * @param imageDirs URIs of storage for image.
-   * @param editDirs URIs of storage for edit logs.
+   * @param editsDirs URIs of storage for edit logs.
    * @throws IOException If storage cannot be access.
    */
   CheckpointStorage(Configuration conf,
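Note: both servers in this commit pass tmpInfoPort == 0 (or infoPort == 0) as the find-port flag and then read the real port back with getPort() once the server is up, because binding to port 0 asks the OS for an ephemeral port. A self-contained sketch of that pattern using only java.net.ServerSocket, with no Hadoop types:

    import java.io.IOException;
    import java.net.ServerSocket;

    public class EphemeralPortSketch {
      public static void main(String[] args) throws IOException {
        // Port 0 asks the kernel for any free port, the same trick the
        // HttpServer constructors above use when the configured port is 0.
        ServerSocket server = new ServerSocket(0);
        try {
          // The actual port is only known after the bind; read it back,
          // just as the patch re-reads infoServer.getPort() before
          // publishing DFS_NAMENODE_SECONDARY_HTTP_ADDRESS_KEY.
          int actualPort = server.getLocalPort();
          System.out.println("Web server would be up at port " + actualPort);
        } finally {
          server.close();
        }
      }
    }

Publishing the re-read address back into the Configuration is what lets other components discover the real endpoint when the configured port was ephemeral.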
Modified: hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/TransferFsImage.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/TransferFsImage.java?rev=1335791&r1=1335790&r2=1335791&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/TransferFsImage.java (original)
+++ hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/TransferFsImage.java Tue May 8 21:57:58 2012
@@ -201,19 +201,17 @@ public class TransferFsImage {
       String queryString, List<File> localPaths,
       NNStorage dstStorage, boolean getChecksum) throws IOException {
     byte[] buf = new byte[HdfsConstants.IO_FILE_BUFFER_SIZE];
-    String proto = UserGroupInformation.isSecurityEnabled() ? "https://" : "http://";
-    StringBuilder str = new StringBuilder(proto+nnHostPort+"/getimage?");
-    str.append(queryString);
+    String str = "http://" + nnHostPort + "/getimage?" +
+        queryString;
+    LOG.info("Opening connection to " + str);
     //
     // open connection to remote server
     //
-    URL url = new URL(str.toString());
-
-    // Avoid Krb bug with cross-realm hosts
-    SecurityUtil.fetchServiceTicket(url);
-    HttpURLConnection connection = (HttpURLConnection) url.openConnection();
-    
+    URL url = new URL(str);
+
+    HttpURLConnection connection = (HttpURLConnection)
+      SecurityUtil.openSecureHttpConnection(url);
+    
     if (connection.getResponseCode() != HttpURLConnection.HTTP_OK) {
       throw new HttpGetFailedException(
           "Image transfer servlet at " + url +

Modified: hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/BootstrapStandby.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/BootstrapStandby.java?rev=1335791&r1=1335790&r2=1335791&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/BootstrapStandby.java (original)
+++ hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/BootstrapStandby.java Tue May 8 21:57:58 2012
@@ -95,7 +95,6 @@ public class BootstrapStandby implements
   static final int ERR_CODE_LOGS_UNAVAILABLE = 6;
 
   public int run(String[] args) throws Exception {
-    SecurityUtil.initKrb5CipherSuites();
     parseArgs(args);
     parseConfAndFindOtherNN();
     NameNode.checkAllowFormat(conf);
@@ -322,7 +321,7 @@ public class BootstrapStandby implements
         "Could not determine valid IPC address for other NameNode (%s)" +
         ", got: %s", otherNNId, otherIpcAddr);
 
-    otherHttpAddr = DFSUtil.getInfoServer(null, otherNode, true);
+    otherHttpAddr = DFSUtil.getInfoServer(null, otherNode, false);
     otherHttpAddr = DFSUtil.substituteForWildcardAddress(otherHttpAddr,
         otherIpcAddr.getHostName());

Modified: hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/StandbyCheckpointer.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/StandbyCheckpointer.java?rev=1335791&r1=1335790&r2=1335791&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/StandbyCheckpointer.java (original)
+++ hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/StandbyCheckpointer.java Tue May 8 21:57:58 2012
@@ -92,7 +92,7 @@ public class StandbyCheckpointer {
   }
 
   private String getHttpAddress(Configuration conf) {
-    String configuredAddr = DFSUtil.getInfoServer(null, conf, true);
+    String configuredAddr = DFSUtil.getInfoServer(null, conf, false);
 
     // Use the hostname from the RPC address as a default, in case
     // the HTTP address is configured to 0.0.0.0.
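Note: with Kerberized SSL removed, the image is fetched over plain HTTP and authentication is handled by the SPNEGO filter on the server side. A self-contained sketch of the client half using only java.net (the patch routes this through SecurityUtil.openSecureHttpConnection, which layers SPNEGO on top; the plain openConnection below stands in for it, and the URL is illustrative):

    import java.io.IOException;
    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class GetImageClientSketch {
      public static void main(String[] args) throws IOException {
        // Illustrative address; a real client builds this from the
        // NameNode's configured HTTP (no longer HTTPS) info server address.
        URL url = new URL("http://namenode.example.com:50070/getimage?getimage=1");

        HttpURLConnection connection = (HttpURLConnection) url.openConnection();

        // Same control flow as the hunk above: fail fast on a non-200 response.
        if (connection.getResponseCode() != HttpURLConnection.HTTP_OK) {
          throw new IOException("Image transfer servlet at " + url
              + " failed with status " + connection.getResponseCode());
        }
        InputStream in = connection.getInputStream();
        try {
          byte[] buf = new byte[64 * 1024];
          long total = 0;
          int n;
          while ((n = in.read(buf)) > 0) {
            total += n; // a real client writes these bytes to its storage dirs
          }
          System.out.println("Downloaded " + total + " bytes");
        } finally {
          in.close();
        }
      }
    }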
Modified: hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java?rev=1335791&r1=1335790&r2=1335791&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java (original)
+++ hadoop/common/branches/HDFS-3092/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java Tue May 8 21:57:58 2012
@@ -504,7 +504,7 @@ public class DFSAdmin extends FsShell {
    */
   public int fetchImage(String[] argv, int idx) throws IOException {
     String infoServer = DFSUtil.getInfoServer(
-        HAUtil.getAddressOfActive(getDFS()), getConf(), true);
+        HAUtil.getAddressOfActive(getDFS()), getConf(), false);
     TransferFsImage.downloadMostRecentImageToDirectory(infoServer,
         new File(argv[idx]));
     return 0;
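Note: every call site touched by this merge flips the same trailing boolean of DFSUtil.getInfoServer from true to false. The helper's implementation is not part of this diff; the sketch below is an assumed reading of what that flag selects (the http/https key pair and default ports match the standard HDFS configuration of this era, but treat the helper itself as hypothetical):

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical stand-in for DFSUtil.getInfoServer, to make the flag's
    // meaning concrete: true picks the HTTPS info address, false the HTTP
    // one. The merge flips callers to false because image transfer now
    // rides plain HTTP plus SPNEGO instead of Kerberized SSL.
    public class InfoServerSketch {
      static String getInfoServer(Map<String, String> conf, boolean httpsAddress) {
        String key = httpsAddress ? "dfs.namenode.https-address"
                                  : "dfs.namenode.http-address";
        String dflt = httpsAddress ? "0.0.0.0:50470" : "0.0.0.0:50070";
        String v = conf.get(key);
        return v != null ? v : dflt;
      }

      public static void main(String[] args) {
        Map<String, String> conf = new HashMap<String, String>();
        conf.put("dfs.namenode.http-address", "nn.example.com:50070");
        System.out.println(getInfoServer(conf, false)); // nn.example.com:50070
      }
    }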