From: atm@apache.org
To: common-commits@hadoop.apache.org
Reply-To: common-dev@hadoop.apache.org
Subject: svn commit: r1227775 - in /hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common: ./ src/main/docs/ src/main/java/ src/main/java/org/apache/hadoop/fs/ src/main/java/org/apache/hadoop/ipc/ src/main/java/org/apache/hadoop/metrics/spi/ src/...
Date: Thu, 05 Jan 2012 19:21:04 -0000
Message-Id: <20120105192105.A8D7223888E4@eris.apache.org>

Author: atm
Date: Thu Jan 5 19:21:01 2012
New Revision: 1227775

URL: http://svn.apache.org/viewvc?rev=1227775&view=rev
Log:
Merge trunk into HA branch.
Added:
    hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileSystemCanonicalization.java
      - copied unchanged from r1227765, hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileSystemCanonicalization.java
    hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/NetUtilsTestResolver.java
      - copied unchanged from r1227765, hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/NetUtilsTestResolver.java
Modified:
    hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/CHANGES.txt   (contents, props changed)
    hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/docs/   (props changed)
    hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/   (props changed)
    hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
    hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
    hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
    hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java
    hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
    hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
    hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics/spi/Util.java
    hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/util/Servers.java
    hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
    hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SecurityUtil.java
    hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/test/core/   (props changed)
    hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/MiniRPCBenchmark.java
    hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java
    hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java
    hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestDoAsEffectiveUser.java
    hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestSecurityUtil.java

Modified: hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/CHANGES.txt
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1227775&r1=1227774&r2=1227775&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/CHANGES.txt (original)
+++ hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/CHANGES.txt Thu Jan 5 19:21:01 2012
@@ -79,11 +79,19 @@ Trunk (unreleased changes)
   HADOOP-7899. Generate proto java files as part of the build. (tucu)
 
-  HADOOP-7574. Improve FSShell -stat, add user/group elements (XieXianshan via harsh)
+  HADOOP-7574. Improve FSShell -stat, add user/group elements.
+  (XieXianshan via harsh)
 
-  HADOOP-7348. Change 'addnl' in getmerge util to be a flag '-nl' instead (XieXianshan via harsh)
+  HADOOP-7348. Change 'addnl' in getmerge util to be a flag '-nl' instead.
+  (XieXianshan via harsh)
 
-  HADOOP-7919. Remove the unused hadoop.logfile.* properties from the core-default.xml file. (harsh)
+  HADOOP-7919. Remove the unused hadoop.logfile.* properties from the
+  core-default.xml file. (harsh)
+
+  HADOOP-7808. Port HADOOP-7510 - Add configurable option to use original
+  hostname in token instead of IP to allow server IP change.
+  (Daryn Sharp via suresh)
+
   BUGS
 
@@ -241,6 +249,9 @@ Release 0.23.1 - Unreleased
   HADOOP-7948. Shell scripts created by hadoop-dist/pom.xml to build tar do not
   properly propagate failure. (cim_michajlomatijkiw via tucu)
 
+  HADOOP-7949. Updated maxIdleTime default in the code to match
+  core-default.xml (eli)
+
 Release 0.23.0 - 2011-11-01
 
  INCOMPATIBLE CHANGES

Propchange: hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/CHANGES.txt
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Thu Jan 5 19:21:01 2012
@@ -1,5 +1,5 @@
 /hadoop/common/branches/yahoo-merge/CHANGES.txt:1079157,1079163-1079164,1079167
-/hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt:1161333-1227258
+/hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt:1161333-1227765
 /hadoop/core/branches/branch-0.18/CHANGES.txt:727226
 /hadoop/core/branches/branch-0.19/CHANGES.txt:713112
 /hadoop/core/trunk/CHANGES.txt:776175-785643,785929-786278

Propchange: hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/docs/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Thu Jan 5 19:21:01 2012
@@ -1,2 +1,2 @@
-/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/docs:1152502-1227258
+/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/docs:1152502-1227765
 /hadoop/core/branches/branch-0.19/src/docs:713112

Propchange: hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Thu Jan 5 19:21:01 2012
@@ -1,3 +1,3 @@
-/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java:1152502-1227258
+/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java:1152502-1227765
 /hadoop/core/branches/branch-0.19/core/src/java:713112
 /hadoop/core/trunk/src/core:776175-785643,785929-786278
Modified: hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java?rev=1227775&r1=1227774&r2=1227775&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java (original)
+++ hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java Thu Jan 5 19:21:01 2012
@@ -51,7 +51,7 @@ public class CommonConfigurationKeys ext
   /** How often does RPC client send pings to RPC server */
   public static final String IPC_PING_INTERVAL_KEY = "ipc.ping.interval";
   /** Default value for IPC_PING_INTERVAL_KEY */
-  public static final int IPC_PING_INTERVAL_DEFAULT = 60000;
+  public static final int IPC_PING_INTERVAL_DEFAULT = 60000; // 1 min
   /** Enables pings from RPC client to the server */
   public static final String IPC_CLIENT_PING_KEY = "ipc.client.ping";
   /** Default value of IPC_CLIENT_PING_KEY */
@@ -114,5 +114,11 @@ public class CommonConfigurationKeys ext
   public static final String HADOOP_SECURITY_SERVICE_AUTHORIZATION_REFRESH_USER_MAPPINGS =
       "security.refresh.user.mappings.protocol.acl";
+
+  public static final String  HADOOP_SECURITY_TOKEN_SERVICE_USE_IP =
+      "hadoop.security.token.service.use_ip";
+  public static final boolean HADOOP_SECURITY_TOKEN_SERVICE_USE_IP_DEFAULT =
+      true;
+
 }

Modified: hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java?rev=1227775&r1=1227774&r2=1227775&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java (original)
+++ hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java Thu Jan 5 19:21:01 2012
@@ -165,7 +165,7 @@ public class CommonConfigurationKeysPubl
   public static final String  IPC_CLIENT_CONNECTION_MAXIDLETIME_KEY =
     "ipc.client.connection.maxidletime";
   /** Default value for IPC_CLIENT_CONNECTION_MAXIDLETIME_KEY */
-  public static final int     IPC_CLIENT_CONNECTION_MAXIDLETIME_DEFAULT = 10000;
+  public static final int     IPC_CLIENT_CONNECTION_MAXIDLETIME_DEFAULT = 10000; // 10s
   /** See core-default.xml */
   public static final String  IPC_CLIENT_CONNECT_MAX_RETRIES_KEY =
     "ipc.client.connect.max.retries";
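[Reviewer note, not part of the commit: the new key pair above can be read back like any other Configuration boolean. A minimal sketch, assuming hadoop-common on the classpath:]

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.CommonConfigurationKeys;

    public class TokenServiceUseIpCheck {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // defaults to true, i.e. token services keep the "ip:port" form
        boolean useIp = conf.getBoolean(
            CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP,
            CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP_DEFAULT);
        System.out.println("token services use IP: " + useIp);
      }
    }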
Modified: hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java?rev=1227775&r1=1227774&r2=1227775&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java (original)
+++ hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java Thu Jan 5 19:21:01 2012
@@ -47,6 +47,7 @@ import org.apache.hadoop.conf.Configured
 import org.apache.hadoop.fs.Options.Rename;
 import org.apache.hadoop.fs.permission.FsPermission;
 import org.apache.hadoop.io.MultipleIOException;
+import org.apache.hadoop.net.NetUtils;
 import org.apache.hadoop.security.Credentials;
 import org.apache.hadoop.security.SecurityUtil;
 import org.apache.hadoop.security.UserGroupInformation;
@@ -187,6 +188,15 @@ public abstract class FileSystem extends
   public abstract URI getUri();
 
   /**
+   * Resolve the uri's hostname and add the default port if not in the uri
+   * @return URI
+   * @see NetUtils#getCanonicalUri(URI, int)
+   */
+  protected URI getCanonicalUri() {
+    return NetUtils.getCanonicalUri(getUri(), getDefaultPort());
+  }
+
+  /**
    * Get the default port for this file system.
    * @return the default port or 0 if there isn't one
    */
@@ -195,8 +205,13 @@ public abstract class FileSystem extends
   }
 
   /**
-   * Get a canonical name for this file system.
-   * @return a URI string that uniquely identifies this file system
+   * Get a canonical service name for this file system. The token cache is
+   * the only user of this value, and uses it to lookup this filesystem's
+   * service tokens. The token cache will not attempt to acquire tokens if the
+   * service is null.
+   * @return a service string that uniquely identifies this file system, null
+   *         if the filesystem does not implement tokens
+   * @see SecurityUtil#buildDTServiceName(URI, int)
    */
   public String getCanonicalServiceName() {
     return SecurityUtil.buildDTServiceName(getUri(), getDefaultPort());
@@ -487,32 +502,31 @@ public abstract class FileSystem extends
    */
   protected void checkPath(Path path) {
     URI uri = path.toUri();
-    if (uri.getScheme() == null) // fs is relative
-      return;
-    String thisScheme = this.getUri().getScheme();
     String thatScheme = uri.getScheme();
-    String thisAuthority = this.getUri().getAuthority();
-    String thatAuthority = uri.getAuthority();
+    if (thatScheme == null) // fs is relative
+      return;
+    URI thisUri = getCanonicalUri();
+    String thisScheme = thisUri.getScheme();
     //authority and scheme are not case sensitive
     if (thisScheme.equalsIgnoreCase(thatScheme)) {// schemes match
-      if (thisAuthority == thatAuthority ||       // & authorities match
-          (thisAuthority != null &&
-           thisAuthority.equalsIgnoreCase(thatAuthority)))
-        return;
-
+      String thisAuthority = thisUri.getAuthority();
+      String thatAuthority = uri.getAuthority();
       if (thatAuthority == null &&                // path's authority is null
           thisAuthority != null) {                // fs has an authority
-        URI defaultUri = getDefaultUri(getConf()); // & is the conf default
-        if (thisScheme.equalsIgnoreCase(defaultUri.getScheme()) &&
-            thisAuthority.equalsIgnoreCase(defaultUri.getAuthority()))
-          return;
-        try {                                     // or the default fs's uri
-          defaultUri = get(getConf()).getUri();
-        } catch (IOException e) {
-          throw new RuntimeException(e);
+        URI defaultUri = getDefaultUri(getConf());
+        if (thisScheme.equalsIgnoreCase(defaultUri.getScheme())) {
+          uri = defaultUri; // schemes match, so use this uri instead
+        } else {
+          uri = null; // can't determine auth of the path
         }
-        if (thisScheme.equalsIgnoreCase(defaultUri.getScheme()) &&
-            thisAuthority.equalsIgnoreCase(defaultUri.getAuthority()))
+      }
+      if (uri != null) {
+        // canonicalize uri before comparing with this fs
+        uri = NetUtils.getCanonicalUri(uri, getDefaultPort());
+        thatAuthority = uri.getAuthority();
+        if (thisAuthority == thatAuthority ||       // authorities match
+            (thisAuthority != null &&
+             thisAuthority.equalsIgnoreCase(thatAuthority)))
           return;
       }
     }
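[Reviewer note, not part of the commit: the practical effect of routing checkPath() through getCanonicalUri() is that a path URI without a port can now match a filesystem URI carrying the default port. A rough sketch, assuming an HDFS-style authority and a stable resolver; hostname and port are made up:]

    import java.net.URI;
    import org.apache.hadoop.net.NetUtils;

    public class CanonicalUriDemo {
      public static void main(String[] args) {
        // a filesystem at "hdfs://nn" with default port 8020 now accepts a
        // path at "hdfs://nn:8020": both canonicalize to the same authority
        URI fsUri   = NetUtils.getCanonicalUri(URI.create("hdfs://nn"), 8020);
        URI pathUri = NetUtils.getCanonicalUri(URI.create("hdfs://nn:8020"), 8020);
        System.out.println(fsUri.equals(pathUri)); // true, given a stable resolver
      }
    }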
Modified: hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java?rev=1227775&r1=1227774&r2=1227775&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java (original)
+++ hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java Thu Jan 5 19:21:01 2012
@@ -77,6 +77,15 @@ public class FilterFileSystem extends Fi
     return fs.getUri();
   }
 
+  /**
+   * Returns a qualified URI whose scheme and authority identify this
+   * FileSystem.
+   */
+  @Override
+  protected URI getCanonicalUri() {
+    return fs.getCanonicalUri();
+  }
+
   /** Make sure that a path specifies a FileSystem. */
   public Path makeQualified(Path path) {
     return fs.makeQualified(path);

Modified: hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java?rev=1227775&r1=1227774&r2=1227775&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java (original)
+++ hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java Thu Jan 5 19:21:01 2012
@@ -48,6 +48,7 @@ import org.apache.commons.logging.*;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
 import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 import org.apache.hadoop.ipc.RpcPayloadHeader.*;
 import org.apache.hadoop.io.IOUtils;
@@ -88,8 +89,6 @@ public class Client {
   private SocketFactory socketFactory;           // how to create sockets
   private int refCount = 1;
 
-  final static String PING_INTERVAL_NAME = "ipc.ping.interval";
-  final static int DEFAULT_PING_INTERVAL = 60000; // 1 min
   final static int PING_CALL_ID = -1;
 
   /**
@@ -99,7 +98,7 @@ public class Client {
    * @param pingInterval the ping interval
    */
   final public static void setPingInterval(Configuration conf, int pingInterval) {
-    conf.setInt(PING_INTERVAL_NAME, pingInterval);
+    conf.setInt(CommonConfigurationKeys.IPC_PING_INTERVAL_KEY, pingInterval);
   }
 
   /**
@@ -110,7 +109,8 @@ public class Client {
    * @return the ping interval
    */
   final static int getPingInterval(Configuration conf) {
-    return conf.getInt(PING_INTERVAL_NAME, DEFAULT_PING_INTERVAL);
+    return conf.getInt(CommonConfigurationKeys.IPC_PING_INTERVAL_KEY,
+        CommonConfigurationKeys.IPC_PING_INTERVAL_DEFAULT);
   }
 
   /**
@@ -123,7 +123,7 @@ public class Client {
    * @return the timeout period in milliseconds. -1 if no timeout value is set
    */
   final public static int getTimeout(Configuration conf) {
-    if (!conf.getBoolean("ipc.client.ping", true)) {
+    if (!conf.getBoolean(CommonConfigurationKeys.IPC_CLIENT_PING_KEY, true)) {
       return getPingInterval(conf);
     }
     return -1;
   }
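[Reviewer note, not part of the commit: ping configuration is now reachable through named constants rather than string literals. A small sketch of the interplay between setPingInterval() and getTimeout(), with an arbitrary 30s interval:]

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.CommonConfigurationKeys;
    import org.apache.hadoop.ipc.Client;

    public class PingIntervalDemo {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        Client.setPingInterval(conf, 30000);   // writes ipc.ping.interval
        conf.setBoolean(CommonConfigurationKeys.IPC_CLIENT_PING_KEY, false);
        // with pings disabled, the ping interval doubles as the read timeout
        System.out.println(Client.getTimeout(conf)); // 30000; -1 while pings are on
      }
    }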
@@ -425,7 +425,7 @@ public class Client {
    */
   private synchronized boolean updateAddress() throws IOException {
     // Do a fresh lookup with the old host name.
-    InetSocketAddress currentAddr = new InetSocketAddress(
+    InetSocketAddress currentAddr = NetUtils.createSocketAddrForHost(
         server.getHostName(), server.getPort());
 
     if (!server.equals(currentAddr)) {
@@ -1347,15 +1347,19 @@ public class Client {
         Class<?> protocol, UserGroupInformation ticket, int rpcTimeout,
         Configuration conf) throws IOException {
       String remotePrincipal = getRemotePrincipal(conf, addr, protocol);
-      boolean doPing = conf.getBoolean("ipc.client.ping", true);
+      boolean doPing =
+        conf.getBoolean(CommonConfigurationKeys.IPC_CLIENT_PING_KEY, true);
       return new ConnectionId(addr, protocol, ticket,
           rpcTimeout, remotePrincipal,
-          conf.getInt("ipc.client.connection.maxidletime", 10000), // 10s
-          conf.getInt("ipc.client.connect.max.retries", 10),
           conf.getInt(
             CommonConfigurationKeysPublic.IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SOCKET_TIMEOUTS_KEY,
             CommonConfigurationKeysPublic.IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SOCKET_TIMEOUTS_DEFAULT),
-          conf.getBoolean("ipc.client.tcpnodelay", false),
+          conf.getInt(CommonConfigurationKeysPublic.IPC_CLIENT_CONNECTION_MAXIDLETIME_KEY,
+              CommonConfigurationKeysPublic.IPC_CLIENT_CONNECTION_MAXIDLETIME_DEFAULT),
+          conf.getInt(CommonConfigurationKeysPublic.IPC_CLIENT_CONNECT_MAX_RETRIES_KEY,
+              CommonConfigurationKeysPublic.IPC_CLIENT_CONNECT_MAX_RETRIES_DEFAULT),
+          conf.getBoolean(CommonConfigurationKeysPublic.IPC_CLIENT_TCPNODELAY_KEY,
+              CommonConfigurationKeysPublic.IPC_CLIENT_TCPNODELAY_DEFAULT),
          doPing,
          (doPing ? Client.getPingInterval(conf) : 0));
     }
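[Reviewer note, not part of the commit: client-side connection knobs follow the same constants-based pattern; a brief sketch using the keys referenced in the hunk above, with arbitrary values:]

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.CommonConfigurationKeysPublic;

    public class IpcClientTuning {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // drop idle connections after 20s instead of the 10s default
        conf.setInt(
            CommonConfigurationKeysPublic.IPC_CLIENT_CONNECTION_MAXIDLETIME_KEY, 20000);
        // disable Nagle's algorithm on client sockets
        conf.setBoolean(
            CommonConfigurationKeysPublic.IPC_CLIENT_TCPNODELAY_KEY, true);
      }
    }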
= conf.getInt("ipc.client.kill.max", 10); - this.thresholdIdleConnections = conf.getInt("ipc.client.idlethreshold", 4000); + this.maxIdleTime = 2 * conf.getInt( + CommonConfigurationKeysPublic.IPC_CLIENT_CONNECTION_MAXIDLETIME_KEY, + CommonConfigurationKeysPublic.IPC_CLIENT_CONNECTION_MAXIDLETIME_DEFAULT); + this.maxConnectionsToNuke = conf.getInt( + CommonConfigurationKeysPublic.IPC_CLIENT_KILL_MAX_KEY, + CommonConfigurationKeysPublic.IPC_CLIENT_KILL_MAX_DEFAULT); + this.thresholdIdleConnections = conf.getInt( + CommonConfigurationKeysPublic.IPC_CLIENT_IDLETHRESHOLD_KEY, + CommonConfigurationKeysPublic.IPC_CLIENT_IDLETHRESHOLD_DEFAULT); this.secretManager = (SecretManager) secretManager; this.authorize = conf.getBoolean(CommonConfigurationKeys.HADOOP_SECURITY_AUTHORIZATION, @@ -1729,7 +1738,9 @@ public abstract class Server { this.port = listener.getAddress().getPort(); this.rpcMetrics = RpcMetrics.create(this); this.rpcDetailedMetrics = RpcDetailedMetrics.create(this.port); - this.tcpNoDelay = conf.getBoolean("ipc.server.tcpnodelay", false); + this.tcpNoDelay = conf.getBoolean( + CommonConfigurationKeysPublic.IPC_SERVER_TCPNODELAY_KEY, + CommonConfigurationKeysPublic.IPC_SERVER_TCPNODELAY_DEFAULT); // Create the responder here responder = new Responder(); Modified: hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics/spi/Util.java URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics/spi/Util.java?rev=1227775&r1=1227774&r2=1227775&view=diff ============================================================================== --- hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics/spi/Util.java (original) +++ hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics/spi/Util.java Thu Jan 5 19:21:01 2012 @@ -28,6 +28,7 @@ import java.util.List; import org.apache.hadoop.classification.InterfaceAudience; import org.apache.hadoop.classification.InterfaceStability; +import org.apache.hadoop.net.NetUtils; /** * Static utility methods @@ -56,14 +57,7 @@ public class Util { else { String[] specStrings = specs.split("[ ,]+"); for (String specString : specStrings) { - int colon = specString.indexOf(':'); - if (colon < 0 || colon == specString.length() - 1) { - result.add(new InetSocketAddress(specString, defaultPort)); - } else { - String hostname = specString.substring(0, colon); - int port = Integer.parseInt(specString.substring(colon+1)); - result.add(new InetSocketAddress(hostname, port)); - } + result.add(NetUtils.createSocketAddr(specString, defaultPort)); } } return result; Modified: hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/util/Servers.java URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/util/Servers.java?rev=1227775&r1=1227774&r2=1227775&view=diff ============================================================================== --- hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/util/Servers.java (original) +++ hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/util/Servers.java Thu Jan 5 19:21:01 2012 @@ -28,6 +28,7 @@ import com.google.common.collect.Lists; import 
Modified: hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/util/Servers.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/util/Servers.java?rev=1227775&r1=1227774&r2=1227775&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/util/Servers.java (original)
+++ hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/util/Servers.java Thu Jan 5 19:21:01 2012
@@ -28,6 +28,7 @@ import com.google.common.collect.Lists;
 
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.net.NetUtils;
 
 /**
  * Helpers to handle server addresses
@@ -57,14 +58,7 @@ public class Servers {
     else {
       String[] specStrings = specs.split("[ ,]+");
       for (String specString : specStrings) {
-        int colon = specString.indexOf(':');
-        if (colon < 0 || colon == specString.length() - 1) {
-          result.add(new InetSocketAddress(specString, defaultPort));
-        } else {
-          String hostname = specString.substring(0, colon);
-          int port = Integer.parseInt(specString.substring(colon+1));
-          result.add(new InetSocketAddress(hostname, port));
-        }
+        result.add(NetUtils.createSocketAddr(specString, defaultPort));
      }
    }
    return result;

Modified: hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java?rev=1227775&r1=1227774&r2=1227775&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java (original)
+++ hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java Thu Jan 5 19:21:01 2012
@@ -37,6 +37,7 @@ import java.nio.channels.SocketChannel;
 import java.util.Map.Entry;
 import java.util.regex.Pattern;
 import java.util.*;
+import java.util.concurrent.ConcurrentHashMap;
 
 import javax.net.SocketFactory;
 
@@ -45,11 +46,17 @@ import org.apache.commons.logging.LogFac
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.ipc.Server;
 import org.apache.hadoop.ipc.VersionedProtocol;
+import org.apache.hadoop.security.SecurityUtil;
 import org.apache.hadoop.util.ReflectionUtils;
 
+import com.google.common.annotations.VisibleForTesting;
+
+//this will need to be replaced someday when there is a suitable replacement
+import sun.net.dns.ResolverConfiguration;
+import sun.net.util.IPAddressUtil;
+
 @InterfaceAudience.LimitedPrivate({"HDFS", "MapReduce"})
 @InterfaceStability.Unstable
 public class NetUtils {
@@ -65,6 +72,26 @@ public class NetUtils {
   /** Base URL of the Hadoop Wiki: {@value} */
   public static final String HADOOP_WIKI = "http://wiki.apache.org/hadoop/";
 
+  private static HostResolver hostResolver;
+
+  static {
+    // SecurityUtils requires a more secure host resolver if tokens are
+    // using hostnames
+    setUseQualifiedHostResolver(!SecurityUtil.getTokenServiceUseIp());
+  }
+
+  /**
+   * This method is intended for use only by SecurityUtils!
+   * @param flag where the qualified or standard host resolver is used
+   *        to create socket addresses
+   */
+  @InterfaceAudience.Private
+  public static void setUseQualifiedHostResolver(boolean flag) {
+    hostResolver = flag
+        ? new QualifiedHostResolver()
+        : new StandardHostResolver();
+  }
+
   /**
    * Get the socket factory for the given class according to its
    * configuration parameter
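[Reviewer note, not part of the commit: the resolver switch above is normally driven by SecurityUtil (later in this diff), but its effect can be sketched directly. The hostname and search-domain behavior below are assumptions:]

    import java.net.InetSocketAddress;
    import org.apache.hadoop.net.NetUtils;

    public class ResolverToggleDemo {
      public static void main(String[] args) {
        // false -> StandardHostResolver (plain InetAddress.getByName)
        // true  -> QualifiedHostResolver (qualifies hosts via the DNS search list)
        NetUtils.setUseQualifiedHostResolver(true);
        // with an assumed search domain of example.com, "nn" resolves and then
        // reports itself as nn.example.com rather than the bare short name
        InetSocketAddress addr = NetUtils.createSocketAddrForHost("nn", 8020);
        System.out.println(addr);
      }
    }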
@@ -178,43 +205,256 @@ public class NetUtils {
       throw new IllegalArgumentException("Target address cannot be null." +
           helpText);
     }
-    int colonIndex = target.indexOf(':');
-    if (colonIndex < 0 && defaultPort == -1) {
-      throw new RuntimeException("Not a host:port pair: " + target +
-          helpText);
+    boolean hasScheme = target.contains("://");
+    URI uri = null;
+    try {
+      uri = hasScheme ? URI.create(target) : URI.create("dummyscheme://"+target);
+    } catch (IllegalArgumentException e) {
+      throw new IllegalArgumentException(
+          "Does not contain a valid host:port authority: " + target + helpText
+      );
+    }
+
+    String host = uri.getHost();
+    int port = uri.getPort();
+    if (port == -1) {
+      port = defaultPort;
+    }
+    String path = uri.getPath();
+
+    if ((host == null) || (port < 0) ||
+        (!hasScheme && path != null && !path.isEmpty()))
+    {
+      throw new IllegalArgumentException(
+          "Does not contain a valid host:port authority: " + target + helpText
+      );
+    }
+    return createSocketAddrForHost(host, port);
+  }
+
+  /**
+   * Create a socket address with the given host and port.  The hostname
+   * might be replaced with another host that was set via
+   * {@link #addStaticResolution(String, String)}.  The value of
+   * hadoop.security.token.service.use_ip will determine whether the
+   * standard java host resolver is used, or if the fully qualified resolver
+   * is used.
+   * @param host the hostname or IP use to instantiate the object
+   * @param port the port number
+   * @return InetSocketAddress
+   */
+  public static InetSocketAddress createSocketAddrForHost(String host, int port) {
+    String staticHost = getStaticResolution(host);
+    String resolveHost = (staticHost != null) ? staticHost : host;
+
+    InetSocketAddress addr;
+    try {
+      InetAddress iaddr = hostResolver.getByName(resolveHost);
+      // if there is a static entry for the host, make the returned
+      // address look like the original given host
+      if (staticHost != null) {
+        iaddr = InetAddress.getByAddress(host, iaddr.getAddress());
+      }
+      addr = new InetSocketAddress(iaddr, port);
+    } catch (UnknownHostException e) {
+      addr = InetSocketAddress.createUnresolved(host, port);
     }
-    String hostname;
-    int port = -1;
-    if (!target.contains("/")) {
-      if (colonIndex == -1) {
-        hostname = target;
+    return addr;
+  }
+
+  interface HostResolver {
+    InetAddress getByName(String host) throws UnknownHostException;
+  }
+
+  /**
+   * Uses standard java host resolution
+   */
+  static class StandardHostResolver implements HostResolver {
+    public InetAddress getByName(String host) throws UnknownHostException {
+      return InetAddress.getByName(host);
+    }
+  }
+
+  /**
+   * This an alternate resolver with important properties that the standard
+   * java resolver lacks:
+   * 1) The hostname is fully qualified.  This avoids security issues if not
+   *    all hosts in the cluster do not share the same search domains.  It
+   *    also prevents other hosts from performing unnecessary dns searches.
+   *    In contrast, InetAddress simply returns the host as given.
+   * 2) The InetAddress is instantiated with an exact host and IP to prevent
+   *    further unnecessary lookups.  InetAddress may perform an unnecessary
+   *    reverse lookup for an IP.
+   * 3) A call to getHostName() will always return the qualified hostname, or
+   *    more importantly, the IP if instantiated with an IP.  This avoids
+   *    unnecessary dns timeouts if the host is not resolvable.
+   * 4) Point 3 also ensures that if the host is re-resolved, ex. during a
+   *    connection re-attempt, that a reverse lookup to host and forward
+   *    lookup to IP is not performed since the reverse/forward mappings may
+   *    not always return the same IP.
+   *    If the client initiated a connection
+   *    with an IP, then that IP is all that should ever be contacted.
+   *
+   * NOTE: this resolver is only used if:
+   *       hadoop.security.token.service.use_ip=false
+   */
+  protected static class QualifiedHostResolver implements HostResolver {
+    @SuppressWarnings("unchecked")
+    private List<String> searchDomains =
+        ResolverConfiguration.open().searchlist();
+
+    /**
+     * Create an InetAddress with a fully qualified hostname of the given
+     * hostname.  InetAddress does not qualify an incomplete hostname that
+     * is resolved via the domain search list.
+     * {@link InetAddress#getCanonicalHostName()} will fully qualify the
+     * hostname, but it always return the A record whereas the given hostname
+     * may be a CNAME.
+     *
+     * @param host a hostname or ip address
+     * @return InetAddress with the fully qualified hostname or ip
+     * @throws UnknownHostException if host does not exist
+     */
+    public InetAddress getByName(String host) throws UnknownHostException {
+      InetAddress addr = null;
+
+      if (IPAddressUtil.isIPv4LiteralAddress(host)) {
+        // use ipv4 address as-is
+        byte[] ip = IPAddressUtil.textToNumericFormatV4(host);
+        addr = InetAddress.getByAddress(host, ip);
+      } else if (IPAddressUtil.isIPv6LiteralAddress(host)) {
+        // use ipv6 address as-is
+        byte[] ip = IPAddressUtil.textToNumericFormatV6(host);
+        addr = InetAddress.getByAddress(host, ip);
+      } else if (host.endsWith(".")) {
+        // a rooted host ends with a dot, ex. "host."
+        // rooted hosts never use the search path, so only try an exact lookup
+        addr = getByExactName(host);
+      } else if (host.contains(".")) {
+        // the host contains a dot (domain), ex. "host.domain"
+        // try an exact host lookup, then fallback to search list
+        addr = getByExactName(host);
+        if (addr == null) {
+          addr = getByNameWithSearch(host);
+        }
       } else {
-        // must be the old style <host>:<port>
-        hostname = target.substring(0, colonIndex);
-        String portStr = target.substring(colonIndex + 1);
-        try {
-          port = Integer.parseInt(portStr);
-        } catch (NumberFormatException nfe) {
-          throw new IllegalArgumentException(
-              "Can't parse port '" + portStr + "'"
-              + helpText);
+        // it's a simple host with no dots, ex. "host"
+        // try the search list, then fallback to exact host
+        InetAddress loopback = InetAddress.getByName(null);
+        if (host.equalsIgnoreCase(loopback.getHostName())) {
+          addr = InetAddress.getByAddress(host, loopback.getAddress());
+        } else {
+          addr = getByNameWithSearch(host);
+          if (addr == null) {
+            addr = getByExactName(host);
+          }
         }
       }
-    } else {
-      // a new uri
-      URI addr = new Path(target).toUri();
-      hostname = addr.getHost();
-      port = addr.getPort();
+      // unresolvable!
+      if (addr == null) {
+        throw new UnknownHostException(host);
+      }
+      return addr;
     }
-    if (port == -1) {
-      port = defaultPort;
+
+    InetAddress getByExactName(String host) {
+      InetAddress addr = null;
+      // InetAddress will use the search list unless the host is rooted
+      // with a trailing dot.  The trailing dot will disable any use of the
+      // search path in a lower level resolver.  See RFC 1535.
+      String fqHost = host;
+      if (!fqHost.endsWith(".")) fqHost += ".";
+      try {
+        addr = getInetAddressByName(fqHost);
+        // can't leave the hostname as rooted or other parts of the system
+        // malfunction, ex. kerberos principals are lacking proper host
+        // equivalence for rooted/non-rooted hostnames
+        addr = InetAddress.getByAddress(host, addr.getAddress());
+      } catch (UnknownHostException e) {
+        // ignore, caller will throw if necessary
+      }
+      return addr;
+    }
+
+    InetAddress getByNameWithSearch(String host) {
+      InetAddress addr = null;
+      if (host.endsWith(".")) { // already qualified?
+        addr = getByExactName(host);
+      } else {
+        for (String domain : searchDomains) {
+          String dot = !domain.startsWith(".") ? "." : "";
+          addr = getByExactName(host + dot + domain);
+          if (addr != null) break;
+        }
+      }
+      return addr;
+    }
+
+    // implemented as a separate method to facilitate unit testing
+    InetAddress getInetAddressByName(String host) throws UnknownHostException {
+      return InetAddress.getByName(host);
    }
+
+    void setSearchDomains(String ... domains) {
+      searchDomains = Arrays.asList(domains);
+    }
+  }
+
+  /**
+   * This is for testing only!
+   */
+  @VisibleForTesting
+  static void setHostResolver(HostResolver newResolver) {
+    hostResolver = newResolver;
+  }
-    if (getStaticResolution(hostname) != null) {
-      hostname = getStaticResolution(hostname);
+
+  /**
+   * Resolve the uri's hostname and add the default port if not in the uri
+   * @param uri to resolve
+   * @param defaultPort if none is given
+   * @return URI
+   */
+  public static URI getCanonicalUri(URI uri, int defaultPort) {
+    // skip if there is no authority, ie. "file" scheme or relative uri
+    String host = uri.getHost();
+    if (host == null) {
+      return uri;
+    }
+    String fqHost = canonicalizeHost(host);
+    int port = uri.getPort();
+    // short out if already canonical with a port
+    if (host.equals(fqHost) && port != -1) {
+      return uri;
+    }
+    // reconstruct the uri with the canonical host and port
+    try {
+      uri = new URI(uri.getScheme(), uri.getUserInfo(),
+          fqHost, (port == -1) ? defaultPort : port,
+          uri.getPath(), uri.getQuery(), uri.getFragment());
+    } catch (URISyntaxException e) {
+      throw new IllegalArgumentException(e);
+    }
+    return uri;
+  }
+
+  // cache the canonicalized hostnames;  the cache currently isn't expired,
+  // but the canonicals will only change if the host's resolver configuration
+  // changes
+  private static final ConcurrentHashMap<String, String> canonicalizedHostCache =
+      new ConcurrentHashMap<String, String>();
+
+  private static String canonicalizeHost(String host) {
+    // check if the host has already been canonicalized
+    String fqHost = canonicalizedHostCache.get(host);
+    if (fqHost == null) {
+      try {
+        fqHost = hostResolver.getByName(host).getHostName();
+        // slight race condition, but won't hurt
+        canonicalizedHostCache.put(host, fqHost);
+      } catch (UnknownHostException e) {
+        fqHost = host;
+      }
    }
-    return new InetSocketAddress(hostname, port);
+    return fqHost;
  }

  /**
@@ -279,8 +519,8 @@ public class NetUtils {
   */
  public static InetSocketAddress getConnectAddress(Server server) {
    InetSocketAddress addr = server.getListenerAddress();
-    if (addr.getAddress().getHostAddress().equals("0.0.0.0")) {
-      addr = new InetSocketAddress("127.0.0.1", addr.getPort());
+    if (addr.getAddress().isAnyLocalAddress()) {
+      addr = createSocketAddrForHost("127.0.0.1", addr.getPort());
    }
    return addr;
  }
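[Reviewer note, not part of the commit: summarizing the rewritten createSocketAddr(), which now accepts bare authorities as well as full URIs. Hosts and ports below are arbitrary; the rejected form mirrors the new IllegalArgumentException path:]

    import org.apache.hadoop.net.NetUtils;

    public class SocketAddrForms {
      public static void main(String[] args) {
        System.out.println(NetUtils.createSocketAddr("host:123"));              // explicit port
        System.out.println(NetUtils.createSocketAddr("host", 456));             // default port applied
        System.out.println(NetUtils.createSocketAddr("hdfs://host/path", 456)); // scheme and path allowed
        // a scheme-less spec with a path, e.g. "host/path", now fails with
        // IllegalArgumentException("Does not contain a valid host:port authority: ...")
      }
    }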
Modified: hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SecurityUtil.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SecurityUtil.java?rev=1227775&r1=1227774&r2=1227775&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SecurityUtil.java (original)
+++ hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SecurityUtil.java Thu Jan 5 19:21:01 2012
@@ -35,6 +35,7 @@ import org.apache.commons.logging.LogFac
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
 import org.apache.hadoop.io.Text;
 import org.apache.hadoop.net.NetUtils;
 import org.apache.hadoop.security.token.Token;
@@ -50,6 +51,35 @@ public class SecurityUtil {
   public static final Log LOG = LogFactory.getLog(SecurityUtil.class);
   public static final String HOSTNAME_PATTERN = "_HOST";
 
+  // controls whether buildTokenService will use an ip or host/ip as given
+  // by the user
+  private static boolean useIpForTokenService;
+
+  static {
+    boolean useIp = new Configuration().getBoolean(
+        CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP,
+        CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP_DEFAULT);
+    setTokenServiceUseIp(useIp);
+  }
+
+  /**
+   * For use only by tests and initialization
+   */
+  @InterfaceAudience.Private
+  static void setTokenServiceUseIp(boolean flag) {
+    useIpForTokenService = flag;
+    NetUtils.setUseQualifiedHostResolver(!flag);
+  }
+
+  /**
+   * Intended only for temporary use by NetUtils.  Do not use.
+   * @return whether tokens use an IP address
+   */
+  @InterfaceAudience.Private
+  public static boolean getTokenServiceUseIp() {
+    return useIpForTokenService;
+  }
+
   /**
    * Find the original TGT within the current subject's credentials. Cross-realm
    * TGT's of the form "krbtgt/TWO.COM@ONE.COM" may be present.
@@ -263,29 +293,20 @@ public class SecurityUtil {
   }
 
   /**
-   * create service name for Delegation token ip:port
-   * @param uri
-   * @param defPort
-   * @return "ip:port"
+   * create the service name for a Delegation token
+   * @param uri of the service
+   * @param defPort is used if the uri lacks a port
+   * @return the token service, or null if no authority
+   * @see #buildTokenService(InetSocketAddress)
    */
   public static String buildDTServiceName(URI uri, int defPort) {
-    int port = uri.getPort();
-    if(port == -1)
-      port = defPort;
-
-    // build the service name string "/ip:port"
-    // for whatever reason using NetUtils.createSocketAddr(target).toString()
-    // returns "localhost/ip:port"
-    StringBuffer sb = new StringBuffer();
-    String host = uri.getHost();
-    if (host != null) {
-      host = NetUtils.normalizeHostName(host);
-    } else {
-      host = "";
+    String authority = uri.getAuthority();
+    if (authority == null) {
+      return null;
     }
-    sb.append(host).append(":").append(port);
-    return sb.toString();
-  }
+    InetSocketAddress addr = NetUtils.createSocketAddr(authority, defPort);
+    return buildTokenService(addr).toString();
+  }
 
   /**
    * Get the host name from the principal name of format <service>/host@realm.
@@ -368,21 +389,57 @@ public class SecurityUtil {
   }
 
   /**
+   * Decode the given token's service field into an InetAddress
+   * @param token from which to obtain the service
+   * @return InetAddress for the service
+   */
+  public static InetSocketAddress getTokenServiceAddr(Token<?> token) {
+    return NetUtils.createSocketAddr(token.getService().toString());
+  }
+
+  /**
    * Set the given token's service to the format expected by the RPC client
    * @param token a delegation token
    * @param addr the socket for the rpc connection
    */
   public static void setTokenService(Token<?> token, InetSocketAddress addr) {
-    token.setService(buildTokenService(addr));
+    Text service = buildTokenService(addr);
+    if (token != null) {
+      token.setService(service);
+      LOG.info("Acquired token "+token);  // Token#toString() prints service
+    } else {
+      LOG.warn("Failed to get token for service "+service);
+    }
   }
 
   /**
    * Construct the service key for a token
    * @param addr InetSocketAddress of remote connection with a token
-   * @return "ip:port"
+   * @return "ip:port" or "host:port" depending on the value of
+   *          hadoop.security.token.service.use_ip
    */
   public static Text buildTokenService(InetSocketAddress addr) {
-    String host = addr.getAddress().getHostAddress();
+    String host = null;
+    if (useIpForTokenService) {
+      if (addr.isUnresolved()) { // host has no ip address
+        throw new IllegalArgumentException(
+            new UnknownHostException(addr.getHostName())
+        );
+      }
+      host = addr.getAddress().getHostAddress();
+    } else {
+      host = addr.getHostName().toLowerCase();
+    }
     return new Text(host + ":" + addr.getPort());
   }
+
+  /**
+   * Construct the service key for a token
+   * @param uri of remote connection with a token
+   * @return "ip:port" or "host:port" depending on the value of
+   *          hadoop.security.token.service.use_ip
+   */
+  public static Text buildTokenService(URI uri) {
+    return buildTokenService(NetUtils.createSocketAddr(uri.getAuthority()));
+  }
 }
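[Reviewer note, not part of the commit: the end-to-end effect on token services, as a hedged sketch. The hostname and the resolved IP are assumptions, and the empty Token is only a placeholder; with use_ip=true an unresolvable host now throws IllegalArgumentException:]

    import java.net.InetSocketAddress;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.net.NetUtils;
    import org.apache.hadoop.security.SecurityUtil;
    import org.apache.hadoop.security.token.Token;
    import org.apache.hadoop.security.token.TokenIdentifier;

    public class TokenServiceDemo {
      public static void main(String[] args) {
        InetSocketAddress addr = NetUtils.createSocketAddrForHost("NN.Example.COM", 8020);
        Text service = SecurityUtil.buildTokenService(addr);
        // hadoop.security.token.service.use_ip=true  -> "10.0.0.1:8020" (assumed resolved IP)
        // hadoop.security.token.service.use_ip=false -> "nn.example.com:8020" (lower-cased host)
        System.out.println(service);

        // setTokenService() now also logs acquisition; the empty token is a placeholder
        Token<TokenIdentifier> token = new Token<TokenIdentifier>();
        SecurityUtil.setTokenService(token, addr);
      }
    }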
Propchange: hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/test/core/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Thu Jan 5 19:21:01 2012
@@ -1,3 +1,3 @@
-/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/core:1152502-1227258
+/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/core:1152502-1227765
 /hadoop/core/branches/branch-0.19/core/src/test/core:713112
 /hadoop/core/trunk/src/test/core:776175-785643,785929-786278

Modified: hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/MiniRPCBenchmark.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/MiniRPCBenchmark.java?rev=1227775&r1=1227774&r2=1227775&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/MiniRPCBenchmark.java (original)
+++ hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/MiniRPCBenchmark.java Thu Jan 5 19:21:01 2012
@@ -34,6 +34,7 @@ import org.apache.hadoop.fs.CommonConfig
 import org.apache.hadoop.io.Text;
 import org.apache.hadoop.net.NetUtils;
 import org.apache.hadoop.security.KerberosInfo;
+import org.apache.hadoop.security.SecurityUtil;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.security.authorize.ProxyUsers;
 import org.apache.hadoop.security.token.Token;
@@ -213,8 +214,7 @@ public class MiniRPCBenchmark {
       token = p.getDelegationToken(new Text(RENEWER));
       currentUgi = UserGroupInformation.createUserForTesting(MINI_USER,
           GROUP_NAMES);
-      token.setService(new Text(addr.getAddress().getHostAddress()
-          + ":" + addr.getPort()));
+      SecurityUtil.setTokenService(token, addr);
       currentUgi.addToken(token);
       return p;
     }
Modified: hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java?rev=1227775&r1=1227774&r2=1227775&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java (original)
+++ hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java Thu Jan 5 19:21:01 2012
@@ -40,6 +40,7 @@ import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.commons.logging.impl.Log4JLogger;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
 import org.apache.hadoop.io.Text;
 import org.apache.hadoop.ipc.Client.ConnectionId;
 import org.apache.hadoop.net.NetUtils;
@@ -286,10 +287,7 @@ public class TestSaslRPC {
             .getUserName()));
     Token<TestTokenIdentifier> token = new Token<TestTokenIdentifier>(tokenId, sm);
-    Text host = new Text(addr.getAddress().getHostAddress() + ":"
-        + addr.getPort());
-    token.setService(host);
-    LOG.info("Service IP address for token is " + host);
+    SecurityUtil.setTokenService(token, addr);
     current.addToken(token);
 
     TestSaslProtocol proxy = null;
@@ -311,14 +309,17 @@ public class TestSaslRPC {
   public void testPingInterval() throws Exception {
     Configuration newConf = new Configuration(conf);
     newConf.set(SERVER_PRINCIPAL_KEY, SERVER_PRINCIPAL_1);
-    conf.setInt(Client.PING_INTERVAL_NAME, Client.DEFAULT_PING_INTERVAL);
+    conf.setInt(CommonConfigurationKeys.IPC_PING_INTERVAL_KEY,
+        CommonConfigurationKeys.IPC_PING_INTERVAL_DEFAULT);
+
     // set doPing to true
-    newConf.setBoolean("ipc.client.ping", true);
+    newConf.setBoolean(CommonConfigurationKeys.IPC_CLIENT_PING_KEY, true);
     ConnectionId remoteId = ConnectionId.getConnectionId(
         new InetSocketAddress(0), TestSaslProtocol.class, null, 0, newConf);
-    assertEquals(Client.DEFAULT_PING_INTERVAL, remoteId.getPingInterval());
+    assertEquals(CommonConfigurationKeys.IPC_PING_INTERVAL_DEFAULT,
+        remoteId.getPingInterval());
     // set doPing to false
-    newConf.setBoolean("ipc.client.ping", false);
+    newConf.setBoolean(CommonConfigurationKeys.IPC_CLIENT_PING_KEY, false);
     remoteId = ConnectionId.getConnectionId(
         new InetSocketAddress(0), TestSaslProtocol.class, null, 0, newConf);
     assertEquals(0, remoteId.getPingInterval());
@@ -358,10 +359,7 @@ public class TestSaslRPC {
             .getUserName()));
     Token<TestTokenIdentifier> token = new Token<TestTokenIdentifier>(tokenId, sm);
-    Text host = new Text(addr.getAddress().getHostAddress() + ":"
-        + addr.getPort());
-    token.setService(host);
-    LOG.info("Service IP address for token is " + host);
+    SecurityUtil.setTokenService(token, addr);
     current.addToken(token);
 
     Configuration newConf = new Configuration(conf);
@@ -448,10 +446,7 @@ public class TestSaslRPC {
             .getUserName()));
     Token<TestTokenIdentifier> token = new Token<TestTokenIdentifier>(tokenId, sm);
-    Text host = new Text(addr.getAddress().getHostAddress() + ":"
-        + addr.getPort());
-    token.setService(host);
-    LOG.info("Service IP address for token is " + host);
+    SecurityUtil.setTokenService(token, addr);
     current.addToken(token);
 
     current.doAs(new PrivilegedExceptionAction() {
Modified: hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java?rev=1227775&r1=1227774&r2=1227775&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java (original)
+++ hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java Thu Jan 5 19:21:01 2012
@@ -17,25 +17,29 @@
  */
 package org.apache.hadoop.net;
 
-import junit.framework.AssertionFailedError;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.junit.Test;
-
 import static org.junit.Assert.*;
 
 import java.io.IOException;
 import java.net.BindException;
+import java.net.ConnectException;
 import java.net.InetAddress;
+import java.net.InetSocketAddress;
 import java.net.NetworkInterface;
 import java.net.Socket;
-import java.net.ConnectException;
 import java.net.SocketException;
-import java.net.InetSocketAddress;
+import java.net.URI;
 import java.net.UnknownHostException;
 import java.util.Enumeration;
 
+import junit.framework.AssertionFailedError;
+
+import org.apache.commons.lang.StringUtils;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
 
 public class TestNetUtils {
@@ -248,4 +252,255 @@ public class TestNetUtils {
     }
     return wrapped;
   }
-}
+
+  static NetUtilsTestResolver resolver;
+  static Configuration config;
+
+  @BeforeClass
+  public static void setupResolver() {
+    resolver = NetUtilsTestResolver.install();
+  }
+
+  @Before
+  public void resetResolver() {
+    resolver.reset();
+    config = new Configuration();
+  }
+
+  // getByExactName
+
+  private void verifyGetByExactNameSearch(String host, String ... searches) {
+    assertNull(resolver.getByExactName(host));
+    assertBetterArrayEquals(searches, resolver.getHostSearches());
+  }
+
+  @Test
+  public void testResolverGetByExactNameUnqualified() {
+    verifyGetByExactNameSearch("unknown", "unknown.");
+  }
+
+  @Test
+  public void testResolverGetByExactNameUnqualifiedWithDomain() {
+    verifyGetByExactNameSearch("unknown.domain", "unknown.domain.");
+  }
+
+  @Test
+  public void testResolverGetByExactNameQualified() {
+    verifyGetByExactNameSearch("unknown.", "unknown.");
+  }
+
+  @Test
+  public void testResolverGetByExactNameQualifiedWithDomain() {
+    verifyGetByExactNameSearch("unknown.domain.", "unknown.domain.");
+  }
+
+  // getByNameWithSearch
+
+  private void verifyGetByNameWithSearch(String host, String ... searches) {
+    assertNull(resolver.getByNameWithSearch(host));
+    assertBetterArrayEquals(searches, resolver.getHostSearches());
+  }
+
+  @Test
+  public void testResolverGetByNameWithSearchUnqualified() {
+    String host = "unknown";
+    verifyGetByNameWithSearch(host, host+".a.b.", host+".b.", host+".c.");
+  }
+
+  @Test
+  public void testResolverGetByNameWithSearchUnqualifiedWithDomain() {
+    String host = "unknown.domain";
+    verifyGetByNameWithSearch(host, host+".a.b.", host+".b.", host+".c.");
+  }
+
+  @Test
+  public void testResolverGetByNameWithSearchQualified() {
+    String host = "unknown.";
+    verifyGetByNameWithSearch(host, host);
+  }
+
+  @Test
+  public void testResolverGetByNameWithSearchQualifiedWithDomain() {
+    String host = "unknown.domain.";
+    verifyGetByNameWithSearch(host, host);
+  }
+
+  // getByName
+
+  private void verifyGetByName(String host, String ... searches) {
+    InetAddress addr = null;
+    try {
+      addr = resolver.getByName(host);
+    } catch (UnknownHostException e) {} // ignore
+    assertNull(addr);
+    assertBetterArrayEquals(searches, resolver.getHostSearches());
+  }
+
+  @Test
+  public void testResolverGetByNameQualified() {
+    String host = "unknown.";
+    verifyGetByName(host, host);
+  }
+
+  @Test
+  public void testResolverGetByNameQualifiedWithDomain() {
+    verifyGetByName("unknown.domain.", "unknown.domain.");
+  }
+
+  @Test
+  public void testResolverGetByNameUnqualified() {
+    String host = "unknown";
+    verifyGetByName(host, host+".a.b.", host+".b.", host+".c.", host+".");
+  }
+
+  @Test
+  public void testResolverGetByNameUnqualifiedWithDomain() {
+    String host = "unknown.domain";
+    verifyGetByName(host, host+".", host+".a.b.", host+".b.", host+".c.");
+  }
+
+  // resolving of hosts
+
+  private InetAddress verifyResolve(String host, String ... searches) {
+    InetAddress addr = null;
+    try {
+      addr = resolver.getByName(host);
+    } catch (UnknownHostException e) {} // ignore
+    assertNotNull(addr);
+    assertBetterArrayEquals(searches, resolver.getHostSearches());
+    return addr;
+  }
+
+  private void
+  verifyInetAddress(InetAddress addr, String host, String ip) {
+    assertNotNull(addr);
+    assertEquals(host, addr.getHostName());
+    assertEquals(ip, addr.getHostAddress());
+  }
+
+  @Test
+  public void testResolverUnqualified() {
+    String host = "host";
+    InetAddress addr = verifyResolve(host, host+".a.b.");
+    verifyInetAddress(addr, "host.a.b", "1.1.1.1");
+  }
+
+  @Test
+  public void testResolverUnqualifiedWithDomain() {
+    String host = "host.a";
+    InetAddress addr = verifyResolve(host, host+".", host+".a.b.", host+".b.");
+    verifyInetAddress(addr, "host.a.b", "1.1.1.1");
+  }
+
+  @Test
+  public void testResolverUnqualifedFull() {
+    String host = "host.a.b";
+    InetAddress addr = verifyResolve(host, host+".");
+    verifyInetAddress(addr, host, "1.1.1.1");
+  }
+
+  @Test
+  public void testResolverQualifed() {
+    String host = "host.a.b.";
+    InetAddress addr = verifyResolve(host, host);
+    verifyInetAddress(addr, host, "1.1.1.1");
+  }
+
+  // localhost
+
+  @Test
+  public void testResolverLoopback() {
+    String host = "Localhost";
+    InetAddress addr = verifyResolve(host); // no lookup should occur
+    verifyInetAddress(addr, "Localhost", "127.0.0.1");
+  }
+
+  @Test
+  public void testResolverIP() {
+    String host = "1.1.1.1";
+    InetAddress addr = verifyResolve(host); // no lookup should occur for ips
+    verifyInetAddress(addr, host, host);
+  }
+
+  //
+
+  @Test
+  public void testCanonicalUriWithPort() {
+    URI uri;
+
+    uri = NetUtils.getCanonicalUri(URI.create("scheme://host:123"), 456);
+    assertEquals("scheme://host.a.b:123", uri.toString());
+
+    uri = NetUtils.getCanonicalUri(URI.create("scheme://host:123/"), 456);
+    assertEquals("scheme://host.a.b:123/", uri.toString());
+
+    uri = NetUtils.getCanonicalUri(URI.create("scheme://host:123/path"), 456);
+    assertEquals("scheme://host.a.b:123/path", uri.toString());
+
+    uri = NetUtils.getCanonicalUri(URI.create("scheme://host:123/path?q#frag"), 456);
+    assertEquals("scheme://host.a.b:123/path?q#frag", uri.toString());
+  }
+
+  @Test
+  public void testCanonicalUriWithDefaultPort() {
+    URI uri;
+
+    uri = NetUtils.getCanonicalUri(URI.create("scheme://host"), 123);
+    assertEquals("scheme://host.a.b:123", uri.toString());
+
+    uri = NetUtils.getCanonicalUri(URI.create("scheme://host/"), 123);
+    assertEquals("scheme://host.a.b:123/", uri.toString());
+
+    uri = NetUtils.getCanonicalUri(URI.create("scheme://host/path"), 123);
+    assertEquals("scheme://host.a.b:123/path", uri.toString());
+
+    uri = NetUtils.getCanonicalUri(URI.create("scheme://host/path?q#frag"), 123);
+    assertEquals("scheme://host.a.b:123/path?q#frag", uri.toString());
+  }
+
+  @Test
+  public void testCanonicalUriWithPath() {
+    URI uri;
+
+    uri = NetUtils.getCanonicalUri(URI.create("path"), 2);
+    assertEquals("path", uri.toString());
+
+    uri = NetUtils.getCanonicalUri(URI.create("/path"), 2);
+    assertEquals("/path", uri.toString());
+  }
+
+  @Test
+  public void testCanonicalUriWithNoAuthority() {
+    URI uri;
+
+    uri = NetUtils.getCanonicalUri(URI.create("scheme:/"), 2);
+    assertEquals("scheme:/", uri.toString());
+
+    uri = NetUtils.getCanonicalUri(URI.create("scheme:/path"), 2);
+    assertEquals("scheme:/path", uri.toString());
+
+    uri = NetUtils.getCanonicalUri(URI.create("scheme:///"), 2);
+    assertEquals("scheme:///", uri.toString());
+
Modified: hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestDoAsEffectiveUser.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestDoAsEffectiveUser.java?rev=1227775&r1=1227774&r2=1227775&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestDoAsEffectiveUser.java (original)
+++ hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestDoAsEffectiveUser.java Thu Jan  5 19:21:01 2012
@@ -418,9 +418,7 @@ public class TestDoAsEffectiveUser {
         .getUserName()), new Text("SomeSuperUser"));
     Token<TestTokenIdentifier> token = new Token<TestTokenIdentifier>(tokenId, sm);
-    Text host = new Text(addr.getAddress().getHostAddress() + ":"
-        + addr.getPort());
-    token.setService(host);
+    SecurityUtil.setTokenService(token, addr);
     UserGroupInformation proxyUserUgi = UserGroupInformation
         .createProxyUserForTesting(PROXY_USER_NAME, current, GROUP_NAMES);
     proxyUserUgi.addToken(token);
@@ -476,9 +474,7 @@ public class TestDoAsEffectiveUser {
         .getUserName()), new Text("SomeSuperUser"));
     Token<TestTokenIdentifier> token = new Token<TestTokenIdentifier>(tokenId, sm);
-    Text host = new Text(addr.getAddress().getHostAddress() + ":"
-        + addr.getPort());
-    token.setService(host);
+    SecurityUtil.setTokenService(token, addr);
     current.addToken(token);
     String retVal = current.doAs(new PrivilegedExceptionAction<String>() {
       @Override
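Both hunks above are the same one-line simplification: instead of hand-building the token's service identifier from the raw address, callers delegate to the new SecurityUtil helper. In outline, taken directly from the diff:

    // before: the service string is hard-wired to ip:port
    Text host = new Text(addr.getAddress().getHostAddress() + ":" + addr.getPort());
    token.setService(host);

    // after: one call; SecurityUtil owns the encoding and can honor the
    // host-vs-ip preference exercised by the TestSecurityUtil changes below
    SecurityUtil.setTokenService(token, addr);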
Modified: hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestSecurityUtil.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestSecurityUtil.java?rev=1227775&r1=1227774&r2=1227775&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestSecurityUtil.java (original)
+++ hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestSecurityUtil.java Thu Jan  5 19:21:01 2012
@@ -16,16 +16,19 @@
  */
 package org.apache.hadoop.security;

-import static org.junit.Assert.assertFalse;
-import static org.junit.Assert.assertTrue;
-import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.*;

 import java.io.IOException;
 import java.net.InetAddress;
+import java.net.InetSocketAddress;
+import java.net.URI;

 import javax.security.auth.kerberos.KerberosPrincipal;

 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.net.NetUtils;
+import org.apache.hadoop.security.token.Token;
 import org.junit.Test;
 import org.mockito.Mockito;

@@ -121,4 +124,213 @@ public class TestSecurityUtil {
     assertEquals(null, SecurityUtil.getHostFromPrincipal("service@realm"));
   }
+
+  @Test
+  public void testBuildDTServiceName() {
+    assertEquals("127.0.0.1:123",
+        SecurityUtil.buildDTServiceName(URI.create("test://LocalHost"), 123)
+    );
+    assertEquals("127.0.0.1:123",
+        SecurityUtil.buildDTServiceName(URI.create("test://LocalHost:123"), 456)
+    );
+    assertEquals("127.0.0.1:123",
+        SecurityUtil.buildDTServiceName(URI.create("test://127.0.0.1"), 123)
+    );
+    assertEquals("127.0.0.1:123",
+        SecurityUtil.buildDTServiceName(URI.create("test://127.0.0.1:123"), 456)
+    );
+  }
+
+  @Test
+  public void testBuildTokenServiceSockAddr() {
+    assertEquals("127.0.0.1:123",
+        SecurityUtil.buildTokenService(new InetSocketAddress("LocalHost", 123)).toString()
+    );
+    assertEquals("127.0.0.1:123",
+        SecurityUtil.buildTokenService(new InetSocketAddress("127.0.0.1", 123)).toString()
+    );
+    // what goes in, comes out
+    assertEquals("127.0.0.1:123",
+        SecurityUtil.buildTokenService(NetUtils.createSocketAddr("127.0.0.1", 123)).toString()
+    );
+  }
+
+  @Test
+  public void testGoodHostsAndPorts() {
+    InetSocketAddress compare = NetUtils.createSocketAddrForHost("localhost", 123);
+    runGoodCases(compare, "localhost", 123);
+    runGoodCases(compare, "localhost:", 123);
+    runGoodCases(compare, "localhost:123", 456);
+  }
+
+  void runGoodCases(InetSocketAddress addr, String host, int port) {
+    assertEquals(addr, NetUtils.createSocketAddr(host, port));
+    assertEquals(addr, NetUtils.createSocketAddr("hdfs://"+host, port));
+    assertEquals(addr, NetUtils.createSocketAddr("hdfs://"+host+"/path", port));
+  }
+
+  @Test
+  public void testBadHostsAndPorts() {
+    runBadCases("", true);
+    runBadCases(":", false);
+    runBadCases("hdfs/", false);
+    runBadCases("hdfs:/", false);
+    runBadCases("hdfs://", true);
+  }
+
+  void runBadCases(String prefix, boolean validIfPosPort) {
+    runBadPortPermutes(prefix, false);
+    runBadPortPermutes(prefix+"*", false);
+    runBadPortPermutes(prefix+"localhost", validIfPosPort);
+    runBadPortPermutes(prefix+"localhost:-1", false);
+    runBadPortPermutes(prefix+"localhost:-123", false);
+    runBadPortPermutes(prefix+"localhost:xyz", false);
+    runBadPortPermutes(prefix+"localhost/xyz", validIfPosPort);
+    runBadPortPermutes(prefix+"localhost/:123", validIfPosPort);
+    runBadPortPermutes(prefix+":123", false);
+    runBadPortPermutes(prefix+":xyz", false);
+  }
+
+  void runBadPortPermutes(String arg, boolean validIfPosPort) {
+    int ports[] = { -123, -1, 123 };
+    boolean bad = false;
+    try {
+      NetUtils.createSocketAddr(arg);
+    } catch (IllegalArgumentException e) {
+      bad = true;
+    } finally {
+      assertTrue("should be bad: '"+arg+"'", bad);
+    }
+    for (int port : ports) {
+      if (validIfPosPort && port > 0) continue;
+
+      bad = false;
+      try {
+        NetUtils.createSocketAddr(arg, port);
+      } catch (IllegalArgumentException e) {
+        bad = true;
+      } finally {
+        assertTrue("should be bad: '"+arg+"' (default port:"+port+")", bad);
+      }
+    }
+  }
+
+  // check that the socket addr has:
+  // 1) the InetSocketAddress has the correct hostname, ie. exact host/ip given
+  // 2) the address is resolved, ie. has an ip
+  // 3,4) the socket's InetAddress has the same hostname, and the correct ip
+  // 5) the port is correct
+  private void
+  verifyValues(InetSocketAddress addr, String host, String ip, int port) {
+    assertTrue(!addr.isUnresolved());
+    // don't know what the standard resolver will return for hostname.
+    // should be host for host; host or ip for ip is ambiguous
+    if (!SecurityUtil.getTokenServiceUseIp()) {
+      assertEquals(host, addr.getHostName());
+      assertEquals(host, addr.getAddress().getHostName());
+    }
+    assertEquals(ip, addr.getAddress().getHostAddress());
+    assertEquals(port, addr.getPort());
+  }
+
+  // check:
+  // 1) buildTokenService honors use_ip setting
+  // 2) setTokenService & getService works
+  // 3) getTokenServiceAddr decodes to the identical socket addr
+  private void
+  verifyTokenService(InetSocketAddress addr, String host, String ip, int port, boolean useIp) {
+    //LOG.info("address:"+addr+" host:"+host+" ip:"+ip+" port:"+port);
+
+    SecurityUtil.setTokenServiceUseIp(useIp);
+    String serviceHost = useIp ? ip : host.toLowerCase();
+
+    Token token = new Token();
+    Text service = new Text(serviceHost+":"+port);
+
+    assertEquals(service, SecurityUtil.buildTokenService(addr));
+    SecurityUtil.setTokenService(token, addr);
+    assertEquals(service, token.getService());
+
+    InetSocketAddress serviceAddr = SecurityUtil.getTokenServiceAddr(token);
+    assertNotNull(serviceAddr);
+    verifyValues(serviceAddr, serviceHost, ip, port);
+  }
+
+  // check:
+  // 1) socket addr is created with fields set as expected
+  // 2) token service with ips
+  // 3) token service with the given host or ip
+  private void
+  verifyAddress(InetSocketAddress addr, String host, String ip, int port) {
+    verifyValues(addr, host, ip, port);
+    //LOG.info("test that token service uses ip");
+    verifyTokenService(addr, host, ip, port, true);
+    //LOG.info("test that token service uses host");
+    verifyTokenService(addr, host, ip, port, false);
+  }
+
+  // check:
+  // 1-4) combinations of host and port
+  // this will construct a socket addr, verify all the fields, build the
+  // service to verify the use_ip setting is honored, set the token service
+  // based on addr and verify the token service is set correctly, decode
+  // the token service and ensure all the fields of the decoded addr match
+  private void verifyServiceAddr(String host, String ip) {
+    InetSocketAddress addr;
+    int port = 123;
+
+    // test host, port tuple
+    //LOG.info("test tuple ("+host+","+port+")");
+    addr = NetUtils.createSocketAddrForHost(host, port);
+    verifyAddress(addr, host, ip, port);
+
+    // test authority with no default port
+    //LOG.info("test authority '"+host+":"+port+"'");
+    addr = NetUtils.createSocketAddr(host+":"+port);
+    verifyAddress(addr, host, ip, port);
+
+    // test authority with a default port, make sure default isn't used
+    //LOG.info("test authority '"+host+":"+port+"' with ignored default port");
+    addr = NetUtils.createSocketAddr(host+":"+port, port+1);
+    verifyAddress(addr, host, ip, port);
+
+    // test host-only authority, using port as default port
+    //LOG.info("test host:"+host+" port:"+port);
+    addr = NetUtils.createSocketAddr(host, port);
+    verifyAddress(addr, host, ip, port);
+  }
+
+  @Test
+  public void testSocketAddrWithName() {
+    String staticHost = "my";
+    NetUtils.addStaticResolution(staticHost, "localhost");
+    verifyServiceAddr("LocalHost", "127.0.0.1");
+  }
+
+  @Test
+  public void testSocketAddrWithIP() {
+    verifyServiceAddr("127.0.0.1", "127.0.0.1");
+  }
+
+  @Test
+  public void testSocketAddrWithNameToStaticName() {
+    String staticHost = "host1";
+    NetUtils.addStaticResolution(staticHost, "localhost");
+    verifyServiceAddr(staticHost, "127.0.0.1");
+  }
+
+  @Test
+  public void testSocketAddrWithNameToStaticIP() {
+    String staticHost = "host3";
+    NetUtils.addStaticResolution(staticHost, "255.255.255.255");
+    verifyServiceAddr(staticHost, "255.255.255.255");
+  }
+
+  // this is a bizarre case, but it arises if a test tries to remap an ip address
+  @Test
+  public void testSocketAddrWithIPToStaticIP() {
+    String staticHost = "1.2.3.4";
+    NetUtils.addStaticResolution(staticHost, "255.255.255.255");
+    verifyServiceAddr(staticHost, "255.255.255.255");
+  }
 }
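Taken together, the new TestSecurityUtil cases assert a round trip: buildTokenService/setTokenService encode an InetSocketAddress into a token's service field, and getTokenServiceAddr decodes it back to an already-resolved address. A hedged usage sketch built only from calls that appear in the diff (the expected strings assume the loopback address, as in the tests; setTokenServiceUseIp is the test-only toggle exercised above):

    InetSocketAddress addr = NetUtils.createSocketAddrForHost("localhost", 123);

    SecurityUtil.setTokenServiceUseIp(true);       // service becomes ip:port
    Token token = new Token();
    SecurityUtil.setTokenService(token, addr);     // encode addr into the token
    assertEquals("127.0.0.1:123", token.getService().toString());

    // decode back to a resolved socket address
    InetSocketAddress back = SecurityUtil.getTokenServiceAddr(token);
    assertEquals(123, back.getPort());
    assertEquals("127.0.0.1", back.getAddress().getHostAddress());

With useIp false the same round trip yields the lower-cased hostname form ("localhost:123"), which is exactly the branch verifyTokenService takes in the tests.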