Subject: svn commit: r1560768 [1/2] - in /hadoop/common/branches/HDFS-4685/hadoop-hdfs-project: hadoop-hdfs-httpfs/ hadoop-hdfs-httpfs/src/main/conf/ hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/ hadoop-hdfs-httpfs/src/main/java/org/apache/...
Date: Thu, 23 Jan 2014 17:49:26 -0000
To: hdfs-commits@hadoop.apache.org
From: cnauroth@apache.org
Message-Id: <20140123174927.38ACE23889E3@eris.apache.org>

Author: cnauroth
Date: Thu Jan 23 17:49:24 2014
New Revision: 1560768

URL: http://svn.apache.org/r1560768
Log:
Merge trunk to HDFS-4685.

Added:
    hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpsFSFileSystem.java
      - copied unchanged from r1560767, hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpsFSFileSystem.java
    hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/tomcat/ssl-server.xml
      - copied unchanged from r1560767, hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/tomcat/ssl-server.xml
    hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/TestHttpFSFWithSWebhdfsFileSystem.java
      - copied unchanged from r1560767, hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/TestHttpFSFWithSWebhdfsFileSystem.java

Modified:
    hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/   (props changed)
    hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
    hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/conf/httpfs-env.sh
    hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
    hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSKerberosAuthenticator.java
    hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSUtils.java
    hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServerWebApp.java
    hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/DelegationTokenIdentifier.java
    hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/security/DelegationTokenManagerService.java
    hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/servlet/ServerWebApp.java
    hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/libexec/httpfs-config.sh
    hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/sbin/httpfs.sh
    hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/site/apt/ServerSetup.apt.vm
    hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
    hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSKerberosAuthenticationHandler.java
    hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/lib/service/security/TestDelegationTokenManagerService.java
    hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/TestJettyHelper.java
    hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
    hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/src/main/java/   (props changed)
    hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
    hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
    hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
    hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyWithNodeGroup.java
    hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
    hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
    hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/src/main/native/   (props changed)
    hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode/   (props changed)
    hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/   (props changed)
    hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
    hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.js
    hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary/   (props changed)
    hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/hadoop.css
    hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs/   (props changed)
    hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAuditLogs.java
    hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeFile.java

Propchange: hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs:r1559794-1560767
Modified: hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml?rev=1560768&r1=1560767&r2=1560768&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml (original)
+++ hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml Thu Jan 23 17:49:24 2014
@@ -554,6 +554,9 @@
+
+

Modified: hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/conf/httpfs-env.sh
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/conf/httpfs-env.sh?rev=1560768&r1=1560767&r2=1560768&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/conf/httpfs-env.sh (original)
+++ hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/conf/httpfs-env.sh Thu Jan 23 17:49:24 2014
@@ -39,3 +39,15 @@
 # The hostname HttpFS server runs on
 #
 # export HTTPFS_HTTP_HOSTNAME=`hostname -f`
+
+# Indicates if HttpFS is using SSL
+#
+# export HTTPFS_SSL_ENABLED=false
+
+# The location of the SSL keystore if using SSL
+#
+# export HTTPFS_SSL_KEYSTORE_FILE=${HOME}/.keystore
+
+# The password of the SSL keystore if using SSL
+#
+# export HTTPFS_SSL_KEYSTORE_PASS=password

Modified: hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java?rev=1560768&r1=1560767&r2=1560768&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java (original)
+++ hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java Thu Jan 23 17:49:24 2014
@@ -243,7 +243,7 @@ public class HttpFSFileSystem extends Fi
     if (makeQualified) {
       path = makeQualified(path);
     }
-    final URL url = HttpFSUtils.createHttpURL(path, params);
+    final URL url = HttpFSUtils.createURL(path, params);
     return doAsRealUserIfNecessary(new Callable() {
       @Override
       public HttpURLConnection call() throws Exception {

Modified: hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSKerberosAuthenticator.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSKerberosAuthenticator.java?rev=1560768&r1=1560767&r2=1560768&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSKerberosAuthenticator.java (original)
+++ hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSKerberosAuthenticator.java Thu Jan 23 17:49:24 2014
@@ -123,7 +123,7 @@ public class HttpFSKerberosAuthenticator
     Map params = new HashMap();
     params.put(OP_PARAM, op.toString());
     params.put(RENEWER_PARAM,renewer);
-    URL url = HttpFSUtils.createHttpURL(new Path(fsURI), params);
+    URL url = HttpFSUtils.createURL(new Path(fsURI), params);
     AuthenticatedURL aUrl =
       new AuthenticatedURL(new HttpFSKerberosAuthenticator());
     try {
@@ -150,7 +150,7 @@ public class HttpFSKerberosAuthenticator
     params.put(OP_PARAM,
                DelegationTokenOperation.RENEWDELEGATIONTOKEN.toString());
     params.put(TOKEN_PARAM, dToken.encodeToUrlString());
-    URL url = HttpFSUtils.createHttpURL(new Path(fsURI), params);
+    URL url = HttpFSUtils.createURL(new Path(fsURI), params);
     AuthenticatedURL aUrl =
       new AuthenticatedURL(new HttpFSKerberosAuthenticator());
     try {
@@ -172,7 +172,7 @@ public class HttpFSKerberosAuthenticator
     params.put(OP_PARAM,
                DelegationTokenOperation.CANCELDELEGATIONTOKEN.toString());
     params.put(TOKEN_PARAM, dToken.encodeToUrlString());
-    URL url = HttpFSUtils.createHttpURL(new Path(fsURI), params);
+    URL url = HttpFSUtils.createURL(new Path(fsURI), params);
     AuthenticatedURL aUrl =
       new AuthenticatedURL(new HttpFSKerberosAuthenticator());
     try {

Modified: hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSUtils.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSUtils.java?rev=1560768&r1=1560767&r2=1560768&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSUtils.java (original)
+++ hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSUtils.java Thu Jan 23 17:49:24 2014
@@ -55,17 +55,21 @@ public class HttpFSUtils {
    *
    * @return a URL for the HttpFSServer server,
    *
-   * @throws IOException thrown if an IO error occurrs.
+   * @throws IOException thrown if an IO error occurs.
    */
-  static URL createHttpURL(Path path, Map params)
+  static URL createURL(Path path, Map params)
     throws IOException {
     URI uri = path.toUri();
     String realScheme;
     if (uri.getScheme().equalsIgnoreCase(HttpFSFileSystem.SCHEME)) {
       realScheme = "http";
+    } else if (uri.getScheme().equalsIgnoreCase(HttpsFSFileSystem.SCHEME)) {
+      realScheme = "https";
+    } else {
       throw new IllegalArgumentException(MessageFormat.format(
-        "Invalid scheme [{0}] it should be 'webhdfs'", uri));
+        "Invalid scheme [{0}] it should be '" + HttpFSFileSystem.SCHEME + "' " +
+        "or '" + HttpsFSFileSystem.SCHEME + "'", uri));
     }
     StringBuilder sb = new StringBuilder();
     sb.append(realScheme).append("://").append(uri.getAuthority()).

Modified: hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServerWebApp.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServerWebApp.java?rev=1560768&r1=1560767&r2=1560768&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServerWebApp.java (original)
+++ hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServerWebApp.java Thu Jan 23 17:49:24 2014
@@ -94,11 +94,11 @@ public class HttpFSServerWebApp extends
    */
   @Override
   public void init() throws ServerException {
-    super.init();
     if (SERVER != null) {
       throw new RuntimeException("HttpFSServer server already initialized");
     }
     SERVER = this;
+    super.init();
     adminGroup = getConfig().get(getPrefixedName(CONF_ADMIN_GROUP), "admin");
     LOG.info("Connects to Namenode [{}]",
       get().get(FileSystemAccess.class).getFileSystemConfiguration().
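The HttpFSUtils hunk above renames createHttpURL to createURL and teaches it to map the swebhdfs scheme to https. As a standalone sketch of just that selection logic — plain strings instead of Hadoop's Path/URL types, and the literal scheme names here are assumptions standing in for HttpFSFileSystem.SCHEME and HttpsFSFileSystem.SCHEME:

```java
public class SchemeMapping {

    // Sketch of the scheme-to-protocol selection in HttpFSUtils.createURL.
    // "webhdfs" and "swebhdfs" are stand-ins for the SCHEME constants of
    // HttpFSFileSystem and HttpsFSFileSystem.
    static String realScheme(String scheme) {
        if (scheme.equalsIgnoreCase("webhdfs")) {
            return "http";    // plain HttpFS endpoint
        } else if (scheme.equalsIgnoreCase("swebhdfs")) {
            return "https";   // SSL-enabled HttpFS endpoint
        } else {
            throw new IllegalArgumentException(
                "Invalid scheme [" + scheme + "] it should be 'webhdfs' or 'swebhdfs'");
        }
    }

    public static void main(String[] args) {
        System.out.println(realScheme("webhdfs"));   // http
        System.out.println(realScheme("SWebHdfs"));  // https (case-insensitive)
    }
}
```

Everything else about URL construction (authority, path, query parameters) is unchanged by the commit; only the scheme prefix varies.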
Modified: hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/DelegationTokenIdentifier.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/DelegationTokenIdentifier.java?rev=1560768&r1=1560767&r2=1560768&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/DelegationTokenIdentifier.java (original)
+++ hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/DelegationTokenIdentifier.java Thu Jan 23 17:49:24 2014
@@ -29,30 +29,33 @@ import org.apache.hadoop.security.token.
 public class DelegationTokenIdentifier
   extends AbstractDelegationTokenIdentifier {

-  public static final Text KIND_NAME = WebHdfsFileSystem.TOKEN_KIND;
+  private Text kind = WebHdfsFileSystem.TOKEN_KIND;

-  public DelegationTokenIdentifier() {
+  public DelegationTokenIdentifier(Text kind) {
+    this.kind = kind;
   }

   /**
    * Create a new delegation token identifier
    *
+   * @param kind token kind
    * @param owner the effective username of the token owner
    * @param renewer the username of the renewer
    * @param realUser the real username of the token owner
    */
-  public DelegationTokenIdentifier(Text owner, Text renewer, Text realUser) {
+  public DelegationTokenIdentifier(Text kind, Text owner, Text renewer,
+      Text realUser) {
     super(owner, renewer, realUser);
+    this.kind = kind;
   }

-
   /**
    * Returns the kind, TOKEN_KIND.
    * @return returns TOKEN_KIND.
    */
   @Override
   public Text getKind() {
-    return KIND_NAME;
+    return kind;
   }

 }

Modified: hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/security/DelegationTokenManagerService.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/security/DelegationTokenManagerService.java?rev=1560768&r1=1560767&r2=1560768&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/security/DelegationTokenManagerService.java (original)
+++ hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/security/DelegationTokenManagerService.java Thu Jan 23 17:49:24 2014
@@ -19,6 +19,8 @@ package org.apache.hadoop.lib.service.se

 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.fs.http.server.HttpFSServerWebApp;
+import org.apache.hadoop.hdfs.web.SWebHdfsFileSystem;
+import org.apache.hadoop.hdfs.web.WebHdfsFileSystem;
 import org.apache.hadoop.io.Text;
 import org.apache.hadoop.lib.server.BaseService;
 import org.apache.hadoop.lib.server.ServerException;
@@ -55,6 +57,8 @@ public class DelegationTokenManagerServi

   DelegationTokenSecretManager secretManager = null;

+  private Text tokenKind;
+
   public DelegationTokenManagerService() {
     super(PREFIX);
   }
@@ -70,7 +74,9 @@ public class DelegationTokenManagerServi
     long updateInterval = getServiceConfig().getLong(UPDATE_INTERVAL, DAY);
     long maxLifetime = getServiceConfig().getLong(MAX_LIFETIME, 7 * DAY);
     long renewInterval = getServiceConfig().getLong(RENEW_INTERVAL, DAY);
-    secretManager = new DelegationTokenSecretManager(updateInterval,
+    tokenKind = (HttpFSServerWebApp.get().isSslEnabled())
+                ? SWebHdfsFileSystem.TOKEN_KIND : WebHdfsFileSystem.TOKEN_KIND;
+    secretManager = new DelegationTokenSecretManager(tokenKind, updateInterval,
       maxLifetime, renewInterval, HOUR);
     try {
@@ -122,7 +128,7 @@ public class DelegationTokenManagerServi
       realUser = new Text(ugi.getRealUser().getUserName());
     }
     DelegationTokenIdentifier tokenIdentifier =
-      new DelegationTokenIdentifier(owner, new Text(renewer), realUser);
+      new DelegationTokenIdentifier(tokenKind, owner, new Text(renewer), realUser);
     Token token = new Token(tokenIdentifier, secretManager);
     try {
@@ -188,7 +194,7 @@ public class DelegationTokenManagerServi
     throws DelegationTokenManagerException {
     ByteArrayInputStream buf = new ByteArrayInputStream(token.getIdentifier());
     DataInputStream dis = new DataInputStream(buf);
-    DelegationTokenIdentifier id = new DelegationTokenIdentifier();
+    DelegationTokenIdentifier id = new DelegationTokenIdentifier(tokenKind);
     try {
       id.readFields(dis);
       dis.close();
@@ -203,6 +209,8 @@ public class DelegationTokenManagerServi
   private static class DelegationTokenSecretManager
     extends AbstractDelegationTokenSecretManager {

+    private Text tokenKind;
+
     /**
      * Create a secret manager
      *
@@ -215,17 +223,18 @@ public class DelegationTokenManagerServi
      * scanned
      * for expired tokens
      */
-    public DelegationTokenSecretManager(long delegationKeyUpdateInterval,
+    public DelegationTokenSecretManager(Text tokenKind, long delegationKeyUpdateInterval,
       long delegationTokenMaxLifetime, long delegationTokenRenewInterval,
       long delegationTokenRemoverScanInterval) {
       super(delegationKeyUpdateInterval, delegationTokenMaxLifetime,
         delegationTokenRenewInterval, delegationTokenRemoverScanInterval);
+      this.tokenKind = tokenKind;
     }

     @Override
     public DelegationTokenIdentifier createIdentifier() {
-      return new DelegationTokenIdentifier(tokenKind);
+      return new DelegationTokenIdentifier(tokenKind);
    }

  }
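The service hunk above fixes the token kind once at init time — swebhdfs tokens when SSL is on, webhdfs tokens otherwise — and stamps it on every identifier the secret manager creates. A minimal standalone sketch of that selection, with plain strings as assumed stand-ins for the Text constants WebHdfsFileSystem.TOKEN_KIND and SWebHdfsFileSystem.TOKEN_KIND:

```java
public class TokenKindSelection {

    // Hypothetical stand-ins for the Hadoop Text constants; the actual
    // values live in WebHdfsFileSystem and SWebHdfsFileSystem.
    static final String WEBHDFS_KIND  = "WEBHDFS delegation";
    static final String SWEBHDFS_KIND = "SWEBHDFS delegation";

    // Mirrors the ternary added in DelegationTokenManagerService.init():
    // the kind is chosen once from the server's SSL flag, then reused for
    // issuing and decoding every delegation token.
    static String tokenKind(boolean sslEnabled) {
        return sslEnabled ? SWEBHDFS_KIND : WEBHDFS_KIND;
    }

    public static void main(String[] args) {
        System.out.println(tokenKind(false));
        System.out.println(tokenKind(true));
    }
}
```

Choosing the kind at init rather than per token is what lets verifyToken later decode identifiers with the same kind the server issued them with.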
Modified: hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/servlet/ServerWebApp.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/servlet/ServerWebApp.java?rev=1560768&r1=1560767&r2=1560768&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/servlet/ServerWebApp.java (original)
+++ hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/servlet/ServerWebApp.java Thu Jan 23 17:49:24 2014
@@ -44,6 +44,7 @@ public abstract class ServerWebApp exten
   private static final String TEMP_DIR = ".temp.dir";
   private static final String HTTP_HOSTNAME = ".http.hostname";
   private static final String HTTP_PORT = ".http.port";
+  public static final String SSL_ENABLED = ".ssl.enabled";

   private static ThreadLocal HOME_DIR_TL = new ThreadLocal();

@@ -225,4 +226,12 @@ public abstract class ServerWebApp exten
   public void setAuthority(InetSocketAddress authority) {
     this.authority = authority;
   }
+
+
+  /**
+   *
+   */
+  public boolean isSslEnabled() {
+    return Boolean.valueOf(System.getProperty(getName() + SSL_ENABLED, "false"));
+  }
 }

Modified: hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/libexec/httpfs-config.sh
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/libexec/httpfs-config.sh?rev=1560768&r1=1560767&r2=1560768&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/libexec/httpfs-config.sh (original)
+++ hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/libexec/httpfs-config.sh Thu Jan 23 17:49:24 2014
@@ -143,6 +143,27 @@
 else
   print "Using   HTTPFS_HTTP_HOSTNAME: ${HTTPFS_HTTP_HOSTNAME}"
 fi

+if [ "${HTTPFS_SSL_ENABLED}" = "" ]; then
+  export HTTPFS_SSL_ENABLED="false"
+  print "Setting HTTPFS_SSL_ENABLED: ${HTTPFS_SSL_ENABLED}"
+else
+  print "Using   HTTPFS_SSL_ENABLED: ${HTTPFS_SSL_ENABLED}"
+fi
+
+if [ "${HTTPFS_SSL_KEYSTORE_FILE}" = "" ]; then
+  export HTTPFS_SSL_KEYSTORE_FILE=${HOME}/.keystore
+  print "Setting HTTPFS_SSL_KEYSTORE_FILE: ${HTTPFS_SSL_KEYSTORE_FILE}"
+else
+  print "Using   HTTPFS_SSL_KEYSTORE_FILE: ${HTTPFS_SSL_KEYSTORE_FILE}"
+fi
+
+if [ "${HTTPFS_SSL_KEYSTORE_PASS}" = "" ]; then
+  export HTTPFS_SSL_KEYSTORE_PASS=password
+  print "Setting HTTPFS_SSL_KEYSTORE_PASS: ${HTTPFS_SSL_KEYSTORE_PASS}"
+else
+  print "Using   HTTPFS_SSL_KEYSTORE_PASS: ${HTTPFS_SSL_KEYSTORE_PASS}"
+fi
+
 if [ "${CATALINA_BASE}" = "" ]; then
   export CATALINA_BASE=${HTTPFS_HOME}/share/hadoop/httpfs/tomcat
   print "Setting CATALINA_BASE: ${CATALINA_BASE}"

Modified: hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/sbin/httpfs.sh
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/sbin/httpfs.sh?rev=1560768&r1=1560767&r2=1560768&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/sbin/httpfs.sh (original)
+++ hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/sbin/httpfs.sh Thu Jan 23 17:49:24 2014
@@ -43,6 +43,9 @@
 catalina_opts="${catalina_opts} -Dhttpfs.admin.port=${HTTPFS_ADMIN_PORT}";
 catalina_opts="${catalina_opts} -Dhttpfs.http.port=${HTTPFS_HTTP_PORT}";
 catalina_opts="${catalina_opts} -Dhttpfs.http.hostname=${HTTPFS_HTTP_HOSTNAME}";
+catalina_opts="${catalina_opts} -Dhttpfs.ssl.enabled=${HTTPFS_SSL_ENABLED}";
+catalina_opts="${catalina_opts} -Dhttpfs.ssl.keystore.file=${HTTPFS_SSL_KEYSTORE_FILE}";
+catalina_opts="${catalina_opts} -Dhttpfs.ssl.keystore.pass=${HTTPFS_SSL_KEYSTORE_PASS}";

 print "Adding to CATALINA_OPTS:     ${catalina_opts}"

Modified: hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/site/apt/ServerSetup.apt.vm
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/site/apt/ServerSetup.apt.vm?rev=1560768&r1=1560767&r2=1560768&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/site/apt/ServerSetup.apt.vm (original)
+++ hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/site/apt/ServerSetup.apt.vm Thu Jan 23 17:49:24 2014
@@ -118,4 +118,46 @@ Transfer-Encoding: chunked
   HttpFS supports the following {{{./httpfs-default.html}configuration properties}}
   in the HttpFS's <<>> configuration file.

+* HttpFS over HTTPS (SSL)
+
+  To configure HttpFS to work over SSL edit the {{httpfs-env.sh}} script in the
+  configuration directory setting the {{HTTPFS_SSL_ENABLED}} to {{true}}.
+
+  In addition, the following 2 properties may be defined (shown with default
+  values):
+
+    * HTTPFS_SSL_KEYSTORE_FILE=${HOME}/.keystore
+
+    * HTTPFS_SSL_KEYSTORE_PASS=password
+
+  In the HttpFS <<>> directory, replace the <<>> file
+  with the <<>> file.
+
+  You need to create an SSL certificate for the HttpFS server. As the
+  <<>> Unix user, using the Java <<>> command to create the
+  SSL certificate:
+
++---+
+$ keytool -genkey -alias tomcat -keyalg RSA
++---+
+
+  You will be asked a series of questions in an interactive prompt. It will
+  create the keystore file, which will be named <<.keystore>> and located in the
+  <<>> user home directory.
+
+  The password you enter for "keystore password" must match the value of the
+  <<>> environment variable set in the
+  <<>> script in the configuration directory.
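As an aside on the keystore-password requirement stated above: HttpFS itself does not validate the password (Tomcat does when it opens the keystore at startup), but the mismatch can be checked up front, since KeyStore.load fails with an IOException when the password is wrong. A standalone sketch under that assumption — the class name and temp-file usage are illustrative, not part of the commit:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.security.KeyStore;

public class KeystorePasswordCheck {

    // Writes an empty JKS keystore protected by `pass`, then reloads it
    // with `attempt`. JKS stores a password-keyed integrity hash, so a
    // reload with the wrong password throws IOException.
    static boolean passwordMatches(char[] pass, char[] attempt) throws Exception {
        File f = File.createTempFile("httpfs-sketch", ".keystore");
        try {
            KeyStore ks = KeyStore.getInstance("JKS");
            ks.load(null, pass);                         // initialize empty store
            try (FileOutputStream out = new FileOutputStream(f)) {
                ks.store(out, pass);
            }
            KeyStore reloaded = KeyStore.getInstance("JKS");
            try (FileInputStream in = new FileInputStream(f)) {
                reloaded.load(in, attempt);              // throws on mismatch
            }
            return true;
        } catch (IOException wrongPassword) {
            return false;
        } finally {
            f.delete();
        }
    }

    public static void main(String[] args) throws Exception {
        char[] pass = "password".toCharArray();
        System.out.println(passwordMatches(pass, "password".toCharArray()));
        System.out.println(passwordMatches(pass, "wrong".toCharArray()));
    }
}
```

In a real deployment the file to check is the one named by HTTPFS_SSL_KEYSTORE_FILE and the password is HTTPFS_SSL_KEYSTORE_PASS.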
+
+  The answer to "What is your first and last name?" (i.e. "CN") must be the
+  hostname of the machine where the HttpFS Server will be running.
+
+  Start HttpFS. It should work over HTTPS.
+
+  Using the Hadoop <<>> API or the Hadoop FS shell, use the
+  <<>> scheme. Make sure the JVM is picking up the truststore
+  containing the public key of the SSL certificate if using a self-signed
+  certificate.
+
  \[ {{{./index.html}Go Back}} \]

Modified: hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java?rev=1560768&r1=1560767&r2=1560768&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java (original)
+++ hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java Thu Jan 23 17:49:24 2014
@@ -116,10 +116,14 @@ public abstract class BaseTestHttpFSWith
     return HttpFSFileSystem.class;
   }

+  protected String getScheme() {
+    return "webhdfs";
+  }
+
   protected FileSystem getHttpFSFileSystem() throws Exception {
     Configuration conf = new Configuration();
     conf.set("fs.webhdfs.impl", getFileSystemClass().getName());
-    URI uri = new URI("webhdfs://" +
+    URI uri = new URI(getScheme() + "://" +
       TestJettyHelper.getJettyURL().toURI().getAuthority());
     return FileSystem.get(uri, conf);
   }
@@ -127,7 +131,7 @@ public abstract class BaseTestHttpFSWith
   protected void testGet() throws Exception {
     FileSystem fs = getHttpFSFileSystem();
     Assert.assertNotNull(fs);
-    URI uri = new URI("webhdfs://" +
+    URI uri = new URI(getScheme() + "://" +
       TestJettyHelper.getJettyURL().toURI().getAuthority());
     Assert.assertEquals(fs.getUri(), uri);
     fs.close();

Modified: hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSKerberosAuthenticationHandler.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSKerberosAuthenticationHandler.java?rev=1560768&r1=1560767&r2=1560768&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSKerberosAuthenticationHandler.java (original)
+++ hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSKerberosAuthenticationHandler.java Thu Jan 23 17:49:24 2014
@@ -22,9 +22,13 @@ import org.apache.hadoop.conf.Configurat
 import org.apache.hadoop.fs.http.client.HttpFSFileSystem;
 import org.apache.hadoop.fs.http.client.HttpFSKerberosAuthenticator;
 import org.apache.hadoop.fs.http.client.HttpFSKerberosAuthenticator.DelegationTokenOperation;
+import org.apache.hadoop.hdfs.web.SWebHdfsFileSystem;
+import org.apache.hadoop.hdfs.web.WebHdfsFileSystem;
+import org.apache.hadoop.io.Text;
 import org.apache.hadoop.lib.service.DelegationTokenIdentifier;
 import org.apache.hadoop.lib.service.DelegationTokenManager;
 import org.apache.hadoop.lib.service.DelegationTokenManagerException;
+import org.apache.hadoop.lib.servlet.ServerWebApp;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.security.authentication.client.AuthenticationException;
 import org.apache.hadoop.security.authentication.server.AuthenticationHandler;
@@ -51,7 +55,24 @@ public class TestHttpFSKerberosAuthentic

   @Test
   @TestDir
-  public void testManagementOperations() throws Exception {
+  public void testManagementOperationsWebHdfsFileSystem() throws Exception {
+    testManagementOperations(WebHdfsFileSystem.TOKEN_KIND);
+  }
+
+  @Test
+  @TestDir
+  public void testManagementOperationsSWebHdfsFileSystem() throws Exception {
+    try {
+      System.setProperty(HttpFSServerWebApp.NAME +
+        ServerWebApp.SSL_ENABLED, "true");
+      testManagementOperations(SWebHdfsFileSystem.TOKEN_KIND);
+    } finally {
+      System.getProperties().remove(HttpFSServerWebApp.NAME +
+        ServerWebApp.SSL_ENABLED);
+    }
+  }
+
+  private void testManagementOperations(Text expectedTokenKind) throws Exception {
     String dir = TestDirHelper.getTestDir().getAbsolutePath();

     Configuration httpfsConf = new Configuration(false);
@@ -67,8 +88,8 @@ public class TestHttpFSKerberosAuthentic
     testNonManagementOperation(handler);
     testManagementOperationErrors(handler);
-    testGetToken(handler, null);
-    testGetToken(handler, "foo");
+    testGetToken(handler, null, expectedTokenKind);
+    testGetToken(handler, "foo", expectedTokenKind);
     testCancelToken(handler);
     testRenewToken(handler);
@@ -112,8 +133,8 @@ public class TestHttpFSKerberosAuthentic
       Mockito.contains("requires SPNEGO"));
   }

-  private void testGetToken(AuthenticationHandler handler, String renewer)
-    throws Exception {
+  private void testGetToken(AuthenticationHandler handler, String renewer,
+      Text expectedTokenKind) throws Exception {
     DelegationTokenOperation op = DelegationTokenOperation.GETDELEGATIONTOKEN;
     HttpServletRequest request = Mockito.mock(HttpServletRequest.class);
     HttpServletResponse response = Mockito.mock(HttpServletResponse.class);
@@ -154,6 +175,7 @@ public class TestHttpFSKerberosAuthentic
     Token dt = new Token();
     dt.decodeFromUrlString(tokenStr);
     HttpFSServerWebApp.get().get(DelegationTokenManager.class).verifyToken(dt);
+    Assert.assertEquals(expectedTokenKind, dt.getKind());
   }

   private void testCancelToken(AuthenticationHandler handler)
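The new testManagementOperationsSWebHdfsFileSystem toggles a JVM-wide system property inside try/finally so the SSL flag cannot leak into other tests. The same pattern as a reusable standalone sketch — the helper name and the property key are illustrative; the real test concatenates HttpFSServerWebApp.NAME and ServerWebApp.SSL_ENABLED:

```java
import java.util.concurrent.Callable;

public class WithSystemProperty {

    // Sets a system property for the duration of one call, then removes it
    // in a finally block so the JVM-wide state is restored even when the
    // body throws.
    static <T> T withProperty(String name, String value, Callable<T> body)
            throws Exception {
        System.setProperty(name, value);
        try {
            return body.call();
        } finally {
            System.getProperties().remove(name);
        }
    }

    public static void main(String[] args) throws Exception {
        String seen = withProperty("httpfs.ssl.enabled", "true",
            () -> System.getProperty("httpfs.ssl.enabled", "false"));
        System.out.println(seen);                                        // true
        System.out.println(System.getProperty("httpfs.ssl.enabled", "false")); // false
    }
}
```

Restoring by removal (rather than resetting to "false") matches the test above: the property is simply absent again, and ServerWebApp.isSslEnabled falls back to its "false" default.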
hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/lib/service/security/TestDelegationTokenManagerService.java URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/lib/service/security/TestDelegationTokenManagerService.java?rev=1560768&r1=1560767&r2=1560768&view=diff ============================================================================== --- hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/lib/service/security/TestDelegationTokenManagerService.java (original) +++ hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/lib/service/security/TestDelegationTokenManagerService.java Thu Jan 23 17:49:24 2014 @@ -23,6 +23,9 @@ import org.apache.hadoop.fs.http.server. import org.apache.hadoop.lib.server.Server; import org.apache.hadoop.lib.service.DelegationTokenManager; import org.apache.hadoop.lib.service.DelegationTokenManagerException; +import org.apache.hadoop.lib.service.hadoop.FileSystemAccessService; +import org.apache.hadoop.lib.service.instrumentation.InstrumentationService; +import org.apache.hadoop.lib.service.scheduler.SchedulerService; import org.apache.hadoop.security.UserGroupInformation; import org.apache.hadoop.security.token.Token; import org.apache.hadoop.test.HTestCase; @@ -43,9 +46,12 @@ public class TestDelegationTokenManagerS public void service() throws Exception { String dir = TestDirHelper.getTestDir().getAbsolutePath(); Configuration conf = new Configuration(false); - conf.set("server.services", StringUtils.join(",", - Arrays.asList(DelegationTokenManagerService.class.getName()))); - Server server = new Server("server", dir, dir, dir, dir, conf); + conf.set("httpfs.services", StringUtils.join(",", + Arrays.asList(InstrumentationService.class.getName(), + SchedulerService.class.getName(), + 
FileSystemAccessService.class.getName(), + DelegationTokenManagerService.class.getName()))); + Server server = new HttpFSServerWebApp(dir, dir, dir, dir, conf); server.init(); DelegationTokenManager tm = server.get(DelegationTokenManager.class); Assert.assertNotNull(tm); Modified: hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/TestJettyHelper.java URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/TestJettyHelper.java?rev=1560768&r1=1560767&r2=1560768&view=diff ============================================================================== --- hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/TestJettyHelper.java (original) +++ hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/TestJettyHelper.java Thu Jan 23 17:49:24 2014 @@ -28,31 +28,46 @@ import org.junit.Test; import org.junit.rules.MethodRule; import org.junit.runners.model.FrameworkMethod; import org.junit.runners.model.Statement; +import org.mortbay.jetty.Connector; import org.mortbay.jetty.Server; +import org.mortbay.jetty.security.SslSocketConnector; public class TestJettyHelper implements MethodRule { - - @Test - public void dummy() { + private boolean ssl; + private String keyStoreType; + private String keyStore; + private String keyStorePassword; + private Server server; + + public TestJettyHelper() { + this.ssl = false; + } + + public TestJettyHelper(String keyStoreType, String keyStore, + String keyStorePassword) { + ssl = true; + this.keyStoreType = keyStoreType; + this.keyStore = keyStore; + this.keyStorePassword = keyStorePassword; } - private static ThreadLocal TEST_SERVLET_TL = new InheritableThreadLocal(); + private static ThreadLocal TEST_JETTY_TL = + new InheritableThreadLocal(); @Override public Statement apply(final Statement 
statement, final FrameworkMethod frameworkMethod, final Object o) { return new Statement() { @Override public void evaluate() throws Throwable { - Server server = null; TestJetty testJetty = frameworkMethod.getAnnotation(TestJetty.class); if (testJetty != null) { server = createJettyServer(); } try { - TEST_SERVLET_TL.set(server); + TEST_JETTY_TL.set(TestJettyHelper.this); statement.evaluate(); } finally { - TEST_SERVLET_TL.remove(); + TEST_JETTY_TL.remove(); if (server != null && server.isRunning()) { try { server.stop(); @@ -73,8 +88,19 @@ public class TestJettyHelper implements int port = ss.getLocalPort(); ss.close(); Server server = new Server(0); - server.getConnectors()[0].setHost(host); - server.getConnectors()[0].setPort(port); + if (!ssl) { + server.getConnectors()[0].setHost(host); + server.getConnectors()[0].setPort(port); + } else { + SslSocketConnector c = new SslSocketConnector(); + c.setHost(host); + c.setPort(port); + c.setNeedClientAuth(false); + c.setKeystore(keyStore); + c.setKeystoreType(keyStoreType); + c.setKeyPassword(keyStorePassword); + server.setConnectors(new Connector[] {c}); + } return server; } catch (Exception ex) { throw new RuntimeException("Could not start embedded servlet container, " + ex.getMessage(), ex); @@ -109,11 +135,11 @@ public class TestJettyHelper implements * @return a Jetty server ready to be configured and the started. */ public static Server getJettyServer() { - Server server = TEST_SERVLET_TL.get(); - if (server == null) { + TestJettyHelper helper = TEST_JETTY_TL.get(); + if (helper == null || helper.server == null) { throw new IllegalStateException("This test does not use @TestJetty"); } - return server; + return helper.server; } /** @@ -123,12 +149,15 @@ public class TestJettyHelper implements * @return the base URL (SCHEMA://HOST:PORT) of the test Jetty server. 
*/ public static URL getJettyURL() { - Server server = TEST_SERVLET_TL.get(); - if (server == null) { + TestJettyHelper helper = TEST_JETTY_TL.get(); + if (helper == null || helper.server == null) { throw new IllegalStateException("This test does not use @TestJetty"); } try { - return new URL("http://" + server.getConnectors()[0].getHost() + ":" + server.getConnectors()[0].getPort()); + String scheme = (helper.ssl) ? "https" : "http"; + return new URL(scheme + "://" + + helper.server.getConnectors()[0].getHost() + ":" + + helper.server.getConnectors()[0].getPort()); } catch (MalformedURLException ex) { throw new RuntimeException("It should never happen, " + ex.getMessage(), ex); } Modified: hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt?rev=1560768&r1=1560767&r2=1560768&view=diff ============================================================================== --- hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt (original) +++ hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt Thu Jan 23 17:49:24 2014 @@ -120,94 +120,9 @@ Trunk (Unreleased) HDFS-5041. Add the time of last heartbeat to dead server Web UI (Shinichi Yamashita via brandonli) - HDFS-5049. Add JNI mlock support. (Andrew Wang via Colin Patrick McCabe) - - HDFS-5051. Propagate cache status information from the DataNode to the - NameNode (Andrew Wang via Colin Patrick McCabe) - - HDFS-5052. Add cacheRequest/uncacheRequest support to NameNode. - (contributed by Colin Patrick McCabe) - - HDFS-5050. Add DataNode support for mlock and munlock - (Andrew Wang via Colin Patrick McCabe) - - HDFS-5141. Add cache status information to datanode heartbeat. - (Contributed by Andrew Wang) - - HDFS-5121. Add RPCs for creating and manipulating cache pools. - (Contributed by Colin Patrick McCabe) - - HDFS-5163. 
Miscellaneous cache pool RPC fixes. (Contributed by Colin - Patrick McCabe) - - HDFS-5120. Add command-line support for manipulating cache pools. - (Contributed by Colin Patrick McCabe) - - HDFS-5158. Add command-line support for manipulating cache directives. - (Contributed by Colin Patrick McCabe) - - HDFS-5053. NameNode should invoke DataNode APIs to coordinate caching. - (Andrew Wang) - - HDFS-5197. Document dfs.cachereport.intervalMsec in hdfs-default.xml. - (cnauroth) - - HDFS-5213. Separate PathBasedCacheEntry and PathBasedCacheDirectiveWithId. - (Contributed by Colin Patrick McCabe) - - HDFS-5236. Change PathBasedCacheDirective APIs to be a single value - rather than batch. (Contributed by Andrew Wang) - - HDFS-5191. Revisit zero-copy API in FSDataInputStream to make it more - intuitive. (Contributed by Colin Patrick McCabe) - - HDFS-5119. Persist CacheManager state in the edit log. - (Contributed by Andrew Wang) - - HDFS-5190. Move cache pool related CLI commands to CacheAdmin. - (Contributed by Andrew Wang) - - HDFS-5304. Expose if a block replica is cached in getFileBlockLocations. - (Contributed by Andrew Wang) - - HDFS-5224. Refactor PathBasedCache* methods to use a Path rather than a - String. (cnauroth) - - HDFS-5358. Add replication field to PathBasedCacheDirective. - (Contributed by Colin Patrick McCabe) - - HDFS-5359. Allow LightWeightGSet#Iterator to remove elements. - (Contributed by Colin Patrick McCabe) - - HDFS-5096. Automatically cache new data added to a cached path. - (Contributed by Colin Patrick McCabe) - - HDFS-5378. In CacheReport, don't send genstamp and length on the wire - (Contributed by Colin Patrick McCabe) - - HDFS-5386. Add feature documentation for datanode caching. - (Colin Patrick McCabe via cnauroth) - - HDFS-5326. add modifyDirective to cacheAdmin. (cmccabe) - - HDFS-5450. Better API for getting the cached blocks locations. (wang) - - HDFS-5485. Add command-line support for modifyDirective. (cmccabe) - - HDFS-5366. 
recaching improvements (cmccabe) - - HDFS-5511. improve CacheManipulator interface to allow better unit testing - (cmccabe) - - HDFS-5451. Add byte and file statistics to PathBasedCacheEntry. - (Colin Patrick McCabe via Andrew Wang) - HDFS-5531. Combine the getNsQuota() and getDsQuota() methods in INode. (szetszwo) - HDFS-5473. Consistent naming of user-visible caching classes and methods - (cmccabe) - HDFS-5285. Flatten INodeFile hierarchy: Replace INodeFileUnderConstruction and INodeFileUnderConstructionWithSnapshot with FileUnderContructionFeature. (jing9 via szetszwo) @@ -215,15 +130,8 @@ Trunk (Unreleased) HDFS-5286. Flatten INodeDirectory hierarchy: Replace INodeDirectoryWithQuota with DirectoryWithQuotaFeature. (szetszwo) - HDFS-5556. Add some more NameNode cache statistics, cache pool stats - (cmccabe) - HDFS-5537. Remove FileWithSnapshot interface. (jing9 via szetszwo) - HDFS-5430. Support TTL on CacheDirectives. (wang) - - HDFS-5630. Hook up cache directive and pool usage statistics. (wang) - HDFS-5554. Flatten INodeFile hierarchy: Replace INodeFileWithSnapshot with FileWithSnapshotFeature. (jing9 via szetszwo) @@ -234,14 +142,6 @@ Trunk (Unreleased) INodeDirectoryWithSnapshot with DirectoryWithSnapshotFeature. (jing9 via szetszwo) - HDFS-5431. Support cachepool-based limit management in path-based caching - (awang via cmccabe) - - HDFS-5636. Enforce a max TTL per cache pool. (awang via cmccabe) - - HDFS-5651. Remove dfs.namenode.caching.enabled and improve CRM locking. - (cmccabe via wang) - HDFS-5715. Use Snapshot ID to indicate the corresponding Snapshot for a FileDiff/DirectoryDiff. (jing9) @@ -250,11 +150,6 @@ Trunk (Unreleased) OPTIMIZATIONS - HDFS-5349. DNA_CACHE and DNA_UNCACHE should be by blockId only. (cmccabe) - - HDFS-5665. Remove the unnecessary writeLock while initializing CacheManager - in FsNameSystem Ctor. (Uma Maheswara Rao G via Andrew Wang) - BUG FIXES HADOOP-9635 Fix potential Stack Overflow in DomainSocket.c (V. 
Karthik Kumar @@ -372,110 +267,12 @@ Trunk (Unreleased) HDFS-4366. Block Replication Policy Implementation May Skip Higher-Priority Blocks for Lower-Priority Blocks (Derek Dagit via kihwal) - HDFS-5169. hdfs.c: translateZCRException: null pointer deref when - translating some exceptions. (Contributed by Colin Patrick McCabe) - - HDFS-5198. NameNodeRpcServer must not send back DNA_FINALIZE in reply to a - cache report. (Contributed by Colin Patrick McCabe) - - HDFS-5195. Prevent passing null pointer to mlock and munlock. (cnauroth) - - HDFS-5201. NativeIO: consolidate getrlimit into NativeIO#getMemlockLimit - (Contributed by Colin Patrick McCabe) - - HDFS-5210. Fix some failing unit tests on HDFS-4949 branch. - (Contributed by Andrew Wang) - - HDFS-5266. ElasticByteBufferPool#Key does not implement equals. (cnauroth) - - HDFS-5309. Fix failing caching unit tests. (Andrew Wang) - - HDFS-5314. Do not expose CachePool type in AddCachePoolOp (Colin Patrick - McCabe) - - HDFS-5348. Fix error message when dfs.datanode.max.locked.memory is - improperly configured. (Colin Patrick McCabe) - - HDFS-5373. hdfs cacheadmin -addDirective short usage does not mention - -replication parameter. (cnauroth) - - HDFS-5383. fix broken caching unit tests. (Andrew Wang) - - HDFS-5388. Loading fsimage fails to find cache pools during namenode - startup. (Chris Nauroth via Colin Patrick McCabe) - - HDFS-5203. Concurrent clients that add a cache directive on the same path - may prematurely uncache from each other. (Chris Nauroth via Colin Patrick - McCabe) - - HDFS-5385. Caching RPCs are AtMostOnce, but do not persist client ID and - call ID to edit log. (Chris Nauroth via Colin Patrick McCabe) - - HDFS-5404. Resolve regressions in Windows compatibility on HDFS-4949 - branch. (Chris Nauroth via Andrew Wang) - - HDFS-5405. Fix possible RetryCache hang for caching RPC handlers in - FSNamesystem. (wang) - - HDFS-5419. Fixup test-patch.sh warnings on HDFS-4949 branch. (wang) - - HDFS-5468. 
CacheAdmin help command does not recognize commands (Stephen - Chu via Colin Patrick McCabe) - - HDFS-5394. Fix race conditions in DN caching and uncaching (cmccabe) - - HDFS-5482. DistributedFileSystem#listPathBasedCacheDirectives must support - relative paths. (Colin Patrick McCabe via cnauroth) - - HDFS-5320. Add datanode caching metrics. (wang) - - HDFS-5520. loading cache path directives from edit log doesn't update - nextEntryId (cmccabe) - - HDFS-5512. CacheAdmin -listPools fails with NPE when user lacks permissions - to view all pools (wang via cmccabe) - - HDFS-5513. CacheAdmin commands fail when using . as the path. (wang) - - HDFS-5543. Fix narrow race condition in TestPathBasedCacheRequests - (cmccabe) - - HDFS-5565. CacheAdmin help should match against non-dashed commands - (wang via cmccabe) - - HDFS-5562. TestCacheDirectives and TestFsDatasetCache should stub out - native mlock. (Colin McCabe and Akira Ajisaka via wang) - - HDFS-5555. CacheAdmin commands fail when first listed NameNode is in - Standby (jxiang via cmccabe) - - HDFS-5626. dfsadmin -report shows incorrect cache values. (cmccabe) - - HDFS-5679. TestCacheDirectives should handle the case where native code - is not available. (wang) - - HDFS-5701. Fix the CacheAdmin -addPool -maxTtl option name. - (Stephen Chu via wang) - - HDFS-5708. The CacheManager throws a NPE in the DataNode logs when - processing cache reports that refer to a block not known to the - BlockManager. (cmccabe via wang) - - HDFS-5659. dfsadmin -report doesn't output cache information properly. - (wang) - HDFS-5705. TestSecondaryNameNodeUpgrade#testChangeNsIDFails may fail due to ConcurrentModificationException. (Ted Yu via brandonli) HDFS-5719. FSImage#doRollback() should close prevState before return (Ted Yu via brandonli) - HDFS-5589. Namenode loops caching and uncaching when data should be - uncached (awang via cmccabe) - - HDFS-5724. 
modifyCacheDirective logging audit log command wrongly as - addCacheDirective (Uma Maheswara Rao G via Colin Patrick McCabe) - HDFS-5726. Fix compilation error in AbstractINodeDiff for JDK7. (jing9) HDFS-5768. Consolidate the serialization code in DelegationTokenSecretManager @@ -524,6 +321,10 @@ Release 2.4.0 - UNRELEASED HDFS-5784. reserve space in edit log header and fsimage header for feature flag section (cmccabe) + HDFS-5703. Add support for HTTPS and swebhdfs to HttpFS. (tucu) + + HDFS-4949. Centralized cache management in HDFS. (wang and cmccabe) + IMPROVEMENTS HDFS-5267. Remove volatile from LightWeightHashSet. (Junping Du via llu) @@ -695,6 +496,12 @@ Release 2.4.0 - UNRELEASED HDFS-5704. Change OP_UPDATE_BLOCKS with a new OP_ADD_BLOCK. (jing9) + HDFS-5434. Change block placement policy constructors from package private + to protected. (Buddy Taylor via Arpit Agarwal) + + HDFS-5788. listLocatedStatus response can be very large. (Nathan Roberts + via kihwal) + OPTIMIZATIONS HDFS-5239. Allow FSNamesystem lock fairness to be configurable (daryn) @@ -704,6 +511,9 @@ Release 2.4.0 - UNRELEASED HDFS-5681. renewLease should not hold fsn write lock. (daryn via Kihwal) + HDFS-5241. Provide alternate queuing audit logger to reduce logging + contention (daryn) + BUG FIXES HDFS-5034. Remove debug prints from GetFileLinkInfo (Andrew Wang via Colin @@ -778,6 +588,12 @@ Release 2.4.0 - UNRELEASED HDFS-5800. Fix a typo in DFSClient.renewLease(). (Kousuke Saruta via szetszwo) + HDFS-5748. Too much information shown in the dfs health page. + (Haohui Mai via brandonli) + + HDFS-5806. balancer should set SoTimeout to avoid indefinite hangs. + (Nathan Roberts via Andrew Wang). + BREAKDOWN OF HDFS-2832 SUBTASKS AND RELATED JIRAS HDFS-4985. Add storage type to the protocol and expose it in block report @@ -911,6 +727,211 @@ Release 2.4.0 - UNRELEASED HDFS-5667. Include DatanodeStorage in StorageReport. 
(Arpit Agarwal) + BREAKDOWN OF HDFS-4949 SUBTASKS AND RELATED JIRAS + + HDFS-5049. Add JNI mlock support. (Andrew Wang via Colin Patrick McCabe) + + HDFS-5051. Propagate cache status information from the DataNode to the + NameNode (Andrew Wang via Colin Patrick McCabe) + + HDFS-5052. Add cacheRequest/uncacheRequest support to NameNode. + (Contributed by Colin Patrick McCabe.) + + HDFS-5050. Add DataNode support for mlock and munlock (contributed by + Andrew Wang) + + HDFS-5141. Add cache status information to datanode heartbeat. (Contributed + by Andrew Wang) + + HDFS-5121. Add RPCs for creating and manipulating cache pools. + (Contributed by Colin Patrick McCabe) + + HDFS-5163. Miscellaneous cache pool RPC fixes (Contributed by Colin Patrick + McCabe) + + HDFS-5169. hdfs.c: translateZCRException: null pointer deref when + translating some exceptions (Contributed by Colin Patrick McCabe) + + HDFS-5120. Add command-line support for manipulating cache pools. (cmccabe) + + HDFS-5158. Add command-line support for manipulating cache directives. + (cmccabe) + + HDFS-5198. NameNodeRpcServer must not send back DNA_FINALIZE in reply to a + cache report. (cmccabe) + + HDFS-5195. Prevent passing null pointer to mlock and munlock. Contributed + by Chris Nauroth. + + HDFS-5053. NameNode should invoke DataNode APIs to coordinate caching. + (Andrew Wang) + + HDFS-5201. NativeIO: consolidate getrlimit into NativeIO#getMemlockLimit. + (Contributed by Colin Patrick McCabe) + + HDFS-5197. Document dfs.cachereport.intervalMsec in hdfs-default.xml. + Contributed by Chris Nauroth. + + HDFS-5210. Fix some failing unit tests on HDFS-4949 branch. (Contributed by + Andrew Wang) + + HDFS-5213. Separate PathBasedCacheEntry and PathBasedCacheDirectiveWithId. + Contributed by Colin Patrick McCabe. + + HDFS-5236. Change PathBasedCacheDirective APIs to be a single value rather + than batch. (Contributed by Andrew Wang) + + HDFS-5119. Persist CacheManager state in the edit log. 
(Contributed by + Andrew Wang) + + HDFS-5190. Move cache pool related CLI commands to CacheAdmin. (Contributed + by Andrew Wang) + + HDFS-5309. Fix failing caching unit tests. (Andrew Wang) + + HDFS-5314. Do not expose CachePool type in AddCachePoolOp (Colin Patrick + McCabe) + + HDFS-5304. Expose if a block replica is cached in getFileBlockLocations. + (Contributed by Andrew Wang) + + HDFS-5224. Refactor PathBasedCache* methods to use a Path rather than a + String. Contributed by Chris Nauroth. + + HDFS-5348. Fix error message when dfs.datanode.max.locked.memory is + improperly configured. (Contributed by Colin Patrick McCabe) + + HDFS-5349. DNA_CACHE and DNA_UNCACHE should be by blockId only (cmccabe) + + HDFS-5358. Add replication field to PathBasedCacheDirective. (Contributed + by Colin Patrick McCabe) + + HDFS-5359. Allow LightWeightGSet#Iterator to remove elements. (Contributed + by Colin Patrick McCabe) + + HDFS-5373. hdfs cacheadmin -addDirective short usage does not mention + -replication parameter. Contributed by Chris Nauroth. + + HDFS-5096. Automatically cache new data added to a cached path (contributed + by Colin Patrick McCabe) + + HDFS-5383. fix broken caching unit tests (Andrew Wang) + + HDFS-5388. Loading fsimage fails to find cache pools during namenode + startup (Chris Nauroth via Colin Patrick McCabe) + + HDFS-5203. Concurrent clients that add a cache directive on the same path + may prematurely uncache each other. (Chris Nauroth via Colin Patrick McCabe) + + HDFS-5378. In CacheReport, don't send genstamp and length on the wire + (Contributed by Colin Patrick McCabe) + + HDFS-5385. Caching RPCs are AtMostOnce, but do not persist client ID and + call ID to edit log. (Chris Nauroth via Colin Patrick McCabe) + + HDFS-5404 Resolve regressions in Windows compatibility on HDFS-4949 branch. + Contributed by Chris Nauroth. + + HDFS-5405. Fix possible RetryCache hang for caching RPC handlers in + FSNamesystem. (Contributed by Andrew Wang) + + HDFS-5419. 
Fixup test-patch.sh warnings on HDFS-4949 branch. (wang) + + HDFS-5386. Add feature documentation for datanode caching. Contributed by + Colin Patrick McCabe. + + HDFS-5468. CacheAdmin help command does not recognize commands (Stephen + Chu via Colin Patrick McCabe) + + HDFS-5326. add modifyDirective to cacheAdmin (cmccabe) + + HDFS-5394: Fix race conditions in DN caching and uncaching (cmccabe) + + HDFS-5320. Add datanode caching metrics. Contributed by Andrew Wang. + + HDFS-5482. DistributedFileSystem#listPathBasedCacheDirectives must support + relative paths. Contributed by Colin Patrick McCabe. + + HDFS-5471. CacheAdmin -listPools fails when user lacks permissions to view + all pools (Andrew Wang via Colin Patrick McCabe) + + HDFS-5450. better API for getting the cached blocks locations. Contributed + by Andrew Wang. + + HDFS-5485. add command-line support for modifyDirective (cmccabe) + + HDFS-5366. recaching improvements (cmccabe) + + HDFS-5520. loading cache path directives from edit log doesnt update + nextEntryId (cmccabe) + + HDFS-5512. CacheAdmin -listPools fails with NPE when user lacks permissions + to view all pools (awang via cmccabe) + + HDFS-5513. CacheAdmin commands fail when using . as the path. Contributed + by Andrew Wang. + + HDFS-5511. improve CacheManipulator interface to allow better unit testing + (cmccabe) + + HDFS-5451. Add byte and file statistics to PathBasedCacheEntry. Contributed + by Colin Patrick McCabe. + + HDFS-5473. Consistent naming of user-visible caching classes and methods + (cmccabe) + + HDFS-5543. Fix narrow race condition in TestPathBasedCacheRequests + (cmccabe) + + HDFS-5565. CacheAdmin help should match against non-dashed commands (wang + via cmccabe) + + HDFS-5556. Add some more NameNode cache statistics, cache pool stats + (cmccabe) + + HDFS-5562. TestCacheDirectives and TestFsDatasetCache should stub out + native mlock. Contributed by Colin Patrick McCabe and Akira Ajisaka. + + HDFS-5430. 
Support TTL on CacheDirectives. Contributed by Andrew Wang. + + HDFS-5555. CacheAdmin commands fail when first listed NameNode is in + Standby (jxiang via cmccabe) + + HDFS-5626. dfsadmin report shows incorrect values (cmccabe) + + HDFS-5630. Hook up cache directive and pool usage statistics. (wang) + + HDFS-5665. Remove the unnecessary writeLock while initializing CacheManager + in FsNameSystem Ctor. (Uma Maheswara Rao G via Andrew Wang) + + HDFS-5431. Support cachepool-based limit management in path-based caching. + (awang via cmccabe) + + HDFS-5679. TestCacheDirectives should handle the case where native code is + not available. (wang) + + HDFS-5636. Enforce a max TTL per cache pool (awang via cmccabe) + + HDFS-5701. Fix the CacheAdmin -addPool -maxTtl option name. Contributed by + Stephen Chu. + + HDFS-5708. The CacheManager throws a NPE in the DataNode logs when + processing cache reports that refer to a block not known to the BlockManager. + Contributed by Colin Patrick McCabe. + + HDFS-5659. dfsadmin -report doesn't output cache information properly. + Contributed by Andrew Wang. + + HDFS-5651. Remove dfs.namenode.caching.enabled and improve CRM locking. + Contributed by Colin Patrick McCabe. + + HDFS-5589. Namenode loops caching and uncaching when data should be + uncached. (awang via cmccabe) + + HDFS-5724. modifyCacheDirective logging audit log command wrongly as + addCacheDirective (Uma Maheswara Rao G via Colin Patrick McCabe) + + Release 2.3.0 - UNRELEASED INCOMPATIBLE CHANGES @@ -1111,6 +1132,8 @@ Release 2.3.0 - UNRELEASED HDFS-5649. Unregister NFS and Mount service when NFS gateway is shutting down. (brandonli) + HDFS-5789. Some of snapshot APIs missing checkOperation double check in fsn. (umamahesh) + Release 2.2.0 - 2013-10-13 INCOMPATIBLE CHANGES @@ -1278,7 +1301,7 @@ Release 2.1.1-beta - 2013-09-23 HDFS-5091. Support for spnego keytab separate from the JournalNode keytab for secure HA. (jing9) - HDFS-5051. 
nn fails to download checkpointed image from snn in some + HDFS-5055. nn fails to download checkpointed image from snn in some setups. (Vinay and suresh via suresh) HDFS-4898. BlockPlacementPolicyWithNodeGroup.chooseRemoteRack() fails to Propchange: hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/src/main/java/ ------------------------------------------------------------------------------ Merged /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java:r1559794-1560767 Modified: hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java?rev=1560768&r1=1560767&r2=1560768&view=diff ============================================================================== --- hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java (original) +++ hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java Thu Jan 23 17:49:24 2014 @@ -304,6 +304,8 @@ public class DFSConfigKeys extends Commo public static final String DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME = "default"; public static final String DFS_NAMENODE_AUDIT_LOG_TOKEN_TRACKING_ID_KEY = "dfs.namenode.audit.log.token.tracking.id"; public static final boolean DFS_NAMENODE_AUDIT_LOG_TOKEN_TRACKING_ID_DEFAULT = false; + public static final String DFS_NAMENODE_AUDIT_LOG_ASYNC_KEY = "dfs.namenode.audit.log.async"; + public static final boolean DFS_NAMENODE_AUDIT_LOG_ASYNC_DEFAULT = false; // Much code in hdfs is not yet updated to use these keys. 
public static final String DFS_CLIENT_BLOCK_WRITE_LOCATEFOLLOWINGBLOCK_RETRIES_KEY = "dfs.client.block.write.locateFollowingBlock.retries"; Modified: hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java?rev=1560768&r1=1560767&r2=1560768&view=diff ============================================================================== --- hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java (original) +++ hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java Thu Jan 23 17:49:24 2014 @@ -337,6 +337,7 @@ public class Balancer { sock.connect( NetUtils.createSocketAddr(target.datanode.getXferAddr()), HdfsServerConstants.READ_TIMEOUT); + sock.setSoTimeout(HdfsServerConstants.READ_TIMEOUT); sock.setKeepAlive(true); OutputStream unbufOut = sock.getOutputStream(); Modified: hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java?rev=1560768&r1=1560767&r2=1560768&view=diff ============================================================================== --- hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java (original) +++ hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java Thu Jan 23 17:49:24 
2014
@@ -79,7 +79,7 @@ public class BlockPlacementPolicyDefault
    */
   protected int tolerateHeartbeatMultiplier;
 
-  BlockPlacementPolicyDefault(Configuration conf, FSClusterStats stats,
+  protected BlockPlacementPolicyDefault(Configuration conf, FSClusterStats stats,
                               NetworkTopology clusterMap) {
     initialize(conf, stats, clusterMap);
   }

Modified: hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyWithNodeGroup.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyWithNodeGroup.java?rev=1560768&r1=1560767&r2=1560768&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyWithNodeGroup.java (original)
+++ hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyWithNodeGroup.java Thu Jan 23 17:49:24 2014
@@ -46,12 +46,12 @@ import org.apache.hadoop.net.NodeBase;
  */
 public class BlockPlacementPolicyWithNodeGroup extends BlockPlacementPolicyDefault {
 
-  BlockPlacementPolicyWithNodeGroup(Configuration conf, FSClusterStats stats,
+  protected BlockPlacementPolicyWithNodeGroup(Configuration conf, FSClusterStats stats,
       NetworkTopology clusterMap) {
     initialize(conf, stats, clusterMap);
   }
 
-  BlockPlacementPolicyWithNodeGroup() {
+  protected BlockPlacementPolicyWithNodeGroup() {
   }
 
   @Override

Modified: hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java?rev=1560768&r1=1560767&r2=1560768&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java (original)
+++ hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java Thu Jan 23 17:49:24 2014
@@ -176,7 +176,6 @@ public class FSDirectory implements Clos
         DFSConfigKeys.DFS_LIST_LIMIT, DFSConfigKeys.DFS_LIST_LIMIT_DEFAULT);
     this.lsLimit = configuredLimit>0 ?
         configuredLimit : DFSConfigKeys.DFS_LIST_LIMIT_DEFAULT;
-
     this.contentCountLimit = conf.getInt(
         DFSConfigKeys.DFS_CONTENT_SUMMARY_LIMIT_KEY,
         DFSConfigKeys.DFS_CONTENT_SUMMARY_LIMIT_DEFAULT);
@@ -1503,6 +1502,11 @@ public class FSDirectory implements Clos
   /**
    * Get a partial listing of the indicated directory
    *
+   * We will stop when any of the following conditions is met:
+   * 1) this.lsLimit files have been added
+   * 2) needLocation is true AND enough files have been added such
+   *    that at least this.lsLimit block locations are in the response
+   *
    * @param src the directory name
    * @param startAfter the name to start listing after
    * @param needLocation if block locations are returned
@@ -1534,14 +1538,30 @@
       int startChild = INodeDirectory.nextChild(contents, startAfter);
       int totalNumChildren = contents.size();
       int numOfListing = Math.min(totalNumChildren-startChild, this.lsLimit);
+      int locationBudget = this.lsLimit;
+      int listingCnt = 0;
       HdfsFileStatus listing[] = new HdfsFileStatus[numOfListing];
-      for (int i=0; i<numOfListing; i++) {
+      for (int i=0; i<numOfListing && locationBudget>0; i++) {
         INode cur = contents.get(startChild+i);
         listing[i] = createFileStatus(cur.getLocalNameBytes(), cur,
             needLocation, snapshot);
+        listingCnt++;
+        if (needLocation) {
+          // Once we hit lsLimit locations, stop.
+          // This helps to prevent excessively large response payloads.
+          // Approximate #locations with locatedBlockCount() * repl_factor
+          LocatedBlocks blks =
+              ((HdfsLocatedFileStatus)listing[i]).getBlockLocations();
+          locationBudget -= (blks == null) ? 0 :
+              blks.locatedBlockCount() * listing[i].getReplication();
+        }
+      }
+      // truncate return array if necessary
+      if (listingCnt < numOfListing) {
+        listing = Arrays.copyOf(listing, listingCnt);
       }
       return new DirectoryListing(
-          listing, totalNumChildren-startChild-numOfListing);
+          listing, totalNumChildren-startChild-listingCnt);
     } finally {
       readUnlock();
     }

Modified: hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java?rev=1560768&r1=1560767&r2=1560768&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java (original)
+++ hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java Thu Jan 23 17:49:24 2014
@@ -38,6 +38,8 @@ import static org.apache.hadoop.hdfs.DFS
 import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_AUDIT_LOGGERS_KEY;
 import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_AUDIT_LOG_TOKEN_TRACKING_ID_DEFAULT;
 import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_AUDIT_LOG_TOKEN_TRACKING_ID_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_AUDIT_LOG_ASYNC_DEFAULT;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_AUDIT_LOG_ASYNC_KEY;
 import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_CHECKPOINT_TXNS_DEFAULT;
 import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_CHECKPOINT_TXNS_KEY;
 import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME;
@@ -122,6 +124,7 @@ import javax.management.StandardMBean;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
+import org.apache.commons.logging.impl.Log4JLogger;
 import org.apache.hadoop.HadoopIllegalArgumentException;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.conf.Configuration;
@@ -252,6 +255,9 @@ import org.apache.hadoop.util.DataChecks
 import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.util.Time;
 import org.apache.hadoop.util.VersionInfo;
+import org.apache.log4j.Appender;
+import org.apache.log4j.AsyncAppender;
+import org.apache.log4j.Logger;
 import org.mortbay.util.ajax.JSON;
 
 import com.google.common.annotations.VisibleForTesting;
@@ -656,6 +662,11 @@ public class FSNamesystem implements Nam
    */
   FSNamesystem(Configuration conf, FSImage fsImage, boolean ignoreRetryCache)
       throws IOException {
+    if (conf.getBoolean(DFS_NAMENODE_AUDIT_LOG_ASYNC_KEY,
+                        DFS_NAMENODE_AUDIT_LOG_ASYNC_DEFAULT)) {
+      LOG.info("Enabling async auditlog");
+      enableAsyncAuditLog();
+    }
     boolean fair = conf.getBoolean("dfs.namenode.fslock.fair", true);
     LOG.info("fsLock is fair:" + fair);
     fsLock = new FSNamesystemLock(fair);
@@ -6796,6 +6807,7 @@ public class FSNamesystem implements Nam
   /** Allow snapshot on a directory. */
   void allowSnapshot(String path) throws SafeModeException, IOException {
+    checkOperation(OperationCategory.WRITE);
     writeLock();
     try {
       checkOperation(OperationCategory.WRITE);
@@ -6821,6 +6833,7 @@ public class FSNamesystem implements Nam
   /** Disallow snapshot on a directory.
    */
   void disallowSnapshot(String path) throws SafeModeException, IOException {
+    checkOperation(OperationCategory.WRITE);
     writeLock();
     try {
       checkOperation(OperationCategory.WRITE);
@@ -6944,6 +6957,7 @@ public class FSNamesystem implements Nam
   public SnapshottableDirectoryStatus[] getSnapshottableDirListing()
       throws IOException {
     SnapshottableDirectoryStatus[] status = null;
+    checkOperation(OperationCategory.READ);
     final FSPermissionChecker checker = getPermissionChecker();
     readLock();
     try {
@@ -6977,6 +6991,7 @@ public class FSNamesystem implements Nam
   SnapshotDiffReport getSnapshotDiffReport(String path,
       String fromSnapshot, String toSnapshot) throws IOException {
     SnapshotDiffInfo diffs = null;
+    checkOperation(OperationCategory.READ);
     final FSPermissionChecker pc = getPermissionChecker();
     readLock();
     try {
@@ -7498,5 +7513,26 @@ public class FSNamesystem implements Nam
       auditLog.info(message);
     }
   }
+
+  private static void enableAsyncAuditLog() {
+    if (!(auditLog instanceof Log4JLogger)) {
+      LOG.warn("Log4j is required to enable async auditlog");
+      return;
+    }
+    Logger logger = ((Log4JLogger)auditLog).getLogger();
+    @SuppressWarnings("unchecked")
+    List<Appender> appenders = Collections.list(logger.getAllAppenders());
+    // failsafe against trying to async it more than once
+    if (!appenders.isEmpty() && !(appenders.get(0) instanceof AsyncAppender)) {
+      AsyncAppender asyncAppender = new AsyncAppender();
+      // change logger to have an async appender containing all the
+      // previously configured appenders
+      for (Appender appender : appenders) {
+        logger.removeAppender(appender);
+        asyncAppender.addAppender(appender);
+      }
+      logger.addAppender(asyncAppender);
+    }
+  }
 }

Propchange: hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/src/main/native/
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native:r1559794-1560767

Propchange: hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode/
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode:r1559794-1560767

Propchange: hadoop/common/branches/HDFS-4685/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs:r1559794-1560767
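The FSDirectory#getListing hunk above caps the number of block locations returned in a single listing response: each entry spends `locatedBlockCount() * replication` out of a budget of `lsLimit`, and the result array is truncated if the budget runs out early. A minimal standalone sketch of the same budget-and-truncate pattern, using a hypothetical `Item` type in place of `HdfsFileStatus` (illustration only, not the HDFS API):

```java
import java.util.Arrays;

class LocationBudgetSketch {
  // Hypothetical stand-in for a file entry; blockCount * replication
  // approximates the number of block locations its listing would carry.
  static final class Item {
    final String name;
    final int blockCount;
    final int replication;
    Item(String name, int blockCount, int replication) {
      this.name = name;
      this.blockCount = blockCount;
      this.replication = replication;
    }
  }

  /** List at most lsLimit entries, stopping early once the location budget is spent. */
  static Item[] listWithBudget(Item[] contents, int startChild, int lsLimit,
                               boolean needLocation) {
    int numOfListing = Math.min(contents.length - startChild, lsLimit);
    int locationBudget = lsLimit;
    int listingCnt = 0;
    Item[] listing = new Item[numOfListing];
    for (int i = 0; i < numOfListing && locationBudget > 0; i++) {
      listing[i] = contents[startChild + i];
      listingCnt++;
      if (needLocation) {
        // Mirrors locatedBlockCount() * getReplication() in the patch.
        locationBudget -= listing[i].blockCount * listing[i].replication;
      }
    }
    // Truncate the result array if we stopped early.
    return (listingCnt < numOfListing) ? Arrays.copyOf(listing, listingCnt) : listing;
  }

  public static void main(String[] args) {
    Item[] items = { new Item("a", 2, 3), new Item("b", 1, 3) };
    Item[] kept = listWithBudget(items, 0, 5, true);
    // "a" alone costs 2*3 = 6 locations, exhausting the budget of 5.
    System.out.println("entries kept: " + kept.length);
  }
}
```

Note that, as in the patch, the first entry is always included even if it alone overspends the budget, so a single hugely replicated file cannot stall the listing forever.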
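The snapshot-method hunks add a `checkOperation(...)` call *before* taking the namesystem lock, in addition to the existing check inside it. The early check lets a node that cannot serve the operation reject the request cheaply, without contending for the lock; the re-check after acquiring the lock guards against the state changing while the caller waited. A sketch of that check-then-lock-then-recheck pattern with stdlib types only (the `active` flag is a hypothetical stand-in for the NameNode's HA state):

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class CheckThenLockSketch {
  // Hypothetical stand-in for HA state: true = can serve writes.
  final AtomicBoolean active = new AtomicBoolean(true);
  final ReentrantReadWriteLock lock = new ReentrantReadWriteLock(true);

  void checkOperation() {
    if (!active.get()) {
      throw new IllegalStateException("Operation category WRITE is not supported");
    }
  }

  /** Mirrors the allowSnapshot pattern: cheap check, lock, re-check, do work. */
  void writeOp(Runnable work) {
    checkOperation();        // fail fast without touching the lock
    lock.writeLock().lock();
    try {
      checkOperation();      // state may have changed while we waited
      work.run();
    } finally {
      lock.writeLock().unlock();
    }
  }

  public static void main(String[] args) {
    CheckThenLockSketch ns = new CheckThenLockSketch();
    ns.writeOp(() -> System.out.println("snapshot allowed"));
    ns.active.set(false);
    try {
      ns.writeOp(() -> System.out.println("never runs"));
    } catch (IllegalStateException e) {
      System.out.println("rejected: " + e.getMessage());
    }
  }
}
```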
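The `enableAsyncAuditLog()` method wraps the audit logger's existing appenders in a log4j `AsyncAppender` programmatically. Roughly the same effect could be had declaratively in a `log4j.xml` (the XML configurator, unlike the `.properties` one, supports `appender-ref`); the fragment below is a hedged sketch only: the `RFAAUDIT` appender name and the exact audit logger name are assumptions about the deployment's log4j configuration, not something this commit defines.

```xml
<!-- Sketch: assumes an existing audit appender named RFAAUDIT. -->
<appender name="ASYNCAUDIT" class="org.apache.log4j.AsyncAppender">
  <appender-ref ref="RFAAUDIT"/>
</appender>
<logger name="org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit"
        additivity="false">
  <level value="info"/>
  <appender-ref ref="ASYNCAUDIT"/>
</logger>
```

The programmatic route taken by the patch has the advantage of working with whatever appenders the cluster already configured, and the `instanceof AsyncAppender` guard keeps it from double-wrapping if invoked twice.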