From: suresh@apache.org
To: common-commits@hadoop.apache.org
Reply-To: common-dev@hadoop.apache.org
Subject: svn commit: r1432246 [1/3] - in /hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common: ./ dev-support/ src/main/bin/ src/main/docs/ src/main/docs/src/documentation/content/xdocs/ src/main/java/ src/main/java/org/apache/hadoop/fs/ src/ma...
Date: Fri, 11 Jan 2013 19:40:28 -0000
Message-Id: <20130111194030.67A8B23889FA@eris.apache.org>

Author: suresh
Date: Fri Jan 11 19:40:23 2013
New Revision: 1432246

URL: http://svn.apache.org/viewvc?rev=1432246&view=rev
Log:
Merge r1414455:r1426018 from trunk

Added:
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeLibraryChecker.java
      - copied unchanged from r1426018, hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeLibraryChecker.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/proto/ProtobufRpcEngine.proto
      - copied unchanged from r1426018, hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/proto/ProtobufRpcEngine.proto
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/proto/RpcHeader.proto
      - copied unchanged from r1426018, hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/proto/RpcHeader.proto
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/site/apt/CommandsManual.apt.vm
      - copied unchanged from r1426018, hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/CommandsManual.apt.vm
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/site/apt/FileSystemShell.apt.vm
      - copied unchanged from r1426018, hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/FileSystemShell.apt.vm
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/site/apt/HttpAuthentication.apt.vm
      - copied unchanged from r1426018,
        hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/HttpAuthentication.apt.vm
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileSystemInitialization.java
      - copied unchanged from r1426018, hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileSystemInitialization.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestProxyUserFromEnv.java
      - copied unchanged from r1426018, hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestProxyUserFromEnv.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestNativeLibraryChecker.java
      - copied unchanged from r1426018, hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestNativeLibraryChecker.java

Removed:
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/HttpAuthentication.xml
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/cluster_setup.xml
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/commands_manual.xml
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/file_system_shell.xml
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/proto/RpcPayloadHeader.proto
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/proto/hadoop_rpc.proto

Modified:
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/CHANGES.txt   (contents, props changed)
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/pom.xml
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/bin/hadoop
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/docs/   (props changed)
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/   (props changed)
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DU.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DelegationTokenRenewer.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileStatus.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFs.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsUrlStreamHandlerFactory.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/local/RawLocalFs.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Command.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/MoveCommands.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFileSystem.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFs.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ZKFailoverController.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProtobufRpcEngine.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/JniBasedUnixGroupsMappingWithFallback.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ProtoUtil.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/site/apt/ClusterSetup.apt.vm
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/core/   (props changed)
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDU.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDelegationTokenRenewer.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileStatus.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFsShellCopy.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalFileSystem.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestChRootedFs.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestActiveStandbyElector.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestIPC.java
    hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java

Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/CHANGES.txt
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1432246&r1=1432245&r2=1432246&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/CHANGES.txt (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/CHANGES.txt Fri Jan 11 19:40:23 2013
@@ -11,6 +11,9 @@ Trunk (Unreleased)
 
   NEW FEATURES
 
+    HADOOP-8561. Introduce HADOOP_PROXY_USER for secure impersonation in child
+    hadoop client processes. (Yu Gao via llu)
+
     HADOOP-8469. Make NetworkTopology class pluggable.  (Junping Du via
     szetszwo)
 
@@ -129,9 +132,6 @@ Trunk (Unreleased)
     HADOOP-8776. Provide an option in test-patch that can enable/disable
     compiling native code. (Chris Nauroth via suresh)
 
-    HADOOP-9004. Allow security unit tests to use external KDC. (Stephen Chu
-    via suresh)
-
     HADOOP-6616. Improve documentation for rack awareness. (Adam Faris via
     jghoman)
 
@@ -141,8 +141,16 @@ Trunk (Unreleased)
     HADOOP-9093. Move all the Exception in PathExceptions to o.a.h.fs package.
     (suresh)
 
+    HADOOP-9140 Cleanup rpc PB protos (sanjay Radia)
+
+    HADOOP-9162. Add utility to check native library availability.
+    (Binglin Chang via suresh)
+
   BUG FIXES
 
+    HADOOP-9041. FsUrlStreamHandlerFactory could cause an infinite loop in
+    FileSystem initialization. (Yanbo Liang and Radim Kolar via llu)
+
     HADOOP-8418. Update UGI Principal classes name for running with
     IBM JDK on 64 bits Windows. (Yu Gao via eyang)
 
@@ -295,6 +303,12 @@ Trunk (Unreleased)
     HADOOP-9121. InodeTree.java has redundant check for vName while
     throwing exception. (Arup Malakar via suresh)
 
+    HADOOP-9131. Turn off TestLocalFileSystem#testListStatusWithColons on
+    Windows. (Chris Nauroth via suresh)
+
+    HADOOP-8957 AbstractFileSystem#IsValidName should be overridden for
+    embedded file systems like ViewFs (Chris Nauroth via Sanjay Radia)
+
   OPTIMIZATIONS
 
     HADOOP-7761. Improve the performance of raw comparisons. (todd)
 
@@ -395,6 +409,17 @@ Release 2.0.3-alpha - Unreleased
     HADOOP-9042. Add a test for umask in FileSystemContractBaseTest.
     (Colin McCabe via eli)
 
+    HADOOP-9127. Update documentation for ZooKeeper Failover Controller.
+    (Daisuke Kobayashi via atm)
+
+    HADOOP-9004. Allow security unit tests to use external KDC. (Stephen Chu
+    via suresh)
+
+    HADOOP-9147. Add missing fields to FIleStatus.toString.
+    (Jonathan Allen via suresh)
+
+    HADOOP-8427. Convert Forrest docs to APT, incremental. (adi2 via tucu)
+
   OPTIMIZATIONS
 
     HADOOP-8866. SampleQuantiles#query is O(N^2) instead of O(N). (Andrew Wang
 
@@ -473,6 +498,24 @@ Release 2.0.3-alpha - Unreleased
     HADOOP-9070. Kerberos SASL server cannot find kerberos key. (daryn via
     atm)
 
+    HADOOP-6762. Exception while doing RPC I/O closes channel
+    (Sam Rash and todd via todd)
+
+    HADOOP-9126. FormatZK and ZKFC startup can fail due to zkclient connection
+    establishment delay. (Rakesh R and todd via todd)
+
+    HADOOP-9113. o.a.h.fs.TestDelegationTokenRenewer is failing intermittently.
+    (Karthik Kambatla via eli)
+
+    HADOOP-9135. JniBasedUnixGroupsMappingWithFallback should log at debug
+    rather than info during fallback. (Colin Patrick McCabe via todd)
+
+    HADOOP-9152. HDFS can report negative DFS Used on clusters with very small
+    amounts of data. (Brock Noland via atm)
+
+    HADOOP-9153. Support createNonRecursive in ViewFileSystem.
+    (Sandy Ryza via tomwhite)
+
 Release 2.0.2-alpha - 2012-09-07
 
   INCOMPATIBLE CHANGES
 
@@ -1184,6 +1227,8 @@ Release 0.23.6 - UNRELEASED
     HADOOP-9038. unit-tests for AllocatorPerContext.PathIterator
     (Ivan A. Veselovsky via bobby)
 
+    HADOOP-9105. FsShell -moveFromLocal erroneously fails (daryn via bobby)
+
 Release 0.23.5 - UNRELEASED
 
   INCOMPATIBLE CHANGES

Propchange: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/CHANGES.txt
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt:r1419191-1426018

Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml?rev=1432246&r1=1432245&r2=1432246&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml Fri Jan 11 19:40:23 2013
@@ -260,7 +260,7 @@
-
+
@@ -272,7 +272,7 @@
-
+

Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/pom.xml
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/pom.xml?rev=1432246&r1=1432245&r2=1432246&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/pom.xml (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/pom.xml Fri Jan 11 19:40:23 2013
@@ -378,9 +378,9 @@
                   src/main/proto/HAServiceProtocol.proto
                   src/main/proto/IpcConnectionContext.proto
                   src/main/proto/ProtocolInfo.proto
-                  src/main/proto/RpcPayloadHeader.proto
+                  src/main/proto/RpcHeader.proto
                   src/main/proto/ZKFCProtocol.proto
-                  src/main/proto/hadoop_rpc.proto
+                  src/main/proto/ProtobufRpcEngine.proto

Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/bin/hadoop
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/bin/hadoop?rev=1432246&r1=1432245&r2=1432246&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/bin/hadoop (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/bin/hadoop Fri Jan 11 19:40:23 2013
@@ -31,6 +31,7 @@ function print_usage(){
   echo "  fs                   run a generic filesystem user client"
   echo "  version              print the version"
   echo "  jar <jar>            run a jar file"
+  echo "  checknative [-a|-h]  check native hadoop and compression libraries availability"
   echo "  distcp <srcurl> <desturl> copy file or directories recursively"
   echo "  archive -archiveName NAME -p <parent path> <src>* <dest> create a hadoop archive"
   echo "  classpath            prints the class path needed to get the"
@@ -100,6 +101,8 @@ case $COMMAND in
       CLASS=org.apache.hadoop.util.VersionInfo
     elif [ "$COMMAND" = "jar" ] ; then
       CLASS=org.apache.hadoop.util.RunJar
+    elif [ "$COMMAND" = "checknative" ] ; then
+      CLASS=org.apache.hadoop.util.NativeLibraryChecker
     elif [ "$COMMAND" = "distcp" ] ; then
      CLASS=org.apache.hadoop.tools.DistCp
      CLASSPATH=${CLASSPATH}:${TOOL_PATH}
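A note on the new "checknative" command above: it dispatches to
org.apache.hadoop.util.NativeLibraryChecker, which this merge copies
unchanged from trunk (HADOOP-9162). The sketch below is not that class;
it is only a hypothetical illustration of what such an entry point can
look like, assuming the existing NativeCodeLoader API:

    // Hypothetical sketch of a native-library check entry point; the real
    // NativeLibraryChecker is copied from trunk in this merge.
    package org.apache.hadoop.example;          // invented package

    import org.apache.hadoop.util.NativeCodeLoader;

    public class CheckNativeSketch {
      public static void main(String[] args) {
        boolean loaded = NativeCodeLoader.isNativeCodeLoaded();
        System.out.println("hadoop native library loaded: " + loaded);
        if (loaded) {
          // reports whether libhadoop was built with snappy support
          System.out.println("snappy supported: "
              + NativeCodeLoader.buildSupportsSnappy());
        }
        // non-zero exit lets scripts test for missing native code
        System.exit(loaded ? 0 : 1);
      }
    }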
Propchange: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/docs/
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/docs:r1419191-1426018

Propchange: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java:r1419191-1426018

Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java?rev=1432246&r1=1432245&r2=1432246&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java Fri Jan 11 19:40:23 2013
@@ -85,14 +85,20 @@ public abstract class AbstractFileSystem
   }
 
   /**
-   * Prohibits names which contain a ".", "..", ":" or "/" 
+   * Returns true if the specified string is considered valid in the path part
+   * of a URI by this file system. The default implementation enforces the rules
+   * of HDFS, but subclasses may override this method to implement specific
+   * validation rules for specific file systems.
+   *
+   * @param src String source filename to check, path part of the URI
+   * @return boolean true if the specified string is considered valid
    */
-  private static boolean isValidName(String src) {
-    // Check for ".." "." ":" "/"
+  public boolean isValidName(String src) {
+    // Prohibit ".." "." and anything containing ":"
     StringTokenizer tokens = new StringTokenizer(src, Path.SEPARATOR);
     while(tokens.hasMoreTokens()) {
       String element = tokens.nextToken();
-      if (element.equals("target/generated-sources") ||
+      if (element.equals("..") ||
           element.equals(".")  ||
           (element.indexOf(":") >= 0)) {
         return false;

Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DU.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DU.java?rev=1432246&r1=1432245&r2=1432246&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DU.java (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DU.java Fri Jan 11 19:40:23 2013
@@ -136,7 +136,7 @@ public class DU extends Shell {
       }
     }
 
-    return used.longValue();
+    return Math.max(used.longValue(), 0L);
   }
 
   /**
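The one-line DU change above is the HADOOP-9152 fix: the du refresh runs on
a background thread, so a reader can briefly observe an inconsistent value
and "DFS Used" could come out negative on nearly-empty clusters. A minimal
illustration of the clamp (names invented):

    import java.util.concurrent.atomic.AtomicLong;

    class UsageReporter {
      // mirrors DU's counter, rewritten by a background refresh thread
      private final AtomicLong used = new AtomicLong();

      long getUsed() {
        // a refresh in progress can briefly yield a bogus negative value;
        // clamping at zero keeps usage reports sane
        return Math.max(used.longValue(), 0L);
      }
    }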
Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DelegationTokenRenewer.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DelegationTokenRenewer.java?rev=1432246&r1=1432245&r2=1432246&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DelegationTokenRenewer.java (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DelegationTokenRenewer.java Fri Jan 11 19:40:23 2013
@@ -18,6 +18,8 @@
 
 package org.apache.hadoop.fs;
 
+import com.google.common.annotations.VisibleForTesting;
+
 import java.io.IOException;
 import java.lang.ref.WeakReference;
 import java.util.concurrent.DelayQueue;
@@ -147,6 +149,12 @@ public class DelegationTokenRenewer
   /** Queue to maintain the RenewActions to be processed by the {@link #run()} */
   private volatile DelayQueue<RenewAction<?>> queue = new DelayQueue<RenewAction<?>>();
 
+  /** For testing purposes */
+  @VisibleForTesting
+  protected int getRenewQueueLength() {
+    return queue.size();
+  }
+
   /**
    * Create the singleton instance. However, the thread can be started lazily in
    * {@link #addRenewAction(FileSystem)}

Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileStatus.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileStatus.java?rev=1432246&r1=1432245&r2=1432246&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileStatus.java (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileStatus.java Fri Jan 11 19:40:23 2013
@@ -349,9 +349,15 @@ public class FileStatus implements Writa
       sb.append("; replication=" + block_replication);
       sb.append("; blocksize=" + blocksize);
     }
+    sb.append("; modification_time=" + modification_time);
+    sb.append("; access_time=" + access_time);
     sb.append("; owner=" + owner);
     sb.append("; group=" + group);
     sb.append("; permission=" + permission);
+    sb.append("; isSymlink=" + isSymlink());
+    if(isSymlink()) {
+      sb.append("; symlink=" + symlink);
+    }
     sb.append("}");
     return sb.toString();
   }

Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java?rev=1432246&r1=1432245&r2=1432246&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java Fri Jan 11 19:40:23 2013
@@ -166,6 +166,18 @@ public class FilterFileSystem extends Fi
     return fs.create(f, permission,
         overwrite, bufferSize, replication, blockSize, progress);
   }
+
+
+
+  @Override
+  @Deprecated
+  public FSDataOutputStream createNonRecursive(Path f, FsPermission permission,
+      EnumSet<CreateFlag> flags, int bufferSize, short replication, long blockSize,
+      Progressable progress) throws IOException {
+
+    return fs.createNonRecursive(f, permission, flags, bufferSize, replication, blockSize,
+        progress);
+  }
 
   /**
    * Set replication for an existing file.
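FilterFileSystem now forwards the deprecated createNonRecursive call to the
wrapped file system. The contract, unlike create(), is to fail when the
parent directory is missing rather than create it. A hedged usage sketch
(paths illustrative; the local file system is just a convenient target):

    import java.io.IOException;
    import java.util.EnumSet;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.CreateFlag;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.permission.FsPermission;
    import org.apache.hadoop.util.Progressable;

    public class NonRecursiveCreateDemo {
      public static void main(String[] args) throws IOException {
        FileSystem fs = FileSystem.getLocal(new Configuration());
        Path file = new Path("/tmp/no-such-parent/child"); // parent absent
        try {
          fs.createNonRecursive(file, FsPermission.getDefault(),
              EnumSet.of(CreateFlag.CREATE), 4096, (short) 1,
              64L * 1024 * 1024, (Progressable) null).close();
        } catch (IOException expected) {
          // create() would have made the parent; createNonRecursive must not
          System.err.println("failed as expected: " + expected.getMessage());
        }
      }
    }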
Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFs.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFs.java?rev=1432246&r1=1432245&r2=1432246&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFs.java (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFs.java Fri Jan 11 19:40:23 2013
@@ -278,4 +278,9 @@ public abstract class FilterFs extends A
   public List<Token<?>> getDelegationTokens(String renewer) throws IOException {
     return myFs.getDelegationTokens(renewer);
   }
+
+  @Override
+  public boolean isValidName(String src) {
+    return myFs.isValidName(src);
+  }
 }

Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsUrlStreamHandlerFactory.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsUrlStreamHandlerFactory.java?rev=1432246&r1=1432245&r2=1432246&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsUrlStreamHandlerFactory.java (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsUrlStreamHandlerFactory.java Fri Jan 11 19:40:23 2013
@@ -56,6 +56,12 @@ public class FsUrlStreamHandlerFactory i
 
   public FsUrlStreamHandlerFactory(Configuration conf) {
     this.conf = new Configuration(conf);
+    // force init of FileSystem code to avoid HADOOP-9041
+    try {
+      FileSystem.getFileSystemClass("file", conf);
+    } catch (IOException io) {
+      throw new RuntimeException(io);
+    }
     this.handler = new FsUrlStreamHandler(this.conf);
   }
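The FsUrlStreamHandlerFactory hunk above is the HADOOP-9041 fix. Once the
factory is installed JVM-wide, every new URL resolves through it, including
URLs touched while FileSystem itself initializes, which could loop forever;
resolving the "file" scheme eagerly in the constructor breaks the cycle.
A hedged sketch of the registration scenario (URL is illustrative):

    import java.net.URL;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FsUrlStreamHandlerFactory;

    public class UrlFactoryDemo {
      public static void main(String[] args) throws Exception {
        // may be called at most once per JVM
        URL.setURLStreamHandlerFactory(
            new FsUrlStreamHandlerFactory(new Configuration()));
        // from here on, file:// and hdfs:// URLs resolve via Hadoop
        new URL("file:///etc/hosts").openStream().close();
      }
    }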
Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java?rev=1432246&r1=1432245&r2=1432246&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java Fri Jan 11 19:40:23 2013
@@ -30,6 +30,7 @@ import java.io.FileDescriptor;
 import java.net.URI;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import java.util.EnumSet;
 import java.util.StringTokenizer;
 
 import org.apache.hadoop.classification.InterfaceAudience;
@@ -281,6 +282,18 @@ public class RawLocalFileSystem extends
     return new FSDataOutputStream(new BufferedOutputStream(
         new LocalFSFileOutputStream(f, false), bufferSize), statistics);
   }
+
+  @Override
+  @Deprecated
+  public FSDataOutputStream createNonRecursive(Path f, FsPermission permission,
+      EnumSet<CreateFlag> flags, int bufferSize, short replication, long blockSize,
+      Progressable progress) throws IOException {
+    if (exists(f) && !flags.contains(CreateFlag.OVERWRITE)) {
+      throw new IOException("File already exists: "+f);
+    }
+    return new FSDataOutputStream(new BufferedOutputStream(
+        new LocalFSFileOutputStream(f, false), bufferSize), statistics);
+  }
 
   @Override
   public FSDataOutputStream create(Path f, FsPermission permission,

Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/local/RawLocalFs.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/local/RawLocalFs.java?rev=1432246&r1=1432245&r2=1432246&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/local/RawLocalFs.java (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/local/RawLocalFs.java Fri Jan 11 19:40:23 2013
@@ -159,6 +159,14 @@ public class RawLocalFs extends Delegate
     }
   }
 
+  @Override
+  public boolean isValidName(String src) {
+    // Different local file systems have different validation rules. Skip
+    // validation here and just let the OS handle it. This is consistent with
+    // RawLocalFileSystem.
+    return true;
+  }
+
   @Override
   public Path getLinkTarget(Path f) throws IOException {
     /* We should never get here. Valid local links are resolved transparently

Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Command.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Command.java?rev=1432246&r1=1432245&r2=1432246&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Command.java (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Command.java Fri Jan 11 19:40:23 2013
@@ -311,6 +311,7 @@ abstract public class Command extends Co
         if (recursive && item.stat.isDirectory()) {
           recursePath(item);
         }
+        postProcessPath(item);
       } catch (IOException e) {
         displayError(e);
       }
@@ -330,6 +331,15 @@ abstract public class Command extends Co
   }
 
   /**
+   * Hook for commands to implement an operation to be applied on each
+   * path for the command after being processed successfully
+   * @param item a {@link PathData} object
+   * @throws IOException if anything goes wrong...
+   */
+  protected void postProcessPath(PathData item) throws IOException {
+  }
+
+  /**
    * Gets the directory listing for a path and invokes
    * {@link #processPaths(PathData, PathData...)}
    * @param item {@link PathData} for directory to recurse into
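The Command change above adds a postProcessPath hook that runs only after a
path was processed successfully; the MoveCommands hunks below use it so that
-moveFromLocal deletes the source only once the copy succeeded (HADOOP-9105).
A generic, invented-name sketch of that template-method shape:

    import java.io.IOException;
    import java.util.List;

    abstract class ItemProcessor<T> {
      void processItems(List<T> items) {
        for (T item : items) {
          try {
            processItem(item);       // e.g. copy the file to the target
            postProcessItem(item);   // e.g. delete the source afterwards
          } catch (IOException e) {
            System.err.println("error on " + item + ": " + e.getMessage());
          }
        }
      }

      abstract void processItem(T item) throws IOException;

      // no-op by default, mirroring Command#postProcessPath
      void postProcessItem(T item) throws IOException {
      }
    }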
Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/MoveCommands.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/MoveCommands.java?rev=1432246&r1=1432245&r2=1432246&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/MoveCommands.java (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/MoveCommands.java Fri Jan 11 19:40:23 2013
@@ -24,6 +24,7 @@ import java.util.LinkedList;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.fs.PathIOException;
+import org.apache.hadoop.fs.PathExistsException;
 import org.apache.hadoop.fs.shell.CopyCommands.CopyFromLocal;
 
 /** Various commands for moving files */
@@ -49,7 +50,21 @@ class MoveCommands {
     @Override
     protected void processPath(PathData src, PathData target) throws IOException {
-      target.fs.moveFromLocalFile(src.path, target.path);
+      // unlike copy, don't merge existing dirs during move
+      if (target.exists && target.stat.isDirectory()) {
+        throw new PathExistsException(target.toString());
+      }
+      super.processPath(src, target);
+    }
+
+    @Override
+    protected void postProcessPath(PathData src) throws IOException {
+      if (!src.fs.delete(src.path, false)) {
+        // we have no way to know the actual error...
+        PathIOException e = new PathIOException(src.toString());
+        e.setOperation("remove");
+        throw e;
+      }
     }
   }
 
@@ -95,4 +110,4 @@ class MoveCommands {
       }
     }
   }
-}
\ No newline at end of file
+}

Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFileSystem.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFileSystem.java?rev=1432246&r1=1432245&r2=1432246&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFileSystem.java (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFileSystem.java Fri Jan 11 19:40:23 2013
@@ -19,11 +19,14 @@ package org.apache.hadoop.fs.viewfs;
 import java.io.FileNotFoundException;
 import java.io.IOException;
 import java.net.URI;
+import java.util.EnumSet;
+
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.BlockLocation;
 import org.apache.hadoop.fs.ContentSummary;
+import org.apache.hadoop.fs.CreateFlag;
 import org.apache.hadoop.fs.FSDataInputStream;
 import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.FileChecksum;
@@ -171,6 +174,16 @@ class ChRootedFileSystem extends FilterF
     return super.create(fullPath(f), permission, overwrite, bufferSize,
         replication, blockSize, progress);
   }
+
+  @Override
+  @Deprecated
+  public FSDataOutputStream createNonRecursive(Path f, FsPermission permission,
+      EnumSet<CreateFlag> flags, int bufferSize, short replication, long blockSize,
+      Progressable progress) throws IOException {
+
+    return super.createNonRecursive(fullPath(f), permission, flags, bufferSize, replication, blockSize,
+        progress);
+  }
 
   @Override
   public boolean delete(final Path f, final boolean recursive)

Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFs.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFs.java?rev=1432246&r1=1432245&r2=1432246&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFs.java (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFs.java Fri Jan 11 19:40:23 2013
@@ -83,7 +83,12 @@ class ChRootedFs extends AbstractFileSys
     return new Path((chRootPathPart.isRoot() ? "" : chRootPathPartString)
         + path.toUri().getPath());
   }
-
+
+  @Override
+  public boolean isValidName(String src) {
+    return myFs.isValidName(fullPath(new Path(src)).toUri().toString());
+  }
+
   public ChRootedFs(final AbstractFileSystem fs, final Path theRoot)
       throws URISyntaxException {
     super(fs.getUri(), fs.getUri().getScheme(),
@@ -103,7 +108,7 @@ class ChRootedFs extends AbstractFileSys
     // scheme:/// and scheme://authority/
     myUri = new URI(myFs.getUri().toString() +
         (myFs.getUri().getAuthority() == null ?
"" : Path.SEPARATOR) + - chRootPathPart.toString().substring(1)); + chRootPathPart.toUri().getPath().substring(1)); super.checkPath(theRoot); } Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java?rev=1432246&r1=1432245&r2=1432246&view=diff ============================================================================== --- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java (original) +++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java Fri Jan 11 19:40:23 2013 @@ -24,6 +24,7 @@ import java.io.IOException; import java.net.URI; import java.net.URISyntaxException; import java.util.Arrays; +import java.util.EnumSet; import java.util.HashSet; import java.util.List; import java.util.Set; @@ -35,6 +36,7 @@ import org.apache.hadoop.classification. import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.BlockLocation; import org.apache.hadoop.fs.ContentSummary; +import org.apache.hadoop.fs.CreateFlag; import org.apache.hadoop.fs.FSDataInputStream; import org.apache.hadoop.fs.FSDataOutputStream; import org.apache.hadoop.fs.FileAlreadyExistsException; @@ -62,6 +64,9 @@ import org.apache.hadoop.util.Time; @InterfaceAudience.Public @InterfaceStability.Evolving /*Evolving for a release,to be changed to Stable */ public class ViewFileSystem extends FileSystem { + + private static final Path ROOT_PATH = new Path(Path.SEPARATOR); + static AccessControlException readOnlyMountTable(final String operation, final String p) { return new AccessControlException( @@ -97,23 +102,6 @@ public class ViewFileSystem extends File Path homeDir = null; /** - * Prohibits names which contain a ".", "..", ":" or "/" - */ - private static boolean isValidName(final String src) { - // Check for ".." "." ":" "/" - final StringTokenizer tokens = new StringTokenizer(src, Path.SEPARATOR); - while(tokens.hasMoreTokens()) { - String element = tokens.nextToken(); - if (element.equals("..") || - element.equals(".") || - (element.indexOf(":") >= 0)) { - return false; - } - } - return true; - } - - /** * Make the path Absolute and get the path-part of a pathname. * Checks that URI matches this file system * and that the path-part is a valid name. 
Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java?rev=1432246&r1=1432245&r2=1432246&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java Fri Jan 11 19:40:23 2013
@@ -24,6 +24,7 @@ import java.io.IOException;
 import java.net.URI;
 import java.net.URISyntaxException;
 import java.util.Arrays;
+import java.util.EnumSet;
 import java.util.HashSet;
 import java.util.List;
 import java.util.Set;
@@ -35,6 +36,7 @@ import org.apache.hadoop.classification.
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.BlockLocation;
 import org.apache.hadoop.fs.ContentSummary;
+import org.apache.hadoop.fs.CreateFlag;
 import org.apache.hadoop.fs.FSDataInputStream;
 import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.FileAlreadyExistsException;
@@ -62,6 +64,9 @@ import org.apache.hadoop.util.Time;
 @InterfaceAudience.Public
 @InterfaceStability.Evolving /*Evolving for a release,to be changed to Stable */
 public class ViewFileSystem extends FileSystem {
+
+  private static final Path ROOT_PATH = new Path(Path.SEPARATOR);
+
   static AccessControlException readOnlyMountTable(final String operation,
       final String p) {
     return new AccessControlException(
@@ -97,23 +102,6 @@ public class ViewFileSystem extends File
   Path homeDir = null;
 
   /**
-   * Prohibits names which contain a ".", "..", ":" or "/" 
-   */
-  private static boolean isValidName(final String src) {
-    // Check for ".." "." ":" "/"
-    final StringTokenizer tokens = new StringTokenizer(src, Path.SEPARATOR);
-    while(tokens.hasMoreTokens()) {
-      String element = tokens.nextToken();
-      if (element.equals("..") ||
-          element.equals(".") ||
-          (element.indexOf(":") >= 0)) {
-        return false;
-      }
-    }
-    return true;
-  }
-
-  /**
    * Make the path Absolute and get the path-part of a pathname.
    * Checks that URI matches this file system
    * and that the path-part is a valid name.
@@ -124,10 +112,6 @@ public class ViewFileSystem extends File
   private String getUriPath(final Path p) {
     checkPath(p);
     String s = makeAbsolute(p).toUri().getPath();
-    if (!isValidName(s)) {
-      throw new InvalidPathException("Path part " + s + " from URI" + p
-          + " is not a valid filename.");
-    }
     return s;
   }
 
@@ -283,6 +267,21 @@ public class ViewFileSystem extends File
   }
 
   @Override
+  public FSDataOutputStream createNonRecursive(Path f, FsPermission permission,
+      EnumSet<CreateFlag> flags, int bufferSize, short replication, long blockSize,
+      Progressable progress) throws IOException {
+    InodeTree.ResolveResult<FileSystem> res;
+    try {
+      res = fsState.resolve(getUriPath(f), false);
+    } catch (FileNotFoundException e) {
+        throw readOnlyMountTable("create", f);
+    }
+    assert(res.remainingPath != null);
+    return res.targetFileSystem.createNonRecursive(res.remainingPath, permission,
+         flags, bufferSize, replication, blockSize, progress);
+  }
+
+  @Override
   public FSDataOutputStream create(final Path f, final FsPermission permission,
       final boolean overwrite, final int bufferSize, final short replication,
       final long blockSize, final Progressable progress) throws IOException {
@@ -672,7 +671,7 @@ public class ViewFileSystem extends File
           PERMISSION_RRR, ugi.getUserName(), ugi.getGroupNames()[0],
           new Path(theInternalDir.fullPath).makeQualified(
-              myUri, null));
+              myUri, ROOT_PATH));
     }
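The new ViewFileSystem#createNonRecursive above follows the same
resolve-then-delegate shape as the other mount-table operations: resolve the
path against the mount table, then forward the call to the target file
system with the remaining path. A hedged sketch of that shape with invented
stand-ins for InodeTree/ResolveResult:

    import java.io.FileNotFoundException;
    import java.io.IOException;

    class Resolution<FS> {
      final FS targetFileSystem;
      final String remainingPath;
      Resolution(FS fs, String rest) {
        targetFileSystem = fs;
        remainingPath = rest;
      }
    }

    interface MountTable<FS> {
      // throws FileNotFoundException when no mount point matches
      Resolution<FS> resolve(String path) throws FileNotFoundException;
    }

    class ViewDelegator<FS> {
      private final MountTable<FS> mounts;
      ViewDelegator(MountTable<FS> mounts) { this.mounts = mounts; }

      Resolution<FS> route(String path) throws IOException {
        try {
          return mounts.resolve(path);   // strips the mount prefix
        } catch (FileNotFoundException e) {
          // an unresolvable path surfaces as a read-only mount table error
          throw new IOException("read-only mount table: " + path, e);
        }
      }
    }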
Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java?rev=1432246&r1=1432245&r2=1432246&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java Fri Jan 11 19:40:23 2013
@@ -597,6 +597,12 @@ public class ViewFs extends AbstractFile
     return result;
   }
 
+  @Override
+  public boolean isValidName(String src) {
+    // Prefix validated at mount time and rest of path validated by mount target.
+    return true;
+  }
+
   /*

Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java?rev=1432246&r1=1432245&r2=1432246&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java Fri Jan 11 19:40:23 2013
@@ -21,6 +21,8 @@ package org.apache.hadoop.ha;
 import java.io.IOException;
 import java.util.Arrays;
 import java.util.List;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.TimeUnit;
 import java.util.concurrent.locks.Lock;
 import java.util.concurrent.locks.ReentrantLock;
 
@@ -45,6 +47,7 @@ import org.apache.zookeeper.KeeperExcept
 
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Preconditions;
+import com.google.common.collect.Lists;
 
 /**
  *
@@ -205,7 +208,7 @@ public class ActiveStandbyElector implem
       int zookeeperSessionTimeout, String parentZnodeName, List<ACL> acl,
       List<ZKAuthInfo> authInfo,
       ActiveStandbyElectorCallback app) throws IOException,
-      HadoopIllegalArgumentException {
+      HadoopIllegalArgumentException, KeeperException {
     if (app == null || acl == null || parentZnodeName == null
         || zookeeperHostPorts == null || zookeeperSessionTimeout <= 0) {
       throw new HadoopIllegalArgumentException("Invalid argument");
@@ -602,10 +605,24 @@ public class ActiveStandbyElector implem
    *
    * @return new zookeeper client instance
    * @throws IOException
+   * @throws KeeperException zookeeper connectionloss exception
    */
-  protected synchronized ZooKeeper getNewZooKeeper() throws IOException {
-    ZooKeeper zk = new ZooKeeper(zkHostPort, zkSessionTimeout, null);
-    zk.register(new WatcherWithClientRef(zk));
+  protected synchronized ZooKeeper getNewZooKeeper() throws IOException,
+      KeeperException {
+
+    // Unfortunately, the ZooKeeper constructor connects to ZooKeeper and
+    // may trigger the Connected event immediately. So, if we register the
+    // watcher after constructing ZooKeeper, we may miss that event. Instead,
+    // we construct the watcher first, and have it queue any events it receives
+    // before we can set its ZooKeeper reference.
+    WatcherWithClientRef watcher = new WatcherWithClientRef();
+    ZooKeeper zk = new ZooKeeper(zkHostPort, zkSessionTimeout, watcher);
+    watcher.setZooKeeperRef(zk);
+
+    // Wait for the asynchronous success/failure. This may throw an exception
+    // if we don't connect within the session timeout.
+    watcher.waitForZKConnectionEvent(zkSessionTimeout);
+
     for (ZKAuthInfo auth : zkAuthInfo) {
       zk.addAuthInfo(auth.getScheme(), auth.getAuth());
     }
@@ -710,13 +727,16 @@ public class ActiveStandbyElector implem
       } catch(IOException e) {
         LOG.warn(e);
         sleepFor(5000);
+      } catch(KeeperException e) {
+        LOG.warn(e);
+        sleepFor(5000);
       }
       ++connectionRetryCount;
     }
     return success;
   }
 
-  private void createConnection() throws IOException {
+  private void createConnection() throws IOException, KeeperException {
     if (zkClient != null) {
       try {
         zkClient.close();
@@ -973,14 +993,76 @@ public class ActiveStandbyElector implem
    * events.
   */
  private final class WatcherWithClientRef implements Watcher {
-    private final ZooKeeper zk;
+    private ZooKeeper zk;
+
+    /**
+     * Latch fired whenever any event arrives. This is used in order
+     * to wait for the Connected event when the client is first created.
+     */
+    private CountDownLatch hasReceivedEvent = new CountDownLatch(1);
+
+    /**
+     * If any events arrive before the reference to ZooKeeper is set,
+     * they get queued up and later forwarded when the reference is
+     * available.
+     */
+    private final List<WatchedEvent> queuedEvents = Lists.newLinkedList();
+
+    private WatcherWithClientRef() {
+    }
 
     private WatcherWithClientRef(ZooKeeper zk) {
       this.zk = zk;
     }
+
+    /**
+     * Waits for the next event from ZooKeeper to arrive.
+     *
+     * @param connectionTimeoutMs zookeeper connection timeout in milliseconds
+     * @throws KeeperException if the connection attempt times out. This will
+     * be a ZooKeeper ConnectionLoss exception code.
+     * @throws IOException if interrupted while connecting to ZooKeeper
+     */
+    private void waitForZKConnectionEvent(int connectionTimeoutMs)
+        throws KeeperException, IOException {
+      try {
+        if (!hasReceivedEvent.await(connectionTimeoutMs, TimeUnit.MILLISECONDS)) {
+          LOG.error("Connection timed out: couldn't connect to ZooKeeper in "
+              + connectionTimeoutMs + " milliseconds");
+          synchronized (this) {
+            zk.close();
+          }
+          throw KeeperException.create(Code.CONNECTIONLOSS);
+        }
+      } catch (InterruptedException e) {
+        Thread.currentThread().interrupt();
+        throw new IOException(
+            "Interrupted when connecting to zookeeper server", e);
+      }
+    }
+
+    private synchronized void setZooKeeperRef(ZooKeeper zk) {
+      Preconditions.checkState(this.zk == null,
+          "zk already set -- must be set exactly once");
+      this.zk = zk;
+
+      for (WatchedEvent e : queuedEvents) {
+        forwardEvent(e);
+      }
+      queuedEvents.clear();
+    }
 
     @Override
-    public void process(WatchedEvent event) {
+    public synchronized void process(WatchedEvent event) {
+      if (zk != null) {
+        forwardEvent(event);
+      } else {
+        queuedEvents.add(event);
+      }
+    }
+
+    private void forwardEvent(WatchedEvent event) {
+      hasReceivedEvent.countDown();
       try {
         ActiveStandbyElector.this.processWatchEvent(
             zk, event);
@@ -1024,5 +1106,4 @@ public class ActiveStandbyElector implem
         ((appData == null) ? "null" : StringUtils.byteToHexString(appData)) + 
         " cb=" + appClient;
   }
-
 }
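The WatcherWithClientRef rework above addresses a construction race: the
ZooKeeper constructor can fire the Connected event before the caller holds
the client reference, so events are queued until the reference is injected,
and a latch lets the creator block for the first event. A self-contained,
invented-name sketch of the same pattern:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.TimeUnit;

    class QueueingCallback<C, E> {
      private C client;                              // injected exactly once
      private final List<E> queued = new ArrayList<E>();
      private final CountDownLatch firstEvent = new CountDownLatch(1);

      synchronized void onEvent(E event) {
        if (client != null) {
          forward(event);
        } else {
          queued.add(event);     // too early: hold until the client is known
        }
      }

      synchronized void setClient(C client) {
        this.client = client;
        for (E e : queued) {
          forward(e);            // replay anything that raced the setter
        }
        queued.clear();
      }

      boolean awaitFirstEvent(long timeoutMs) throws InterruptedException {
        return firstEvent.await(timeoutMs, TimeUnit.MILLISECONDS);
      }

      private void forward(E event) {
        firstEvent.countDown();
        // hand (client, event) to the real handler here
      }
    }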
Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ZKFailoverController.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ZKFailoverController.java?rev=1432246&r1=1432245&r2=1432246&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ZKFailoverController.java (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ZKFailoverController.java Fri Jan 11 19:40:23 2013
@@ -180,7 +180,15 @@ public abstract class ZKFailoverControll
 
   private int doRun(String[] args)
       throws HadoopIllegalArgumentException, IOException, InterruptedException {
-    initZK();
+    try {
+      initZK();
+    } catch (KeeperException ke) {
+      LOG.fatal("Unable to start failover controller. Unable to connect "
+          + "to ZooKeeper quorum at " + zkQuorum + ". Please check the "
+          + "configured value for " + ZK_QUORUM_KEY + " and ensure that "
+          + "ZooKeeper is running.");
+      return ERR_CODE_NO_ZK;
+    }
     if (args.length > 0) {
       if ("-formatZK".equals(args[0])) {
         boolean force = false;
@@ -199,24 +207,12 @@ public abstract class ZKFailoverControll
         badArg(args[0]);
       }
     }
-
-    try {
-      if (!elector.parentZNodeExists()) {
-        LOG.fatal("Unable to start failover controller. " +
-            "Parent znode does not exist.\n" +
-            "Run with -formatZK flag to initialize ZooKeeper.");
-        return ERR_CODE_NO_PARENT_ZNODE;
-      }
-    } catch (IOException ioe) {
-      if (ioe.getCause() instanceof KeeperException.ConnectionLossException) {
-        LOG.fatal("Unable to start failover controller. Unable to connect " +
-            "to ZooKeeper quorum at " + zkQuorum + ". Please check the " +
-            "configured value for " + ZK_QUORUM_KEY + " and ensure that " +
-            "ZooKeeper is running.");
-        return ERR_CODE_NO_ZK;
-      } else {
-        throw ioe;
-      }
+
+    if (!elector.parentZNodeExists()) {
+      LOG.fatal("Unable to start failover controller. " +
+          "Parent znode does not exist.\n" +
+          "Run with -formatZK flag to initialize ZooKeeper.");
+      return ERR_CODE_NO_PARENT_ZNODE;
     }
 
     try {
@@ -310,7 +306,8 @@ public abstract class ZKFailoverControll
   }
 
 
-  private void initZK() throws HadoopIllegalArgumentException, IOException {
+  private void initZK() throws HadoopIllegalArgumentException, IOException,
+      KeeperException {
     zkQuorum = conf.get(ZK_QUORUM_KEY);
     int zkTimeout = conf.getInt(ZK_SESSION_TIMEOUT_KEY,
         ZK_SESSION_TIMEOUT_DEFAULT);

Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java?rev=1432246&r1=1432245&r2=1432246&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java Fri Jan 11 19:40:23 2013
@@ -38,6 +38,11 @@ import java.util.Iterator;
 import java.util.Map.Entry;
 import java.util.Random;
 import java.util.Set;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+import java.util.concurrent.RejectedExecutionException;
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.atomic.AtomicLong;
@@ -58,11 +63,10 @@ import org.apache.hadoop.io.WritableUtil
 import org.apache.hadoop.io.retry.RetryPolicies;
 import org.apache.hadoop.io.retry.RetryPolicy;
 import org.apache.hadoop.io.retry.RetryPolicy.RetryAction;
-import org.apache.hadoop.ipc.protobuf.IpcConnectionContextProtos.IpcConnectionContextProto;
-import org.apache.hadoop.ipc.protobuf.RpcPayloadHeaderProtos.RpcPayloadHeaderProto;
-import org.apache.hadoop.ipc.protobuf.RpcPayloadHeaderProtos.RpcPayloadOperationProto;
-import org.apache.hadoop.ipc.protobuf.RpcPayloadHeaderProtos.RpcResponseHeaderProto;
-import org.apache.hadoop.ipc.protobuf.RpcPayloadHeaderProtos.RpcStatusProto;
+import org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.RpcRequestHeaderProto;
+import org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.RpcRequestHeaderProto.OperationProto;
+import org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.RpcResponseHeaderProto;
+import org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.RpcResponseHeaderProto.RpcStatusProto;
 import org.apache.hadoop.net.NetUtils;
 import org.apache.hadoop.security.KerberosInfo;
 import org.apache.hadoop.security.SaslRpcClient;
@@ -78,6 +82,8 @@ import org.apache.hadoop.util.ProtoUtil;
 import org.apache.hadoop.util.ReflectionUtils;
 import org.apache.hadoop.util.Time;
 
+import com.google.common.util.concurrent.ThreadFactoryBuilder;
+
 /** A client for an IPC service.  IPC calls take a single {@link Writable} as a
  * parameter, and return a {@link Writable} as their value.  A service runs on
  * a port and is defined by a parameter class and a value class.
@@ -104,6 +110,19 @@ public class Client {
   final static int PING_CALL_ID = -1;
 
   /**
+   * Executor on which IPC calls' parameters are sent. Deferring
+   * the sending of parameters to a separate thread isolates them
+   * from thread interruptions in the calling code.
+   */
+  private static final ExecutorService SEND_PARAMS_EXECUTOR = 
+    Executors.newCachedThreadPool(
+        new ThreadFactoryBuilder()
+        .setDaemon(true)
+        .setNameFormat("IPC Parameter Sending Thread #%d")
+        .build());
+
+
+  /**
    * set the ping interval value in configuration
    * 
    * @param conf Configuration
@@ -171,7 +190,7 @@ public class Client {
    */
   private class Call {
     final int id;               // call id
-    final Writable rpcRequest;  // the serialized rpc request - RpcPayload
+    final Writable rpcRequest;  // the serialized rpc request
     Writable rpcResponse;       // null if rpc has error
     IOException error;          // exception, null if success
     final RPC.RpcKind rpcKind;      // Rpc EngineKind
@@ -245,6 +264,8 @@ public class Client {
     private AtomicLong lastActivity = new AtomicLong();// last I/O activity time
     private AtomicBoolean shouldCloseConnection = new AtomicBoolean();  // indicate if the connection is closed
     private IOException closeException; // close reason
+
+    private final Object sendRpcRequestLock = new Object();
 
     public Connection(ConnectionId remoteId) throws IOException {
       this.remoteId = remoteId;
@@ -746,7 +767,7 @@ public class Client {
           remoteId.getTicket(),
           authMethod).writeTo(buf);
 
-      // Write out the payload length
+      // Write out the packet length
       int bufLen = buf.getLength();
 
       out.writeInt(bufLen);
@@ -810,7 +831,7 @@ public class Client {
       try {
         while (waitForWork()) {//wait here for work - read or close connection
-          receiveResponse();
+          receiveRpcResponse();
         }
       } catch (Throwable t) {
         // This truly is unexpected, since we catch IOException in receiveResponse
@@ -827,52 +848,86 @@ public class Client {
           + connections.size());
     }
 
-    /** Initiates a call by sending the parameter to the remote server.
+    /** Initiates a rpc call by sending the rpc request to the remote server.
      * Note: this is not called from the Connection thread, but by other
      * threads.
+     * @param call - the rpc request
      */
-    public void sendParam(Call call) {
+    public void sendRpcRequest(final Call call)
+        throws InterruptedException, IOException {
       if (shouldCloseConnection.get()) {
         return;
      }
-      DataOutputBuffer d=null;
-      try {
-        synchronized (this.out) {
-          if (LOG.isDebugEnabled())
-            LOG.debug(getName() + " sending #" + call.id);
 
+      // Serialize the call to be sent. This is done from the actual
+      // caller thread, rather than the SEND_PARAMS_EXECUTOR thread,
+      // so that if the serialization throws an error, it is reported
+      // properly. This also parallelizes the serialization.
+      //
+      // Format of a call on the wire:
+      // 0) Length of rest below (1 + 2)
+      // 1) RpcRequestHeader  - is serialized Delimited hence contains length
+      // 2) RpcRequest
+      //
+      // Items '1' and '2' are prepared here.
+      final DataOutputBuffer d = new DataOutputBuffer();
+      RpcRequestHeaderProto header = ProtoUtil.makeRpcRequestHeader(
+          call.rpcKind, OperationProto.RPC_FINAL_PACKET, call.id);
+      header.writeDelimitedTo(d);
+      call.rpcRequest.write(d);
+
+      synchronized (sendRpcRequestLock) {
+        Future<?> senderFuture = SEND_PARAMS_EXECUTOR.submit(new Runnable() {
+          @Override
+          public void run() {
+            try {
+              synchronized (Connection.this.out) {
+                if (shouldCloseConnection.get()) {
+                  return;
+                }
+
+                if (LOG.isDebugEnabled())
+                  LOG.debug(getName() + " sending #" + call.id);
+
+                byte[] data = d.getData();
+                int totalLength = d.getLength();
+                out.writeInt(totalLength); // Total Length
+                out.write(data, 0, totalLength);// RpcRequestHeader + RpcRequest
+                out.flush();
+              }
+            } catch (IOException e) {
+              // exception at this point would leave the connection in an
+              // unrecoverable state (eg half a call left on the wire).
+              // So, close the connection, killing any outstanding calls
+              markClosed(e);
+            } finally {
+              //the buffer is just an in-memory buffer, but it is still polite to
+              // close early
+              IOUtils.closeStream(d);
+            }
+          }
+        });
+
+        try {
+          senderFuture.get();
+        } catch (ExecutionException e) {
+          Throwable cause = e.getCause();
-
-        // Serializing the data to be written.
-        // Format:
-        // 0) Length of rest below (1 + 2)
-        // 1) PayloadHeader  - is serialized Delimited hence contains length
-        // 2) the Payload - the RpcRequest
-        //
-        d = new DataOutputBuffer();
-        RpcPayloadHeaderProto header = ProtoUtil.makeRpcPayloadHeader(
-            call.rpcKind, RpcPayloadOperationProto.RPC_FINAL_PAYLOAD, call.id);
-        header.writeDelimitedTo(d);
-        call.rpcRequest.write(d);
-        byte[] data = d.getData();
-
-        int totalLength = d.getLength();
-        out.writeInt(totalLength); // Total Length
-        out.write(data, 0, totalLength);//PayloadHeader + RpcRequest
-        out.flush();
+
+          // cause should only be a RuntimeException as the Runnable above
+          // catches IOException
+          if (cause instanceof RuntimeException) {
+            throw (RuntimeException) cause;
+          } else {
+            throw new RuntimeException("unexpected checked exception", cause);
+          }
         }
-      } catch(IOException e) {
-        markClosed(e);
-      } finally {
-        //the buffer is just an in-memory buffer, but it is still polite to
-        // close early
-        IOUtils.closeStream(d);
-      }
-    }
+      }
+    }
 
     /* Receive a response.
      * Because only one receiver, so no synchronization on in.
 
     /* Receive a response.
      * Because only one receiver, so no synchronization on in.
      */
-    private void receiveResponse() {
+    private void receiveRpcResponse() {
       if (shouldCloseConnection.get()) {
         return;
       }
@@ -1138,7 +1193,16 @@ public class Client {
                   ConnectionId remoteId) throws InterruptedException, IOException {
     Call call = new Call(rpcKind, rpcRequest);
     Connection connection = getConnection(remoteId, call);
-    connection.sendParam(call);                 // send the parameter
+    try {
+      connection.sendRpcRequest(call);                 // send the rpc request
+    } catch (RejectedExecutionException e) {
+      throw new IOException("connection has been closed", e);
+    } catch (InterruptedException e) {
+      Thread.currentThread().interrupt();
+      LOG.warn("interrupted waiting to send rpc request to server", e);
+      throw new IOException(e);
+    }
+
     boolean interrupted = false;
     synchronized (call) {
       while (!call.done) {

Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProtobufRpcEngine.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProtobufRpcEngine.java?rev=1432246&r1=1432245&r2=1432246&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProtobufRpcEngine.java (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProtobufRpcEngine.java Fri Jan 11 19:40:23 2013
@@ -39,7 +39,7 @@ import org.apache.hadoop.io.Writable;
 import org.apache.hadoop.io.retry.RetryPolicy;
 import org.apache.hadoop.ipc.Client.ConnectionId;
 import org.apache.hadoop.ipc.RPC.RpcInvoker;
-import org.apache.hadoop.ipc.protobuf.HadoopRpcProtos.HadoopRpcRequestProto;
+import org.apache.hadoop.ipc.protobuf.ProtobufRpcEngineProtos.RequestProto;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.security.token.SecretManager;
 import org.apache.hadoop.security.token.TokenIdentifier;
@@ -128,10 +128,10 @@ public class ProtobufRpcEngine implement
         .getProtocolVersion(protocol);
   }
 
-  private HadoopRpcRequestProto constructRpcRequest(Method method,
+  private RequestProto constructRpcRequest(Method method,
       Object[] params) throws ServiceException {
-    HadoopRpcRequestProto rpcRequest;
-    HadoopRpcRequestProto.Builder builder = HadoopRpcRequestProto
+    RequestProto rpcRequest;
+    RequestProto.Builder builder = RequestProto
         .newBuilder();
     builder.setMethodName(method.getName());
 
@@ -190,7 +190,7 @@ public class ProtobufRpcEngine implement
         startTime = Time.now();
       }
 
-      HadoopRpcRequestProto rpcRequest = constructRpcRequest(method, args);
+      RequestProto rpcRequest = constructRpcRequest(method, args);
       RpcResponseWritable val = null;
 
       if (LOG.isTraceEnabled()) {
@@ -271,13 +271,13 @@ public class ProtobufRpcEngine implement
    * Writable Wrapper for Protocol Buffer Requests
    */
   private static class RpcRequestWritable implements Writable {
-    HadoopRpcRequestProto message;
+    RequestProto message;
 
     @SuppressWarnings("unused")
     public RpcRequestWritable() {
     }
 
-    RpcRequestWritable(HadoopRpcRequestProto message) {
+    RpcRequestWritable(RequestProto message) {
       this.message = message;
     }
 
@@ -292,7 +292,7 @@ public class ProtobufRpcEngine implement
       int length = ProtoUtil.readRawVarint32(in);
       byte[] bytes = new byte[length];
       in.readFully(bytes);
-      message = HadoopRpcRequestProto.parseFrom(bytes);
+      message = RequestProto.parseFrom(bytes);
     }
 
     @Override
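readFields() above relies on protobuf's length-delimited encoding: a base-128 varint length
followed by that many message bytes. The sketch below hand-rolls the varint read and write
in plain Java to show the format; it illustrates what ProtoUtil.readRawVarint32 is used for
here, but is an illustrative reimplementation, not the Hadoop code.

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.DataInputStream;
    import java.io.IOException;

    public class VarintSketch {
      static void writeRawVarint32(ByteArrayOutputStream out, int value) {
        while ((value & ~0x7F) != 0) {      // more than 7 bits remain:
          out.write((value & 0x7F) | 0x80); // emit 7 bits, set continuation bit
          value >>>= 7;
        }
        out.write(value);                   // final byte, high bit clear
      }

      static int readRawVarint32(DataInputStream in) throws IOException {
        int result = 0;
        for (int shift = 0; shift < 32; shift += 7) {
          int b = in.readUnsignedByte();
          result |= (b & 0x7F) << shift;
          if ((b & 0x80) == 0) {
            return result;                  // continuation bit clear: done
          }
        }
        throw new IOException("malformed varint32");
      }

      public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        writeRawVarint32(buf, 300);         // encodes as 0xAC 0x02
        DataInputStream in =
            new DataInputStream(new ByteArrayInputStream(buf.toByteArray()));
        System.out.println(readRawVarint32(in)); // prints 300
      }
    }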
@@ -426,7 +426,7 @@ public class ProtobufRpcEngine implement
     public Writable call(RPC.Server server, String connectionProtocolName,
         Writable writableRequest, long receiveTime) throws Exception {
       RpcRequestWritable request = (RpcRequestWritable) writableRequest;
-      HadoopRpcRequestProto rpcRequest = request.message;
+      RequestProto rpcRequest = request.message;
       String methodName = rpcRequest.getMethodName();

Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java?rev=1432246&r1=1432245&r2=1432246&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java Fri Jan 11 19:40:23 2013
@@ -80,7 +80,8 @@ import org.apache.hadoop.ipc.RPC.Version
 import org.apache.hadoop.ipc.metrics.RpcDetailedMetrics;
 import org.apache.hadoop.ipc.metrics.RpcMetrics;
 import org.apache.hadoop.ipc.protobuf.IpcConnectionContextProtos.IpcConnectionContextProto;
-import org.apache.hadoop.ipc.protobuf.RpcPayloadHeaderProtos.*;
+import org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.RpcResponseHeaderProto.RpcStatusProto;
+import org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.*;
 import org.apache.hadoop.net.NetUtils;
 import org.apache.hadoop.security.AccessControlException;
 import org.apache.hadoop.security.SaslRpcServer;
@@ -160,7 +161,7 @@ public abstract class Server {
   public static final ByteBuffer HEADER = ByteBuffer.wrap("hrpc".getBytes());
 
   /**
-   * Serialization type for ConnectionContext and RpcPayloadHeader
+   * Serialization type for ConnectionContext and RpcRequestHeader
    */
   public enum IpcSerializationType {
     // Add new serialization type to the end without affecting the enum order
@@ -197,7 +198,7 @@ public abstract class Server {
   // 4 : Introduced SASL security layer
   // 5 : Introduced use of {@link ArrayPrimitiveWritable$Internal}
   //     in ObjectWritable to efficiently transmit arrays of primitives
-  // 6 : Made RPC payload header explicit
+  // 6 : Made RPC Request header explicit
   // 7 : Changed Ipc Connection Header to use Protocol buffers
   // 8 : SASL server always sends a final response
   public static final byte CURRENT_VERSION = 8;
@@ -1637,14 +1638,15 @@ public abstract class Server {
     private void processData(byte[] buf) throws  IOException, InterruptedException {
       DataInputStream dis = new DataInputStream(new ByteArrayInputStream(buf));
-      RpcPayloadHeaderProto header = RpcPayloadHeaderProto.parseDelimitedFrom(dis);
+      RpcRequestHeaderProto header = RpcRequestHeaderProto.parseDelimitedFrom(dis);
 
       if (LOG.isDebugEnabled())
         LOG.debug(" got #" + header.getCallId());
       if (!header.hasRpcOp()) {
-        throw new IOException(" IPC Server: No rpc op in rpcPayloadHeader");
+        throw new IOException(" IPC Server: No rpc op in rpcRequestHeader");
       }
-      if (header.getRpcOp() != RpcPayloadOperationProto.RPC_FINAL_PAYLOAD) {
+      if (header.getRpcOp() !=
+          RpcRequestHeaderProto.OperationProto.RPC_FINAL_PACKET) {
         throw new IOException("IPC Server does not implement operation" +
             header.getRpcOp());
       }
@@ -1652,7 +1654,7 @@ public abstract class Server {
       // (Note it would make more sense to have the handler deserialize but
       // we continue with this original design.
       if (!header.hasRpcKind()) {
-        throw new IOException(" IPC Server: No rpc kind in rpcPayloadHeader");
+        throw new IOException(" IPC Server: No rpc kind in rpcRequestHeader");
      }
      Class<? extends Writable> rpcRequestClass = getRpcRequestWrapper(header.getRpcKind());
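On the server side, processData() above parses a delimited header and then validates it
before dispatch. The following sketch shows that validation flow with a tiny stand-in POJO;
the real RpcRequestHeaderProto is a protobuf-generated class parsed with parseDelimitedFrom,
and any enum value here other than RPC_FINAL_PACKET is a placeholder for the example.

    import java.io.IOException;

    public class HeaderCheckSketch {
      enum OperationProto { RPC_FINAL_PACKET, OTHER_PACKET /* placeholder */ }

      static class Header {
        final int callId;
        final OperationProto rpcOp; // null models header.hasRpcOp() == false
        Header(int callId, OperationProto rpcOp) {
          this.callId = callId;
          this.rpcOp = rpcOp;
        }
      }

      static void validate(Header header) throws IOException {
        if (header.rpcOp == null) {
          throw new IOException("IPC Server: No rpc op in rpcRequestHeader");
        }
        if (header.rpcOp != OperationProto.RPC_FINAL_PACKET) {
          throw new IOException(
              "IPC Server does not implement operation " + header.rpcOp);
        }
      }

      public static void main(String[] args) throws IOException {
        validate(new Header(42, OperationProto.RPC_FINAL_PACKET)); // passes
      }
    }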
Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/JniBasedUnixGroupsMappingWithFallback.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/JniBasedUnixGroupsMappingWithFallback.java?rev=1432246&r1=1432245&r2=1432246&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/JniBasedUnixGroupsMappingWithFallback.java (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/JniBasedUnixGroupsMappingWithFallback.java Fri Jan 11 19:40:23 2013
@@ -37,7 +37,7 @@ public class JniBasedUnixGroupsMappingWi
     if (NativeCodeLoader.isNativeCodeLoaded()) {
       this.impl = new JniBasedUnixGroupsMapping();
     } else {
-      LOG.info("Falling back to shell based");
+      LOG.debug("Falling back to shell based");
       this.impl = new ShellBasedUnixGroupsMapping();
     }
     if (LOG.isDebugEnabled()){

Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java?rev=1432246&r1=1432245&r2=1432246&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java Fri Jan 11 19:40:23 2013
@@ -81,6 +81,7 @@ public class UserGroupInformation {
    */
   private static final float TICKET_RENEW_WINDOW = 0.80f;
   static final String HADOOP_USER_NAME = "HADOOP_USER_NAME";
+  static final String HADOOP_PROXY_USER = "HADOOP_PROXY_USER";
 
   /**
    * UgiMetrics maintains UGI activity statistics
@@ -641,10 +642,18 @@ public class UserGroupInformation {
           newLoginContext(authenticationMethod.getLoginAppName(), subject,
               new HadoopConfiguration());
       login.login();
-      loginUser = new UserGroupInformation(subject);
-      loginUser.setLogin(login);
-      loginUser.setAuthenticationMethod(authenticationMethod);
-      loginUser = new UserGroupInformation(login.getSubject());
+      UserGroupInformation realUser = new UserGroupInformation(subject);
+      realUser.setLogin(login);
+      realUser.setAuthenticationMethod(authenticationMethod);
+      realUser = new UserGroupInformation(login.getSubject());
+      // If the HADOOP_PROXY_USER environment variable or property
+      // is specified, create a proxy user as the logged in user.
+      String proxyUser = System.getenv(HADOOP_PROXY_USER);
+      if (proxyUser == null) {
+        proxyUser = System.getProperty(HADOOP_PROXY_USER);
+      }
+      loginUser = proxyUser == null ? realUser : createProxyUser(proxyUser, realUser);
+
       String fileLocation = System.getenv(HADOOP_TOKEN_FILE_LOCATION);
       if (fileLocation != null) {
         // load the token storage file and put all of the tokens into the
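The UserGroupInformation change above resolves HADOOP_PROXY_USER from the environment first
and falls back to a JVM system property, only proxying when one of the two is set. A
standalone sketch of that lookup order follows; createProxyUser here is a stub returning a
string, not the real UserGroupInformation factory method.

    public class ProxyUserLookupSketch {
      static final String HADOOP_PROXY_USER = "HADOOP_PROXY_USER";

      static String effectiveUser(String realUser) {
        // environment variable wins, then the system property, else no proxy
        String proxyUser = System.getenv(HADOOP_PROXY_USER);
        if (proxyUser == null) {
          proxyUser = System.getProperty(HADOOP_PROXY_USER);
        }
        return proxyUser == null ? realUser : createProxyUser(proxyUser, realUser);
      }

      static String createProxyUser(String proxyUser, String realUser) {
        return proxyUser + " (via " + realUser + ")"; // stub for illustration
      }

      public static void main(String[] args) {
        // e.g. run with -DHADOOP_PROXY_USER=alice to exercise the proxy branch
        System.out.println(effectiveUser("bob"));
      }
    }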
Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ProtoUtil.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ProtoUtil.java?rev=1432246&r1=1432245&r2=1432246&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ProtoUtil.java (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ProtoUtil.java Fri Jan 11 19:40:23 2013
@@ -24,7 +24,7 @@ import java.io.IOException;
 import org.apache.hadoop.ipc.RPC;
 import org.apache.hadoop.ipc.protobuf.IpcConnectionContextProtos.IpcConnectionContextProto;
 import org.apache.hadoop.ipc.protobuf.IpcConnectionContextProtos.UserInformationProto;
-import org.apache.hadoop.ipc.protobuf.RpcPayloadHeaderProtos.*;
+import org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.*;
 import org.apache.hadoop.security.SaslRpcServer.AuthMethod;
 import org.apache.hadoop.security.UserGroupInformation;
 
@@ -157,9 +157,9 @@ public abstract class ProtoUtil {
     return null;
   }
 
-  public static RpcPayloadHeaderProto makeRpcPayloadHeader(RPC.RpcKind rpcKind,
-      RpcPayloadOperationProto operation, int callId) {
-    RpcPayloadHeaderProto.Builder result = RpcPayloadHeaderProto.newBuilder();
+  public static RpcRequestHeaderProto makeRpcRequestHeader(RPC.RpcKind rpcKind,
+      RpcRequestHeaderProto.OperationProto operation, int callId) {
+    RpcRequestHeaderProto.Builder result = RpcRequestHeaderProto.newBuilder();
     result.setRpcKind(convert(rpcKind)).setRpcOp(operation).setCallId(callId);
     return result.build();
   }
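makeRpcRequestHeader() above is a thin wrapper over the protobuf builder chain. The sketch
below reproduces the shape with a hand-written builder; the nested Header and Builder
classes imitate, but are not, the generated RpcRequestHeaderProto, and the enum values are
stand-ins for the example.

    public class HeaderBuilderSketch {
      enum RpcKind { RPC_BUILTIN, RPC_WRITABLE, RPC_PROTOCOL_BUFFER }
      enum Operation { RPC_FINAL_PACKET }

      static final class Header {
        final RpcKind kind;
        final Operation op;
        final int callId;
        private Header(Builder b) { kind = b.kind; op = b.op; callId = b.callId; }
        static Builder newBuilder() { return new Builder(); }

        static final class Builder {
          private RpcKind kind;
          private Operation op;
          private int callId;
          Builder setRpcKind(RpcKind k) { kind = k; return this; }
          Builder setRpcOp(Operation o) { op = o; return this; }
          Builder setCallId(int id) { callId = id; return this; }
          Header build() { return new Header(this); }
        }
      }

      // mirrors the one-liner style of ProtoUtil.makeRpcRequestHeader
      static Header makeRpcRequestHeader(RpcKind kind, Operation op, int callId) {
        return Header.newBuilder()
            .setRpcKind(kind).setRpcOp(op).setCallId(callId).build();
      }

      public static void main(String[] args) {
        Header h = makeRpcRequestHeader(
            RpcKind.RPC_PROTOCOL_BUFFER, Operation.RPC_FINAL_PACKET, 7);
        System.out.println(h.kind + " op=" + h.op + " callId=" + h.callId);
      }
    }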
Modified: hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml?rev=1432246&r1=1432245&r2=1432246&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml (original)
+++ hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml Fri Jan 11 19:40:23 2013
@@ -1090,4 +1090,70 @@
 
+<property>
+  <name>ha.health-monitor.connect-retry-interval.ms</name>
+  <value>1000</value>
+  <description>
+    How often to retry connecting to the service.
+  </description>
+</property>
+
+<property>
+  <name>ha.health-monitor.check-interval.ms</name>
+  <value>1000</value>
+  <description>
+    How often to check the service.
+  </description>
+</property>
+
+<property>
+  <name>ha.health-monitor.sleep-after-disconnect.ms</name>
+  <value>1000</value>
+  <description>
+    How long to sleep after an unexpected RPC error.
+  </description>
+</property>
+
+<property>
+  <name>ha.health-monitor.rpc-timeout.ms</name>
+  <value>45000</value>
+  <description>
+    Timeout for the actual monitorHealth() calls.
+  </description>
+</property>
+
+<property>
+  <name>ha.failover-controller.new-active.rpc-timeout.ms</name>
+  <value>60000</value>
+  <description>
+    Timeout that the FC waits for the new active to become active
+  </description>
+</property>
+
+<property>
+  <name>ha.failover-controller.graceful-fence.rpc-timeout.ms</name>
+  <value>5000</value>
+  <description>
+    Timeout that the FC waits for the old active to go to standby
+  </description>
+</property>
+
+<property>
+  <name>ha.failover-controller.graceful-fence.connection.retries</name>
+  <value>1</value>
+  <description>
+    FC connection retries for graceful fencing
+  </description>
+</property>
+
+<property>
+  <name>ha.failover-controller.cli-check.rpc-timeout.ms</name>
+  <value>20000</value>
+  <description>
+    Timeout that the CLI (manual) FC waits for monitorHealth, getServiceState
+  </description>
+</property>
+
 </configuration>
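For downstream code, these keys land in core-default.xml and can be read through the
ordinary Configuration API. A small sketch follows; the defaults passed to getInt simply
repeat the values added above, and the class name is invented for the example.

    import org.apache.hadoop.conf.Configuration;

    public class HaConfSketch {
      public static void main(String[] args) {
        // new Configuration() loads core-default.xml from the classpath
        Configuration conf = new Configuration();
        int checkInterval =
            conf.getInt("ha.health-monitor.check-interval.ms", 1000);
        int rpcTimeout =
            conf.getInt("ha.health-monitor.rpc-timeout.ms", 45000);
        System.out.println("check every " + checkInterval
            + " ms, monitorHealth timeout " + rpcTimeout + " ms");
      }
    }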