Subject: svn commit: r1333291 [1/3] - in /hadoop/common/branches/HDFS-3042/hadoop-hdfs-project: dev-support/ hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/ hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/ hadoop-hdfs-httpfs...
Date: Thu, 03 May 2012 02:14:26 -0000
To: hdfs-commits@hadoop.apache.org
From: todd@apache.org
Reply-To: hdfs-dev@hadoop.apache.org

Author: todd
Date: Thu May  3 02:14:01 2012
New Revision: 1333291

URL: http://svn.apache.org/viewvc?rev=1333291&view=rev
Log:
Merge trunk into auto-HA branch

Added:
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataInputStream.java
      - copied unchanged from r1333288, hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataInputStream.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataOutputStream.java
      - copied unchanged from r1333288, hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataOutputStream.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CheckpointFaultInjector.java
      - copied unchanged from r1333288, hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CheckpointFaultInjector.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem
      - copied unchanged from r1333288, hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileLengthOnClusterRestart.java
      - copied unchanged from r1333288,
        hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileLengthOnClusterRestart.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestGetImageServlet.java
      - copied unchanged from r1333288, hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestGetImageServlet.java

Removed:
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/dev-support/test-patch.properties

Modified:
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/   (props changed)
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParams.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/TestHttpFSFileSystem.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestCheckUploadContentTypeFilter.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/README.txt
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/pom.xml
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/   (props changed)
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/Hdfs.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/ByteRangeInputStream.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HftpFileSystem.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HsftpFileSystem.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientDatanodeProtocolTranslatorPB.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/GetUserMappingsProtocolClientSideTranslatorPB.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/InterDatanodeProtocolTranslatorPB.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/JournalProtocolTranslatorPB.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/NamenodeProtocolTranslatorPB.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/RefreshAuthorizationPolicyProtocolClientSideTranslatorPB.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/RefreshUserMappingsProtocolClientSideTranslatorPB.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/resources/DatanodeWebHdfsMethods.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/GetImageServlet.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/TransferFsImage.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/JournalInfo.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/native/   (props changed)
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apsupport.m4
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode/   (props changed)
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/   (props changed)
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary/   (props changed)
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs/   (props changed)
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlocksScheduledCounter.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestByteRangeInputStream.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientRetries.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHftpDelegationToken.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHftpFileSystem.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLeaseRecovery2.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReadWhileWriting.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReplaceDatanodeOnFailure.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSafeMode.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestShortCircuitLocalRead.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestWriteRead.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestInterDatanodeProtocol.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NameNodeAdapter.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAllowFormat.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckpoint.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/HATestUtil.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/net/TestNetworkTopology.java
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored
    hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml

Propchange: hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs:r1327719-1333290

Modified: hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java?rev=1333291&r1=1333290&r2=1333291&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java (original)
+++ hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java Thu May  3 02:14:01 2012
@@ -159,7 +159,7 @@ public class HttpFSFileSystem extends Fi
    * Get operations.
    */
   public enum GetOpValues {
-    OPEN, GETFILESTATUS, LISTSTATUS, GETHOMEDIR, GETCONTENTSUMMARY, GETFILECHECKSUM,
+    OPEN, GETFILESTATUS, LISTSTATUS, GETHOMEDIRECTORY, GETCONTENTSUMMARY, GETFILECHECKSUM,
     GETDELEGATIONTOKEN, GETFILEBLOCKLOCATIONS, INSTRUMENTATION
   }
 
@@ -684,7 +684,7 @@ public class HttpFSFileSystem extends Fi
   @Override
   public Path getHomeDirectory() {
     Map<String, String> params = new HashMap<String, String>();
-    params.put(OP_PARAM, GetOpValues.GETHOMEDIR.toString());
+    params.put(OP_PARAM, GetOpValues.GETHOMEDIRECTORY.toString());
     try {
       HttpURLConnection conn = getConnection(HTTP_GET, params,
         new Path(getUri().toString(), "/"), false);
       validateResponse(conn, HttpURLConnection.HTTP_OK);

Modified: hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java?rev=1333291&r1=1333290&r2=1333291&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java (original)
+++ hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java Thu May  3 02:14:01 2012
@@ -43,8 +43,8 @@ import java.util.Map;
 public class FSOperations {
 
   /**
-   * Converts a Unix permission octal & symbolic representation
-   * (i.e. 655 or -rwxr--r--) into a FileSystemAccess permission.
+   * Converts a Unix permission octal
+   * (i.e. 655 or 1777) into a FileSystemAccess permission.
    *
    * @param str Unix permission symbolic representation.
    *
@@ -55,10 +55,8 @@ public class FSOperations {
     FsPermission permission;
     if (str.equals(HttpFSFileSystem.DEFAULT_PERMISSION)) {
       permission = FsPermission.getDefault();
-    } else if (str.length() == 3) {
-      permission = new FsPermission(Short.parseShort(str, 8));
     } else {
-      permission = FsPermission.valueOf(str);
+      permission = new FsPermission(Short.parseShort(str, 8));
     }
     return permission;
   }

Modified: hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParams.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParams.java?rev=1333291&r1=1333290&r2=1333291&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParams.java (original)
+++ hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParams.java Thu May  3 02:14:01 2012
@@ -446,7 +446,7 @@ public class HttpFSParams {
      * Symbolic Unix permissions regular expression pattern.
      */
     private static final Pattern PERMISSION_PATTERN =
-      Pattern.compile(DEFAULT + "|(-[-r][-w][-x][-r][-w][-x][-r][-w][-x])" + "|[0-7][0-7][0-7]");
+      Pattern.compile(DEFAULT + "|[0-1]?[0-7][0-7][0-7]");
 
     /**
      * Constructor.

Modified: hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java?rev=1333291&r1=1333290&r2=1333291&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java (original)
+++ hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java Thu May  3 02:14:01 2012
@@ -291,7 +291,7 @@ public class HttpFSServer {
         response = Response.ok(json).type(MediaType.APPLICATION_JSON).build();
         break;
       }
-      case GETHOMEDIR: {
+      case GETHOMEDIRECTORY: {
        FSOperations.FSHomeDir command = new FSOperations.FSHomeDir();
        JSONObject json = fsExecute(user, doAs.value(), command);
        AUDIT_LOG.info("");
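The FSOperations and HttpFSParams changes above (HDFS-3309) reduce HttpFS permission parsing to a single octal path, with an optional leading sticky-bit digit. A minimal standalone sketch of the same parse-and-validate logic follows; the class name and main() method are illustrative, not part of this patch:

    import java.util.regex.Pattern;

    import org.apache.hadoop.fs.permission.FsPermission;

    public class OctalPermissionParser {
      // Mirrors the updated HttpFSParams pattern: an optional sticky-bit
      // digit (0 or 1) followed by three octal digits, e.g. "755" or "1777".
      private static final Pattern OCTAL = Pattern.compile("[0-1]?[0-7][0-7][0-7]");

      /** Parses an octal string such as "655" or "1777" into an FsPermission. */
      static FsPermission parse(String str) {
        if (!OCTAL.matcher(str).matches()) {
          throw new IllegalArgumentException("Not an octal permission: " + str);
        }
        // Radix-8 parse; the leading digit, if present, carries the sticky bit.
        return new FsPermission(Short.parseShort(str, 8));
      }

      public static void main(String[] args) {
        FsPermission p = parse("1777");
        // Prints the symbolic form and whether the sticky bit survived the parse.
        System.out.println(p + " sticky=" + p.getStickyBit());
      }
    }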
Modified: hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/TestHttpFSFileSystem.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/TestHttpFSFileSystem.java?rev=1333291&r1=1333290&r2=1333291&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/TestHttpFSFileSystem.java (original)
+++ hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/TestHttpFSFileSystem.java Thu May  3 02:14:01 2012
@@ -310,11 +310,8 @@ public class TestHttpFSFileSystem extend
 
   private void testSetPermission() throws Exception {
     FileSystem fs = FileSystem.get(TestHdfsHelper.getHdfsConf());
-    Path path = new Path(TestHdfsHelper.getHdfsTestDir(), "foo.txt");
-    OutputStream os = fs.create(path);
-    os.write(1);
-    os.close();
-    fs.close();
+    Path path = new Path(TestHdfsHelper.getHdfsTestDir(), "foodir");
+    fs.mkdirs(path);
 
     fs = getHttpFileSystem();
     FsPermission permission1 = new FsPermission(FsAction.READ_WRITE, FsAction.NONE, FsAction.NONE);
@@ -326,6 +323,19 @@ public class TestHttpFSFileSystem extend
     fs.close();
     FsPermission permission2 = status1.getPermission();
     Assert.assertEquals(permission2, permission1);
+
+    //sticky bit
+    fs = getHttpFileSystem();
+    permission1 = new FsPermission(FsAction.READ_WRITE, FsAction.NONE, FsAction.NONE, true);
+    fs.setPermission(path, permission1);
+    fs.close();
+
+    fs = FileSystem.get(TestHdfsHelper.getHdfsConf());
+    status1 = fs.getFileStatus(path);
+    fs.close();
+    permission2 = status1.getPermission();
+    Assert.assertTrue(permission2.getStickyBit());
+    Assert.assertEquals(permission2, permission1);
   }
 
   private void testSetOwner() throws Exception {

Modified: hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestCheckUploadContentTypeFilter.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestCheckUploadContentTypeFilter.java?rev=1333291&r1=1333290&r2=1333291&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestCheckUploadContentTypeFilter.java (original)
+++ hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestCheckUploadContentTypeFilter.java Thu May  3 02:14:01 2012
@@ -53,7 +53,7 @@ public class TestCheckUploadContentTypeF
 
   @Test
   public void getOther() throws Exception {
-    test("GET", HttpFSFileSystem.GetOpValues.GETHOMEDIR.toString(), "plain/text", false, false);
+    test("GET", HttpFSFileSystem.GetOpValues.GETHOMEDIRECTORY.toString(), "plain/text", false, false);
   }
 
   @Test
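The GETHOMEDIR-to-GETHOMEDIRECTORY rename (HDFS-3314) aligns the HttpFS operation name with WebHDFS. A hedged sketch of calling the renamed operation over plain HTTP; the host, port (14000 is the usual HttpFS default), and user are assumptions for illustration, and user.name only applies with pseudo authentication:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class GetHomeDirectoryExample {
      public static void main(String[] args) throws Exception {
        URL url = new URL(
            "http://httpfs-host:14000/webhdfs/v1/?op=GETHOMEDIRECTORY&user.name=alice");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        BufferedReader in = new BufferedReader(
            new InputStreamReader(conn.getInputStream()));
        try {
          // The server answers with a small JSON document naming the home dir.
          System.out.println(in.readLine());
        } finally {
          in.close();
        }
      }
    }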
Modified: hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt?rev=1333291&r1=1333290&r2=1333291&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt (original)
+++ hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt Thu May  3 02:14:01 2012
@@ -65,8 +65,17 @@ Trunk (unreleased changes)
     HDFS-3273. Refactor BackupImage and FSEditLog, and rename
     JournalListener.rollLogs(..) to startLogSegment(..). (szetszwo)
 
-    HDFS-3292. Remove the deprecated DiskStatus, getDiskStatus(), getRawCapacity() and
-    getRawUsed() from DistributedFileSystem. (Arpit Gupta via szetszwo)
+    HDFS-3292. Remove the deprecated DiskStatus, getDiskStatus(), getRawUsed()
+    and getRawCapacity() from DistributedFileSystem. (Arpit Gupta via szetszwo)
+
+    HADOOP-8285. HDFS changes for Use ProtoBuf for RpcPayLoadHeader. (sanjay
+    radia)
+
+    HDFS-2743. Streamline usage of bookkeeper journal manager.
+    (Ivan Kelly via umamahesh)
+
+    HDFS-3293. Add toString(), equals(..) and hashCode() to JournalInfo.
+    (Hari Mankude via szetszwo)
 
   OPTIMIZATIONS
 
@@ -130,6 +139,8 @@ Trunk (unreleased changes)
     (Henry Robinson via atm)
 
     HDFS-3243. TestParallelRead timing out on jenkins. (Henry Robinson via todd)
+
+    HDFS-3265. PowerPc Build error. (Kumar Ravi via mattf)
 
 Release 2.0.0 - UNRELEASED
 
@@ -210,6 +221,10 @@ Release 2.0.0 - UNRELEASED
 
     HDFS-3004. Implement Recovery Mode. (Colin Patrick McCabe via eli)
 
+    HDFS-3282. Add HdfsDataInputStream as a public API. (umamahesh)
+
+    HDFS-3298. Add HdfsDataOutputStream as a public API. (szetszwo)
+
   IMPROVEMENTS
 
     HDFS-2018. Move all journal stream management code into one place.
@@ -390,6 +405,20 @@ Release 2.0.0 - UNRELEASED
 
     HDFS-3263. HttpFS should read HDFS config from Hadoop site.xml files (tucu)
 
+    HDFS-3206. Miscellaneous xml cleanups for OEV.
+    (Colin Patrick McCabe via eli)
+
+    HDFS-3169. TestFsck should test multiple -move operations in a row.
+    (Colin Patrick McCabe via eli)
+
+    HDFS-3258. Test for HADOOP-8144 (pseudoSortByDistance in
+    NetworkTopology for first rack local node). (Junping Du via eli)
+
+    HDFS-3322. Use HdfsDataInputStream and HdfsDataOutputStream in Hdfs.
+    (szetszwo)
+
+    HDFS-3339. Change INode to package private. (John George via szetszwo)
+
   OPTIMIZATIONS
 
     HDFS-3024. Improve performance of stringification in addStoredBlock (todd)
@@ -535,6 +564,31 @@ Release 2.0.0 - UNRELEASED
     HDFS-3165. HDFS Balancer scripts are refering to wrong path of
     hadoop-daemon.sh (Amith D K via eli)
 
+    HDFS-891. DataNode no longer needs to check for dfs.network.script.
+    (harsh via eli)
+
+    HDFS-3305. GetImageServlet should consider SBN a valid requestor in a
+    secure HA setup. (atm)
+
+    HDFS-3314. HttpFS operation for getHomeDirectory is incorrect. (tucu)
+
+    HDFS-3319. Change DFSOutputStream to not to start a thread in constructors.
+    (szetszwo)
+
+    HDFS-3181. Fix a test case in TestLeaseRecovery2. (szetszwo)
+
+    HDFS-3309. HttpFS (Hoop) chmod not supporting octal and sticky bit
+    permissions. (tucu)
+
+    HDFS-3326. Append enabled log message uses the wrong variable.
+    (Matthew Jacobs via eli)
+
+    HDFS-3336. hdfs launcher script will be better off not special casing
+    namenode command with regards to hadoop.security.logger (rvs via tucu)
+
+    HDFS-3330. If GetImageServlet throws an Error or RTE, response should not
+    have HTTP "OK" status. (todd)
+
   BREAKDOWN OF HDFS-1623 SUBTASKS
 
     HDFS-2179. Add fencing framework and mechanisms for NameNode HA. (todd)
@@ -871,6 +925,23 @@ Release 0.23.3 - UNRELEASED
     HDFS-2652. Add support for host-based delegation tokens. (Daryn Sharp
     via szetszwo)
 
+    HDFS-3308. Uses canonical URI to select delegation tokens in HftpFileSystem
+    and WebHdfsFileSystem. (Daryn Sharp via szetszwo)
+
+    HDFS-3312. In HftpFileSystem, the namenode URI is non-secure but the
+    delegation tokens have to use secure URI. (Daryn Sharp via szetszwo)
+
+    HDFS-3318. Use BoundedInputStream in ByteRangeInputStream, otherwise, it
+    hangs on transfers >2 GB. (Daryn Sharp via szetszwo)
+
+    HDFS-3321. Fix safe mode turn off tip message. (Ravi Prakash via szetszwo)
+
+    HDFS-3334. Fix ByteRangeInputStream stream leakage. (Daryn Sharp via
+    szetszwo)
+
+    HDFS-3331. In namenode, check superuser privilege for setBalancerBandwidth
+    and acquire the write lock for finalizeUpgrade. (szetszwo)
+
 Release 0.23.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

Modified: hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/README.txt
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/README.txt?rev=1333291&r1=1333290&r2=1333291&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/README.txt (original)
+++ hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/README.txt Thu May  3 02:14:01 2012
@@ -12,19 +12,25 @@ How do I build?
 
  To generate the distribution packages for BK journal, do the following.
 
-   $ mvn clean install -Pdist -Dtar
+   $ mvn clean package -Pdist
 
-   This will generate a tarball,
-   target/hadoop-hdfs-bkjournal-<VERSION>.tar.gz
+   This will generate a jar with all the dependencies needed by the journal
+   manager,
+
+   target/hadoop-hdfs-bkjournal-<VERSION>.jar
+
+   Note that the -Pdist part of the build command is important, as otherwise
+   the dependencies would not be packaged in the jar.
 
-------------------------------------------------------------------------------
 How do I use the BookKeeper Journal?
 
- To run a HDFS namenode using BookKeeper as a backend, extract the
- distribution package on top of hdfs
+ To run a HDFS namenode using BookKeeper as a backend, copy the bkjournal
+ jar, generated above, into the lib directory of hdfs. In the standard
+ distribution of HDFS, this is at $HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/
 
-   cd hadoop-hdfs-<VERSION>/
-   tar --strip-components 1 -zxvf path/to/hadoop-hdfs-bkjournal-<VERSION>.tar.gz
+   cp target/hadoop-hdfs-bkjournal-<VERSION>.jar \
+     $HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/
 
 Then, in hdfs-site.xml, set the following properties.
Modified: hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/pom.xml
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/pom.xml?rev=1333291&r1=1333290&r2=1333291&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/pom.xml (original)
+++ hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/pom.xml Thu May  3 02:14:01 2012
@@ -65,4 +65,50 @@
       <scope>test</scope>
     </dependency>
   </dependencies>
+  <profiles>
+    <profile>
+      <id>dist</id>
+      <build>
+        <plugins>
+          <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-shade-plugin</artifactId>
+            <version>1.5</version>
+            <executions>
+              <execution>
+                <phase>package</phase>
+                <goals>
+                  <goal>shade</goal>
+                </goals>
+                <configuration>
+                  <createDependencyReducedPom>false</createDependencyReducedPom>
+                  <artifactSet>
+                    <includes>
+                      <include>org.apache.bookkeeper:bookkeeper-server</include>
+                      <include>org.apache.zookeeper:zookeeper</include>
+                      <include>org.jboss.netty:netty</include>
+                    </includes>
+                  </artifactSet>
+                  <relocations>
+                    <relocation>
+                      <pattern>org.apache.bookkeeper</pattern>
+                      <shadedPattern>hidden.bkjournal.org.apache.bookkeeper</shadedPattern>
+                    </relocation>
+                    <relocation>
+                      <pattern>org.apache.zookeeper</pattern>
+                      <shadedPattern>hidden.bkjournal.org.apache.zookeeper</shadedPattern>
+                    </relocation>
+                    <relocation>
+                      <pattern>org.jboss.netty</pattern>
+                      <shadedPattern>hidden.bkjournal.org.jboss.netty</shadedPattern>
+                    </relocation>
+                  </relocations>
+                </configuration>
+              </execution>
+            </executions>
+          </plugin>
+        </plugins>
+      </build>
+    </profile>
+  </profiles>

Modified: hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs?rev=1333291&r1=1333290&r2=1333291&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs (original)
+++ hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs Thu May  3 02:14:01 2012
@@ -122,12 +122,7 @@ if $cygwin; then
 fi
 export CLASSPATH=$CLASSPATH
 
-#turn security logger on the namenode
-if [ $COMMAND = "namenode" ]; then
-  HADOOP_OPTS="$HADOOP_OPTS -Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS}"
-else
-  HADOOP_OPTS="$HADOOP_OPTS -Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,NullAppender}"
-fi
+HADOOP_OPTS="$HADOOP_OPTS -Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,NullAppender}"
 
 # Check to see if we should start a secure datanode
 if [ "$starting_secure_dn" = "true" ]; then

Propchange: hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java:r1327719-1333290

Modified: hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/Hdfs.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/Hdfs.java?rev=1333291&r1=1333290&r2=1333291&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/Hdfs.java (original)
+++ hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/Hdfs.java Thu May  3 02:14:01 2012
@@ -35,6 +35,8 @@ import org.apache.hadoop.hdfs.CorruptFil
 import org.apache.hadoop.hdfs.DFSClient;
 import org.apache.hadoop.hdfs.DFSUtil;
 import org.apache.hadoop.hdfs.HdfsConfiguration;
+import org.apache.hadoop.hdfs.client.HdfsDataInputStream;
+import org.apache.hadoop.hdfs.client.HdfsDataOutputStream;
 import org.apache.hadoop.hdfs.protocol.DirectoryListing;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;
@@ -43,8 +45,8 @@ import org.apache.hadoop.hdfs.security.t
 import org.apache.hadoop.hdfs.server.namenode.NameNode;
 import org.apache.hadoop.io.Text;
 import org.apache.hadoop.security.AccessControlException;
-import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.security.token.SecretManager.InvalidToken;
+import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier;
 import org.apache.hadoop.util.Progressable;
@@ -88,11 +90,11 @@ public class Hdfs extends AbstractFileSy
   }
 
   @Override
-  public FSDataOutputStream createInternal(Path f,
+  public HdfsDataOutputStream createInternal(Path f,
       EnumSet<CreateFlag> createFlag, FsPermission absolutePermission,
       int bufferSize, short replication, long blockSize, Progressable progress,
       int bytesPerChecksum, boolean createParent) throws IOException {
-    return new FSDataOutputStream(dfs.primitiveCreate(getUriPath(f),
+    return new HdfsDataOutputStream(dfs.primitiveCreate(getUriPath(f),
         absolutePermission, createFlag, createParent, replication, blockSize,
         progress, bufferSize, bytesPerChecksum), getStatistics());
   }
@@ -324,8 +326,9 @@ public class Hdfs extends AbstractFileSy
     dfs.mkdirs(getUriPath(dir), permission, createParent);
   }
 
+  @SuppressWarnings("deprecation")
   @Override
-  public FSDataInputStream open(Path f, int bufferSize)
+  public HdfsDataInputStream open(Path f, int bufferSize)
       throws IOException, UnresolvedLinkException {
     return new DFSClient.DFSDataInputStream(dfs.open(getUriPath(f),
         bufferSize, verifyChecksum));

Modified: hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/ByteRangeInputStream.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/ByteRangeInputStream.java?rev=1333291&r1=1333290&r2=1333291&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/ByteRangeInputStream.java (original)
+++ hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/ByteRangeInputStream.java Thu May  3 02:14:01 2012
@@ -23,9 +23,12 @@ import java.io.InputStream;
 import java.net.HttpURLConnection;
 import java.net.URL;
 
+import org.apache.commons.io.input.BoundedInputStream;
 import org.apache.hadoop.fs.FSInputStream;
 import org.apache.hadoop.hdfs.server.namenode.StreamFile;
 
+import com.google.common.annotations.VisibleForTesting;
+
 /**
  * To support HTTP byte streams, a new connection to an HTTP server needs to be
  * created each time. This class hides the complexity of those multiple
@@ -60,7 +63,7 @@ public abstract class ByteRangeInputStre
   }
 
   enum StreamStatus {
-    NORMAL, SEEK
+    NORMAL, SEEK, CLOSED
   }
   protected InputStream in;
   protected URLOpener originalURL;
@@ -88,66 +91,93 @@ public abstract class ByteRangeInputStre
   protected abstract URL getResolvedUrl(final HttpURLConnection connection
       ) throws IOException;
 
-  private InputStream getInputStream() throws IOException {
-    if (status != StreamStatus.NORMAL) {
-
-      if (in != null) {
-        in.close();
-        in = null;
-      }
-
-      // Use the original url if no resolved url exists, eg. if
-      // it's the first time a request is made.
-      final URLOpener opener =
-        (resolvedURL.getURL() == null) ? originalURL : resolvedURL;
-
-      final HttpURLConnection connection = opener.openConnection(startPos);
-      connection.connect();
-      checkResponseCode(connection);
-
-      final String cl = connection.getHeaderField(StreamFile.CONTENT_LENGTH);
-      filelength = (cl == null) ? -1 : Long.parseLong(cl);
-      in = connection.getInputStream();
-
-      resolvedURL.setURL(getResolvedUrl(connection));
-      status = StreamStatus.NORMAL;
+  @VisibleForTesting
+  protected InputStream getInputStream() throws IOException {
+    switch (status) {
+      case NORMAL:
+        break;
+      case SEEK:
+        if (in != null) {
+          in.close();
+        }
+        in = openInputStream();
+        status = StreamStatus.NORMAL;
+        break;
+      case CLOSED:
+        throw new IOException("Stream closed");
     }
-
     return in;
   }
 
-  private void update(final boolean isEOF, final int n)
-      throws IOException {
-    if (!isEOF) {
+  @VisibleForTesting
+  protected InputStream openInputStream() throws IOException {
+    // Use the original url if no resolved url exists, eg. if
+    // it's the first time a request is made.
+    final URLOpener opener =
+      (resolvedURL.getURL() == null) ? originalURL : resolvedURL;
+
+    final HttpURLConnection connection = opener.openConnection(startPos);
+    connection.connect();
+    checkResponseCode(connection);
+
+    final String cl = connection.getHeaderField(StreamFile.CONTENT_LENGTH);
+    if (cl == null) {
+      throw new IOException(StreamFile.CONTENT_LENGTH+" header is missing");
+    }
+    final long streamlength = Long.parseLong(cl);
+    filelength = startPos + streamlength;
+    // Java has a bug with >2GB request streams.  It won't bounds check
+    // the reads so the transfer blocks until the server times out
+    InputStream is =
+        new BoundedInputStream(connection.getInputStream(), streamlength);
+
+    resolvedURL.setURL(getResolvedUrl(connection));
+
+    return is;
+  }
+
+  private int update(final int n) throws IOException {
+    if (n != -1) {
       currentPos += n;
     } else if (currentPos < filelength) {
       throw new IOException("Got EOF but currentPos = " + currentPos
          + " < filelength = " + filelength);
    }
+    return n;
  }

+  @Override
  public int read() throws IOException {
    final int b = getInputStream().read();
-    update(b == -1, 1);
+    update((b == -1) ? -1 : 1);
    return b;
  }
+
+  @Override
+  public int read(byte b[], int off, int len) throws IOException {
+    return update(getInputStream().read(b, off, len));
+  }

  /**
   * Seek to the given offset from the start of the file.
   * The next read() will be from that location.  Can't
   * seek past the end of the file.
   */
+  @Override
  public void seek(long pos) throws IOException {
    if (pos != currentPos) {
      startPos = pos;
      currentPos = pos;
-      status = StreamStatus.SEEK;
+      if (status != StreamStatus.CLOSED) {
+        status = StreamStatus.SEEK;
+      }
    }
  }

  /**
   * Return the current offset from the start of the file
   */
+  @Override
  public long getPos() throws IOException {
    return currentPos;
  }
@@ -156,7 +186,17 @@ public abstract class ByteRangeInputStre
   * Seeks a different copy of the data.  Returns true if
   * found a new source, false otherwise.
   */
+  @Override
  public boolean seekToNewSource(long targetPos) throws IOException {
    return false;
  }
-}
\ No newline at end of file
+
+  @Override
+  public void close() throws IOException {
+    if (in != null) {
+      in.close();
+      in = null;
+    }
+    status = StreamStatus.CLOSED;
+  }
+}
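The ByteRangeInputStream changes above (HDFS-3318/HDFS-3334) cap each range request with commons-io's BoundedInputStream, so a read can never run past the advertised Content-Length. A minimal sketch of that wrapper's behavior, assuming commons-io is on the classpath; the byte-array source stands in for the HTTP stream:

    import java.io.ByteArrayInputStream;
    import java.io.InputStream;

    import org.apache.commons.io.input.BoundedInputStream;

    public class BoundedReadDemo {
      public static void main(String[] args) throws Exception {
        InputStream raw = new ByteArrayInputStream(new byte[100]);
        // Cap the stream at 10 bytes: read() returns -1 once the limit is
        // reached, instead of over-reading the underlying stream.
        InputStream bounded = new BoundedInputStream(raw, 10);
        int n = 0;
        while (bounded.read() != -1) {
          n++;
        }
        System.out.println(n); // prints 10
      }
    }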
Modified: hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java?rev=1333291&r1=1333290&r2=1333291&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java (original)
+++ hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java Thu May  3 02:14:01 2012
@@ -78,8 +78,6 @@ import org.apache.hadoop.fs.BlockLocatio
 import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 import org.apache.hadoop.fs.ContentSummary;
 import org.apache.hadoop.fs.CreateFlag;
-import org.apache.hadoop.fs.FSDataInputStream;
-import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.FileAlreadyExistsException;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.FsServerDefaults;
@@ -91,6 +89,8 @@ import org.apache.hadoop.fs.ParentNotDir
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.UnresolvedLinkException;
 import org.apache.hadoop.fs.permission.FsPermission;
+import org.apache.hadoop.hdfs.client.HdfsDataInputStream;
+import org.apache.hadoop.hdfs.client.HdfsDataOutputStream;
 import org.apache.hadoop.hdfs.protocol.ClientProtocol;
 import org.apache.hadoop.hdfs.protocol.CorruptFileBlocks;
 import org.apache.hadoop.hdfs.protocol.DSQuotaExceededException;
@@ -996,7 +996,7 @@ public class DFSClient implements java.i
    * Call {@link #create(String, FsPermission, EnumSet, boolean, short,
    * long, Progressable, int)} with createParent set to true.
    */
-  public OutputStream create(String src,
+  public DFSOutputStream create(String src,
                              FsPermission permission,
                              EnumSet<CreateFlag> flag,
                              short replication,
@@ -1029,7 +1029,7 @@ public class DFSClient implements java.i
    * @see ClientProtocol#create(String, FsPermission, String, EnumSetWritable,
    * boolean, short, long) for detailed description of exceptions thrown
    */
-  public OutputStream create(String src,
+  public DFSOutputStream create(String src,
                              FsPermission permission,
                              EnumSet<CreateFlag> flag,
                              boolean createParent,
@@ -1046,9 +1046,9 @@ public class DFSClient implements java.i
     if(LOG.isDebugEnabled()) {
       LOG.debug(src + ": masked=" + masked);
     }
-    final DFSOutputStream result = new DFSOutputStream(this, src, masked, flag,
-        createParent, replication, blockSize, progress, buffersize,
-        dfsClientConf.createChecksum());
+    final DFSOutputStream result = DFSOutputStream.newStreamForCreate(this,
+        src, masked, flag, createParent, replication, blockSize, progress,
+        buffersize, dfsClientConf.createChecksum());
     leaserenewer.put(src, result, this);
     return result;
   }
@@ -1078,7 +1078,7 @@ public class DFSClient implements java.i
    * Progressable, int)} except that the permission
    * is absolute (ie has already been masked with umask.
    */
-  public OutputStream primitiveCreate(String src,
+  public DFSOutputStream primitiveCreate(String src,
                              FsPermission absPermission,
                              EnumSet<CreateFlag> flag,
                              boolean createParent,
@@ -1095,7 +1095,7 @@ public class DFSClient implements java.i
       DataChecksum checksum = DataChecksum.newDataChecksum(
           dfsClientConf.checksumType,
           bytesPerChecksum);
-      result = new DFSOutputStream(this, src, absPermission,
+      result = DFSOutputStream.newStreamForCreate(this, src, absPermission,
          flag, createParent, replication, blockSize, progress, buffersize,
          checksum);
    }
@@ -1154,7 +1154,7 @@ public class DFSClient implements java.i
                                     UnsupportedOperationException.class,
                                     UnresolvedPathException.class);
    }
-    return new DFSOutputStream(this, src, buffersize, progress,
+    return DFSOutputStream.newStreamForAppend(this, src, buffersize, progress,
        lastBlock, stat, dfsClientConf.createChecksum());
  }

@@ -1169,11 +1169,11 @@ public class DFSClient implements java.i
   *
   * @see ClientProtocol#append(String, String)
   */
-  public FSDataOutputStream append(final String src, final int buffersize,
+  public HdfsDataOutputStream append(final String src, final int buffersize,
      final Progressable progress, final FileSystem.Statistics statistics
      ) throws IOException {
    final DFSOutputStream out = append(src, buffersize, progress);
-    return new FSDataOutputStream(out, statistics, out.getInitialLen());
+    return new HdfsDataOutputStream(out, statistics, out.getInitialLen());
  }

  private DFSOutputStream append(String src, int buffersize, Progressable progress)
@@ -1809,41 +1809,13 @@ public class DFSClient implements java.i
  }

  /**
-   * The Hdfs implementation of {@link FSDataInputStream}
+   * @deprecated use {@link HdfsDataInputStream} instead.
   */
-  @InterfaceAudience.Private
-  public static class DFSDataInputStream extends FSDataInputStream {
-    public DFSDataInputStream(DFSInputStream in)
-      throws IOException {
-      super(in);
-    }
-
-    /**
-     * Returns the datanode from which the stream is currently reading.
-     */
-    public DatanodeInfo getCurrentDatanode() {
-      return ((DFSInputStream)in).getCurrentDatanode();
-    }
-
-    /**
-     * Returns the block containing the target position.
-     */
-    public ExtendedBlock getCurrentBlock() {
-      return ((DFSInputStream)in).getCurrentBlock();
-    }
+  @Deprecated
+  public static class DFSDataInputStream extends HdfsDataInputStream {

-    /**
-     * Return collection of blocks that has already been located.
-     */
-    synchronized List<LocatedBlock> getAllBlocks() throws IOException {
-      return ((DFSInputStream)in).getAllBlocks();
-    }
-
-    /**
-     * @return The visible length of the file.
-     */
-    public long getVisibleLength() throws IOException {
-      return ((DFSInputStream)in).getFileLength();
+    public DFSDataInputStream(DFSInputStream in) throws IOException {
+      super(in);
    }
  }
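With HDFS-3282/HDFS-3298, HdfsDataInputStream and HdfsDataOutputStream become the public replacements for the old DFSClient inner classes. A hedged sketch of a client using the new input-stream type; the NameNode URI and file path are assumptions for illustration:

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.client.HdfsDataInputStream;

    public class HdfsStreamExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);
        // On an HDFS filesystem, open() now yields the public
        // HdfsDataInputStream, so HDFS-specific accessors are available
        // without touching the deprecated DFSClient.DFSDataInputStream.
        HdfsDataInputStream in =
            (HdfsDataInputStream) fs.open(new Path("/tmp/data.bin"));
        in.read(); // start reading so a source datanode is selected
        System.out.println("visible length: " + in.getVisibleLength());
        System.out.println("reading from:   " + in.getCurrentDatanode());
        in.close();
      }
    }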
Modified: hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java?rev=1333291&r1=1333290&r2=1333291&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java (original)
+++ hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java Thu May  3 02:14:01 2012
@@ -118,6 +118,39 @@ public class DFSInputStream extends FSIn
    * Grab the open-file info from namenode
    */
   synchronized void openInfo() throws IOException, UnresolvedLinkException {
+    lastBlockBeingWrittenLength = fetchLocatedBlocksAndGetLastBlockLength();
+    int retriesForLastBlockLength = 3;
+    while (retriesForLastBlockLength > 0) {
+      // Getting last block length as -1 is a special case. When cluster
+      // restarts, DNs may not report immediately. At this time partial block
+      // locations will not be available with NN for getting the length. Lets
+      // retry for 3 times to get the length.
+      if (lastBlockBeingWrittenLength == -1) {
+        DFSClient.LOG.warn("Last block locations not available. "
+            + "Datanodes might not have reported blocks completely."
+            + " Will retry for " + retriesForLastBlockLength + " times");
+        waitFor(4000);
+        lastBlockBeingWrittenLength = fetchLocatedBlocksAndGetLastBlockLength();
+      } else {
+        break;
+      }
+      retriesForLastBlockLength--;
+    }
+    if (retriesForLastBlockLength == 0) {
+      throw new IOException("Could not obtain the last block locations.");
+    }
+  }
+
+  private void waitFor(int waitTime) throws IOException {
+    try {
+      Thread.sleep(waitTime);
+    } catch (InterruptedException e) {
+      throw new IOException(
+          "Interrupted while getting the last block length.");
+    }
+  }
+
+  private long fetchLocatedBlocksAndGetLastBlockLength() throws IOException {
     LocatedBlocks newInfo = DFSClient.callGetBlockLocations(dfsClient.namenode, src, 0, prefetchSize);
     if (DFSClient.LOG.isDebugEnabled()) {
       DFSClient.LOG.debug("newInfo = " + newInfo);
@@ -136,10 +169,13 @@ public class DFSInputStream extends FSIn
       }
     }
     locatedBlocks = newInfo;
-    lastBlockBeingWrittenLength = 0;
+    long lastBlockBeingWrittenLength = 0;
     if (!locatedBlocks.isLastBlockComplete()) {
       final LocatedBlock last = locatedBlocks.getLastLocatedBlock();
       if (last != null) {
+        if (last.getLocations().length == 0) {
+          return -1;
+        }
         final long len = readBlockLength(last);
         last.getBlock().setNumBytes(len);
         lastBlockBeingWrittenLength = len;
       }
     }
 
     currentNode = null;
+    return lastBlockBeingWrittenLength;
   }
 
   /** Read the block length from one of the datanodes.
    */
   private long readBlockLength(LocatedBlock locatedblock) throws IOException {
-    if (locatedblock == null || locatedblock.getLocations().length == 0) {
-      return 0;
-    }
+    assert locatedblock != null : "LocatedBlock cannot be null";
     int replicaNotFoundCount = locatedblock.getLocations().length;
 
     for(DatanodeInfo datanode : locatedblock.getLocations()) {
@@ -224,7 +259,7 @@ public class DFSInputStream extends FSIn
   /**
    * Return collection of blocks that has already been located.
    */
-  synchronized List<LocatedBlock> getAllBlocks() throws IOException {
+  public synchronized List<LocatedBlock> getAllBlocks() throws IOException {
     return getBlockRange(0, getFileLength());
   }
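The openInfo() change above is a bounded-retry loop: after a cluster restart, DataNodes may not have reported blocks yet, so the last block's length is temporarily unobtainable. The same pattern, decoupled from HDFS, looks roughly like this (a generic illustration, not a Hadoop API):

    import java.io.IOException;
    import java.util.concurrent.Callable;

    public class BoundedRetry {
      /** Retries source.call() until it yields a non-null value, up to
       *  maxRetries attempts, sleeping delayMs between attempts. */
      static <T> T retry(Callable<T> source, int maxRetries, long delayMs)
          throws Exception {
        for (int attempt = 0; attempt < maxRetries; attempt++) {
          T value = source.call();
          if (value != null) {
            return value;       // success: the result became available
          }
          Thread.sleep(delayMs); // e.g. wait for DataNodes to report blocks
        }
        throw new IOException("Result not available after " + maxRetries + " tries");
      }
    }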
Modified: hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java?rev=1333291&r1=1333290&r2=1333291&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java (original)
+++ hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java Thu May  3 02:14:01 2012
@@ -44,7 +44,7 @@ import org.apache.hadoop.fs.ParentNotDir
 import org.apache.hadoop.fs.Syncable;
 import org.apache.hadoop.fs.UnresolvedLinkException;
 import org.apache.hadoop.fs.permission.FsPermission;
-import org.apache.hadoop.hdfs.protocol.ClientProtocol;
+import org.apache.hadoop.hdfs.client.HdfsDataOutputStream;
 import org.apache.hadoop.hdfs.protocol.DSQuotaExceededException;
 import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
 import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
@@ -99,7 +99,7 @@ import org.apache.hadoop.util.Progressab
  * starts sending packets from the dataQueue.
 ****************************************************************/
 @InterfaceAudience.Private
-class DFSOutputStream extends FSOutputSummer implements Syncable {
+public class DFSOutputStream extends FSOutputSummer implements Syncable {
   private final DFSClient dfsClient;
   private static final int MAX_PACKETS = 80; // each packet 64K, total 5MB
   private Socket s;
@@ -1233,14 +1233,11 @@
     this.checksum = checksum;
   }
 
-  /**
-   * Create a new output stream to the given DataNode.
-   * @see ClientProtocol#create(String, FsPermission, String, EnumSetWritable, boolean, short, long)
-   */
-  DFSOutputStream(DFSClient dfsClient, String src, FsPermission masked, EnumSet<CreateFlag> flag,
-      boolean createParent, short replication, long blockSize, Progressable progress,
-      int buffersize, DataChecksum checksum)
-      throws IOException {
+  /** Construct a new output stream for creating a file. */
+  private DFSOutputStream(DFSClient dfsClient, String src, FsPermission masked,
+      EnumSet<CreateFlag> flag, boolean createParent, short replication,
+      long blockSize, Progressable progress, int buffersize,
+      DataChecksum checksum) throws IOException {
     this(dfsClient, src, blockSize, progress, checksum, replication);
 
     computePacketChunkSize(dfsClient.getConf().writePacketSize,
@@ -1260,14 +1257,21 @@
           UnresolvedPathException.class);
     }
     streamer = new DataStreamer();
-    streamer.start();
   }
 
-  /**
-   * Create a new output stream to the given DataNode.
-   * @see ClientProtocol#create(String, FsPermission, String, boolean, short, long)
-   */
-  DFSOutputStream(DFSClient dfsClient, String src, int buffersize, Progressable progress,
+  static DFSOutputStream newStreamForCreate(DFSClient dfsClient, String src,
+      FsPermission masked, EnumSet<CreateFlag> flag, boolean createParent,
+      short replication, long blockSize, Progressable progress, int buffersize,
+      DataChecksum checksum) throws IOException {
+    final DFSOutputStream out = new DFSOutputStream(dfsClient, src, masked,
+        flag, createParent, replication, blockSize, progress, buffersize,
+        checksum);
+    out.streamer.start();
+    return out;
+  }
+
+  /** Construct a new output stream for append. */
+  private DFSOutputStream(DFSClient dfsClient, String src, int buffersize, Progressable progress,
       LocatedBlock lastBlock, HdfsFileStatus stat,
      DataChecksum checksum) throws IOException {
    this(dfsClient, src, stat.getBlockSize(), progress, checksum, stat.getReplication());
@@ -1285,7 +1289,15 @@ class DFSOutputStream extends FSOutputSu
          checksum.getBytesPerChecksum());
      streamer = new DataStreamer();
    }
-    streamer.start();
+  }
+
+  static DFSOutputStream newStreamForAppend(DFSClient dfsClient, String src,
+      int buffersize, Progressable progress, LocatedBlock lastBlock,
+      HdfsFileStatus stat, DataChecksum checksum) throws IOException {
+    final DFSOutputStream out = new DFSOutputStream(dfsClient, src, buffersize,
+        progress, lastBlock, stat, checksum);
+    out.streamer.start();
+    return out;
  }

  private void computePacketChunkSize(int psize, int csize) {
@@ -1530,14 +1542,20 @@ class DFSOutputStream extends FSOutputSu
  }

  /**
-   * Returns the number of replicas of current block. This can be different
-   * from the designated replication factor of the file because the NameNode
-   * does not replicate the block to which a client is currently writing to.
-   * The client continues to write to a block even if a few datanodes in the
-   * write pipeline have failed.
-   * @return the number of valid replicas of the current block
+   * @deprecated use {@link HdfsDataOutputStream#getCurrentBlockReplication()}.
   */
+  @Deprecated
  public synchronized int getNumCurrentReplicas() throws IOException {
+    return getCurrentBlockReplication();
+  }
+
+  /**
+   * Note that this is not a public API;
+   * use {@link HdfsDataOutputStream#getCurrentBlockReplication()} instead.
+   *
+   * @return the number of valid replicas of the current block
+   */
+  public synchronized int getCurrentBlockReplication() throws IOException {
    dfsClient.checkOpen();
    isClosed();
    if (streamer == null) {
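The DFSOutputStream refactoring above implements HDFS-3319: the constructors no longer start the DataStreamer thread; private constructors plus static factories do it after construction completes, so the thread can never observe a partially built object ("this" escape). The pattern in isolation, with illustrative names that are not Hadoop APIs:

    public class Worker {
      private final Thread streamer;

      private Worker() {
        // Safe: the thread is created but NOT started in the constructor.
        streamer = new Thread(new Runnable() {
          public void run() { /* drain queue, send packets, ... */ }
        });
      }

      /** Factory method: construction finishes before the thread starts. */
      static Worker newWorker() {
        Worker w = new Worker();
        w.streamer.start();
        return w;
      }
    }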
Modified: hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java?rev=1333291&r1=1333290&r2=1333291&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java (original)
+++ hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java Thu May  3 02:14:01 2012
@@ -52,7 +52,6 @@ import org.apache.hadoop.hdfs.protocolPB
 import org.apache.hadoop.hdfs.server.namenode.NameNode;
 import org.apache.hadoop.ipc.ProtobufRpcEngine;
 import org.apache.hadoop.ipc.RPC;
-import org.apache.hadoop.ipc.RpcPayloadHeader.RpcKind;
 import org.apache.hadoop.net.NetUtils;
 import org.apache.hadoop.net.NodeBase;
 import org.apache.hadoop.security.UserGroupInformation;
@@ -140,37 +139,6 @@ public class DFSUtil {
   }
 
   /**
-   * Utility class to facilitate junit test error simulation.
-   */
-  @InterfaceAudience.Private
-  public static class ErrorSimulator {
-    private static boolean[] simulation = null; // error simulation events
-    public static void initializeErrorSimulationEvent(int numberOfEvents) {
-      simulation = new boolean[numberOfEvents];
-      for (int i = 0; i < numberOfEvents; i++) {
-        simulation[i] = false;
-      }
-    }
-
-    public static boolean getErrorSimulation(int index) {
-      if(simulation == null)
-        return false;
-      assert(index < simulation.length);
-      return simulation[index];
-    }
-
-    public static void setErrorSimulation(int index) {
-      assert(index < simulation.length);
-      simulation[index] = true;
-    }
-
-    public static void clearErrorSimulation(int index) {
-      assert(index < simulation.length);
-      simulation[index] = false;
-    }
-  }
-
-  /**
    * Converts a byte array to a string using UTF8 encoding.
    */
   public static String bytes2String(byte[] bytes) {
@@ -1010,7 +978,7 @@ public class DFSUtil {
   public static void addPBProtocol(Configuration conf, Class<?> protocol,
       BlockingService service, RPC.Server server) throws IOException {
     RPC.setProtocolEngine(conf, protocol, ProtobufRpcEngine.class);
-    server.addProtocol(RpcKind.RPC_PROTOCOL_BUFFER, protocol, service);
+    server.addProtocol(RPC.RpcKind.RPC_PROTOCOL_BUFFER, protocol, service);
   }
 
   /**
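The removed ErrorSimulator is superseded by the fault-injector style seen in the added CheckpointFaultInjector.java: a no-op singleton with named hooks that tests replace with a throwing subclass, instead of flag arrays indexed by magic numbers. A minimal sketch of that pattern; the hook and class names here are illustrative, not the actual CheckpointFaultInjector API:

    class FaultInjector {
      // Production code uses this no-op instance; tests swap in a subclass.
      static FaultInjector instance = new FaultInjector();

      static FaultInjector get() {
        return instance;
      }

      // No-op hooks, invoked by the code under test at interesting points.
      void beforeCheckpoint() {}
    }

    // In a test:
    //   FaultInjector.instance = new FaultInjector() {
    //     void beforeCheckpoint() { throw new RuntimeException("injected"); }
    //   };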
<p/>
+ * + * @return hdfs + */ + @Override + public String getScheme() { return "hdfs"; } + @Deprecated public DistributedFileSystem(InetSocketAddress namenode, Configuration conf) throws IOException { @@ -194,8 +205,9 @@ public class DistributedFileSystem exten return dfs.recoverLease(getPathName(f)); } + @SuppressWarnings("deprecation") @Override - public FSDataInputStream open(Path f, int bufferSize) throws IOException { + public HdfsDataInputStream open(Path f, int bufferSize) throws IOException { statistics.incrementReadOps(1); return new DFSClient.DFSDataInputStream( dfs.open(getPathName(f), bufferSize, verifyChecksum)); @@ -203,31 +215,33 @@ public class DistributedFileSystem exten /** This optional operation is not yet supported. */ @Override - public FSDataOutputStream append(Path f, int bufferSize, + public HdfsDataOutputStream append(Path f, int bufferSize, Progressable progress) throws IOException { statistics.incrementWriteOps(1); return dfs.append(getPathName(f), bufferSize, progress, statistics); } @Override - public FSDataOutputStream create(Path f, FsPermission permission, + public HdfsDataOutputStream create(Path f, FsPermission permission, boolean overwrite, int bufferSize, short replication, long blockSize, Progressable progress) throws IOException { statistics.incrementWriteOps(1); - return new FSDataOutputStream(dfs.create(getPathName(f), permission, - overwrite ? EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE) - : EnumSet.of(CreateFlag.CREATE), replication, blockSize, progress, - bufferSize), statistics); + final EnumSet<CreateFlag> cflags = overwrite? + EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE) + : EnumSet.of(CreateFlag.CREATE); + final DFSOutputStream out = dfs.create(getPathName(f), permission, cflags, + replication, blockSize, progress, bufferSize); + return new HdfsDataOutputStream(out, statistics); } @SuppressWarnings("deprecation") @Override - protected FSDataOutputStream primitiveCreate(Path f, + protected HdfsDataOutputStream primitiveCreate(Path f, FsPermission absolutePermission, EnumSet<CreateFlag> flag, int bufferSize, short replication, long blockSize, Progressable progress, int bytesPerChecksum) throws IOException { statistics.incrementReadOps(1); - return new FSDataOutputStream(dfs.primitiveCreate(getPathName(f), + return new HdfsDataOutputStream(dfs.primitiveCreate(getPathName(f), absolutePermission, flag, true, replication, blockSize, progress, bufferSize, bytesPerChecksum),statistics); } @@ -235,14 +249,14 @@ public class DistributedFileSystem exten /** * Same as create(), except fails if parent directory doesn't already exist.
*/ - public FSDataOutputStream createNonRecursive(Path f, FsPermission permission, + public HdfsDataOutputStream createNonRecursive(Path f, FsPermission permission, EnumSet<CreateFlag> flag, int bufferSize, short replication, long blockSize, Progressable progress) throws IOException { statistics.incrementWriteOps(1); if (flag.contains(CreateFlag.OVERWRITE)) { flag.add(CreateFlag.CREATE); } - return new FSDataOutputStream(dfs.create(getPathName(f), permission, flag, + return new HdfsDataOutputStream(dfs.create(getPathName(f), permission, flag, false, replication, blockSize, progress, bufferSize), statistics); } @@ -627,14 +641,14 @@ public class DistributedFileSystem exten FSDataInputStream in, long inPos, FSDataInputStream sums, long sumsPos) { - if(!(in instanceof DFSDataInputStream && sums instanceof DFSDataInputStream)) - throw new IllegalArgumentException("Input streams must be types " + - "of DFSDataInputStream"); + if(!(in instanceof HdfsDataInputStream && sums instanceof HdfsDataInputStream)) + throw new IllegalArgumentException( + "Input streams must be types of HdfsDataInputStream"); LocatedBlock lblocks[] = new LocatedBlock[2]; // Find block in data stream. - DFSClient.DFSDataInputStream dfsIn = (DFSClient.DFSDataInputStream) in; + HdfsDataInputStream dfsIn = (HdfsDataInputStream) in; ExtendedBlock dataBlock = dfsIn.getCurrentBlock(); if (dataBlock == null) { LOG.error("Error: Current block in data stream is null! "); @@ -647,7 +661,7 @@ public class DistributedFileSystem exten + dataNode[0]); // Find block in checksum stream - DFSClient.DFSDataInputStream dfsSums = (DFSClient.DFSDataInputStream) sums; + HdfsDataInputStream dfsSums = (HdfsDataInputStream) sums; ExtendedBlock sumsBlock = dfsSums.getCurrentBlock(); if (sumsBlock == null) { LOG.error("Error: Current block in checksum stream is null!
"); Modified: hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HftpFileSystem.java URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HftpFileSystem.java?rev=1333291&r1=1333290&r2=1333291&view=diff ============================================================================== --- hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HftpFileSystem.java (original) +++ hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HftpFileSystem.java Thu May 3 02:14:01 2012 @@ -94,8 +94,8 @@ public class HftpFileSystem extends File protected UserGroupInformation ugi; private URI hftpURI; - protected InetSocketAddress nnAddr; - protected InetSocketAddress nnSecureAddr; + protected URI nnUri; + protected URI nnSecureUri; public static final String HFTP_TIMEZONE = "UTC"; public static final String HFTP_DATE_FORMAT = "yyyy-MM-dd'T'HH:mm:ssZ"; @@ -139,11 +139,30 @@ public class HftpFileSystem extends File return NetUtils.createSocketAddrForHost(uri.getHost(), getDefaultSecurePort()); } + protected URI getNamenodeUri(URI uri) { + return DFSUtil.createUri("http", getNamenodeAddr(uri)); + } + + protected URI getNamenodeSecureUri(URI uri) { + return DFSUtil.createUri("https", getNamenodeSecureAddr(uri)); + } + @Override public String getCanonicalServiceName() { // unlike other filesystems, hftp's service is the secure port, not the // actual port in the uri - return SecurityUtil.buildTokenService(nnSecureAddr).toString(); + return SecurityUtil.buildTokenService(nnSecureUri).toString(); + } + + /** + * Return the protocol scheme for the FileSystem. + *
<p/>
+ * + * @return hftp + */ + @Override + public String getScheme() { return "hftp"; } @Override @@ -152,8 +171,8 @@ public class HftpFileSystem extends File super.initialize(name, conf); setConf(conf); this.ugi = UserGroupInformation.getCurrentUser(); - this.nnAddr = getNamenodeAddr(name); - this.nnSecureAddr = getNamenodeSecureAddr(name); + this.nnUri = getNamenodeUri(name); + this.nnSecureUri = getNamenodeSecureUri(name); try { this.hftpURI = new URI(name.getScheme(), name.getAuthority(), null, null, null); @@ -168,7 +187,7 @@ public class HftpFileSystem extends File protected void initDelegationToken() throws IOException { // look for hftp token, then try hdfs - Token token = selectDelegationToken(); + Token token = selectDelegationToken(ugi); // if we don't already have a token, go get one over https boolean createdToken = false; @@ -189,8 +208,9 @@ public class HftpFileSystem extends File } } - protected Token selectDelegationToken() { - return hftpTokenSelector.selectToken(getUri(), ugi.getTokens(), getConf()); + protected Token selectDelegationToken( + UserGroupInformation ugi) { + return hftpTokenSelector.selectToken(nnSecureUri, ugi.getTokens(), getConf()); } @@ -221,7 +241,7 @@ public class HftpFileSystem extends File ugi.reloginFromKeytab(); return ugi.doAs(new PrivilegedExceptionAction<Token<?>>() { public Token<?> run() throws IOException { - final String nnHttpUrl = DFSUtil.createUri("https", nnSecureAddr).toString(); + final String nnHttpUrl = nnSecureUri.toString(); Credentials c; try { c = DelegationTokenFetcher.getDTfromRemote(nnHttpUrl, renewer); @@ -263,8 +283,8 @@ public class HftpFileSystem extends File * @throws IOException on error constructing the URL */ protected URL getNamenodeURL(String path, String query) throws IOException { - final URL url = new URL("http", nnAddr.getHostName(), - nnAddr.getPort(), path + '?' + query); + final URL url = new URL("http", nnUri.getHost(), + nnUri.getPort(), path + '?' + query); if (LOG.isTraceEnabled()) { LOG.trace("url=" + url); } Modified: hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HsftpFileSystem.java URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HsftpFileSystem.java?rev=1333291&r1=1333290&r2=1333291&view=diff ============================================================================== --- hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HsftpFileSystem.java (original) +++ hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HsftpFileSystem.java Thu May 3 02:14:01 2012 @@ -58,6 +58,17 @@ public class HsftpFileSystem extends Hft private static final long MM_SECONDS_PER_DAY = 1000 * 60 * 60 * 24; private volatile int ExpWarnDays = 0; + /** + * Return the protocol scheme for the FileSystem. + *
<p/>
+ * + * @return hsftp + */ + @Override + public String getScheme() { + return "hsftp"; + } + @Override public void initialize(URI name, Configuration conf) throws IOException { super.initialize(name, conf); @@ -133,11 +144,16 @@ public class HsftpFileSystem extends Hft } @Override + protected URI getNamenodeUri(URI uri) { + return getNamenodeSecureUri(uri); + } + + @Override protected HttpURLConnection openConnection(String path, String query) throws IOException { query = addDelegationTokenParam(query); - final URL url = new URL("https", nnAddr.getHostName(), - nnAddr.getPort(), path + '?' + query); + final URL url = new URL("https", nnUri.getHost(), + nnUri.getPort(), path + '?' + query); HttpsURLConnection conn = (HttpsURLConnection)URLUtils.openConnection(url); // bypass hostname verification conn.setHostnameVerifier(new DummyHostnameVerifier()); Modified: hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientDatanodeProtocolTranslatorPB.java URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientDatanodeProtocolTranslatorPB.java?rev=1333291&r1=1333290&r2=1333291&view=diff ============================================================================== --- hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientDatanodeProtocolTranslatorPB.java (original) +++ hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientDatanodeProtocolTranslatorPB.java Thu May 3 02:14:01 2012 @@ -46,7 +46,6 @@ import org.apache.hadoop.ipc.ProtocolMet import org.apache.hadoop.ipc.ProtocolTranslator; import org.apache.hadoop.ipc.RPC; import org.apache.hadoop.ipc.RpcClientUtil; -import org.apache.hadoop.ipc.RpcPayloadHeader.RpcKind; import org.apache.hadoop.net.NetUtils; import org.apache.hadoop.security.UserGroupInformation; import org.apache.hadoop.security.token.Token; @@ -193,7 +192,7 @@ public class ClientDatanodeProtocolTrans @Override public boolean isMethodSupported(String methodName) throws IOException { return RpcClientUtil.isMethodSupported(rpcProxy, - ClientDatanodeProtocolPB.class, RpcKind.RPC_PROTOCOL_BUFFER, + ClientDatanodeProtocolPB.class, RPC.RpcKind.RPC_PROTOCOL_BUFFER, RPC.getProtocolVersion(ClientDatanodeProtocolPB.class), methodName); } Modified: hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java?rev=1333291&r1=1333290&r2=1333291&view=diff ============================================================================== --- hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java (original) +++ hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java Thu May 3 02:14:01 2012 @@ -109,7 +109,6 @@ import org.apache.hadoop.ipc.ProtobufHel import org.apache.hadoop.ipc.ProtocolMetaInterface; import org.apache.hadoop.ipc.RPC; import org.apache.hadoop.ipc.RpcClientUtil; -import org.apache.hadoop.ipc.RpcPayloadHeader.RpcKind; import 
org.apache.hadoop.security.AccessControlException; import org.apache.hadoop.security.token.Token; @@ -812,7 +811,7 @@ public class ClientNamenodeProtocolTrans @Override public boolean isMethodSupported(String methodName) throws IOException { return RpcClientUtil.isMethodSupported(rpcProxy, - ClientNamenodeProtocolPB.class, RpcKind.RPC_PROTOCOL_BUFFER, + ClientNamenodeProtocolPB.class, RPC.RpcKind.RPC_PROTOCOL_BUFFER, RPC.getProtocolVersion(ClientNamenodeProtocolPB.class), methodName); } Modified: hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java?rev=1333291&r1=1333290&r2=1333291&view=diff ============================================================================== --- hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java (original) +++ hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java Thu May 3 02:14:01 2012 @@ -69,7 +69,6 @@ import org.apache.hadoop.ipc.ProtocolMet import org.apache.hadoop.ipc.RPC; import org.apache.hadoop.ipc.RemoteException; import org.apache.hadoop.ipc.RpcClientUtil; -import org.apache.hadoop.ipc.RpcPayloadHeader.RpcKind; import org.apache.hadoop.net.NetUtils; import org.apache.hadoop.security.UserGroupInformation; @@ -308,7 +307,7 @@ public class DatanodeProtocolClientSideT public boolean isMethodSupported(String methodName) throws IOException { return RpcClientUtil.isMethodSupported(rpcProxy, DatanodeProtocolPB.class, - RpcKind.RPC_PROTOCOL_BUFFER, + RPC.RpcKind.RPC_PROTOCOL_BUFFER, RPC.getProtocolVersion(DatanodeProtocolPB.class), methodName); } } Modified: hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/GetUserMappingsProtocolClientSideTranslatorPB.java URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/GetUserMappingsProtocolClientSideTranslatorPB.java?rev=1333291&r1=1333290&r2=1333291&view=diff ============================================================================== --- hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/GetUserMappingsProtocolClientSideTranslatorPB.java (original) +++ hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/GetUserMappingsProtocolClientSideTranslatorPB.java Thu May 3 02:14:01 2012 @@ -26,7 +26,6 @@ import org.apache.hadoop.ipc.ProtobufHel import org.apache.hadoop.ipc.ProtocolMetaInterface; import org.apache.hadoop.ipc.RPC; import org.apache.hadoop.ipc.RpcClientUtil; -import org.apache.hadoop.ipc.RpcPayloadHeader.RpcKind; import org.apache.hadoop.tools.GetUserMappingsProtocol; import com.google.protobuf.RpcController; @@ -65,7 +64,7 @@ public class GetUserMappingsProtocolClie @Override public boolean isMethodSupported(String methodName) throws IOException { return RpcClientUtil.isMethodSupported(rpcProxy, - GetUserMappingsProtocolPB.class, RpcKind.RPC_PROTOCOL_BUFFER, + GetUserMappingsProtocolPB.class, RPC.RpcKind.RPC_PROTOCOL_BUFFER, 
RPC.getProtocolVersion(GetUserMappingsProtocolPB.class), methodName); } } Modified: hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/InterDatanodeProtocolTranslatorPB.java URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/InterDatanodeProtocolTranslatorPB.java?rev=1333291&r1=1333290&r2=1333291&view=diff ============================================================================== --- hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/InterDatanodeProtocolTranslatorPB.java (original) +++ hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/InterDatanodeProtocolTranslatorPB.java Thu May 3 02:14:01 2012 @@ -39,7 +39,6 @@ import org.apache.hadoop.ipc.ProtobufRpc import org.apache.hadoop.ipc.ProtocolMetaInterface; import org.apache.hadoop.ipc.RPC; import org.apache.hadoop.ipc.RpcClientUtil; -import org.apache.hadoop.ipc.RpcPayloadHeader.RpcKind; import org.apache.hadoop.security.UserGroupInformation; import com.google.protobuf.RpcController; @@ -119,7 +118,7 @@ public class InterDatanodeProtocolTransl @Override public boolean isMethodSupported(String methodName) throws IOException { return RpcClientUtil.isMethodSupported(rpcProxy, - InterDatanodeProtocolPB.class, RpcKind.RPC_PROTOCOL_BUFFER, + InterDatanodeProtocolPB.class, RPC.RpcKind.RPC_PROTOCOL_BUFFER, RPC.getProtocolVersion(InterDatanodeProtocolPB.class), methodName); } } Modified: hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/JournalProtocolTranslatorPB.java URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/JournalProtocolTranslatorPB.java?rev=1333291&r1=1333290&r2=1333291&view=diff ============================================================================== --- hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/JournalProtocolTranslatorPB.java (original) +++ hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/JournalProtocolTranslatorPB.java Thu May 3 02:14:01 2012 @@ -33,7 +33,6 @@ import org.apache.hadoop.ipc.ProtobufHel import org.apache.hadoop.ipc.ProtocolMetaInterface; import org.apache.hadoop.ipc.RPC; import org.apache.hadoop.ipc.RpcClientUtil; -import org.apache.hadoop.ipc.RpcPayloadHeader.RpcKind; import com.google.protobuf.RpcController; import com.google.protobuf.ServiceException; @@ -109,7 +108,7 @@ public class JournalProtocolTranslatorPB @Override public boolean isMethodSupported(String methodName) throws IOException { return RpcClientUtil.isMethodSupported(rpcProxy, JournalProtocolPB.class, - RpcKind.RPC_PROTOCOL_BUFFER, + RPC.RpcKind.RPC_PROTOCOL_BUFFER, RPC.getProtocolVersion(JournalProtocolPB.class), methodName); } } Modified: hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/NamenodeProtocolTranslatorPB.java URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/NamenodeProtocolTranslatorPB.java?rev=1333291&r1=1333290&r2=1333291&view=diff 
============================================================================== --- hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/NamenodeProtocolTranslatorPB.java (original) +++ hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/NamenodeProtocolTranslatorPB.java Thu May 3 02:14:01 2012 @@ -47,7 +47,6 @@ import org.apache.hadoop.ipc.ProtobufHel import org.apache.hadoop.ipc.ProtocolMetaInterface; import org.apache.hadoop.ipc.RPC; import org.apache.hadoop.ipc.RpcClientUtil; -import org.apache.hadoop.ipc.RpcPayloadHeader.RpcKind; import com.google.protobuf.RpcController; import com.google.protobuf.ServiceException; @@ -209,7 +208,7 @@ public class NamenodeProtocolTranslatorP @Override public boolean isMethodSupported(String methodName) throws IOException { return RpcClientUtil.isMethodSupported(rpcProxy, NamenodeProtocolPB.class, - RpcKind.RPC_PROTOCOL_BUFFER, + RPC.RpcKind.RPC_PROTOCOL_BUFFER, RPC.getProtocolVersion(NamenodeProtocolPB.class), methodName); } } Modified: hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/RefreshAuthorizationPolicyProtocolClientSideTranslatorPB.java URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/RefreshAuthorizationPolicyProtocolClientSideTranslatorPB.java?rev=1333291&r1=1333290&r2=1333291&view=diff ============================================================================== --- hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/RefreshAuthorizationPolicyProtocolClientSideTranslatorPB.java (original) +++ hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/RefreshAuthorizationPolicyProtocolClientSideTranslatorPB.java Thu May 3 02:14:01 2012 @@ -26,7 +26,6 @@ import org.apache.hadoop.ipc.ProtobufHel import org.apache.hadoop.ipc.ProtocolMetaInterface; import org.apache.hadoop.ipc.RPC; import org.apache.hadoop.ipc.RpcClientUtil; -import org.apache.hadoop.ipc.RpcPayloadHeader.RpcKind; import org.apache.hadoop.security.authorize.RefreshAuthorizationPolicyProtocol; import com.google.protobuf.RpcController; @@ -64,7 +63,7 @@ public class RefreshAuthorizationPolicyP public boolean isMethodSupported(String methodName) throws IOException { return RpcClientUtil.isMethodSupported(rpcProxy, RefreshAuthorizationPolicyProtocolPB.class, - RpcKind.RPC_PROTOCOL_BUFFER, + RPC.RpcKind.RPC_PROTOCOL_BUFFER, RPC.getProtocolVersion(RefreshAuthorizationPolicyProtocolPB.class), methodName); } Modified: hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/RefreshUserMappingsProtocolClientSideTranslatorPB.java URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/RefreshUserMappingsProtocolClientSideTranslatorPB.java?rev=1333291&r1=1333290&r2=1333291&view=diff ============================================================================== --- hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/RefreshUserMappingsProtocolClientSideTranslatorPB.java (original) +++ 
hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/RefreshUserMappingsProtocolClientSideTranslatorPB.java Thu May 3 02:14:01 2012 @@ -27,7 +27,6 @@ import org.apache.hadoop.ipc.ProtobufHel import org.apache.hadoop.ipc.ProtocolMetaInterface; import org.apache.hadoop.ipc.RPC; import org.apache.hadoop.ipc.RpcClientUtil; -import org.apache.hadoop.ipc.RpcPayloadHeader.RpcKind; import org.apache.hadoop.security.RefreshUserMappingsProtocol; import com.google.protobuf.RpcController; @@ -76,7 +75,7 @@ public class RefreshUserMappingsProtocol public boolean isMethodSupported(String methodName) throws IOException { return RpcClientUtil .isMethodSupported(rpcProxy, RefreshUserMappingsProtocolPB.class, - RpcKind.RPC_PROTOCOL_BUFFER, + RPC.RpcKind.RPC_PROTOCOL_BUFFER, RPC.getProtocolVersion(RefreshUserMappingsProtocolPB.class), methodName); } Modified: hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java?rev=1333291&r1=1333290&r2=1333291&view=diff ============================================================================== --- hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java (original) +++ hadoop/common/branches/HDFS-3042/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java Thu May 3 02:14:01 2012 @@ -94,7 +94,7 @@ import org.apache.hadoop.util.ToolRunner * * *
DESCRIPTION - * The threshold parameter is a fraction in the range of (0%, 100%) with a + * The threshold parameter is a fraction in the range of (1%, 100%) with a * default value of 10%. The threshold sets a target for whether the cluster * is balanced. A cluster is balanced if for each datanode, the utilization * of the node (ratio of used space at the node to total capacity of the node) @@ -1503,14 +1503,14 @@ public class Balancer { i++; try { threshold = Double.parseDouble(args[i]); - if (threshold < 0 || threshold > 100) { - throw new NumberFormatException( + if (threshold < 1 || threshold > 100) { + throw new IllegalArgumentException( "Number out of range: threshold = " + threshold); } LOG.info( "Using a threshold of " + threshold ); - } catch(NumberFormatException e) { + } catch(IllegalArgumentException e) { System.err.println( - "Expecting a number in the range of [0.0, 100.0]: " + "Expecting a number in the range of [1.0, 100.0]: " + args[i]); throw e; }
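For illustration, a self-contained sketch of the threshold validation the Balancer hunk above introduces. The class name ThresholdCheck and its main() harness are hypothetical, not part of r1333291; the substantive point is that java.lang.NumberFormatException is a subclass of IllegalArgumentException, so the retyped catch clause handles both unparseable input from Double.parseDouble and the explicit range check with a single handler:

public class ThresholdCheck {

  // Mirrors the range check in the hunk above: only thresholds in
  // [1.0, 100.0] are accepted.
  static double checkThreshold(String arg) {
    // Double.parseDouble throws NumberFormatException on non-numeric
    // input, which is itself an IllegalArgumentException.
    double threshold = Double.parseDouble(arg);
    if (threshold < 1 || threshold > 100) {
      throw new IllegalArgumentException(
          "Number out of range: threshold = " + threshold);
    }
    return threshold;
  }

  public static void main(String[] args) {
    try {
      System.out.println("Using a threshold of " + checkThreshold(args[0]));
    } catch (IllegalArgumentException e) {
      // One handler covers both failure modes, as in the patched Balancer.
      System.err.println(
          "Expecting a number in the range of [1.0, 100.0]: " + args[0]);
      throw e;
    }
  }
}

Invoked with an argument such as "abc" or "0.5", the sketch prints the expected-range message and rethrows, matching the patched behavior.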
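In the same spirit, the DistributedFileSystem hunks earlier in this message narrow open(), create() and append() from the generic FSDataInputStream/FSDataOutputStream to the new HdfsDataInputStream/HdfsDataOutputStream types, which is what the checksum-failure hunk leans on when it casts its streams and calls getCurrentBlock(). A hypothetical caller's view of that change; the NameNode URI and path below are placeholders, not values from this commit:

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.client.HdfsDataInputStream;

public class TypedStreamSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Placeholder authority; getScheme() above is what advertises "hdfs".
    FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020/"), conf);

    // Through a plain FileSystem reference the static type of open() is
    // still FSDataInputStream; on a DistributedFileSystem the instance is
    // an HdfsDataInputStream, so the downcast is safe there.
    HdfsDataInputStream in = (HdfsDataInputStream) fs.open(new Path("/tmp/demo"));
    try {
      in.read(); // position the stream inside its first block
      System.out.println("current block: " + in.getCurrentBlock());
    } finally {
      in.close();
    }
  }
}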