Subject: svn commit: r814244 - in /hadoop/hdfs/branches/HDFS-265: ./ lib/ src/java/org/apache/hadoop/hdfs/ src/java/org/apache/hadoop/hdfs/server/namenode/ src/test/hdfs/org/apache/hadoop/hdfs/
Date: Sat, 12 Sep 2009 22:11:38 -0000
To: hdfs-commits@hadoop.apache.org
From: shv@apache.org
Message-Id: <20090912221139.111E0238888E@eris.apache.org>

Author: shv
Date: Sat Sep 12 22:11:37 2009
New Revision: 814244

URL: http://svn.apache.org/viewvc?rev=814244&view=rev
Log:
HDFS-604. Merge -r 813631:814221 from trunk to the append branch.
Modified:
    hadoop/hdfs/branches/HDFS-265/CHANGES.txt
    hadoop/hdfs/branches/HDFS-265/lib/hadoop-mapred-0.21.0-dev.jar
    hadoop/hdfs/branches/HDFS-265/lib/hadoop-mapred-examples-0.21.0-dev.jar
    hadoop/hdfs/branches/HDFS-265/lib/hadoop-mapred-test-0.21.0-dev.jar
    hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/DFSClient.java
    hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/BlockManager.java
    hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
    hadoop/hdfs/branches/HDFS-265/src/test/hdfs/org/apache/hadoop/hdfs/TestBlockReport.java
    hadoop/hdfs/branches/HDFS-265/src/test/hdfs/org/apache/hadoop/hdfs/TestDatanodeBlockScanner.java

Modified: hadoop/hdfs/branches/HDFS-265/CHANGES.txt
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-265/CHANGES.txt?rev=814244&r1=814243&r2=814244&view=diff
==============================================================================
--- hadoop/hdfs/branches/HDFS-265/CHANGES.txt (original)
+++ hadoop/hdfs/branches/HDFS-265/CHANGES.txt Sat Sep 12 22:11:37 2009
@@ -54,6 +54,9 @@
     FileNotFoundException from FileSystem::listStatus rather than returning
     null. (Jakob Homan via cdouglas)
 
+    HDFS-602. DistributedFileSystem mkdirs throws FileAlreadyExistsException
+    instead of FileNotFoundException. (Boris Shkolnik via suresh)
+
   NEW FEATURES
 
     HDFS-436. Introduce AspectJ framework for HDFS code and tests.
@@ -269,6 +272,15 @@
     HDFS-605. Do not run fault injection tests in the run-test-hdfs-with-mr
     target. (Konstantin Boudnik via szetszwo)
 
+    HDFS-606. Fix ConcurrentModificationException in invalidateCorruptReplicas()
+    (shv)
+
+    HDFS-601. TestBlockReport obtains data directories directly from
+    MiniHDFSCluster. (Konstantin Boudnik via shv)
+
+    HDFS-614. TestDatanodeBlockScanner obtains data directories directly from
+    MiniHDFSCluster. (shv)
+
 Release 0.20.1 - Unreleased
 
   IMPROVEMENTS

Modified: hadoop/hdfs/branches/HDFS-265/lib/hadoop-mapred-0.21.0-dev.jar
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-265/lib/hadoop-mapred-0.21.0-dev.jar?rev=814244&r1=814243&r2=814244&view=diff
==============================================================================
Binary files - no diff available.

Modified: hadoop/hdfs/branches/HDFS-265/lib/hadoop-mapred-examples-0.21.0-dev.jar
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-265/lib/hadoop-mapred-examples-0.21.0-dev.jar?rev=814244&r1=814243&r2=814244&view=diff
==============================================================================
Binary files - no diff available.

Modified: hadoop/hdfs/branches/HDFS-265/lib/hadoop-mapred-test-0.21.0-dev.jar
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-265/lib/hadoop-mapred-test-0.21.0-dev.jar?rev=814244&r1=814243&r2=814244&view=diff
==============================================================================
Binary files - no diff available.
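The HDFS-606 entry above fixes a ConcurrentModificationException in invalidateCorruptReplicas() by iterating over a snapshot of the node collection (see the BlockManager hunk below). The underlying pattern can be sketched independently of Hadoop; the class and method names here are illustrative, not from the patch:

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.List;

public class CopyBeforeRemove {
    public static void main(String[] args) {
        // Mutating an ArrayList while a for-each loop iterates it trips the
        // fail-fast iterator.
        List<String> live = new ArrayList<>(List.of("dn1", "host2", "dn3"));
        try {
            for (String node : live) {
                if (node.startsWith("dn")) {
                    live.remove(node); // modifies the list mid-iteration
                }
            }
        } catch (ConcurrentModificationException e) {
            System.out.println("live iteration: " + e.getClass().getSimpleName());
        }

        // The fix: iterate a snapshot array and mutate the original list,
        // as the BlockManager change does with nodes.toArray(...).
        List<String> nodes = new ArrayList<>(List.of("dn1", "host2", "dn3"));
        int removed = 0;
        String[] snapshot = nodes.toArray(new String[0]);
        for (String node : snapshot) {
            if (node.startsWith("dn")) {
                nodes.remove(node); // safe: the loop walks the copy
                removed++;
            }
        }
        System.out.println("snapshot iteration: removed " + removed + ", left " + nodes);
    }
}
```

The snapshot costs one array allocation per call, which is acceptable here because corrupt-replica invalidation is rare compared with block-report processing.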
Modified: hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/DFSClient.java
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/DFSClient.java?rev=814244&r1=814243&r2=814244&view=diff
==============================================================================
--- hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/DFSClient.java (original)
+++ hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/DFSClient.java Sat Sep 12 22:11:37 2009
@@ -65,6 +65,7 @@
 import org.apache.hadoop.fs.FSInputChecker;
 import org.apache.hadoop.fs.FSInputStream;
 import org.apache.hadoop.fs.FSOutputSummer;
+import org.apache.hadoop.fs.FileAlreadyExistsException;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.FsServerDefaults;
@@ -975,7 +976,8 @@
     } catch(RemoteException re) {
       throw re.unwrapRemoteException(AccessControlException.class,
                                      NSQuotaExceededException.class,
-                                     DSQuotaExceededException.class);
+                                     DSQuotaExceededException.class,
+                                     FileAlreadyExistsException.class);
     }
   }

Modified: hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/BlockManager.java
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/BlockManager.java?rev=814244&r1=814243&r2=814244&view=diff
==============================================================================
--- hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/BlockManager.java (original)
+++ hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/BlockManager.java Sat Sep 12 22:11:37 2009
@@ -1136,8 +1136,10 @@
     boolean gotException = false;
     if (nodes == null)
       return;
-    for (Iterator<DatanodeDescriptor> it = nodes.iterator(); it.hasNext(); ) {
-      DatanodeDescriptor node = it.next();
+    // make a copy of the array of nodes in order to avoid
+    // ConcurrentModificationException, when the block is removed from the node
+    DatanodeDescriptor[] nodesCopy = nodes.toArray(new DatanodeDescriptor[0]);
+    for (DatanodeDescriptor node : nodesCopy) {
       try {
         invalidateBlock(blk, node);
       } catch (IOException e) {

Modified: hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java?rev=814244&r1=814243&r2=814244&view=diff
==============================================================================
--- hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java (original)
+++ hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java Sat Sep 12 22:11:37 2009
@@ -17,23 +17,30 @@
  */
 package org.apache.hadoop.hdfs.server.namenode;
 
-import java.io.*;
+import java.io.Closeable;
+import java.io.FileNotFoundException;
+import java.io.IOException;
 import java.net.URI;
-import java.util.*;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
 
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.ContentSummary;
+import org.apache.hadoop.fs.FileAlreadyExistsException;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.fs.ContentSummary;
-import org.apache.hadoop.fs.permission.*;
-import org.apache.hadoop.metrics.MetricsRecord;
-import org.apache.hadoop.metrics.MetricsUtil;
-import org.apache.hadoop.metrics.MetricsContext;
-import org.apache.hadoop.hdfs.protocol.FSConstants;
+import org.apache.hadoop.fs.permission.FsPermission;
+import org.apache.hadoop.fs.permission.PermissionStatus;
 import org.apache.hadoop.hdfs.protocol.Block;
+import org.apache.hadoop.hdfs.protocol.ClientProtocol;
+import org.apache.hadoop.hdfs.protocol.FSConstants;
 import org.apache.hadoop.hdfs.protocol.QuotaExceededException;
 import org.apache.hadoop.hdfs.server.common.HdfsConstants.BlockUCState;
 import org.apache.hadoop.hdfs.server.common.HdfsConstants.StartupOption;
+import org.apache.hadoop.metrics.MetricsContext;
+import org.apache.hadoop.metrics.MetricsRecord;
+import org.apache.hadoop.metrics.MetricsUtil;
 
 /*************************************************
  * FSDirectory stores the filesystem directory state.
@@ -957,7 +964,7 @@
    */
   boolean mkdirs(String src, PermissionStatus permissions,
       boolean inheritPermission, long now)
-      throws FileNotFoundException, QuotaExceededException {
+      throws FileAlreadyExistsException, QuotaExceededException {
     src = normalizePath(src);
     String[] names = INode.getPathNames(src);
     byte[][] components = INode.getPathComponents(names);
@@ -972,7 +979,7 @@
     for(; i < inodes.length && inodes[i] != null; i++) {
       pathbuilder.append(Path.SEPARATOR + names[i]);
       if (!inodes[i].isDirectory()) {
-        throw new FileNotFoundException("Parent path is not a directory: "
+        throw new FileAlreadyExistsException("Parent path is not a directory: "
             + pathbuilder);
       }
     }

Modified: hadoop/hdfs/branches/HDFS-265/src/test/hdfs/org/apache/hadoop/hdfs/TestBlockReport.java
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-265/src/test/hdfs/org/apache/hadoop/hdfs/TestBlockReport.java?rev=814244&r1=814243&r2=814244&view=diff
==============================================================================
--- hadoop/hdfs/branches/HDFS-265/src/test/hdfs/org/apache/hadoop/hdfs/TestBlockReport.java (original)
+++ hadoop/hdfs/branches/HDFS-265/src/test/hdfs/org/apache/hadoop/hdfs/TestBlockReport.java Sat Sep 12 22:11:37 2009
@@ -17,6 +17,16 @@
  */
 package org.apache.hadoop.hdfs;
 
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+
+import java.io.File;
+import java.io.FilenameFilter;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Random;
+
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.commons.logging.impl.Log4JLogger;
@@ -30,18 +40,9 @@
 import org.apache.hadoop.hdfs.server.namenode.NameNode;
 import org.apache.log4j.Level;
 import org.junit.After;
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertTrue;
 import org.junit.Before;
 import org.junit.Test;
 
-import java.io.File;
-import java.io.FilenameFilter;
-import java.io.IOException;
-import java.util.ArrayList;
-import java.util.List;
-import java.util.Random;
-
 /**
  * This test simulates a variety of situations when blocks are being intentionally
  * corrupted, unexpectedly modified, and so on before a block report is happening
@@ -155,7 +156,7 @@
         (long)AppendTestUtil.FILE_SIZE, REPL_FACTOR, rand.nextLong());
 
     // mock around with newly created blocks and delete some
-    String testDataDirectory = System.getProperty("test.build.data");
+    String testDataDirectory = cluster.getDataDirectory();
     File dataDir = new File(testDataDirectory);
     assertTrue(dataDir.isDirectory());

Modified: hadoop/hdfs/branches/HDFS-265/src/test/hdfs/org/apache/hadoop/hdfs/TestDatanodeBlockScanner.java
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-265/src/test/hdfs/org/apache/hadoop/hdfs/TestDatanodeBlockScanner.java?rev=814244&r1=814243&r2=814244&view=diff
==============================================================================
--- hadoop/hdfs/branches/HDFS-265/src/test/hdfs/org/apache/hadoop/hdfs/TestDatanodeBlockScanner.java (original)
+++ hadoop/hdfs/branches/HDFS-265/src/test/hdfs/org/apache/hadoop/hdfs/TestDatanodeBlockScanner.java Sat Sep 12 22:11:37 2009
@@ -131,7 +131,7 @@
   public static boolean corruptReplica(String blockName, int replica) throws IOException {
     Random random = new Random();
-    File baseDir = new File(System.getProperty("test.build.data"), "dfs/data");
+    File baseDir = new File(MiniDFSCluster.getBaseDirectory(), "data");
     boolean corrupted = false;
     for (int i=replica*2; i
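The two test changes above (HDFS-601, HDFS-614) replace System.getProperty("test.build.data") lookups with accessors on the mini-cluster. One reason the raw property lookup is fragile, shown as a standalone sketch (not code from the patch): when the property is unset, for example in a run launched outside the build, File(String, String) treats the null parent silently and the path degrades to one relative to the working directory instead of failing fast.

```java
import java.io.File;

public class PropertyPitfall {
    public static void main(String[] args) {
        // Simulate a run where the build has not set the property
        // (e.g. the test is launched from an IDE rather than ant).
        System.clearProperty("test.build.data");
        String prop = System.getProperty("test.build.data"); // null here

        // File(String parent, String child) treats a null parent like the
        // single-argument constructor, so this silently becomes a path
        // relative to the current working directory instead of an error.
        File baseDir = new File(prop, "dfs/data");
        System.out.println(baseDir.getPath());
    }
}
```

Asking the cluster object for its directory, as the patch does, keeps the tests correct regardless of how the JVM was launched.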