Mailing-List: contact hdfs-commits-help@hadoop.apache.org; run by ezmlm
Reply-To: hdfs-dev@hadoop.apache.org
Subject: svn commit: r1388332 - in /hadoop/common/branches/branch-2/hadoop-hdfs-project: ./ hadoop-hdfs/ hadoop-hdfs/src/main/java/ hadoop-hdfs/src/main/native/ hadoop-hdfs/src/main/webapps/datanode/ hadoop-hdfs/src/main/webapps/hdfs/ hadoop-hdfs/src/main/webap...
Date: Fri, 21 Sep 2012 05:53:34 -0000
To: hdfs-commits@hadoop.apache.org
From: eli@apache.org
X-Mailer: svnmailer-1.0.8-patched
Message-Id: <20120921055334.9E0F9238890D@eris.apache.org>

Author: eli
Date: Fri Sep 21 05:53:33 2012
New Revision: 1388332

URL: http://svn.apache.org/viewvc?rev=1388332&view=rev
Log:
HDFS-3931. TestDatanodeBlockScanner#testBlockCorruptionPolicy2 is broken.
Contributed by Andy Isaacson

Modified:
    hadoop/common/branches/branch-2/hadoop-hdfs-project/   (props changed)
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/   (props changed)
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/   (props changed)
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/native/   (props changed)
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode/   (props changed)
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/   (props changed)
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary/   (props changed)
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs/   (props changed)
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDatanodeBlockScanner.java

Propchange: hadoop/common/branches/branch-2/hadoop-hdfs-project/
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-hdfs-project:r1388331

Propchange: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs:r1388331

Modified: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt?rev=1388332&r1=1388331&r2=1388332&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt (original)
+++ hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt Fri Sep 21 05:53:33 2012
@@ -39,6 +39,9 @@ Release 2.0.3-alpha - Unreleased
     HDFS-3932. NameNode Web UI broken if the rpc-address is set to the
     wildcard. (Colin Patrick McCabe via eli)
 
+    HDFS-3931. TestDatanodeBlockScanner#testBlockCorruptionPolicy2 is broken.
+    (Andy Isaacson via eli)
+
 Release 2.0.2-alpha - 2012-09-07
 
   INCOMPATIBLE CHANGES

Propchange: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java:r1388331

Propchange: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/native/
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native:r1388331

Propchange: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode/
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode:r1388331

Propchange: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs:r1388331

Propchange: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary/
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary:r1388331

Propchange: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs/
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs:r1388331

Modified: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java?rev=1388332&r1=1388331&r2=1388332&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java (original)
+++ hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java Fri Sep 21 05:53:33 2012
@@ -506,7 +506,7 @@ public class DFSTestUtil {
   public static void waitReplication(FileSystem fs, Path fileName, short replFactor)
       throws IOException, InterruptedException, TimeoutException {
     boolean correctReplFactor;
-    final int ATTEMPTS = 20;
+    final int ATTEMPTS = 40;
     int count = 0;
 
     do {

Modified: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDatanodeBlockScanner.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDatanodeBlockScanner.java?rev=1388332&r1=1388331&r2=1388332&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDatanodeBlockScanner.java (original)
+++ hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDatanodeBlockScanner.java Fri Sep 21 05:53:33 2012
@@ -269,6 +269,7 @@ public class TestDatanodeBlockScanner {
     conf.setLong(DFSConfigKeys.DFS_NAMENODE_REPLICATION_INTERVAL_KEY, 3);
     conf.setLong(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, 3L);
     conf.setBoolean(DFSConfigKeys.DFS_NAMENODE_REPLICATION_CONSIDERLOAD_KEY, false);
+    conf.setLong(DFSConfigKeys.DFS_NAMENODE_REPLICATION_PENDING_TIMEOUT_SEC_KEY, 5L);
     MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
                                                .numDataNodes(numDataNodes).build();
     cluster.waitActive();
@@ -276,35 +277,47 @@ public class TestDatanodeBlockScanner {
     Path file1 = new Path("/tmp/testBlockCorruptRecovery/file");
     DFSTestUtil.createFile(fs, file1, 1024, numReplicas, 0);
     ExtendedBlock block = DFSTestUtil.getFirstBlock(fs, file1);
+    final int ITERATIONS = 10;
 
     // Wait until block is replicated to numReplicas
     DFSTestUtil.waitReplication(fs, file1, numReplicas);
 
-    // Corrupt numCorruptReplicas replicas of block
-    int[] corruptReplicasDNIDs = new int[numCorruptReplicas];
-    for (int i=0, j=0; (j != numCorruptReplicas) && (i < numDataNodes); i++) {
-      if (corruptReplica(block, i)) {
-        corruptReplicasDNIDs[j++] = i;
-        LOG.info("successfully corrupted block " + block + " on node "
-                 + i + " " + cluster.getDataNodes().get(i).getDisplayName());
+    for (int k = 0; ; k++) {
+      // Corrupt numCorruptReplicas replicas of block
+      int[] corruptReplicasDNIDs = new int[numCorruptReplicas];
+      for (int i=0, j=0; (j != numCorruptReplicas) && (i < numDataNodes); i++) {
+        if (corruptReplica(block, i)) {
+          corruptReplicasDNIDs[j++] = i;
+          LOG.info("successfully corrupted block " + block + " on node "
+                   + i + " " + cluster.getDataNodes().get(i).getDisplayName());
+        }
+      }
+
+      // Restart the datanodes containing corrupt replicas
+      // so they would be reported to namenode and re-replicated
+      // They MUST be restarted in reverse order from highest to lowest index,
+      // because the act of restarting them removes them from the ArrayList
+      // and causes the indexes of all nodes above them in the list to change.
+      for (int i = numCorruptReplicas - 1; i >= 0 ; i--) {
+        LOG.info("restarting node with corrupt replica: position "
+                 + i + " node " + corruptReplicasDNIDs[i] + " "
+                 + cluster.getDataNodes().get(corruptReplicasDNIDs[i]).getDisplayName());
+        cluster.restartDataNode(corruptReplicasDNIDs[i]);
       }
-    }
-
-    // Restart the datanodes containing corrupt replicas
-    // so they would be reported to namenode and re-replicated
-    // They MUST be restarted in reverse order from highest to lowest index,
-    // because the act of restarting them removes them from the ArrayList
-    // and causes the indexes of all nodes above them in the list to change.
-    for (int i = numCorruptReplicas - 1; i >= 0 ; i--) {
-      LOG.info("restarting node with corrupt replica: position "
-               + i + " node " + corruptReplicasDNIDs[i] + " "
-               + cluster.getDataNodes().get(corruptReplicasDNIDs[i]).getDisplayName());
-      cluster.restartDataNode(corruptReplicasDNIDs[i]);
-    }
 
-    // Loop until all corrupt replicas are reported
-    DFSTestUtil.waitCorruptReplicas(fs, cluster.getNamesystem(), file1,
-                                    block, numCorruptReplicas);
+      // Loop until all corrupt replicas are reported
+      try {
+        DFSTestUtil.waitCorruptReplicas(fs, cluster.getNamesystem(), file1,
+                                        block, numCorruptReplicas);
+      } catch(TimeoutException e) {
+        if (k > ITERATIONS) {
+          throw e;
+        }
+        LOG.info("Timed out waiting for corrupt replicas, trying again, iteration " + k);
+        continue;
+      }
+      break;
+    }
 
     // Loop until the block recovers after replication
     DFSTestUtil.waitReplication(fs, file1, numReplicas);
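
The substance of the TestDatanodeBlockScanner change is a bounded retry: the corrupt/restart/wait sequence now runs inside a "for (int k = 0; ; k++)" loop, and a TimeoutException from DFSTestUtil.waitCorruptReplicas() triggers another attempt until ITERATIONS extra tries are used up, after which the exception is rethrown and the test fails as before. The standalone sketch below illustrates the same retry-on-timeout pattern outside the MiniDFSCluster test harness; the RetryOnTimeout class and the TimedStep interface are names invented here for illustration and are not part of the patch.

import java.util.concurrent.TimeoutException;

public class RetryOnTimeout {

  /** A step that may time out, e.g. waiting for corrupt replicas to be reported. */
  interface TimedStep {
    void run() throws TimeoutException, InterruptedException;
  }

  /**
   * Run the step until it completes without a TimeoutException, retrying at most
   * 'iterations' extra times (mirroring the ITERATIONS constant in the patched test);
   * the last TimeoutException is rethrown once the retries are used up.
   */
  static void runWithRetries(TimedStep step, int iterations)
      throws TimeoutException, InterruptedException {
    for (int k = 0; ; k++) {
      try {
        step.run();
      } catch (TimeoutException e) {
        if (k > iterations) {
          throw e;        // out of retries: surface the original timeout
        }
        System.out.println("Timed out, trying again, iteration " + k);
        continue;         // go around the loop and retry the whole step
      }
      break;              // the step succeeded: leave the loop
    }
  }

  public static void main(String[] args) throws Exception {
    // Toy step that times out twice and then succeeds.
    final int[] calls = {0};
    runWithRetries(() -> {
      if (calls[0]++ < 2) {
        throw new TimeoutException("not ready yet");
      }
    }, 10);
    System.out.println("step completed after " + calls[0] + " attempts");
  }
}

The reverse-order restart comment carried over from the old code still applies inside the new loop: per that comment, restarting a datanode removes it from MiniDFSCluster's internal ArrayList and shifts the indexes of the nodes above it, so the corrupted replicas are restarted from the highest index down to keep the remaining indexes valid.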