From: dhruba@apache.org
To: core-commits@hadoop.apache.org
Subject: svn commit: r637305 - in /hadoop/core/trunk: CHANGES.txt src/java/org/apache/hadoop/dfs/FSNamesystem.java src/test/org/apache/hadoop/dfs/TestFileCreation.java
Date: Fri, 14 Mar 2008 23:42:21 -0000
Message-Id: <20080314234221.A6E221A9832@eris.apache.org>

Author: dhruba
Date: Fri Mar 14 16:42:20 2008
New Revision: 637305

URL: http://svn.apache.org/viewvc?rev=637305&view=rev
Log:
HADOOP-3009.
TestFileCreation sometimes fails because restarting MiniDFSCluster sometimes
creates datanodes with ports that are different from their original instance.
(dhruba)

Modified:
    hadoop/core/trunk/CHANGES.txt
    hadoop/core/trunk/src/java/org/apache/hadoop/dfs/FSNamesystem.java
    hadoop/core/trunk/src/test/org/apache/hadoop/dfs/TestFileCreation.java

Modified: hadoop/core/trunk/CHANGES.txt
URL: http://svn.apache.org/viewvc/hadoop/core/trunk/CHANGES.txt?rev=637305&r1=637304&r2=637305&view=diff
==============================================================================
--- hadoop/core/trunk/CHANGES.txt (original)
+++ hadoop/core/trunk/CHANGES.txt Fri Mar 14 16:42:20 2008
@@ -218,6 +218,10 @@
     HADOOP-2994. Code cleanup for DFSClient: remove redundant
     conversions from string to string. (Dave Brosius via dhruba)
 
+    HADOOP-3009. TestFileCreation sometimes fails because restarting
+    MiniDFSCluster sometimes creates datanodes with ports that are
+    different from their original instance. (dhruba)
+
 Release 0.16.1 - 2008-03-13
 
   INCOMPATIBLE CHANGES

Modified: hadoop/core/trunk/src/java/org/apache/hadoop/dfs/FSNamesystem.java
URL: http://svn.apache.org/viewvc/hadoop/core/trunk/src/java/org/apache/hadoop/dfs/FSNamesystem.java?rev=637305&r1=637304&r2=637305&view=diff
==============================================================================
--- hadoop/core/trunk/src/java/org/apache/hadoop/dfs/FSNamesystem.java (original)
+++ hadoop/core/trunk/src/java/org/apache/hadoop/dfs/FSNamesystem.java Fri Mar 14 16:42:20 2008
@@ -2605,24 +2605,34 @@
                                    + block.getBlockName() + " on "
                                    + node.getName() + " size " + block.getNumBytes());
     }
-    //
-    // if file is being actively written to, then do not check
-    // replication-factor here. It will be checked when the file is closed.
+    // If this block does not belong to any file, then we are done.
     //
-    if (fileINode == null || fileINode.isUnderConstruction()) {
+    if (fileINode == null) {
+      NameNode.stateChangeLog.info("BLOCK* NameSystem.addStoredBlock: "
+                                   + "addStoredBlock request received for "
+                                   + block.getBlockName() + " on " + node.getName()
+                                   + " size " + block.getNumBytes()
+                                   + " But it does not belong to any file.");
       return block;
     }
-
+
     // filter out containingNodes that are marked for decommission.
     NumberReplicas num = countNodes(block);
     int numCurrentReplica = num.liveReplicas()
       + pendingReplications.getNumReplicas(block);
-
+
     // check whether safe replication is reached for the block
-    // only if it is a part of a files
     incrementSafeBlockCount(numCurrentReplica);
+
+    //
+    // if file is being actively written to, then do not check
+    // replication-factor here. It will be checked when the file is closed.
+    //
+    if (fileINode.isUnderConstruction()) {
+      return block;
+    }
+
     // handle underReplication/overReplication
     short fileReplication = fileINode.getReplication();
     if (numCurrentReplica >= fileReplication) {

Modified: hadoop/core/trunk/src/test/org/apache/hadoop/dfs/TestFileCreation.java
URL: http://svn.apache.org/viewvc/hadoop/core/trunk/src/test/org/apache/hadoop/dfs/TestFileCreation.java?rev=637305&r1=637304&r2=637305&view=diff
==============================================================================
--- hadoop/core/trunk/src/test/org/apache/hadoop/dfs/TestFileCreation.java (original)
+++ hadoop/core/trunk/src/test/org/apache/hadoop/dfs/TestFileCreation.java Fri Mar 14 16:42:20 2008
@@ -67,6 +67,16 @@
   }
 
   //
+  // writes specified bytes to file.
+  //
+  private void writeFile(FSDataOutputStream stm, int size) throws IOException {
+    byte[] buffer = new byte[fileSize];
+    Random rand = new Random(seed);
+    rand.nextBytes(buffer);
+    stm.write(buffer, 0, size);
+  }
+
+  //
   // verify that the data written to the full blocks are sane
   //
   private void checkFile(FileSystem fileSys, Path name, int repl)
@@ -362,7 +372,10 @@
       System.out.println("testFileCreationNamenodeRestart: "
                          + "Created file filestatus.dat with one "
                          + " replicas.");
-      writeFile(stm);
+
+      // write two full blocks.
+      writeFile(stm, numBlocks * blockSize);
+      stm.flush();
 
       // create another new file.
       //
@@ -410,7 +423,7 @@
                                     file1.toString(), 0, Long.MAX_VALUE);
       System.out.println("locations = " + locations.locatedBlockCount());
       assertTrue("Error blocks were not cleaned up for file " + file1,
-                 locations.locatedBlockCount() == 1);
+                 locations.locatedBlockCount() == 3);
 
       // verify filestatus2.dat
       locations = client.namenode.getBlockLocations(
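The FSNamesystem hunk reorders three checks in addStoredBlock: an orphan block (no owning inode) is now logged and dropped first, the replica is counted toward the safe-mode block threshold unconditionally, and only then does the under-construction early return skip the replication-factor check. A minimal standalone sketch of that post-patch ordering (the class, enum, and booleans here are simplified stand-ins, not the real FSNamesystem API):

```java
// Simplified model of the post-r637305 addStoredBlock control flow.
// All names here are illustrative stand-ins for the real HDFS types.
public class AddStoredBlockFlow {
  int safeBlockCount = 0;

  enum Outcome { ORPHAN, UNDER_CONSTRUCTION, CHECK_REPLICATION }

  Outcome addStoredBlock(boolean hasINode, boolean underConstruction) {
    if (!hasINode) {
      // Post-patch: log and bail out before touching safe-mode counters.
      return Outcome.ORPHAN;
    }
    // Post-patch: the replica counts toward safe mode even while the file
    // is still being written (pre-patch this was skipped for such files).
    safeBlockCount++;
    if (underConstruction) {
      // Replication factor is checked later, when the file is closed.
      return Outcome.UNDER_CONSTRUCTION;
    }
    return Outcome.CHECK_REPLICATION;
  }

  public static void main(String[] args) {
    AddStoredBlockFlow fsn = new AddStoredBlockFlow();
    fsn.addStoredBlock(true, true);   // under construction: counted, no replication check
    fsn.addStoredBlock(false, false); // orphan: dropped, not counted
    System.out.println(fsn.safeBlockCount); // prints "1"
  }
}
```

The point of the reordering is that blocks reported for files still being written are no longer invisible to safe-mode accounting, while blocks whose file was deleted (e.g. across a cluster restart) are rejected up front.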
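The new writeFile helper in the test relies on java.util.Random being deterministic for a fixed seed: the writer fills its buffer from a seeded generator, so a checker can regenerate the exact expected bytes instead of keeping a copy. A self-contained sketch of that idea (the seed and sizes below are illustrative, not the values TestFileCreation uses):

```java
import java.util.Arrays;
import java.util.Random;

public class DeterministicBuffer {
  // Illustrative seed; TestFileCreation defines its own seed constant.
  static final long SEED = 0xDEADBEEFL;

  // Regenerate the bytes the same way writeFile produces them: fill a
  // buffer from a Random constructed with a fixed seed.
  static byte[] expectedBytes(int size) {
    byte[] buffer = new byte[size];
    new Random(SEED).nextBytes(buffer);
    return buffer;
  }

  public static void main(String[] args) {
    byte[] written = expectedBytes(4096);   // what a writer would produce
    byte[] expected = expectedBytes(4096);  // regenerated by a verifier
    // Same seed => identical byte streams, so verification needs no stored copy.
    System.out.println(Arrays.equals(written, expected)); // prints "true"
  }
}
```

This is why the test can restart the MiniDFSCluster and still validate file contents: the expected data is a pure function of the seed and the offset range, not of anything held in memory across the restart.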