Return-Path:
Delivered-To: apmail-hadoop-hdfs-commits-archive@minotaur.apache.org
Received: (qmail 3641 invoked from network); 2 Sep 2009 02:39:26 -0000
Received: from hermes.apache.org (HELO mail.apache.org) (140.211.11.3)
  by minotaur.apache.org with SMTP; 2 Sep 2009 02:39:26 -0000
Received: (qmail 54659 invoked by uid 500); 2 Sep 2009 02:39:25 -0000
Delivered-To: apmail-hadoop-hdfs-commits-archive@hadoop.apache.org
Received: (qmail 54620 invoked by uid 500); 2 Sep 2009 02:39:25 -0000
Mailing-List: contact hdfs-commits-help@hadoop.apache.org; run by ezmlm
Precedence: bulk
List-Help:
List-Unsubscribe:
List-Post:
List-Id:
Reply-To: hdfs-dev@hadoop.apache.org
Delivered-To: mailing list hdfs-commits@hadoop.apache.org
Received: (qmail 54610 invoked by uid 99); 2 Sep 2009 02:39:25 -0000
Received: from athena.apache.org (HELO athena.apache.org) (140.211.11.136)
  by apache.org (qpsmtpd/0.29) with ESMTP; Wed, 02 Sep 2009 02:39:25 +0000
X-ASF-Spam-Status: No, hits=-2000.0 required=10.0 tests=ALL_TRUSTED
X-Spam-Check-By: apache.org
Received: from [140.211.11.4] (HELO eris.apache.org) (140.211.11.4)
  by apache.org (qpsmtpd/0.29) with ESMTP; Wed, 02 Sep 2009 02:39:24 +0000
Received: by eris.apache.org (Postfix, from userid 65534)
  id D029E2388896; Wed, 2 Sep 2009 02:39:03 +0000 (UTC)
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Subject: svn commit: r810341 - in /hadoop/hdfs/branches/HDFS-265: CHANGES.txt
  src/java/org/apache/hadoop/hdfs/protocol/BlockListAsLongs.java
  src/java/org/apache/hadoop/hdfs/server/namenode/DatanodeDescriptor.java
Date: Wed, 02 Sep 2009 02:39:03 -0000
To: hdfs-commits@hadoop.apache.org
From: shv@apache.org
X-Mailer: svnmailer-1.0.8
Message-Id: <20090902023903.D029E2388896@eris.apache.org>
X-Virus-Checked: Checked by ClamAV on apache.org

Author: shv
Date: Wed Sep 2 02:39:03 2009
New Revision: 810341

URL: http://svn.apache.org/viewvc?rev=810341&view=rev
Log:
HDFS-581. Merge -r 809440:810333 from trunk to the append branch.

Modified:
    hadoop/hdfs/branches/HDFS-265/CHANGES.txt
    hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/protocol/BlockListAsLongs.java
    hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/DatanodeDescriptor.java

Modified: hadoop/hdfs/branches/HDFS-265/CHANGES.txt
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-265/CHANGES.txt?rev=810341&r1=810340&r2=810341&view=diff
==============================================================================
--- hadoop/hdfs/branches/HDFS-265/CHANGES.txt (original)
+++ hadoop/hdfs/branches/HDFS-265/CHANGES.txt Wed Sep 2 02:39:03 2009
@@ -1,5 +1,37 @@
 Hadoop HDFS Change Log
 
+Append branch (unreleased changes)
+
+  INCOMPATIBLE CHANGES
+
+  NEW FEATURES
+
+    HDFS-536. Support hflush at DFSClient. (hairong)
+
+    HDFS-517. Introduce BlockInfoUnderConstruction to reflect block replica
+    states while writing. (shv)
+
+    HDFS-544. Add a "rbw" subdir to DataNode data directory. (hairong)
+
+    HDFS-565. Introduce block committing logic during new block allocation
+    and file close. (shv)
+
+  IMPROVEMENTS
+
+    HDFS-509. Redesign DataNode volumeMap to include all types of Replicas.
+    (hairong)
+
+    HDFS-562. Add a test for NameNode.getBlockLocations(..) to check read from
+    un-closed file. (szetszwo)
+
+    HDFS-543. Break FSDatasetInterface#writToBlock() into writeToRemporary,
+    writeToRBW, ad append. (hairong)
+
+  BUG FIXES
+
+    HDFS-547. TestHDFSFileSystemContract#testOutputStreamClosedTwice
+    sometimes fails with CloseByInterruptException. (hairong)
+
 Trunk (unreleased changes)
 
   INCOMPATIBLE CHANGES
@@ -19,16 +51,6 @@
 
     HDFS-461. Tool to analyze file size distribution in HDFS. (shv)
 
-    HDFS-536. Support hflush at DFSClient. (hairong)
-
-    HDFS-517. Introduce BlockInfoUnderConstruction to reflect block replica
-    states while writing. (shv)
-
-    HDFS-544. Add a "rbw" subdir to DataNode data directory. (hairong)
-
-    HDFS-565. Introduce block committing logic during new block allocation
-    and file close. (shv)
-
     HDFS-492. Add two JSON JSP pages to the Namenode for providing corrupt
     blocks/replicas information. (Bill Zeller via szetszwo)
@@ -108,9 +130,10 @@
 
     HDFS-451. Add fault injection tests for DataTransferProtocol. (szetszwo)
 
-    HDFS-509. Redesign DataNode volumeMap to include all types of Replicas.
-    (hairong)
-
+    HDFS-409. Add more access token tests. (Kan Zhang via szetszwo)
+
+    HDFS-546. DatanodeDescriptor iterates blocks as BlockInfo. (shv)
+
     HDFS-457. Do not shutdown datanode if some, but not all, volumes fail.
     (Boris Shkolnik via szetszwo)
@@ -122,22 +145,11 @@
 
     HDFS-552. Change TestFiDataTransferProtocol to junit 4 and add a few new
     tests. (szetszwo)
 
-<<<<<<< .working
-    HDFS-562. Add a test for NameNode.getBlockLocations(..) to check read from
-    un-closed file. (szetszwo)
-
-    HDFS-543. Break FSDatasetInterface#writToBlock() into writeToRemporary,
-    writeToRBW, ad append. (hairong)
-
-    HDFS-549. Allow a non-fault-inject test, which is specified by -Dtestcase,
-    to be executed by the run-test-hdfs-fault-inject target. (Konstantin
-    Boudnik via szetszwo)
-
-=======
->>>>>>> .merge-right.r809439
     HDFS-563. Simplify the codes in FSNamesystem.getBlockLocations(..).
     (szetszwo)
 
+    HDFS-581. Introduce an iterator over blocks in the block report array. (shv)
+
   BUG FIXES
@@ -191,8 +203,8 @@
 
     HDFS-534. Include avro in ivy. (szetszwo)
 
-    HDFS-547. TestHDFSFileSystemContract#testOutputStreamClosedTwice
-    sometimes fails with CloseByInterruptException. (hairong)
+    HDFS-532. Allow applications to know that a read request failed
+    because block is missing. (dhruba)
 
     HDFS-561. Fix write pipeline READ_TIMEOUT in DataTransferProtocol.
     (Kan Zhang via szetszwo)
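For context on the BlockListAsLongs diff that follows: a block report travels as a bare long[] rather than a Block[], with each block flattened into three consecutive longs (block id, length in bytes, generation stamp). The standalone sketch below uses hypothetical class and method names that are not part of this commit; only the packing layout mirrors the real class.

// Hypothetical sketch of the 3-longs-per-block packing used by
// BlockListAsLongs; only the layout matches the real class.
public class BlockPackingSketch {
  static final int LONGS_PER_BLOCK = 3;

  // Pack (id, length, generation-stamp) triples into a single long[].
  static long[] pack(long[][] blocks) {
    long[] out = new long[blocks.length * LONGS_PER_BLOCK];
    for (int i = 0; i < blocks.length; i++) {
      out[i * LONGS_PER_BLOCK]     = blocks[i][0]; // block id
      out[i * LONGS_PER_BLOCK + 1] = blocks[i][1]; // length in bytes
      out[i * LONGS_PER_BLOCK + 2] = blocks[i][2]; // generation stamp
    }
    return out;
  }

  public static void main(String[] args) {
    long[] report = pack(new long[][] {{1001L, 64L, 1L}, {1002L, 128L, 1L}});
    // 6 longs describe 2 blocks
    System.out.println(report.length / LONGS_PER_BLOCK + " blocks");
  }
}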
Modified: hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/protocol/BlockListAsLongs.java
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/protocol/BlockListAsLongs.java?rev=810341&r1=810340&r2=810341&view=diff
==============================================================================
--- hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/protocol/BlockListAsLongs.java (original)
+++ hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/protocol/BlockListAsLongs.java Wed Sep 2 02:39:03 2009
@@ -17,14 +17,16 @@
  */
 package org.apache.hadoop.hdfs.protocol;
 
+import java.util.Iterator;
+
 /**
  * This class provides an interface for accessing list of blocks that
  * has been implemented as long[].
- * This class is usefull for block report. Rather than send block reports
+ * This class is useful for block report. Rather than send block reports
  * as a Block[] we can send it as a long[].
  *
  */
-public class BlockListAsLongs {
+public class BlockListAsLongs implements Iterable<Block> {
   /**
    * A block as 3 longs
    *   block-id and block length and generation stamp
@@ -48,7 +50,6 @@
    * @param blockArray - the input array block[]
    * @return the output array of long[]
    */
-
   public static long[] convertToArrayLongs(final Block[] blockArray) {
     long[] blocksAsLongs = new long[blockArray.length * LONGS_PER_BLOCK];
 
@@ -61,6 +62,10 @@
     return blocksAsLongs;
   }
 
+  public BlockListAsLongs() {
+    this(null);
+  }
+
   /**
    * Constructor
    * @param iBlockList - BlockListALongs create from this long[] parameter
@@ -77,7 +82,43 @@
     }
   }
-
+  /**
+   * Iterates over blocks in the block report.
+   * Avoids object allocation on each iteration.
+   */
+  private class BlockReportIterator implements Iterator<Block> {
+    private int currentBlockIndex;
+    private Block block;
+
+    BlockReportIterator() {
+      this.currentBlockIndex = 0;
+      this.block = new Block();
+    }
+
+    public boolean hasNext() {
+      return currentBlockIndex < getNumberOfBlocks();
+    }
+
+    public Block next() {
+      block.set(blockList[index2BlockId(currentBlockIndex)],
+                blockList[index2BlockLen(currentBlockIndex)],
+                blockList[index2BlockGenStamp(currentBlockIndex)]);
+      currentBlockIndex++;
+      return block;
+    }
+
+    public void remove() {
+      throw new UnsupportedOperationException("Sorry. can't remove.");
+    }
+  }
+
+  /**
+   * Returns an iterator over blocks in the block report.
+   */
+  public Iterator<Block> iterator() {
+    return new BlockReportIterator();
+  }
+
   /**
    * The number of blocks
    * @return - the number of blocks
   */
@@ -85,13 +126,13 @@
   public int getNumberOfBlocks() {
     return blockList.length/LONGS_PER_BLOCK;
   }
-
-
+
   /**
    * The block-id of the indexTh block
    * @param index - the block whose block-id is desired
    * @return the block-id
    */
+  @Deprecated
   public long getBlockId(final int index) {
     return blockList[index2BlockId(index)];
   }
@@ -101,6 +142,7 @@
    * @param index - the block whose block-len is desired
    * @return - the block-len
    */
+  @Deprecated
   public long getBlockLen(final int index) {
     return blockList[index2BlockLen(index)];
   }
@@ -110,6 +152,7 @@
    * @param index - the block whose block-len is desired
    * @return - the generation stamp
    */
+  @Deprecated
   public long getBlockGenStamp(final int index) {
     return blockList[index2BlockGenStamp(index)];
   }
@@ -119,7 +162,7 @@
    * @param index - the index of the block to set
    * @param b - the block is set to the value of the this block
    */
-  void setBlock(final int index, final Block b) {
+  private void setBlock(final int index, final Block b) {
     blockList[index2BlockId(index)] = b.getBlockId();
     blockList[index2BlockLen(index)] = b.getNumBytes();
     blockList[index2BlockGenStamp(index)] = b.getGenerationStamp();
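The class above now implements Iterable<Block>, and BlockReportIterator deliberately refills one preallocated Block on every next() call, so a full report can be scanned without per-block allocation; that reuse is also why remove() is unsupported and why the index-based getters are deprecated. A minimal caller-side sketch follows (the wrapper class is hypothetical; the BlockListAsLongs and Block calls are assumed to behave as the diff shows):

import org.apache.hadoop.hdfs.protocol.Block;
import org.apache.hadoop.hdfs.protocol.BlockListAsLongs;

public class BlockReportScan {
  public static void main(String[] args) {
    // Two blocks packed as (id, length, generation stamp) triples.
    BlockListAsLongs report =
        new BlockListAsLongs(new long[] {1001L, 64L, 1L, 1002L, 128L, 1L});

    // Enhanced-for works because the class now implements Iterable<Block>.
    for (Block b : report) {
      System.out.println(b.getBlockId() + " len=" + b.getNumBytes());
      // Caution: the iterator reuses a single Block instance, so copy it
      // (e.g. new Block(b)) before retaining it beyond this iteration.
    }
  }
}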
Modified: hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/DatanodeDescriptor.java
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/DatanodeDescriptor.java?rev=810341&r1=810340&r2=810341&view=diff
==============================================================================
--- hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/DatanodeDescriptor.java (original)
+++ hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/DatanodeDescriptor.java Wed Sep 2 02:39:03 2009
@@ -381,20 +381,17 @@
                   Collection<BlockInfo> toAdd,    // add to DatanodeDescriptor
                   Collection<Block> toRemove,     // remove from DatanodeDescriptor
                   Collection<Block> toInvalidate) { // should be removed from DN
-    // place a deilimiter in the list which separates blocks
+    // place a delimiter in the list which separates blocks
     // that have been reported from those that have not
     BlockInfo delimiter = new BlockInfo(new Block(), 1);
     boolean added = this.addBlock(delimiter);
     assert added : "Delimiting block cannot be present in the node";
     if(newReport == null)
-      newReport = new BlockListAsLongs( new long[0]);
+      newReport = new BlockListAsLongs();
     // scan the report and collect newly reported blocks
     // Note we are taking special precaution to limit tmp blocks allocated
     // as part this block report - which why block list is stored as longs
-    Block iblk = new Block(); // a fixed new'ed block to be reused with index i
-    for (int i = 0; i < newReport.getNumberOfBlocks(); ++i) {
-      iblk.set(newReport.getBlockId(i), newReport.getBlockLen(i),
-               newReport.getBlockGenStamp(i));
+    for (Block iblk : newReport) {
       BlockInfo storedBlock = blocksMap.getStoredBlock(iblk);
       if(storedBlock == null) {
         // If block is not in blocksMap it does not belong to any file
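The reportDiff change above is the consumer of the new iterator: the hand-rolled index loop that refilled a reused Block is replaced by an enhanced-for, with the reuse now hidden inside BlockReportIterator, and the new no-argument BlockListAsLongs() constructor stands in for new BlockListAsLongs( new long[0]) when newReport is null. A sketch of the equivalence follows; the wrapper class and process() are hypothetical stand-ins for the blocksMap lookup in the real method.

import org.apache.hadoop.hdfs.protocol.Block;
import org.apache.hadoop.hdfs.protocol.BlockListAsLongs;

public class ReportScanStyles {
  static void process(Block b) {
    System.out.println("saw block " + b.getBlockId());
  }

  public static void main(String[] args) {
    BlockListAsLongs newReport =
        new BlockListAsLongs(new long[] {2001L, 64L, 1L});

    // Before this commit: index-based scan refilling one reused Block
    // (these getters are deprecated by this commit).
    Block iblk = new Block();
    for (int i = 0; i < newReport.getNumberOfBlocks(); ++i) {
      iblk.set(newReport.getBlockId(i), newReport.getBlockLen(i),
               newReport.getBlockGenStamp(i));
      process(iblk);
    }

    // After this commit: the same allocation-free scan via the iterator.
    for (Block b : newReport) {
      process(b);
    }
  }
}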