To: hdfs-commits@hadoop.apache.org
From: cutting@apache.org
Date: Wed, 24 Mar 2010 22:00:36 -0000
Subject: svn commit: r927198 - in /hadoop/hdfs/trunk: ./ ivy/ src/java/org/apache/hadoop/hdfs/protocol/ src/java/org/apache/hadoop/hdfs/security/ src/java/org/apache/hadoop/hdfs/server/datanode/ src/java/org/apache/hadoop/hdfs/server/namenode/ src/java/org/apac...
Message-Id: <20100324220036.C06E92388A02@eris.apache.org>

Author: cutting
Date: Wed Mar 24 22:00:35 2010
New Revision: 927198

URL: http://svn.apache.org/viewvc?rev=927198&view=rev
Log:
HDFS-982. Optionally use Avro reflection for Namenode RPC.

Added:
    hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/protocol/NamenodeProtocols.java
    hadoop/hdfs/trunk/src/test/hdfs/org/apache/hadoop/hdfs/TestDfsOverAvroRpc.java
Modified:
    hadoop/hdfs/trunk/CHANGES.txt
    hadoop/hdfs/trunk/build.xml
    hadoop/hdfs/trunk/ivy.xml
    hadoop/hdfs/trunk/ivy/ivysettings.xml
    hadoop/hdfs/trunk/ivy/libraries.properties
    hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
    hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/protocol/DSQuotaExceededException.java
    hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/protocol/DatanodeInfo.java
    hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/protocol/HdfsFileStatus.java
    hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/protocol/LocatedBlocks.java
    hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/protocol/NSQuotaExceededException.java
    hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/protocol/QuotaExceededException.java
    hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/security/BlockAccessKey.java
    hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
    hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
    hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/namenode/SafeModeException.java
    hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/protocol/DatanodeCommand.java
    hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/protocol/DatanodeProtocol.java
    hadoop/hdfs/trunk/src/test/hdfs/org/apache/hadoop/hdfs/MiniDFSCluster.java
    hadoop/hdfs/trunk/src/test/hdfs/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java

Modified: hadoop/hdfs/trunk/CHANGES.txt
URL: http://svn.apache.org/viewvc/hadoop/hdfs/trunk/CHANGES.txt?rev=927198&r1=927197&r2=927198&view=diff
==============================================================================
--- hadoop/hdfs/trunk/CHANGES.txt (original)
+++ hadoop/hdfs/trunk/CHANGES.txt Wed Mar 24 22:00:35 2010
@@ -106,6 +106,10 @@ Trunk (unreleased changes)
     HDFS-1043. NNThroughputBenchmark modifications to support benchmarking
     of server-side user group resolution. (shv)
 
+    HDFS-982. Optionally use Avro reflection for Namenode RPC. This
+    is not a complete implementation yet, but rather a starting point.
+    (cutting)
+
   OPTIMIZATIONS
 
     HDFS-946. NameNode should not return full path name when lisitng a

Modified: hadoop/hdfs/trunk/build.xml
URL: http://svn.apache.org/viewvc/hadoop/hdfs/trunk/build.xml?rev=927198&r1=927197&r2=927198&view=diff
==============================================================================
--- hadoop/hdfs/trunk/build.xml (original)
+++ hadoop/hdfs/trunk/build.xml Wed Mar 24 22:00:35 2010
@@ -97,6 +97,7 @@
+
@@ -308,6 +309,18 @@
+
+
+
+
+
+
@@ -513,6 +526,7 @@
+

Modified: hadoop/hdfs/trunk/ivy.xml
URL: http://svn.apache.org/viewvc/hadoop/hdfs/trunk/ivy.xml?rev=927198&r1=927197&r2=927198&view=diff
==============================================================================
--- hadoop/hdfs/trunk/ivy.xml (original)
+++ hadoop/hdfs/trunk/ivy.xml Wed Mar 24 22:00:35 2010
@@ -58,6 +58,7 @@
+

Modified: hadoop/hdfs/trunk/ivy/ivysettings.xml
URL: http://svn.apache.org/viewvc/hadoop/hdfs/trunk/ivy/ivysettings.xml?rev=927198&r1=927197&r2=927198&view=diff
==============================================================================
--- hadoop/hdfs/trunk/ivy/ivysettings.xml (original)
+++ hadoop/hdfs/trunk/ivy/ivysettings.xml Wed Mar 24 22:00:35 2010
@@ -43,8 +43,8 @@
     checkmodified="true" changingPattern=".*SNAPSHOT"/>
-
-
+
+

Modified: hadoop/hdfs/trunk/ivy/libraries.properties
URL: http://svn.apache.org/viewvc/hadoop/hdfs/trunk/ivy/libraries.properties?rev=927198&r1=927197&r2=927198&view=diff
==============================================================================
--- hadoop/hdfs/trunk/ivy/libraries.properties (original)
+++ hadoop/hdfs/trunk/ivy/libraries.properties Wed Mar 24 22:00:35 2010
@@ -16,6 +16,7 @@
 #These are the versions of our dependencies (in alphabetical order)
 apacheant.version=1.7.1
 ant-task.version=2.0.10
+avro.version=1.3.1
 checkstyle.version=4.2

Modified: hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
URL: http://svn.apache.org/viewvc/hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java?rev=927198&r1=927197&r2=927198&view=diff
==============================================================================
--- hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java (original)
+++ hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java Wed Mar 24 22:00:35 2010
@@ -20,6 +20,8 @@ package org.apache.hadoop.hdfs.protocol;
 import java.io.FileNotFoundException;
 import java.io.IOException;
+import org.apache.avro.reflect.Nullable;
+
 import org.apache.hadoop.fs.ContentSummary;
 import org.apache.hadoop.fs.CreateFlag;
 import org.apache.hadoop.fs.FileStatus;
@@ -30,6 +32,7 @@ import org.apache.hadoop.fs.permission.F
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.protocol.FSConstants.UpgradeAction;
 import org.apache.hadoop.hdfs.server.common.UpgradeStatusReport;
+import org.apache.hadoop.hdfs.server.namenode.SafeModeException;
 import org.apache.hadoop.io.EnumSetWritable;
 import org.apache.hadoop.io.Text;
 import org.apache.hadoop.ipc.VersionedProtocol;
@@ -81,11 +84,13 @@ public interface ClientProtocol extends
    * @return file length and array of blocks with their locations
    * @throws IOException
    * @throws UnresolvedLinkException if the path contains a symlink.
+   * @throws FileNotFoundException if the path does not exist.
    */
+  @Nullable
   public LocatedBlocks getBlockLocations(String src, long offset, long length)
-      throws IOException, UnresolvedLinkException;
+      throws IOException, UnresolvedLinkException, FileNotFoundException;
 
   /**
    * Get server default values for a number of configuration params.
@@ -125,6 +130,8 @@ public interface ClientProtocol extends
    * any quota restriction
    * @throws IOException if other errors occur.
    * @throws UnresolvedLinkException if the path contains a symlink.
+   * @throws AlreadyBeingCreatedException if the path does not exist.
+   * @throws NSQuotaExceededException if the namespace quota is exceeded.
    */
   public void create(String src, FsPermission masked,
@@ -133,7 +140,8 @@ public interface ClientProtocol extends
                      boolean createParent,
                      short replication,
                      long blockSize)
-      throws IOException, UnresolvedLinkException;
+      throws IOException, UnresolvedLinkException,
+             AlreadyBeingCreatedException, NSQuotaExceededException;
 
   /**
    * Append to the end of the file.
@@ -175,10 +183,10 @@ public interface ClientProtocol extends
    * @throws UnresolvedLinkException if the path contains a symlink.
    */
   public void setPermission(String src, FsPermission permission)
-      throws IOException, UnresolvedLinkException;
+      throws IOException, UnresolvedLinkException, SafeModeException;
 
   /**
-   * Set owner of a path (i.e. a file or a directory).
+   * Set Owner of a path (i.e. a file or a directory).
    * The parameters username and groupname cannot both be null.
    * @param src
    * @param username If it is null, the original username remains unchanged.
@@ -216,10 +224,12 @@ public interface ClientProtocol extends
    * allocated for the current block
    * @return LocatedBlock allocated block information.
    * @throws UnresolvedLinkException if the path contains a symlink.
+   * @throws DSQuotaExceededException if the directory's quota is exceeded.
    */
   public LocatedBlock addBlock(String src, String clientName,
-      Block previous, DatanodeInfo[] excludedNodes)
-      throws IOException, UnresolvedLinkException;
+      @Nullable Block previous,
+      @Nullable DatanodeInfo[] excludedNodes)
+      throws IOException, UnresolvedLinkException, DSQuotaExceededException;
 
   /**
    * The client is done writing data to the given filename, and would
@@ -350,7 +360,7 @@ public interface ClientProtocol extends
    * any quota restriction.
    */
   public boolean mkdirs(String src, FsPermission masked, boolean createParent)
-      throws IOException, UnresolvedLinkException;
+      throws IOException, UnresolvedLinkException, NSQuotaExceededException;
 
   /**
    * Get a partial listing of the indicated directory
@@ -525,6 +535,7 @@ public interface ClientProtocol extends
    * @return upgrade status information or null if no upgrades are in progress
    * @throws IOException
    */
+  @Nullable
   public UpgradeStatusReport distributedUpgradeProgress(UpgradeAction action)
       throws IOException;
 
@@ -552,6 +563,7 @@ public interface ClientProtocol extends
    * @return object containing information regarding the file
    * or null if file not found
    */
+  @Nullable
   public HdfsFileStatus getFileInfo(String src)
       throws IOException, UnresolvedLinkException;
 
@@ -595,7 +607,8 @@ public interface ClientProtocol extends
    * is greater than the given quota
    */
   public void setQuota(String path, long namespaceQuota, long diskspaceQuota)
-      throws IOException, UnresolvedLinkException;
+      throws IOException, UnresolvedLinkException,
+             FileNotFoundException, SafeModeException;
 
   /**
    * Write all metadata for this file into persistent storage.
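The @Nullable annotations above exist because Avro reflection derives a wire schema from the method signatures, and a reference type is assumed non-null unless annotated (a nullable value becomes a union with "null"). A minimal, self-contained sketch of that contract — the Nullable annotation and FileStatus record below are local stand-ins defined only for illustration, not Avro's actual org.apache.avro.reflect.Nullable:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

public class NullableCheck {
    // Stand-in for org.apache.avro.reflect.Nullable (illustration only).
    @Retention(RetentionPolicy.RUNTIME)
    @Target({ElementType.FIELD, ElementType.METHOD})
    @interface Nullable {}

    // Hypothetical record mirroring the HdfsFileStatus change in this commit.
    static class FileStatus {
        String path = "/tmp/x";          // required: schema would be plain "string"
        @Nullable String symlink = null; // optional: schema would be ["null","string"]
    }

    /** Reject null in any field not marked @Nullable, as a reflect serializer would. */
    static void validate(Object o) throws IllegalAccessException {
        for (Field f : o.getClass().getDeclaredFields()) {
            if (f.get(o) == null && !f.isAnnotationPresent(Nullable.class)) {
                throw new NullPointerException(f.getName() + " may not be null");
            }
        }
    }

    public static void main(String[] args) throws Exception {
        validate(new FileStatus()); // symlink is null but annotated, so this passes
        System.out.println("ok");
    }
}
```

This is why methods such as getBlockLocations and getFileInfo, whose javadoc already promised a possible null return, now carry @Nullable: the annotation turns an informal comment into information the schema generator can use.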
Modified: hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/protocol/DSQuotaExceededException.java
URL: http://svn.apache.org/viewvc/hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/protocol/DSQuotaExceededException.java?rev=927198&r1=927197&r2=927198&view=diff
==============================================================================
--- hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/protocol/DSQuotaExceededException.java (original)
+++ hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/protocol/DSQuotaExceededException.java Wed Mar 24 22:00:35 2010
@@ -23,6 +23,8 @@ import org.apache.hadoop.util.StringUtil
 public class DSQuotaExceededException extends QuotaExceededException {
   protected static final long serialVersionUID = 1L;
 
+  public DSQuotaExceededException() {}
+
   public DSQuotaExceededException(String msg) {
     super(msg);
   }

Modified: hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/protocol/DatanodeInfo.java
URL: http://svn.apache.org/viewvc/hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/protocol/DatanodeInfo.java?rev=927198&r1=927197&r2=927198&view=diff
==============================================================================
--- hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/protocol/DatanodeInfo.java (original)
+++ hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/protocol/DatanodeInfo.java Wed Mar 24 22:00:35 2010
@@ -33,6 +33,8 @@ import org.apache.hadoop.net.Node;
 import org.apache.hadoop.net.NodeBase;
 import org.apache.hadoop.util.StringUtils;
 
+import org.apache.avro.reflect.Nullable;
+
 /**
  * DatanodeInfo represents the status of a DataNode.
  * This object is used for communication in the
@@ -49,10 +51,12 @@ public class DatanodeInfo extends Datano
   /** HostName as supplied by the datanode during registration as its
    * name. Namenode uses datanode IP address as the name.
    */
+  @Nullable
   protected String hostName = null;
 
   // administrative states of a datanode
   public enum AdminStates {NORMAL, DECOMMISSION_INPROGRESS, DECOMMISSIONED; }
+  @Nullable
   protected AdminStates adminState;
 
@@ -285,8 +289,8 @@ public class DatanodeInfo extends Datano
     }
   }
 
-  private int level; //which level of the tree the node resides
-  private Node parent; //its parent
+  private transient int level; //which level of the tree the node resides
+  private transient Node parent; //its parent
 
   /** Return this node's parent */
   public Node getParent() { return parent; }

Modified: hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/protocol/HdfsFileStatus.java
URL: http://svn.apache.org/viewvc/hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/protocol/HdfsFileStatus.java?rev=927198&r1=927197&r2=927198&view=diff
==============================================================================
--- hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/protocol/HdfsFileStatus.java (original)
+++ hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/protocol/HdfsFileStatus.java Wed Mar 24 22:00:35 2010
@@ -29,6 +29,8 @@ import org.apache.hadoop.io.Writable;
 import org.apache.hadoop.io.WritableFactories;
 import org.apache.hadoop.io.WritableFactory;
 
+import org.apache.avro.reflect.Nullable;
+
 /** Interface that represents the over the wire information for a file.
  */
 public class HdfsFileStatus implements Writable {
@@ -41,7 +43,8 @@ public class HdfsFileStatus implements W
   }
 
   private byte[] path;  // local name of the inode that's encoded in java UTF8
-  private byte[] symlink; // symlink target encoded in java UTF8
+  @Nullable
+  private byte[] symlink; // symlink target encoded in java UTF8 or null
   private long length;
   private boolean isdir;
   private short block_replication;

Modified: hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/protocol/LocatedBlocks.java
URL: http://svn.apache.org/viewvc/hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/protocol/LocatedBlocks.java?rev=927198&r1=927197&r2=927198&view=diff
==============================================================================
--- hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/protocol/LocatedBlocks.java (original)
+++ hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/protocol/LocatedBlocks.java Wed Mar 24 22:00:35 2010
@@ -29,6 +29,8 @@ import org.apache.hadoop.io.Writable;
 import org.apache.hadoop.io.WritableFactories;
 import org.apache.hadoop.io.WritableFactory;
 
+import org.apache.avro.reflect.Nullable;
+
 /**
  * Collection of blocks with their locations and the file length.
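The transient modifiers introduced in this commit (level and parent in DatanodeInfo, and mac in BlockAccessKey further below) keep runtime-only state out of any reflection-driven serializer's view. Java's built-in serialization honors transient the same way, which allows a self-contained illustration — NodeInfo below is a hypothetical stand-in, not the real DatanodeInfo:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class TransientDemo {
    // Hypothetical stand-in for DatanodeInfo: runtime-only state is transient,
    // so a reflection-based serializer skips it on the wire.
    static class NodeInfo implements Serializable {
        String hostName;            // serialized
        transient int level;        // skipped: derived from the topology tree
        transient NodeInfo parent;  // skipped: rebuilt after deserialization

        NodeInfo(String hostName, int level, NodeInfo parent) {
            this.hostName = hostName;
            this.level = level;
            this.parent = parent;
        }
    }

    /** Serialize and deserialize, as an RPC round trip would. */
    static NodeInfo roundTrip(NodeInfo n) throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(n);
        }
        try (ObjectInputStream in =
                 new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
            return (NodeInfo) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        NodeInfo copy = roundTrip(new NodeInfo("dn1.example.com", 2, null));
        // transient fields come back as their defaults (0 and null)
        System.out.println(copy.hostName + " level=" + copy.level);
    }
}
```

The design point is the same in both serializers: derived or secret state (tree position, a Mac instance) should be recomputed on the receiving side, not shipped.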
  */
@@ -36,6 +38,7 @@ public class LocatedBlocks implements Wr
   private long fileLength;
   private List blocks; // array of blocks with prioritized locations
   private boolean underConstruction;
+  @Nullable
   private LocatedBlock lastLocatedBlock = null;
   private boolean isLastBlockComplete = false;

Modified: hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/protocol/NSQuotaExceededException.java
URL: http://svn.apache.org/viewvc/hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/protocol/NSQuotaExceededException.java?rev=927198&r1=927197&r2=927198&view=diff
==============================================================================
--- hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/protocol/NSQuotaExceededException.java (original)
+++ hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/protocol/NSQuotaExceededException.java Wed Mar 24 22:00:35 2010
@@ -21,6 +21,8 @@ package org.apache.hadoop.hdfs.protocol;
 public final class NSQuotaExceededException extends QuotaExceededException {
   protected static final long serialVersionUID = 1L;
 
+  public NSQuotaExceededException() {}
+
   public NSQuotaExceededException(String msg) {
     super(msg);
   }

Modified: hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/protocol/QuotaExceededException.java
URL: http://svn.apache.org/viewvc/hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/protocol/QuotaExceededException.java?rev=927198&r1=927197&r2=927198&view=diff
==============================================================================
--- hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/protocol/QuotaExceededException.java (original)
+++ hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/protocol/QuotaExceededException.java Wed Mar 24 22:00:35 2010
@@ -38,6 +38,8 @@ public class QuotaExceededException exte
   protected long quota; // quota
   protected long count; // actual value
 
+  protected QuotaExceededException() {}
+
   protected QuotaExceededException(String msg) {
     super(msg);
   }

Modified: hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/security/BlockAccessKey.java
URL: http://svn.apache.org/viewvc/hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/security/BlockAccessKey.java?rev=927198&r1=927197&r2=927198&view=diff
==============================================================================
--- hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/security/BlockAccessKey.java (original)
+++ hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/security/BlockAccessKey.java Wed Mar 24 22:00:35 2010
@@ -35,7 +35,7 @@ public class BlockAccessKey implements W
   private long keyID;
   private Text key;
   private long expiryDate;
-  private Mac mac;
+  private transient Mac mac;
 
   public BlockAccessKey() {
     this(0L, new Text(), 0L);
@@ -107,4 +107,4 @@ public class BlockAccessKey implements W
     key.readFields(in);
     expiryDate = WritableUtils.readVLong(in);
   }
-}
\ No newline at end of file
+}

Modified: hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
URL: http://svn.apache.org/viewvc/hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java?rev=927198&r1=927197&r2=927198&view=diff
==============================================================================
--- hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java (original)
+++ hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java Wed Mar 24 22:00:35 2010
@@ -572,7 +572,7 @@ public class DataNode extends Configured
       try {
         // reset name to machineName. Mainly for web interface.
         dnRegistration.name = machineName + ":" + dnRegistration.getPort();
-        dnRegistration = namenode.register(dnRegistration);
+        dnRegistration = namenode.registerDatanode(dnRegistration);
         break;
       } catch(SocketTimeoutException e) {  // namenode is busy
         LOG.info("Problem connecting to server: " + getNameNodeAddr());

Modified: hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
URL: http://svn.apache.org/viewvc/hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java?rev=927198&r1=927197&r2=927198&view=diff
==============================================================================
--- hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java (original)
+++ hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java Wed Mar 24 22:00:35 2010
@@ -66,6 +66,7 @@ import org.apache.hadoop.hdfs.server.pro
 import org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration;
 import org.apache.hadoop.hdfs.server.protocol.NamenodeCommand;
 import org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol;
+import org.apache.hadoop.hdfs.server.protocol.NamenodeProtocols;
 import org.apache.hadoop.hdfs.server.protocol.NamenodeRegistration;
 import org.apache.hadoop.hdfs.server.protocol.NamespaceInfo;
 import org.apache.hadoop.hdfs.server.protocol.NodeRegistration;
@@ -128,10 +129,7 @@ import org.apache.hadoop.util.StringUtil
  * secondary namenodes or rebalancing processes to get partial namenode's
  * state, for example partial blocksMap etc.
 **********************************************************/
-public class NameNode implements ClientProtocol, DatanodeProtocol,
-                                 NamenodeProtocol, FSConstants,
-                                 RefreshAuthorizationPolicyProtocol,
-                                 RefreshUserToGroupMappingsProtocol {
+public class NameNode implements NamenodeProtocols, FSConstants {
   static{
     Configuration.addDefaultResource("hdfs-default.xml");
     Configuration.addDefaultResource("hdfs-site.xml");
@@ -301,10 +299,10 @@ public class NameNode implements ClientP
     NameNode.initMetrics(conf, this.getRole());
     loadNamesystem(conf);
     // create rpc server
-    this.server = RPC.getServer(this.getClass(), this, socAddr.getHostName(),
-        socAddr.getPort(), handlerCount, false, conf, namesystem
-        .getDelegationTokenSecretManager());
-
+    this.server = RPC.getServer(NamenodeProtocols.class, this,
+                                socAddr.getHostName(), socAddr.getPort(),
+                                handlerCount, false, conf,
+                                namesystem.getDelegationTokenSecretManager());
     // The rpc-server port can be ephemeral... ensure we have the correct info
     this.rpcAddress = this.server.getListenerAddress();
     setRpcServerAddress(conf);
@@ -1051,7 +1049,7 @@ public class NameNode implements ClientP
   ////////////////////////////////////////////////////////////////
   /** */
-  public DatanodeRegistration register(DatanodeRegistration nodeReg)
+  public DatanodeRegistration registerDatanode(DatanodeRegistration nodeReg)
       throws IOException {
     verifyVersion(nodeReg.getVersion());
     namesystem.registerDatanode(nodeReg);

Modified: hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/namenode/SafeModeException.java
URL: http://svn.apache.org/viewvc/hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/namenode/SafeModeException.java?rev=927198&r1=927197&r2=927198&view=diff
==============================================================================
--- hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/namenode/SafeModeException.java (original)
+++ hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/namenode/SafeModeException.java Wed Mar 24 22:00:35 2010
@@ -28,6 +28,8 @@ import java.io.IOException;
 public class SafeModeException extends IOException {
   private static final long serialVersionUID = 1L;
 
+  public SafeModeException() {}
+
   public SafeModeException(String text, FSNamesystem.SafeModeInfo mode ) {
     super(text + ". Name node is in safe mode.\n" + mode.getTurnOffTip());
   }

Modified: hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/protocol/DatanodeCommand.java
URL: http://svn.apache.org/viewvc/hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/protocol/DatanodeCommand.java?rev=927198&r1=927197&r2=927198&view=diff
==============================================================================
--- hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/protocol/DatanodeCommand.java (original)
+++ hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/protocol/DatanodeCommand.java Wed Mar 24 22:00:35 2010
@@ -22,11 +22,19 @@ import java.io.DataOutput;
 import org.apache.hadoop.io.Writable;
 import org.apache.hadoop.io.WritableFactory;
 import org.apache.hadoop.io.WritableFactories;
+import org.apache.avro.reflect.Union;
 
 /**
  * Base class for data-node command.
  * Issued by the name-node to notify data-nodes what should be done.
  */
+
+// Declare subclasses for Avro's denormalized representation
+@Union({Void.class,
+    DatanodeCommand.Register.class, DatanodeCommand.Finalize.class,
+    BlockCommand.class, UpgradeCommand.class,
+    BlockRecoveryCommand.class, KeyUpdateCommand.class})
+
 public abstract class DatanodeCommand extends ServerCommand {
   static class Register extends DatanodeCommand {
     private Register() {super(DatanodeProtocol.DNA_REGISTER);}

Modified: hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/protocol/DatanodeProtocol.java
URL: http://svn.apache.org/viewvc/hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/protocol/DatanodeProtocol.java?rev=927198&r1=927197&r2=927198&view=diff
==============================================================================
--- hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/protocol/DatanodeProtocol.java (original)
+++ hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/protocol/DatanodeProtocol.java Wed Mar 24 22:00:35 2010
@@ -27,6 +27,8 @@ import org.apache.hadoop.hdfs.protocol.L
 import org.apache.hadoop.ipc.VersionedProtocol;
 import org.apache.hadoop.security.KerberosInfo;
 
+import org.apache.avro.reflect.Nullable;
+
 /**********************************************************************
  * Protocol that a DFS datanode uses to communicate with the NameNode.
  * It's used to upload current load information and block reports.
@@ -38,9 +40,9 @@ import org.apache.hadoop.security.Kerber
 @KerberosInfo(DFSConfigKeys.DFS_NAMENODE_USER_NAME_KEY)
 public interface DatanodeProtocol extends VersionedProtocol {
   /**
-   * 23: nextGenerationStamp() removed.
+   * 24: register() renamed registerDatanode()
    */
-  public static final long versionID = 23L;
+  public static final long versionID = 24L;
 
   // error code
   final static int NOTIFY = 0;
@@ -71,7 +73,7 @@ public interface DatanodeProtocol extend
    * new storageID if the datanode did not have one and
    * registration ID for further communication.
    */
-  public DatanodeRegistration register(DatanodeRegistration registration
+  public DatanodeRegistration registerDatanode(DatanodeRegistration registration
                                        ) throws IOException;
   /**
    * sendHeartbeat() tells the NameNode that the DataNode is still
@@ -81,6 +83,7 @@ public interface DatanodeProtocol extend
    * A DatanodeCommand tells the DataNode to invalidate local block(s),
    * or to copy them to other DataNodes, etc.
    */
+  @Nullable
   public DatanodeCommand[] sendHeartbeat(DatanodeRegistration registration,
                                          long capacity,
                                          long dfsUsed, long remaining,

Added: hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/protocol/NamenodeProtocols.java
URL: http://svn.apache.org/viewvc/hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/protocol/NamenodeProtocols.java?rev=927198&view=auto
==============================================================================
--- hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/protocol/NamenodeProtocols.java (added)
+++ hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/protocol/NamenodeProtocols.java Wed Mar 24 22:00:35 2010
@@ -0,0 +1,32 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdfs.server.protocol;
+
+import org.apache.hadoop.hdfs.protocol.ClientProtocol;
+import org.apache.hadoop.security.authorize.RefreshAuthorizationPolicyProtocol;
+import org.apache.hadoop.security.RefreshUserToGroupMappingsProtocol;
+
+/** The full set of RPC methods implemented by the Namenode. */
+public interface NamenodeProtocols
+  extends ClientProtocol,
+          DatanodeProtocol,
+          NamenodeProtocol,
+          RefreshAuthorizationPolicyProtocol,
+          RefreshUserToGroupMappingsProtocol {
+}

Modified: hadoop/hdfs/trunk/src/test/hdfs/org/apache/hadoop/hdfs/MiniDFSCluster.java
URL: http://svn.apache.org/viewvc/hadoop/hdfs/trunk/src/test/hdfs/org/apache/hadoop/hdfs/MiniDFSCluster.java?rev=927198&r1=927197&r2=927198&view=diff
==============================================================================
--- hadoop/hdfs/trunk/src/test/hdfs/org/apache/hadoop/hdfs/MiniDFSCluster.java (original)
+++ hadoop/hdfs/trunk/src/test/hdfs/org/apache/hadoop/hdfs/MiniDFSCluster.java Wed Mar 24 22:00:35 2010
@@ -35,6 +35,13 @@ import org.apache.hadoop.hdfs.protocol.B
 import org.apache.hadoop.hdfs.protocol.BlockListAsLongs;
 import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
 import org.apache.hadoop.hdfs.protocol.FSConstants.DatanodeReportType;
+import org.apache.hadoop.hdfs.server.protocol.NamenodeProtocols;
+import org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol;
+import org.apache.hadoop.hdfs.protocol.ClientProtocol;
+import org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol;
+import org.apache.hadoop.security.RefreshUserToGroupMappingsProtocol;
+import org.apache.hadoop.security.authorize.RefreshAuthorizationPolicyProtocol;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
 import static org.apache.hadoop.hdfs.server.common.Util.fileAsURI;
 import org.apache.hadoop.hdfs.server.common.HdfsConstants.StartupOption;
 import org.apache.hadoop.hdfs.server.datanode.DataNode;
@@ -243,6 +250,28 @@ public class MiniDFSCluster {
     base_dir = new File(getBaseDirectory());
     data_dir = new File(base_dir, "data");
 
+    // use alternate RPC engine if spec'd
+    String rpcEngineName = System.getProperty("hdfs.rpc.engine");
+    if (rpcEngineName != null && !"".equals(rpcEngineName)) {
+
+      System.out.println("HDFS using RPCEngine: "+rpcEngineName);
+      try {
+        Class rpcEngine = conf.getClassByName(rpcEngineName);
+        setRpcEngine(conf, NamenodeProtocols.class, rpcEngine);
+        setRpcEngine(conf, NamenodeProtocol.class, rpcEngine);
+        setRpcEngine(conf, ClientProtocol.class, rpcEngine);
+        setRpcEngine(conf, DatanodeProtocol.class, rpcEngine);
+        setRpcEngine(conf, RefreshAuthorizationPolicyProtocol.class, rpcEngine);
+        setRpcEngine(conf, RefreshUserToGroupMappingsProtocol.class, rpcEngine);
+      } catch (ClassNotFoundException e) {
+        throw new RuntimeException(e);
+      }
+
+      // disable service authorization, as it does not work with tunnelled RPC
+      conf.setBoolean(CommonConfigurationKeys.HADOOP_SECURITY_AUTHORIZATION,
+                      false);
+    }
+
     // Setup the NameNode configuration
     FileSystem.setDefaultUri(conf, "hdfs://localhost:"+ Integer.toString(nameNodePort));
     conf.set(DFSConfigKeys.DFS_NAMENODE_HTTP_ADDRESS_KEY, "127.0.0.1:0");
@@ -289,6 +318,10 @@ public class MiniDFSCluster {
     }
   }
 
+  private void setRpcEngine(Configuration conf, Class protocol, Class engine) {
+    conf.setClass("rpc.engine."+protocol.getName(), engine, Object.class);
+  }
+
   /**
    *
    * @return URI of this MiniDFSCluster

Added: hadoop/hdfs/trunk/src/test/hdfs/org/apache/hadoop/hdfs/TestDfsOverAvroRpc.java
URL: http://svn.apache.org/viewvc/hadoop/hdfs/trunk/src/test/hdfs/org/apache/hadoop/hdfs/TestDfsOverAvroRpc.java?rev=927198&view=auto
==============================================================================
--- hadoop/hdfs/trunk/src/test/hdfs/org/apache/hadoop/hdfs/TestDfsOverAvroRpc.java (added)
+++ hadoop/hdfs/trunk/src/test/hdfs/org/apache/hadoop/hdfs/TestDfsOverAvroRpc.java Wed Mar 24 22:00:35 2010
@@ -0,0 +1,33 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs;
+
+import java.io.IOException;
+
+/** Test for simple signs of life using Avro RPC.  Not an exhaustive test
+ * yet, just enough to catch fundamental problems using Avro reflection to
+ * infer namenode RPC protocols.
+ */
+public class TestDfsOverAvroRpc extends TestLocalDFS {
+
+  public void testWorkingDirectory() throws IOException {
+    System.setProperty("hdfs.rpc.engine",
+                       "org.apache.hadoop.ipc.AvroRpcEngine");
+    super.testWorkingDirectory();
+  }
+
+}

Modified: hadoop/hdfs/trunk/src/test/hdfs/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java
URL: http://svn.apache.org/viewvc/hadoop/hdfs/trunk/src/test/hdfs/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java?rev=927198&r1=927197&r2=927198&view=diff
==============================================================================
--- hadoop/hdfs/trunk/src/test/hdfs/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java (original)
+++ hadoop/hdfs/trunk/src/test/hdfs/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java Wed Mar 24 22:00:35 2010
@@ -757,7 +757,7 @@ public class NNThroughputBenchmark {
     dnRegistration.setStorageInfo(new DataStorage(nsInfo, ""));
     DataNode.setNewStorageID(dnRegistration);
     // register datanode
-    dnRegistration = nameNode.register(dnRegistration);
+    dnRegistration = nameNode.registerDatanode(dnRegistration);
   }

 /**
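Taken together, the test above selects an engine through the hdfs.rpc.engine system property, and MiniDFSCluster maps that engine onto every protocol interface via "rpc.engine." + protocol-class-name configuration keys, with NamenodeProtocols aggregating the individual interfaces. A rough sketch of that wiring, with a plain Map standing in for Hadoop's Configuration and nested marker interfaces standing in for the real protocols:

```java
import java.util.HashMap;
import java.util.Map;

public class RpcEngineConfigDemo {
    // Minimal stand-ins for the real protocol interfaces in
    // org.apache.hadoop.hdfs.protocol and ...server.protocol.
    interface ClientProtocol {}
    interface DatanodeProtocol {}
    /** Aggregate of every RPC interface the NameNode serves, as NamenodeProtocols does. */
    interface NamenodeProtocols extends ClientProtocol, DatanodeProtocol {}

    /** Mirrors MiniDFSCluster.setRpcEngine, with a Map in place of Configuration. */
    static void setRpcEngine(Map<String, String> conf, Class<?> protocol, String engine) {
        conf.put("rpc.engine." + protocol.getName(), engine);
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        // The test suite selects the engine through a system property.
        String engine = System.getProperty("hdfs.rpc.engine",
                                           "org.apache.hadoop.ipc.AvroRpcEngine");
        for (Class<?> p : new Class<?>[] {NamenodeProtocols.class,
                                          ClientProtocol.class,
                                          DatanodeProtocol.class}) {
            setRpcEngine(conf, p, engine);
        }
        System.out.println(conf.size() + " protocols mapped to " + engine);
    }
}
```

The per-protocol keys let the normal Writable-based engine and the experimental Avro engine coexist: only protocols explicitly mapped are tunnelled, which is why the commit message calls this a starting point rather than a complete implementation.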