Subject: svn commit: r740064 - in /hadoop/core/trunk: ./ src/hdfs/org/apache/hadoop/hdfs/server/namenode/ src/webapps/datanode/ src/webapps/hdfs/
Date: Mon, 02 Feb 2009 18:36:03 -0000
To: core-commits@hadoop.apache.org
From: szetszwo@apache.org
Reply-To: core-dev@hadoop.apache.org
Message-Id: <20090202183604.789DB2388878@eris.apache.org>

Author: szetszwo
Date: Mon Feb 2 18:36:01 2009
New Revision: 740064

URL: http://svn.apache.org/viewvc?rev=740064&view=rev
Log:
HADOOP-5097. Remove static variable JspHelper.fsn.
(szetszwo)

Modified:
    hadoop/core/trunk/CHANGES.txt
    hadoop/core/trunk/src/hdfs/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
    hadoop/core/trunk/src/hdfs/org/apache/hadoop/hdfs/server/namenode/FileDataServlet.java
    hadoop/core/trunk/src/hdfs/org/apache/hadoop/hdfs/server/namenode/JspHelper.java
    hadoop/core/trunk/src/webapps/datanode/browseBlock.jsp
    hadoop/core/trunk/src/webapps/datanode/browseDirectory.jsp
    hadoop/core/trunk/src/webapps/datanode/tail.jsp
    hadoop/core/trunk/src/webapps/hdfs/dfshealth.jsp
    hadoop/core/trunk/src/webapps/hdfs/dfsnodelist.jsp

Modified: hadoop/core/trunk/CHANGES.txt
URL: http://svn.apache.org/viewvc/hadoop/core/trunk/CHANGES.txt?rev=740064&r1=740063&r2=740064&view=diff
==============================================================================
--- hadoop/core/trunk/CHANGES.txt (original)
+++ hadoop/core/trunk/CHANGES.txt Mon Feb 2 18:36:01 2009
@@ -56,6 +56,9 @@
     or the main Java task in Hadoop's case, kills the entire subtree of
     processes. (Ravi Gummadi via ddas)
 
+    HADOOP-5097. Remove static variable JspHelper.fsn, a static reference to
+    a non-singleton FSNamesystem object.
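As background for the CHANGES.txt entry above: caching a reference to a non-singleton object in a static field goes stale as soon as a second instance exists. A minimal, hypothetical sketch of that failure mode (the class names below are illustrative stand-ins, not the actual Hadoop types):

```java
// Sketch: a helper caches the "current" system object in a static field.
// Once another instance is created (tests, restarts), the cached
// reference silently keeps pointing at the old one.
class Namesystem {                 // stand-in for FSNamesystem
    final String id;
    Namesystem(String id) { this.id = id; }
}

class Helper {                     // stand-in for JspHelper before this commit
    static Namesystem fsn;         // static reference to a non-singleton
}

public class StaleReferenceDemo {
    public static void main(String[] args) {
        Helper.fsn = new Namesystem("first");
        Namesystem current = new Namesystem("second"); // new "current" instance
        // The helper still serves data from the old instance:
        System.out.println(Helper.fsn.id);             // prints "first"
    }
}
```

The commit's remedy is to delete the cached field entirely and have callers pass the live FSNamesystem (or fetch it from the servlet context) on each call.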
(szetszwo)
+
   OPTIMIZATIONS
 
   BUG FIXES

Modified: hadoop/core/trunk/src/hdfs/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
URL: http://svn.apache.org/viewvc/hadoop/core/trunk/src/hdfs/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java?rev=740064&r1=740063&r2=740064&view=diff
==============================================================================
--- hadoop/core/trunk/src/hdfs/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java (original)
+++ hadoop/core/trunk/src/hdfs/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java Mon Feb 2 18:36:01 2009
@@ -61,7 +61,6 @@
 import java.io.PrintWriter;
 import java.io.DataOutputStream;
 import java.net.InetAddress;
-import java.net.InetSocketAddress;
 import java.util.*;
 import java.util.Map.Entry;
 
@@ -244,8 +243,6 @@
   private int replIndex = 0;
   private static FSNamesystem fsNamesystemObject;
-  /** NameNode RPC address */
-  private InetSocketAddress nameNodeAddress = null; // TODO: name-node has this field, it should be removed here
   private SafeModeInfo safeMode;  // safe mode information
   private Host2NodesMap host2DataNodeMap = new Host2NodesMap();
 
@@ -292,7 +289,6 @@
     this.systemStart = now();
     setConfigurationParameters(conf);
-    this.nameNodeAddress = nn.getNameNodeAddress();
     this.registerMBean(conf); // register the MBean for the FSNamesystemStutus
     this.dir = new FSDirectory(this, conf);
     StartupOption startOpt = NameNode.getStartupOption(conf);
 
@@ -3457,16 +3453,6 @@
     return datanodeMap.get(name);
   }
 
-  /**
-   * @deprecated use {@link NameNode#getNameNodeAddress()} instead.
-   */
-  @Deprecated
-  public InetSocketAddress getDFSNameNodeAddress() {
-    return nameNodeAddress;
-  }
-
-  /**
-   */
   public Date getStartTime() {
     return new Date(systemStart);
   }

Modified: hadoop/core/trunk/src/hdfs/org/apache/hadoop/hdfs/server/namenode/FileDataServlet.java
URL: http://svn.apache.org/viewvc/hadoop/core/trunk/src/hdfs/org/apache/hadoop/hdfs/server/namenode/FileDataServlet.java?rev=740064&r1=740063&r2=740064&view=diff
==============================================================================
--- hadoop/core/trunk/src/hdfs/org/apache/hadoop/hdfs/server/namenode/FileDataServlet.java (original)
+++ hadoop/core/trunk/src/hdfs/org/apache/hadoop/hdfs/server/namenode/FileDataServlet.java Mon Feb 2 18:36:01 2009
@@ -34,6 +34,8 @@
  * @see org.apache.hadoop.hdfs.HftpFileSystem
  */
 public class FileDataServlet extends DfsServlet {
+  /** For java.io.Serializable */
+  private static final long serialVersionUID = 1L;
 
   /** Create a redirection URI */
   protected URI createUri(FileStatus i, UnixUserGroupInformation ugi,
@@ -54,26 +56,20 @@
         "/streamFile", "filename=" + i.getPath() + "&ugi=" + ugi, null);
   }
 
-  private static JspHelper jspHelper = null;
-
   /** Select a datanode to service this request.
    * Currently, this looks at no more than the first five blocks of a file,
    * selecting a datanode randomly from the most represented.
    */
-  private static DatanodeID pickSrcDatanode(FileStatus i,
+  private DatanodeID pickSrcDatanode(FileStatus i,
       ClientProtocol nnproxy) throws IOException {
-    // a race condition can happen by initializing a static member this way.
-    // A proper fix should make JspHelper a singleton. Since it doesn't affect
-    // correctness, we leave it as is for now.
-    if (jspHelper == null)
-      jspHelper = new JspHelper();
     final LocatedBlocks blks = nnproxy.getBlockLocations(
         i.getPath().toUri().getPath(), 0, 1);
     if (i.getLen() == 0 || blks.getLocatedBlocks().size() <= 0) {
       // pick a random datanode
-      return jspHelper.randomNode();
+      NameNode nn = (NameNode)getServletContext().getAttribute("name.node");
+      return nn.getNamesystem().getRandomDatanode();
     }
-    return jspHelper.bestNode(blks.get(0));
+    return JspHelper.bestNode(blks.get(0));
   }
 
   /**

Modified: hadoop/core/trunk/src/hdfs/org/apache/hadoop/hdfs/server/namenode/JspHelper.java
URL: http://svn.apache.org/viewvc/hadoop/core/trunk/src/hdfs/org/apache/hadoop/hdfs/server/namenode/JspHelper.java?rev=740064&r1=740063&r2=740064&view=diff
==============================================================================
--- hadoop/core/trunk/src/hdfs/org/apache/hadoop/hdfs/server/namenode/JspHelper.java (original)
+++ hadoop/core/trunk/src/hdfs/org/apache/hadoop/hdfs/server/namenode/JspHelper.java Mon Feb 2 18:36:01 2009
@@ -32,51 +32,39 @@
 import javax.servlet.jsp.JspWriter;
 
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hdfs.DFSClient;
-import org.apache.hadoop.hdfs.protocol.DatanodeID;
 import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
 import org.apache.hadoop.hdfs.protocol.LocatedBlock;
 import org.apache.hadoop.hdfs.protocol.FSConstants.UpgradeAction;
 import org.apache.hadoop.hdfs.server.common.HdfsConstants;
 import org.apache.hadoop.hdfs.server.common.UpgradeStatusReport;
-import org.apache.hadoop.hdfs.server.datanode.DataNode;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.net.NetUtils;
-import org.apache.hadoop.security.*;
+import org.apache.hadoop.security.UnixUserGroupInformation;
+import org.apache.hadoop.util.StringUtils;
+import org.apache.hadoop.util.VersionInfo;
 
 public class JspHelper {
   final static public String WEB_UGI_PROPERTY_NAME = "dfs.web.ugi";
-
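The comment deleted from pickSrcDatanode above describes a classic check-then-act race on a lazily initialized static field. A generic sketch of that race, alongside the thread-safe holder idiom (both illustrative; neither is the Hadoop code):

```java
// Racy lazy initialization: two threads may both observe null and both
// construct, and without volatile a reader may even see a
// partially-constructed object.
class RacyLazy {
    private static RacyLazy instance;          // not volatile, not final

    static RacyLazy get() {
        if (instance == null) {                // check
            instance = new RacyLazy();         // act: not atomic with the check
        }
        return instance;
    }
}

// Thread-safe alternative: the JVM guarantees the holder class is
// initialized exactly once, on first use.
class SafeLazy {
    private SafeLazy() {}
    private static class Holder {
        static final SafeLazy INSTANCE = new SafeLazy();
    }
    static SafeLazy get() { return Holder.INSTANCE; }
}
```

This commit sidesteps the question entirely: JspHelper becomes a stateless class of static methods, so there is nothing left to initialize lazily.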
static FSNamesystem fsn = null; - public static InetSocketAddress nameNodeAddr; public static final Configuration conf = new Configuration(); public static final UnixUserGroupInformation webUGI = UnixUserGroupInformation.createImmutable( conf.getStrings(WEB_UGI_PROPERTY_NAME)); - public static final int defaultChunkSizeToView = + private static final int defaultChunkSizeToView = conf.getInt("dfs.default.chunk.view.size", 32 * 1024); - static Random rand = new Random(); - - public JspHelper() { - if (DataNode.getDataNode() != null) { - nameNodeAddr = DataNode.getDataNode().getNameNodeAddr(); - } - else { - fsn = FSNamesystem.getFSNamesystem(); - nameNodeAddr = fsn.getDFSNameNodeAddress(); - } + static final Random rand = new Random(); + static { UnixUserGroupInformation.saveToConf(conf, UnixUserGroupInformation.UGI_PROPERTY_NAME, webUGI); } - public DatanodeID randomNode() throws IOException { - return fsn.getRandomDatanode(); - } + /** Private constructor for preventing creating JspHelper object. 
*/ + private JspHelper() {} - public DatanodeInfo bestNode(LocatedBlock blk) throws IOException { + public static DatanodeInfo bestNode(LocatedBlock blk) throws IOException { TreeSet deadNodes = new TreeSet(); DatanodeInfo chosenNode = null; int failures = 0; @@ -115,7 +103,8 @@ s.close(); return chosenNode; } - public void streamBlockInAscii(InetSocketAddress addr, long blockId, + + public static void streamBlockInAscii(InetSocketAddress addr, long blockId, long genStamp, long blockSize, long offsetIntoBlock, long chunkSizeToView, JspWriter out) throws IOException { @@ -155,24 +144,20 @@ s.close(); out.print(new String(buf)); } - public void DFSNodesStatus(ArrayList live, - ArrayList dead) { - if (fsn != null) - fsn.DFSNodesStatus(live, dead); - } - public void addTableHeader(JspWriter out) throws IOException { + + public static void addTableHeader(JspWriter out) throws IOException { out.print(""); out.print(""); } - public void addTableRow(JspWriter out, String[] columns) throws IOException { + public static void addTableRow(JspWriter out, String[] columns) throws IOException { out.print(""); for (int i = 0; i < columns.length; i++) { out.print(""); } out.print(""); } - public void addTableRow(JspWriter out, String[] columns, int row) throws IOException { + public static void addTableRow(JspWriter out, String[] columns, int row) throws IOException { out.print(""); for (int i = 0; i < columns.length; i++) { @@ -185,17 +170,17 @@ } out.print(""); } - public void addTableFooter(JspWriter out) throws IOException { + public static void addTableFooter(JspWriter out) throws IOException { out.print("
"+columns[i]+"
"); } - public String getSafeModeText() { + public static String getSafeModeText(FSNamesystem fsn) { if (!fsn.isInSafeMode()) return ""; return "Safe mode is ON. " + fsn.getSafeModeTip() + "
"; } - public String getInodeLimitText() { + public static String getInodeLimitText(FSNamesystem fsn) { long inodes = fsn.dir.totalInodes(); long blocks = fsn.getBlocksTotal(); long maxobjects = fsn.getMaxObjects(); @@ -217,7 +202,7 @@ return str; } - public String getUpgradeStatusText() { + public static String getUpgradeStatusText(FSNamesystem fsn) { String statusText = ""; try { UpgradeStatusReport status = @@ -231,7 +216,7 @@ return statusText; } - public void sortNodeList(ArrayList nodes, + public static void sortNodeList(ArrayList nodes, String field, String order) { class NodeComapare implements Comparator { @@ -370,4 +355,20 @@ file = "..." + file.substring(start, file.length()); out.print("HDFS:" + file + ""); } + + /** Convert a String to chunk-size-to-view. */ + public static int string2ChunkSizeToView(String s) { + int n = s == null? 0: Integer.parseInt(s); + return n > 0? n: defaultChunkSizeToView; + } + + /** Return a table containing version information. */ + public static String getVersionTable(FSNamesystem fsn) { + return "
" + + "\n \n" + + "\n
Started:" + fsn.getStartTime() + "
Version:" + VersionInfo.getVersion() + ", " + VersionInfo.getRevision() + + "\n
Compiled:" + VersionInfo.getDate() + " by " + VersionInfo.getUser() + " from " + VersionInfo.getBranch() + + "\n
Upgrades:" + getUpgradeStatusText(fsn) + + "\n
"; + } } Modified: hadoop/core/trunk/src/webapps/datanode/browseBlock.jsp URL: http://svn.apache.org/viewvc/hadoop/core/trunk/src/webapps/datanode/browseBlock.jsp?rev=740064&r1=740063&r2=740064&view=diff ============================================================================== --- hadoop/core/trunk/src/webapps/datanode/browseBlock.jsp (original) +++ hadoop/core/trunk/src/webapps/datanode/browseBlock.jsp Mon Feb 2 18:36:01 2009 @@ -17,12 +17,11 @@ %> <%! - static JspHelper jspHelper = new JspHelper(); + static final DataNode datanode = DataNode.getDataNode(); public void generateFileDetails(JspWriter out, HttpServletRequest req) throws IOException { - int chunkSizeToView = 0; long startOffset = 0; int datanodePort; @@ -47,10 +46,7 @@ if (namenodeInfoPortStr != null) namenodeInfoPort = Integer.parseInt(namenodeInfoPortStr); - String chunkSizeToViewStr = req.getParameter("chunkSizeToView"); - if (chunkSizeToViewStr != null && Integer.parseInt(chunkSizeToViewStr) > 0) - chunkSizeToView = Integer.parseInt(chunkSizeToViewStr); - else chunkSizeToView = jspHelper.defaultChunkSizeToView; + final int chunkSizeToView = JspHelper.string2ChunkSizeToView(req.getParameter("chunkSizeToView")); String startOffsetStr = req.getParameter("startOffset"); if (startOffsetStr == null || Long.parseLong(startOffsetStr) < 0) @@ -71,7 +67,7 @@ } blockSize = Long.parseLong(blockSizeStr); - DFSClient dfs = new DFSClient(jspHelper.nameNodeAddr, jspHelper.conf); + final DFSClient dfs = new DFSClient(datanode.getNameNodeAddr(), JspHelper.conf); List blocks = dfs.namenode.getBlockLocations(filename, 0, Long.MAX_VALUE).getLocatedBlocks(); //Add the various links for looking at the file contents @@ -87,7 +83,7 @@ LocatedBlock lastBlk = blocks.get(blocks.size() - 1); long blockId = lastBlk.getBlock().getBlockId(); try { - chosenNode = jspHelper.bestNode(lastBlk); + chosenNode = JspHelper.bestNode(lastBlk); } catch (IOException e) { out.print(e.toString()); dfs.close(); @@ -157,7 +153,7 @@ } 
out.println(""); out.print("
"); - String namenodeHost = jspHelper.nameNodeAddr.getHostName(); + String namenodeHost = datanode.getNameNodeAddr().getHostName(); out.print("
Go back to DFS home"); @@ -168,7 +164,6 @@ throws IOException { long startOffset = 0; int datanodePort = 0; - int chunkSizeToView = 0; String namenodeInfoPortStr = req.getParameter("namenodeInfoPort"); int namenodeInfoPort = -1; @@ -208,10 +203,7 @@ } blockSize = Long.parseLong(blockSizeStr); - String chunkSizeToViewStr = req.getParameter("chunkSizeToView"); - if (chunkSizeToViewStr != null && Integer.parseInt(chunkSizeToViewStr) > 0) - chunkSizeToView = Integer.parseInt(chunkSizeToViewStr); - else chunkSizeToView = jspHelper.defaultChunkSizeToView; + final int chunkSizeToView = JspHelper.string2ChunkSizeToView(req.getParameter("chunkSizeToView")); String startOffsetStr = req.getParameter("startOffset"); if (startOffsetStr == null || Long.parseLong(startOffsetStr) < 0) @@ -240,7 +232,7 @@ out.print("
"); //Determine the prev & next blocks - DFSClient dfs = new DFSClient(jspHelper.nameNodeAddr, jspHelper.conf); + final DFSClient dfs = new DFSClient(datanode.getNameNodeAddr(), JspHelper.conf); long nextStartOffset = 0; long nextBlockSize = 0; String nextBlockIdStr = null; @@ -261,7 +253,7 @@ nextGenStamp = Long.toString(nextBlock.getBlock().getGenerationStamp()); nextStartOffset = 0; nextBlockSize = nextBlock.getBlock().getNumBytes(); - DatanodeInfo d = jspHelper.bestNode(nextBlock); + DatanodeInfo d = JspHelper.bestNode(nextBlock); String datanodeAddr = d.getName(); nextDatanodePort = Integer.parseInt( datanodeAddr.substring( @@ -315,7 +307,7 @@ if (prevStartOffset < 0) prevStartOffset = 0; prevBlockSize = prevBlock.getBlock().getNumBytes(); - DatanodeInfo d = jspHelper.bestNode(prevBlock); + DatanodeInfo d = JspHelper.bestNode(prevBlock); String datanodeAddr = d.getName(); prevDatanodePort = Integer.parseInt( datanodeAddr.substring( @@ -353,7 +345,7 @@ out.print("
"); out.print(""); dfs.close(); } Modified: hadoop/core/trunk/src/webapps/hdfs/dfshealth.jsp URL: http://svn.apache.org/viewvc/hadoop/core/trunk/src/webapps/hdfs/dfshealth.jsp?rev=740064&r1=740063&r2=740064&view=diff ============================================================================== --- hadoop/core/trunk/src/webapps/hdfs/dfshealth.jsp (original) +++ hadoop/core/trunk/src/webapps/hdfs/dfshealth.jsp Mon Feb 2 18:36:01 2009 @@ -17,8 +17,6 @@ import="java.net.URLEncoder" %> <%! - JspHelper jspHelper = new JspHelper(); - int rowNum = 0; int colNum = 0; @@ -161,7 +159,7 @@ FSNamesystem fsn = nn.getNamesystem(); ArrayList live = new ArrayList(); ArrayList dead = new ArrayList(); - jspHelper.DFSNodesStatus(live, dead); + fsn.DFSNodesStatus(live, dead); sorterField = request.getParameter("sorter/field"); sorterOrder = request.getParameter("sorter/order"); @@ -235,22 +233,15 @@
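The browseBlock.jsp hunks above replace duplicated chunkSizeToView request-parameter parsing with the new JspHelper.string2ChunkSizeToView. A standalone sketch of the same logic (class and method names are renamed for illustration):

```java
// Parse a request parameter, falling back to a default when the value is
// absent or non-positive. As in the original helper, a non-numeric value
// still throws NumberFormatException rather than falling back.
final class ChunkSize {
    static final int DEFAULT = 32 * 1024;  // mirrors dfs.default.chunk.view.size

    private ChunkSize() {}                 // utility class: no instances

    static int parse(String s) {
        int n = (s == null) ? 0 : Integer.parseInt(s);
        return (n > 0) ? n : DEFAULT;
    }
}
```

Centralizing this in one static method removes the three copies of the if/else that previously lived in the JSPs.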

NameNode '<%=namenodeLabel%>'

- - -
-
Started: <%= fsn.getStartTime()%> -
Version: <%= VersionInfo.getVersion()%>, <%= VersionInfo.getRevision()%> -
Compiled: <%= VersionInfo.getDate()%> by <%= VersionInfo.getUser()%> from <%= VersionInfo.getBranch()%> -
Upgrades: <%= jspHelper.getUpgradeStatusText()%> -

- +<%= JspHelper.getVersionTable(fsn) %> +
Browse the filesystem
Namenode Logs

Cluster Summary

- <%= jspHelper.getSafeModeText()%> - <%= jspHelper.getInodeLimitText()%> + <%= JspHelper.getSafeModeText(fsn)%> + <%= JspHelper.getInodeLimitText(fsn)%> <% generateDFSHealthReport(out, nn, request); %> Modified: hadoop/core/trunk/src/webapps/hdfs/dfsnodelist.jsp URL: http://svn.apache.org/viewvc/hadoop/core/trunk/src/webapps/hdfs/dfsnodelist.jsp?rev=740064&r1=740063&r2=740064&view=diff ============================================================================== --- hadoop/core/trunk/src/webapps/hdfs/dfsnodelist.jsp (original) +++ hadoop/core/trunk/src/webapps/hdfs/dfsnodelist.jsp Mon Feb 2 18:36:01 2009 @@ -16,8 +16,6 @@ import="java.net.URLEncoder" %> <%! - JspHelper jspHelper = new JspHelper(); - int rowNum = 0; int colNum = 0; @@ -127,7 +125,7 @@ throws IOException { ArrayList live = new ArrayList(); ArrayList dead = new ArrayList(); - jspHelper.DFSNodesStatus(live, dead); + nn.getNamesystem().DFSNodesStatus(live, dead); whatNodes = request.getParameter("whatNodes"); // show only live or only dead nodes sorterField = request.getParameter("sorter/field"); @@ -137,8 +135,8 @@ if ( sorterOrder == null ) sorterOrder = "ASC"; - jspHelper.sortNodeList(live, sorterField, sorterOrder); - jspHelper.sortNodeList(dead, "name", "ASC"); + JspHelper.sortNodeList(live, sorterField, sorterOrder); + JspHelper.sortNodeList(dead, "name", "ASC"); // Find out common suffix. Should this be before or after the sort? String port_suffix = null; @@ -203,7 +201,7 @@ NodeHeaderStr("pcremaining") + "> Remaining
(%) Blocks\n" ); - jspHelper.sortNodeList(live, sorterField, sorterOrder); + JspHelper.sortNodeList(live, sorterField, sorterOrder); for ( int i=0; i < live.size(); i++ ) { generateNodeData(out, live.get(i), port_suffix, true, nnHttpPort); } @@ -218,7 +216,7 @@ out.print( " " + "
Node \n" ); - jspHelper.sortNodeList(dead, "name", "ASC"); + JspHelper.sortNodeList(dead, "name", "ASC"); for ( int i=0; i < dead.size() ; i++ ) { generateNodeData(out, dead.get(i), port_suffix, false, nnHttpPort); } @@ -243,15 +241,8 @@

NameNode '<%=namenodeLabel%>'

- - -
-
Started: <%= fsn.getStartTime()%> -
Version: <%= VersionInfo.getVersion()%>, r<%= VersionInfo.getRevision()%> -
Compiled: <%= VersionInfo.getDate()%> by <%= VersionInfo.getUser()%> -
Upgrades: <%= jspHelper.getUpgradeStatusText()%> -

- +<%= JspHelper.getVersionTable(fsn) %> +
Browse the filesystem
Namenode Logs
Go back to DFS home
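Taken together, the JSP changes above convert JspHelper from a stateful object holding fsn into a stateless utility whose callers pass the FSNamesystem explicitly. A minimal sketch of the resulting shape (stand-in types, not the Hadoop classes):

```java
// After the refactoring: the helper stores no state; each call receives
// the system it should report on, so pages backed by different instances
// cannot cross wires.
class Status {                                  // stand-in for FSNamesystem
    final boolean inSafeMode;
    Status(boolean inSafeMode) { this.inSafeMode = inSafeMode; }
}

final class ReportHelper {                      // stand-in for JspHelper
    private ReportHelper() {}                   // utility class: no instances

    static String safeModeText(Status s) {
        return s.inSafeMode ? "Safe mode is ON." : "";
    }
}
```

A JSP then calls ReportHelper.safeModeText(fsn) with the namesystem it obtained from its own NameNode, mirroring the new getSafeModeText(FSNamesystem) signature.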