hadoop-hdfs-commits mailing list archives

From hair...@apache.org
Subject svn commit: r810323 - in /hadoop/hdfs/branches/HDFS-265: ./ src/contrib/hdfsproxy/ src/java/ src/java/org/apache/hadoop/hdfs/server/datanode/ src/java/org/apache/hadoop/hdfs/server/namenode/ src/test/ src/test/hdfs-with-mr/ src/test/hdfs/ src/test/hdfs...
Date Wed, 02 Sep 2009 00:54:27 GMT
Author: hairong
Date: Wed Sep  2 00:54:27 2009
New Revision: 810323

URL: http://svn.apache.org/viewvc?rev=810323&view=rev
Log:
Merge -r 808671:809439 from trunk to bring its changes to the append branch.
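
(For context: a range merge like this one is typically produced with the standard
Subversion commands below; the trunk URL is the one named in the paths that follow,
and the working copy is a checkout of the HDFS-265 branch. Shown for illustration.)

    svn merge -r 808671:809439 http://svn.apache.org/repos/asf/hadoop/hdfs/trunk .
    svn commit -m "Merge -r 808671:809439 from trunk to bring its changes to the append branch."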

Added:
    hadoop/hdfs/branches/HDFS-265/src/test/hdfs/org/apache/hadoop/hdfs/server/namenode/TestBlocksWithNotEnoughRacks.java
      - copied unchanged from r809439, hadoop/hdfs/trunk/src/test/hdfs/org/apache/hadoop/hdfs/server/namenode/TestBlocksWithNotEnoughRacks.java
    hadoop/hdfs/branches/HDFS-265/src/test/hdfs/org/apache/hadoop/hdfs/server/namenode/TestCorruptReplicaInfo.java
      - copied unchanged from r809439, hadoop/hdfs/trunk/src/test/hdfs/org/apache/hadoop/hdfs/server/namenode/TestCorruptReplicaInfo.java
    hadoop/hdfs/branches/HDFS-265/src/test/mapred-site.xml
      - copied unchanged from r809439, hadoop/hdfs/trunk/src/test/mapred-site.xml
    hadoop/hdfs/branches/HDFS-265/src/webapps/hdfs/block_info_xml.jsp
      - copied unchanged from r809439, hadoop/hdfs/trunk/src/webapps/hdfs/block_info_xml.jsp
    hadoop/hdfs/branches/HDFS-265/src/webapps/hdfs/corrupt_replicas_xml.jsp
      - copied unchanged from r809439, hadoop/hdfs/trunk/src/webapps/hdfs/corrupt_replicas_xml.jsp
Modified:
    hadoop/hdfs/branches/HDFS-265/   (props changed)
    hadoop/hdfs/branches/HDFS-265/CHANGES.txt
    hadoop/hdfs/branches/HDFS-265/build.xml   (contents, props changed)
    hadoop/hdfs/branches/HDFS-265/src/contrib/hdfsproxy/   (props changed)
    hadoop/hdfs/branches/HDFS-265/src/java/   (props changed)
    hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/datanode/DatanodeJspHelper.java
    hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInfo.java   (props changed)
    hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/BlockManager.java
    hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/CorruptReplicasMap.java
    hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
    hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/ListPathsServlet.java
    hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/NamenodeJspHelper.java
    hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/UnderReplicatedBlocks.java
    hadoop/hdfs/branches/HDFS-265/src/test/hdfs/   (props changed)
    hadoop/hdfs/branches/HDFS-265/src/test/hdfs-with-mr/   (props changed)
    hadoop/hdfs/branches/HDFS-265/src/webapps/datanode/   (props changed)
    hadoop/hdfs/branches/HDFS-265/src/webapps/hdfs/   (props changed)
    hadoop/hdfs/branches/HDFS-265/src/webapps/secondary/   (props changed)

Propchange: hadoop/hdfs/branches/HDFS-265/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Wed Sep  2 00:54:27 2009
@@ -1,2 +1,2 @@
 /hadoop/core/branches/branch-0.19/hdfs:713112
-/hadoop/hdfs/trunk:796829-800617,800619-803337,804756-805652
+/hadoop/hdfs/trunk:796829-800617,800619-803337,804756-805652,808672-809439
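
The svn:mergeinfo property is Subversion's record of which trunk revision ranges
have already been merged into the branch; this hunk (and the matching ones below)
appends the newly merged range 808672-809439. The same record can be inspected with:

    svn propget svn:mergeinfo http://svn.apache.org/repos/asf/hadoop/hdfs/branches/HDFS-265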

Modified: hadoop/hdfs/branches/HDFS-265/CHANGES.txt
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-265/CHANGES.txt?rev=810323&r1=810322&r2=810323&view=diff
==============================================================================
--- hadoop/hdfs/branches/HDFS-265/CHANGES.txt (original)
+++ hadoop/hdfs/branches/HDFS-265/CHANGES.txt Wed Sep  2 00:54:27 2009
@@ -29,6 +29,9 @@
     HDFS-565. Introduce block committing logic during new block allocation
     and file close. (shv)
 
+    HDFS-492. Add two JSON JSP pages to the Namenode for providing corrupt
+    blocks/replicas information.  (Bill Zeller via szetszwo)
+
   IMPROVEMENTS
 
     HDFS-381. Remove blocks from DataNode maps when corresponding file
@@ -119,6 +122,7 @@
     HDFS-552. Change TestFiDataTransferProtocol to junit 4 and add a few new
     tests.  (szetszwo)
 
+<<<<<<< .working
     HDFS-562. Add a test for NameNode.getBlockLocations(..) to check read from
     un-closed file.  (szetszwo)
 
@@ -129,6 +133,8 @@
     to be executed by the run-test-hdfs-fault-inject target.  (Konstantin
     Boudnik via szetszwo)
 
+=======
+>>>>>>> .merge-right.r809439
     HDFS-563. Simplify the codes in FSNamesystem.getBlockLocations(..).
     (szetszwo)
 
@@ -194,8 +200,11 @@
     HDFS-553. BlockSender reports wrong failed position in ChecksumException.
     (hairong)
 
-    HDFS-568. Update hadoop-mapred-examples-0.21.0-dev.jar for MAPREDUCE-874.
-    (szetszwo)
+    HDFS-568. Set mapred.job.tracker.retire.jobs to false in
+    src/test/mapred-site.xml for mapreduce tests to run.  (Amareshwari
+    Sriramadasu via szetszwo)
+ 
+    HDFS-15. All replicas end up on 1 rack. (Jitendra Nath Pandey via hairong)
  
 Release 0.20.1 - Unreleased
 

Modified: hadoop/hdfs/branches/HDFS-265/build.xml
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-265/build.xml?rev=810323&r1=810322&r2=810323&view=diff
==============================================================================
--- hadoop/hdfs/branches/HDFS-265/build.xml (original)
+++ hadoop/hdfs/branches/HDFS-265/build.xml Wed Sep  2 00:54:27 2009
@@ -636,7 +636,6 @@
         </batchtest>
         <batchtest todir="${test.build.dir}" if="tests.testcase.fi">
           <fileset dir="${test.src.dir}/aop" includes="**/${testcase}.java"/>
-          <fileset dir="${test.src.dir}/hdfs" includes="**/${testcase}.java"/>
         </batchtest>
       </junit>
       <antcall target="checkfailure"/>
@@ -696,7 +695,6 @@
       </batchtest>
       <batchtest todir="${test.build.dir}" if="tests.testcase.fi">
         <fileset dir="${test.src.dir}/aop" includes="**/${testcase}.java"/>
-        <fileset dir="${test.src.dir}/hdfs-with-mr" includes="**/${testcase}.java"/>
       </batchtest>
     </junit>
     <antcall target="checkfailure"/>

Propchange: hadoop/hdfs/branches/HDFS-265/build.xml
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Wed Sep  2 00:54:27 2009
@@ -1,3 +1,3 @@
 /hadoop/core/branches/branch-0.19/hdfs/build.xml:713112
 /hadoop/core/trunk/build.xml:779102
-/hadoop/hdfs/trunk/build.xml:796829-800617,800619-803337,804756-805652
+/hadoop/hdfs/trunk/build.xml:796829-800617,800619-803337,804756-805652,808672-809439

Propchange: hadoop/hdfs/branches/HDFS-265/src/contrib/hdfsproxy/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Wed Sep  2 00:54:27 2009
@@ -1,3 +1,3 @@
 /hadoop/core/branches/branch-0.19/hdfs/src/contrib/hdfsproxy:713112
 /hadoop/core/trunk/src/contrib/hdfsproxy:776175-784663
-/hadoop/hdfs/trunk/src/contrib/hdfsproxy:796829-800617,800619-803337,804756-805652
+/hadoop/hdfs/trunk/src/contrib/hdfsproxy:796829-800617,800619-803337,804756-805652,808672-809439

Propchange: hadoop/hdfs/branches/HDFS-265/src/java/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Wed Sep  2 00:54:27 2009
@@ -1,3 +1,3 @@
 /hadoop/core/branches/branch-0.19/hdfs/src/java:713112
 /hadoop/core/trunk/src/hdfs:776175-785643,785929-786278
-/hadoop/hdfs/trunk/src/java:796829-800617,800619-803337,804756-805652
+/hadoop/hdfs/trunk/src/java:796829-800617,800619-803337,804756-805652,808672-809439

Modified: hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/datanode/DatanodeJspHelper.java
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/datanode/DatanodeJspHelper.java?rev=810323&r1=810322&r2=810323&view=diff
==============================================================================
--- hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/datanode/DatanodeJspHelper.java (original)
+++ hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/datanode/DatanodeJspHelper.java Wed Sep  2 00:54:27 2009
@@ -258,6 +258,10 @@
     out.print("<B>Total number of blocks: " + blocks.size() + "</B><br>");
     // generate a table and dump the info
     out.println("\n<table>");
+    
+    String namenodeHost = datanode.getNameNodeAddr().getHostName();
+    String namenodeHostName = InetAddress.getByName(namenodeHost).getCanonicalHostName();
+    
     for (LocatedBlock cur : blocks) {
       out.print("<tr>");
       final String blockidstring = Long.toString(cur.getBlock().getBlockId());
@@ -277,14 +281,18 @@
             + "&genstamp=" + cur.getBlock().getGenerationStamp()
             + "&namenodeInfoPort=" + namenodeInfoPort
             + "&chunkSizeToView=" + chunkSizeToView;
+
+        String blockInfoUrl = "http://" + namenodeHostName + ":"
+            + namenodeInfoPort
+            + "/block_info_xml.jsp?blockId=" + blockidstring;
         out.print("<td>&nbsp</td><td><a href=\"" + blockUrl + "\">"
-            + datanodeAddr + "</a></td>");
+            + datanodeAddr + "</a></td><td>"
+            + "<a href=\"" + blockInfoUrl + "\">View Block Info</a></td>");
       }
       out.println("</tr>");
     }
     out.println("</table>");
     out.print("<hr>");
-    String namenodeHost = datanode.getNameNodeAddr().getHostName();
     out.print("<br><a href=\"http://"
         + InetAddress.getByName(namenodeHost).getCanonicalHostName() + ":"
         + namenodeInfoPort + "/dfshealth.jsp\">Go back to DFS home</a>");
@@ -577,4 +585,4 @@
     out.print("</textarea>");
     dfs.close();
   }
-}
\ No newline at end of file
+}
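
The change above hoists the NameNode host lookup out of the per-block loop and adds
a per-block "View Block Info" link pointing at the NameNode's new block_info_xml.jsp
page. As a rough illustration, that link target can be fetched with plain java.net
classes; the host, info port, and block id below are placeholders, not values from
this commit:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;

    public class FetchBlockInfo {
      public static void main(String[] args) throws Exception {
        // Placeholder NameNode host/info port and block id -- adjust for a real cluster.
        URL url = new URL(
            "http://namenode.example.com:50070/block_info_xml.jsp?blockId=1073741825");
        BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()));
        String line;
        while ((line = in.readLine()) != null) {
          System.out.println(line);   // prints the <block_info> XML document
        }
        in.close();
      }
    }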

Propchange: hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInfo.java
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Wed Sep  2 00:54:27 2009
@@ -1,3 +1,3 @@
 /hadoop/core/branches/branch-0.19/hdfs/src/java/org/apache/hadoop/hdfs/server/datanode/DatanodeBlockInfo.java:713112
 /hadoop/core/trunk/src/hdfs/org/apache/hadoop/hdfs/server/datanode/DatanodeBlockInfo.java:776175-785643,785929-786278
-/hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInfo.java:800619-803337,804756-805652
+/hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInfo.java:800619-803337,804756-805652,808672-809439

Modified: hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/BlockManager.java
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/BlockManager.java?rev=810323&r1=810322&r2=810323&view=diff
==============================================================================
--- hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/BlockManager.java (original)
+++ hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/BlockManager.java Wed Sep  2 00:54:27 2009
@@ -105,6 +105,9 @@
   // Default number of replicas
   int defaultReplication;
 
+  // variable to enable check for enough racks 
+  boolean shouldCheckForEnoughRacks = true;
+
   /**
    * Last block index used for replication work.
    */
@@ -155,10 +158,13 @@
                             + " must be less than dfs.replication.max = "
                             + maxReplication);
     this.maxReplicationStreams = conf.getInt("dfs.max-repl-streams", 2);
+    this.shouldCheckForEnoughRacks = conf.get("topology.script.file.name") == null ? false
+                                                                             : true;
     FSNamesystem.LOG.info("defaultReplication = " + defaultReplication);
     FSNamesystem.LOG.info("maxReplication = " + maxReplication);
     FSNamesystem.LOG.info("minReplication = " + minReplication);
     FSNamesystem.LOG.info("maxReplicationStreams = " + maxReplicationStreams);
+    FSNamesystem.LOG.info("shouldCheckForEnoughRacks = " + shouldCheckForEnoughRacks);
   }
 
   void activate() {
@@ -677,6 +683,7 @@
     int requiredReplication, numEffectiveReplicas;
     List<DatanodeDescriptor> containingNodes;
     DatanodeDescriptor srcNode;
+    int additionalReplRequired;
 
     synchronized (namesystem) {
       synchronized (neededReplications) {
@@ -688,6 +695,7 @@
           replIndex--;
           return false;
         }
+
         requiredReplication = fileINode.getReplication();
 
         // get a source data-node
@@ -704,21 +712,32 @@
         // do not schedule more if enough replicas is already pending
         numEffectiveReplicas = numReplicas.liveReplicas() +
                                 pendingReplications.getNumReplicas(block);
-        if(numEffectiveReplicas >= requiredReplication) {
-          neededReplications.remove(block, priority); // remove from neededReplications
-          replIndex--;
-          NameNode.stateChangeLog.info("BLOCK* "
-              + "Removing block " + block
-              + " from neededReplications as it has enough replicas.");
-          return false;
+      
+        if (numEffectiveReplicas >= requiredReplication) {
+          if ( (pendingReplications.getNumReplicas(block) > 0) ||
+               (blockHasEnoughRacks(block)) ) {
+            neededReplications.remove(block, priority); // remove from neededReplications
+            replIndex--;
+            NameNode.stateChangeLog.info("BLOCK* "
+                + "Removing block " + block
+                + " from neededReplications as it has enough replicas.");
+            return false;
+          }
+        }
+
+        if (numReplicas.liveReplicas() < requiredReplication) {
+          additionalReplRequired = requiredReplication - numEffectiveReplicas;
+        } else {
+          additionalReplRequired = 1; //Needed on a new rack
         }
+
       }
     }
 
     // choose replication targets: NOT HOLDING THE GLOBAL LOCK
-    DatanodeDescriptor targets[] = replicator.chooseTarget(
-        requiredReplication - numEffectiveReplicas,
-        srcNode, containingNodes, null, block.getNumBytes());
+    DatanodeDescriptor targets[] = 
+                       replicator.chooseTarget(additionalReplRequired,
+                       srcNode, containingNodes, null, block.getNumBytes());
     if(targets.length == 0)
       return false;
 
@@ -739,13 +758,25 @@
         NumberReplicas numReplicas = countNodes(block);
         numEffectiveReplicas = numReplicas.liveReplicas() +
         pendingReplications.getNumReplicas(block);
-        if(numEffectiveReplicas >= requiredReplication) {
-          neededReplications.remove(block, priority); // remove from neededReplications
-          replIndex--;
-          NameNode.stateChangeLog.info("BLOCK* "
-              + "Removing block " + block
-              + " from neededReplications as it has enough replicas.");
-          return false;
+
+        if (numEffectiveReplicas >= requiredReplication) {
+          if ( (pendingReplications.getNumReplicas(block) > 0) ||
+               (blockHasEnoughRacks(block)) ) {
+            neededReplications.remove(block, priority); // remove from neededReplications
+            replIndex--;
+            NameNode.stateChangeLog.info("BLOCK* "
+                + "Removing block " + block
+                + " from neededReplications as it has enough replicas.");
+            return false;
+          }
+        }
+
+        if ( (numReplicas.liveReplicas() >= requiredReplication) &&
+             (!blockHasEnoughRacks(block)) ) {
+          if (srcNode.getNetworkLocation().equals(targets[0].getNetworkLocation())) {
+            //No use continuing, unless a new rack in this case
+            return false;
+          }
         }
 
         // Add block to the to be replicated list
@@ -867,10 +898,13 @@
       synchronized (namesystem) {
         for (int i = 0; i < timedOutItems.length; i++) {
           NumberReplicas num = countNodes(timedOutItems[i]);
-          neededReplications.add(timedOutItems[i],
-                                 num.liveReplicas(),
-                                 num.decommissionedReplicas(),
-                                 getReplication(timedOutItems[i]));
+          if (isNeededReplication(timedOutItems[i], getReplication(timedOutItems[i]),
+                                 num.liveReplicas())) {
+            neededReplications.add(timedOutItems[i],
+                                   num.liveReplicas(),
+                                   num.decommissionedReplicas(),
+                                   getReplication(timedOutItems[i]));
+          }
         }
       }
       /* If we know the target datanodes where the replication timedout,
@@ -1122,9 +1156,11 @@
         NumberReplicas num = countNodes(block);
         int numCurrentReplica = num.liveReplicas();
         // add to under-replicated queue if need to be
-        if (neededReplications.add(block, numCurrentReplica, num
-            .decommissionedReplicas(), expectedReplication)) {
-          nrUnderReplicated++;
+        if (isNeededReplication(block, expectedReplication, numCurrentReplica)) {
+          if (neededReplications.add(block, numCurrentReplica, num
+              .decommissionedReplicas(), expectedReplication)) {
+            nrUnderReplicated++;
+          }
         }
 
         if (numCurrentReplica > expectedReplication) {
@@ -1303,8 +1339,11 @@
         NumberReplicas num = countNodes(block);
         int curReplicas = num.liveReplicas();
         int curExpectedReplicas = getReplication(block);
-        if (curExpectedReplicas > curReplicas) {
-          status = true;
+        if (isNeededReplication(block, curExpectedReplicas, curReplicas)) {
+          if (curExpectedReplicas > curReplicas) {
+            //Set to true only if strictly under-replicated
+            status = true;
+          }
           if (!neededReplications.contains(block) &&
             pendingReplications.getNumReplicas(block) == 0) {
             //
@@ -1357,16 +1396,23 @@
     synchronized (namesystem) {
       NumberReplicas repl = countNodes(block);
       int curExpectedReplicas = getReplication(block);
-      neededReplications.update(block, repl.liveReplicas(), repl
-          .decommissionedReplicas(), curExpectedReplicas, curReplicasDelta,
-          expectedReplicasDelta);
+      if (isNeededReplication(block, curExpectedReplicas, repl.liveReplicas())) {
+        neededReplications.update(block, repl.liveReplicas(), repl
+            .decommissionedReplicas(), curExpectedReplicas, curReplicasDelta,
+            expectedReplicasDelta);
+      } else {
+        int oldReplicas = repl.liveReplicas()-curReplicasDelta;
+        int oldExpectedReplicas = curExpectedReplicas-expectedReplicasDelta;
+        neededReplications.remove(block, oldReplicas, repl.decommissionedReplicas(),
+                                  oldExpectedReplicas);
+      }
     }
   }
 
   void checkReplication(Block block, int numExpectedReplicas) {
     // filter out containingNodes that are marked for decommission.
     NumberReplicas number = countNodes(block);
-    if (number.liveReplicas() < numExpectedReplicas) {
+    if (isNeededReplication(block, numExpectedReplicas, number.liveReplicas())) { 
       neededReplications.add(block,
                              number.liveReplicas(),
                              number.decommissionedReplicas,
@@ -1448,7 +1494,68 @@
       return blocksToInvalidate.size();
     }
   }
+  
+  //Returns the number of racks over which a given block is replicated
+  //decommissioning/decommissioned nodes are not counted. corrupt replicas 
+  //are also ignored
+  int getNumberOfRacks(Block b) {
+    HashSet<String> rackSet = new HashSet<String>(0);
+    Collection<DatanodeDescriptor> corruptNodes = 
+                                  corruptReplicas.getNodes(b);
+    for (Iterator<DatanodeDescriptor> it = blocksMap.nodeIterator(b); 
+         it.hasNext();) {
+      DatanodeDescriptor cur = it.next();
+      if (!cur.isDecommissionInProgress() && !cur.isDecommissioned()) {
+        if ((corruptNodes == null ) || !corruptNodes.contains(cur)) {
+          String rackName = cur.getNetworkLocation();
+          if (!rackSet.contains(rackName)) {
+            rackSet.add(rackName);
+          }
+        }
+      }
+    }
+    return rackSet.size();
+  }
+
+  boolean blockHasEnoughRacks(Block b) {
+    if (!this.shouldCheckForEnoughRacks) {
+      return true;
+    }
+    boolean enoughRacks = false;;
+    Collection<DatanodeDescriptor> corruptNodes = 
+                                  corruptReplicas.getNodes(b);
+    int numExpectedReplicas = getReplication(b);
+    String rackName = null;
+    for (Iterator<DatanodeDescriptor> it = blocksMap.nodeIterator(b); 
+         it.hasNext();) {
+      DatanodeDescriptor cur = it.next();
+      if (!cur.isDecommissionInProgress() && !cur.isDecommissioned()) {
+        if ((corruptNodes == null ) || !corruptNodes.contains(cur)) {
+          if (numExpectedReplicas == 1) {
+            enoughRacks = true;
+            break;
+          }
+          String rackNameNew = cur.getNetworkLocation();
+          if (rackName == null) {
+            rackName = rackNameNew;
+          } else if (!rackName.equals(rackNameNew)) {
+            enoughRacks = true;
+            break;
+          }
+        }
+      }
+    }
+    return enoughRacks;
+  }
 
+  boolean isNeededReplication(Block b, int expectedReplication, int curReplicas) {
+    if ((curReplicas >= expectedReplication) && (blockHasEnoughRacks(b))) {
+      return false;
+    } else {
+      return true;
+    }
+  }
+  
   long getMissingBlocksCount() {
     // not locking
     return Math.max(missingBlocksInPrevIter, missingBlocksInCurIter);
@@ -1483,4 +1590,26 @@
   float getLoadFactor() {
     return blocksMap.getLoadFactor();
   }
+  
+  
+  /**
+   * Return a range of corrupt replica block ids. Up to numExpectedBlocks 
+   * blocks starting at the next block after startingBlockId are returned
+   * (fewer if numExpectedBlocks blocks are unavailable). If startingBlockId 
+   * is null, up to numExpectedBlocks blocks are returned from the beginning.
+   * If startingBlockId cannot be found, null is returned.
+   *
+   * @param numExpectedBlocks Number of block ids to return.
+   *  0 <= numExpectedBlocks <= 100
+   * @param startingBlockId Block id from which to start. If null, start at
+   *  beginning.
+   * @return Up to numExpectedBlocks blocks from startingBlockId if it exists
+   *
+   */
+  long[] getCorruptReplicaBlockIds(int numExpectedBlocks,
+                                   Long startingBlockId) {
+    return corruptReplicas.getCorruptReplicaBlockIds(numExpectedBlocks,
+                                                     startingBlockId);
+  }  
+  
 }
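
The heart of the HDFS-15 fix is blockHasEnoughRacks() above: a block with
replication factor 1 is satisfied by any single live, non-corrupt replica, while
any higher replication factor requires replicas on at least two distinct racks.
A standalone sketch of that rule, with replica locations reduced to plain rack
names (illustrative only, not the HDFS API):

    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    class RackRuleSketch {
      // replicaRacks: rack name of each live, non-corrupt replica of the block.
      static boolean hasEnoughRacks(List<String> replicaRacks, int expectedReplicas) {
        if (expectedReplicas == 1) {
          return !replicaRacks.isEmpty();       // one replica: a single rack suffices
        }
        Set<String> distinctRacks = new HashSet<String>(replicaRacks);
        return distinctRacks.size() >= 2;       // otherwise replicas must span two racks
      }
    }

Note from the configuration hunk above: when topology.script.file.name is unset,
shouldCheckForEnoughRacks stays false and blockHasEnoughRacks() returns true
unconditionally, so flat (single-rack) clusters skip the check entirely.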

Modified: hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/CorruptReplicasMap.java
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/CorruptReplicasMap.java?rev=810323&r1=810322&r2=810323&view=diff
==============================================================================
--- hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/CorruptReplicasMap.java (original)
+++ hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/CorruptReplicasMap.java Wed Sep  2 00:54:27 2009
@@ -33,7 +33,7 @@
 
 public class CorruptReplicasMap{
 
-  private Map<Block, Collection<DatanodeDescriptor>> corruptReplicasMap =
+  private SortedMap<Block, Collection<DatanodeDescriptor>> corruptReplicasMap =
     new TreeMap<Block, Collection<DatanodeDescriptor>>();
   
   /**
@@ -126,4 +126,59 @@
   public int size() {
     return corruptReplicasMap.size();
   }
+
+  /**
+   * Return a range of corrupt replica block ids. Up to numExpectedBlocks 
+   * blocks starting at the next block after startingBlockId are returned
+   * (fewer if numExpectedBlocks blocks are unavailable). If startingBlockId 
+   * is null, up to numExpectedBlocks blocks are returned from the beginning.
+   * If startingBlockId cannot be found, null is returned.
+   *
+   * @param numExpectedBlocks Number of block ids to return.
+   *  0 <= numExpectedBlocks <= 100
+   * @param startingBlockId Block id from which to start. If null, start at
+   *  beginning.
+   * @return Up to numExpectedBlocks blocks from startingBlockId if it exists
+   *
+   */
+  long[] getCorruptReplicaBlockIds(int numExpectedBlocks,
+                                   Long startingBlockId) {
+    if (numExpectedBlocks < 0 || numExpectedBlocks > 100) {
+      return null;
+    }
+    
+    Iterator<Block> blockIt = corruptReplicasMap.keySet().iterator();
+    
+    // if the starting block id was specified, iterate over keys until
+    // we find the matching block. If we find a matching block, break
+    // to leave the iterator on the next block after the specified block. 
+    if (startingBlockId != null) {
+      boolean isBlockFound = false;
+      while (blockIt.hasNext()) {
+        Block b = blockIt.next();
+        if (b.getBlockId() == startingBlockId) {
+          isBlockFound = true;
+          break; 
+        }
+      }
+      
+      if (!isBlockFound) {
+        return null;
+      }
+    }
+
+    ArrayList<Long> corruptReplicaBlockIds = new ArrayList<Long>();
+
+    // append up to numExpectedBlocks blockIds to our list
+    for(int i=0; i<numExpectedBlocks && blockIt.hasNext(); i++) {
+      corruptReplicaBlockIds.add(blockIt.next().getBlockId());
+    }
+    
+    long[] ret = new long[corruptReplicaBlockIds.size()];
+    for(int i=0; i<ret.length; i++) {
+      ret[i] = corruptReplicaBlockIds.get(i);
+    }
+    
+    return ret;
+  }  
 }
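
The new getCorruptReplicaBlockIds() is a cursor-style pager: pass null to start
from the first corrupt block, then pass the last id of each page back in to resume
after it. A hedged usage sketch (a same-package caller is assumed, since the
method is package-private):

    // Page through corrupt replica block ids 100 at a time (sketch).
    static void dumpCorruptBlockIds(CorruptReplicasMap corruptReplicas) {
      Long cursor = null;                     // null => start at the beginning
      long[] page;
      while ((page = corruptReplicas.getCorruptReplicaBlockIds(100, cursor)) != null
             && page.length > 0) {
        for (long blockId : page) {
          System.out.println("corrupt block id: " + blockId);
        }
        cursor = page[page.length - 1];       // resume after the last id returned
      }
    }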

Modified: hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java?rev=810323&r1=810322&r2=810323&view=diff
==============================================================================
--- hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java (original)
+++ hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java Wed Sep  2 00:54:27 2009
@@ -3743,4 +3743,25 @@
   DatanodeDescriptor getDatanode(String nodeID) {
     return datanodeMap.get(nodeID);
   }
+
+  /**
+   * Return a range of corrupt replica block ids. Up to numExpectedBlocks 
+   * blocks starting at the next block after startingBlockId are returned
+   * (fewer if numExpectedBlocks blocks are unavailable). If startingBlockId 
+   * is null, up to numExpectedBlocks blocks are returned from the beginning.
+   * If startingBlockId cannot be found, null is returned.
+   *
+   * @param numExpectedBlocks Number of block ids to return.
+   *  0 <= numExpectedBlocks <= 100
+   * @param startingBlockId Block id from which to start. If null, start at
+   *  beginning.
+   * @return Up to numExpectedBlocks blocks from startingBlockId if it exists
+   *
+   */
+  long[] getCorruptReplicaBlockIds(int numExpectedBlocks,
+                                   Long startingBlockId) {  
+    return blockManager.getCorruptReplicaBlockIds(numExpectedBlocks,
+                                                  startingBlockId);
+  }
+
 }

Modified: hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/ListPathsServlet.java
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/ListPathsServlet.java?rev=810323&r1=810322&r2=810323&view=diff
==============================================================================
--- hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/ListPathsServlet.java (original)
+++ hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/ListPathsServlet.java Wed Sep  2 00:54:27 2009
@@ -165,11 +165,10 @@
         }
         catch(RemoteException re) {re.writeXml(p, doc);}
       }
-    } finally {
       if (doc != null) {
         doc.endDocument();
       }
-
+    } finally {
       if (out != null) {
         out.close();
       }
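
The fix above moves doc.endDocument() from the finally block onto the normal path:
finishing the XML document is part of producing a well-formed response and should
only happen when the body was written successfully, while closing the output stream
must happen unconditionally. In outline (writeBody is a stand-in for the servlet's
listing loop, not a method in this commit):

    try {
      writeBody(doc);          // may throw; a failed response should not be "ended"
      if (doc != null) {
        doc.endDocument();     // only on the success path
      }
    } finally {
      if (out != null) {
        out.close();           // always release the stream
      }
    }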

Modified: hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/NamenodeJspHelper.java
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/NamenodeJspHelper.java?rev=810323&r1=810322&r2=810323&view=diff
==============================================================================
--- hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/NamenodeJspHelper.java (original)
+++ hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/NamenodeJspHelper.java Wed Sep  2 00:54:27 2009
@@ -28,6 +28,8 @@
 import javax.servlet.http.HttpServletResponse;
 import javax.servlet.jsp.JspWriter;
 
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.protocol.Block;
 import org.apache.hadoop.hdfs.protocol.DatanodeID;
 import org.apache.hadoop.hdfs.protocol.FSConstants.UpgradeAction;
 import org.apache.hadoop.hdfs.server.common.JspHelper;
@@ -38,6 +40,8 @@
 import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.util.VersionInfo;
 
+import org.znerd.xmlenc.*;
+
 class NamenodeJspHelper {
   static String getSafeModeText(FSNamesystem fsn) {
     if (!fsn.isInSafeMode())
@@ -449,4 +453,195 @@
       }
     }
   }
+  
+  // utility class used in block_info_xml.jsp
+  static class XMLBlockInfo {
+    final Block block;
+    final INodeFile inode;
+    final FSNamesystem fsn;
+    
+    public XMLBlockInfo(FSNamesystem fsn, Long blockId) {
+      this.fsn = fsn;
+      if (blockId == null) {
+        this.block = null;
+        this.inode = null;
+      } else {
+        this.block = new Block(blockId);
+        this.inode = fsn.blockManager.getINode(block);
+      }
+    }
+
+    private String getLocalParentDir(INode inode) {
+      StringBuilder pathBuf = new StringBuilder();
+      INode node = inode;
+      
+      // loop up to directory root, prepending each directory name to buffer
+      while ((node = node.getParent()) != null && node.getLocalName() != "") {
+        pathBuf.insert(0, '/').insert(0, node.getLocalName());
+      }
+
+      return pathBuf.toString();
+    }
+
+    public void toXML(XMLOutputter doc) throws IOException {
+      doc.startTag("block_info");
+      if (block == null) {
+        doc.startTag("error");
+        doc.pcdata("blockId must be a Long");
+        doc.endTag();
+      }else{
+        doc.startTag("block_id");
+        doc.pcdata(""+block.getBlockId());
+        doc.endTag();
+
+        doc.startTag("block_name");
+        doc.pcdata(block.getBlockName());
+        doc.endTag();
+
+        if (inode != null) {
+          doc.startTag("file");
+
+          doc.startTag("local_name");
+          doc.pcdata(inode.getLocalName());
+          doc.endTag();
+
+          doc.startTag("local_directory");
+          doc.pcdata(getLocalParentDir(inode));
+          doc.endTag();
+
+          doc.startTag("user_name");
+          doc.pcdata(inode.getUserName());
+          doc.endTag();
+
+          doc.startTag("group_name");
+          doc.pcdata(inode.getGroupName());
+          doc.endTag();
+
+          doc.startTag("is_directory");
+          doc.pcdata(""+inode.isDirectory());
+          doc.endTag();
+
+          doc.startTag("access_time");
+          doc.pcdata(""+inode.getAccessTime());
+          doc.endTag();
+
+          doc.startTag("is_under_construction");
+          doc.pcdata(""+inode.isUnderConstruction());
+          doc.endTag();
+
+          doc.startTag("ds_quota");
+          doc.pcdata(""+inode.getDsQuota());
+          doc.endTag();
+
+          doc.startTag("permission_status");
+          doc.pcdata(inode.getPermissionStatus().toString());
+          doc.endTag();
+
+          doc.startTag("replication");
+          doc.pcdata(""+inode.getReplication());
+          doc.endTag();
+
+          doc.startTag("disk_space_consumed");
+          doc.pcdata(""+inode.diskspaceConsumed());
+          doc.endTag();
+
+          doc.startTag("preferred_block_size");
+          doc.pcdata(""+inode.getPreferredBlockSize());
+          doc.endTag();
+
+          doc.endTag(); // </file>
+        } 
+
+        doc.startTag("replicas");
+       
+        if (fsn.blockManager.blocksMap.contains(block)) {
+          Iterator<DatanodeDescriptor> it =
+            fsn.blockManager.blocksMap.nodeIterator(block);
+
+          while (it.hasNext()) {
+            doc.startTag("replica");
+
+            DatanodeDescriptor dd = it.next();
+
+            doc.startTag("host_name");
+            doc.pcdata(dd.getHostName());
+            doc.endTag();
+
+            boolean isCorrupt = fsn.getCorruptReplicaBlockIds(0,
+                                  block.getBlockId()) != null;
+            
+            doc.startTag("is_corrupt");
+            doc.pcdata(""+isCorrupt);
+            doc.endTag();
+            
+            doc.endTag(); // </replica>
+          }
+
+        } 
+        doc.endTag(); // </replicas>
+                
+      }
+      
+      doc.endTag(); // </block_info>
+      
+    }
+  }
+  
+  // utility class used in corrupt_replicas_xml.jsp
+  static class XMLCorruptBlockInfo {
+    final FSNamesystem fsn;
+    final Configuration conf;
+    final Long startingBlockId;
+    final int numCorruptBlocks;
+    
+    public XMLCorruptBlockInfo(FSNamesystem fsn, Configuration conf,
+                               int numCorruptBlocks, Long startingBlockId) {
+      this.fsn = fsn;
+      this.conf = conf;
+      this.numCorruptBlocks = numCorruptBlocks;
+      this.startingBlockId = startingBlockId;
+    }
+
+
+    public void toXML(XMLOutputter doc) throws IOException {
+      
+      doc.startTag("corrupt_block_info");
+      
+      if (numCorruptBlocks < 0 || numCorruptBlocks > 100) {
+        doc.startTag("error");
+        doc.pcdata("numCorruptBlocks must be >= 0 and <= 100");
+        doc.endTag();
+      }
+      
+      doc.startTag("dfs_replication");
+      doc.pcdata(""+conf.getInt("dfs.replication", 3));
+      doc.endTag();
+      
+      doc.startTag("num_missing_blocks");
+      doc.pcdata(""+fsn.getMissingBlocksCount());
+      doc.endTag();
+      
+      doc.startTag("num_corrupt_replica_blocks");
+      doc.pcdata(""+fsn.getCorruptReplicaBlocks());
+      doc.endTag();
+     
+      doc.startTag("corrupt_replica_block_ids");
+      long[] corruptBlockIds
+        = fsn.getCorruptReplicaBlockIds(numCorruptBlocks,
+                                        startingBlockId);
+      if (corruptBlockIds != null) {
+        for (Long blockId: corruptBlockIds) {
+          doc.startTag("block_id");
+          doc.pcdata(""+blockId);
+          doc.endTag();
+        }
+      }
+      
+      doc.endTag(); // </corrupt_replica_block_ids>
+
+      doc.endTag(); // </corrupt_block_info>
+      
+      doc.getWriter().flush();
+    }
+  }    
 }
\ No newline at end of file
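
Putting the toXML() methods above together, a successful response from
block_info_xml.jsp has roughly the following shape (values are placeholders;
several <file> children are omitted for brevity):

    <block_info>
      <block_id>1073741825</block_id>
      <block_name>blk_1073741825</block_name>
      <file>
        <local_name>part-00000</local_name>
        <local_directory>user/hadoop/output/</local_directory>
        <replication>3</replication>
        <!-- user_name, group_name, access_time, quotas, etc. omitted -->
      </file>
      <replicas>
        <replica>
          <host_name>datanode1.example.com</host_name>
          <is_corrupt>false</is_corrupt>
        </replica>
      </replicas>
    </block_info>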

Modified: hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/UnderReplicatedBlocks.java
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/UnderReplicatedBlocks.java?rev=810323&r1=810322&r2=810323&view=diff
==============================================================================
--- hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/UnderReplicatedBlocks.java (original)
+++ hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/UnderReplicatedBlocks.java Wed Sep  2 00:54:27 2009
@@ -26,7 +26,7 @@
  * Blocks have only one replicas has the highest
  */
 class UnderReplicatedBlocks implements Iterable<Block> {
-  static final int LEVEL = 3;
+  static final int LEVEL = 4;
   private List<TreeSet<Block>> priorityQueues = new ArrayList<TreeSet<Block>>();
       
   /* constructor */
@@ -53,7 +53,7 @@
     }
     return size;
   }
-        
+
   /* Check if a block is in the neededReplication queue */
   synchronized boolean contains(Block block) {
     for(TreeSet<Block> set:priorityQueues) {
@@ -71,8 +71,10 @@
                           int curReplicas, 
                           int decommissionedReplicas,
                           int expectedReplicas) {
-    if (curReplicas<0 || curReplicas>=expectedReplicas) {
-      return LEVEL; // no need to replicate
+    if (curReplicas<0) {
+      return LEVEL;
+    } else if (curReplicas>=expectedReplicas) {
+      return 3; // Block doesn't have enough racks
     } else if(curReplicas==0) {
       // If there are zero non-decommissioned replica but there are
       // some decommissioned replicas, then assign them highest priority
@@ -99,7 +101,7 @@
                            int curReplicas, 
                            int decomissionedReplicas,
                            int expectedReplicas) {
-    if(curReplicas<0 || expectedReplicas <= curReplicas) {
+    if(curReplicas<0) {
       return false;
     }
     int priLevel = getPriority(block, curReplicas, decomissionedReplicas,
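
Raising LEVEL from 3 to 4 adds a fourth, lowest-priority queue for blocks that
meet their replication count but not the rack requirement: getPriority() now
returns 3 for curReplicas >= expectedReplicas instead of declaring the block
healthy, and add() no longer rejects such blocks. A condensed sketch of the
adjusted decision (the under-replicated tiers are simplified; callers filter
with BlockManager.isNeededReplication() first, so a fully replicated block that
reaches these queues is rack-poor):

    // Condensed sketch of the priority decision after this change.
    static int prioritySketch(int curReplicas, int expectedReplicas) {
      final int LEVEL = 4;            // number of queues (was 3)
      if (curReplicas < 0) {
        return LEVEL;                 // invalid count: not queued at all
      } else if (curReplicas >= expectedReplicas) {
        return 3;                     // enough replicas, but not enough racks
      } else if (curReplicas == 1) {
        return 0;                     // highest priority: down to the last replica
      } else {
        return 2;                     // remaining under-replicated blocks (simplified)
      }
    }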

Propchange: hadoop/hdfs/branches/HDFS-265/src/test/hdfs/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Wed Sep  2 00:54:27 2009
@@ -1,3 +1,3 @@
 /hadoop/core/branches/branch-0.19/hdfs/src/test/hdfs:713112
 /hadoop/core/trunk/src/test/hdfs:776175-785643
-/hadoop/hdfs/trunk/src/test/hdfs:796829-800617,800619-803337,804756-805652
+/hadoop/hdfs/trunk/src/test/hdfs:796829-800617,800619-803337,804756-805652,808672-809439

Propchange: hadoop/hdfs/branches/HDFS-265/src/test/hdfs-with-mr/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Wed Sep  2 00:54:27 2009
@@ -1,3 +1,3 @@
 /hadoop/core/branches/branch-0.19/hdfs/src/test/hdfs-with-mr:713112
 /hadoop/core/trunk/src/test/hdfs-with-mr:776175-784663
-/hadoop/hdfs/trunk/src/test/hdfs-with-mr:796829-800617,800619-803337,804756-805652
+/hadoop/hdfs/trunk/src/test/hdfs-with-mr:796829-800617,800619-803337,804756-805652,808672-809439

Propchange: hadoop/hdfs/branches/HDFS-265/src/webapps/datanode/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Wed Sep  2 00:54:27 2009
@@ -1,3 +1,3 @@
 /hadoop/core/branches/branch-0.19/hdfs/src/webapps/datanode:713112
 /hadoop/core/trunk/src/webapps/datanode:776175-784663
-/hadoop/hdfs/trunk/src/webapps/datanode:796829-800617,800619-803337,804756-805652
+/hadoop/hdfs/trunk/src/webapps/datanode:796829-800617,800619-803337,804756-805652,808672-809439

Propchange: hadoop/hdfs/branches/HDFS-265/src/webapps/hdfs/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Wed Sep  2 00:54:27 2009
@@ -1,3 +1,3 @@
 /hadoop/core/branches/branch-0.19/hdfs/src/webapps/hdfs:713112
 /hadoop/core/trunk/src/webapps/hdfs:776175-784663
-/hadoop/hdfs/trunk/src/webapps/hdfs:796829-800617,800619-803337,804756-805652
+/hadoop/hdfs/trunk/src/webapps/hdfs:796829-800617,800619-803337,804756-805652,808672-809439

Propchange: hadoop/hdfs/branches/HDFS-265/src/webapps/secondary/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Wed Sep  2 00:54:27 2009
@@ -1,3 +1,3 @@
 /hadoop/core/branches/branch-0.19/hdfs/src/webapps/secondary:713112
 /hadoop/core/trunk/src/webapps/secondary:776175-784663
-/hadoop/hdfs/trunk/src/webapps/secondary:796829-800617,800619-803337,804756-805652
+/hadoop/hdfs/trunk/src/webapps/secondary:796829-800617,800619-803337,804756-805652,808672-809439


