hadoop-common-commits mailing list archives

From s..@apache.org
Subject svn commit: r769623 - in /hadoop/core/trunk: CHANGES.txt src/docs/src/documentation/content/xdocs/hdfs_design.xml src/hdfs/org/apache/hadoop/hdfs/server/namenode/ReplicationTargetChooser.java
Date Wed, 29 Apr 2009 01:40:02 GMT
Author: shv
Date: Wed Apr 29 01:40:01 2009
New Revision: 769623

URL: http://svn.apache.org/viewvc?rev=769623&view=rev
Log:
HADOOP-5734. Correct block placement policy description in HDFS Design document. Contributed
by Konstantin Boudnik.

Modified:
    hadoop/core/trunk/CHANGES.txt
    hadoop/core/trunk/src/docs/src/documentation/content/xdocs/hdfs_design.xml
    hadoop/core/trunk/src/hdfs/org/apache/hadoop/hdfs/server/namenode/ReplicationTargetChooser.java

Modified: hadoop/core/trunk/CHANGES.txt
URL: http://svn.apache.org/viewvc/hadoop/core/trunk/CHANGES.txt?rev=769623&r1=769622&r2=769623&view=diff
==============================================================================
--- hadoop/core/trunk/CHANGES.txt (original)
+++ hadoop/core/trunk/CHANGES.txt Wed Apr 29 01:40:01 2009
@@ -262,6 +262,9 @@
     HADOOP-5589. Eliminate source limit of 64 for map-side joins imposed by
     TupleWritable encoding. (Jingkei Ly via cdouglas)
 
+    HADOOP-5734. Correct block placement policy description in HDFS
+    Design document. (Konstantin Boudnik via shv)
+
   OPTIMIZATIONS
 
     HADOOP-5595. NameNode does not need to run a replicator to choose a

Modified: hadoop/core/trunk/src/docs/src/documentation/content/xdocs/hdfs_design.xml
URL: http://svn.apache.org/viewvc/hadoop/core/trunk/src/docs/src/documentation/content/xdocs/hdfs_design.xml?rev=769623&r1=769622&r2=769623&view=diff
==============================================================================
--- hadoop/core/trunk/src/docs/src/documentation/content/xdocs/hdfs_design.xml (original)
+++ hadoop/core/trunk/src/docs/src/documentation/content/xdocs/hdfs_design.xml Wed Apr 29 01:40:01 2009
@@ -140,7 +140,7 @@
         The NameNode determines the rack id each DataNode belongs to via the process outlined
in <a href="cluster_setup.html#Hadoop+Rack+Awareness">Rack Awareness</a>. A simple
but non-optimal policy is to place replicas on unique racks. This prevents losing data when
an entire rack fails and allows use of bandwidth from multiple racks when reading data. This
policy evenly distributes replicas in the cluster which makes it easy to balance load on component
failure. However, this policy increases the cost of writes because a write needs to transfer
blocks to multiple racks. 
         </p>
         <p>
-        For the common case, when the replication factor is three, HDFS&#x2019;s placement
policy is to put one replica on one node in the local rack, another on a different node in
the local rack, and the last on a different node in a different rack. This policy cuts the
inter-rack write traffic which generally improves write performance. The chance of rack failure
is far less than that of node failure; this policy does not impact data reliability and availability
guarantees. However, it does reduce the aggregate network bandwidth used when reading data
since a block is placed in only two unique racks rather than three. With this policy, the
replicas of a file do not evenly distribute across the racks. One third of replicas are on
one node, two thirds of replicas are on one rack, and the other third are evenly distributed
across the remaining racks. This policy improves write performance without compromising data
reliability or read performance.
+        For the common case, when the replication factor is three, HDFS&#x2019;s placement
policy is to put one replica on one node in the local rack, another on a node in a different
(remote) rack, and the last on a different node in the same remote rack. This policy cuts
the inter-rack write traffic which generally improves write performance. The chance of rack
failure is far less than that of node failure; this policy does not impact data reliability
and availability guarantees. However, it does reduce the aggregate network bandwidth used
when reading data since a block is placed in only two unique racks rather than three. With
this policy, the replicas of a file do not evenly distribute across the racks. One third of
replicas are on one node, two thirds of replicas are on one rack, and the other third are
evenly distributed across the remaining racks. This policy improves write performance without
compromising data reliability or read performance.
         </p>
         <p>
         The current, default replica placement policy described here is a work in progress.

Modified: hadoop/core/trunk/src/hdfs/org/apache/hadoop/hdfs/server/namenode/ReplicationTargetChooser.java
URL: http://svn.apache.org/viewvc/hadoop/core/trunk/src/hdfs/org/apache/hadoop/hdfs/server/namenode/ReplicationTargetChooser.java?rev=769623&r1=769622&r2=769623&view=diff
==============================================================================
--- hadoop/core/trunk/src/hdfs/org/apache/hadoop/hdfs/server/namenode/ReplicationTargetChooser.java (original)
+++ hadoop/core/trunk/src/hdfs/org/apache/hadoop/hdfs/server/namenode/ReplicationTargetChooser.java Wed Apr 29 01:40:01 2009
@@ -32,7 +32,7 @@
  * the 1st replica is placed on the local machine, 
  * otherwise a random datanode. The 2nd replica is placed on a datanode
  * that is on a different rack. The 3rd replica is placed on a datanode
- * which is on the same rack as the first replica.
+ * which is on a different node in the same rack as the second replica.
  */
 class ReplicationTargetChooser {
   private final boolean considerLoad; 

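The corrected placement rule (first replica on the writer's local node, second replica on a node in a remote rack, third replica on a different node in that same remote rack) can be sketched as follows. This is a minimal, hypothetical Java illustration of the rule as described in the patched documentation, not the actual ReplicationTargetChooser implementation; the `Node` record and `chooseTargets` method are invented for this example.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the default 3-replica placement policy described
// in this commit. Assumes a simplified cluster model (a flat list of
// node/rack pairs); the real chooser also weighs load, space, and topology.
class PlacementSketch {
    record Node(String name, String rack) {}

    static List<Node> chooseTargets(Node writer, List<Node> cluster) {
        List<Node> targets = new ArrayList<>();
        // 1st replica: the writer's local node.
        targets.add(writer);
        // 2nd replica: any node on a different (remote) rack.
        Node second = cluster.stream()
            .filter(n -> !n.rack().equals(writer.rack()))
            .findFirst().orElseThrow();
        targets.add(second);
        // 3rd replica: a different node on the SAME remote rack as the 2nd,
        // so the block lands on exactly two unique racks.
        Node third = cluster.stream()
            .filter(n -> n.rack().equals(second.rack()) && !n.equals(second))
            .findFirst().orElseThrow();
        targets.add(third);
        return targets;
    }

    public static void main(String[] args) {
        List<Node> cluster = List.of(
            new Node("n1", "rackA"), new Node("n2", "rackA"),
            new Node("n3", "rackB"), new Node("n4", "rackB"));
        // Writer on n1/rackA: replicas land on n1, then two nodes of rackB.
        System.out.println(chooseTargets(cluster.get(0), cluster));
    }
}
```

Note how this differs from the pre-patch wording: the third replica shares a rack with the *second* replica (the remote rack), not with the first, which is why only two unique racks hold the block.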

