hadoop-common-commits mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Hadoop Wiki] Trivial Update of "FAQ" by RaviPhulari
Date Wed, 31 Mar 2010 04:23:08 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.

The "FAQ" page has been changed by RaviPhulari.
http://wiki.apache.org/hadoop/FAQ?action=diff&rev1=68&rev2=69

--------------------------------------------------

  
  So, the simple answer is that 4-6Gbps is most likely just fine for most practical jobs.
If you want to be extra safe, many inexpensive switches can operate in a "stacked" configuration
where the bandwidth between them is essentially backplane speed. That should scale you to
96 nodes with plenty of headroom. Many inexpensive gigabit switches also have one or two 10GigE
ports which can be used to connect the switches to each other or to a 10GigE core.
  
+ '''29. [[#29|How do I limit a DataNode's disk usage?]]'''
+ 
+ Use the dfs.datanode.du.reserved configuration value in $HADOOP_HOME/conf/hdfs-site.xml to limit disk usage.
+ 
+ {{{
+   <property>
+     <name>dfs.datanode.du.reserved</name>
+     <!-- cluster variant -->
+     <value>182400</value>
+     <description>Reserved space in bytes per volume. Always leave this much space free for non dfs use.
+     </description>
+   </property>
+ }}} 
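+ 
+ One way to check that the reservation took effect (a sketch, assuming a standard $HADOOP_HOME layout with the stock daemon scripts and a running cluster) is to restart the DataNode so it re-reads hdfs-site.xml and then compare the capacity it reports:
+ 
+ {{{
+ # Restart the DataNode so the new hdfs-site.xml value is picked up.
+ $HADOOP_HOME/bin/hadoop-daemon.sh stop datanode
+ $HADOOP_HOME/bin/hadoop-daemon.sh start datanode
+ 
+ # The per-node capacity in the report should reflect the reserved space.
+ $HADOOP_HOME/bin/hadoop dfsadmin -report
+ }}}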
+ 
