hadoop-common-commits mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Hadoop Wiki] Update of "Hbase/Troubleshooting" by AndrewPurtell
Date Wed, 16 Sep 2009 18:18:24 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.

The following page has been changed by AndrewPurtell:
http://wiki.apache.org/hadoop/Hbase/Troubleshooting

------------------------------------------------------------------------------
  
  [[Anchor(6)]]
  == 6. Problem: "No live nodes contain current block" ==
-  * See an exception with above message in logs (usually hadoop 0.18.x).
+  * See an exception with the above message in the logs.
  === Causes ===
+  * Insufficient file descriptors available at the OS level for DFS DataNodes
+  * Patch for HDFS-127 is not present (should not be an issue for HBase >= 0.20.0, as a
private Hadoop jar is shipped with the client-side fix applied)
   * Slow datanodes are marked as down by DFSClient; eventually all replicas are marked as
'bad' (HADOOP-3831).
-  * Insufficient file descriptors available at the OS level for DFS DataNodes.
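   To confirm a descriptor shortage, a quick check on a DataNode host might look like the following (a sketch, not from the original page; {{{pgrep}}} and {{{lsof}}} availability and the process-name match {{{DataNode}}} are assumptions):
     {{{
# Current per-process open-file limit for this shell's user
ulimit -n

# Approximate count of descriptors held by the first DataNode process found
# (prints 0 if no DataNode is running; errors from an empty match are discarded)
lsof -p "$(pgrep -f DataNode | head -n 1)" 2>/dev/null | wc -l
     }}}
   If the count reported by {{{lsof}}} approaches the {{{ulimit -n}}} value, the limit is the likely cause.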
  === Resolution ===
   * Increase the file descriptor limit of the user account under which the DFS DataNode processes
are operating. On most Linux systems, adding the following lines to /etc/security/limits.conf
will increase the file descriptor limit from the default of 1024 to 32768. Substitute the
actual user name for {{{<user>}}}. 
     {{{
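# A typical pair of entries for this purpose (an assumed sketch, not the
# original page's exact lines; substitute the real account name for <user>):
<user>   soft   nofile   32768
<user>   hard   nofile   32768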
@@ -149, +150 @@

   * Hadoop and HBase daemons require a 1GB heap, and therefore 1GB of RAM, per daemon. For
load intensive environments, HBase regionservers may require more heap than this. There must be
enough available RAM to comfortably hold the working sets of all Java processes running on the
instance, including any mapper or reducer tasks that may run co-located with system daemons.
Small and Medium instances do not have enough available RAM to contain typical Hadoop+HBase
deployments.

   * Hadoop and HBase daemons are latency sensitive. There should be enough free RAM that no
swapping occurs; swapping during garbage collection may cause JVM threads to be suspended
for a critically long time. There should also be sufficient virtual cores to service the
JVM threads whenever they become runnable. Large instances have two virtual cores, so they
can run the HDFS and HBase daemons concurrently, but nothing more. X-Large instances have four
virtual cores, so in addition to the HDFS and HBase daemons they can run two mappers or reducers
concurrently. Configure TaskTracker concurrency limits accordingly, or separate MapReduce
computation from storage functions. 
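   As an illustration of capping TaskTracker concurrency on an X-Large instance, the standard {{{mapred.tasktracker.*.tasks.maximum}}} properties can be set in the site configuration (the values below are an assumed sketch, not a recommendation from this page):
     {{{
<!-- mapred-site.xml (hadoop-site.xml on older releases) -->
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <!-- assumed cap: leave cores free for the HDFS and HBase daemons -->
  <value>2</value>
</property>
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>2</value>
</property>
     }}}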
  === Resolution ===
-  * Use X-Large (c1.xlarge)
+  * Use X-Large (c1.xlarge) instances
   * Consider splitting storage and computational function over disjoint instance sets. 
  
  [[Anchor(9)]]
