hadoop-common-commits mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Hadoop Wiki] Update of "Hbase/Troubleshooting" by AndrewPurtell
Date Wed, 16 Sep 2009 18:07:27 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.

The following page has been changed by AndrewPurtell:

   * Hadoop and HBase daemons each require a 1 GB heap, and therefore 1 GB of RAM, per daemon. For load-intensive
environments, HBase regionservers may require more heap than this. There must be enough available
RAM to comfortably hold the working sets of all Java processes running on the instance, including
any mapper or reducer tasks that run co-located with the system daemons. Small and Medium
instances do not have enough available RAM for typical Hadoop+HBase deployments.

   * Hadoop and HBase daemons are latency sensitive. There should be enough free RAM that no
swapping occurs; swapping during garbage collection can suspend JVM threads for a critically
long time. There should also be sufficient virtual cores to service the JVM threads whenever
they become runnable. Large instances have two virtual cores, so they can run the HDFS and
HBase daemons concurrently, but nothing more. X-Large instances have four virtual cores, so
they can run two mappers or reducers concurrently in addition to the HDFS and HBase daemons.
Configure TaskTracker concurrency limits accordingly, or separate MapReduce computation from
storage functions.
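
One way to cap TaskTracker concurrency is in mapred-site.xml. The property names below are the Hadoop 0.20-era settings; the values of one map slot and one reduce slot are only an illustrative split of the two spare cores on an X-Large instance, not a recommendation:

```xml
<!-- mapred-site.xml: limit concurrent tasks per TaskTracker so that
     mapper/reducer tasks do not starve co-located HDFS/HBase daemons. -->
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>1</value>
</property>
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>1</value>
</property>
```

These limits take effect on TaskTracker restart.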
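
If regionservers need more than the default 1 GB heap, the heap size can be raised in conf/hbase-env.sh. The 4000 MB figure below is purely illustrative; size it against the instance's available RAM:

```sh
# conf/hbase-env.sh -- heap size for HBase daemons, in MB.
# 4000 is an example value only; leave headroom for the OS and
# any mapper/reducer tasks running on the same instance.
export HBASE_HEAPSIZE=4000
```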
  === Resolution ===
+  * Use X-Large (c1.xlarge)
-  * Use Large instances for HDFS and HBase storage tasks.
-  * Use X-Large instances if you are also running mappers and reducers co-located with system daemons.
   * Consider splitting storage and computational function over disjoint instance sets. 
