hadoop-common-commits mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Hadoop Wiki] Update of "Hbase/FAQ" by MaximVeksler
Date Mon, 02 Aug 2010 18:23:51 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.

The "Hbase/FAQ" page has been changed by MaximVeksler.
http://wiki.apache.org/hadoop/Hbase/FAQ?action=diff&rev1=66&rev2=67

--------------------------------------------------

  }}}
  '''''hbase''' is the user under which HBase is running''. To test the configuration, reboot and run '''ulimit -n'''.
  
- You may also need to edit /etc/sysctl.conf; the relevant setting is '''fs.file-max'''. See http://thedaneshproject.com/posts/how-to-increase-total-file-descriptors-count-on-linux/
+ You may also need to edit /etc/sysctl.conf; the relevant setting is '''fs.file-max'''. See http://serverfault.com/questions/165316/how-to-configure-linux-file-descriptor-limit-with-fs-file-max-and-ulimit/
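
  To make these settings concrete, here is a minimal sketch; the limit values (32768 and 65536) are assumptions to tune for your cluster:
  {{{
# /etc/security/limits.conf -- raise the per-user open-file limit
# for the user that runs HBase (assumed here to be hbase):
hbase  soft  nofile  32768
hbase  hard  nofile  32768

# /etc/sysctl.conf -- raise the system-wide file descriptor ceiling:
fs.file-max = 65536
  }}}
  After editing, apply the sysctl change with '''sysctl -p''' and confirm the per-user limit with '''ulimit -n''' once logged back in as the hbase user.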
  
  The math runs roughly as follows: per column family there is at least one mapfile, and possibly up to 5 or 6 if a region is under load (let's say 3 per column family on average). Multiply by the number of regions per region server. So, for example, if you have a schema of 3 column families per region and 100 regions per regionserver, the JVM will open 3 * 3 * 100 = 900 file descriptors for mapfiles alone, not counting open jar files, conf files, etc. (run 'lsof -p REGIONSERVER_PID' to see for sure).
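
  To sanity-check the estimate against a live region server, compare it with an actual descriptor count (REGIONSERVER_PID is a placeholder for the real process id):
  {{{
# estimate: mapfiles per family * column families * regions per server
echo $((3 * 3 * 100))            # prints 900

# actual: count everything the region server holds open; expect this to
# exceed the estimate because of jars, conf files, sockets, etc.
lsof -p REGIONSERVER_PID | wc -l
  }}}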
  
