hadoop-common-commits mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Hadoop Wiki] Update of "Hbase/FAQ" by MaximVeksler
Date Thu, 29 Jul 2010 15:16:39 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.

The "Hbase/FAQ" page has been changed by MaximVeksler.
The comment on this change is: Clear /etc/security/limits.conf editing.


  Currently HBase is a file handle glutton.  Running an HBase instance loaded with more than
a few regions, it's possible to blow past the common 1024 default file handle limit for the
user running the process.  Running out of file handles is like an OOME; things start to fail
in strange ways.  To raise the user's file handle limit, edit '''/etc/security/limits.conf'''
on all nodes and restart your cluster.
+ {{{
- {{{# Each line describes a limit for a user in the form:
+ # Each line describes a limit for a user in the form:
  # domain    type    item    value
- hbase     -    nofile  32768}}}
+ hbase     -    nofile  32768
+ }}}
+ '''''hbase''' is the user under which HBase is running''. To test the configuration, log
in again (or reboot) and run '''ulimit -n'''.
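For example, a quick check from the shell (the second command assumes the service user is named ''hbase'' and must be run as root):

```shell
# Show the open-file limit for the current shell
ulimit -n
# Show the limit the hbase user would get on login
# ('hbase' is the assumed service user name; run as root)
su - hbase -c 'ulimit -n'
```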
- You may need to also edit sysctl.conf.
+ You may also need to edit /etc/sysctl.conf; the relevant setting is '''fs.file-max'''.
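A sketch of such an entry (the value 65536 is illustrative, not a recommendation):

```
# /etc/sysctl.conf -- raise the system-wide open file ceiling
fs.file-max = 65536
```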
  The math runs roughly as follows: per column family there is at least one mapfile, and
possibly up to 5 or 6 if a region is under load (let's say 3 per column family on average).
Multiply by the number of regions per region server.  So, for example, with a schema of 3
column families per region and 100 regions per regionserver, the JVM will open 3 * 3 * 100
mapfiles -- 900 file descriptors, not counting open jar files, conf files, etc.  (Run 'lsof
-p REGIONSERVER_PID' to see for sure.)
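The arithmetic above can be sketched in the shell (the numbers are the example figures from this paragraph):

```shell
# Rough estimate: mapfiles per column family * column families * regions
mapfiles_per_family=3
families=3
regions=100
echo $(( mapfiles_per_family * families * regions ))  # prints 900
```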
