hadoop-common-commits mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Lucene-hadoop Wiki] Trivial Update of "Hbase/FAQ" by stack
Date Fri, 28 Dec 2007 18:42:02 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Lucene-hadoop Wiki" for change notification.

The following page has been changed by stack:

  Running an HBase loaded with more than a few regions, it's possible to blow past the operating
system's file-handle limit for the user running the process.  Running out of file handles is like an
OOME: things start to fail in strange ways.  To raise the user's file-handle limit, edit '''/etc/security/limits.conf'''
on all nodes and restart your cluster.
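  For example, the limit might be raised in '''/etc/security/limits.conf''' like so (a sketch; the
user name 'hadoop' and the limit of 32768 are illustrative assumptions, not hbase requirements):

{{{
# Raise the open-file limit for the user running the hbase/hadoop daemons
# (assumes that user is named 'hadoop'; adjust to your setup)
hadoop  soft  nofile  32768
hadoop  hard  nofile  32768
}}}

  After logging in again as that user, running 'ulimit -n' should report the new limit.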
- '''6. [[Anchor(6)]] Performance?'''
+ '''6. [[Anchor(6)]] What can I do to improve hbase performance?'''
  To improve random-read performance, if you can, try making the HDFS block size smaller (as
is suggested in the Bigtable paper).  By default it's 64MB; try setting it to 8MB.  On every
random read, HBase has to fetch from HDFS the blocks that contain the wanted row.  If your
rows are much smaller than the HDFS block size, then we fetch a lot of data only to discard
the bulk of it.  Meanwhile the big block fetches and their processing consume CPU, network,
etc. on the datanodes and in the hbase client.
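  An 8MB block size might be set in hadoop-site.xml like so (a sketch; dfs.block.size is the
Hadoop 0.x property name, and the value is in bytes):

{{{
<property>
  <name>dfs.block.size</name>
  <value>8388608</value>
  <description>HDFS block size: 8MB = 8 * 1024 * 1024 bytes.</description>
</property>
}}}

  Note that the smaller block size only applies to files written after the change; existing
files keep the block size they were written with.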
