hbase-user mailing list archives

From Vidhyashankar Venkataraman <vidhy...@yahoo-inc.com>
Subject Re: Performance at large number of regions/node
Date Fri, 28 May 2010 17:12:18 GMT
I am not sure if I understood this right, but does changing hfile.block.cache.size also help?
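(Editorial note, not from the original thread: hfile.block.cache.size sets the fraction of the region server heap given to the read block cache, 0.2 by default in this era. On a write-heavy upload it mostly just frees heap rather than speeding writes, so one might lower it. A hedged hbase-site.xml sketch, value illustrative only:)

```xml
<!-- hbase-site.xml: shrink the read block cache on a write-heavy cluster.
     Default is 0.2 (20% of heap); 0.1 below is only an example value. -->
<property>
  <name>hfile.block.cache.size</name>
  <value>0.1</value>
</property>
```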

On 5/27/10 3:27 PM, "Jean-Daniel Cryans" <jdcryans@apache.org> wrote:

Well we do have a couple of other configs for high write throughput:


The last one is for restarts. When uploading very fast you are more
likely to hit the upper limits (blocking store files and memstore), and
that will lower your throughput; those configs relax that. Also, for
speedier uploads we disable writing to the WAL.
If the job fails or any machine fails, you'll have to restart it or
rerun the whole thing, and you absolutely need to force flushes when the MR
job is done.
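(Editorial note: the "upper limits" mentioned above correspond to real properties; a hedged hbase-site.xml sketch, with values that are illustrative and not necessarily the ones from the config list stripped out of this message:)

```xml
<!-- hbase-site.xml: relax write-path blocking during a bulk upload.
     Era defaults were 7 and 2; the values below are examples only. -->
<property>
  <name>hbase.hstore.blockingStoreFiles</name>
  <value>20</value> <!-- don't block updates until this many store files pile up -->
</property>
<property>
  <name>hbase.hregion.memstore.block.multiplier</name>
  <value>4</value> <!-- let the memstore overshoot its flush size by this factor -->
</property>
```

Skipping the WAL is done per operation in the client API of that era (e.g. Put.setWriteToWAL(false)), and a flush can be forced from the shell with flush 'tablename' once the MR job finishes.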


On Thu, May 27, 2010 at 2:57 PM, Jacob Isaac <jacob@ebrary.com> wrote:
> Thanks J-D
> Currently we are trying to find/optimize our load/write times, although in
> prod we expect a 25/75 (writes/reads) ratio.
> We are using the long-table model with only one column; row size is typically
> ~4-5 KB.
> As to your suggestion on not using even 50% of disk space - I agree and was
> planning to use only ~30-40% (1.5T of 4T) for HDFS
> and as I reported earlier
> 4000 regions @ 256 MB per region (with 3x replication) on 20 nodes == 150 GB
> per node == 10% utilization.
> While using 1 GB as hbase.hregion.max.filesize, did you have to adjust other
> params such as hbase.hstore.compactionThreshold
> and hbase.hregion.memstore.flush.size?
> There is an interesting observation by Jonathan Gray documented/reported in
> HBASE-2375 -
> wondering whether that issue gets compounded when using 1 GB as the
> hbase.hregion.max.filesize.
> Thx
> Jacob
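(Editorial note: the parameters Jacob asks about all live in hbase-site.xml. A hedged sketch with the 1 GB region size under discussion; the other two values are just that era's defaults, shown for context:)

```xml
<property>
  <name>hbase.hregion.max.filesize</name>
  <value>1073741824</value> <!-- 1 GB; the default was 256 MB -->
</property>
<property>
  <name>hbase.hstore.compactionThreshold</name>
  <value>3</value> <!-- compact once this many store files exist (default) -->
</property>
<property>
  <name>hbase.hregion.memstore.flush.size</name>
  <value>67108864</value> <!-- 64 MB memstore flush size (default) -->
</property>
```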
