hbase-user mailing list archives

From lars hofhansl <la...@apache.org>
Subject Re: Slow Get Performance (or how many disk I/O does it take for one non-cached read?)
Date Sat, 01 Feb 2014 05:30:06 GMT
Pardon the bad spelling. Hit send too early. Also, in the second to last paragraph I meant
using the HBase *Shell* to alter the BLOCKSIZE.

-- Lars



----- Original Message -----
From: lars hofhansl <larsh@apache.org>
To: "user@hbase.apache.org" <user@hbase.apache.org>
Cc: 
Sent: Friday, January 31, 2014 9:25 PM
Subject: Re: Slow Get Performance (or how many disk I/O does it take for one non-cached read?)

If your data does not fit into the cache and your request pattern is essentially random, then each
GET will likely cause an entirely new HFile block to be read from disk (since that block was
likely evicted due to other random GETs).

This is somewhat of a worst case for HBase. The default block size is 64k.
That is why the cache hit ratio is low and your disk IO is high. For each GET, even one reading
just a single KV of a few hundred bytes, HBase needs to bring in 64k worth of data from disk.


With your load you can set the block size as low as 4k (or even lower).
That way HBase would still need to bring in a new block for each GET, but that block will
only be 4k.
You can also try disabling the block cache, as it does not help in your scenario anyway.


Note that I mean the HFile block size, not the HDFS block size (which is typically 64, 128, or
256 MB).


You can set this via the HBase Shell as a column family parameter: BLOCKSIZE => '4096'
I'd start with 4k and then vary it up and down and do some testing.
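As a minimal sketch in the shell (assuming the table and family names from your descriptor
below; BLOCKCACHE => 'false' is the block cache disabling suggested above):

hbase> disable 'TABLE1'
hbase> alter 'TABLE1', {NAME => 'c', BLOCKSIZE => '4096', BLOCKCACHE => 'false'}
hbase> enable 'TABLE1'
hbase> major_compact 'TABLE1'

Note that in 0.94 the table generally has to be disabled for the alter, and a new BLOCKSIZE
only applies to newly written HFiles, so the major compaction is what rewrites the existing ones.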

Truly random reads are very hard for any caching system.
Is your load really truly random, or is it just for testing?

-- Lars



----- Original Message -----
From: Jan Schellenberger <leipzig3@gmail.com>
To: user@hbase.apache.org
Cc: 
Sent: Friday, January 31, 2014 3:12 PM
Subject: Slow Get Performance (or how many disk I/O does it take for one non-cached read?)

I am running a cluster and getting slow performance - about 50 reads/sec/node
or about 800 reads/sec for the cluster.  The data is too big to fit into
memory and my access pattern is completely random reads, which is presumably
difficult for HBase.  Is my read speed reasonable?  The typical read speeds
I've seen reported seem much higher.



Hardware/Software Configuration:
17 nodes + 1 master
8 cores
24 gigs ram
4x1TB 3.5" hard drives (I know this is low for hbase - we're working on
getting more disks)
running Cloudera CDH 4.3 with HBase 0.94.6
Most configurations are default, except I'm using a 12GB heap per region
server and the block cache fraction is 0.4 instead of the default 0.25, but
neither of these two things makes much of a difference.  I am NOT having a
GC issue.  Latencies are around 40ms and the 99th percentile is 200ms.


Dataset Description:
6 tables ~300GB each (uncompressed) or 120GB each compressed <- compression
speeds things up a bit.
I just ran a major compaction so block locality is 100%
Each table has a single column family and a single column ("c:d").
keys are short strings ~10-20 characters.
values are short JSON ~500 characters
100% Gets.  No Puts
I am heavily using timestamping.  VERSIONS is set to Integer.MAX_VALUE.  My
Gets retrieve at most 200 versions.  A typical row would have < 10 versions
on average, though, and <1% of queries would max out at 200 versions returned.
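For illustration, a shell equivalent of one of these reads would look something like this
(the row key is just a placeholder):

hbase> get 'TABLE1', 'some-row-key', {COLUMN => 'c:d', VERSIONS => 200}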

Here is the table configuration (I've also tried Snappy compression):
{NAME => 'TABLE1', FAMILIES => [{NAME => 'c', DATA_BLOCK_ENCODING => 'NONE',
BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '2147483647',
COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647',
KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false',
ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'}]}


I am using the master node to query (with 20 threads) and get about 800
Gets/second.  Each worker node is completely swamped by disk I/O - I'm
seeing 80 IOs/sec with iostat for each of the 4 disks, with a throughput of
about 10MB/sec each.  So this means it's reading roughly 120kB/transfer and
it's taking about 8 hard disk I/Os per Get request.  Does that seem
reasonable?  I've read the HFile spec, and I feel that if the block index is
loaded into memory, it should take 1 hard disk read to fetch the proper
block containing my row.
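Spelling out that arithmetic (at roughly 40 Gets/sec per node):

10 MB/s ÷ 80 IOs/sec       ≈ 125 kB per transfer
4 disks × 80 IOs/sec       = 320 IOs/sec per node
320 IOs/sec ÷ ~40 Gets/sec ≈ 8 disk I/Os per Get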


The region servers have a blockCacheHitRatio of about 33% (no compression)
or 50% (Snappy compression).

Here are some regionserver stats while I'm running queries.  This is the
uncompressed table version, and queries are only at 38/sec:

requestsPerSecond=38, numberOfOnlineRegions=212,
numberOfStores=212, numberOfStorefiles=212, storefileIndexSizeMB=0,
rootIndexSizeKB=190, totalStaticIndexSizeKB=172689,
totalStaticBloomSizeKB=79692, memstoreSizeMB=0, mbInMemoryWithoutWAL=0,
numberOfPutsWithoutWAL=0, readRequestsCount=1865459,
writeRequestsCount=0, compactionQueueSize=0, flushQueueSize=0,
usedHeapMB=4565, maxHeapMB=12221, blockCacheSizeMB=4042.53,
blockCacheFreeMB=846.07, blockCacheCount=62176,
blockCacheHitCount=5389770, blockCacheMissCount=9909385,
blockCacheEvictedCount=2744919, blockCacheHitRatio=35%,
blockCacheHitCachingRatio=65%, hdfsBlocksLocalityIndex=99,
slowHLogAppendCount=0, fsReadLatencyHistogramMean=1570049.34,
fsReadLatencyHistogramCount=1239690.00,
fsReadLatencyHistogramMedian=20859045.50,
fsReadLatencyHistogram75th=35791318.75,
fsReadLatencyHistogram95th=97093132.05,
fsReadLatencyHistogram99th=179688655.93,
fsReadLatencyHistogram999th=312277183.40,
fsPreadLatencyHistogramMean=35548585.63,
fsPreadLatencyHistogramCount=2803268.00,
fsPreadLatencyHistogramMedian=37662144.00,
fsPreadLatencyHistogram75th=55991186.50,
fsPreadLatencyHistogram95th=116227275.50,
fsPreadLatencyHistogram99th=173173999.27,
fsPreadLatencyHistogram999th=273812341.79,
fsWriteLatencyHistogramMean=1523660.72,
fsWriteLatencyHistogramCount=1225000.00,
fsWriteLatencyHistogramMedian=226540.50,
fsWriteLatencyHistogram75th=380366.00,
fsWriteLatencyHistogram95th=2193516.80,
fsWriteLatencyHistogram99th=4290208.93,
fsWriteLatencyHistogram999th=6926850.53

--
View this message in context: http://apache-hbase.679495.n3.nabble.com/Slow-Get-Performance-or-how-many-disk-I-O-does-it-take-for-one-non-cached-read-tp4055545.html
Sent from the HBase User mailing list archive at Nabble.com.

