hbase-dev mailing list archives

From lars hofhansl <la...@apache.org>
Subject Re: Poor HBase random read performance
Date Sat, 29 Jun 2013 22:09:59 GMT
I've seen the same bad performance behavior when I tested this on a real cluster. (I think
it was in 0.94.6)


Instead of enabling/disabling the block cache, I tested sequential and random reads on a data set
that does not fit into the (aggregate) block cache.
Sequential reads were drastically faster than random reads (7 vs 34 minutes), which can really
only be explained by the fact that the next get will, with high probability, hit an already
cached block, whereas in the random read case it likely will not.
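
A back-of-the-envelope version of that argument (the ~100-byte KeyValue size and the 64 KB
block size are assumptions on my part, not numbers from the test):

    64 KB block / ~100 B per KeyValue      ~= 650 rows per block
    sequential gets:                        ~1 block load per 650 gets (the rest hit the cached block)
    random gets, data set >> block cache:   ~1 block load per get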

In the random read case I estimate that each RegionServer brings in between 100 and 200 MB/s
from the OS cache. Even at 200 MB/s this would be quite slow. I understand that performance
is bad when index/bloom blocks are not cached, but bringing in data blocks from the OS cache
should be faster than it is.
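
As a rough sanity check (assuming the default 64 KB HFile block size and one full block
brought in per random get; both are assumptions on my part):

    100 MB/s / 64 KB per block  ~= 1,600 block reads/s per RegionServer
    200 MB/s / 64 KB per block  ~= 3,200 block reads/s per RegionServer

Either way that is not many gets per second for data that is already sitting in memory.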


So this is something to debug.

-- Lars



________________________________
 From: Varun Sharma <varun@pinterest.com>
To: "dev@hbase.apache.org" <dev@hbase.apache.org> 
Sent: Saturday, June 29, 2013 12:13 PM
Subject: Poor HBase random read performance
 

Hi,

I was doing some tests on how good HBase random reads are. The setup
consists of a 1-node cluster with dfs replication set to 1. Short-circuit
local reads and HBase checksums are enabled. The data set is small enough
to be largely cached in the filesystem cache - 10G on a 60G machine.
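
For reference, the settings involved are roughly the following (the exact short-circuit
keys depend on the Hadoop/CDH version, and the socket path here is just an example):

    # hdfs-site.xml (and the HBase client side)
    dfs.replication = 1
    dfs.client.read.shortcircuit = true
    dfs.domain.socket.path = /var/run/hdfs-sockets/dn   # example path

    # hbase-site.xml
    hbase.regionserver.checksum.verify = true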

The client sends out multi-get operations in batches of 10 and I try to measure
throughput.
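
In 0.94 terms, a batched multi-get looks roughly like this (illustrative sketch, not the
actual test client; table name and row keys are made up):

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MultiGetSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "testtable");        // made-up table name
        try {
          List<Get> batch = new ArrayList<Get>(10);
          for (int i = 0; i < 10; i++) {
            batch.add(new Get(Bytes.toBytes("row-" + i)));   // made-up row keys
          }
          Result[] results = table.get(batch);               // one multi-get batch of 10
          System.out.println("got " + results.length + " results");
        } finally {
          table.close();
        }
      }
    }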

Test #1

All Data was cached in the block cache.

Test Time = 120 seconds
Num Read Ops = 12M

Throughput = 100K per second

Test #2

I disable the block cache, but all the data is still in the file system cache. I
verify this by making sure that IOPS on the disk drive are 0 during the
test. I run the same test with batched ops.

Test Time = 120 seconds
Num Read Ops = 0.6M
Throughput = 5K per second
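
For Test #2, "disabling the block cache" can mean different things in 0.94: setting
hfile.block.cache.size to 0 in the RegionServer's hbase-site.xml removes the LRU cache
entirely (index and bloom blocks included), while the per-request knob below only skips
caching for that request. A sketch of the latter:

    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.util.Bytes;

    public class NoCacheGetSketch {
      public static void main(String[] args) {
        // Per-request: blocks read for this Get are not added to the block cache.
        // This is different from hfile.block.cache.size = 0, which removes the
        // cache completely, index and bloom blocks included.
        Get get = new Get(Bytes.toBytes("row-42"));   // made-up row key
        get.setCacheBlocks(false);
      }
    }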

Test #3

I saw that all the threads were now stuck in IdLock.lockEntry(), so I now run
with the lock disabled and the block cache disabled.

Test Time = 120 seconds
Num Read Ops = 1.2M
Throughput = 10K per second
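
For context, the lock in Test #3 is HBase's IdLock, which the HFile reader uses so that
only one thread loads a given block offset at a time. Roughly this shape (a paraphrase
from memory, not the actual 0.94 code):

    import java.io.IOException;
    import org.apache.hadoop.hbase.util.IdLock;

    public class OffsetLockSketch {
      private final IdLock offsetLock = new IdLock();

      // Rough shape of the block read path: one loader per block offset at a time,
      // so that concurrent readers of the same block do not all go to HDFS.
      byte[] readBlock(long blockOffset) throws IOException {
        IdLock.Entry lockEntry = offsetLock.lockEntry(blockOffset);
        try {
          // 1. check the block cache for this offset
          // 2. on a miss, read the block from HDFS and optionally cache it
          return new byte[0];   // placeholder for the loaded block
        } finally {
          offsetLock.releaseLockEntry(lockEntry);
        }
      }
    }

With the block cache out of the picture the lock no longer buys anything, which is
presumably why removing it doubles throughput here.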

Test #4

I re-enable the block cache and this time hack HBase to only cache index and
bloom blocks; data blocks still come from the file system cache.

Test Time = 120 seconds
Num Read Ops = 1.6M
Throughput = 13K per second
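
If I remember 0.94's CacheConfig right, a similar effect (index and bloom blocks cached,
data blocks not cached on read) can be had without a code change by turning off block
caching at the column family level. A sketch (table and family names are made up):

    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;

    public class IndexBloomOnlySketch {
      public static void main(String[] args) {
        // With BLOCKCACHE off for the family, data blocks are not cached on read;
        // if memory serves, CacheConfig still caches INDEX and BLOOM blocks.
        HTableDescriptor desc = new HTableDescriptor("testtable");   // made-up table
        HColumnDescriptor family = new HColumnDescriptor("f");       // made-up family
        family.setBlockCacheEnabled(false);
        desc.addFamily(family);
        // Apply via HBaseAdmin (disable/modifyTable/enable) or at table creation.
      }
    }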

So, I wonder why there is such a massive drop in throughput. I know that the HDFS
code adds tremendous overhead, but this seems pretty high to me. I use
0.94.7 and CDH 4.2.0.

Thanks
Varun