hbase-user mailing list archives

From Kireet <kir...@feedly.com>
Subject HBase load problems
Date Wed, 16 Oct 2013 15:47:05 GMT
Over the past couple of months we have seen a significant increase in 
datanode I/O load on our cluster: disk read/write rates have roughly 
doubled, while our application request volume has grown by a much 
smaller amount, perhaps 5-10%. The read/write rate has been rising 
gradually over time.

The data size of our cluster has increased quite a bit. In particular, 
we have one table that is keyed by a randomized timestamp (random salt 
bytes + timestamp). It has grown at about 40GB/day (before 
replication), with an average row size of about 1KB in a single column. 
It makes up about 80% of our total data size and sits at about 50 
regions per datanode. Our first guess is that the issue has something 
to do with this table, since it dominates the cluster's data size.

We are considering splitting the table into multiple tables organized 
by timestamp. 90% or more of reads and writes are for recent data, so 
our thinking is that we could keep the "most recent data" table much 
smaller this way and perhaps make it easier for HBase to optimize 
things: compactions would be quicker, and the block cache might become 
more effective, since each cached block would hold mostly recent data 
instead of a continually shrinking fraction of it.
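
Concretely, the split we have in mind would route each write to a 
time-bucketed table, something like this (hypothetical sketch; the 
naming scheme is illustrative):

    import java.text.SimpleDateFormat;
    import java.util.Date;

    // Route a row to a per-month table so the "most recent data"
    // table stays small; older tables become effectively read-mostly.
    public static String tableNameFor(long timestampMillis) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyyMM");
        return "events_" + fmt.format(new Date(timestampMillis));
    }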

However, this would be a big code change, and we would like to confirm 
as much as possible that this is the true problem before committing to 
it. What are the key metrics we should look at for confirmation?

Also, we don't have short-circuit reads enabled at the moment. We have 
seen articles on the web claiming big improvements in some cases but 
no change in others. Are there particular characteristics of a system 
that will see big improvements when this setting is enabled?
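
For context, our understanding is that enabling this involves settings 
along these lines (a sketch based on the Hadoop 2.x docs; the socket 
path is illustrative), in hdfs-site.xml on the datanodes and mirrored 
in hbase-site.xml so the region servers pick it up:

    <!-- Allow the DFS client to read local blocks directly,
         bypassing the datanode, via a shared Unix domain socket. -->
    <property>
      <name>dfs.client.read.shortcircuit</name>
      <value>true</value>
    </property>
    <property>
      <name>dfs.domain.socket.path</name>
      <value>/var/run/hadoop-hdfs/dn._PORT</value>
    </property>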

