hbase-user mailing list archives

From Chen Bangzhong <bangzh...@gmail.com>
Subject hbase performance
Date Fri, 02 Apr 2010 07:46:23 GMT
Hi, All

I am benchmarking HBase. My HDFS cluster includes 4 servers (Dell 860s with
2 GB RAM each): one NameNode, one JobTracker, and 2 DataNodes.

My HBase cluster also comprises 4 servers (Dell 860s with 2 GB RAM each): one
Master, 2 RegionServers, and one ZooKeeper node.

I ran org.apache.hadoop.hbase.PerformanceEvaluation on the ZooKeeper
server, with ROW_LENGTH changed from 1000 to ROW_LENGTH = 100*1024,
so each value is 100 KB in size.

Hadoop version is 0.20.2, HBase version is 0.20.3, and dfs.replication is set to 1.

The following is the command line:

bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --nomapred
--rows=10000 randomWrite 20

It took about one hour to complete the test (3468628 ms), i.e. about 60
writes per second. The performance seems disappointing.
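For a quick sanity check of the ~60 writes/sec figure, here is the arithmetic, assuming all 20 clients completed their 10,000 rows each (a sketch based only on the numbers reported above):

```python
# Sanity-check the ~60 writes/sec figure from the 100 KB random-write run.
clients = 20
rows_per_client = 10_000
value_bytes = 100 * 1024          # ROW_LENGTH = 100*1024
elapsed_ms = 3_468_628            # reported total test time

total_writes = clients * rows_per_client
writes_per_sec = total_writes / (elapsed_ms / 1000)
mib_per_sec = writes_per_sec * value_bytes / 2**20

print(f"{writes_per_sec:.1f} writes/sec")   # ≈ 57.7
print(f"{mib_per_sec:.1f} MiB/sec")         # ≈ 5.6
```

So in byte terms the run sustained roughly 5.6 MiB/s of payload across the 2 RegionServers.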

Is there anything I can do to make HBase perform better with 100 KB values? I
haven't tried the methods mentioned in the performance wiki yet, because I
thought 60 writes/sec was too low.

If the value size is 1 KB, HBase performs much better: 200000 sequentialWrite
operations took about 16 seconds, about 12500 writes per second.
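Comparing the two runs in byte terms is instructive (a sketch assuming the 1 KB run used the default ROW_LENGTH of 1000 bytes):

```python
# Byte throughput: 1 KB sequentialWrite run vs 100 KB randomWrite run.
writes_1k = 200_000
secs_1k = 16
bytes_1k = 1000                    # default ROW_LENGTH (assumed for the 1 KB run)

writes_100k = 200_000
secs_100k = 3_468_628 / 1000       # reported 100 KB run time
bytes_100k = 100 * 1024

mib_s_1k = writes_1k * bytes_1k / secs_1k / 2**20
mib_s_100k = writes_100k * bytes_100k / secs_100k / 2**20

print(f"1 KB run:   {mib_s_1k:.1f} MiB/s")    # ≈ 11.9
print(f"100 KB run: {mib_s_100k:.1f} MiB/s")  # ≈ 5.6
```

On these numbers the raw byte throughput only drops by about half, even though the per-write rate drops by roughly 200x, so the large-value run is not simply bandwidth-starved.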

Now I am trying to benchmark using two clients on 2 servers; no results yet.
