hbase-user mailing list archives

From lars hofhansl <la...@apache.org>
Subject Re: HBase Thrift inserts bottlenecked somewhere -- but where?
Date Sat, 02 Mar 2013 17:38:48 GMT
"That's only true from the HDFS perspective, right? Any given region is 
"owned" by 1 of the 6 regionservers at any given time, and writes are 
buffered to memory before being persisted to HDFS, right?"

Only if you disable the WAL; otherwise each change is written to the WAL first and then
committed to the memstore.
So in that sense it's even worse: each edit is written twice to the FS and replicated 3 times,
and all of that lands on only 6 data nodes.
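
For illustration, a rough sketch with the 0.94-era Java client API (the table, column family,
and row names below are made up). The commented-out line is where you would skip the WAL for a
single Put, trading durability for speed:

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class WalExample {
      public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "example_table");   // made-up table name
        try {
          Put p = new Put(Bytes.toBytes("example-row"));
          p.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value"));
          // Default path: the edit is appended to the WAL on HDFS (replicated 3x),
          // then applied to the memstore, and later flushed to an HFile (3x again).
          // p.setWriteToWAL(false);  // skip the WAL: faster, but edits still in the
          //                          // memstore are lost if the regionserver dies
          table.put(p);
        } finally {
          table.close();
        }
      }
    }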

20k writes/sec does seem a bit low.


-- Lars



________________________________
 From: Dan Crosta <dan@magnetic.com>
To: "user@hbase.apache.org" <user@hbase.apache.org> 
Sent: Saturday, March 2, 2013 9:12 AM
Subject: Re: HBase Thrift inserts bottlenecked somewhere -- but where?
 
On Mar 1, 2013, at 10:42 PM, lars hofhansl wrote:
> What performance profile do you expect?

That's a good question. Our configuration is actually already exceeding our minimum and desired
performance thresholds, so I'm not too worried about it. My concern is more with developing an
understanding of where the bottlenecks are (e.g. we don't appear to be disk-, CPU-, or
network-bound at the moment) and building an intuition for working with HBase in case we are
ever under the gun.


> Where does it top out (i.e. how many ops/sec)?

We're doing about 20,000 writes per second sustained across 4 tables and 6 CFs. Does that
sound about right for 6x EC2 m1.xlarge instances?


> Also note that each data item is replicated to three nodes (by HDFS). So in a 6 machine
> cluster each machine would get 50% of the writes.
> If you are looking for performance you really need a larger cluster to amortize this
> replication cost across more machines.

That's only true from the HDFS perspective, right? Any given region is "owned" by 1 of the
6 regionservers at any given time, and writes are buffered to memory before being persisted
to HDFS, right?

In any event, there doesn't seem to be any disk contention to speak of -- we average around
10% disk utilization at this level of load (each machine has 4 spindles of local storage;
we are not using EBS).

One setting no one has mentioned yet is the DataNode handler count (dfs.datanode.handler.count)
-- which is currently set to its default of 3. Should we experiment with raising that?
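
That is, something like this in hdfs-site.xml on each datanode, followed by a datanode
restart (the value of 8 below is just an arbitrary example, not a recommendation):

    <property>
      <name>dfs.datanode.handler.count</name>
      <value>8</value>
    </property>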


> The other issue to watch out for is whether your keys are generated such that a single
> regionserver is hot spotted (you can look at the operation count on the master page).

All of our keys are hashes or UUIDs, so the key distribution is very smooth, and this is confirmed
by the "Region Servers" table on the master node's web UI.


Thanks,
- Dan