hbase-user mailing list archives

From <Michael.Grund...@high5games.com>
Subject RE: HBase Client Performance Bottleneck in a Single Virtual Machine
Date Mon, 04 Nov 2013 05:43:40 GMT
Thanks for the input and things to look at. To respond:

1) I don't quite understand the question. When we increase threads, we are not changing the
work each thread does; we just expect more of it to happen concurrently. The machine we are
testing on has effectively 16 cores available and is mostly idling. The test harness we
are using runs the same process whether it's using 1 thread or 100.
2) We have used a 1-to-1 thread-to-connection mapping for all of our tests thus far. We started
there and were planning to back it off once we saw how it worked.
3) No, the requests spread very nicely across all the servers. We spent a good bit of time
designing a key that would distribute almost perfectly across the entire cluster, and it appears
to be working great.
4) hbase.regionserver.handler.count is currently set to 600 when I look at the master configuration.
Is that the setting you are referring to, or should I look at something else?

As for memory, we've increased it to the point where any single region server could cache
the entire dataset in memory, and we didn't see any performance improvement at all.
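For reference, the per-thread access pattern we're testing is roughly the following. This is
only a simplified sketch with illustrative names (a commons-pool 1.x-style factory and the
0.94-era client API), not the actual harness code:

    // Simplified sketch (illustrative names/sizes): a commons-pool of HConnections
    // plus a short-lived HTable per request in each worker thread.
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    import org.apache.commons.pool.BasePoolableObjectFactory;
    import org.apache.commons.pool.impl.GenericObjectPool;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HConnection;
    import org.apache.hadoop.hbase.client.HConnectionManager;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    public class PooledClientSketch {
        private static final Configuration CONF = HBaseConfiguration.create();

        // Connection factory backing the commons-pool object pool.
        static class ConnectionFactory extends BasePoolableObjectFactory<HConnection> {
            @Override public HConnection makeObject() throws Exception {
                return HConnectionManager.createConnection(CONF);
            }
            @Override public void destroyObject(HConnection conn) throws Exception {
                conn.close();
            }
        }

        private final GenericObjectPool<HConnection> connections =
            new GenericObjectPool<HConnection>(new ConnectionFactory(), 128);
        private final ExecutorService batchPool = Executors.newFixedThreadPool(16);

        // Each worker thread borrows a connection, builds a lightweight HTable
        // bound to it, runs the get, closes the table and returns the connection.
        Result lookup(byte[] rowKey) throws Exception {
            HConnection conn = connections.borrowObject();
            try {
                HTable table = new HTable(Bytes.toBytes("test_table"), conn, batchPool);
                try {
                    return table.get(new Get(rowKey));
                } finally {
                    table.close();
                }
            } finally {
                connections.returnObject(conn);
            }
        }
    }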

Thanks!

-Mike

-----Original Message-----
From: Sriram Ramachandrasekaran [mailto:sri.rams85@gmail.com] 
Sent: Sunday, November 03, 2013 10:12 PM
To: user@hbase.apache.org
Subject: Re: HBase Client Performance Bottleneck in a Single Virtual Machine

Hey Michael,
I am relatively new to HBase, so do take my response with a grain of salt.
I think your requirements are definitely something HBase should be able to handle easily
(assuming you are not pulling inordinate amounts of data per request from HBase).
A few things to look at to understand this better:
1. What are your clients doing when you increase the number of threads?
2. What is the thread-to-connection mapping - 1 to 1? Are you creating a new connection
every time in your threads? (See the sketch below.)
3. Do you see any one region server getting unduly more requests than the rest of them
(region hotspotting)?
4. What is your request handler count (hbase.regionserver.handler.count) on HBase? If it's
too low, your connections on the client side would wait before actually getting into the
application layer (here, the RS).
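On (2), if a connection is being created per request, one thing worth trying is a single
long-lived connection shared by all threads, with a lightweight HTable built over it per
request. A rough sketch against the 0.94-era client API (table name and pool size are
placeholders):

    // Rough sketch (placeholder names/sizes): one shared HConnection and thread
    // pool for the whole client VM, with a lightweight HTable per request.
    import java.io.IOException;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HConnection;
    import org.apache.hadoop.hbase.client.HConnectionManager;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    public class SharedConnectionSketch {
        private final HConnection connection;
        private final ExecutorService pool;

        public SharedConnectionSketch() throws IOException {
            Configuration conf = HBaseConfiguration.create();
            this.connection = HConnectionManager.createConnection(conf);
            this.pool = Executors.newFixedThreadPool(16);
        }

        // Called from many threads: HTable itself is not thread-safe, so each
        // request builds its own, but all of them share the one connection and pool.
        public Result get(byte[] rowKey) throws IOException {
            HTable table = new HTable(Bytes.toBytes("test_table"), connection, pool);
            try {
                return table.get(new Get(rowKey));
            } finally {
                table.close();
            }
        }
    }

My understanding is that the connection object holds the ZooKeeper session and the region
location cache, so sharing one avoids redoing that work for every thread.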

This is assuming you've given enough memory to your Region servers and your HDFS layer is
stable.
Hope this helps.

On Mon, Nov 4, 2013 at 9:16 AM, <Michael.Grundvig@high5games.com> wrote:

> Hi all; I posted this as a question on StackOverflow as well but 
> realized I should have gone straight to the horse's mouth with my 
> question. Sorry for the double post!
>
> We are running a series of HBase tests to see if we can migrate one of 
> our existing datasets from an RDBMS to HBase. We are running 15 nodes 
> with 5 ZooKeepers and HBase 0.94.12 for this test.
>
> We have a single table with three column families and a key that is 
> distributing very well across the cluster. All of our queries are 
> direct look-ups; no searching or scanning. Since HTablePool is now 
> frowned upon, we are using Apache Commons Pool and a simple connection 
> factory to create a pool of connections and use them in our threads. 
> Each thread creates an HTable instance as needed and closes it when 
> done. There are no leaks we can identify.
>
> If we run a single thread and just do lots of random calls 
> sequentially, the performance is quite good. Everything works great 
> until we start trying to scale the performance. As we add more threads 
> and try to get more work done in a single VM, we start seeing 
> performance degrade quickly. The client code is simply attempting to 
> run either one of several gets or a single put at a given frequency. 
> It then waits until the next time to run; we use this to simulate the 
> workload from external clients. With a single thread, we see call 
> times in the 2-3 millisecond range, which is acceptable.
>
> As we add more threads, this call time starts increasing quickly. What 
> is strange is that if we add more VMs, the times hold steady across 
> them all, so clearly it's a bottleneck in the running instance and not 
> the cluster. We can get a huge amount of processing happening across 
> the cluster very easily - it just takes a lot of VMs on the client 
> side to do it. We know the contention isn't in the connection pool, as 
> we see the problem even when we have more connections than threads. 
> Unfortunately, the times are spiraling out of control very quickly. We 
> need it to support at least 128 threads in practice, but most 
> importantly we want to support 500 updates/sec and 250 gets/sec. In 
> theory, this should be a piece of cake for the cluster, as we can do 
> FAR more work than that with a few VMs, but we don't even get close to 
> this with a single VM.
>
> So my question: how do people building high-performance apps with 
> HBase get around this? What approach are others using for connection 
> pooling in a multi-threaded environment? There seems to be 
> surprisingly little info about this on the web considering HBase's 
> popularity. Is there some client setting we need to use that makes it 
> perform better in a threaded environment? We are going to try caching 
> HTable instances next, but that's a total guess. There are ways to 
> offload work to other VMs, but we really want to avoid this, as 
> clearly the cluster can handle the load and it would dramatically 
> decrease application performance in critical areas.
>
> Any help is greatly appreciated! Thanks!
> -Mike
>



--
It's just about how deep your longing is!
