hbase-issues mailing list archives

From "stack (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-15594) [YCSB] Improvements
Date Mon, 06 Jun 2016 02:42:59 GMT

    [ https://issues.apache.org/jira/browse/HBASE-15594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15316142#comment-15316142 ]

stack commented on HBASE-15594:

So, again, having the Reader do the whole read/parse of the request and then execute it ups
our ops by more than 2x (from about 125k to 425k workloadc random reads out of LRUBlockCache --
about 7-11% CPU idle). The new occupied-readers-count metric shows Readers occupied nearly
all the time, as opposed to what we see when we look at handlers (I can't get higher
utilization on handlers no matter what load I put up). Mighty [~tlipcon] pointed me at a
short-circuit the Kudu folks do, a direct handoff from reader to worker thread:
http://gerrit.cloudera.org:8080/#/c/2938/ ... let me see if I can do similar.
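
For illustration only, a minimal Java sketch of that kind of reader-to-handler direct handoff
(all names here are hypothetical, not HBase's actual RpcServer API): the reader first offers
the parsed call on a zero-capacity SynchronousQueue, where an idle handler may be parked, and
only falls back to the shared call queue when no handler is free.

{code}
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch, not HBase code: direct reader->handler handoff.
class HandoffDispatcher<C> {
  private final SynchronousQueue<C> handoff = new SynchronousQueue<>();
  private final BlockingQueue<C> callQueue = new LinkedBlockingQueue<>();

  // Reader thread calls this once the request is fully read/parsed.
  void dispatch(C call) throws InterruptedException {
    // offer() succeeds only if a handler is blocked in poll()/take() right now.
    if (!handoff.offer(call)) {
      callQueue.put(call); // no idle handler: enqueue as before
    }
  }

  // Handler loop: drain the queue first, else park briefly for a handoff.
  C next() throws InterruptedException {
    C c = callQueue.poll();
    if (c != null) {
      return c;
    }
    c = handoff.poll(10, TimeUnit.MILLISECONDS);
    return (c != null) ? c : callQueue.take();
  }
}
{code}

The win, if any, comes from skipping the call-queue enqueue/wakeup when a handler is already
idle; under full load the offer fails and behavior degrades to the plain shared queue.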

After the above hackery, the next 'blocker' is the registry of Scanners in the Region CSLM,
with its synchronization to get the read point. If I hack it out -- I have some ideas for
making it less of a hurdle -- it is interesting to see that we then get stuck behind sending
the response AND our throughput goes down slightly... So some work to do here.
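
One way the registry could be made less of a hurdle, sketched here purely as an illustration
(this is not HBase's actual RegionScanner/MVCC code, and all names are made up): keep a
per-read-point count of open scanners in a ConcurrentSkipListMap, so the smallest read point
is just firstEntry() with no coarse lock around register/unregister.

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentSkipListMap;

// Hypothetical sketch, not HBase code: coarse-lock-free smallest-read-point tracking.
class ReadPointRegistry {
  // read point -> number of open scanners pinned at it
  private final ConcurrentSkipListMap<Long, Long> scanners =
      new ConcurrentSkipListMap<>();

  void register(long readPoint) {
    scanners.merge(readPoint, 1L, Long::sum); // bump the count for this read point
  }

  void unregister(long readPoint) {
    // drop the key entirely once the last scanner at this read point closes
    scanners.compute(readPoint, (k, n) -> (n == null || n <= 1) ? null : n - 1);
  }

  // Smallest read point still pinned by a scanner, or ifEmpty when none are open.
  long smallestReadPoint(long ifEmpty) {
    Map.Entry<Long, Long> e = scanners.firstEntry();
    return e == null ? ifEmpty : e.getKey();
  }
}
{code}

merge/compute retry internally with the pure remapping functions above, so the hot
register/unregister path never takes a registry-wide lock.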

> [YCSB] Improvements
> -------------------
>                 Key: HBASE-15594
>                 URL: https://issues.apache.org/jira/browse/HBASE-15594
>             Project: HBase
>          Issue Type: Umbrella
>            Reporter: stack
>            Priority: Critical
>         Attachments: fast.patch
> Running YCSB and getting good results is an arcane art. For example, in my testing, a
few handlers (100), as many readers as I had CPUs (48), and upping connections on clients
to the same as #cpus made for 2-3x the throughput. The above config changes came from lore;
which configurations need tweaking is not obvious from their names, there was no indication
from the app on where/why we were blocked or on which metrics are important to consider, nor
was any of this stuff written down in the docs.
> Even so, I am stuck trying to make use of all of the machine. I am unable to overrun
a server with 8 client nodes beating up a single node (workloadc, all random-read, with no
data returned: -p readallfields=false). There is also a strange phenomenon where, if I add
a few machines, rather than 3x the YCSB throughput with 3 nodes in the cluster, each machine
instead does about 1/3rd.
> This umbrella issue is to host items that improve our defaults and noting how to get
good numbers running YCSB. In particular, I want to be able to saturate a machine.
> Here are the configs I'm currently working with. I've not done the work to figure out
whether they are optimal client-side (weird is how big a difference client-side changes can
make -- need to fix this). On my 48-cpu machine, I can do about 370k random reads a second
from data fully cached in bucketcache. If I short-circuit the user Gets so they do no work
but return immediately, I can do 600k ops a second, but the CPUs are only at 60-70%. I
cannot get them to go above this. Working on it.
> {code}
> <property>
>   <name>hbase.ipc.server.read.threadpool.size</name>
>   <value>48</value>
> </property>
> <property>
>   <name>hbase.regionserver.handler.count</name>
>   <value>100</value>
> </property>
> <property>
>   <name>hbase.client.ipc.pool.size</name>
>   <value>100</value>
> </property>
> <property>
>   <name>hbase.htable.threads.max</name>
>   <value>48</value>
> </property>
> {code}

This message was sent by Atlassian JIRA
