hbase-issues mailing list archives

From "Yu Li (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-15594) [YCSB] Improvements
Date Tue, 05 Apr 2016 18:54:25 GMT

    [ https://issues.apache.org/jira/browse/HBASE-15594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15226919#comment-15226919 ]

Yu Li commented on HBASE-15594:
-------------------------------

Which version of YCSB are you running, sir [~stack]? We happen to be benchmarking these days,
comparing 1.1.2 (with miscellaneous increment fixes as well as the reader/writer rowlock backported)
against 0.98.12 using ycsb-0.7.0, and it turns out that pure read (get) performance declined a lot
on our 1.1.2. After days of debugging (we actually located the root cause just a few hours ago),
we found a *small but fatal* bug in {{HBaseClient10}}: it initializes one connection per thread.
The following code can be found in {{HBaseClient10#init}}, line 135:
{code}
      THREAD_COUNT.getAndIncrement();
      synchronized(THREAD_COUNT) {
        connection = ConnectionFactory.createConnection(config);
      }
{code}
After fixing it as below, the performance recovered:
{code}
      THREAD_COUNT.getAndIncrement();
      synchronized(THREAD_COUNT) {
        if(connection == null) connection = ConnectionFactory.createConnection(config);
      }
{code}
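For reference, here is a minimal sketch of how the shared-connection pattern could look end to end, assuming the {{connection}} field is made static (so all client threads in one JVM share it) and that {{cleanup()}} reference-counts threads before closing; the exception handling and exact field layout are illustrative, not the exact YCSB patch:
{code}
  // Shared across all HBaseClient10 instances (one per YCSB worker thread) in this JVM.
  private static final AtomicInteger THREAD_COUNT = new AtomicInteger(0);
  private static Connection connection = null;

  public void init() throws DBException {
    THREAD_COUNT.getAndIncrement();
    synchronized (THREAD_COUNT) {
      if (connection == null) {
        try {
          // Created once; subsequent threads reuse the same connection.
          connection = ConnectionFactory.createConnection(config);
        } catch (IOException e) {
          throw new DBException(e);
        }
      }
    }
  }

  public void cleanup() throws DBException {
    synchronized (THREAD_COUNT) {
      // Close the shared connection only when the last thread is done.
      if (THREAD_COUNT.decrementAndGet() <= 0 && connection != null) {
        try {
          connection.close();
        } catch (IOException e) {
          throw new DBException(e);
        } finally {
          connection = null;
        }
      }
    }
  }
{code}
With this, the per-process ZooKeeper session, meta cache and RPC connections are shared by the 100 worker threads instead of being duplicated 100 times.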

We were using 4 physical nodes as clients; each node ran 8 YCSB processes, and each process
launched 100 threads. We loaded 100GB of data into a 3-RS cluster and then ran each client doing
random gets for 30 minutes. Without the fix, 1.1.2 performance is ~20% lower than 0.98.12, and
this could be reproduced consistently. Hope this information helps.

> [YCSB] Improvements
> -------------------
>
>                 Key: HBASE-15594
>                 URL: https://issues.apache.org/jira/browse/HBASE-15594
>             Project: HBase
>          Issue Type: Umbrella
>            Reporter: stack
>            Priority: Critical
>
> Running YCSB and getting good results is an arcane art. For example, in my testing, a few
> handlers (100) with as many readers as I had CPUs (48), plus upping connections on clients to
> the same as #cpus, made for 2-3x the throughput. The above config changes came of lore; which
> configurations need tweaking is not obvious going by their names, there were no indications
> from the app on where/why we were blocked or on which metrics are important to consider, nor
> was any of this stuff written down in docs.
> Even so, I am stuck trying to make use of all of the machine. I am unable to overrun a
> server even with 8 client nodes trying to beat up a single node (workloadc, all random-read,
> with no data returned: -p readallfields=false). There is also a strange phenomenon where, if
> I add a few machines, rather than getting 3x the YCSB throughput with 3 nodes in the cluster,
> each machine instead does about 1/3rd.
> This umbrella issue is to host items that improve our defaults and note how to get good
> numbers running YCSB. In particular, I want to be able to saturate a machine.
> Here are the configs I'm currently working with. I've not done the work to figure out whether
> the client-side ones are optimal (it is odd how big a difference client-side changes can make
> -- need to fix this). On my 48-CPU machine, I can do about 370k random reads a second from
> data totally cached in bucketcache. If I short-circuit the user gets so they do no work but
> return immediately, I can do 600k ops a second, but the CPUs are at 60-70% only. I cannot get
> them to go above this. Working on it.
> {code}
> <property>
>   <name>hbase.ipc.server.read.threadpool.size</name>
>   <value>48</value>
> </property>
> <property>
>   <name>hbase.regionserver.handler.count</name>
>   <value>100</value>
> </property>
> <property>
>   <name>hbase.client.ipc.pool.size</name>
>   <value>100</value>
> </property>
> <property>
>   <name>hbase.htable.threads.max</name>
>   <value>48</value>
> </property>
> {code}



