hbase-issues mailing list archives

From "stack (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HBASE-15619) Performance regression observed: Empty random read(get) performance of branch-1 worse than 0.98
Date Tue, 12 Apr 2016 00:14:25 GMT

     [ https://issues.apache.org/jira/browse/HBASE-15619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]

stack updated HBASE-15619:
--------------------------
    Attachment: compare.png

I ran loadings comparing 0.98.13 (works w/ jdk8) to tip of 1.1 and branch-1.

The diagram shows three versions of the software all undergoing 5 loadings.

 # C-Empty is running YCSB workload 'c' against an empty table
 # load is running 20 minutes of loading the table
 # A is workload 'a'; i.e. 50/50 read/write for 20 minutes
 # C is workload 'c' against the loaded table
 # C` is running workload 'c' alone, where we get keys most of the time but fail to find
values almost as often.
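As a rough sketch of how the phases above map onto YCSB invocations (the table name, column family, thread count, and timeout are illustrative assumptions, not the exact values used in this run):

```shell
# Hedged sketch of the benchmark phases above; property values are
# assumptions, not the exact commands used for this test.

# load phase: insert rows for ~20 minutes (1200s)
bin/ycsb load hbase10 -P workloads/workloada \
    -p table=usertable -p columnfamily=f1 -p maxexecutiontime=1200 -threads 100

# workload A: 50/50 read/write mix
bin/ycsb run hbase10 -P workloads/workloada \
    -p table=usertable -p columnfamily=f1 -p maxexecutiontime=1200 -threads 100

# workload C: 100% random reads against the loaded table
bin/ycsb run hbase10 -P workloads/workloadc \
    -p table=usertable -p columnfamily=f1 -p maxexecutiontime=1200 -threads 100
```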

From the diagram we can see:

# For the empty table, there is indeed a regression.
# For the load phase, 1.1 and branch-1 tip are a little slower.
# For workload 'A', 50/50, all are about the same.
# For workload 'C', when our random read is actually fetching keys, 1.1 and branch-1 are about
25% better.
# For the case where we are reading values about 60% of the time and doing reads of non-existent
values about 40% of the time, we are about 15% slower in 1.1 and branch-1.

I'm thinking that our being bad at reading non-existent values is a problem but not a critical
issue. What do you think [~carp84]? Otherwise 1.1 seems about the same as 0.98 (I thought
it was much better).
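The empty-table gap can be sanity-checked against the per-RS throughput numbers quoted in the issue description below (case #1: 383562 ops/s on 0.98.12+ vs 363050 ops/s on 1.1.2+):

```shell
# Percent change for case #1 (empty-table random read) per-RS throughput,
# using the ops/s figures from the comparison tables in the issue description.
awk 'BEGIN { printf "case #1 throughput change: %+.1f%%\n", (363050 - 383562) / 383562 * 100 }'
# prints "case #1 throughput change: -5.3%"
```

So the empty-read regression is real but modest, around 5%.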


> Performance regression observed: Empty random read(get) performance of branch-1 worse
than 0.98
> -----------------------------------------------------------------------------------------------
>
>                 Key: HBASE-15619
>                 URL: https://issues.apache.org/jira/browse/HBASE-15619
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Yu Li
>            Assignee: Yu Li
>            Priority: Critical
>         Attachments: compare.png
>
>
> As titled, I observed the perf regression in the final stress testing before upgrading
our online cluster to 1.x. More details as follows:
> 1. HBase version in the comparison test:
>   * 0.98: based on 0.98.12 with some backports, among which HBASE-11297 is the most important
perf-related one (especially under high stress)
>   * 1.x: checked 3 releases in total
>      1) 1.1.2 with important perf fixes/improvements including HBASE-15031 and HBASE-14465
>      2) 1.1.4 release
>      3) 1.2.1RC1
> 2. Test environment
>     * YCSB: 0.7.0 with [YCSB-651|https://github.com/brianfrankcooper/YCSB/pull/651] applied
>     * Client: 4 physical nodes, each with 8 YCSB instances, each instance with 100 threads
>     * Server: 1 Master with 3 RS, each RS with 256 handlers and 64G heap
>     * Hardware: 64-core CPU, 256GB Mem, 10Gb Net, 1 PCIe-SSD and 11 HDD, same hardware
for client and server
> 3. Test cases
>     * -p fieldcount=1 -p fieldlength=128 -p readproportion=1
>     * case #1: read against empty table
>     * -case #2: lrucache 100% hit-
>     * -case #3: BLOCKCACHE=>false-
> 4. Test result
> * 1.1.4 and 1.2.1 have a similar perf (less than 2% deviation) as 1.1.2+, so will only
paste comparison data of 0.98.12+ and 1.1.2+
> * per-RS Throughput(ops/s)
> ||HBaseVersion||case#1||-case#2-||-case#3-||
> |0.98.12+|383562|-257493-|-47594-|
> |1.1.2+|363050|-232757-|-35872-|
> * AverageLatency(us)
> ||HBaseVersion||case#1||-case#2-||-case#3-||
> |0.98.12+|2774|-4134-|-22371-|
> |1.1.2+|2930|-4572-|-29690-|
> It seems there's a perf regression in the RPCServer (we tried a 0.98 client against a 1.x server
and observed similar perf to the 1.x client)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
