hbase-dev mailing list archives

From Andrew Purtell <andrew.purt...@gmail.com>
Subject Re: Comparing the performance of 0.98.2 RC0 and 0.98.0 using YCSB
Date Tue, 13 May 2014 08:56:40 GMT
Thanks for the offer, J-M. We'd be, and are, in your debt for kicking the tires. 

I think YCSB or LoadTestTool (the former is better known and has a convenient summary
report) can help detect fine trends in operation latencies, and certainly regressions too.
It would be a good complement to PE. 

I should try to spend some time each release looking at throughput also. 

> On May 13, 2014, at 2:24 AM, Jean-Marc Spaggiari <jean-marc@spaggiari.org> wrote:
> 
> I run PE for (almost*) all the releases. I can add YCSB if you want. It's a
> dedicated cluster where I have only Hadoop 2.2.0, ZK (on the master), and
> HBase running. Nothing else. Only 3 RS + 1 master for now. Just send me the
> YCSB workload (as long as it doesn't need to run for more than 24h).
> 
> JM
> 
> *: I sometimes don't have the bandwidth to test two releases at a time, but I
> will continue to try to test all the releases.
> 
> 
> 2014-05-11 3:27 GMT-04:00 Andrew Purtell <andrew.purtell@gmail.com>:
> 
>> For cost and time reasons, and because the goal is regression checking of
>> operation latencies, not drag racing peak throughput. I wanted YCSB because
>> many people try HBase out with it, but I find YCSB difficult to use, to say
>> the least, in a coordinated way across multiple instances.
>> 
>> For benchmarking throughput I would use a completely different methodology
>> involving many test clients.
>> 
>>> On May 11, 2014, at 12:29 PM, Konstantin Boudnik <cos@apache.org> wrote:
>>> 
>>> I think the point wasn't benchmarking but merely making sure there are no
>>> regressions.
>>> 
>>>> On Wed, May 07, 2014 at 02:16PM, Vladimir Rodionov wrote:
>>>> *7x EC2 c3.8xlarge: 1 master, 5 slaves, 1 test client*
>>>> 
>>>> Andrew, I think these numbers are far from the maximum you can get from
>>>> this setup. Why only 1 test client?
>>>> 
>>>> -Vladimir Rodionov
>>>> 
>>>> 
>>>>> On Tue, May 6, 2014 at 6:58 PM, Andrew Purtell <apurtell@apache.org> wrote:
>>>>> 
>>>>> Comparing the relative performance of 0.98.2 RC0 and 0.98.0 on Hadoop 2.2.0
>>>>> using YCSB.
>>>>> 
>>>>> The hardware used is different from that of the previous report comparing
>>>>> 0.98.1 to 0.98.0. However, the results are very similar, both in terms of
>>>>> the 0.98.2 RC0 numbers with respect to those measured for 0.98.0, and the
>>>>> workload-specific deltas observed when testing 0.98.1.
>>>>> 
>>>>> *Hardware and Versions*
>>>>> 
>>>>> Hadoop 2.2.0
>>>>> HBase 0.98.2-hadoop2 RC0
>>>>> 
>>>>> 7x EC2 c3.8xlarge: 1 master, 5 slaves, 1 test client
>>>>> 
>>>>>   32 cores
>>>>> 
>>>>>   60 GB RAM
>>>>> 
>>>>>   2 x 320 GB directly attached SSD
>>>>> 
>>>>>   NameNode: 4 GB heap
>>>>> 
>>>>>   DataNode: 1 GB heap
>>>>> 
>>>>>   Master: 1 GB heap
>>>>> 
>>>>>   RegionServer: 8 GB heap, 24 GB bucket cache offheap engine
>>>>> 
>>>>> 
>>>>> *Methodology*
>>>>> 
>>>>> 
>>>>> Setup:
>>>>> 
>>>>>    0. Start cluster
>>>>>    1. shell: create "seed", { NAME => "u", COMPRESSION => "snappy" }
>>>>>    2. YCSB:  Preload 100 million rows into table "seed"
>>>>>    3. shell: flush "seed" ; compact "seed"
>>>>>    4. Wait for compaction to complete
>>>>>    5. shell: create_snapshot "seed", "seed_snap"
>>>>>    6. shell: disable "seed"
>>>>> 
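[Editor's note: the setup phase above could be driven by a script along these lines. This is only a sketch, not from the thread: the `hbase` YCSB binding name and the `workloads/workloada` file are assumptions, and the pipes into `hbase shell` are left commented out so the script is inert. The table name, column family, compression, and row count come from the steps above.]

```shell
#!/bin/sh
# Sketch of the setup phase (steps 0-6): build each command as a string
# and print it, rather than executing against a live cluster.

TABLE=seed
RECORDS=100000000   # 100 million rows, per step 2

# Step 1: create the snappy-compressed table (HBase shell syntax).
CREATE='create "seed", { NAME => "u", COMPRESSION => "snappy" }'

# Step 2: preload with YCSB (binding and workload file are assumptions).
LOAD="bin/ycsb load hbase -P workloads/workloada -p table=${TABLE} -p columnfamily=u -p recordcount=${RECORDS}"

# Steps 3-6: flush, compact (then wait for compaction), snapshot, disable.
FINALIZE='flush "seed"; compact "seed"; create_snapshot "seed", "seed_snap"; disable "seed"'

echo "$CREATE"    # | hbase shell
echo "$LOAD"
echo "$FINALIZE"  # | hbase shell
```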
>>>>> 
>>>>> For each test:
>>>>> 
>>>>>    7. shell: clone_snapshot "seed_snap", "test"
>>>>>    8. YCSB:  Run test -p operationcount=10000000 -threads 32 -target
>>>>>       50000 (clamp at ~10k ops/server/sec)
>>>>>    9. shell: disable "test"
>>>>>   10. shell: drop "test"
>>>>> 
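[Editor's note: the per-test loop (steps 7-10) could be sketched the same way. The binding name and workload file are again assumptions; the operation count, thread count, and target rate come from step 8. With 5 region servers, `-target 50000` works out to the ~10,000 ops/server/sec clamp noted above.]

```shell
#!/bin/sh
# Per-test loop (steps 7-10): clone the seed snapshot, run YCSB against the
# clone, then drop it so every run starts from identical on-disk data.

SLAVES=5
TARGET=50000
echo "clamp: $((TARGET / SLAVES)) ops/server/sec"   # ~10k per server

# Step 8: the YCSB run (binding and workload file are assumptions).
RUN="bin/ycsb run hbase -P workloads/workloada -p table=test -p operationcount=10000000 -threads 32 -target ${TARGET}"

echo 'clone_snapshot "seed_snap", "test"'   # | hbase shell  (step 7)
echo "$RUN"                                 #               (step 8)
echo 'disable "test"; drop "test"'          # | hbase shell  (steps 9-10)
```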
>>>>> 
>>>>> *Workload A*
>>>>> 
>>>>> *0.98.0*
>>>>> 
>>>>> [OVERALL], RunTime(ms), 2097825
>>>>> [OVERALL], Throughput(ops/sec), 4767
>>>>> [UPDATE], Operations, 4999049
>>>>> [UPDATE], AverageLatency(us), 1.107036384
>>>>> [UPDATE], MinLatency(us), 0
>>>>> [UPDATE], MaxLatency(us), 97865
>>>>> [UPDATE], 95thPercentileLatency(ms), 0
>>>>> [UPDATE], 99thPercentileLatency(ms), 0
>>>>> [READ], Operations, 5000952
>>>>> [READ], AverageLatency(us), 413.9172277
>>>>> [READ], MinLatency(us), 295
>>>>> [READ], MaxLatency(us), 927729
>>>>> [READ], 95thPercentileLatency(ms), 0
>>>>> [READ], 99thPercentileLatency(ms), 0
>>>>> 
>>>>> 
>>>>> *0.98.2*
>>>>> 
>>>>> [OVERALL], RunTime(ms), 2082682
>>>>> [OVERALL], Throughput(ops/sec), 4802
>>>>> [UPDATE], Operations, 5001208
>>>>> [UPDATE], AverageLatency(us), 1.227632714
>>>>> [UPDATE], MinLatency(us), 0
>>>>> [UPDATE], MaxLatency(us), 720423
>>>>> [UPDATE], 95thPercentileLatency(ms), 0
>>>>> [UPDATE], 99thPercentileLatency(ms), 0
>>>>> [READ], Operations, 4998792.667
>>>>> [READ], AverageLatency(us), 411.0522393
>>>>> [READ], MinLatency(us), 288
>>>>> [READ], MaxLatency(us), 977500
>>>>> [READ], 95thPercentileLatency(ms), 0
>>>>> [READ], 99thPercentileLatency(ms), 0
>>>>> 
>>>>> 
>>>>> *Workload B*
>>>>> 
>>>>> *0.98.0*
>>>>> 
>>>>> [OVERALL], RunTime(ms), 3678408
>>>>> [OVERALL], Throughput(ops/sec), 2719
>>>>> [UPDATE], Operations, 500239
>>>>> [UPDATE], AverageLatency(us), 2.218397098
>>>>> [UPDATE], MinLatency(us), 0
>>>>> [UPDATE], MaxLatency(us), 101523
>>>>> [UPDATE], 95thPercentileLatency(ms), 0
>>>>> [UPDATE], 99thPercentileLatency(ms), 0
>>>>> [READ], Operations, 9499762.333
>>>>> [READ], AverageLatency(us), 384.8231468
>>>>> [READ], MinLatency(us), 283
>>>>> [READ], MaxLatency(us), 922395
>>>>> [READ], 95thPercentileLatency(ms), 0
>>>>> [READ], 99thPercentileLatency(ms), 0
>>>>> 
>>>>> *0.98.2*
>>>>> 
>>>>> [OVERALL], RunTime(ms), 3643856
>>>>> [OVERALL], Throughput(ops/sec), 2744
>>>>> [UPDATE], Operations, 499256
>>>>> [UPDATE], AverageLatency(us), 2.561636579
>>>>> [UPDATE], MinLatency(us), 0
>>>>> [UPDATE], MaxLatency(us), 713811
>>>>> [UPDATE], 95thPercentileLatency(ms), 0
>>>>> [UPDATE], 99thPercentileLatency(ms), 0
>>>>> [READ], Operations, 9500745
>>>>> [READ], AverageLatency(us), 381.1349225
>>>>> [READ], MinLatency(us), 284
>>>>> [READ], MaxLatency(us), 921680
>>>>> [READ], 95thPercentileLatency(ms), 0
>>>>> [READ], 99thPercentileLatency(ms), 0
>>>>> 
>>>>> 
>>>>> *Workload C*
>>>>> 
>>>>> *0.98.0*
>>>>> 
>>>>> [OVERALL], RunTime(ms), 3258845
>>>>> [OVERALL], Throughput(ops/sec), 3069
>>>>> [READ], Operations, 10000000
>>>>> [READ], AverageLatency(us), 323.7287128
>>>>> [READ], MinLatency(us), 276
>>>>> [READ], MaxLatency(us), 928472
>>>>> [READ], 95thPercentileLatency(ms), 0
>>>>> [READ], 99thPercentileLatency(ms), 0
>>>>> 
>>>>> *0.98.2*
>>>>> 
>>>>> [OVERALL], RunTime(ms), 3288822
>>>>> [OVERALL], Throughput(ops/sec), 3041
>>>>> [READ], Operations, 10000000
>>>>> [READ], AverageLatency(us), 326.6214268
>>>>> [READ], MinLatency(us), 284
>>>>> [READ], MaxLatency(us), 924632
>>>>> [READ], 95thPercentileLatency(ms), 0
>>>>> [READ], 99thPercentileLatency(ms), 0
>>>>> 
>>>>> *Workload D*
>>>>> 
>>>>> *0.98.0*
>>>>> 
>>>>> [OVERALL], RunTime(ms), 3707601
>>>>> [OVERALL], Throughput(ops/sec), 2700
>>>>> [INSERT], Operations, 500774
>>>>> [INSERT], AverageLatency(us), 6.432826519
>>>>> [INSERT], MinLatency(us), 4
>>>>> [INSERT], MaxLatency(us), 40274
>>>>> [INSERT], 95thPercentileLatency(ms), 0
>>>>> [INSERT], 99thPercentileLatency(ms), 0
>>>>> [READ], Operations, 9499225.667
>>>>> [READ], AverageLatency(us), 387.7104498
>>>>> [READ], MinLatency(us), 283
>>>>> [READ], MaxLatency(us), 927377
>>>>> [READ], 95thPercentileLatency(ms), 0
>>>>> [READ], 99thPercentileLatency(ms), 1
>>>>> 
>>>>> *0.98.2*
>>>>> 
>>>>> [OVERALL], RunTime(ms), 3650724
>>>>> [OVERALL], Throughput(ops/sec), 2740
>>>>> [INSERT], Operations, 499872
>>>>> [INSERT], AverageLatency(us), 6.46417158
>>>>> [INSERT], MinLatency(us), 4
>>>>> [INSERT], MaxLatency(us), 47732
>>>>> [INSERT], 95thPercentileLatency(ms), 0
>>>>> [INSERT], 99thPercentileLatency(ms), 0
>>>>> [READ], Operations, 9500128
>>>>> [READ], AverageLatency(us), 381.6517188
>>>>> [READ], MinLatency(us), 278
>>>>> [READ], MaxLatency(us), 922107
>>>>> [READ], 95thPercentileLatency(ms), 0
>>>>> [READ], 99thPercentileLatency(ms), 1
>>>>> 
>>>>> 
>>>>> *Workload E*
>>>>> 
>>>>> *0.98.0*
>>>>> 
>>>>> [OVERALL], RunTime(ms), 15717450.00
>>>>> [OVERALL], Throughput(ops/sec), 636.2355217
>>>>> [INSERT], Operations, 499943
>>>>> [INSERT], AverageLatency(us), 12.57311534
>>>>> [INSERT], MinLatency(us), 6
>>>>> [INSERT], MaxLatency(us), 39539
>>>>> [INSERT], 95thPercentileLatency(ms), 0
>>>>> [INSERT], 99thPercentileLatency(ms), 0
>>>>> [SCAN], Operations, 9500057
>>>>> [SCAN], AverageLatency(us), 1648.836612
>>>>> [SCAN], MinLatency(us), 768
>>>>> [SCAN], MaxLatency(us), 1001461
>>>>> [SCAN], 95thPercentileLatency(ms), 3
>>>>> [SCAN], 99thPercentileLatency(ms), 5
>>>>> 
>>>>> *0.98.2*
>>>>> 
>>>>> [OVERALL], RunTime(ms), 15624378
>>>>> [OVERALL], Throughput(ops/sec), 640
>>>>> [INSERT], Operations, 499679
>>>>> [INSERT], AverageLatency(us), 11.62045822
>>>>> [INSERT], MinLatency(us), 5
>>>>> [INSERT], MaxLatency(us), 40475
>>>>> [INSERT], 95thPercentileLatency(ms), 0
>>>>> [INSERT], 99thPercentileLatency(ms), 0
>>>>> [SCAN], Operations, 9500321
>>>>> [SCAN], AverageLatency(us), 1639.114033
>>>>> [SCAN], MinLatency(us), 753
>>>>> [SCAN], MaxLatency(us), 942908
>>>>> [SCAN], 95thPercentileLatency(ms), 3
>>>>> [SCAN], 99thPercentileLatency(ms), 5
>>>>> 
>>>>> 
>>>>> *Workload F*
>>>>> 
>>>>> *0.98.0*
>>>>> 
>>>>> [OVERALL], RunTime(ms), 4144245
>>>>> [OVERALL], Throughput(ops/sec), 2413
>>>>> [UPDATE], Operations, 4999220
>>>>> [UPDATE], AverageLatency(us), 1.520920945
>>>>> [UPDATE], MinLatency(us), 0
>>>>> [UPDATE], MaxLatency(us), 94874
>>>>> [UPDATE], 95thPercentileLatency(ms), 0
>>>>> [UPDATE], 99thPercentileLatency(ms), 0
>>>>> [READ-MODIFY-WRITE], Operations, 4999219
>>>>> [READ-MODIFY-WRITE], AverageLatency(us), 413.7597696
>>>>> [READ-MODIFY-WRITE], MinLatency(us), 0
>>>>> [READ-MODIFY-WRITE], MaxLatency(us), 107201
>>>>> [READ-MODIFY-WRITE], 95thPercentileLatency(ms), 0
>>>>> [READ-MODIFY-WRITE], 99thPercentileLatency(ms), 0
>>>>> [READ], Operations, 10000000
>>>>> [READ], AverageLatency(us), 410.3120381
>>>>> [READ], MinLatency(us), 1
>>>>> [READ], MaxLatency(us), 921156
>>>>> [READ], 95thPercentileLatency(ms), 0
>>>>> [READ], 99thPercentileLatency(ms), 0
>>>>> 
>>>>> *0.98.2*
>>>>> 
>>>>> [OVERALL], RunTime(ms), 4169551
>>>>> [OVERALL], Throughput(ops/sec), 2398
>>>>> [UPDATE], Operations, 4999046
>>>>> [UPDATE], AverageLatency(us), 1.524197967
>>>>> [UPDATE], MinLatency(us), 0
>>>>> [UPDATE], MaxLatency(us), 103630
>>>>> [UPDATE], 95thPercentileLatency(ms), 0
>>>>> [UPDATE], 99thPercentileLatency(ms), 0
>>>>> [READ-MODIFY-WRITE], Operations, 4999045
>>>>> [READ-MODIFY-WRITE], AverageLatency(us), 416.3519861
>>>>> [READ-MODIFY-WRITE], MinLatency(us), 0
>>>>> [READ-MODIFY-WRITE], MaxLatency(us), 926671
>>>>> [READ-MODIFY-WRITE], 95thPercentileLatency(ms), 0
>>>>> [READ-MODIFY-WRITE], 99thPercentileLatency(ms), 0
>>>>> [READ], Operations, 10000000
>>>>> [READ], AverageLatency(us), 412.8674851
>>>>> [READ], MinLatency(us), 1
>>>>> [READ], MaxLatency(us), 928123
>>>>> [READ], 95thPercentileLatency(ms), 0
>>>>> [READ], 99thPercentileLatency(ms), 0
>>>>> 
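[Editor's note: pulling the [OVERALL] throughput lines together, the deltas between the two builds are all within roughly ±1.5%, which supports the "no regression, not a drag race" reading earlier in the thread. A quick check of that arithmetic, with the numbers copied verbatim from the reports above:]

```python
# Throughput (ops/sec) from the [OVERALL] lines above, per workload.
throughput = {
    # workload: (0.98.0, 0.98.2)
    "A": (4767, 4802),
    "B": (2719, 2744),
    "C": (3069, 3041),
    "D": (2700, 2740),
    "E": (636.2355217, 640),
    "F": (2413, 2398),
}

# Relative delta of 0.98.2 vs. 0.98.0 for each workload.
for wl, (old, new) in throughput.items():
    delta_pct = 100.0 * (new - old) / old
    print(f"Workload {wl}: {delta_pct:+.2f}%")
```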
>>>>> 
>>>>> 
>>>>> --
>>>>> Best regards,
>>>>> 
>>>>>  - Andy
>>>>> 
>>>>> Problems worthy of attack prove their worth by hitting back. - Piet Hein
>>>>> (via Tom White)
>> 
