hbase-dev mailing list archives

From "Du, Jingcheng" <jingcheng...@intel.com>
Subject RE: Use experience and performance data of offheap from Alibaba online cluster
Date Fri, 18 Nov 2016 08:54:57 GMT
Thanks Yu for sharing, great achievements.
It seems the images cannot be displayed? Or maybe it is just me?


From: Yu Li [mailto:carp84@gmail.com]
Sent: Friday, November 18, 2016 4:11 PM
To: user@hbase.apache.org; dev@hbase.apache.org
Subject: Use experience and performance data of offheap from Alibaba online cluster

Dear all,

We have backported the read path offheap feature (HBASE-11425) to our customized hbase-1.1.2 (thanks @Anoop
for the help/support) and have been running it online for more than a month, so we would like to share our
experience, for what it's worth (smile).
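For reference, below is a minimal sketch of the kind of configuration this involves, using the standard HBase
property names for the offheap BucketCache. The sizes are illustrative assumptions rather than our production
values, and in a real deployment they would go into hbase-site.xml (plus HBASE_OFFHEAPSIZE in hbase-env.sh)
instead of being set in code:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Illustrative sketch only: standard property names for the offheap
// BucketCache, with made-up sizes.
public class OffheapCacheConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // back the L2 block cache with offheap memory (BucketCache)
    conf.set("hbase.bucketcache.ioengine", "offheap");
    // BucketCache size in MB (illustrative value, not a production setting)
    conf.set("hbase.bucketcache.size", "16384");
    // keep a modest on-heap L1 cache for index/bloom blocks
    conf.setFloat("hfile.block.cache.size", 0.2f);
    System.out.println("ioengine = " + conf.get("hbase.bucketcache.ioengine")
        + ", bucketcache MB = " + conf.get("hbase.bucketcache.size"));
  }
}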

Generally speaking, we gained better and more stable throughput and performance with offheap;
some details below:

1. QPS becomes more stable with offheap

Performance w/o offheap: [throughput chart, not preserved in the archive]

Performance w/ offheap: [throughput chart, not preserved in the archive]

These data come from our online A/B test cluster (450 physical machines, each with 256GB memory
and 64 cores) under real-world workloads. They show that with offheap we gain more stable
throughput as well as better performance.

We are not showing the full online data here because the version we published online includes both
offheap and NettyRpcServer, so there is no standalone comparison for offheap alone.

2. Full GC frequency and cost

Average Full GC STW time reduced from 11s to 7s with offheap.

3. Young GC frequency and cost

No performance degradation observed with offheap.

4. Peak throughput of a single RS

On Singles' Day (11/11), the peak throughput of a single RS reached 100K QPS, of which 90K came from
Get requests. Combined with the network in/out data, we can infer that the average result size of a
Get request is ~1KB (a rough sketch of this arithmetic follows below).
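As a rough illustration of that inference (the outbound-traffic figure below is an assumed
placeholder to make the arithmetic concrete, not a measured number from our cluster):

// Back-of-the-envelope sketch of the ~1KB average Get result size.
public class GetResultSizeEstimate {
  public static void main(String[] args) {
    double getQps = 90_000;                         // peak Get QPS on one RS (from point 4)
    double netOutBytesPerSec = 90.0 * 1024 * 1024;  // assumed ~90 MB/s outbound traffic
    double avgResultBytes = netOutBytesPerSec / getQps;
    System.out.printf("average Get result ~= %.0f bytes%n", avgResultBytes);
    // ~1049 bytes, i.e. roughly 1KB per Get response
  }
}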


Offheap is used on all online machines (more than 1,600 nodes) instead of LruCache, so the above QPS
comes from the offheap BucketCache, along with NettyRpcServer (HBASE-15756).
Just let us know if you have any comments. Thanks.

Best Regards,
