kylin-issues mailing list archives

From "Ma Gang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (KYLIN-3601) The max connection number generated by the PreparedContextPool is inconsistent with the configuration.
Date Tue, 16 Oct 2018 06:06:00 GMT

    [ https://issues.apache.org/jira/browse/KYLIN-3601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16651174#comment-16651174 ]

Ma Gang commented on KYLIN-3601:
--------------------------------

What is your expected query performance (throughput and latency)? Could you provide detailed performance
data (including concurrency level, average latency, etc.)? Do you have any evidence that PreparedStatement
creation is the bottleneck of the system?

Below are my test results from my environment (cube and environment details not included):

without PreparedStatement cache:
===============================================================
concurrency    qps      avg_latency(ms)   50%(ms)   90%(ms)
          5    19.79            252.672       249       277
         10    22.74            439.764       432       499
         20    23.74            842.411       829      1008
         50    24.07           2077.124      1964      2485

with PreparedStatement cache:
===============================================================
concurrency    qps      avg_latency(ms)   50%(ms)   90%(ms)
          5    77.63             64.411        62        76
         10    76.75            130.285       124       163
         20    74.20            269.540       251       335
         50    70.96            704.581       638      1005

 

You can see that throughput goes from around 20 qps to around 70 qps, more than a 3x improvement.
Concurrency is still limited, though: more than 50 concurrent clients will significantly degrade
system performance. The bottleneck is probably the RPC calls to HBase (not entirely sure, I did
not dig into it).

 

> The max connection number generated by the PreparedContextPool is inconsistent with the configuration.
> ------------------------------------------------------------------------------------------------------
>
>                 Key: KYLIN-3601
>                 URL: https://issues.apache.org/jira/browse/KYLIN-3601
>             Project: Kylin
>          Issue Type: Bug
>          Components: Query Engine
>    Affects Versions: v2.5.0
>            Reporter: huaicui
>            Priority: Major
>         Attachments: FirstResponseDistribute.jpg, SixthResponseDistribute.jpg, image-2018-09-28-15-14-00-288.png, image.png
>
>
> Because concurrent query performance was insufficient, we tested the PreparedStatement approach provided by magang. Performance did improve, but as the number of test rounds grew, throughput kept dropping and more and more queries timed out. We modified the code to add the following log output in queryAndUpdateCache, right before the final return:
> logger.debug("BorrowedCount:" + preparedContextPool.getBorrowedCount()
>  + ",DestroyedCount:" + preparedContextPool.getDestroyedCount()
>  + ",CreatedCount:" + preparedContextPool.getCreatedCount()
>  + ",ReturnedCount:" + preparedContextPool.getReturnedCount());
> At the same time, the following setting was added to the configuration file:
> kylin.query.statement-cache-max-num-per-key=200
>  
>  
> The logs show that after the same SQL has been run concurrently for a while, the PreparedContextPool creates more and more PreparedStatement objects instead of blocking the subsequent requests (see the sketch after this quoted description).
> !image-2018-09-28-15-14-00-288.png!

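On the pool behavior described above: a minimal sketch using Apache Commons Pool 2, which exposes the Borrowed/Created/Destroyed/Returned counters quoted in the log statement, showing how a per-key cap together with blockWhenExhausted bounds the number of pooled objects per SQL. The class and field names below are illustrative, not Kylin's actual implementation:

    import org.apache.commons.pool2.BaseKeyedPooledObjectFactory;
    import org.apache.commons.pool2.PooledObject;
    import org.apache.commons.pool2.impl.DefaultPooledObject;
    import org.apache.commons.pool2.impl.GenericKeyedObjectPool;

    public class StatementPoolCapSketch {

        // Hypothetical stand-in for a per-SQL prepared query context.
        static class PreparedContext {
            final String sql;
            PreparedContext(String sql) { this.sql = sql; }
        }

        static class ContextFactory extends BaseKeyedPooledObjectFactory<String, PreparedContext> {
            @Override
            public PreparedContext create(String sql) {
                return new PreparedContext(sql);
            }
            @Override
            public PooledObject<PreparedContext> wrap(PreparedContext ctx) {
                return new DefaultPooledObject<>(ctx);
            }
        }

        public static void main(String[] args) throws Exception {
            GenericKeyedObjectPool<String, PreparedContext> pool =
                    new GenericKeyedObjectPool<>(new ContextFactory());
            // Mirror of kylin.query.statement-cache-max-num-per-key=200: at most 200
            // contexts per SQL key; callers block (up to maxWait) instead of the pool
            // creating additional objects beyond the cap.
            pool.setMaxTotalPerKey(200);
            pool.setBlockWhenExhausted(true);
            pool.setMaxWaitMillis(1000);

            String sql = "select count(*) from some_table";  // placeholder SQL
            PreparedContext ctx = pool.borrowObject(sql);
            try {
                // ... execute the query with ctx ...
            } finally {
                pool.returnObject(sql, ctx);
            }

            // The same counters that were added to queryAndUpdateCache for debugging.
            System.out.println("BorrowedCount:" + pool.getBorrowedCount()
                    + ",DestroyedCount:" + pool.getDestroyedCount()
                    + ",CreatedCount:" + pool.getCreatedCount()
                    + ",ReturnedCount:" + pool.getReturnedCount());
            pool.close();
        }
    }

With a cap enforced this way, CreatedCount for a single SQL should never exceed the configured per-key maximum; a CreatedCount that keeps growing under sustained concurrency for one SQL, as reported in the description, would indicate the limit is not being applied.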


--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
