flink-user-zh mailing list archives

From "claylin" <1012539...@qq.com>
Subject How to tune RocksDB when both read and write frequency are very high
Date Wed, 26 Feb 2020 15:27:19 GMT
Hi everyone: I have run into a RocksDB tuning question. The system averages around 100k TPS, and almost every record triggers a read and a write against RocksDB. Below are the disk I/O statistics I collected with sar -dp:
Average:     DEV     tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz   await   svctm   %util
Average:     sda  285.36   2152.91  88322.99    317.06     21.48   75.27    0.58   16.60
After the job runs for a while, this leads to severe backpressure. I made some adjustments following https://ci.apache.org/projects/flink/flink-docs-stable/ops/state/large_state_tuning.html#tuning-rocksdb, such as increasing Managed Memory and increasing RocksDB's flush and compaction thread counts, but the outcome is the same: once the job has been running long enough and the state grows, backpressure sets in. I have also configured state expiration (TTL).


This problem has been bothering me for a long time. Does anyone have a good optimization approach?