flink-dev mailing list archives

From Chen Qin ...@uber.com>
Subject Re: Reply: States split over to external storage
Date Tue, 17 Jan 2017 19:55:54 GMT
Hi liuxinchun,

Thanks for the prompt feedback!

I think that if the dev community finds it makes sense to invest in this
feature, allowing users to configure the eviction strategy (2) makes sense to
me. Given how much state growth varies across Flink jobs, there might be an
interface that lets the state backend decide which state can be evicted.

Regarding (1), I see optimizations that could give an immediate performance
boost. I would suggest raising a JIRA and discussing it with the whole dev
community; there might be cases where it conflicts with upcoming refactors.
Note that the Flink devs are super busy releasing 1.2, so expect a late response :)
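To make the eviction idea concrete, here is a minimal sketch of what such a
pluggable eviction hook could look like. This is purely illustrative (Python
for brevity, and the class name `LruEvictionPolicy` is hypothetical, not a
Flink API); it shows a backend tracking access order and handing back the
least-recently-used entries once a threshold is crossed:

```python
# Hypothetical sketch (not Flink's actual API): an eviction policy hook
# that lets a state backend decide which keyed state to spill first.
from collections import OrderedDict

class LruEvictionPolicy:
    """Tracks access order; evicts least-recently-used entries first."""

    def __init__(self, max_entries):
        self.max_entries = max_entries
        self.entries = OrderedDict()  # key -> value, oldest access first

    def touch(self, key, value):
        # Move the key to the most-recently-used position.
        self.entries.pop(key, None)
        self.entries[key] = value

    def evict_if_needed(self):
        # Return (key, value) pairs that should spill to external storage.
        evicted = []
        while len(self.entries) > self.max_entries:
            evicted.append(self.entries.popitem(last=False))
        return evicted
```

A state backend could swap in a different policy (e.g. size-based or
TTL-based) behind the same `touch`/`evict_if_needed` shape.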


> (1) The organization of the current sliding windows (SlidingProcessingTimeWindow
> and SlidingEventTimeWindow) has a drawback: when using ListState, an
> element may be kept in multiple windows (size / slide). This is time-consuming
> and wastes storage when checkpointing.
>   Opinion: I think this is a point for optimization. Elements can be organized
> according to key into splits (which could also be called panes). When
> triggering cleanup, only the oldest split (pane) needs to be cleaned up.
> (2) Incremental backup strategy. In the original idea, we planned to back up
> only the newly arrived elements, which means a whole window may span several
> checkpoints; we have developed this idea in our private SPS. But in
> Flink, the window may not keep the raw data (for example, ReducingState and
> FoldingState). Chen Qin's idea may be a candidate strategy. We can keep
> in touch and exchange our respective strategies.
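[Replying inline: a minimal sketch of the pane idea in (1), with hypothetical
names and Python used only for illustration. Each element is stored exactly
once, in the pane covering its timestamp; a sliding window of length `size` is
the union of `size // slide` consecutive panes, so checkpoints no longer
duplicate each element `size / slide` times, and cleanup drops only the oldest
pane.]

```python
# Illustrative sketch of pane-based sliding windows (names are hypothetical,
# not Flink internals). Each element lives in exactly one pane; a sliding
# window is assembled from consecutive panes on demand.
from collections import defaultdict

class PanedSlidingWindow:
    def __init__(self, size, slide):
        assert size % slide == 0, "size must be a multiple of slide"
        self.size, self.slide = size, slide
        self.panes = defaultdict(list)  # pane index -> elements

    def add(self, timestamp, element):
        # Store the element once, in the single pane covering its timestamp.
        self.panes[timestamp // self.slide].append(element)

    def window(self, end_pane):
        # A window is the union of size // slide panes ending at end_pane.
        n = self.size // self.slide
        out = []
        for p in range(end_pane - n + 1, end_pane + 1):
            out.extend(self.panes.get(p, []))
        return out

    def cleanup(self, oldest_live_pane):
        # Only panes older than the oldest live window are dropped.
        for p in [p for p in self.panes if p < oldest_live_pane]:
            del self.panes[p]
```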
> -----Original Message-----
> From: Chen Qin [mailto:cq@uber.com]
> Sent: 17 January 2017 13:30
> To: dev@flink.apache.org
> Cc: iuxinchun@huawei.com; Aljoscha Krettek; shijinkui
> Subject: States split over to external storage
> Hi there,
> I would like to discuss spilling local state over to external storage. The
> use case is NOT another external state backend like HDFS, but rather to
> expand beyond what local disk/memory can hold when a large key space exceeds
> what the task managers can handle. Realizing that FLINK-4266 might be hard to
> tackle all in one, I would like to give spill-over a shot first.
> An intuitive approach would be to treat the HeapStateBackend as an LRU cache
> and spill over to an external key/value store when a threshold is triggered.
> To make this happen, we need a minor refactor of the runtime plus set/get logic.
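[Replying inline: the spill-over behavior described above could look roughly
like this sketch. It is a hypothetical illustration in Python, not Flink code;
a plain dict stands in for the external key/value store, and the class name is
made up. Hot entries stay on the heap, cold entries spill out once the heap
exceeds its threshold, and a miss faults the entry back in:]

```python
# Sketch of the split-over idea (hypothetical; a plain dict stands in for
# the external key/value store). The heap map acts as an LRU cache.
from collections import OrderedDict

class SpillingKeyedState:
    def __init__(self, heap_capacity, external_store):
        self.heap = OrderedDict()       # on-heap state, LRU-ordered
        self.capacity = heap_capacity
        self.external = external_store  # e.g. a remote key/value service

    def put(self, key, value):
        self.heap.pop(key, None)
        self.heap[key] = value
        while len(self.heap) > self.capacity:
            cold_key, cold_value = self.heap.popitem(last=False)
            self.external[cold_key] = cold_value  # spill over threshold

    def get(self, key):
        if key in self.heap:
            self.heap.move_to_end(key)  # refresh LRU position
            return self.heap[key]
        if key in self.external:
            value = self.external.pop(key)
            self.put(key, value)        # fault back onto the heap
            return value
        return None
```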
> One nice thing about keeping HDFS to store the snapshots is avoiding
> versioning conflicts: once a checkpoint restore happens, partially written
> data will be overwritten with the previously checkpointed values.
> Comments?
> --
> -Chen Qin

-Chen Qin
