flink-issues mailing list archives

From "Sihua Zhou (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (FLINK-8602) Improve recovery performance for rocksdb backend
Date Mon, 26 Feb 2018 05:06:00 GMT

     [ https://issues.apache.org/jira/browse/FLINK-8602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sihua Zhou updated FLINK-8602:
    Summary: Improve recovery performance for rocksdb backend  (was: Accelerate recover from
failover when use incremental checkpoint)

> Improve recovery performance for rocksdb backend
> ------------------------------------------------
>                 Key: FLINK-8602
>                 URL: https://issues.apache.org/jira/browse/FLINK-8602
>             Project: Flink
>          Issue Type: Improvement
>          Components: State Backends, Checkpointing
>    Affects Versions: 1.5.0
>            Reporter: Sihua Zhou
>            Assignee: Sihua Zhou
>            Priority: Major
> Currently, when incremental checkpointing is enabled and the user changes the parallelism,
`hasExtraKeys` may be `true`. If this occurs, Flink loops over all RocksDB instances and
iterates over all of their data to fetch the entries that fall into the current
`KeyGroupRange`. This can be improved as follows:
> - 1. For multiple RocksDB instances, we don't need to iterate over their entries and insert
them into another instance; we can use the `ingestExternalFile()` API to merge them.
> - 2. For the key groups that do not belong to the target `KeyGroupRange`, we can delete
them lazily by setting a `CompactionFilter` for the `ColumnFamily`.
> Any advice would be highly appreciated!
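The lazy-deletion idea in point 2 hinges on a cheap predicate: given a key, decide whether its key group lies outside the key-group range owned by the recovering subtask, and if so let a compaction filter drop it during background compaction instead of deleting it eagerly at restore time. The sketch below models that predicate with plain Java. The class and method names (`KeyGroupRange`, `assignToKeyGroup`, `shouldDrop`) are hypothetical stand-ins, and the key-group assignment here uses the raw `hashCode()` modulo `maxParallelism`; Flink's real assignment additionally applies a murmur hash to the key's hash code.

```java
public class KeyGroupFilterSketch {

    // Mirrors the idea of Flink's KeyGroupRange: an inclusive interval
    // [start, end] of key groups owned by one subtask. Hypothetical name.
    static final class KeyGroupRange {
        final int start;
        final int end;

        KeyGroupRange(int start, int end) {
            this.start = start;
            this.end = end;
        }

        boolean contains(int keyGroup) {
            return keyGroup >= start && keyGroup <= end;
        }
    }

    // Assumption: a key maps to a key group via hash modulo maxParallelism.
    // (Flink applies a murmur hash to key.hashCode() first; omitted here.)
    static int assignToKeyGroup(Object key, int maxParallelism) {
        return Math.floorMod(key.hashCode(), maxParallelism);
    }

    // The predicate a CompactionFilter would evaluate per entry: drop the
    // entry if its key group falls outside the range this subtask owns.
    static boolean shouldDrop(Object key, int maxParallelism, KeyGroupRange target) {
        return !target.contains(assignToKeyGroup(key, maxParallelism));
    }

    public static void main(String[] args) {
        int maxParallelism = 128;
        // This subtask owns key groups 0..63 after rescaling (example values).
        KeyGroupRange target = new KeyGroupRange(0, 63);

        // Integer.hashCode() is the value itself, so these are deterministic:
        // key 5   -> group 5   (inside 0..63)  -> kept
        // key 200 -> group 72  (outside 0..63) -> dropped lazily by compaction
        System.out.println("key 5 dropped=" + shouldDrop(5, maxParallelism, target));
        System.out.println("key 200 dropped=" + shouldDrop(200, maxParallelism, target));
    }
}
```

In a real implementation this predicate would live inside a RocksDB `CompactionFilter` registered on the column family, so out-of-range entries disappear gradually as compaction runs, keeping the restore path itself fast.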

This message was sent by Atlassian JIRA
