From issues-return-151770-archive-asf-public=cust-asf.ponee.io@flink.apache.org Sat Feb 10 06:06:14 2018
Date: Sat, 10 Feb 2018 05:06:05 +0000 (UTC)
From: "ASF GitHub Bot (JIRA)"
To: issues@flink.apache.org
Subject: [jira] [Commented] (FLINK-8360) Implement task-local state recovery

    [ https://issues.apache.org/jira/browse/FLINK-8360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16359253#comment-16359253 ]

ASF GitHub Bot commented on FLINK-8360:
---------------------------------------

Github user bowenli86 commented on a diff in the pull request:

    https://github.com/apache/flink/pull/5239#discussion_r167356561

--- Diff: docs/ops/state/large_state_tuning.md ---
@@ -234,4 +234,97 @@ Compression can be activated through the `ExecutionConfig`:

**Notice:** The compression option has no impact on incremental snapshots, because they use RocksDB's internal format, which always applies snappy compression out of the box.
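For illustration, a minimal sketch of what activating snapshot compression looks like, using the `ExecutionConfig#setUseSnapshotCompression` setter (this snippet is not part of the quoted diff):

```java
import org.apache.flink.api.common.ExecutionConfig;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SnapshotCompressionExample {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Enable compression for full snapshots of keyed state.
        // Incremental RocksDB snapshots are unaffected: they always use
        // RocksDB's built-in snappy compression regardless of this setting.
        ExecutionConfig config = env.getConfig();
        config.setUseSnapshotCompression(true);
    }
}
```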
## Task-Local Recovery

### Motivation

In Flink's checkpointing, each task produces a snapshot of its state that is then written to a distributed store. Each task acknowledges a successful write of the state to the job manager by sending a handle that describes the location of the state in the distributed store. The job manager, in turn, collects the handles from all tasks and bundles them into a checkpoint object.

In case of recovery, the job manager opens the latest checkpoint object and sends the handles back to the corresponding tasks, which can then restore their state from the distributed storage. Using distributed storage for state has two important advantages: first, the storage is fault tolerant, and second, all state in the distributed store is accessible to all nodes and can easily be redistributed (e.g. for rescaling).

However, using a remote distributed store also has one big disadvantage: all tasks must read their state from a remote location, over the network. In many scenarios, recovery could reschedule failed tasks to the same task manager as in the previous run (of course there are exceptions, such as machine failures), but we still have to read remote state. This can result in *long recovery times for large states*, even if the failure was a small one on a single machine.

### Approach

Task-local state recovery targets exactly this problem of long recovery times. The main idea is the following: for every checkpoint, we not only write task states to the distributed storage, but also keep *a secondary copy of the state snapshot in a storage that is local to the task* (e.g. on local disk or in memory). Note that the primary store for snapshots must still be the distributed store: local storage does not ensure durability under node failures and also does not provide access for other nodes to redistribute state, so this functionality still requires the primary copy.

However, each task that can be rescheduled to its previous location for recovery can restore its state from the secondary, local copy and avoid the cost of reading the state remotely. Given that *many failures are not node failures, and node failures typically affect only one or very few nodes at a time*, it is very likely that in a recovery most tasks can return to their previous location and find their local state intact. This is what makes local recovery effective in reducing recovery time.

Please note that this can come at some additional cost per checkpoint for creating and storing the secondary local state copy, depending on the chosen state backend and checkpointing strategy. For example, in most cases the implementation will simply duplicate the writes to the distributed store to a local file.

*Illustration of checkpointing with task-local recovery.*
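As a sketch of how a deployment might enable this feature once merged (the configuration keys `state.backend.local-recovery` and `taskmanager.state.local.root-dirs` are assumed from this pull request and may change before release):

```java
import org.apache.flink.configuration.Configuration;

public class LocalRecoveryConfigSketch {
    public static void main(String[] args) {
        Configuration config = new Configuration();

        // Assumed key: turn on creation of secondary, task-local snapshot
        // copies in addition to the primary copy in the distributed store.
        config.setBoolean("state.backend.local-recovery", true);

        // Assumed key: directory on each task manager where the local
        // snapshot copies are kept.
        config.setString("taskmanager.state.local.root-dirs", "/tmp/flink-local-recovery");
    }
}
```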
### Relationship of primary (distributed store) and secondary (task-local) state snapshots

Task-local state is always considered a secondary copy; the ground truth of the checkpoint state is the primary copy in the distributed store. This has implications for problems with local state during checkpointing and recovery:

- For checkpointing, the *primary copy must succeed*, and a failure to produce the *secondary, local copy will not fail* the checkpoint. A checkpoint fails if the primary copy could not be created, even if the secondary copy was successfully created.

- Only the primary copy is acknowledged and managed by the job manager; secondary copies are owned by task managers, and their life cycle can be independent of their primary copy. For example, it is possible to retain a history of the 3 latest checkpoints as primary copies and to keep only the task-local state of the latest checkpoint.

- For recovery, Flink will always *attempt to restore from task-local state first*, if a matching secondary copy is available. If any problem occurs during the recovery from the secondary copy, Flink will *transparently retry to recover the task from the primary copy*. Recovery only fails if the primary and the (optional) secondary copy failed. In this case, depending on the configuration, Flink could still fall back to an older checkpoint.

--- End diff --

secondary cop**ies**

> Implement task-local state recovery
> -----------------------------------
>
>                 Key: FLINK-8360
>                 URL: https://issues.apache.org/jira/browse/FLINK-8360
>             Project: Flink
>          Issue Type: New Feature
>          Components: State Backends, Checkpointing
>            Reporter: Stefan Richter
>            Assignee: Stefan Richter
>            Priority: Major
>             Fix For: 1.5.0
>
>
> This issue tracks the development of recovery from task-local state. The main idea is to have a secondary, local copy of the checkpointed state, while there is still a primary copy in DFS that we report to the checkpoint coordinator.
> Recovery can attempt to restore from the secondary local copy, if available, to save network bandwidth. This requires that the assignment from tasks to slots is as sticky as possible.
> For starters, we will implement this feature for all managed keyed state and can easily extend it to all other state types (e.g. operator state) later.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)