Date: Wed, 31 Aug 2016 19:12:20 +0000 (UTC)
From: "ASF GitHub Bot (JIRA)"
To: commits@beam.incubator.apache.org
Reply-To: dev@beam.incubator.apache.org
Subject: [jira] [Commented] (BEAM-610) Enable spark's checkpointing mechanism for driver-failure recovery in streaming
Archived-At: Wed, 31 Aug 2016 19:12:26 -0000

    [ https://issues.apache.org/jira/browse/BEAM-610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15453099#comment-15453099 ]

ASF GitHub Bot commented on BEAM-610:
-------------------------------------

GitHub user amitsela opened a pull request:

    https://github.com/apache/incubator-beam/pull/909

    [BEAM-610] Enable spark's checkpointing mechanism for driver-failure recovery in streaming

Be sure to do all of the following to help us incorporate your contribution quickly and easily:

- [ ] Make sure the PR title is formatted like: `[BEAM-<Jira issue #>] Description of pull request`
- [ ] Make sure tests pass via `mvn clean verify`. (Even better, enable Travis-CI on your fork and ensure the whole test matrix passes.)
- [ ] Replace `<Jira issue #>` in the title with the actual Jira issue number, if there is one.
- [ ] If this contribution is large, please file an Apache [Individual Contributor License Agreement](https://www.apache.org/licenses/icla.txt).

---

Support basic functionality with GroupByKey and ParDo.
Added support for grouping operations.
Added a checkpointDir option, used before execution.
Support Accumulator recovery from checkpoint.
Streaming tests use JUnit's TemporaryFolder Rule for the checkpoint directory.
Support combine optimizations.
Support durable sideInput via Broadcast.
Branches in the pipeline are either Bounded or Unbounded and should be handled as such.
Handle flatten/union of Bounded/Unbounded RDD/DStream.
JavaDoc. Rebased on master.
Reuse functionality between batch and streaming translators.
Better implementation of streaming/batch pipeline-branch translation.
Move group/combine functions to their own wrapping class.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/amitsela/incubator-beam BEAM-610

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/incubator-beam/pull/909.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #909

----

commit ec9cd0805c23afd792dcccf0f8fb268cdbb0e319
Author: Sela
Date: 2016-08-25T20:49:01Z

    Refactor translation mechanism to support checkpointing of DStream.

    Support basic functionality with GroupByKey and ParDo.
    Added support for grouping operations.
    Added a checkpointDir option, used before execution.
    Support Accumulator recovery from checkpoint.
    Streaming tests use JUnit's TemporaryFolder Rule for the checkpoint directory.
    Support combine optimizations.
    Support durable sideInput via Broadcast.
    Branches in the pipeline are either Bounded or Unbounded and should be handled as such.
    Handle flatten/union of Bounded/Unbounded RDD/DStream.
    JavaDoc. Rebased on master.
    Reuse functionality between batch and streaming translators.
    Better implementation of streaming/batch pipeline-branch translation.
    Move group/combine functions to their own wrapping class.
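The "Accumulator recovery from checkpoint" and "durable sideInput via Broadcast" items above rely on wrapping such values so they can be rebuilt after a driver restart. A minimal, Spark-free sketch of that pattern follows; the class name `RecoverableSingleton` and its shape are illustrative assumptions, not the runner's actual code.

```java
import java.util.function.Supplier;

// Illustrative sketch (not Beam's actual class): a holder that lazily creates
// a value on first use and can drop it, so the next caller re-creates it --
// the pattern used to rebuild Accumulators/Broadcast variables that become
// stale after recovery from a checkpoint.
final class RecoverableSingleton<T> {
    private final Supplier<T> factory;
    private volatile T instance;

    RecoverableSingleton(Supplier<T> factory) {
        this.factory = factory;
    }

    T getOrCreate() {
        T local = instance;
        if (local == null) {
            synchronized (this) {
                local = instance;
                if (local == null) {
                    // First use, or first use after invalidate(): build fresh.
                    instance = local = factory.get();
                }
            }
        }
        return local;
    }

    synchronized void invalidate() {
        // Called when the held value is stale (e.g. after driver recovery),
        // forcing the next getOrCreate() to re-create it.
        instance = null;
    }
}
```

On recovery the runner would invalidate such holders, so the first access re-registers the Accumulator (or re-broadcasts the side input) against the new context.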
----

> Enable spark's checkpointing mechanism for driver-failure recovery in streaming
> -------------------------------------------------------------------------------
>
>                 Key: BEAM-610
>                 URL: https://issues.apache.org/jira/browse/BEAM-610
>             Project: Beam
>          Issue Type: Bug
>          Components: runner-spark
>            Reporter: Amit Sela
>            Assignee: Amit Sela
>
> For streaming applications, Spark provides a checkpoint mechanism useful for stateful processing and driver failures. See: https://spark.apache.org/docs/1.6.2/streaming-programming-guide.html#checkpointing
> This requires the "lambdas", i.e. the content of DStream/RDD functions, to be Serializable. Currently the runner delegates much of the streaming translation work to the batch translator, which can no longer be the case because the batch translator passes along non-serializable objects.
> This also requires wrapping the creation of the streaming application's graph in a "getOrCreate" manner. See: https://spark.apache.org/docs/1.6.2/streaming-programming-guide.html#how-to-configure-checkpointing
> Another limitation is the need to wrap Accumulators and Broadcast variables in singletons so that they are re-created once they become stale after recovery.
> This work is a prerequisite for supporting PerKey workflows, which will be supported via Spark's stateful operators such as mapWithState.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
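The "getOrCreate" wrapping the issue describes corresponds to Spark 1.6's `JavaStreamingContext.getOrCreate(checkpointPath, contextFactory)`: if a checkpoint exists, the context (and the whole DStream graph) is deserialized from it, otherwise the factory builds it fresh. A Spark-free sketch of that control flow, under stated assumptions (the single-file "state" layout and names here are illustrative, not Spark's actual checkpoint format):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.function.Supplier;

// Sketch of the "getOrCreate" recovery pattern: restore persisted state if a
// checkpoint exists, otherwise build it fresh and persist it so a restarted
// driver can recover it. Illustrative only -- Spark checkpoints a serialized
// streaming context, not a plain string.
final class GetOrCreate {
    static String getOrCreate(Path checkpointDir, Supplier<String> create) {
        try {
            Path state = checkpointDir.resolve("state");
            if (Files.exists(state)) {
                // Recovery path: ignore the factory, rebuild from the checkpoint.
                return new String(Files.readAllBytes(state));
            }
            // First run: build fresh and persist for future recovery.
            String fresh = create.get();
            Files.createDirectories(checkpointDir);
            Files.write(state, fresh.getBytes());
            return fresh;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Because the recovered graph is deserialized rather than rebuilt by user code, every function captured in it must be Serializable, which is exactly why the streaming translation can no longer lean on the batch translator's non-serializable objects.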