beam-commits mailing list archives

From Jean-Baptiste Onofré (JIRA) <j...@apache.org>
Subject [jira] [Commented] (BEAM-313) Enable the use of an existing spark context with the SparkPipelineRunner
Date Tue, 31 May 2016 11:42:12 GMT

    [ https://issues.apache.org/jira/browse/BEAM-313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15307605#comment-15307605 ]

Jean-Baptiste Onofré commented on BEAM-313:
-------------------------------------------

Right now, each runner provides its own context, and that's the preferred approach, since the context is handled directly by the runner.
However, we could imagine injecting the context via the pipeline options.
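The injection idea above can be sketched in plain Java. Note this is a minimal illustration of the pattern only: `SparkLikeContext`, `ContextOptions`, and `Runner` are hypothetical stand-ins for a Spark context, Beam pipeline options, and the Spark runner, and none of these names are the actual Beam or Spark API.

```java
// Hypothetical sketch of "inject the context via the pipeline options".
// None of these classes are real Beam/Spark types.

/** Stand-in for a Spark context. */
class SparkLikeContext {
    final String appName;
    SparkLikeContext(String appName) { this.appName = appName; }
}

/** Stand-in for pipeline options carrying an optional user-provided context. */
class ContextOptions {
    private SparkLikeContext providedContext; // null means "let the runner create one"
    SparkLikeContext getProvidedContext() { return providedContext; }
    void setProvidedContext(SparkLikeContext ctx) { this.providedContext = ctx; }
}

/** Stand-in runner: reuses an injected context, otherwise creates its own. */
class Runner {
    static SparkLikeContext contextFor(ContextOptions options) {
        if (options.getProvidedContext() != null) {
            // Context is managed outside the runner (reuse, job server, ...).
            return options.getProvidedContext();
        }
        // Default behavior: the runner owns the context lifecycle.
        return new SparkLikeContext("runner-owned");
    }
}
```

With this shape, callers who want context reuse set the context on the options before running the pipeline, and everyone else keeps the current runner-owned behavior unchanged.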

> Enable the use of an existing spark context with the SparkPipelineRunner
> ------------------------------------------------------------------------
>
>                 Key: BEAM-313
>                 URL: https://issues.apache.org/jira/browse/BEAM-313
>             Project: Beam
>          Issue Type: New Feature
>            Reporter: Abbass Marouni
>            Assignee: Jean-Baptiste Onofré
>
> The general use case is that the SparkPipelineRunner creates its own Spark context and uses it for the pipeline execution.
> Another alternative is to provide the SparkPipelineRunner with an existing Spark context. This can be interesting for a lot of use cases where the Spark context is managed outside of Beam (context reuse, advanced context management, Spark job server, ...).
> code sample : https://github.com/amarouni/incubator-beam/commit/fe0bb517bf0ccde07ef5a61f3e44df695b75f076



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
