reef-dev mailing list archives

From "Matteo Interlandi (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (REEF-1791) Implement reef-runtime-spark
Date Mon, 15 May 2017 03:46:04 GMT

    [ https://issues.apache.org/jira/browse/REEF-1791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16009965#comment-16009965 ]

Matteo Interlandi commented on REEF-1791:
-----------------------------------------

Hi Saikat,
could you please explain why we need such deep integration with Spark to get the REEF runtime
working on Spark? In theory, if we can add a dependency on Spark, one can simply run a
mapPartitions job over properly created partitions that hold the resources, where each task
spawns an Evaluator. This design requires no deep integration with the Spark runtime, and it
is what other libraries like TensorFlowOnSpark adopt.
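The approach above can be sketched in plain Scala. This is only an illustration of the pattern, not an actual implementation: `spawnEvaluator` is a hypothetical stand-in for bootstrapping a REEF Evaluator inside a Spark executor, and the partitions are modeled with local collections so the sketch is self-contained (a real version would use `RDD.mapPartitionsWithIndex` on a `SparkContext`).

```scala
// Sketch of the TensorFlowOnSpark-style design: one Spark task per
// partition, and each task launches one REEF Evaluator. No changes to
// the Spark runtime itself are needed.
object EvaluatorSketch {
  // Hypothetical stand-in: in a real reef-runtime-spark this would start
  // a REEF Evaluator process on the executor hosting this partition.
  def spawnEvaluator(partitionId: Int): String =
    s"evaluator-$partitionId:RUNNING"

  // With a real SparkContext `sc`, the equivalent job would be roughly:
  //   sc.parallelize(0 until numEvaluators, numEvaluators)
  //     .mapPartitionsWithIndex { (id, _) => Iterator(spawnEvaluator(id)) }
  //     .collect()
  // Here we iterate over the "partitions" locally to keep the sketch
  // runnable without a Spark dependency.
  def launchAll(numEvaluators: Int): Seq[String] =
    (0 until numEvaluators).map(spawnEvaluator)
}
```

The key point of the design is that each Spark task pins its partition's resources for the lifetime of the Evaluator it spawns, so REEF sits entirely on top of the public Spark API.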

> Implement reef-runtime-spark
> ----------------------------
>
>                 Key: REEF-1791
>                 URL: https://issues.apache.org/jira/browse/REEF-1791
>             Project: REEF
>          Issue Type: New Feature
>          Components: REEF
>            Reporter: Sergiy Matusevych
>            Assignee: Saikat Kanjilal
>   Original Estimate: 1,344h
>  Remaining Estimate: 1,344h
>
> We need to run REEF Tasks on Spark Executors. Ideally, that should require only a few
> lines of changes in the REEF application configuration. All Spark-related logic must be
> encapsulated in the {{reef-runtime-spark}} module, similar to the existing modules such as
> {{reef-runtime-yarn}} or {{reef-runtime-local}}. As a first step, we can have a Java-only
> solution, but later we'll need to run .NET Tasks on Executors as well.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
