reef-dev mailing list archives

From "Saikat Kanjilal (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (REEF-1791) Implement reef-runtime-spark
Date Mon, 15 May 2017 01:47:04 GMT

    [ https://issues.apache.org/jira/browse/REEF-1791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16009912#comment-16009912 ]

Saikat Kanjilal commented on REEF-1791:
---------------------------------------

[~motus][~markus.weimer]

First cut of the design, with several options:

I spent some time researching the design for this runtime, and there are a couple of ways to
tackle the problem. Both options assume that Spark executors are already available and that
we can invoke one of them to launch our REEF task:

Option 1:
Spark jobs can be launched and monitored through Livy, a REST API server for Spark. We could
create a reef-rest-client that packages up the parameters and uses Livy internally to make a
REST API call into the Spark cluster to execute the REEF task. There are some things to work
out here, namely the division of responsibilities between the driver and the evaluator. My
initial thought is that the driver launches and manages the evaluator, and the evaluator in
turn uses Livy to make the REST API calls and monitor the Spark job. One issue with this is
that it was not really the original goal of the evaluators; I'm open to expanding their
responsibility, but we'd need to discuss the details a bit further.
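
To make Option 1 concrete, here is a minimal sketch of what the Livy call from the evaluator
could look like. The host, jar path, and launcher class are hypothetical placeholders; the
POST to the {{/batches}} endpoint follows Livy's batch-submission API:

{code:java}
// Minimal sketch, not a definitive implementation: submit a jar containing
// the REEF task to Spark through Livy's batch API. The host, jar path, and
// class name below are hypothetical placeholders.
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public final class LivyBatchSketch {
  public static void main(String[] args) throws Exception {
    final URL livy = new URL("http://livy-host:8998/batches");
    final String payload =
        "{\"file\": \"hdfs:///apps/reef-task.jar\","
      + " \"className\": \"org.example.ReefTaskLauncher\"}";

    final HttpURLConnection conn = (HttpURLConnection) livy.openConnection();
    conn.setRequestMethod("POST");
    conn.setRequestProperty("Content-Type", "application/json");
    conn.setDoOutput(true);
    try (OutputStream out = conn.getOutputStream()) {
      out.write(payload.getBytes(StandardCharsets.UTF_8));
    }

    // Livy returns the batch id in the JSON response; the evaluator could
    // then poll GET /batches/{id}/state to monitor the job.
    System.out.println("Livy response code: " + conn.getResponseCode());
  }
}
{code}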


Option 2:
We make a low-level networking call from the driver and the evaluator and figure out how to
leverage spark-submit on the Spark head node to invoke the REEF task. This would essentially
require logging into the Spark head node, finding where spark-submit is located, and invoking
it with the parameters relevant to the REEF task. For example, a custom REEF ML algorithm
would involve executing the code for the algorithm on the Spark executors (very similar to
hot-deploying a chunk of Scala or Python code). Can you think of some other types of REEF
jobs that would leverage this?
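
As a rough sketch of Option 2 once we are on the head node, the evaluator could shell out to
spark-submit. The spark-submit path, master URL, and launcher class are hypothetical
placeholders:

{code:java}
// Minimal sketch, not a definitive implementation: invoke spark-submit on
// the head node with parameters for the REEF task. In practice this would
// run after logging into the head node (e.g. over SSH).
import java.io.IOException;

public final class SparkSubmitSketch {
  public static void main(String[] args) throws IOException, InterruptedException {
    final ProcessBuilder pb = new ProcessBuilder(
        "/opt/spark/bin/spark-submit",           // hypothetical install path
        "--master", "spark://head-node:7077",    // hypothetical master URL
        "--class", "org.example.ReefTaskLauncher",
        "/opt/reef/reef-task.jar");
    pb.inheritIO();  // stream spark-submit's output back to the caller
    final int exitCode = pb.start().waitFor();
    System.out.println("spark-submit exited with " + exitCode);
  }
}
{code}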

At the end of the day, the Spark head node is responsible for executing a job on the Spark
cluster by farming out parts of the job to the various worker nodes, so the REEF task would
essentially live inside each of the worker nodes. The Spark master node would then combine
the results and potentially send them back to the REEF driver/evaluator.
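
For illustration, here is a minimal sketch of that farm-out-and-combine pattern using Spark's
standard Java API; the squaring lambda is just a stand-in for whatever the actual REEF task
logic turns out to be:

{code:java}
// Minimal sketch: the head node partitions the work, each executor runs the
// per-partition "task" logic, and the master combines the results. The
// combined result could then be handed back to the REEF driver/evaluator.
import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public final class ReefOnExecutorsSketch {
  public static void main(String[] args) {
    final SparkConf conf = new SparkConf().setAppName("reef-on-spark-sketch");
    try (JavaSparkContext sc = new JavaSparkContext(conf)) {
      final int combined = sc.parallelize(Arrays.asList(1, 2, 3, 4), 4)
          .map(x -> x * x)        // stand-in for the REEF task body
          .reduce(Integer::sum);  // master combines the partial results
      System.out.println("combined result: " + combined);
    }
  }
}
{code}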

Some more things to think about:
1) What types of REEF jobs would it be advantageous to run on Spark executors?
2) Spark has its own monitoring through Livy; should REEF leverage this or come up with its
own monitoring to track progress on its jobs?
3) Should a REEF job be expressed in Scala/Python, or should it live at a higher level?
(Namely, I was thinking REEF should perhaps indicate the "what" of the job as opposed to the
"how", since Spark is specifically solving the "how".)

Let me know your thoughts; I'd love to have in-person discussions as well if needed.






> Implement reef-runtime-spark
> ----------------------------
>
>                 Key: REEF-1791
>                 URL: https://issues.apache.org/jira/browse/REEF-1791
>             Project: REEF
>          Issue Type: New Feature
>          Components: REEF
>            Reporter: Sergiy Matusevych
>            Assignee: Saikat Kanjilal
>   Original Estimate: 1,344h
>  Remaining Estimate: 1,344h
>
> We need to run REEF Tasks on Spark Executors. Ideally, that should require only a few
> lines of changes in the REEF application configuration. All Spark-related logic must be encapsulated
> in the {{reef-runtime-spark}} module, similar to the existing e.g. {{reef-runtime-yarn}} or
> {{reef-runtime-local}}. As a first step, we can have a Java-only solution, but later we'll
> need to run .NET Tasks on Executors as well.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
