flink-user mailing list archives

From Maximilian Michels <...@apache.org>
Subject Re: Remote upload and execute
Date Mon, 05 Sep 2016 10:37:19 GMT
The jar file always needs to be available, whether that is locally on
your machine or in the jar file directory of the web interface running
on the JobManager. Without the file we can't generate a JobGraph,
which is Flink's representation of a compiled program ready for
execution. Additionally, we ship the file to the cluster nodes upon
execution to satisfy runtime dependencies.

You're right that it is an unnecessary restriction that we enforce the
local file system here. However, there is a workaround:

new PackagedProgram(jarFile, Collections.singletonList(new URL("s3:///path/to/my.jar")));

This allows you to supply a URL dependency alongside your local file.
You still need to specify a file, but that one can be an empty jar or
some code to bootstrap your Flink program. We could think about
replacing the File argument with a URL as well and then distribute
jars which are only accessible locally.
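To make the shapes concrete, here is a minimal, hedged sketch of the workaround. The `PackagedProgram(File, List<URL>, String...)` constructor is Flink's client API; `bootstrap.jar` and the jar path are placeholders, and a file:// URL stands in for the remote location because plain `java.net.URL` cannot parse an s3 scheme without a custom protocol handler:

```java
import java.io.File;
import java.net.URL;
import java.util.Collections;
import java.util.List;

public class RemoteJarSketch {
    public static void main(String[] args) throws Exception {
        // Local bootstrap jar (can be nearly empty); PackagedProgram still
        // requires a File for its first argument.
        File bootstrapJar = new File("bootstrap.jar");

        // The dependency is passed as a java.net.URL. A file:// URL is used
        // here because java.net.URL has no stock handler for the s3 scheme;
        // in a real setup this would point at your remote jar.
        List<URL> classpaths =
                Collections.singletonList(new URL("file:///path/to/my.jar"));

        // Sketch of the Flink call (requires flink-clients on the classpath;
        // the constructor shape may differ between Flink versions):
        // PackagedProgram program =
        //         new PackagedProgram(bootstrapJar, classpaths, programArgs);

        System.out.println(classpaths.get(0).getProtocol()); // prints "file"
    }
}
```

The bootstrap jar only needs enough code to locate and invoke the real program; the heavy dependency travels as a classpath URL instead of being re-uploaded by the client.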

On Fri, Sep 2, 2016 at 2:38 PM, Paul Wilson <paulalexwilson@gmail.com> wrote:
> Hi,
> I'd like to write a client that can execute an already 'uploaded' JAR (i.e.
> the JAR is deployed and available by some other external process). This is
> similar to what the web console allows which consists of 2 steps: upload the
> JAR followed by a submit with parameters.
> I'm looking at the Flink client however ClusterClient appears to require a
> PackagedProgram or local access to the required JAR. However I do not want
> to have to re-upload the JAR each time (I don't even want the client to have
> access to the JAR).
> Is there some way to specify that the JAR is available on some filesystem
> (S3) location, have that cached in Flink more locally, and then trigger a
> parameterised execution of that from a client?
> Regards,
> Paul
