flink-user mailing list archives

From Shannon Carey <sca...@expedia.com>
Subject Re: Connecting workflows in batch
Date Wed, 08 Mar 2017 23:47:15 GMT
It may not return for batch jobs, either. See my post http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Job-completion-or-failure-callback-td12123.html

In short, if Flink returned an OptimizerPlanEnvironment from your call to ExecutionEnvironment.getExecutionEnvironment(),
then when you call execute() it only generates the job plan (the job hasn't been submitted and isn't
executing yet). If no exceptions are thrown while the job plan is created, a ProgramAbortException
is always thrown, so none of your code after execute() runs. As a result, you definitely
can't use the JobExecutionResult in your main method, even though the code makes it look like you can.
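For illustration, a minimal sketch of the pattern this describes (the job name, pipeline, and output path below are made up); when the environment is really an OptimizerPlanEnvironment, the lines after execute() are never reached:

import org.apache.flink.api.common.JobExecutionResult;
import org.apache.flink.api.java.ExecutionEnvironment;

public class MyBatchJob {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        env.fromElements(1, 2, 3)
           .writeAsText("/tmp/my-batch-output");  // placeholder sink and path

        // If env is actually an OptimizerPlanEnvironment (plan extraction only),
        // execute() throws ProgramAbortException and nothing below this line runs.
        JobExecutionResult result = env.execute("my-batch-job");
        System.out.println("Net runtime: " + result.getNetRuntime() + " ms");
    }
}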

-Shannon

From: Aljoscha Krettek <aljoscha@apache.org>
Date: Friday, March 3, 2017 at 9:36 AM
To: <user@flink.apache.org>
Subject: Re: Connecting workflows in batch

Yes, right now that call never returns for a long-running streaming job. We will (in the future)
provide a way for that call to return so that the result can be used for checking aggregators
and other things.


On Thu, Mar 2, 2017, at 19:14, Mohit Anchlia wrote:
Does it mean that for streaming jobs it never returns?

On Thu, Mar 2, 2017 at 6:21 AM, Till Rohrmann <trohrmann@apache.org> wrote:

Hi Mohit,

StreamExecutionEnvironment.execute() will only return, giving you the JobExecutionResult, after
the job has reached a final state. If that works for you as a way to schedule the second job, then
it should be fine to combine both jobs in one program and execute the second job after the first
one has completed.
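A rough sketch of that approach (the input/output paths and job names are placeholders), using the batch ExecutionEnvironment:

import org.apache.flink.api.common.JobExecutionResult;
import org.apache.flink.api.java.ExecutionEnvironment;

public class ChainedJobs {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // First job: define the dataflow, then block until it finishes.
        env.readTextFile("hdfs:///input/stage1")
           .writeAsText("hdfs:///output/stage1");
        JobExecutionResult first = env.execute("stage-1");
        System.out.println("First job took " + first.getNetRuntime() + " ms");

        // execute() only returns once the first job has reached a final state,
        // so it is safe to kick off the second job here.
        env.readTextFile("hdfs:///output/stage1")
           .writeAsText("hdfs:///output/stage2");
        env.execute("stage-2");
    }
}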

Cheers,
Till


On Thu, Mar 2, 2017 at 2:33 AM, Mohit Anchlia <mohitanchlia@gmail.com> wrote:
It looks like JobExecutionResult could be used here via its accumulators?
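As a rough sketch of that idea (the accumulator name, pipeline, and output path below are made up):

import org.apache.flink.api.common.JobExecutionResult;
import org.apache.flink.api.common.accumulators.LongCounter;
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.configuration.Configuration;

public class AccumulatorExample {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        env.fromElements("a", "b", "c")
           .map(new RichMapFunction<String, String>() {
               private final LongCounter counter = new LongCounter();

               @Override
               public void open(Configuration parameters) {
                   getRuntimeContext().addAccumulator("processed-records", counter);
               }

               @Override
               public String map(String value) {
                   counter.add(1L);
                   return value;
               }
           })
           .writeAsText("/tmp/accumulator-example");  // placeholder sink

        // execute() blocks until the batch job finishes and returns the result,
        // from which accumulator values can be read.
        JobExecutionResult result = env.execute("accumulator-example");
        Long processed = result.getAccumulatorResult("processed-records");
        System.out.println("Processed " + processed + " records");
    }
}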

On Wed, Mar 1, 2017 at 8:37 AM, Aljoscha Krettek <aljoscha@apache.org> wrote:
I think right now the best option is the JobManager REST interface: https://ci.apache.org/projects/flink/flink-docs-release-1.3/monitoring/rest_api.html

You would have to know the ID of your job, and then you can poll the status of that running
job.
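A rough sketch of such polling (the JobManager host, port, and JSON handling below are assumptions; the exact response fields depend on the Flink version, but the job details include its state, e.g. RUNNING or FINISHED):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class JobStatusPoller {
    public static void main(String[] args) throws Exception {
        String jobId = args[0];  // the job ID, e.g. taken from the web UI or the submission client's output
        URL url = new URL("http://jobmanager-host:8081/jobs/" + jobId);

        while (true) {
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            StringBuilder body = new StringBuilder();
            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    body.append(line);
                }
            }
            // Proper JSON parsing is preferable; checking the raw string keeps the sketch short.
            if (body.toString().contains("FINISHED")) {
                System.out.println("Job finished; trigger the next workflow here.");
                break;
            }
            Thread.sleep(5_000);
        }
    }
}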

On Mon, 27 Feb 2017 at 18:15 Mohit Anchlia <mohitanchlia@gmail.com> wrote:
What's the best way to track the progress of the job?

On Mon, Feb 27, 2017 at 7:56 AM, Aljoscha Krettek <aljoscha@apache.org> wrote:
Hi Mohit,
I'm afraid there is nothing like this in Flink yet. As you mentioned, you probably have to
manually track the completion of one job and then trigger execution of the next one.

Best,
Aljoscha

On Fri, 24 Feb 2017 at 19:16 Mohit Anchlia <mohitanchlia@gmail.com> wrote:
Is there a way to connect two workflows such that one triggers the other when a certain condition
is met? A workaround might be to insert a notification into a topic that triggers the other
workflow. The problem is that addSink ends the flow, so if we need to add a trigger after
addSink there doesn't seem to be any good way of sending a notification to a queue that the
batch processing is complete. Any suggestions? One option could be to track the progress of a
job and, on successful completion, send a notification. Is there such a mechanism available?
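A hedged sketch of that last option for a batch job driven from a main method: since execute() only returns after the job has completed, a notification can be sent right after it (sendNotification, the topic name, and the paths below are hypothetical placeholders):

import org.apache.flink.api.common.JobExecutionResult;
import org.apache.flink.api.java.ExecutionEnvironment;

public class NotifyingBatchJob {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        env.readTextFile("hdfs:///input/data")
           .writeAsText("hdfs:///output/data");  // the sink ends the dataflow definition

        // Blocks until the batch job has completed (or throws on failure).
        JobExecutionResult result = env.execute("batch-with-notification");

        // Hypothetical helper: publish a message to a queue/topic so the
        // downstream workflow can start. Wire up Kafka, SQS, etc. here.
        sendNotification("batch-complete", "job finished in " + result.getNetRuntime() + " ms");
    }

    private static void sendNotification(String topic, String message) {
        // Placeholder implementation.
        System.out.println("[" + topic + "] " + message);
    }
}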
