flink-user mailing list archives

From Robert Metzger <rmetz...@apache.org>
Subject Re: Apache Flink on Hadoop YARN using a YARN Session
Date Fri, 20 Nov 2015 12:34:03 GMT
Hi,
I'll fix the link in the YARN documentation. Thank you for reporting the
issue.

I'm not aware of any discussions or implementations related to
scheduling. From my experience working with users, and from the mailing
list, I don't think such features are very important: since streaming jobs
usually run permanently, there is no need to queue jobs.
For batch jobs, YARN takes care of the resource allocation (in practice
this means the job has to wait until the required resources are
available).

There are some discussions (and user requests) regarding resource
elasticity going on and I think we'll add features for dynamically changing
the size of a Flink cluster on YARN while a job is running.
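
For completeness, the two deployment modes discussed in this thread (a standing YARN session vs. a single job submitted directly to YARN) look roughly like this on the 0.10 command line. This is a sketch based on the 0.10 YARN docs; the container counts and memory sizes are illustrative, adjust them to your cluster:

```shell
# Start a long-running YARN session with 4 TaskManagers
# (-n containers, -jm/-tm memory in MB; values are illustrative)
./bin/yarn-session.sh -n 4 -jm 1024 -tm 4096

# Submit a program to the running session
./bin/flink run ./examples/WordCount.jar

# Or run a single job on YARN without a standing session
./bin/flink run -m yarn-cluster -yn 4 ./examples/WordCount.jar
```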

Which features are you missing with respect to scheduling in Flink? Please
let me know if anything is blocking you from using Flink in production and
we'll see what we can do.

Regards,
Robert



On Fri, Nov 20, 2015 at 1:24 PM, Ovidiu-Cristian MARCU <
ovidiu-cristian.marcu@inria.fr> wrote:

> Hi,
>
> The link to FAQ (
> https://ci.apache.org/projects/flink/flink-docs-release-0.10/faq.html) is
> on the yarn setup 0.10 documentation page (
> https://ci.apache.org/projects/flink/flink-docs-release-0.10/setup/yarn_setup.html)
> described in this sentence: *If you have troubles using the Flink YARN
> client, have a look in the FAQ section
> <https://ci.apache.org/projects/flink/flink-docs-release-0.10/faq.html>.*
>
> Are scheduling features being considered for the next releases?
>
> Thank you.
> Best regards,
> Ovidiu
>
> On 20 Nov 2015, at 11:59, Robert Metzger <rmetzger@apache.org> wrote:
>
> Hi Ovidiu,
>
> you can submit multiple programs to a running Flink cluster (or a YARN
> session). Flink currently does not have any queuing mechanism.
> The JobManager will reject a program if there are not enough free
> resources for it. If there are enough resources for multiple programs,
> they'll run concurrently.
> Note that Flink does not start separate JVMs for the programs, so if one
> program calls System.exit(0), it kills the entire JVM,
> including the other running programs.
>
> You can start as many YARN sessions (or single jobs to YARN) as you have
> resources available on the cluster. The resource allocation is up to the
> scheduler you've configured in YARN.
>
> In general, we recommend starting a YARN session per program. You can also
> directly submit a Flink program to YARN.
>
> Where did you find the link to the FAQ? The link on the front page is
> working: http://flink.apache.org/faq.html
>
>
>
> On Fri, Nov 20, 2015 at 11:41 AM, Ovidiu-Cristian MARCU <
> ovidiu-cristian.marcu@inria.fr> wrote:
>
>> Hi,
>>
>> I am currently interested in experimenting on Flink over Hadoop YARN.
>> I am documenting from the documentation we have here:
>> https://ci.apache.org/projects/flink/flink-docs-release-0.10/setup/yarn_setup.html
>>
>> There is a subsection *Start Flink Session* which states the following: *A
>> session will start all required Flink services (JobManager and
>> TaskManagers) so that you can submit programs to the cluster. Note that you
>> can run multiple programs per session.*
>>
>> Can you be more precise about the multiple programs per session? If I
>> submit multiple programs concurrently, what will happen (can I)? Will
>> they run in a FIFO fashion, or what should I expect?
>>
>> The internals section specifies that users can execute multiple Flink
>> YARN sessions in parallel. This is great; it invites static partitioning
>> of resources in order to run multiple applications concurrently. Do you
>> support a fair scheduler similar to what Spark claims to have?
>>
>> There is an FAQ section link (
>> https://ci.apache.org/projects/flink/flink-docs-release-0.10/faq.html)
>> that is broken; can this be updated?
>>
>> Thank you.
>>
>> Best regards,
>> Ovidiu
>>
>>
>
>
>
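A side note on the System.exit() point above: because the programs share one JVM, an exit call in user code tears down everything running there. A minimal, hypothetical Java sketch (not Flink code; names are mine) of how an embedding process could intercept such a call with a SecurityManager, which works on Java 8 but is deprecated from Java 17 onward:

```java
// Illustrative only: shows why System.exit() in user code is dangerous
// when several programs share one JVM, and how an embedding process can
// trap the call. This is NOT Flink's mechanism, just a demonstration.
public class ExitTrapDemo {

    static class NoExitSecurityManager extends SecurityManager {
        @Override
        public void checkPermission(java.security.Permission perm) {
            // permit everything else; only exit is restricted below
        }

        @Override
        public void checkExit(int status) {
            throw new SecurityException("System.exit(" + status + ") trapped");
        }
    }

    /** Runs the task; returns true if it attempted to call System.exit(). */
    static boolean runGuarded(Runnable task) {
        SecurityManager previous = System.getSecurityManager();
        System.setSecurityManager(new NoExitSecurityManager());
        try {
            task.run();
            return false;
        } catch (SecurityException e) {
            return true; // exit intercepted; co-located programs survive
        } finally {
            System.setSecurityManager(previous);
        }
    }

    public static void main(String[] args) {
        System.out.println("exit trapped: " + runGuarded(() -> System.exit(0)));
    }
}
```

Without such a guard, the exit call propagates straight to the shared runtime, which is exactly the failure mode described above.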
