beam-commits mailing list archives

From "ASF GitHub Bot (JIRA)" <>
Subject [jira] [Work logged] (BEAM-5110) Reconcile Flink JVM singleton management with deployment
Date Fri, 10 Aug 2018 13:08:00 GMT


ASF GitHub Bot logged work on BEAM-5110:

                Author: ASF GitHub Bot
            Created on: 10/Aug/18 13:07
            Start Date: 10/Aug/18 13:07
    Worklog Time Spent: 10m 
      Work Description: tweise commented on issue #6189: [BEAM-5110] Explicitly count the
references for BatchFlinkExecutableStageContext …
   See comment on

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:

Issue Time Tracking

    Worklog Id:     (was: 133574)
    Time Spent: 1h 50m  (was: 1h 40m)

> Reconcile Flink JVM singleton management with deployment
> --------------------------------------------------------
>                 Key: BEAM-5110
>                 URL:
>             Project: Beam
>          Issue Type: Bug
>          Components: runner-flink
>            Reporter: Ben Sidhom
>            Assignee: Ben Sidhom
>            Priority: Major
>          Time Spent: 1h 50m
>  Remaining Estimate: 0h
> [~angoenka] noticed through debugging that multiple instances of BatchFlinkExecutableStageContext.BatchFactory
are loaded for a given job when executing in standalone cluster mode. This context factory
is responsible for maintaining singleton state across a TaskManager (JVM) in order to share
SDK Environments across workers in a given job. The multiple-loading breaks singleton semantics
and results in an indeterminate number of Environments being created.
> It turns out that the [Flink classloading mechanism|]
is determined by deployment mode. Note that "user code" as referenced by this link is actually
the Flink job server jar; actual end-user code lives inside of the SDK Environment and is
uploaded separately.
> In order to maintain singletons without resorting to IPC (for example, using file locks
and/or additional gRPC servers), we need to force non-dynamic classloading. Non-dynamic
classloading happens, for example, when jobs are submitted to YARN for one-off deployments
via `flink run`. However, connecting to an existing (Flink standalone) deployment results
in dynamic classloading.
> We should investigate this behavior and either document (and attempt to enforce) deployment
modes that are consistent with our requirements, or (if possible) create a custom classloader
that enforces singleton loading.
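The fix referenced above (#6189) explicitly counts references to the shared context rather than relying on classloader identity for singleton semantics. A minimal sketch of that idea, with hypothetical names (`RefCountedContextFactory`, `acquire`, `release` are illustrative, not Beam's actual API): one context is kept per job within a JVM and destroyed only when the last consumer releases it.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of reference-counted, per-job singleton state in a JVM.
// Names are illustrative and do not match Beam's actual classes.
public final class RefCountedContextFactory {
  // One entry per job id; all access is guarded by the class lock.
  private static final Map<String, RefCountedContextFactory> CONTEXTS = new HashMap<>();

  private final String jobId;
  private int refCount;

  private RefCountedContextFactory(String jobId) {
    this.jobId = jobId;
    this.refCount = 0;
  }

  // Returns the shared per-job context, creating it on first acquisition.
  public static synchronized RefCountedContextFactory acquire(String jobId) {
    RefCountedContextFactory ctx =
        CONTEXTS.computeIfAbsent(jobId, RefCountedContextFactory::new);
    ctx.refCount++;
    return ctx;
  }

  // Releases one reference; the context is discarded when the count reaches zero.
  public static synchronized void release(String jobId) {
    RefCountedContextFactory ctx = CONTEXTS.get(jobId);
    if (ctx != null && --ctx.refCount == 0) {
      CONTEXTS.remove(jobId);
    }
  }

  // True while at least one holder still references the job's context.
  public static synchronized boolean isLive(String jobId) {
    return CONTEXTS.containsKey(jobId);
  }
}
```

Because the count lives in explicit shared state rather than in classloader-scoped statics, the lifetime of the context no longer depends on whether the deployment mode uses dynamic classloading.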

This message was sent by Atlassian JIRA
