flink-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (FLINK-6020) Blob Server cannot handle multiple job submissions (with same content) in parallel
Date Fri, 17 Mar 2017 17:10:41 GMT

    [ https://issues.apache.org/jira/browse/FLINK-6020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15930311#comment-15930311 ]

ASF GitHub Bot commented on FLINK-6020:

Github user WangTaoTheTonic commented on the issue:

    The second rename will not fail, but it corrupts the file written by the first one,
which makes the first job fail if a task is loading that jar.
    By the way, the jar file is also uploaded to HDFS for recovery, and that upload will
fail as well if two or more clients write a file with the same name.
    It is easy to reproduce. First launch a session with enough slots, then run a script
containing many submissions of the same job, say 20 lines of "flink run ../examples/streaming/WindowJoin.jar
&". Make sure there is a "&" at the end of each line so they run in parallel.

> Blob Server cannot handle multiple job submissions (with same content) in parallel
> ---------------------------------------------------------------------------
>                 Key: FLINK-6020
>                 URL: https://issues.apache.org/jira/browse/FLINK-6020
>             Project: Flink
>          Issue Type: Bug
>            Reporter: Tao Wang
>            Assignee: Tao Wang
>            Priority: Critical
> In yarn-cluster mode, if we submit the same job multiple times in parallel, the tasks
will encounter class-loading problems and lease conflicts.
> Because the blob server stores user jars under names derived from the generated SHA-1 sums
of their contents, it first writes a temp file and then moves it to the final name. For recovery
it also puts them on HDFS under the same file name.
> At the same time, when multiple clients submit the same job with the same jar, the local jar
files in the blob server and the files on HDFS are handled by multiple threads (BlobServerConnection),
which interfere with each other.
> It would be better to have a way to handle this; two ideas come to mind:
> 1. lock the write operation, or
> 2. use some unique identifier as the file name instead of (or in addition to) the SHA-1 sum of
the file contents.
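> A minimal sketch of idea 1 (serializing writes per content hash so concurrent uploads of the
same jar cannot corrupt the finalized file), combined with the uniquely named temp file from
idea 2. The class and method names here are illustrative, not Flink's actual BlobServer code:

```java
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Sketch only: one lock per content hash, plus a unique temp-file name,
// so two clients uploading an identical jar cannot race on the same
// temp file or overwrite a finalized blob mid-read.
class BlobStoreSketch {
    private final Path storageDir;
    private final ConcurrentHashMap<String, ReentrantLock> locks = new ConcurrentHashMap<>();

    BlobStoreSketch(Path storageDir) {
        this.storageDir = storageDir;
    }

    Path put(String sha1, byte[] content) throws IOException {
        Path target = storageDir.resolve(sha1);
        ReentrantLock lock = locks.computeIfAbsent(sha1, k -> new ReentrantLock());
        lock.lock();
        try {
            if (Files.exists(target)) {
                // Identical content was already finalized; nothing to do.
                return target;
            }
            // Write to a uniquely named temp file so parallel writers
            // never share an in-progress file, then move it into place.
            Path tmp = storageDir.resolve(sha1 + ".tmp-" + UUID.randomUUID());
            Files.write(tmp, content);
            try {
                Files.move(tmp, target, StandardCopyOption.ATOMIC_MOVE);
            } catch (FileAlreadyExistsException e) {
                // Another writer finished first; the content is identical,
                // so just discard our temp file.
                Files.deleteIfExists(tmp);
            }
            return target;
        } finally {
            lock.unlock();
        }
    }
}
```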

This message was sent by Atlassian JIRA
