From: Gary Yao <gary@data-artisans.com>
Date: Tue, 4 Sep 2018 07:22:06 +0200
Subject: Re: Flink on Yarn, restart job will not destroy original task manager
To: "James (Jian Wu) [FDS Data Platform]" <james.wu@coupang.com>
Cc: "user@flink.apache.org" <user@flink.apache.org>

Hi James,

Local recovery is disabled by default. You do not need to configure anything
in addition.

Did you run into problems again, or does it work now? If you are still
experiencing the task spread-out, can you configure logging on DEBUG level
and share the jobmanager logs with us?

Best,
Gary
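A minimal sketch of one way to capture those DEBUG logs for a per-job YARN
submission, assuming the default Flink distribution layout and that the
client's conf/log4j.properties is shipped to the containers (the Flink path
and the application id below are placeholders):

# Raise the Flink root logger from INFO to DEBUG in the client's conf directory;
# the YARN client ships this file to the JobManager/TaskManager containers.
sed -i 's/^log4j.rootLogger=INFO/log4j.rootLogger=DEBUG/' /path/to/flink/conf/log4j.properties

# Resubmit the job, then pull the aggregated container logs (JobManager included)
# from YARN once the problem has reproduced.
yarn logs -applicationId <applicationId> > flink-jobmanager-debug.log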
On Tue, Sep 4, 2018 at 5:42 AM, James (Jian Wu) [FDS Data Platform]
<james.wu@coupang.com> wrote:

> Hi Gary:
>
> From the 1.5/1.6 documentation:
>
> Configuring task-local recovery
>
> Task-local recovery is *deactivated by default* and can be activated
> through Flink's configuration with the key state.backend.local-recovery as
> specified in CheckpointingOptions.LOCAL_RECOVERY. The value for this
> setting can either be *true* to enable or *false* (default) to disable
> local recovery.
>
> By default, local recovery is deactivated. In 1.5.0, I have not enabled
> local recovery.
>
> So do I need to manually disable local recovery via flink-conf.yaml?
>
> Regards
>
> James
>
> From: "James (Jian Wu) [FDS Data Platform]" <james.wu@coupang.com>
> Date: Monday, September 3, 2018 at 4:13 PM
> To: Gary Yao <gary@data-artisans.com>
> Cc: "user@flink.apache.org" <user@flink.apache.org>
> Subject: Re: Flink on Yarn, restart job will not destroy original task manager
>
> My Flink version is 1.5.0; I will rebuild with the newer Flink version.
>
> Regards
>
> James
>
> From: Gary Yao <gary@data-artisans.com>
> Date: Monday, September 3, 2018 at 3:57 PM
> To: "James (Jian Wu) [FDS Data Platform]" <james.wu@coupang.com>
> Cc: "user@flink.apache.org" <user@flink.apache.org>
> Subject: Re: Flink on Yarn, restart job will not destroy original task manager
>
> Hi James,
>
> What version of Flink are you running? In 1.5.0, tasks can spread out due
> to changes that were introduced to support "local recovery" [1]. There is
> a mitigation in 1.5.1 that prevents the task spread-out, but local recovery
> must be disabled [2].
>
> Best,
> Gary
>
> [1] https://issues.apache.org/jira/browse/FLINK-9635
> [2] https://issues.apache.org/jira/browse/FLINK-9634
>
> On Mon, Sep 3, 2018 at 9:20 AM, James (Jian Wu) [FDS Data Platform]
> <james.wu@coupang.com> wrote:
>
> Hi:
>
> I launch a Flink application on YARN with 5 task managers, each with 3
> slots, using the following script:
>
> #!/bin/sh
> CLASSNAME=$1
> JARNAME=$2
> ARGUMENTS=$3
>
> export JVM_ARGS="${JVM_ARGS} -Dmill.env.active=aws"
>
> /usr/bin/flink run -m yarn-cluster --parallelism 15 -yn 5 -ys 3 -yjm 8192 \
>   -ytm 8192 -ynm flink-order-detection \
>   -yD env.java.opts.jobmanager='-Dmill.env.active=aws' \
>   -yD env.java.opts.taskmanager='-Dmill.env.active=aws' \
>   -c $CLASSNAME $JARNAME $ARGUMENTS
>
> The original Flink app occupied 5 containers and 15 vcores and ran for 3+
> days; then one of the task managers was killed by YARN because of a memory
> leak, and the job manager started new task managers. Currently my Flink app
> is running normally on YARN, but it occupies 10 containers and 28 vcores.
> (The ApplicationMaster shows my Flink job running for 75 hours; clicking
> into the running job in the Flink web UI shows it running for 28 hours
> because of the restart.)
>
> In my opinion, the job manager should restart the failed task manager and
> the app should still end up using 5 containers and 15 vcores, so why does
> the job occupy double the resources after the restart on YARN?
>
> Can anyone give me some suggestions?
>
> Regards
>
> James
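Relating to the launch script quoted above: a hedged sketch of the same
submission with local recovery pinned off explicitly after moving to 1.5.1.
This is not strictly required, since false is already the default; it assumes
that -yD dynamic properties are merged into the cluster configuration, so the
key from the documentation excerpt can be passed on the command line:

# Same submission as before, with task-local recovery explicitly disabled,
# which is the precondition for the 1.5.1 spread-out mitigation [2].
/usr/bin/flink run -m yarn-cluster --parallelism 15 -yn 5 -ys 3 -yjm 8192 -ytm 8192 \
  -ynm flink-order-detection \
  -yD state.backend.local-recovery=false \
  -yD env.java.opts.jobmanager='-Dmill.env.active=aws' \
  -yD env.java.opts.taskmanager='-Dmill.env.active=aws' \
  -c $CLASSNAME $JARNAME $ARGUMENTS

To verify how many TaskManager containers the application is actually holding
after a restart, the YARN CLI can also be consulted, e.g.
yarn container -list <ApplicationAttemptId>.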