From: Meisam Fathi <meisam.fathi@gmail.com>
Date: Wed, 21 Mar 2018 00:01:53 +0000
Subject: Re: What happens if Livy server crashes? All the spark jobs are gone?
To: user@livy.incubator.apache.org

If you are running in cluster mode, the application should keep running on YARN.

On Tue, Mar 20, 2018 at 3:34 PM kant kodali wrote:

> @Meisam Fathi I am running with YARN and ZooKeeper as a state store. I
> spawned a job via Livy that reads from Kafka and writes to Kafka,
> but the moment I kill the Livy server the job is also killed. Not sure
> why? I believe once the Livy server crashes the Spark context also gets
> killed, so do I need to set livy.spark.deploy.mode? If so, what value
> should I set it to?
>
> On Mon, Mar 12, 2018 at 12:30 PM, Meisam Fathi wrote:
>
>> On YARN, your application keeps running even if the launcher fails. So
>> after recovery, Livy reconnects to the application. On Spark standalone, I
>> am not sure what happens to the application if the launcher fails.
>>
>> Thanks,
>> Meisam
>>
>> On Mon, Mar 12, 2018 at 10:34 AM kant kodali wrote:
>>
>>> Can someone please explain how YARN helps here? And why not the Spark
>>> master?
>>>
>>> On Mon, Mar 12, 2018 at 3:41 AM, Matteo Durighetto <
>>> m.durighetto@miriade.it> wrote:
>>>
>>>> 2018-03-12 9:58 GMT+01:00 kant kodali:
>>>>
>>>>> Sorry, I see there is a recovery mode and I can also set the state
>>>>> store to ZooKeeper, but it looks like I need YARN, because I get the
>>>>> error message below:
>>>>>
>>>>> "requirement failed: Session recovery requires YARN"
>>>>>
>>>>> I am using Spark standalone and I don't use YARN anywhere in my
>>>>> cluster. Is there any other option for recovery in this case?
>>>>>
>>>>> On Sun, Mar 11, 2018 at 11:57 AM, kant kodali wrote:
>>>>>
>>>>>> Hi All,
>>>>>>
>>>>>> When my Livy server crashes it looks like all my Spark jobs are gone.
>>>>>> I am trying to see how I can make it more resilient. In other words,
>>>>>> I would like Spark jobs that were spawned by Livy to keep running
>>>>>> even if my Livy server crashes, because in theory the Livy server can
>>>>>> crash at any time and Spark jobs should run for weeks or months in my
>>>>>> case. How can I achieve this?
>>>>>>
>>>>>> Thanks!
>>>>>
>>>> Hello,
>>>> to enable recovery in Livy you need Spark on YARN
>>>>
>>>> ( https://spark.apache.org/docs/latest/running-on-yarn.html )
>>>>
>>>> Kind Regards
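Pulling the thread's advice together, a minimal `livy.conf` sketch of the setup being discussed might look like the following. The key names follow Livy's `livy.conf.template`; the ZooKeeper host names are placeholders, and this is an illustrative fragment, not a complete or verified configuration.

```
# Run Spark applications on YARN in cluster mode, so the driver runs
# inside the cluster rather than in the Livy server process. This is why
# the application can outlive a Livy server crash.
livy.spark.master = yarn
livy.spark.deploy-mode = cluster

# Enable session recovery and persist session state in ZooKeeper, so a
# restarted Livy server can reconnect to still-running YARN applications.
livy.server.recovery.mode = recovery
livy.server.recovery.state-store = zookeeper
# Placeholder ZooKeeper quorum; substitute your own hosts.
livy.server.recovery.state-store.url = zk1:2181,zk2:2181,zk3:2181
```

As the thread notes, recovery requires YARN: on Spark standalone the "requirement failed: Session recovery requires YARN" error is raised, and there is no equivalent recovery path.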