From: 郭士伟
Date: Sun, 6 Dec 2015 15:14:53 +0800
To: user@hadoop.apache.org
Subject: Re: YARN timelineserver process taking 600% CPU

It seems that it's the large leveldb size that causes the problem. What is the value of the 'yarn.timeline-service.ttl-ms' config? Maybe it's not short enough, so we have too many entities in the timeline store.
And by the way, it can take a long time (hours) when the ATS discards old entities, and that also blocks the other operations. The patch https://issues.apache.org/jira/browse/YARN-3448 is a great performance improvement. We just backported it and it works well.
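For reference, the age-off knobs live in yarn-site.xml and look roughly like this; the values below are only an illustration (the ttl default is 7 days), not a recommendation for your cluster:

<!-- Illustrative yarn-site.xml snippet; tune the values for your own cluster. -->
<property>
  <!-- How long entities are kept in the timeline store, in ms (default 604800000 = 7 days). -->
  <name>yarn.timeline-service.ttl-ms</name>
  <value>259200000</value>
</property>
<property>
  <!-- Whether age-off of old entities is enabled at all (default true). -->
  <name>yarn.timeline-service.ttl-enable</name>
  <value>true</value>
</property>
<property>
  <!-- How often the leveldb store runs its discard-old-entities cycle, in ms (default 300000). -->
  <name>yarn.timeline-service.leveldb-timeline-store.ttl-interval-ms</name>
  <value>300000</value>
</property>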

2015-11-06 13:07 GMT+08:00 Naganarasimha G R (Naga) <garlanaganarasimha@huawei.com>:

Hi Krzysiek,

*There are currently 8 Spark Streaming jobs constantly running, 3 of them with a 1-second batch interval and 5 with 10-second batches. I believe these are the jobs that publish to ATS. How could I check what precisely is doing what, or how to get some logs about it? I don't know...*

I am not sure about the applications being run, and since you have already tried disabling the Spark History Server (the "Spark History Server doing the puts to ATS" theory), it is probably the apps that are sending the data out. AFAIK the Spark History Server has not integrated with ATS anyway (SPARK-1537), so most probably it is the applications themselves that are pumping in the data; I think you need to check with them.


*2. Is 8 concurrent Spark Streaming jobs really that high for the Timelineserver? I have just a small cluster; how are other, larger companies handling much larger loads?*

It has not been used at large scale by us, but YARN-2556 (ATS Performance Test Tool) states that "On a 36 node cluster, this results in ~830 concurrent containers (e.g. maps), each firing 10KB of payload, 20 times." The only thing that is different in your case is that the data in your store is already overloaded, hence the cost of querying (which currently happens during each insertion) is very high.

Maybe people from other companies who have used or supported ATS v1 can speak to its scale better!


Regards,

+ Naga


From: Krzysztof Zarzycki [k.zarzycki@gmail.com]
Sent: Thursday, November 05, 2015 19:51
To: user@hadoop.apache.org
Subject: Re: YARN timelineserver process taking 600% CPU

Thanks Naga for your input (I'm sorry for a late response, I was out for some time).

So you believe that Spark is actually doing the PUTs? There are currently 8 Spark Streaming jobs constantly running, 3 of them with a 1-second batch interval and 5 with 10-second batches. I believe these are the jobs that publish to ATS. How could I check what precisely is doing what, or how to get some logs about it? I don't know...
I thought maybe it was the Spark History Server doing the puts, but it seems it is not, as I disabled it and the load hasn't gone down. So it seems it is indeed the jobs themselves.

Now I have the following problems:
1. The most important: how can I at least *work around* this issue? Maybe I can somehow disable Spark's usage of the YARN timelineserver (see the rough sketch after question 2)? What are the consequences? Is it only the history of finished Spark jobs that would not be saved? If so, that doesn't hurt that much. Probably this is a question for the Spark group...
2. Is 8 concurrent Spark Streaming jobs really that high for the Timelineserver? I have just a small cluster; how are other, larger companies handling much larger loads?
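(Regarding question 1: the knob I am thinking of trying, purely as an untested guess, is to pass the Hadoop property through Spark's own configuration so that only my jobs stop talking to ATS. The jar name below is just a placeholder for one of my streaming jobs.)

# Untested idea: spark.hadoop.* properties are copied into the Hadoop Configuration
# that Spark's YARN client uses, so this should keep it from creating a TimelineClient
# for this one job, without touching yarn-site.xml for the whole cluster.
spark-submit --master yarn-cluster \
  --conf spark.hadoop.yarn.timeline-service.enabled=false \
  my-streaming-job.jar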

Thanks for helping me with this!
Krzysiek

2015-10-05 20:45 GMT+02:00 Naganarasimha Garla <naganarasimha.gr@gmail.com>:
Hi Krzysiek,
Oops, my mistake, 3 GB does seem to be on the higher side.
And from the jstack it seems like there was no major activity other than puts: around 16 concurrent puts were happening, each of which tries to get the timeline entity first, hence hitting the native call.

From the logs it seems like a lot of ACL validations are happening, and from the URL it seems they are for PUT entities.
Approximately from 09:30:16 to 09:44:26 about 9213 checks happened, and if all of these are for puts then roughly 10 put calls/s are coming from the *Spark* side. This, I feel, is not the right usage of ATS; can you check what is being published from Spark to ATS at this high rate?
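One rough way to see what is actually landing in ATS is the timeline REST API, something along these lines (assuming the default webapp port 8188; fill in the host and an entity type taken from whatever shows up in your access logs):

# Peek at the most recent entities of a given type via the ATS v1 REST API.
curl -s "http://<ats-host>:8188/ws/v1/timeline/<entity-type>?limit=5"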

Besides, some improvements regarding the timeline metrics are available in trunk as part of YARN-3360, which could have been useful in analyzing your issue.

+ Naga


On Mon, Oct 5, 2015 at 1:19 PM, Krzysztof Zarzycki <k.zarzycki@gmail.com> wrote:
Hi Naga,
Sorry, but it's not 3 MB but 3 GB in leveldb-timeline-store (du shows numbers in kB). Does that seem reasonable as well?
New .sst files are generated in the leveldb-timeline-store directory each minute, and some are also being deleted; there are now 26850 files in it.
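In case it helps, this is roughly how I am counting and sizing them (same directory as in the du output in my earlier mail quoted below):

# Number of .sst files currently in the leveldb timeline store.
find /var/lib/hadoop/yarn/timeline/leveldb-timeline-store.ldb -name '*.sst' | wc -l
# Total size of the store.
du -sh /var/lib/hadoop/yarn/timeline/leveldb-timeline-store.ldb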

I started the timeline server today to gather logs and jstack; it was running for ~20 minutes. I attach a tar.bz2 archive with those logs.

Thank you for helping me debug this.
Krzysiek

2015-09-30 21:00 GMT+02:00 Naganarasimha Garla <naganarasimha.gr@gmail.com>:
Hi Krzysiek,
Seems like the size is around 3 MB, which seems to be fine.
Could you try enabling debug logging and sharing the ATS/AHS logs, and also, if possible, the jstack output for the AHS process?

+ Naga

On Wed, Sep 30, 2015 at 10:27 PM, Krzysztof Zarzycki <k.zarzycki@gmail.com> wrote:
Hi Naga,
I see the following size:
$ sudo du --max=1 /var/lib/hadoop/yarn/timeline
36      /var/lib/hadoop/yarn/timeline/timeline-state-store.ldb
3307772 /var/lib/hadoop/yarn/timeline/leveldb-timeline-store.ldb
3307812 /var/lib/hadoop/yarn/timeline

The timeline service has been restarted multiple times while I was looking into this issue, but it was installed about 2 months ago. Just a few applications (1? 2?) have been started since its last restart. The ResourceManager interface shows 261 entries.

As in the yarn-site.xml that I attached, the property you're asking for has the following value:
<property>
  <name>yarn.timeline-service.leveldb-timeline-store.ttl-interval-ms</name>
  <value>300000</value>
</property>

Ah, one more thing: when I looked with jstack to see what the process is doing, I saw threads spending time in NATIVE in the leveldbjni library. So I *think* it is related to the leveldb store.
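For completeness, this is more or less how I captured that (run as the yarn user; the PID lookup is approximate):

# Find the timeline server PID and list its busiest threads.
ATS_PID=$(pgrep -f ApplicationHistoryServer)
top -b -H -n 1 -p "$ATS_PID" | head -30
# Dump the stacks and look at the frames around the native leveldbjni calls.
jstack "$ATS_PID" > /tmp/ats-jstack.txt
grep -B 2 -A 10 -i leveldbjni /tmp/ats-jstack.txt | head -60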

Please ask if any more information is needed.
Any help is appreciated! Thanks
Krzysiek

2015-09-30 16:23 GMT+02:00 Naganarasimha G R (Naga) <garlanaganarasimha@huawei.com>:
Hi,

What's the size of the store files?
Since when has it been running? How many applications have been run since it was started?
What's the value of "yarn.timeline-service.leveldb-timeline-store.ttl-interval-ms"?

+ Naga

From: Krzysztof Zarzycki [k.zarzycki@gmail.com]
Sent: Wednesday, September 30, 2015 19:20
To: user@hadoop.apache.org
Subject: YARN timelineserver process taking 600% CPU

Hi there Hadoopers,
I have a serious issue with my installation of Hadoop & YARN in version 2.7.1 (HDP 2.3).
The timelineserver process (more precisely, the org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer class) takes over 600% of CPU, generating enormous load on my master node. I can't guess why it happens.

First, I ran the timelineserver using Java 8 and thought that this was the issue. But no, I have now started the timelineserver with Java 7 and the problem is still the same.

My cluster is tiny; it consists of:
- 2 HDFS nodes
- 2 HBase RegionServers
- 2 Kafkas
- 2 Spark nodes
- 8 Spark Streaming jobs, processing around 100 messages/second TOTAL.

I'll be very grateful for your help here. If you need any more info, please write.
I also attach yarn-site.xml, grepped for the options related to the timeline server.

And here is the timelineserver command line that I see from ps:
/usr/java/jdk1.7.0_79/bin/java -Dproc_timelineserver -Xmx1024m -Dhdp.version=2.3.0.0-2557
  -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn
  -Dhadoop.log.file=yarn-yarn-timelineserver-hd-master-a01.log -Dyarn.log.file=yarn-yarn-timelineserver-hd-master-a01.log
  -Dyarn.home.dir= -Dyarn.id.str=yarn -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA
  -Djava.library.path=:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native
  -Dyarn.policy.file=hadoop-policy.xml -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn
  -Dhadoop.log.file=yarn-yarn-timelineserver-hd-master-a01.log -Dyarn.log.file=yarn-yarn-timelineserver-hd-master-a01.log
  -Dyarn.home.dir=/usr/hdp/current/hadoop-yarn-timelineserver -Dhadoop.home.dir=/usr/hdp/2.3.0.0-2557/hadoop
  -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA
  -Djava.library.path=:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native
  -classpath /usr/hdp/current/hadoop-client/conf:/usr/hdp/current/hadoop-client/conf:/usr/hdp/current/hadoop-client/conf:/usr/hdp/2.3.0.0-2557/hadoop/lib/*:/usr/hdp/2.3.0.0-2557/hadoop/.//*:/usr/hdp/2.3.0.0-2557/hadoop-hdfs/./:/usr/hdp/2.3.0.0-2557/hadoop-hdfs/lib/*:/usr/hdp/2.3.0.0-2557/hadoop-hdfs/.//*:/usr/hdp/2.3.0.0-2557/hadoop-yarn/lib/*:/usr/hdp/2.3.0.0-2557/hadoop-yarn/.//*:/usr/hdp/2.3.0.0-2557/hadoop-mapreduce/lib/*:/usr/hdp/2.3.0.0-2557/hadoop-mapreduce/.//*:::/usr/share/java/mysql-connector-java.jar::/usr/share/java/mysql-connector-java.jar:/usr/hdp/current/hadoop-yarn-timelineserver/.//*:/usr/hdp/current/hadoop-yarn-timelineserver/lib/*:/usr/hdp/current/hadoop-client/conf/timelineserver-config/log4j.properties
  org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer


Thanks!
Krzysztof