From: Mingjiang Shi <mshi@pivotal.io>
To: user@ambari.apache.org
Date: Mon, 3 Nov 2014 16:36:18 +0800
Subject: Re: timeline service installed by ambari can't start

Hi,

Could you check the timeline server log, located at
/var/log/hadoop-yarn/yarn/yarn-yarn-timelineserver*.log, to see what
problem caused the failure?

On Mon, Nov 3, 2014 at 3:56 PM, guxiaobo1982 <guxiaobo1982@qq.com> wrote:

> The HDFS installed is version 2.4.0.2.1.5.0-695,
> rc11220208321e1835912fde828f1038eedb1afae
>
> ------------------ Original ------------------
> From: "guxiaobo1982" <guxiaobo1982@qq.com>
> Send time: Monday, Nov 3, 2014 3:48 PM
> To: "user" <user@ambari.apache.org>
> Subject: timeline service installed by ambari can't start
>
> Hi,
>
> I used Ambari 1.6.1 to install HDP 2.1 as a single-node deployment, but
> the timeline service can't start with the following error:
>
> stderr: /var/lib/ambari-agent/data/errors-96.txt
>
> 2014-11-03 13:28:03,199 - Error while executing command 'restart':
> Traceback (most recent call last):
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 111, in execute
>     method(env)
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 212, in restart
>     self.start(env)
>   File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/YARN/package/scripts/application_timeline_server.py", line 42, in start
>     service('historyserver', action='start')
>   File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/YARN/package/scripts/service.py", line 51, in service
>     initial_wait=5
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
>     self.env.run()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 149, in run
>     self.run_action(resource, action)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 115, in run_action
>     provider_action()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 239, in action_run
>     raise ex
> Fail: Execution of 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-historyserver.pid >/dev/null 2>&1 && ps `cat /var/run/hadoop-yarn/yarn/yarn-yarn-historyserver.pid` >/dev/null 2>&1' returned 1.
>
> stdout: /var/lib/ambari-agent/data/output-96.txt
>
> 2014-11-03 13:27:56,524 - Execute['mkdir -p /tmp/HDP-artifacts/; curl -kf -x "" --retry 10 http://ambari.bh.com:8080/resources//UnlimitedJCEPolicyJDK7.zip -o /tmp/HDP-artifacts//UnlimitedJCEPolicyJDK7.zip'] {'environment': ..., 'not_if': 'test -e /tmp/HDP-artifacts//UnlimitedJCEPolicyJDK7.zip', 'ignore_failures': True, 'path': ['/bin', '/usr/bin/']}
> 2014-11-03 13:27:56,543 - Skipping Execute['mkdir -p /tmp/HDP-artifacts/; curl -kf -x "" --retry 10 http://ambari.bh.com:8080/resources//UnlimitedJCEPolicyJDK7.zip -o /tmp/HDP-artifacts//UnlimitedJCEPolicyJDK7.zip'] due to not_if
> 2014-11-03 13:27:56,618 - Directory['/etc/hadoop/conf.empty'] {'owner': 'root', 'group': 'root', 'recursive': True}
> 2014-11-03 13:27:56,620 - Link['/etc/hadoop/conf'] {'not_if': 'ls /etc/hadoop/conf', 'to': '/etc/hadoop/conf.empty'}
> 2014-11-03 13:27:56,634 - Skipping Link['/etc/hadoop/conf'] due to not_if
> 2014-11-03 13:27:56,644 - File['/etc/hadoop/conf/hadoop-env.sh'] {'content': Template('hadoop-env.sh.j2'), 'owner': 'hdfs'}
> 2014-11-03 13:27:56,646 - XmlConfig['core-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/etc/hadoop/conf', 'configurations': ...}
> 2014-11-03 13:27:56,650 - Generating config: /etc/hadoop/conf/core-site.xml
> 2014-11-03 13:27:56,650 - File['/etc/hadoop/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None}
> 2014-11-03 13:27:56,651 - Writing File['/etc/hadoop/conf/core-site.xml'] because contents don't match
> 2014-11-03 13:27:56,662 - Execute['/bin/echo 0 > /selinux/enforce'] {'only_if': 'test -f /selinux/enforce'}
> 2014-11-03 13:27:56,683 - Execute['mkdir -p /usr/lib/hadoop/lib/native/Linux-i386-32; ln -sf /usr/lib/libsnappy.so /usr/lib/hadoop/lib/native/Linux-i386-32/libsnappy.so'] {}
> 2014-11-03 13:27:56,698 - Execute['mkdir -p /usr/lib/hadoop/lib/native/Linux-amd64-64; ln -sf /usr/lib64/libsnappy.so /usr/lib/hadoop/lib/native/Linux-amd64-64/libsnappy.so'] {}
> 2014-11-03 13:27:56,709 - Directory['/var/log/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True}
> 2014-11-03 13:27:56,710 - Directory['/var/run/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True}
> 2014-11-03 13:27:56,710 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'recursive': True}
> 2014-11-03 13:27:56,714 - File['/etc/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
> 2014-11-03 13:27:56,716 - File['/etc/hadoop/conf/health_check'] {'content': Template('health_check-v2.j2'), 'owner': 'hdfs'}
> 2014-11-03 13:27:56,717 - File['/etc/hadoop/conf/log4j.properties'] {'content': '...', 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
> 2014-11-03 13:27:56,720 - File['/etc/hadoop/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs'}
> 2014-11-03 13:27:56,720 - File['/etc/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
> 2014-11-03 13:27:56,721 - File['/etc/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
> 2014-11-03 13:27:56,803 - Execute['export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop-yarn/sbin/yarn-daemon.sh --config /etc/hadoop/conf stop historyserver'] {'user': 'yarn'}
> 2014-11-03 13:27:56,924 - Execute['rm -f /var/run/hadoop-yarn/yarn/yarn-yarn-historyserver.pid'] {'user': 'yarn'}
> 2014-11-03 13:27:56,955 - Directory['/var/run/hadoop-yarn/yarn'] {'owner': 'yarn', 'group': 'hadoop', 'recursive': True}
> 2014-11-03 13:27:56,956 - Directory['/var/log/hadoop-yarn/yarn'] {'owner': 'yarn', 'group': 'hadoop', 'recursive': True}
> 2014-11-03 13:27:56,956 - Directory['/var/run/hadoop-mapreduce/mapred'] {'owner': 'mapred', 'group': 'hadoop', 'recursive': True}
> 2014-11-03 13:27:56,956 - Directory['/var/log/hadoop-mapreduce/mapred'] {'owner': 'mapred', 'group': 'hadoop', 'recursive': True}
> 2014-11-03 13:27:56,956 - Directory['/hadoop/yarn/local'] {'owner': 'yarn', 'ignore_failures': True, 'recursive': True}
> 2014-11-03 13:27:56,956 - Directory['/hadoop/yarn/log'] {'owner': 'yarn', 'ignore_failures': True, 'recursive': True}
> 2014-11-03 13:27:56,957 - Directory['/var/log/hadoop-yarn'] {'owner': 'yarn', 'ignore_failures': True, 'recursive': True}
> 2014-11-03 13:27:56,957 - XmlConfig['core-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644, 'conf_dir': '/etc/hadoop/conf', 'configurations': ...}
> 2014-11-03 13:27:56,963 - Generating config: /etc/hadoop/conf/core-site.xml
> 2014-11-03 13:27:56,963 - File['/etc/hadoop/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644}
> 2014-11-03 13:27:56,963 - XmlConfig['mapred-site.xml'] {'owner': 'yarn', 'group': 'hadoop', 'mode': 0644, 'conf_dir': '/etc/hadoop/conf', 'configurations': ...}
> 2014-11-03 13:27:56,966 - Generating config: /etc/hadoop/conf/mapred-site.xml
> 2014-11-03 13:27:56,966 - File['/etc/hadoop/conf/mapred-site.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644}
> 2014-11-03 13:27:56,967 - Writing File['/etc/hadoop/conf/mapred-site.xml'] because contents don't match
> 2014-11-03 13:27:56,967 - Changing owner for /etc/hadoop/conf/mapred-site.xml from 1022 to yarn
> 2014-11-03 13:27:56,967 - XmlConfig['yarn-site.xml'] {'owner': 'yarn', 'group': 'hadoop', 'mode': 0644, 'conf_dir': '/etc/hadoop/conf', 'configurations': ...}
> 2014-11-03 13:27:56,969 - Generating config: /etc/hadoop/conf/yarn-site.xml
> 2014-11-03 13:27:56,969 - File['/etc/hadoop/conf/yarn-site.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644}
> 2014-11-03 13:27:56,970 - Writing File['/etc/hadoop/conf/yarn-site.xml'] because contents don't match
> 2014-11-03 13:27:56,971 - XmlConfig['capacity-scheduler.xml'] {'owner': 'yarn', 'group': 'hadoop', 'mode': 0644, 'conf_dir': '/etc/hadoop/conf', 'configurations': ...}
> 2014-11-03 13:27:56,974 - Generating config: /etc/hadoop/conf/capacity-scheduler.xml
> 2014-11-03 13:27:56,974 - File['/etc/hadoop/conf/capacity-scheduler.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644}
> 2014-11-03 13:27:56,975 - Writing File['/etc/hadoop/conf/capacity-scheduler.xml'] because contents don't match
> 2014-11-03 13:27:56,975 - Changing owner for /etc/hadoop/conf/capacity-scheduler.xml from 1021 to yarn
> 2014-11-03 13:27:56,975 - Directory['/hadoop/yarn/timeline'] {'owner': 'yarn', 'group': 'hadoop', 'recursive': True}
> 2014-11-03 13:27:56,975 - File['/etc/hadoop/conf/yarn.exclude'] {'owner': 'yarn', 'group': 'hadoop'}
> 2014-11-03 13:27:56,977 - File['/etc/security/limits.d/yarn.conf'] {'content': Template('yarn.conf.j2'), 'mode': 0644}
> 2014-11-03 13:27:56,980 - File['/etc/security/limits.d/mapreduce.conf'] {'content': Template('mapreduce.conf.j2'), 'mode': 0644}
> 2014-11-03 13:27:56,982 - File['/etc/hadoop/conf/yarn-env.sh'] {'content': Template('yarn-env.sh.j2'), 'owner': 'yarn', 'group': 'hadoop', 'mode': 0755}
> 2014-11-03 13:27:56,984 - File['/etc/hadoop/conf/mapred-env.sh'] {'content': Template('mapred-env.sh.j2'), 'owner': 'hdfs'}
> 2014-11-03 13:27:56,985 - File['/etc/hadoop/conf/taskcontroller.cfg'] {'content': Template('taskcontroller.cfg.j2'), 'owner': 'hdfs'}
> 2014-11-03 13:27:56,986 - XmlConfig['mapred-site.xml'] {'owner': 'mapred', 'group': 'hadoop', 'conf_dir': '/etc/hadoop/conf', 'configurations': ...}
> 2014-11-03 13:27:56,988 - Generating config: /etc/hadoop/conf/mapred-site.xml
> 2014-11-03 13:27:56,988 - File['/etc/hadoop/conf/mapred-site.xml'] {'owner': 'mapred', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None}
> 2014-11-03 13:27:56,988 - Changing owner for /etc/hadoop/conf/mapred-site.xml from 1020 to mapred
> 2014-11-03 13:27:56,988 - XmlConfig['capacity-scheduler.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/etc/hadoop/conf', 'configurations': ...}
> 2014-11-03 13:27:56,991 - Generating config: /etc/hadoop/conf/capacity-scheduler.xml
> 2014-11-03 13:27:56,991 - File['/etc/hadoop/conf/capacity-scheduler.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None}
> 2014-11-03 13:27:56,992 - Changing owner for /etc/hadoop/conf/capacity-scheduler.xml from 1020 to hdfs
> 2014-11-03 13:27:56,992 - File['/etc/hadoop/conf/ssl-client.xml.example'] {'owner': 'mapred', 'group': 'hadoop'}
> 2014-11-03 13:27:56,992 - File['/etc/hadoop/conf/ssl-server.xml.example'] {'owner': 'mapred', 'group': 'hadoop'}
> 2014-11-03 13:27:56,993 - Execute['export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop-yarn/sbin/yarn-daemon.sh --config /etc/hadoop/conf start historyserver'] {'not_if': 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-historyserver.pid >/dev/null 2>&1 && ps `cat /var/run/hadoop-yarn/yarn/yarn-yarn-historyserver.pid` >/dev/null 2>&1', 'user': 'yarn'}
> 2014-11-03 13:27:58,089 - Execute['ls /var/run/hadoop-yarn/yarn/yarn-yarn-historyserver.pid >/dev/null 2>&1 && ps `cat /var/run/hadoop-yarn/yarn/yarn-yarn-historyserver.pid` >/dev/null 2>&1'] {'initial_wait': 5, 'not_if': 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-historyserver.pid >/dev/null 2>&1 && ps `cat /var/run/hadoop-yarn/yarn/yarn-yarn-historyserver.pid` >/dev/null 2>&1', 'user': 'yarn'}
> 2014-11-03 13:28:03,199 - Error while executing command 'restart':
> Traceback (most recent call last):
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 111, in execute
>     method(env)
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 212, in restart
>     self.start(env)
>   File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/YARN/package/scripts/application_timeline_server.py", line 42, in start
>     service('historyserver', action='start')
>   File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/YARN/package/scripts/service.py", line 51, in service
>     initial_wait=5
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
>     self.env.run()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 149, in run
>     self.run_action(resource, action)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 115, in run_action
>     provider_action()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 239, in action_run
>     raise ex
> Fail: Execution of 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-historyserver.pid >/dev/null 2>&1 && ps `cat /var/run/hadoop-yarn/yarn/yarn-yarn-historyserver.pid` >/dev/null 2>&1' returned 1.
>
> It seems this is a known issue according to
> http://docs.hortonworks.com/HDPDocuments/Ambari-1.6.1.0/bk_releasenotes_ambari_1.6.1/content/ch_relnotes-ambari-1.6.1.0-knownissues.html
>
> I checked my environment: it is configured with the default value of
> org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore for
> yarn.timeline-service.store-class. I can't determine which version of HDP
> ambari-server has installed, so I tried
> org.apache.hadoop.yarn.server.applicationhistoryservice.timeline.LeveldbTimelineStore
> and org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore, but both
> failed with the same problem. Can you help with this? Another question:
> how can I determine which version of HDP is installed?
>
> Thanks

-- 
Cheers
-MJ
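[Editor's note] The property the known-issues page concerns lives in yarn-site.xml. The LeveldbTimelineStore class moved between Java packages across Hadoop releases, which is why two candidate class names appear in the thread; the value that works is the one matching the class actually present in the installed HDP jars. A sketch of the stanza (the value shown is the default the poster reported, not a recommendation):

```xml
<property>
  <name>yarn.timeline-service.store-class</name>
  <!-- Some builds instead ship
       org.apache.hadoop.yarn.server.applicationhistoryservice.timeline.LeveldbTimelineStore;
       the value must name a class present in the installed YARN jars. -->
  <value>org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore</value>
</property>
```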
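[Editor's note] The check that returned 1 is a pid-file liveness test: Ambari considers the historyserver started only if /var/run/hadoop-yarn/yarn/yarn-yarn-historyserver.pid exists and the PID recorded in it belongs to a running process. The daemon writing a pid file and then dying is why the reply above points at the daemon's own log. A minimal Python sketch of the same logic, exercised against a temporary pid file so it can run anywhere (`historyserver_is_up` is an illustrative name, not Ambari's API):

```python
# Sketch of Ambari's liveness check:
#   ls <pid_file> && ps `cat <pid_file>`
# i.e. the pid file must exist AND its PID must refer to a live process.
import os
import tempfile

def historyserver_is_up(pid_file: str) -> bool:
    """True iff pid_file exists and contains the PID of a running process."""
    if not os.path.exists(pid_file):
        return False
    try:
        pid = int(open(pid_file).read().strip())
        os.kill(pid, 0)  # signal 0: existence/permission probe, sends nothing
        return True
    except (ValueError, OSError):
        return False

# Demonstrate with a pid file pointing at a PID we know is alive: our own.
with tempfile.NamedTemporaryFile("w", suffix=".pid", delete=False) as f:
    f.write(str(os.getpid()))
    path = f.name

alive = historyserver_is_up(path)            # pid file + live process
missing = historyserver_is_up("/no/such.pid")  # no pid file at all
print(alive, missing)  # True False
os.unlink(path)
```

In the failing run above, the second condition is the one that fails: the start script launches the daemon, waits, and then finds the recorded PID no longer running, so the actual error is in the timeline server's own log rather than in the Ambari output.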
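[Editor's note] On the "which HDP version is installed" question: the HDFS build string quoted at the top of the thread already encodes it. Assuming the usual HDP build-string layout (three fields of Apache Hadoop version, then the stack version, then a build number after the dash), a small illustrative parser:

```python
def parse_hdp_version(build: str) -> dict:
    """Split an HDP-style build string, e.g. '2.4.0.2.1.5.0-695',
    into its Apache Hadoop, HDP stack, and build-number parts."""
    version, _, build_no = build.partition("-")
    parts = version.split(".")
    return {
        "hadoop": ".".join(parts[:3]),  # first three fields: Apache Hadoop
        "hdp": ".".join(parts[3:]),     # remaining fields: HDP stack version
        "build": build_no,
    }

info = parse_hdp_version("2.4.0.2.1.5.0-695")
print(info)  # {'hadoop': '2.4.0', 'hdp': '2.1.5.0', 'build': '695'}
```

So the quoted cluster is Apache Hadoop 2.4.0 packaged as HDP 2.1.5.0; the same string is what `hadoop version` reports on the node.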