ambari-user mailing list archives

From Vinod Kumar Vavilapalli <vino...@apache.org>
Subject Re: timeline service installed by ambari can't start
Date Tue, 04 Nov 2014 06:11:04 GMT
That is the right log file name. The timeline service is still named "historyserver" internally, which should
be fixed soon.

> /hadoop/yarn/timeline/leveldb-timeline-store.ldb/LOCK: Permission denied

This is the real issue. Is this configuration generated by Ambari? Either way, the configuration
property that controls this location is yarn.timeline-service.leveldb-timeline-store.path - point it
at a directory where the server has permission to create files.
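
As a quick sketch of that fix (assuming the Timeline Server runs as the 'yarn' user and the store
should stay under /hadoop/yarn/timeline, as in the directory listing quoted below), you could check
and repair the ownership of the leveldb directory on that node:

    # run as root on the Timeline Server host
    ls -al /hadoop/yarn/timeline                 # the leveldb-timeline-store.ldb subdirectory was likely created as root
    chown -R yarn:hadoop /hadoop/yarn/timeline   # hand the store back to the yarn user
    su - yarn -c 'touch /hadoop/yarn/timeline/.write_test && rm /hadoop/yarn/timeline/.write_test'

Alternatively, change yarn.timeline-service.leveldb-timeline-store.path in the YARN configs in Ambari
(which end up in yarn-site.xml) to another directory the yarn user can write to, then restart the
service. Running 'hadoop version' prints the same build string seen in the quoted STARTUP_MSG, which
also answers the question further down about which HDP build is installed.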

+Vinod

On Nov 3, 2014, at 2:18 AM, guxiaobo1982 <guxiaobo1982@qq.com> wrote:

> There are some error messages in the yarn-yarn-historyserver-lix1.bh.com.log file:
> 
> 
> 
> ************************************************************/
> 
> 2014-11-03 12:07:34,974 INFO  applicationhistoryservice.ApplicationHistoryServer (StringUtils.java:startupShutdownMessage(614)) - STARTUP_MSG:
> 
> /************************************************************
> 
> STARTUP_MSG: Starting ApplicationHistoryServer
> 
> STARTUP_MSG:   host = lix1.bh.com/192.168.100.3
> 
> STARTUP_MSG:   args = []
> 
> STARTUP_MSG:   version = 2.4.0.2.1.5.0-695
> 
> STARTUP_MSG:   classpath = /etc/hadoop/conf:/etc/hadoop/conf:/etc/hadoop/conf:/usr/lib/hadoop/lib/jersey-core-1.9.jar:/usr/lib/hadoop/lib/mockito-all-1.8.5.jar:
> 
> 
> 
> .......
> 
> 
> 
> STARTUP_MSG:   build = git@github.com:hortonworks/hadoop.git -r c11220208321e1835912fde828f1038eedb1afae; compiled by 'jenkins' on 2014-08-28T03:10Z
> 
> STARTUP_MSG:   java = 1.7.0_45
> 
> ************************************************************/
> 
> 2014-11-03 12:07:35,039 INFO  applicationhistoryservice.ApplicationHistoryServer (SignalLogger.java:register(91)) - registered UNIX signal handlers for [TERM, HUP, INT]
> 
> 2014-11-03 12:07:36,977 INFO  impl.MetricsConfig (MetricsConfig.java:loadFirst(111)) - loaded properties from hadoop-metrics2.properties
> 
> 2014-11-03 12:07:37,208 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:startTimer(355)) - Scheduled snapshot period at 60 second(s).
> 
> 2014-11-03 12:07:37,208 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:start(183)) - ApplicationHistoryServer metrics system started
> 
> 2014-11-03 12:07:37,230 INFO  applicationhistoryservice.ApplicationHistoryManagerImpl (ApplicationHistoryManagerImpl.java:serviceInit(61)) - ApplicationHistory Init
> 
> 2014-11-03 12:07:37,615 INFO  timeline.LeveldbTimelineStore (LeveldbTimelineStore.java:serviceInit(194)) - Using leveldb path /hadoop/yarn/timeline/leveldb-timeline-store.ldb
> 
> 2014-11-03 12:07:37,647 INFO  service.AbstractService (AbstractService.java:noteFailure(272)) - Service org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore failed in state INITED; cause: org.fusesource.leveldbjni.internal.NativeDB$DBException: IO error: /hadoop/yarn/timeline/leveldb-timeline-store.ldb/LOCK: Permission denied
> 
> org.fusesource.leveldbjni.internal.NativeDB$DBException: IO error: /hadoop/yarn/timeline/leveldb-timeline-store.ldb/LOCK: Permission denied
> 
> at org.fusesource.leveldbjni.internal.NativeDB.checkStatus(NativeDB.java:200)
> 
> at org.fusesource.leveldbjni.internal.NativeDB.open(NativeDB.java:218)
> 
> at org.fusesource.leveldbjni.JniDBFactory.open(JniDBFactory.java:168)
> 
> at org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore.serviceInit(LeveldbTimelineStore.java:195)
> 
> at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> 
> at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
> 
> at org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer.serviceInit(ApplicationHistoryServer.java:88)
> 
> at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> 
> at org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer.launchAppHistoryServer(ApplicationHistoryServer.java:145)
> 
> at org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer.main(ApplicationHistoryServer.java:155)
> 
> 2014-11-03 12:07:37,654 INFO  service.AbstractService (AbstractService.java:noteFailure(272)) - Service org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer failed in state INITED; cause: org.apache.hadoop.service.ServiceStateException: org.fusesource.leveldbjni.internal.NativeDB$DBException: IO error: /hadoop/yarn/timeline/leveldb-timeline-store.ldb/LOCK: Permission denied
> 
> org.apache.hadoop.service.ServiceStateException: org.fusesource.leveldbjni.internal.NativeDB$DBException: IO error: /hadoop/yarn/timeline/leveldb-timeline-store.ldb/LOCK: Permission denied
> 
> at org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:59)
> 
> at org.apache.hadoop.service.AbstractService.init(AbstractService.java:172)
> 
> at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
> 
> at org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer.serviceInit(ApplicationHistoryServer.java:88)
> 
> at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> 
> at org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer.launchAppHistoryServer(ApplicationHistoryServer.java:145)
> 
> at org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer.main(ApplicationHistoryServer.java:155)
> 
> Caused by: org.fusesource.leveldbjni.internal.NativeDB$DBException: IO error: /hadoop/yarn/timeline/leveldb-timeline-store.ldb/LOCK: Permission denied
> 
> at org.fusesource.leveldbjni.internal.NativeDB.checkStatus(NativeDB.java:200)
> 
> at org.fusesource.leveldbjni.internal.NativeDB.open(NativeDB.java:218)
> 
> at org.fusesource.leveldbjni.JniDBFactory.open(JniDBFactory.java:168)
> 
> at org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore.serviceInit(LeveldbTimelineStore.java:195)
> 
> at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> 
> ... 5 more
> 
> 2014-11-03 12:07:37,655 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(200)) - Stopping ApplicationHistoryServer metrics system...
> 
> 2014-11-03 12:07:37,656 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(206)) - ApplicationHistoryServer metrics system stopped.
> 
> 2014-11-03 12:07:37,656 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(583)) - ApplicationHistoryServer metrics system shutdown complete.
> 
> 2014-11-03 12:07:37,656 INFO  applicationhistoryservice.ApplicationHistoryManagerImpl (ApplicationHistoryManagerImpl.java:serviceStop(78)) - Stopping ApplicationHistory
> 
> 2014-11-03 12:07:37,657 FATAL applicationhistoryservice.ApplicationHistoryServer (ApplicationHistoryServer.java:launchAppHistoryServer(148)) - Error starting ApplicationHistoryServer
> 
> org.apache.hadoop.service.ServiceStateException: org.fusesource.leveldbjni.internal.NativeDB$DBException: IO error: /hadoop/yarn/timeline/leveldb-timeline-store.ldb/LOCK: Permission denied
> 
> at org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:59)
> 
> at org.apache.hadoop.service.AbstractService.init(AbstractService.java:172)
> 
> at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
> 
> at org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer.serviceInit(ApplicationHistoryServer.java:88)
> 
> at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> 
> at org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer.launchAppHistoryServer(ApplicationHistoryServer.java:145)
> 
> at org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer.main(ApplicationHistoryServer.java:155)
> 
> Caused by: org.fusesource.leveldbjni.internal.NativeDB$DBException: IO error: /hadoop/yarn/timeline/leveldb-timeline-store.ldb/LOCK: Permission denied
> 
> at org.fusesource.leveldbjni.internal.NativeDB.checkStatus(NativeDB.java:200)
> 
> at org.fusesource.leveldbjni.internal.NativeDB.open(NativeDB.java:218)
> 
> at org.fusesource.leveldbjni.JniDBFactory.open(JniDBFactory.java:168)
> 
> at org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore.serviceInit(LeveldbTimelineStore.java:195)
> 
> at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> 
> ... 5 more
> 
> 2014-11-03 12:07:37,664 INFO  util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with status -1
> 
> 2014-11-03 12:07:37,668 INFO  applicationhistoryservice.ApplicationHistoryServer (StringUtils.java:run(640)) - SHUTDOWN_MSG:
> 
> /************************************************************
> 
> SHUTDOWN_MSG: Shutting down ApplicationHistoryServer at lix1.bh.com/192.168.100.3
> 
> 
> 
> And I listed the ownership for the /hadoop path:
> 
> 
> 
> [root@lix1 hadoop]# pwd
> 
> /hadoop
> 
> [root@lix1 hadoop]# ls
> 
> falcon  hbase  hdfs  oozie  storm  yarn  zookeeper
> 
> [root@lix1 hadoop]# ls -al
> 
> total 36
> 
> drwxr-xr-x.  9 root      root   4096 Nov  3 11:58 .
> 
> dr-xr-xr-x. 28 root      root   4096 Nov  3 18:01 ..
> 
> drwxr-xr-x.  4 falcon    root   4096 Nov  3 11:54 falcon
> 
> drwxr-xr-x.  3 hbase     root   4096 Nov  3 10:34 hbase
> 
> drwxr-xr-x.  5 root      root   4096 Nov  3 11:58 hdfs
> 
> drwxr-xr-x.  3 root      root   4096 Nov  3 11:58 oozie
> 
> drwxr-xr-x.  5 storm     hadoop 4096 Nov  3 12:05 storm
> 
> drwxr-xr-x.  5 root      root   4096 Oct 14 18:34 yarn
> 
> drwxr-xr-x.  3 zookeeper hadoop 4096 Nov  3 11:51 zookeeper
> 
> [root@lix1 hadoop]# ls -al yarn/
> 
> total 20
> 
> drwxr-xr-x. 5 root root   4096 Oct 14 18:34 .
> 
> drwxr-xr-x. 9 root root   4096 Nov  3 11:58 ..
> 
> drwxr-xr-x. 6 yarn root   4096 Nov  3 12:58 local
> 
> drwxr-xr-x. 2 yarn root   4096 Nov  3 12:09 log
> 
> drwxr-xr-x. 3 yarn hadoop 4096 Oct 14 18:34 timeline
> 
> 
> [root@lix1 hadoop]# 
> 
> 
> 
> 
> 
> ------------------ Original ------------------
> From:  "Mingjiang Shi";<mshi@pivotal.io>;
> Send time: Monday, Nov 3, 2014 4:36 PM
> To: "user@ambari.apache.org"<user@ambari.apache.org>;
> Subject:  Re: timeline service installed by ambari can't start
> 
> Hi,
> Could you check the timeline server log located at /var/log/hadoop-yarn/yarn/yarn-yarn-timelineserver*.log to see what problem caused the failure?
> 
> On Mon, Nov 3, 2014 at 3:56 PM, guxiaobo1982 <guxiaobo1982@qq.com> wrote:
> The installed HDFS is of version:
> Version:	2.4.0.2.1.5.0-695, rc11220208321e1835912fde828f1038eedb1afae
> 
> 
> ------------------ Original ------------------
> From:  "guxiaobo1982";<guxiaobo1982@qq.com>;
> Send time: Monday, Nov 3, 2014 3:48 PM
> To: "user"<user@ambari.apache.org>;
> Subject:  timeline service installed by ambari can't start
> 
> Hi,
> 
> I used Ambari 1.6.1 to install an HDP 2.1 single-node deployment, but the timeline service can't start, with the following error:
> stderr:   /var/lib/ambari-agent/data/errors-96.txt
> 
> 2014-11-03 13:28:03,199 - Error while executing command 'restart':
> Traceback (most recent call last):
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 111, in execute
>     method(env)
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 212, in restart
>     self.start(env)
>   File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/YARN/package/scripts/application_timeline_server.py", line 42, in start
>     service('historyserver', action='start')
>   File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/YARN/package/scripts/service.py", line 51, in service
>     initial_wait=5
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
>     self.env.run()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 149, in run
>     self.run_action(resource, action)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 115, in run_action
>     provider_action()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 239, in action_run
>     raise ex
> Fail: Execution of 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-historyserver.pid >/dev/null 2>&1 && ps `cat /var/run/hadoop-yarn/yarn/yarn-yarn-historyserver.pid` >/dev/null 2>&1' returned 1.
> stdout:   /var/lib/ambari-agent/data/output-96.txt
> 
> 2014-11-03 13:27:56,524 - Execute['mkdir -p /tmp/HDP-artifacts/;     curl -kf -x "" --retry 10     http://ambari.bh.com:8080/resources//UnlimitedJCEPolicyJDK7.zip -o /tmp/HDP-artifacts//UnlimitedJCEPolicyJDK7.zip'] {'environment': ..., 'not_if': 'test -e /tmp/HDP-artifacts//UnlimitedJCEPolicyJDK7.zip', 'ignore_failures': True, 'path': ['/bin', '/usr/bin/']}
> 2014-11-03 13:27:56,543 - Skipping Execute['mkdir -p /tmp/HDP-artifacts/;     curl -kf -x "" --retry 10     http://ambari.bh.com:8080/resources//UnlimitedJCEPolicyJDK7.zip -o /tmp/HDP-artifacts//UnlimitedJCEPolicyJDK7.zip'] due to not_if
> 2014-11-03 13:27:56,618 - Directory['/etc/hadoop/conf.empty'] {'owner': 'root', 'group': 'root', 'recursive': True}
> 2014-11-03 13:27:56,620 - Link['/etc/hadoop/conf'] {'not_if': 'ls /etc/hadoop/conf', 'to': '/etc/hadoop/conf.empty'}
> 2014-11-03 13:27:56,634 - Skipping Link['/etc/hadoop/conf'] due to not_if
> 2014-11-03 13:27:56,644 - File['/etc/hadoop/conf/hadoop-env.sh'] {'content': Template('hadoop-env.sh.j2'), 'owner': 'hdfs'}
> 2014-11-03 13:27:56,646 - XmlConfig['core-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/etc/hadoop/conf', 'configurations': ...}
> 2014-11-03 13:27:56,650 - Generating config: /etc/hadoop/conf/core-site.xml
> 2014-11-03 13:27:56,650 - File['/etc/hadoop/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None}
> 2014-11-03 13:27:56,651 - Writing File['/etc/hadoop/conf/core-site.xml'] because contents don't match
> 2014-11-03 13:27:56,662 - Execute['/bin/echo 0 > /selinux/enforce'] {'only_if': 'test -f /selinux/enforce'}
> 2014-11-03 13:27:56,683 - Execute['mkdir -p /usr/lib/hadoop/lib/native/Linux-i386-32; ln -sf /usr/lib/libsnappy.so /usr/lib/hadoop/lib/native/Linux-i386-32/libsnappy.so'] {}
> 2014-11-03 13:27:56,698 - Execute['mkdir -p /usr/lib/hadoop/lib/native/Linux-amd64-64; ln -sf /usr/lib64/libsnappy.so /usr/lib/hadoop/lib/native/Linux-amd64-64/libsnappy.so'] {}
> 2014-11-03 13:27:56,709 - Directory['/var/log/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True}
> 2014-11-03 13:27:56,710 - Directory['/var/run/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True}
> 2014-11-03 13:27:56,710 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'recursive': True}
> 2014-11-03 13:27:56,714 - File['/etc/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
> 2014-11-03 13:27:56,716 - File['/etc/hadoop/conf/health_check'] {'content': Template('health_check-v2.j2'), 'owner': 'hdfs'}
> 2014-11-03 13:27:56,717 - File['/etc/hadoop/conf/log4j.properties'] {'content': '...', 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
> 2014-11-03 13:27:56,720 - File['/etc/hadoop/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs'}
> 2014-11-03 13:27:56,720 - File['/etc/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
> 2014-11-03 13:27:56,721 - File['/etc/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
> 2014-11-03 13:27:56,803 - Execute['export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop-yarn/sbin/yarn-daemon.sh --config /etc/hadoop/conf stop historyserver'] {'user': 'yarn'}
> 2014-11-03 13:27:56,924 - Execute['rm -f /var/run/hadoop-yarn/yarn/yarn-yarn-historyserver.pid'] {'user': 'yarn'}
> 2014-11-03 13:27:56,955 - Directory['/var/run/hadoop-yarn/yarn'] {'owner': 'yarn', 'group': 'hadoop', 'recursive': True}
> 2014-11-03 13:27:56,956 - Directory['/var/log/hadoop-yarn/yarn'] {'owner': 'yarn', 'group': 'hadoop', 'recursive': True}
> 2014-11-03 13:27:56,956 - Directory['/var/run/hadoop-mapreduce/mapred'] {'owner': 'mapred', 'group': 'hadoop', 'recursive': True}
> 2014-11-03 13:27:56,956 - Directory['/var/log/hadoop-mapreduce/mapred'] {'owner': 'mapred', 'group': 'hadoop', 'recursive': True}
> 2014-11-03 13:27:56,956 - Directory['/hadoop/yarn/local'] {'owner': 'yarn', 'ignore_failures': True, 'recursive': True}
> 2014-11-03 13:27:56,956 - Directory['/hadoop/yarn/log'] {'owner': 'yarn', 'ignore_failures': True, 'recursive': True}
> 2014-11-03 13:27:56,957 - Directory['/var/log/hadoop-yarn'] {'owner': 'yarn', 'ignore_failures': True, 'recursive': True}
> 2014-11-03 13:27:56,957 - XmlConfig['core-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644, 'conf_dir': '/etc/hadoop/conf', 'configurations': ...}
> 2014-11-03 13:27:56,963 - Generating config: /etc/hadoop/conf/core-site.xml
> 2014-11-03 13:27:56,963 - File['/etc/hadoop/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644}
> 2014-11-03 13:27:56,963 - XmlConfig['mapred-site.xml'] {'owner': 'yarn', 'group': 'hadoop', 'mode': 0644, 'conf_dir': '/etc/hadoop/conf', 'configurations': ...}
> 2014-11-03 13:27:56,966 - Generating config: /etc/hadoop/conf/mapred-site.xml
> 2014-11-03 13:27:56,966 - File['/etc/hadoop/conf/mapred-site.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644}
> 2014-11-03 13:27:56,967 - Writing File['/etc/hadoop/conf/mapred-site.xml'] because contents don't match
> 2014-11-03 13:27:56,967 - Changing owner for /etc/hadoop/conf/mapred-site.xml from 1022 to yarn
> 2014-11-03 13:27:56,967 - XmlConfig['yarn-site.xml'] {'owner': 'yarn', 'group': 'hadoop', 'mode': 0644, 'conf_dir': '/etc/hadoop/conf', 'configurations': ...}
> 2014-11-03 13:27:56,969 - Generating config: /etc/hadoop/conf/yarn-site.xml
> 2014-11-03 13:27:56,969 - File['/etc/hadoop/conf/yarn-site.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644}
> 2014-11-03 13:27:56,970 - Writing File['/etc/hadoop/conf/yarn-site.xml'] because contents don't match
> 2014-11-03 13:27:56,971 - XmlConfig['capacity-scheduler.xml'] {'owner': 'yarn', 'group': 'hadoop', 'mode': 0644, 'conf_dir': '/etc/hadoop/conf', 'configurations': ...}
> 2014-11-03 13:27:56,974 - Generating config: /etc/hadoop/conf/capacity-scheduler.xml
> 2014-11-03 13:27:56,974 - File['/etc/hadoop/conf/capacity-scheduler.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644}
> 2014-11-03 13:27:56,975 - Writing File['/etc/hadoop/conf/capacity-scheduler.xml'] because contents don't match
> 2014-11-03 13:27:56,975 - Changing owner for /etc/hadoop/conf/capacity-scheduler.xml from 1021 to yarn
> 2014-11-03 13:27:56,975 - Directory['/hadoop/yarn/timeline'] {'owner': 'yarn', 'group': 'hadoop', 'recursive': True}
> 2014-11-03 13:27:56,975 - File['/etc/hadoop/conf/yarn.exclude'] {'owner': 'yarn', 'group': 'hadoop'}
> 2014-11-03 13:27:56,977 - File['/etc/security/limits.d/yarn.conf'] {'content': Template('yarn.conf.j2'), 'mode': 0644}
> 2014-11-03 13:27:56,980 - File['/etc/security/limits.d/mapreduce.conf'] {'content': Template('mapreduce.conf.j2'), 'mode': 0644}
> 2014-11-03 13:27:56,982 - File['/etc/hadoop/conf/yarn-env.sh'] {'content': Template('yarn-env.sh.j2'), 'owner': 'yarn', 'group': 'hadoop', 'mode': 0755}
> 2014-11-03 13:27:56,984 - File['/etc/hadoop/conf/mapred-env.sh'] {'content': Template('mapred-env.sh.j2'), 'owner': 'hdfs'}
> 2014-11-03 13:27:56,985 - File['/etc/hadoop/conf/taskcontroller.cfg'] {'content': Template('taskcontroller.cfg.j2'), 'owner': 'hdfs'}
> 2014-11-03 13:27:56,986 - XmlConfig['mapred-site.xml'] {'owner': 'mapred', 'group': 'hadoop', 'conf_dir': '/etc/hadoop/conf', 'configurations': ...}
> 2014-11-03 13:27:56,988 - Generating config: /etc/hadoop/conf/mapred-site.xml
> 2014-11-03 13:27:56,988 - File['/etc/hadoop/conf/mapred-site.xml'] {'owner': 'mapred', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None}
> 2014-11-03 13:27:56,988 - Changing owner for /etc/hadoop/conf/mapred-site.xml from 1020 to mapred
> 2014-11-03 13:27:56,988 - XmlConfig['capacity-scheduler.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/etc/hadoop/conf', 'configurations': ...}
> 2014-11-03 13:27:56,991 - Generating config: /etc/hadoop/conf/capacity-scheduler.xml
> 2014-11-03 13:27:56,991 - File['/etc/hadoop/conf/capacity-scheduler.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None}
> 2014-11-03 13:27:56,992 - Changing owner for /etc/hadoop/conf/capacity-scheduler.xml from 1020 to hdfs
> 2014-11-03 13:27:56,992 - File['/etc/hadoop/conf/ssl-client.xml.example'] {'owner': 'mapred', 'group': 'hadoop'}
> 2014-11-03 13:27:56,992 - File['/etc/hadoop/conf/ssl-server.xml.example'] {'owner': 'mapred', 'group': 'hadoop'}
> 2014-11-03 13:27:56,993 - Execute['export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop-yarn/sbin/yarn-daemon.sh --config /etc/hadoop/conf start historyserver'] {'not_if': 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-historyserver.pid >/dev/null 2>&1 && ps `cat /var/run/hadoop-yarn/yarn/yarn-yarn-historyserver.pid` >/dev/null 2>&1', 'user': 'yarn'}
> 2014-11-03 13:27:58,089 - Execute['ls /var/run/hadoop-yarn/yarn/yarn-yarn-historyserver.pid >/dev/null 2>&1 && ps `cat /var/run/hadoop-yarn/yarn/yarn-yarn-historyserver.pid` >/dev/null 2>&1'] {'initial_wait': 5, 'not_if': 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-historyserver.pid >/dev/null 2>&1 && ps `cat /var/run/hadoop-yarn/yarn/yarn-yarn-historyserver.pid` >/dev/null 2>&1', 'user': 'yarn'}
> 2014-11-03 13:28:03,199 - Error while executing command 'restart':
> Traceback (most recent call last):
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 111, in execute
>     method(env)
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 212, in restart
>     self.start(env)
>   File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/YARN/package/scripts/application_timeline_server.py", line 42, in start
>     service('historyserver', action='start')
>   File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/YARN/package/scripts/service.py", line 51, in service
>     initial_wait=5
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
>     self.env.run()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 149, in run
>     self.run_action(resource, action)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 115, in run_action
>     provider_action()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 239, in action_run
>     raise ex
> Fail: Execution of 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-historyserver.pid >/dev/null 2>&1 && ps `cat /var/run/hadoop-yarn/yarn/yarn-yarn-historyserver.pid` >/dev/null 2>&1' returned 1.
> It seems this is a known issue according to http://docs.hortonworks.com/HDPDocuments/Ambari-1.6.1.0/bk_releasenotes_ambari_1.6.1/content/ch_relnotes-ambari-1.6.1.0-knownissues.html.
> 
> I checked my environment: yarn.timeline-service.store-class is set to its default value of
> org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore. Since I can't determine which version of HDP
> ambari-server has installed, I tried both org.apache.hadoop.yarn.server.applicationhistoryservice.timeline.LeveldbTimelineStore
> and org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore, but both failed with the same problem.
> Can you help with this? Another question: how can I determine which version of HDP is installed?
> 
> Thanks
> 
> 
> 
> 
> -- 
> Cheers
> -MJ

