Date: Wed, 9 Mar 2016 01:37:40 +0000 (UTC)
From: "Siddharth Wagle (JIRA)"
To: issues@ambari.apache.org
Reply-To: dev@ambari.apache.org
Subject: [jira] [Created] (AMBARI-15342) AMS Grafana start failed with permission denied error on changing user

Siddharth Wagle created AMBARI-15342:
----------------------------------------

             Summary: AMS Grafana start failed with permission denied error on changing user
                 Key: AMBARI-15342
                 URL: https://issues.apache.org/jira/browse/AMBARI-15342
             Project: Ambari
          Issue Type: Bug
          Components: ambari-metrics
    Affects Versions: 2.2.2
            Reporter: Siddharth Wagle
            Assignee: Siddharth Wagle
             Fix For: 2.2.2


The Grafana service failed to start when starting it from the Ambari UI (from the Ambari Metrics service page, select "Start" from the drop-down).
Failure message from logs: *"/var/log/ambari-metrics-grafana/grafana.out: Permission denied\nFAILED"*

Complete logs:

{noformat}
{
  "href" : "http://172.22.110.160:8080/api/v1/clusters/cl1/requests/4/tasks/181",
  "Tasks" : {
    "attempt_cnt" : 1,
    "cluster_name" : "cl1",
    "command" : "START",
    "command_detail" : "METRICS_GRAFANA START",
    "end_time" : 1457416332931,
    "error_log" : "/var/lib/ambari-agent/data/errors-181.txt",
    "exit_code" : 1,
    "host_name" : "os-r6-dggzcu-ambari-rare-19-5.novalocal",
    "id" : 181,
    "output_log" : "/var/lib/ambari-agent/data/output-181.txt",
    "request_id" : 4,
    "role" : "METRICS_GRAFANA",
    "stage_id" : 4,
    "start_time" : 1457416296331,
    "status" : "FAILED",
    "stderr" : "Traceback (most recent call last):\n File \"/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_grafana.py\", line 70, in <module>\n AmsGrafana().execute()\n File \"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py\", line 219, in execute\n method(env)\n File \"/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_grafana.py\", line 48, in start\n user=params.ams_user\n File \"/usr/lib/python2.6/site-packages/resource_management/core/base.py\", line 154, in __init__\n self.env.run()\n File \"/usr/lib/python2.6/site-packages/resource_management/core/environment.py\", line 158, in run\n self.run_action(resource, action)\n File \"/usr/lib/python2.6/site-packages/resource_management/core/environment.py\", line 121, in run_action\n provider_action()\n File \"/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py\", line 238, in action_run\n tries=self.resource.tries, try_sleep=self.resource.try_sleep)\n File \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 70, in inner\n result = function(command, **kwargs)\n File \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 92, in checked_call\n tries=tries, try_sleep=try_sleep)\n File \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 140, in _call_wrapper\n result = _call(command, **kwargs_copy)\n File \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 291, in _call\n raise Fail(err_msg)\nresource_management.core.exceptions.Fail: Execution of '/usr/sbin/ambari-metrics-grafana start' returned 1. ######## Hortonworks #############\nThis is MOTD message, added for testing in qe infra\nStarting Ambari Metrics Grafana: ....
/usr/sbin/ambari-metrics-grafana: line 114: /var/log/ambari-metrics-grafana/grafana.out: Permission denied\nFAILED",
    "stdout" : "2016-03-08 05:51:46,460 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.0.0-169\n2016-03-08 05:51:46,460 - Checking if need to create versioned conf dir /etc/hadoop/2.4.0.0-169/0\n2016-03-08 05:51:46,467 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}\n2016-03-08 05:51:46,602 - call returned (1, '/etc/hadoop/2.4.0.0-169/0 exist already', '')\n2016-03-08 05:51:46,602 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}\n2016-03-08 05:51:46,788 - checked_call returned (0, '/usr/hdp/2.4.0.0-169/hadoop/conf -> /etc/hadoop/2.4.0.0-169/0')\n2016-03-08 05:51:46,788 - Ensuring that hadoop has the correct symlink structure\n2016-03-08 05:51:46,789 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf\n2016-03-08 05:51:47,321 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.0.0-169\n2016-03-08 05:51:47,322 - Checking if need to create versioned conf dir /etc/hadoop/2.4.0.0-169/0\n2016-03-08 05:51:47,322 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}\n2016-03-08 05:51:47,386 - call returned (1, '/etc/hadoop/2.4.0.0-169/0 exist already', '')\n2016-03-08 05:51:47,387 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}\n2016-03-08 05:51:47,439 - checked_call returned (0, '/usr/hdp/2.4.0.0-169/hadoop/conf -> /etc/hadoop/2.4.0.0-169/0')\n2016-03-08 05:51:47,440 - Ensuring that hadoop has the correct symlink structure\n2016-03-08 05:51:47,440 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf\n2016-03-08 05:51:47,442 - Group['cstm-knox-group'] {}\n2016-03-08 05:51:47,448 - Group['hadoop'] {}\n2016-03-08 05:51:47,449 - Group['users'] {}\n2016-03-08 05:51:47,449 - Group['cstm-spark'] {}\n2016-03-08 05:51:47,449 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}\n2016-03-08 05:51:47,451 - User['cstm-hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}\n2016-03-08 05:51:47,453 - User['cstm-sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}\n2016-03-08 05:51:47,454 - User['cstm-ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}\n2016-03-08 05:51:47,455 - User['cstm-tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}\n2016-03-08 05:51:47,456 - User['cstm-storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}\n2016-03-08 05:51:47,457 - User['cstm-knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}\n2016-03-08 05:51:47,458 - User['cstm-flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}\n2016-03-08 05:51:47,459 - User['cstm-kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}\n2016-03-08 05:51:47,460 - User['cstm-hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}\n2016-03-08 05:51:47,462 - User['cstm-mahout'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}\n2016-03-08 05:51:47,463 - User['cstm-hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}\n2016-03-08 05:51:47,464 - User['cstm-hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}\n2016-03-08 05:51:47,465 - User['cstm-falcon'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}\n2016-03-08 05:51:47,466 - User['cstm-accumulo'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}\n2016-03-08 05:51:47,467 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}\n2016-03-08 05:51:47,468 - User['cstm-zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}\n2016-03-08 05:51:47,470 - User['cstm-oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}\n2016-03-08 05:51:47,471 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}\n2016-03-08 05:51:47,472 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}\n2016-03-08 05:51:47,473 - User['cstm-spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}\n2016-03-08 05:51:47,474 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}\n2016-03-08 05:51:47,475 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}\n2016-03-08 05:51:47,690 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}\n2016-03-08 05:51:47,715 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if\n2016-03-08 05:51:47,716 - Directory['/tmp/hbase-hbase'] {'owner': 'cstm-hbase', 'recursive': True, 'mode': 0775, 'cd_access': 'a'}\n2016-03-08 05:51:48,114 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}\n2016-03-08 05:51:48,416 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh cstm-hbase /home/cstm-hbase,/tmp/cstm-hbase,/usr/bin/cstm-hbase,/var/log/cstm-hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u cstm-hbase) -gt 1000) || (false)'}\n2016-03-08 05:51:48,427 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh cstm-hbase /home/cstm-hbase,/tmp/cstm-hbase,/usr/bin/cstm-hbase,/var/log/cstm-hbase,/tmp/hbase-hbase'] due to not_if\n2016-03-08 05:51:48,428 - Group['cstm-hdfs'] {}\n2016-03-08 05:51:48,428 - User['cstm-hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'cstm-hdfs']}\n2016-03-08 05:51:48,429 - Directory['/etc/hadoop'] {'mode': 0755}\n2016-03-08 05:51:48,577 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'cstm-hdfs', 'group': 'hadoop'}\n2016-03-08 05:51:48,683 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'cstm-hdfs', 'group': 'hadoop', 'mode': 0777}\n2016-03-08 05:51:48,895 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}\n2016-03-08 05:51:49,041 - Directory['/grid/0/log/hdfs'] {'owner': 'root', 'mode': 0775, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}\n2016-03-08 05:51:49,639 - Directory['/grid/0/pid/hdfs'] {'owner': 'root', 'group': 'root', 'recursive': True, 'cd_access': 'a'}\n2016-03-08 05:51:49,974 - Directory['/tmp/hadoop-cstm-hdfs'] {'owner': 'cstm-hdfs', 'recursive': True, 'cd_access': 'a'}\n2016-03-08 05:51:50,119 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'cstm-hdfs'}\n2016-03-08 05:51:50,225 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'cstm-hdfs'}\n2016-03-08 05:51:50,334 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'cstm-hdfs', 'group': 'hadoop', 'mode': 0644}\n2016-03-08 05:51:50,504 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'cstm-hdfs', 'group': 'hadoop'}\n2016-03-08 05:51:50,635 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}\n2016-03-08 05:51:50,766 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'cstm-hdfs', 'group': 'hadoop'}\n2016-03-08 05:51:50,874 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'cstm-hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}\n2016-03-08 05:51:51,039 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}\n2016-03-08 05:51:52,004 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.0.0-169\n2016-03-08 05:51:52,004 - Checking if need to create versioned conf dir /etc/hadoop/2.4.0.0-169/0\n2016-03-08 05:51:52,004 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}\n2016-03-08 05:51:52,056 - call returned (1, '/etc/hadoop/2.4.0.0-169/0 exist already', '')\n2016-03-08 05:51:52,057 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}\n2016-03-08 05:51:52,112 - checked_call returned (0, '/usr/hdp/2.4.0.0-169/hadoop/conf -> /etc/hadoop/2.4.0.0-169/0')\n2016-03-08 05:51:52,112 - Ensuring that hadoop has the correct symlink structure\n2016-03-08 05:51:52,112 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf\n2016-03-08 05:51:52,138 - Directory['/etc/ambari-metrics-grafana/conf'] {'owner': 'cstm-ams', 'group': 'hadoop', 'recursive': True, 'mode': 0755}\n2016-03-08 05:51:52,206 - Changing owner for /etc/ambari-metrics-grafana/conf from 2554 to cstm-ams\n2016-03-08 05:51:52,206 - Changing group for /etc/ambari-metrics-grafana/conf from 2551 to hadoop\n2016-03-08 05:51:52,265 - Directory['/var/log/ambari-metrics-grafana'] {'owner': 'cstm-ams', 'group': 'hadoop', 'recursive': True, 'mode': 0755}\n2016-03-08 05:51:52,322 - Changing owner for /var/log/ambari-metrics-grafana from 0 to cstm-ams\n2016-03-08 05:51:52,322 - Changing group for /var/log/ambari-metrics-grafana from 0 to hadoop\n2016-03-08 05:51:52,368 - Directory['/var/lib/ambari-metrics-grafana'] {'owner': 'cstm-ams', 'group': 'hadoop', 'recursive': True, 'mode': 0755}\n2016-03-08 05:51:52,426 - Changing owner for /var/lib/ambari-metrics-grafana from 0 to cstm-ams\n2016-03-08 05:51:52,426 - Changing group for /var/lib/ambari-metrics-grafana from 0 to hadoop\n2016-03-08 05:51:52,477 - Directory['/var/run/ambari-metrics-grafana'] {'owner': 'cstm-ams', 'group': 'hadoop', 'recursive': True, 'mode': 0755}\n2016-03-08 05:51:52,609 - Changing owner for /var/run/ambari-metrics-grafana from 0 to cstm-ams\n2016-03-08 05:51:52,610 - Changing group for /var/run/ambari-metrics-grafana from 0 to hadoop\n2016-03-08 05:51:52,710 - File['/etc/ambari-metrics-grafana/conf/ams-grafana-env.sh'] {'content': InlineTemplate(...), 'owner': 'cstm-ams', 'group': 'hadoop'}\n2016-03-08 05:51:52,916 - Writing File['/etc/ambari-metrics-grafana/conf/ams-grafana-env.sh'] because contents don't match\n2016-03-08 05:51:53,023 - Changing owner for /etc/ambari-metrics-grafana/conf/ams-grafana-env.sh from 0 to cstm-ams\n2016-03-08 05:51:53,023 - Changing group for /etc/ambari-metrics-grafana/conf/ams-grafana-env.sh from 0 to hadoop\n2016-03-08 05:51:53,051 - File['/etc/ambari-metrics-grafana/conf/ams-grafana.ini'] {'content': InlineTemplate(...), 'owner': 'cstm-ams', 'group': 'hadoop'}\n2016-03-08 05:51:53,225 - Writing File['/etc/ambari-metrics-grafana/conf/ams-grafana.ini'] because contents don't match\n2016-03-08 05:51:53,285 - Changing owner for /etc/ambari-metrics-grafana/conf/ams-grafana.ini from 0 to cstm-ams\n2016-03-08 05:51:53,285 - Changing group for /etc/ambari-metrics-grafana/conf/ams-grafana.ini from 0 to hadoop\n2016-03-08 05:51:53,306 - Execute['/usr/sbin/ambari-metrics-grafana stop'] {'user': 'cstm-ams'}\n2016-03-08 05:51:58,474 - Execute['/usr/sbin/ambari-metrics-grafana start'] {'user': 'cstm-ams'}",
    "structured_out" : { }
  }
}
{noformat}
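The stdout above shows the Directory resources re-owning /var/log/ambari-metrics-grafana itself (owner changed from 0 to cstm-ams), while the wrapper script then fails to append to the grafana.out that already exists inside it. A plausible reading is that the existing grafana.out still belongs to the previous owner after the user change. Below is a minimal sketch of how the start path could re-own the whole tree before launching Grafana, using the same resource_management resources that appear in the log. The params attribute names (ams_grafana_log_dir, ams_grafana_pid_dir, ams_grafana_data_dir, user_group, ams_grafana_script) and the explicit recursive chown are illustrative assumptions, not the patch attached to this issue.

{code}
# Sketch only (Python 2.6, Ambari resource_management); assumes the failure is a
# stale grafana.out under /var/log/ambari-metrics-grafana still owned by the
# previous user (root) after the AMS service user was changed.
from resource_management.core.resources.system import Directory, Execute


def ensure_grafana_dirs_owned(params):
    """Re-own the Grafana log/pid/data trees before starting as params.ams_user."""
    # Hypothetical param names, mirroring the directories seen in the stdout log.
    for dirname in (params.ams_grafana_log_dir,
                    params.ams_grafana_pid_dir,
                    params.ams_grafana_data_dir):
        # Directory() fixes owner/group/mode of the directory inode itself,
        # exactly as logged above ("Changing owner for ... from 0 to cstm-ams").
        Directory(dirname,
                  owner=params.ams_user,
                  group=params.user_group,
                  recursive=True,
                  mode=0755)
        # Files already inside the directory (e.g. grafana.out written by the old
        # owner) keep their old ownership, so re-own the tree explicitly.
        # This is an assumed workaround, not the committed fix.
        Execute(('chown', '-R',
                 '%s:%s' % (params.ams_user, params.user_group),
                 dirname),
                sudo=True)


def start_grafana(params):
    ensure_grafana_dirs_owned(params)
    Execute(params.ams_grafana_script + ' start', user=params.ams_user)
{code}

A quick manual check on the failing host is {{ls -l /var/log/ambari-metrics-grafana/}}: if grafana.out is not owned by cstm-ams, the append at line 114 of /usr/sbin/ambari-metrics-grafana will fail exactly as captured above.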