From: "Andrew Onischuk (JIRA)"
To: dev@ambari.apache.org
Date: Tue, 8 Dec 2015 12:22:10 +0000 (UTC)
Subject: [jira] [Created] (AMBARI-14264) Some component fails to start at single node cluster via Blueprint when AMS is not on cluster

Andrew Onischuk created AMBARI-14264:
----------------------------------------

             Summary: Some component fails to start at single node cluster via Blueprint when AMS is not on cluster
                 Key: AMBARI-14264
                 URL: https://issues.apache.org/jira/browse/AMBARI-14264
             Project: Ambari
          Issue Type: Bug
            Reporter: Andrew Onischuk
            Assignee: Andrew Onischuk
             Fix For: 2.2.0

Set up a single-node cluster via Blueprint.

Blueprint:

    {
      "configurations": [],
      "host_groups": [
        {
          "name": "host1",
          "cardinality": "1",
          "components": [
            { "name": "ZOOKEEPER_SERVER" },
            { "name": "ZOOKEEPER_CLIENT" },
            { "name": "NIMBUS" },
            { "name": "SUPERVISOR" },
            { "name": "STORM_UI_SERVER" },
            { "name": "DRPC_SERVER" }
          ]
        }
      ],
      "Blueprints": {
        "blueprint_name": "STORM",
        "stack_name": "HDP",
        "stack_version": "2.3"
      }
    }

Cluster creation template:

    {
      "blueprint": "STORM",
      "default_password": "password",
      "config_recommendation_strategy": "NEVER_APPLY",
      "host_groups": [
        {
          "name": "host1",
          "hosts": [
            { "fqdn": "c6401.ambari.apache.org", "ip": "192.168.64.101" }
          ]
        }
      ]
    }

Failed task for SUPERVISOR START:

    {
      "href" : "http://172.22.123.182:8080/api/v1/clusters/cl1/requests/5/tasks/12",
      "Tasks" : {
        "attempt_cnt" : 1,
        "cluster_name" : "cl1",
        "command" : "START",
        "command_detail" : "SUPERVISOR START",
        "end_time" : 1448629012734,
        "error_log" : "/var/lib/ambari-agent/data/errors-12.txt",
        "exit_code" : 1,
        "host_name" : "os-r6-aqtpzu-ambari-rare-4-re-5.novalocal",
        "id" : 12,
        "output_log" : "/var/lib/ambari-agent/data/output-12.txt",
        "request_id" : 5,
        "role" : "SUPERVISOR",
        "stage_id" : 2,
        "start_time" : 1448628896461,
        "status" : "FAILED"
      }
    }

stderr:

    Traceback (most recent call last):
      File "/var/lib/ambari-agent/cache/common-services/STORM/0.9.1.2.1/package/scripts/supervisor.py", line 104, in <module>
        Supervisor().execute()
      File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 217, in execute
        method(env)
      File "/var/lib/ambari-agent/cache/common-services/STORM/0.9.1.2.1/package/scripts/supervisor.py", line 87, in start
        service("supervisor", action="start")
      File "/var/lib/ambari-agent/cache/common-services/STORM/0.9.1.2.1/package/scripts/service.py", line 77, in service
        path = params.storm_bin_dir)
      File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
        self.env.run()
      File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 158, in run
        self.run_action(resource, action)
      File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 121, in run_action
        provider_action()
      File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 238, in action_run
        tries=self.resource.tries, try_sleep=self.resource.try_sleep)
      File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
        result = function(command, **kwargs)
      File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
        tries=tries, try_sleep=try_sleep)
      File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
        result = _call(command, **kwargs_copy)
      File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
        raise Fail(err_msg)
    resource_management.core.exceptions.Fail: Execution of '/usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.supervisor$ && /usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.supervisor$ | awk {'print $1'} > /var/run/storm/supervisor.pid' returned 1.
    ######## Hortonworks #############
    This is MOTD message, added for testing in qe infra

stdout:

    2015-11-27 12:55:57,680 - Group['hadoop'] {}
    2015-11-27 12:55:57,683 - User['storm'] {'gid': 'hadoop', 'groups': ['hadoop']}
    2015-11-27 12:55:57,684 - User['zookeeper'] {'gid': 'hadoop', 'groups': ['hadoop']}
    2015-11-27 12:55:57,685 - User['ambari-qa'] {'gid': 'hadoop', 'groups': ['users']}
    2015-11-27 12:55:57,686 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
    2015-11-27 12:55:57,838 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
    2015-11-27 12:55:57,854 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
    2015-11-27 12:55:57,872 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
    2015-11-27 12:55:58,293 - Directory['/var/log/storm'] {'owner': 'storm', 'group': 'hadoop', 'recursive': True, 'mode': 0777}
    2015-11-27 12:55:58,376 - Directory['/var/run/storm'] {'owner': 'storm', 'cd_access': 'a', 'group': 'hadoop', 'mode': 0755, 'recursive': True}
    2015-11-27 12:55:58,569 - Directory['/hadoop/storm'] {'owner': 'storm', 'mode': 0755, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
    2015-11-27 12:55:58,736 - Directory['/usr/hdp/current/storm-supervisor/conf'] {'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
    2015-11-27 12:55:58,796 - Changing group for /usr/hdp/current/storm-supervisor/conf from 0 to hadoop
    2015-11-27 12:55:59,000 - File['/usr/hdp/current/storm-supervisor/conf/config.yaml'] {'owner': 'storm', 'content': Template('config.yaml.j2'), 'group': 'hadoop'}
    2015-11-27 12:55:59,100 - File['/usr/hdp/current/storm-supervisor/conf/storm.yaml'] {'owner': 'storm', 'content': InlineTemplate(...), 'group': 'hadoop'}
    2015-11-27 12:55:59,296 - File['/usr/hdp/current/storm-supervisor/conf/storm-env.sh'] {'content': InlineTemplate(...), 'owner': 'storm'}
    2015-11-27 12:55:59,361 - Writing File['/usr/hdp/current/storm-supervisor/conf/storm-env.sh'] because contents don't match
    2015-11-27 12:55:59,412 - Directory['/usr/hdp/current/storm-supervisor/log4j2'] {'owner': 'storm', 'group': 'hadoop', 'recursive': True, 'mode': 0755}
    2015-11-27 12:55:59,482 - File['/usr/hdp/current/storm-supervisor/log4j2/cluster.xml'] {'content': InlineTemplate(...), 'owner': 'storm'}
    2015-11-27 12:55:59,574 - File['/usr/hdp/current/storm-supervisor/log4j2/worker.xml'] {'content': InlineTemplate(...), 'owner': 'storm'}
    2015-11-27 12:55:59,664 - Execute['source /usr/hdp/current/storm-supervisor/conf/storm-env.sh ; export PATH=$JAVA_HOME/bin:$PATH ; storm supervisor > /var/log/storm/supervisor.out 2>&1'] {'wait_for_finish': False, 'path': ['/usr/hdp/current/storm-supervisor/bin'], 'user': 'storm', 'not_if': "ambari-sudo.sh su storm -l -s /bin/bash -c 'ls /var/run/storm/supervisor.pid >/dev/null 2>&1 && ps -p `cat /var/run/storm/supervisor.pid` >/dev/null 2>&1'"}
    2015-11-27 12:55:59,722 - Execute['/usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.supervisor$ && /usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.supervisor$ | awk {'print $1'} > /var/run/storm/supervisor.pid'] {'logoutput': True, 'path': ['/usr/hdp/current/storm-supervisor/bin'], 'tries': 6, 'user': 'storm', 'try_sleep': 10}
    ######## Hortonworks #############
    This is MOTD message, added for testing in qe infra
    2015-11-27 12:56:00,161 - Retrying after 10 seconds. Reason: Execution of '/usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.supervisor$ && /usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.supervisor$ | awk {'print $1'} > /var/run/storm/supervisor.pid' returned 1. ######## Hortonworks #############
    This is MOTD message, added for testing in qe infra
    ######## Hortonworks #############
    This is MOTD message, added for testing in qe infra
    2015-11-27 12:56:10,981 - Retrying after 10 seconds. Reason: Execution of '/usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.supervisor$ && /usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.supervisor$ | awk {'print $1'} > /var/run/storm/supervisor.pid' returned 1. ######## Hortonworks #############
    This is MOTD message, added for testing in qe infra
    ######## Hortonworks #############
    This is MOTD message, added for testing in qe infra
    2015-11-27 12:56:21,347 - Retrying after 10 seconds. Reason: Execution of '/usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.supervisor$ && /usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.supervisor$ | awk {'print $1'} > /var/run/storm/supervisor.pid' returned 1. ######## Hortonworks #############
    This is MOTD message, added for testing in qe infra
    ######## Hortonworks #############
    This is MOTD message, added for testing in qe infra
    2015-11-27 12:56:31,671 - Retrying after 10 seconds. Reason: Execution of '/usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.supervisor$ && /usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.supervisor$ | awk {'print $1'} > /var/run/storm/supervisor.pid' returned 1. ######## Hortonworks #############
    This is MOTD message, added for testing in qe infra
    ######## Hortonworks #############
    This is MOTD message, added for testing in qe infra
    2015-11-27 12:56:42,034 - Retrying after 10 seconds. Reason: Execution of '/usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.supervisor$ && /usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.supervisor$ | awk {'print $1'} > /var/run/storm/supervisor.pid' returned 1. ######## Hortonworks #############
    This is MOTD message, added for testing in qe infra
    ######## Hortonworks #############
    This is MOTD message, added for testing in qe infra

structured_out:

    { "version" : "2.3.4.0-3349" }

Failed task for NIMBUS START:

    {
      "href" : "http://172.22.123.182:8080/api/v1/clusters/cl1/requests/12/tasks/31",
      "Tasks" : {
        "attempt_cnt" : 1,
        "cluster_name" : "cl1",
        "command" : "START",
        "command_detail" : "NIMBUS START",
        "end_time" : 1448629968713,
        "error_log" : "/var/lib/ambari-agent/data/errors-31.txt",
        "exit_code" : 1,
        "host_name" : "os-r6-aqtpzu-ambari-rare-4-re-5.novalocal",
        "id" : 31,
        "output_log" : "/var/lib/ambari-agent/data/output-31.txt",
        "request_id" : 12,
        "role" : "NIMBUS",
        "stage_id" : 0,
        "start_time" : 1448629907648,
        "status" : "FAILED"
      }
    }

stderr:

    Traceback (most recent call last):
      File "/var/lib/ambari-agent/cache/common-services/STORM/0.9.1.2.1/package/scripts/nimbus.py", line 149, in <module>
        Nimbus().execute()
      File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 217, in execute
        method(env)
      File "/var/lib/ambari-agent/cache/common-services/STORM/0.9.1.2.1/package/scripts/nimbus.py", line 70, in start
        service("nimbus", action="start")
      File "/var/lib/ambari-agent/cache/common-services/STORM/0.9.1.2.1/package/scripts/service.py", line 77, in service
        path = params.storm_bin_dir)
      File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
        self.env.run()
      File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 158, in run
        self.run_action(resource, action)
      File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 121, in run_action
        provider_action()
      File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 238, in action_run
        tries=self.resource.tries, try_sleep=self.resource.try_sleep)
      File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
        result = function(command, **kwargs)
      File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
        tries=tries, try_sleep=try_sleep)
      File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
        result = _call(command, **kwargs_copy)
      File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
        raise Fail(err_msg)
    resource_management.core.exceptions.Fail: Execution of '/usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.nimbus$ && /usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.nimbus$ | awk {'print $1'} > /var/run/storm/nimbus.pid' returned 1.
######## Hortonworks #############\nThis is MOTD message= , added for testing in qe infra", "stdout" : "2015-11-27 13:11:52,585 - Group['hadoop'] {}\n2015-11-2= 7 13:11:52,588 - User['storm'] {'gid': 'hadoop', 'groups': ['hadoop']}\n201= 5-11-27 13:11:52,589 - User['zookeeper'] {'gid': 'hadoop', 'groups': ['hado= op']}\n2015-11-27 13:11:52,590 - User['ambari-qa'] {'gid': 'hadoop', 'group= s': ['users']}\n2015-11-27 13:11:52,591 - File['/var/lib/ambari-agent/tmp/c= hangeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}= \n2015-11-27 13:11:52,734 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh= ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,= /tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) = -gt 1000) || (false)'}\n2015-11-27 13:11:52,749 - Skipping Execute['/var/li= b/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperf= data_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to= not_if\n2015-11-27 13:11:52,767 - Execute[('setenforce', '0')] {'not_if': = '(! 
which getenforce ) || (which getenforce && getenforce | grep -q Disable= d)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}\n2015-11-27 13:11= :53,209 - Directory['/var/log/storm'] {'owner': 'storm', 'group': 'hadoop',= 'recursive': True, 'mode': 0777}\n2015-11-27 13:11:53,295 - Directory['/va= r/run/storm'] {'owner': 'storm', 'cd_access': 'a', 'group': 'hadoop', 'mode= ': 0755, 'recursive': True}\n2015-11-27 13:11:53,493 - Directory['/hadoop/s= torm'] {'owner': 'storm', 'mode': 0755, 'group': 'hadoop', 'recursive': Tru= e, 'cd_access': 'a'}\n2015-11-27 13:11:53,645 - Directory['/usr/hdp/current= /storm-nimbus/conf'] {'group': 'hadoop', 'recursive': True, 'cd_access': 'a= '}\n2015-11-27 13:11:53,713 - Changing group for /usr/hdp/current/storm-nim= bus/conf from 0 to hadoop\n2015-11-27 13:11:54,004 - File['/usr/hdp/current= /storm-nimbus/conf/config.yaml'] {'owner': 'storm', 'content': Template('co= nfig.yaml.j2'), 'group': 'hadoop'}\n2015-11-27 13:11:54,109 - File['/usr/hd= p/current/storm-nimbus/conf/storm.yaml'] {'owner': 'storm', 'content': Inli= neTemplate(...), 'group': 'hadoop'}\n2015-11-27 13:11:54,434 - File['/usr/h= dp/current/storm-nimbus/conf/storm-env.sh'] {'content': InlineTemplate(...)= , 'owner': 'storm'}\n2015-11-27 13:11:54,517 - Writing File['/usr/hdp/curre= nt/storm-nimbus/conf/storm-env.sh'] because contents don't match\n2015-11-2= 7 13:11:54,554 - Directory['/usr/hdp/current/storm-nimbus/log4j2'] {'owner'= : 'storm', 'group': 'hadoop', 'recursive': True, 'mode': 0755}\n2015-11-27 = 13:11:54,635 - File['/usr/hdp/current/storm-nimbus/log4j2/cluster.xml'] {'c= ontent': InlineTemplate(...), 'owner': 'storm'}\n2015-11-27 13:11:54,750 - = File['/usr/hdp/current/storm-nimbus/log4j2/worker.xml'] {'content': InlineT= emplate(...), 'owner': 'storm'}\n2015-11-27 13:11:54,857 - Ranger admin not= installed\n2015-11-27 13:11:54,858 - Execute['source /usr/hdp/current/stor= m-nimbus/conf/storm-env.sh ; export PATH=3D$JAVA_HOME/bin:$PATH ; storm nim= bus 
> /var/log/storm/nimbus.out 2>&1'] {'wait_for_finish': False, 'path': [= '/usr/hdp/current/storm-nimbus/bin'], 'user': 'storm', 'not_if': \"ambari-s= udo.sh su storm -l -s /bin/bash -c 'ls /var/run/storm/nimbus.pid >/dev/null= 2>&1 && ps -p `cat /var/run/storm/nimbus.pid` >/dev/null 2>&1'\"}\n2015-11= -27 13:11:54,956 - Execute['/usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm= .daemon.nimbus$ && /usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.n= imbus$ | awk {'print $1'} > /var/run/storm/nimbus.pid'] {'logoutput': True,= 'path': ['/usr/hdp/current/storm-nimbus/bin'], 'tries': 6, 'user': 'storm'= , 'try_sleep': 10}\n######## Hortonworks #############\nThis is MOTD messag= e, added for testing in qe infra\n2015-11-27 13:11:55,513 - Retrying after = 10 seconds. Reason: Execution of '/usr/jdk64/jdk1.8.0_60/bin/jps -l | grep= storm.daemon.nimbus$ && /usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.da= emon.nimbus$ | awk {'print $1'} > /var/run/storm/nimbus.pid' returned 1. ##= ###### Hortonworks #############\nThis is MOTD message, added for testing i= n qe infra\n######## Hortonworks #############\nThis is MOTD message, added= for testing in qe infra\n2015-11-27 13:12:06,641 - Retrying after 10 secon= ds. Reason: Execution of '/usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.d= aemon.nimbus$ && /usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.nim= bus$ | awk {'print $1'} > /var/run/storm/nimbus.pid' returned 1. ######## H= ortonworks #############\nThis is MOTD message, added for testing in qe inf= ra\n######## Hortonworks #############\nThis is MOTD message, added for tes= ting in qe infra\n2015-11-27 13:12:17,310 - Retrying after 10 seconds. Reas= on: Execution of '/usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.ni= mbus$ && /usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.nimbus$ | a= wk {'print $1'} > /var/run/storm/nimbus.pid' returned 1. 
######## Hortonwor= ks #############\nThis is MOTD message, added for testing in qe infra\n####= #### Hortonworks #############\nThis is MOTD message, added for testing in = qe infra\n2015-11-27 13:12:27,627 - Retrying after 10 seconds. Reason: Exec= ution of '/usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.nimbus$ &&= /usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.nimbus$ | awk {'pri= nt $1'} > /var/run/storm/nimbus.pid' returned 1. ######## Hortonworks #####= ########\nThis is MOTD message, added for testing in qe infra\n######## Hor= tonworks #############\nThis is MOTD message, added for testing in qe infra= \n2015-11-27 13:12:37,969 - Retrying after 10 seconds. Reason: Execution of= '/usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.nimbus$ && /usr/jd= k64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.nimbus$ | awk {'print $1'} = > /var/run/storm/nimbus.pid' returned 1. ######## Hortonworks #############= \nThis is MOTD message, added for testing in qe infra\n######## Hortonworks= #############\nThis is MOTD message, added for testing in qe infra", "structured_out" : { "version" : "2.3.4.0-3349" } } } =20 -- This message was sent by Atlassian JIRA (v6.3.4#6332)