Date: Tue, 7 May 2013 21:01:15 +0000 (UTC)
From: "Siddharth Wagle (JIRA)"
To: ambari-dev@incubator.apache.org
Reply-To: ambari-dev@incubator.apache.org
Subject: [jira] [Resolved] (AMBARI-2076) DataNode install failed with custom users

     [ https://issues.apache.org/jira/browse/AMBARI-2076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Siddharth Wagle resolved AMBARI-2076.
-------------------------------------
    Resolution: Fixed

> DataNode install failed with custom users
> -----------------------------------------
>
>                 Key: AMBARI-2076
>                 URL: https://issues.apache.org/jira/browse/AMBARI-2076
>             Project: Ambari
>          Issue Type: Bug
>    Affects Versions: 1.3.0
>            Reporter: Sumit Mohanty
>            Assignee: Siddharth Wagle
>             Fix For: 1.3.0
>
>         Attachments: AMBARI-2076-1.patch, AMBARI-2076.patch
>
>
> 1) Using the latest QA build
> 2) Modified all log and pid directories (just added an 'xx' to all props)
> 3) Modified all user accounts (just added an 'xx' to all props) – see screenshot
> 4) Install failed; the error below is from the DataNode install
> stderr:
> none
> none
> stdout:
> notice: /Stage[main]/Hdp-repos::Process_repo/File[HDP]/ensure: defined content as '{md5}e4521dbc3c19e73cef603cd7d2a7f1b0'
> notice: Finished catalog run in 0.11 seconds
> notice: /Stage[main]/Hdp-repos::Process_repo/File[HDP-epel]/ensure: defined content as '{md5}e9b95c5b568ee24769f8d0f851acade2'
> notice: Finished catalog run in 0.05 seconds
> warning: Dynamic lookup of $ambari_db_rca_url is deprecated. Support will be removed in Puppet 2.8. Use a fully-qualified variable name (e.g., $classname::variable) or parameterized classes.
> warning: Dynamic lookup of $ambari_db_rca_driver is deprecated. Support will be removed in Puppet 2.8. Use a fully-qualified variable name (e.g., $classname::variable) or parameterized classes.
> warning: Dynamic lookup of $ambari_db_rca_username is deprecated. Support will be removed in Puppet 2.8. Use a fully-qualified variable name (e.g., $classname::variable) or parameterized classes.
> warning: Dynamic lookup of $ambari_db_rca_password is deprecated. Support will be removed in Puppet 2.8. Use a fully-qualified variable name (e.g., $classname::variable) or parameterized classes.
> notice: /Stage[1]/Hdp::Create_smoke_user/Group[usersxx]/ensure: created
> notice: /Stage[1]/Hdp/Group[hadoopxx]/ensure: created
> notice: /Stage[1]/Hdp::Create_smoke_user/Hdp::User[ambari-qa]/User[ambari-qa]/ensure: created
> notice: /Stage[1]/Hdp::Create_smoke_user/Hdp::Exec[usermod -u 1012 ambari-qa]/Exec[usermod -u 1012 ambari-qa]/returns: executed successfully
> notice: /Stage[1]/Hdp::Snappy::Package/Hdp::Package[snappy]/Hdp::Package::Process_pkg[snappy]/Package[snappy]/ensure: created
> notice: /Stage[1]/Hdp::Snmp/Hdp::Package[snmp]/Hdp::Package::Process_pkg[snmp]/Package[net-snmp-utils]/ensure: created
> notice: /Stage[1]/Hdp::Snmp/Hdp::Package[snmp]/Hdp::Package::Process_pkg[snmp]/Package[net-snmp]/ensure: created
> notice: /Stage[1]/Hdp::Snmp/Hdp::Package[snmp]/Hdp::Package::Process_pkg[snmp]/Hdp::Java::Package[snmp]/Exec[mkdir -p /tmp/HDP-artifacts/ ; curl -f --retry 10 http://ip-10-101-1-112.ec2.internal:8080/resources//jdk-6u31-linux-x64.bin -o /tmp/HDP-artifacts//jdk-6u31-linux-x64.bin snmp]/returns: executed successfully
> notice: /Stage[1]/Hdp::Set_selinux/Hdp::Exec[/bin/echo 0 > /selinux/enforce]/Exec[/bin/echo 0 > /selinux/enforce]/returns: executed successfully
> notice: /Stage[1]/Hdp::Snmp/Hdp::Package[snmp]/Hdp::Package::Process_pkg[snmp]/Hdp::Java::Package[snmp]/Exec[mkdir -p /usr/jdk ; chmod +x /tmp/HDP-artifacts//jdk-6u31-linux-x64.bin; cd /usr/jdk ; echo A | /tmp/HDP-artifacts//jdk-6u31-linux-x64.bin -noregister > /dev/null 2>&1 snmp]/returns: executed successfully
> notice: /Stage[1]/Hdp::Snmp/Hdp::Package[snmp]/Hdp::Package::Process_pkg[snmp]/Hdp::Java::Package[snmp]/File[/usr/jdk/jdk1.6.0_31/bin/java snmp]/ensure: created
> notice: /Stage[1]/Hdp::Snmp/Hdp::Snmp-configfile[snmpd.conf]/Hdp::Configfile[/etc/snmp//snmpd.conf]/File[/etc/snmp//snmpd.conf]/content: content changed '{md5}8307434bc8ed4e2a7df4928fb4232778' to '{md5}f786955c0c36f7f5a4f375e3fe93c959'
> notice: /Stage[1]/Hdp::Snmp/Service[snmpd]/ensure: ensure changed 'stopped' to 'running'
> notice: /Stage[1]/Hdp::Snmp/Service[snmpd]: Triggered 'refresh' from 1 events
> notice: /Stage[1]/Hdp::Snappy::Package/Hdp::Package[snappy]/Hdp::Package::Process_pkg[snappy]/Package[snappy-devel]/ensure: created
> notice: /Stage[1]/Hdp::Snappy::Package/Hdp::Snappy::Package::Ln[64]/Hdp::Exec[hdp::snappy::package::ln 64]/Exec[hdp::snappy::package::ln 64]/returns: executed successfully
> notice: /Stage[1]/Hdp::Snappy::Package/Hdp::Snappy::Package::Ln[32]/Hdp::Exec[hdp::snappy::package::ln 32]/Exec[hdp::snappy::package::ln 32]/returns: executed successfully
> notice: /Stage[1]/Hdp/Hdp::Package[glibc]/Hdp::Package::Process_pkg[glibc]/Package[glibc.i686]/ensure: created
> notice: /Stage[main]/Hdp-hadoop/Hdp-hadoop::Package[hadoop]/Hdp::Package[hadoop 64]/Hdp::Package::Process_pkg[hadoop 64]/Package[hadoop-sbin]/ensure: created
> notice: /Stage[main]/Hdp-hadoop/Hdp-hadoop::Package[hadoop]/Hdp::Package[hadoop 64]/Hdp::Package::Process_pkg[hadoop 64]/Package[hadoop-libhdfs]/ensure: created
> notice: /Stage[main]/Hdp-hadoop/Hdp-hadoop::Package[hadoop]/Hdp::Package[hadoop 64]/Hdp::Package::Process_pkg[hadoop 64]/Package[hadoop-pipes]/ensure: created
> notice: /Stage[main]/Hdp-hadoop/Hdp-hadoop::Package[hadoop]/Hdp::Package[hadoop 64]/Hdp::Package::Process_pkg[hadoop 64]/Package[hadoop-native]/ensure: created
> notice: /Stage[main]/Hdp-hadoop/Hdp-hadoop::Package[hadoop]/Hdp::Package[hadoop 64]/Hdp::Package::Process_pkg[hadoop 64]/Hdp::Java::Package[hadoop 64]/File[/usr/jdk/jdk1.6.0_31/bin/java hadoop 64]/ensure: created
> notice: /Stage[main]/Hdp-hadoop/Hdp-hadoop::Package[hadoop]/Hdp::Package[hadoop 64]/Hdp::Package::Process_pkg[hadoop 64]/Package[hadoop-lzo]/ensure: created
> notice: /Stage[main]/Hdp-hadoop/Hdp::Directory_recursive_create[/var/run/hadoopXX]/Hdp::Exec[mkdir -p /var/run/hadoopXX]/Exec[mkdir -p /var/run/hadoopXX]/returns: executed successfully
> notice: /Stage[main]/Hdp-hadoop/Hdp::Directory_recursive_create[/var/run/hadoopXX]/Hdp::Directory[/var/run/hadoopXX]/File[/var/run/hadoopXX]/group: group changed 'root' to 'hadoopxx'
> notice: /Stage[2]/Hdp-hadoop::Datanode/Hdp-hadoop::Datanode::Create_data_dirs[/grid/0/hadoop/hdfs/data,/grid/1/hadoop/hdfs/data]/Hdp::Directory_recursive_create_ignore_failure[/grid/0/hadoop/hdfs/data]/Hdp::Exec[mkdir -p /grid/0/hadoop/hdfs/data ; exit 0]/Exec[mkdir -p /grid/0/hadoop/hdfs/data ; exit 0]/returns: executed successfully
> notice: /Stage[main]/Hdp-hadoop/Hdp-hadoop::Package[hadoop]/Hdp::Package[hadoop 64]/Hdp::Package::Process_pkg[hadoop 64]/Package[hadoop-lzo-native]/ensure: created
> notice: /Stage[2]/Hdp-hadoop::Initialize/Hdp::Package[ambari-log4j]/Hdp::Package::Process_pkg[ambari-log4j]/Hdp::Java::Package[ambari-log4j]/File[/usr/jdk/jdk1.6.0_31/bin/java ambari-log4j]/ensure: created
> notice: /Stage[2]/Hdp-hadoop::Initialize/Hdp::Package[ambari-log4j]/Hdp::Package::Process_pkg[ambari-log4j]/Package[ambari-log4j]/ensure: created
> notice: /Stage[2]/Hdp-hadoop::Datanode/Hdp-hadoop::Datanode::Create_data_dirs[/grid/0/hadoop/hdfs/data,/grid/1/hadoop/hdfs/data]/Hdp::Directory_recursive_create_ignore_failure[/grid/1/hadoop/hdfs/data]/Hdp::Exec[mkdir -p /grid/1/hadoop/hdfs/data ; exit 0]/Exec[mkdir -p /grid/1/hadoop/hdfs/data ; exit 0]/returns: executed successfully
> err: /Stage[main]/Hdp-hadoop/Hdp::User[mapredxx]/User[mapredxx]/ensure: change from absent to present failed: Could not create user mapredxx: Execution of '/usr/sbin/useradd -s /bin/bash -g hadoopxx -G hadoopxx,mapredxx -m mapredxx' returned 6: useradd: group 'mapredxx' does not exist
> notice: /Stage[2]/Hdp-hadoop::Initialize/File[/usr/lib/hadoop/lib/hadoop-tools.jar]/ensure: created
> notice: /Stage[2]/Hdp-hadoop::Datanode/Hdp-hadoop::Datanode::Create_data_dirs[/grid/0/hadoop/hdfs/data,/grid/1/hadoop/hdfs/data]/Hdp::Directory_recursive_create_ignore_failure[/grid/1/hadoop/hdfs/data]/Hdp::Exec[chown hdfsxx:hadoopxx /grid/1/hadoop/hdfs/data; exit 0]/Exec[chown hdfsxx:hadoopxx /grid/1/hadoop/hdfs/data; exit 0]/returns: executed successfully
> notice: /Stage[2]/Hdp-hadoop::Datanode/Hdp-hadoop::Datanode::Create_data_dirs[/grid/0/hadoop/hdfs/data,/grid/1/hadoop/hdfs/data]/Hdp::Directory_recursive_create_ignore_failure[/grid/1/hadoop/hdfs/data]/Hdp::Exec[chmod 0750 /grid/1/hadoop/hdfs/data ; exit 0]/Exec[chmod 0750 /grid/1/hadoop/hdfs/data ; exit 0]/returns: executed successfully
> notice: /Stage[main]/Hdp-hadoop/Hdp::Directory_recursive_create[/var/log/hadoopXX]/Hdp::Exec[mkdir -p /var/log/hadoopXX]/Exec[mkdir -p /var/log/hadoopXX]/returns: executed successfully
> notice: /Stage[main]/Hdp-hadoop/Hdp::Directory_recursive_create[/var/log/hadoopXX]/Hdp::Directory[/var/log/hadoopXX]/File[/var/log/hadoopXX]/group: group changed 'root' to 'hadoopxx'
> notice: /Stage[2]/Hdp-hadoop::Datanode/Hdp-hadoop::Datanode::Create_data_dirs[/grid/0/hadoop/hdfs/data,/grid/1/hadoop/hdfs/data]/Hdp::Directory_recursive_create_ignore_failure[/grid/0/hadoop/hdfs/data]/Hdp::Exec[chown hdfsxx:hadoopxx /grid/0/hadoop/hdfs/data; exit 0]/Exec[chown hdfsxx:hadoopxx /grid/0/hadoop/hdfs/data; exit 0]/returns: executed successfully
> notice: /Stage[2]/Hdp-hadoop::Datanode/Hdp-hadoop::Datanode::Create_data_dirs[/grid/0/hadoop/hdfs/data,/grid/1/hadoop/hdfs/data]/Hdp::Directory_recursive_create_ignore_failure[/grid/0/hadoop/hdfs/data]/Hdp::Exec[chmod 0750 /grid/0/hadoop/hdfs/data ; exit 0]/Exec[chmod 0750 /grid/0/hadoop/hdfs/data ; exit 0]/returns: executed successfully
> err: /Stage[main]/Hdp-hadoop/Hdp::User[hdfsxx]/User[hdfsxx]/ensure: change from absent to present failed: Could not create user hdfsxx: Execution of '/usr/sbin/useradd -s /bin/bash -g hadoopxx -G hadoopxx,hdfsxx -m hdfsxx' returned 6: useradd: group 'hdfsxx' does not exist
> notice: /Stage[main]/Hdp-hadoop/Hdp::Directory_recursive_create[/etc/hadoop/conf]/Hdp::Exec[mkdir -p /etc/hadoop/conf]/Anchor[hdp::exec::mkdir -p /etc/hadoop/conf::begin]: Dependency User[mapredxx] has failures: true
> notice: /Stage[main]/Hdp-hadoop/Hdp::Directory_recursive_create[/etc/hadoop/conf]/Hdp::Exec[mkdir -p /etc/hadoop/conf]/Anchor[hdp::exec::mkdir -p /etc/hadoop/conf::begin]: Dependency User[hdfsxx] has failures: true
> warning: /Stage[main]/Hdp-hadoop/Hdp::Directory_recursive_create[/etc/hadoop/conf]/Hdp::Exec[mkdir -p /etc/hadoop/conf]/Anchor[hdp::exec::mkdir -p /etc/hadoop/conf::begin]: Skipping because of failed dependencies
> notice: /Stage[main]/Hdp-hadoop/Hdp::Directory_recursive_create[/etc/hadoop/conf]/Hdp::Exec[mkdir -p /etc/hadoop/conf]/Exec[mkdir -p /etc/hadoop/conf]: Dependency User[mapredxx] has failures: true
> notice: /Stage[main]/Hdp-hadoop/Hdp::Directory_recursive_create[/etc/hadoop/conf]/Hdp::Exec[mkdir -p /etc/hadoop/conf]/Exec[mkdir -p /etc/hadoop/conf]: Dependency User[hdfsxx] has failures: true
> warning: /Stage[main]/Hdp-hadoop/Hdp::Directory_recursive_create[/etc/hadoop/conf]/Hdp::Exec[mkdir -p /etc/hadoop/conf]/Exec[mkdir -p /etc/hadoop/conf]: Skipping because of failed dependencies
> notice: /Stage[main]/Hdp-hadoop/Hdp::Directory_recursive_create[/etc/hadoop/conf]/Hdp::Exec[mkdir -p /etc/hadoop/conf]/Anchor[hdp::exec::mkdir -p /etc/hadoop/conf::end]: Dependency User[mapredxx] has failures: true
> notice: /Stage[main]/Hdp-hadoop/Hdp::Directory_recursive_create[/etc/hadoop/conf]/Hdp::Exec[mkdir -p /etc/hadoop/conf]/Anchor[hdp::exec::mkdir -p /etc/hadoop/conf::end]: Dependency User[hdfsxx] has failures: true
> warning: /Stage[main]/Hdp-hadoop/Hdp::Directory_recursive_create[/etc/hadoop/conf]/Hdp::Exec[mkdir -p /etc/hadoop/conf]/Anchor[hdp::exec::mkdir -p /etc/hadoop/conf::end]: Skipping because of failed dependencies
> notice: /Stage[main]/Hdp-hadoop/Hdp::Directory_recursive_create[/etc/hadoop/conf]/Hdp::Directory[/etc/hadoop/conf]/File[/etc/hadoop/conf]: Dependency User[mapredxx] has failures: true
> notice: /Stage[main]/Hdp-hadoop/Hdp::Directory_recursive_create[/etc/hadoop/conf]/Hdp::Directory[/etc/hadoop/conf]/File[/etc/hadoop/conf]: Dependency User[hdfsxx] has failures: true
> warning: /Stage[main]/Hdp-hadoop/Hdp::Directory_recursive_create[/etc/hadoop/conf]/Hdp::Directory[/etc/hadoop/conf]/File[/etc/hadoop/conf]: Skipping because of failed dependencies
> notice: /Stage[2]/Hdp-hadoop::Initialize/File[/etc/hadoop/conf/ssl-server.xml.example]: Dependency User[mapredxx] has failures: true
> notice: /Stage[2]/Hdp-hadoop::Initialize/File[/etc/hadoop/conf/ssl-server.xml.example]: Dependency User[hdfsxx] has failures: true
> warning: /Stage[2]/Hdp-hadoop::Initialize/File[/etc/hadoop/conf/ssl-server.xml.example]: Skipping because of failed dependencies

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
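The two err lines above carry the root cause: /usr/sbin/useradd was run with '-g hadoopxx -G hadoopxx,mapredxx' (and likewise for hdfsxx) before any group named mapredxx or hdfsxx existed, and useradd return code 6 means a group referenced by -g/-G was not found. A minimal shell sketch of the ordering the run needed (an illustration of the failure mode only, not the attached AMBARI-2076 patch):

    #!/bin/sh
    # useradd exits with code 6 when a group named in -g or -G
    # does not exist yet, so create every custom group first.
    groupadd hadoopxx    # primary group; the failed run did create this one
    groupadd mapredxx    # supplementary group, missing in the failed run
    groupadd hdfsxx      # likewise for the HDFS user
    # With the groups in place, the exact commands from the log succeed:
    /usr/sbin/useradd -s /bin/bash -g hadoopxx -G hadoopxx,mapredxx -m mapredxx
    /usr/sbin/useradd -s /bin/bash -g hadoopxx -G hadoopxx,hdfsxx -m hdfsxx

In Puppet terms, each custom User resource has to require the Group resources it references so the catalog orders the groupadd-equivalent work before useradd runs.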