ambari-user mailing list archives

From Anandha L Ranganathan <analog.s...@gmail.com>
Subject Re: HDP upgrade failed on Finalize Upgrade Pre-Check using ambari.
Date Fri, 20 May 2016 14:59:24 GMT
The ZooKeeper service is running on the host machine, but I could not
restart it from the Ambari UI.

Here is some further information.


Process ID information:



[root@usw2dxdpzo01 ~]# ps -ef |grep zoo
501       2759     1  0 May19 ?        00:01:41 /usr/java/default/bin/java
-Dzookeeper.log.dir=/var/log/zookeeper
-Dzookeeper.log.file=zookeeper-zookeeper-server-usw2dxdpzo01.glassdoor.local.log
-Dzookeeper.root.logger=INFO,ROLLINGFILE -cp
/usr/hdp/2.4.0.0-169//zookeeper/bin/../build/classes:/usr/hdp/2.4.0.0-169//zookeeper/bin/../build/lib/*.jar:/usr/hdp/2.4.0.0-169//zookeeper/bin/../lib/xercesMinimal-1.9.6.2.jar:/usr/hdp/2.4.0.0-169//zookeeper/bin/../lib/wagon-provider-api-2.4.jar:/usr/hdp/2.4.0.0-169//zookeeper/bin/../lib/wagon-http-shared4-2.4.jar:/usr/hdp/2.4.0.0-169//zookeeper/bin/../lib/wagon-http-shared-1.0-beta-6.jar:/usr/hdp/2.4.0.0-169//zookeeper/bin/../lib/wagon-http-lightweight-1.0-beta-6.jar:/usr/hdp/2.4.0.0-169//zookeeper/bin/../lib/wagon-http-2.4.jar:/usr/hdp/2.4.0.0-169//zookeeper/bin/../lib/wagon-file-1.0-beta-6.jar:/usr/hdp/2.4.0.0-169//zookeeper/bin/../lib/slf4j-log4j12-1.6.1.jar:/usr/hdp/2.4.0.0-169//zookeeper/bin/../lib/slf4j-api-1.6.1.jar:/usr/hdp/2.4.0.0-169//zookeeper/bin/../lib/plexus-utils-3.0.8.jar:/usr/hdp/2.4.0.0-169//zookeeper/bin/../lib/plexus-interpolation-1.11.jar:/usr/hdp/2.4.0.0-169//zookeeper/bin/../lib/plexus-container-default-1.0-alpha-9-stable-1.jar:/usr/hdp/2.4.0.0-169//zookeeper/bin/../lib/netty-3.7.0.Final.jar:/usr/hdp/2.4.0.0-169//zookeeper/bin/../lib/nekohtml-1.9.6.2.jar:/usr/hdp/2.4.0.0-169//zookeeper/bin/../lib/maven-settings-2.2.1.jar:/usr/hdp/2.4.0.0-169//zookeeper/bin/../lib/maven-repository-metadata-2.2.1.jar:/usr/hdp/2.4.0.0-169//zookeeper/bin/../lib/maven-project-2.2.1.jar:/usr/hdp/2.4.0.0-169//zookeeper/bin/../lib/maven-profile-2.2.1.jar:/usr/hdp/2.4.0.0-169//zookeeper/bin/../lib/maven-plugin-registry-2.2.1.jar:/usr/hdp/2.4.0.0-169//zookeeper/bin/../lib/maven-model-2.2.1.jar:/usr/hdp/2.4.0.0-169//zookeeper/bin/../lib/maven-error-diagnostics-2.2.1.jar:/usr/hdp/2.4.0.0-169//zookeeper/bin/../lib/maven-artifact-manager-2.2.1.jar:/usr/hdp/2.4.0.0-169//zookeeper/bin/../lib/maven-artifact-2.2.1.jar:/usr/hdp/2.4.0.0-169//zookeeper/bin/../lib/maven-ant-tasks-2.1.3.jar:/usr/hdp/2.4.0.0-169//zookeeper/bin/../lib/log4j-1.2.16.jar:/usr/hdp/2.4.0.0-169//zookeeper/bin/../lib/jsoup-1.7.1.jar:/usr/hdp/2.4.0.0-169//zookeeper/bin/../lib/jline-0.9.94.jar:/usr/hdp/2.4.0.0-169//zookeeper/bin/../lib/httpcore-4.2.3.jar:/usr/hdp/2.4.0.0-169//zookeeper/bin/../lib/httpclient-4.2.3.jar:/usr/hdp/2.4.0.0-169//zookeeper/bin/../lib/commons-logging-1.1.1.jar:/usr/hdp/2.4.0.0-169//zookeeper/bin/../lib/commons-io-2.2.jar:/usr/hdp/2.4.0.0-169//zookeeper/bin/../lib/commons-codec-1.6.jar:/usr/hdp/2.4.0.0-169//zookeeper/bin/../lib/classworlds-1.1-alpha-2.jar:/usr/hdp/2.4.0.0-169//zookeeper/bin/../lib/backport-util-concurrent-3.1.jar:/usr/hdp/2.4.0.0-169//zookeeper/bin/../lib/ant-launcher-1.8.0.jar:/usr/hdp/2.4.0.0-169//zookeeper/bin/../lib/ant-1.8.0.jar:/usr/hdp/2.4.0.0-169//zookeeper/bin/../zookeeper-3.4.6.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169//zookeeper/bin/../src/java/lib/*.jar:/usr/hdp/2.4.0.0-169//zookeeper/conf::/usr/hdp/2.4.0.0-169//zookeeper/conf:/usr/hdp/2.4.0.0-169//zookeeper/*:/usr/hdp/2.4.0.0-169//zookeeper/lib/*:/usr/share/zookeeper/*
-Xmx1024m -Dzookeeper.log.threshold=INFO -Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.local.only=false
org.apache.zookeeper.server.quorum.QuorumPeerMain
/usr/hdp/2.4.0.0-169//zookeeper/conf/zoo.cfg
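
As a sanity check outside of Ambari, the server can also be probed
directly. This assumes the default client port 2181 from zoo.cfg (a
healthy server replies "imok"), and the script path assumes the standard
HDP layout:

echo ruok | nc localhost 2181
/usr/hdp/current/zookeeper-server/bin/zkServer.sh status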


This is the output from the restart log in Ambari:

2016-05-20 14:31:30,685 - User['hdfs'] {'fetch_nonlocal_groups': True,
'groups': ['hadoop', 'hdfs']}
2016-05-20 14:31:30,686 - Directory['/etc/hadoop'] {'mode': 0755}
2016-05-20 14:31:30,686 -
Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner':
'hdfs', 'group': 'hadoop', 'mode': 0777}
2016-05-20 14:31:30,696 - Execute[('setenforce', '0')] {'not_if': '(!
which getenforce ) || (which getenforce && getenforce | grep -q
Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2016-05-20 14:31:30,709 - Directory['/var/log/hadoop'] {'owner':
'root', 'mode': 0775, 'group': 'hadoop', 'recursive': True,
'cd_access': 'a'}
2016-05-20 14:31:30,710 - Directory['/var/run/hadoop'] {'owner':
'root', 'group': 'root', 'recursive': True, 'cd_access': 'a'}
2016-05-20 14:31:30,710 - Directory['/tmp/hadoop-hdfs'] {'owner':
'hdfs', 'recursive': True, 'cd_access': 'a'}
2016-05-20 14:31:30,711 - Directory['/etc/hadoop/conf'] {'owner':
'hdfs', 'group': 'hadoop', 'recursive': True}
2016-05-20 14:31:30,711 - Creating directory
Directory['/etc/hadoop/conf'] since it doesn't exist.
2016-05-20 14:31:30,711 - Following the link /etc/hadoop/conf to
/usr/hdp/current/hadoop-client/conf to create the directory
Error: Error: Unable to run the custom hook script ['/usr/bin/python',
'/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-START/scripts/hook.py',
'START', '/var/lib/ambari-agent/data/command-3206.json',
'/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-START',
'/var/lib/ambari-agent/data/structured-out-3206.json', 'INFO',
'/var/lib/ambari-agent/tmp']
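
For reference, the failing hook can be re-run by hand on the node to get
the full Python traceback (the arguments below are copied verbatim from
the error above), and the /etc/hadoop/conf symlink chain checked, since
the log stops right after the "Following the link" line:

/usr/bin/python /var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-START/scripts/hook.py START /var/lib/ambari-agent/data/command-3206.json /var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-START /var/lib/ambari-agent/data/structured-out-3206.json INFO /var/lib/ambari-agent/tmp

ls -l /etc/hadoop/conf /usr/hdp/current/hadoop-client/conf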







On Thu, May 19, 2016 at 9:17 PM, Anandha L Ranganathan <
analog.sony@gmail.com> wrote:

> Thanks Nate. It helps.
>
> I haven't changed anything in the database. I just tried an ambari-server
> restart first. After the restart, I tried the finalize pre-check again and
> all the errors were gone.
>
>
>
> On Wed, May 18, 2016 at 6:03 PM, Nate Cole <ncole@hortonworks.com> wrote:
>
>> Those steps look correct.  It is always good practice to back up your
>> database before trying anything directly against it.
>>
>>
>>
>> You should not lose information from an Ambari restart.
>>
>>
>>
>> Thanks
>>
>>
>>
>> *From: *Anandha L Ranganathan <analog.sony@gmail.com>
>> *Reply-To: *"user@ambari.apache.org" <user@ambari.apache.org>
>> *Date: *Wednesday, May 18, 2016 at 7:28 PM
>> *To: *"user@ambari.apache.org" <user@ambari.apache.org>
>> *Cc: *"dev@ambari.apache.org" <dev@ambari.apache.org>
>>
>> *Subject: *Re: HDP upgrade failed on Finalize Upgrade Pre-Check using
>> ambari.
>>
>>
>>
>> Thanks Nate for quick reply.
>>
>> I actually found that the hostcomponentstate table is still referring to
>> the old version.
>>
>> 2 | SPARK_CLIENT           | 2.2.6.0-2800 | 5 | INSTALLED | 3   | SPARK     | NONE | UNKNOWN   | 35
>> 2 | SPARK_JOBHISTORYSERVER | 2.2.6.0-2800 | 5 | STARTED   | 6   | SPARK     | NONE | UNKNOWN   | 17
>> 2 | ZOOKEEPER_SERVER       | 2.2.6.0-2800 | 5 | INSTALLED | 154 | ZOOKEEPER | NONE | UNSECURED | 78
>> 2 | ZOOKEEPER_SERVER       | 2.2.6.0-2800 | 5 | STARTED   | 155 | ZOOKEEPER | NONE | UNSECURED | 83
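>>
>> (For context, these rows came from a query roughly like the one below
>> against the Ambari database, using the hostcomponentstate table Nate
>> mentioned; exact table and column names may differ by Ambari version.)
>>
>> select * from hostcomponentstate where version = '2.2.6.0-2800';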
>>
>>
>>
>> Can you confirm these are the steps I need to take:
>>
>>
>>
>> 1) Update the version to the latest for the component in the table and
>> commit it (rough sketch below).
>>
>> 2)  Restart the ambari-server.
>>
>> 3) Continue the upgrade process with retry option.
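>>
>> Roughly what I have in mind for step 1, assuming Ambari is backed by
>> PostgreSQL with the default database/user name "ambari" (a MySQL-backed
>> install would use the equivalent mysql commands), and using the new
>> version string the nodes report:
>>
>> # back up the Ambari database first
>> pg_dump -U ambari ambari > ambari_backup.sql
>>
>> # point the stale rows at the new version, then do step 2
>> psql -U ambari ambari -c "UPDATE hostcomponentstate SET version = '2.4.0.0-169' WHERE version = '2.2.6.0-2800';"
>> ambari-server restart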
>>
>> During the ambari-server restart, I just want to confirm: will I lose any
>> information, or does Ambari maintain all the state information in the
>> tables?
>>
>> Thanks
>>
>> Anand
>>
>>
>>
>>
>>
>> On Wed, May 18, 2016 at 3:04 PM, Nate Cole <ncole@hortonworks.com> wrote:
>>
>> Are all the services running on the correct version?  If that is the
>> case, you can set the actual version in the hostcomponentstate table.  If
>> you then restart Ambari, you should be able to retry the step and see if it
>> succeeds.
>>
>>
>>
>> Thanks
>>
>> *From: *Anandha L Ranganathan <analog.sony@gmail.com>
>> *Reply-To: *"user@ambari.apache.org" <user@ambari.apache.org>
>> *Date: *Wednesday, May 18, 2016 at 5:05 PM
>> *To: *"user@ambari.apache.org" <user@ambari.apache.org>, "
>> dev@ambari.apache.org" <dev@ambari.apache.org>
>> *Subject: *Re: HDP upgrade failed on Finalize Upgrade Pre-Check using
>> ambari.
>>
>>
>>
>> I also verified this on one of the nodes, and it is pointing to the 2.4
>> version.
>>
>> [root@usw2dxdpzo01 ~]# ls -lrt /usr/hdp/current/zookeeper-client
>> lrwxrwxrwx. 1 root root 30 May 18 15:42 /usr/hdp/current/zookeeper-client
>> -> /usr/hdp/2.4.0.0-169/zookeeper
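>>
>> (To see at a glance whether anything on a node is still on the old bits,
>> hdp-select can list every package and the version it points at; anything
>> not showing 2.4.0.0-169 would need attention:)
>>
>> hdp-select status | grep -v 2.4.0.0-169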
>>
>>
>>
>>
>>
>>
>>
>> On Wed, May 18, 2016 at 9:44 AM, Anandha L Ranganathan <
>> analog.sony@gmail.com> wrote:
>>
>> Hi,
>>
>> I created a test cluster and am trying to upgrade it to HDP 2.4. The
>> upgrade failed partway through.
>>
>>
>>
>> Steps to upgrade:
>>
>> Ambari upgrade - ambari-2.1.0 => ambari-2.2.1.0
>> HDP upgrade - HDP 2.2.6 => HDP 2.4.0.0
>>
>>
>>
>> During the upgrade I hit one issue: I forgot to turn off the JMX port, so
>> I commented it out during the upgrade and it went through fine. I have a
>> total of 15 instances in this test cluster, and the upgrade failed on 7 of
>> them.
>>
>> But in the final step, "Finalize Upgrade Pre-Check
>> <http://localhost:8080/>", it throws an error. It didn't give any
>> information in the log to debug the issue, and it doesn't give an option
>> to proceed further. Is there any workaround for this? The upgrade process
>> completed 97% but couldn't proceed further.
>>
>> 1.  The following components were found to have version mismatches.
>> Finalize will not complete successfully:
>>
>> 2. usw2dxdpgw01: SPARK/SPARK_CLIENT reports 2.2.6.0-2800
>>
>> 3. usw2dxdpma03: SPARK/SPARK_JOBHISTORYSERVER reports 2.2.6.0-2800
>>
>> 4. usw2dxdpzo01: ZOOKEEPER/ZOOKEEPER_SERVER reports 2.2.6.0-2800
>>
>> 5. usw2dxdpzo02: ZOOKEEPER/ZOOKEEPER_SERVER reports 2.2.6.0-2800
>>
>> 6. usw2dxdpzo03: ZOOKEEPER/ZOOKEEPER_SERVER reports 2.2.6.0-2800
>>
>> 7. usw2dxdpgw01: ZOOKEEPER/ZOOKEEPER_CLIENT reports 2.2.6.0-2800
>>
>> 8. usw2dxdpmn01: ZOOKEEPER/ZOOKEEPER_CLIENT reports 2.2.6.0-2800
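>>
>> (For reference, the per-host reported versions should also be visible
>> through the Ambari REST API; the HostRoles/version field below is my
>> best guess for this Ambari release, and the cluster name and credentials
>> are placeholders:)
>>
>> curl -s -u admin:admin 'http://localhost:8080/api/v1/clusters/CLUSTER_NAME/host_components?fields=HostRoles/component_name,HostRoles/version'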
>>
>>
>>
>>
>>
>>
>>
>
>
