cloudstack-users mailing list archives

From Davide Pala <>
Subject Re: A Story of a Failed XenServer Upgrade
Date Fri, 01 Jan 2016 23:17:53 GMT
Hi Alessandro. If you put the master into maintenance mode, you force the election of a new
pool master. So when you saw the upgraded host as disconnected, you were connected to the new
pool master, and the upgraded host (as a pool member) cannot communicate with a pool master
running an earlier version. The solution? Launch the upgrade on the pool master without
entering maintenance mode. And remember to take a consistent backup first!
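Davide's last point, taking a consistent backup, can be done from the pool master's console before the upgrade. A minimal sketch using the standard xe CLI; the output path is an example, not from the original message:

```shell
# On the pool master, before touching anything: dump the pool database.
# The file path is an assumption; put it somewhere off the host itself.
xe pool-dump-database file-name=/root/pool-db-backup-$(date +%F).dump

# To restore later, a dry run first verifies the dump is usable:
# xe pool-restore-database file-name=/root/pool-db-backup-YYYY-MM-DD.dump dry-run=true
```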

Sent from my Samsung device

-------- Original message --------
From: Alessandro Caviglione <>
Date: 01/01/2016 23:23 (GMT+01:00)
Subject: A Story of a Failed XenServer Upgrade

Hi guys,
I want to share my XenServer upgrade adventure to understand if I did
something wrong.
I upgraded CS from 4.4.4 to 4.5.2 without any issues. After all the VRs had
been upgraded, I started the upgrade process of my XenServer hosts from 6.2
to 6.5. I did not have Pool HA enabled, so I followed this article:

The cluster consists of three XenServer hosts.

First of all, I added manage.xenserver.pool.master=false to the
configuration file and restarted the cloudstack-management service.
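For reference, the change described above amounts to something like the following. The properties file path is an assumption (the original message does not name it), and the restart command depends on the distro:

```shell
# Tell CloudStack not to manage XenServer pool-master elections itself.
# File path is an assumption; adjust to where your install keeps its
# management-server properties.
echo "manage.xenserver.pool.master=false" >> /etc/cloudstack/management/environment.properties

# Restart the management service so the property is picked up.
service cloudstack-management restart
```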

After that, I put the Pool Master host into Maintenance Mode and, after all
VMs had been migrated, I unmanaged the cluster.
At this point all hosts appeared as "Disconnected" in the CS interface,
which should be expected.
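At the XenServer level, putting a host into maintenance and draining its VMs corresponds roughly to these xe calls; the host name-label is a placeholder, not from the original message:

```shell
# Look up the pool master's host UUID (name-label is a placeholder).
MASTER_UUID=$(xe host-list name-label=xs-master-01 params=uuid --minimal)

# Disable the host so no new VMs are scheduled on it...
xe host-disable uuid=$MASTER_UUID
# ...and live-migrate its resident VMs to the other pool members.
xe host-evacuate uuid=$MASTER_UUID
```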
Then I put the XenServer 6.5 CD into the host in Maintenance Mode and
started an in-place upgrade.
After XS 6.5 had been installed, I installed 6.5 SP1 and rebooted again.
At this point I expected that, after clicking Manage Cluster in CS, all the
hosts would come back to "Up" and I could go ahead upgrading the other hosts....

But instead, all the hosts still appeared as "Disconnected"; I tried a
couple of cloudstack-management service restarts without success.

So I opened XenCenter and connected to the Pool Master I had upgraded to
6.5. It appeared to be in Maintenance Mode, so I tried to exit Maintenance
Mode, but I got the error: "The server is still booting".

After some investigation, I ran the command "xe task-list", and this is the
output:
uuid ( RO)                : 72f48a56-1d24-1ca3-aade-091f1830e2f1
          name-label ( RO): VM.set_memory_dynamic_range
    name-description ( RO):
              status ( RO): pending
            progress ( RO): 0.000
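A pending task like this can sometimes be cleared without rebooting the host. A hedged sketch, reusing the task UUID shown above:

```shell
# List pending tasks, then cancel the stuck one by UUID.
xe task-list params=uuid,name-label,status
xe task-cancel uuid=72f48a56-1d24-1ca3-aade-091f1830e2f1

# If the toolstack itself is wedged, restarting it (not the whole host)
# is often enough; running VMs are not affected.
xe-toolstack-restart
```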

I tried a couple of reboots but nothing changed.... so I decided to shut
down the server, force-promote a slave host to master using emergency mode,
remove the old server from CS, and reboot CS.
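For anyone following along, the emergency promotion described above is done on the chosen slave; a minimal sketch with the standard xe commands:

```shell
# On the slave that should become the new master:
xe pool-emergency-transition-to-master

# Then, from the new master, point the remaining slaves at it:
xe pool-recover-slaves
```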

After that, I saw my cluster up and running again, so I reinstalled XS 6.2
SP1 on the "upgraded" host and added it back to the cluster....

So after an entire day of work, I'm back in the same situation! :D

Can anyone tell me if I did something wrong?

Thank you very much!
