cloudstack-users mailing list archives

From Alessandro Caviglione <>
Subject Re: A Story of a Failed XenServer Upgrade
Date Sat, 02 Jan 2016 21:27:04 GMT
No guys, as the article says, my first action was to put the Pool Master
into Maintenance Mode INSIDE CS: "It is vital that you upgrade the XenServer
Pool Master first before any of the Slaves.  To do so you need to empty the
Pool Master of all CloudStack VMs, and you do this by putting the Host into
Maintenance Mode within CloudStack to trigger a live migration of all VMs
to alternate Hosts"
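
That CloudStack-side maintenance step can also be driven from the API. Below is a minimal sketch using CloudMonkey; the host UUID is a placeholder and the `CMK` wrapper defaults to echoing the commands (dry run) rather than executing them, so this only illustrates the calls involved:

```shell
#!/bin/sh
# Hedged sketch: trigger CloudStack-side maintenance for a host.
# HOST_ID is a placeholder; CMK defaults to 'echo cloudmonkey' so the
# script prints the intended API calls instead of executing them.
CMK=${CMK:-"echo cloudmonkey"}
HOST_ID=${HOST_ID:-"aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"}

# Ask CloudStack (not XenServer) to evacuate the host via live migration.
$CMK prepare hostformaintenance id="$HOST_ID"

# Poll until the host reports the expected resource state.
$CMK list hosts id="$HOST_ID" filter=name,state,resourcestate
```

Remove the `echo` default (set `CMK=cloudmonkey`) to run it for real against a management server.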

That is exactly what I did, and after the XS upgrade no host was able
to communicate with CS, nor with the upgraded host.

Does putting a host in Maintenance Mode within CS also trigger Maintenance
Mode on the XenServer host, or does it just move the VMs to other hosts?

And again.... what is the best practice for upgrading a XS cluster?
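
For reference, the sequence Jeremy describes below (maintenance in CS first, then on the hypervisor, then back out in reverse order) maps roughly onto the commands in this sketch. The UUIDs are placeholders, and `XE`/`CMK` default to echoing the commands (dry run), so nothing here is executed as written:

```shell
#!/bin/sh
# Hedged sketch of the per-host upgrade loop discussed in this thread.
# XE and CMK default to 'echo ...' so the script prints the commands.
XE=${XE:-"echo xe"}
CMK=${CMK:-"echo cloudmonkey"}
HOST_ID=${HOST_ID:-"11111111-2222-3333-4444-555555555555"}      # CloudStack host id (placeholder)
HOST_UUID=${HOST_UUID:-"aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"}  # XenServer host uuid (placeholder)

# 1. Maintenance in CloudStack first, so the CS database tracks the
#    live migrations and knows where every VM ends up.
$CMK prepare hostformaintenance id="$HOST_ID"

# 2. Then maintenance on the XenServer side (what XenCenter does):
$XE host-disable uuid="$HOST_UUID"
$XE host-evacuate uuid="$HOST_UUID"

# 3. Upgrade and reboot happen out of band (installer CD / upgrade media).

# 4. Bring the host back: exit maintenance on the hypervisor, then in CS.
$XE host-enable uuid="$HOST_UUID"
$CMK cancel hostmaintenance id="$HOST_ID"
```

This is only an illustration of the ordering, not a substitute for the official upgrade procedure for your XS and CS versions.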

On Sat, Jan 2, 2016 at 7:11 PM, Remi Bergsma <> wrote:

> "CloudStack should always do the migration of VM's not the Hypervisor."
> That's not true. You can safely migrate outside of CloudStack, as the power
> report will tell CloudStack where the VMs live and the db gets updated
> accordingly. I do this a lot while patching and that works fine on 6.2 and
> 6.5. I use both CloudStack 4.4.4 and 4.7.0.
> Regards, Remi
> Sent from my iPhone
> On 02 Jan 2016, at 16:26, Jeremy Peterson <> wrote:
> I don't use XenServer maintenance mode until after CloudStack has put the
> Host in maintenance mode.
> When you initiate maintenance mode from the host rather than CloudStack,
> the db does not know where the VMs are and your UUIDs get jacked.
> CS is your brains, not the hypervisor.
> Maintenance in CS.  All VMs will migrate.  Maintenance in XenCenter.
> Upgrade.  Reboot.  Join Pool.  Remove Maintenance starting at the hypervisor
> if needed, then CS, and move on to the next Host.
> CloudStack should always do the migration of VMs, not the Hypervisor.
> Jeremy
> -----Original Message-----
> From: Davide Pala []
> Sent: Friday, January 1, 2016 5:18 PM
> To:<>
> Subject: Re: A Story of a Failed XenServer Upgrade
> Hi Alessandro. If you put the master into maintenance mode, you force the
> election of a new pool master. So when you see the upgraded host as
> disconnected, you are connected to the new pool master, and the host (as a
> pool member) cannot communicate with a pool master of an earlier version.
> The solution? Launch the upgrade on the pool master without entering
> maintenance mode. And remember a consistent backup!!!
> Sent from my Samsung device
> -------- Original message --------
> From: Alessandro Caviglione <>
> Date: 01/01/2016 23:23 (GMT+01:00)
> To:<>
> Subject: A Story of a Failed XenServer Upgrade
> Hi guys,
> I want to share my XenServer upgrade adventure, to understand if I did
> something wrong.
> I upgraded CS from 4.4.4 to 4.5.2 without any issues; after all the VRs
> had been upgraded, I started the upgrade process of my XenServer hosts from
> 6.2 to 6.5.
> I did not already have Pool HA enabled, so I followed this article:
> The cluster consists of 3 XenServer hosts.
> First of all I added manage.xenserver.pool.master=false
> to the file and restarted the cloudstack-management service.
> After that I put the Pool Master host into Maintenance Mode and, after all
> VMs had been migrated, I Unmanaged the cluster.
> At this point all hosts appeared as "Disconnected" in the CS interface, and
> this should be right.
> Then I put the XenServer 6.5 CD into the host in Maintenance Mode and
> started an in-place upgrade.
> After XS 6.5 had been installed, I installed 6.5 SP1 and rebooted again.
> At this point I expected that, after clicking Manage Cluster in CS, all
> the hosts would come back "Up" and I could go ahead upgrading the other
> hosts....
> But instead, all the hosts still appeared as "Disconnected"; I tried a
> couple of cloudstack-management service restarts without success.
> So I opened XenCenter and connected to the Pool Master I had upgraded to
> 6.5; it appeared in Maintenance Mode, so I tried to exit Maintenance Mode
> but I got the error: The server is still booting
> After some investigation, I ran the command "xe task-list" and this is the
> result:
> uuid ( RO)              : 72f48a56-1d24-1ca3-aade-091f1830e2f1
> name-label ( RO)        : VM.set_memory_dynamic_range
> name-description ( RO)  :
> status ( RO)            : pending
> progress ( RO)          : 0.000
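
One hedged way to attack a stuck pending XAPI task like the one above is to ask XAPI to cancel it. In this sketch `XE` defaults to echoing the commands (dry run), and the task uuid is the one from the listing:

```shell
#!/bin/sh
# Sketch: inspect and cancel a stuck XAPI task. Dry run by default.
XE=${XE:-"echo xe"}
TASK_UUID=${TASK_UUID:-"72f48a56-1d24-1ca3-aade-091f1830e2f1"}

# Show the pending tasks, then ask XAPI to cancel the stuck one.
$XE task-list params=uuid,name-label,status
$XE task-cancel uuid="$TASK_UUID"

# Not every task is cancellable; if this has no effect, restarting the
# toolstack on the host (xe-toolstack-restart) is the usual next step.
```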
> I tried a couple of reboots but nothing changed.... so I decided to shut
> down the server, force a slave host to become master with emergency mode,
> remove the old server from CS and reboot CS.
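
The "force a slave host to become master with emergency mode" step corresponds roughly to the commands below, run on the slave being promoted. As above, `XE` defaults to echoing, so this is only an illustration:

```shell
#!/bin/sh
# Sketch: promote a slave to pool master when the master is gone.
XE=${XE:-"echo xe"}

# Run ON the slave that should become the new pool master:
$XE pool-emergency-transition-to-master

# Then point the other surviving slaves at the new master:
$XE pool-recover-slaves
```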
> After that, I saw my cluster up and running again, so I installed XS
> 6.2 SP1 on the "upgraded" host and added it back to the cluster....
> So after an entire day of work, I'm in the same situation! :D
> Can anyone tell me if I did something wrong??
> Thank you very much!
