Subject: Re: A Story of a Failed XenServer Upgrade
From: Alessandro Caviglione <c.alessandro@gmail.com>
To: users@cloudstack.apache.org
Date: Sat, 2 Jan 2016 22:27:04 +0100

No guys, as the article wrote, my first action was to put the Pool Master
into Maintenance Mode INSIDE CS:

"It is vital that you upgrade the XenServer Pool Master first before any of
the Slaves. To do so you need to empty the Pool Master of all CloudStack
VMs, and you do this by putting the Host into Maintenance Mode within
CloudStack to trigger a live migration of all VMs to alternate Hosts."

This is exactly what I did, and after the XS upgrade no host was able to
communicate with CS, nor with the upgraded host.

Does putting a host into Maintenance Mode within CS also trigger Maintenance
Mode on the XenServer host, or does it just move the VMs to other hosts?

And again.... what are the best practices for upgrading a XS cluster?
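For reference, a minimal sketch of the CloudStack-side half of that
sequence, driven from CloudMonkey (assuming the standard
prepareHostForMaintenance / cancelHostMaintenance API calls; the host UUID
is a placeholder):

    # live-migrate all VMs off the host and flag it in the CloudStack DB
    prepare hostformaintenance id=<host-uuid>

    # poll until resourcestate reports "Maintenance"
    list hosts id=<host-uuid> filter=name,state,resourcestate

    # once the hypervisor work is done, hand the host back to CloudStack
    cancel hostmaintenance id=<host-uuid>

As I understand it, this migrates the VMs and updates CloudStack's view of
the host, but it does not put the XenServer host itself into XenServer
maintenance mode; that remains a separate step in XenCenter or via xe.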
On Sat, Jan 2, 2016 at 7:11 PM, Remi Bergsma wrote:

> CloudStack should always do the migration of VMs, not the Hypervisor.
>
> That's not true. You can safely migrate outside of CloudStack, as the power
> report will tell CloudStack where the VMs live and the DB gets updated
> accordingly. I do this a lot while patching and that works fine on 6.2 and
> 6.5. I use both CloudStack 4.4.4 and 4.7.0.
>
> Regards, Remi
>
> Sent from my iPhone
>
> On 02 Jan 2016, at 16:26, Jeremy Peterson <jpeterson@acentek.net> wrote:
>
> I don't use XenServer maintenance mode until after CloudStack has put the
> Host in maintenance mode.
>
> When you initiate maintenance mode from the host rather than CloudStack,
> the DB does not know where the VMs are and your UUIDs get jacked.
>
> CS is your brains, not the hypervisor.
>
> Maintenance in CS. All VMs will migrate. Maintenance in XenCenter.
> Upgrade. Reboot. Join Pool. Remove Maintenance, starting at the hypervisor
> if needed and then CS, and move on to the next Host.
>
> CloudStack should always do the migration of VMs, not the Hypervisor.
>
> Jeremy
>
>
> -----Original Message-----
> From: Davide Pala [mailto:davide.pala@gesca.it]
> Sent: Friday, January 1, 2016 5:18 PM
> To: users@cloudstack.apache.org
> Subject: R: A Story of a Failed XenServer Upgrade
>
> Hi Alessandro. If you put the master in maintenance mode, you force the
> election of a new pool master. So when you saw the upgraded host as
> disconnected, you were connected to the new pool master, and the host (as
> a pool member) cannot communicate with a pool master of an earlier
> version. The solution? Launch the upgrade on the pool master without
> entering maintenance mode. And remember a consistent backup!!!
>
>
> Sent from my Samsung device
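For completeness, a minimal sketch of the xe commands involved when the
master role has to move (standard XenServer CLI; the UUID is a placeholder,
and the emergency commands should only be run when the old master really is
unreachable):

    # on the slave being promoted, when the master is gone for good:
    xe pool-emergency-transition-to-master
    xe pool-recover-slaves

    # in a healthy pool, the supported way to move the master role instead:
    xe pool-designate-new-master host-uuid=<uuid-of-new-master>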
>
> -------- Original Message --------
> From: Alessandro Caviglione <c.alessandro@gmail.com>
> Date: 01/01/2016 23:23 (GMT+01:00)
> To: users@cloudstack.apache.org
> Subject: A Story of a Failed XenServer Upgrade
>
> Hi guys,
> I want to share my XenServer upgrade adventure to understand if I did
> something wrong.
> I upgraded CS from 4.4.4 to 4.5.2 without any issues; after all the VRs
> had been upgraded, I started the upgrade process of my XenServer hosts
> from 6.2 to 6.5.
> I don't have PoolHA enabled, so I followed this article:
>
> http://www.shapeblue.com/how-to-upgrade-an-apache-cloudstack-citrix-xenserver-cluster/
>
> The cluster consists of 3 XenServer hosts.
>
> First of all I added manage.xenserver.pool.master=false
> to the environment.properties file and restarted the cloudstack-management
> service.
>
> After that I put the Pool Master host into Maintenance Mode and, after all
> the VMs had been migrated, I Unmanaged the cluster.
> At this point all hosts appeared as "Disconnected" in the CS interface,
> which should be right.
> Then I put the XenServer 6.5 CD into the host in Maintenance Mode and
> started an in-place upgrade.
> After XS 6.5 had been installed, I installed 6.5 SP1 and rebooted again.
>
> At this point I expected that, after clicking Manage Cluster in CS, all
> the hosts would come back "Up" and I could go ahead upgrading the other
> hosts....
>
> But instead all the hosts still appeared as "Disconnected"; I tried a
> couple of cloudstack-management service restarts without success.
>
> So I opened XenCenter and connected to the Pool Master I had upgraded to
> 6.5. It appeared in Maintenance Mode, so I tried to exit Maintenance Mode
> but got the error: "The server is still booting".
>
> After some investigation, I ran the command "xe task-list" and this is the
> result:
>
> uuid ( RO)             : 72f48a56-1d24-1ca3-aade-091f1830e2f1
> name-label ( RO)       : VM.set_memory_dynamic_range
> name-description ( RO) :
> status ( RO)           : pending
> progress ( RO)         : 0.000
>
> I tried a couple of reboots but nothing changed.... so I decided to shut
> down the server, force-promote a slave host to master with emergency mode,
> remove the old server from CS and reboot CS.
>
> After that, I saw my cluster up and running again, so I installed XS
> 6.2 SP1 on the "upgraded" host and added it back to the cluster....
>
> So after an entire day of work, I'm in the same situation! :D
>
> Can anyone tell me if I did something wrong??
>
> Thank you very much!
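For anyone who hits the same stuck task: a possible way to inspect and
clear it without repeated reboots, assuming the pending task is actually
safe to cancel, is:

    # confirm the task is still pending
    xe task-list

    # cancel the stuck task by the uuid shown above
    xe task-cancel uuid=72f48a56-1d24-1ca3-aade-091f1830e2f1

    # restarting the toolstack is often enough; it should not touch running VMs
    xe-toolstack-restart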