From: "Adrian Sender"
To: users@cloudstack.apache.org
Subject: Re: Upgrade of Primary storage.
Date: Fri, 13 May 2016 22:56:33 +1000

Hi Makrand,

There are many different ways to achieve storage migration. I would
probably add the new LUN and, within CloudStack, perform a storage
migration on the instances (there would potentially be no outage). For
the NFS secondary storage you could rsync the data preserving the
directory structure, update the database, and rebuild the SSVM.

There was an article on secondary storage migration, but it seems to
have vanished from the Citrix website; maybe someone from Accelerite
knows where it is now - http://support.citrix.com/article/CTX135229

- Adrian Sender
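For the secondary storage part of that approach, here is a minimal
sketch of what the copy and the database edit could look like. The
mount paths, the new NFS URL and the image_store table/column names
are assumptions (roughly a stock CloudStack 4.x schema), so check them
against your own database before running anything.

#!/usr/bin/env python3
"""Sketch of the secondary storage move: rsync the old NFS export to
the new one, point the CloudStack database at the new URL, then destroy
the SSVM so it is rebuilt against the new export. All paths, URLs and
the image_store table/column names below are assumptions - verify them
against your version's schema first."""

import subprocess

OLD_MOUNT = "/mnt/old-secondary"   # old Nexenta export, mounted locally (hypothetical path)
NEW_MOUNT = "/mnt/new-secondary"   # new export (hypothetical path)
NEW_URL = "nfs://new-nas.example.com/export/secondary"   # hypothetical new URL

# 1. Copy everything, preserving the directory structure; run it once
#    while the cloud is live and again after VMs are stopped to catch
#    the delta.
subprocess.run(["rsync", "-avP", "--delete",
                f"{OLD_MOUNT}/", f"{NEW_MOUNT}/"], check=True)

# 2. Point CloudStack at the new export. On 4.x the secondary storage
#    URL lives in the image_store table; on much older releases it sat
#    in the host table instead, so check the schema for your version.
update_sql = (f"UPDATE cloud.image_store SET url = '{NEW_URL}' "
              "WHERE name = 'secondary';")   # 'secondary' is a placeholder name
subprocess.run(["mysql", "-u", "cloud", "-p", "-e", update_sql], check=True)

# 3. Destroy the SSVM from the UI or API afterwards; CloudStack
#    recreates it and the replacement mounts the new export.

Running the rsync twice (once up front, once inside the outage window)
keeps the final cut-over short.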
---------- Original Message -----------
From: Makrand
To: users@cloudstack.apache.org
Sent: Fri, 13 May 2016 12:28:28 +0530
Subject: Upgrade of Primary storage.

> Guys,
>
> I need a little help. In my second week on a new job, I am supposed to
> upgrade the OS on our Nexenta box (primary and secondary storage for
> the cloud).
>
> Overview of the setup:
>
> 1 Zone >> 1 Pod >> 1 Cluster >> 8 Hosts
>
> Hypervisor: Citrix XenServer 6.5 (Free edition)
>
> Primary storage (cluster level): on the Nexenta
>
> Secondary storage: on the same Nexenta box (which also serves primary)
>
> The management node is a VM (Ubuntu 12.04 LTS) whose root disk also
> resides on primary storage from the same Nexenta box, apparently on a
> different LUN. This VM runs on a management cluster (2 XenServer 6.5
> hosts).
>
> I am not sure why and how they kept a single storage box for all
> primary and secondary storage. Pretty weird. It was done by a vendor
> or someone long ago; that's a whole different story.
>
> Anyhow, coming to the point of this email, I've chalked out the plan
> below.
> ------------------------------------------------------
> A) Shut down all VMs in the following order: first user VMs, then VRs,
> then system VMs.
>
> B) Put all XenServer hosts into maintenance mode (MM) from CloudStack.
>
>    1) Verify that all hosts are down (shut them down one by one) from
>       XenCenter.
>
> C) SSH into the management server VM. Back up the CloudStack DB and
> save it on the jump box (use WinSCP).
>
> D) Shut down the CloudStack management server VM and the other VMs on
> the management cluster. This needs to be done by issuing the
> 'shutdown' command at the OS level.
>
> E) Shut down the management cluster XenServer hosts.
>
> F) Upgrade the storage. (A reboot is needed at the end, hence all the
> shutdowns.)
>
> G) Once the upgrade is done, start the machines (physical and virtual)
> in the reverse order of the shutdown (steps E to A above). Something
> like:
>
>    1) Management cluster XenServer hosts
>    2) CloudStack management server VM and other management cluster VMs
>    3) XenServer hypervisor nodes under the cloud
>    4) Take the XenServer hosts out of MM from the CloudStack admin GUI
>    5) Start the SSVM and console proxy VM (I guess CloudStack will
>       recreate the CPVM and SSVM when we start the first user VM; not
>       sure if this step will be needed)
>    6) Customer VMs and their corresponding VRs
>
> H) Verify everything and try to deploy a test VM.
> ------------------------------------------------------
>
> All this downtime is OK. Do you think I am missing anything? Any
> comments on improving this? Should I expect more glitches? What was
> your previous experience with a primary storage upgrade?
>
> Note:
> Last time the same upgrade was done for a similar zone, there was an
> issue mapping the primary storage LUNs to the XenServer hosts. A
> restart of the hosts did the trick that time, hence my manager wants
> to shut down all physical hosts this time (since all VMs have their
> disks on this storage box, which will reboot once or twice during the
> upgrade).
>
> BTW:
>
> 1) Is it necessary to put the XenServer hosts into MM from CloudStack
> if I am shutting down all the VMs anyway?
>
> 2) After I bring up the management server in step G-1, will it still
> show the hosts in MM? What will it try to do after it is up (assuming
> all hosts in the cluster are down at that time)?
>
> Thanks for reading.
>
> --
> Best,
> Makrand
------- End of Original Message -------
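A small addition to step C of the plan quoted above: a minimal sketch
of what the database backup could look like, assuming the default
'cloud' and 'cloud_usage' schemas and a MySQL account that can read
them; adjust the names and credentials for your install, then copy the
resulting file to the jump box before step D.

#!/usr/bin/env python3
"""Dump the CloudStack databases before the storage work starts. The
schema names and the MySQL user are assumptions (the installer
defaults); adjust them for this installation."""

import subprocess
from datetime import date

dump_file = f"/root/cloudstack-db-{date.today()}.sql"

# mysqldump both schemas into one file; --routines keeps stored
# procedures as well.
with open(dump_file, "w") as out:
    subprocess.run(
        ["mysqldump", "-u", "root", "-p", "--routines",
         "--databases", "cloud", "cloud_usage"],
        stdout=out,
        check=True,
    )

print(f"Dump written to {dump_file}; copy it off the management server "
      "before shutting it down.")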