Return-Path:
X-Original-To: apmail-cloudstack-commits-archive@www.apache.org
Delivered-To: apmail-cloudstack-commits-archive@www.apache.org
Received: from mail.apache.org (hermes.apache.org [140.211.11.3]) by minotaur.apache.org (Postfix) with SMTP id 3E6DA102E7 for ; Mon, 17 Jun 2013 07:04:15 +0000 (UTC)
Received: (qmail 59261 invoked by uid 500); 17 Jun 2013 07:04:13 -0000
Delivered-To: apmail-cloudstack-commits-archive@cloudstack.apache.org
Received: (qmail 59079 invoked by uid 500); 17 Jun 2013 07:04:11 -0000
Mailing-List: contact commits-help@cloudstack.apache.org; run by ezmlm
Precedence: bulk
List-Help:
List-Unsubscribe:
List-Post:
List-Id:
Reply-To: dev@cloudstack.apache.org
Delivered-To: mailing list commits@cloudstack.apache.org
Received: (qmail 59059 invoked by uid 99); 17 Jun 2013 07:04:10 -0000
Received: from tyr.zones.apache.org (HELO tyr.zones.apache.org) (140.211.11.114) by apache.org (qpsmtpd/0.29) with ESMTP; Mon, 17 Jun 2013 07:04:10 +0000
Received: by tyr.zones.apache.org (Postfix, from userid 65534) id 6E46F8A5871; Mon, 17 Jun 2013 07:04:10 +0000 (UTC)
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
From: tsp@apache.org
To: commits@cloudstack.apache.org
Date: Mon, 17 Jun 2013 07:04:12 -0000
Message-Id:
In-Reply-To: <241269c0356d49eeba4d28063965bbfb@git.apache.org>
References: <241269c0356d49eeba4d28063965bbfb@git.apache.org>
X-Mailer: ASF-Git Admin Mailer
Subject: [3/3] git commit: updated refs/heads/master to 0587d3a

Moving maintenance mode into component/maint

All tests that could possibly disrupt the runs of other tests by putting resources into maintenance will be put under maint/. This should allow us to run these tests sequentially, when other tests are not running on a deployment.
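The reorganization described above amounts to a path-based split: anything under component/maint/ is kept out of the batch that can safely run concurrently and is executed on its own. A minimal sketch of that selection rule (the `partition_tests` helper is illustrative only, not part of this commit or of Marvin):

```python
# Illustrative sketch of the split this commit enables: modules under
# component/maint/ put hosts/zones into maintenance mode, so they must
# run sequentially, apart from the parallel-safe tests.
def partition_tests(modules):
    """Return (parallel_safe, maintenance_only) lists of module paths."""
    maint = [m for m in modules if "/maint/" in m]
    safe = [m for m in modules if "/maint/" not in m]
    return safe, maint

modules = [
    "test/integration/component/test_vpc.py",
    "test/integration/component/maint/test_high_availability.py",
    "test/integration/component/maint/test_vpc_host_maintenance.py",
]
safe, maint = partition_tests(modules)
```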
Signed-off-by: Prasanna Santhanam

Project: http://git-wip-us.apache.org/repos/asf/cloudstack/repo
Commit: http://git-wip-us.apache.org/repos/asf/cloudstack/commit/0587d3a4
Tree: http://git-wip-us.apache.org/repos/asf/cloudstack/tree/0587d3a4
Diff: http://git-wip-us.apache.org/repos/asf/cloudstack/diff/0587d3a4

Branch: refs/heads/master
Commit: 0587d3a496b4c5d29a6b14774f5e7fdc42a5bb33
Parents: a3d585e
Author: Prasanna Santhanam
Authored: Mon Jun 17 12:32:50 2013 +0530
Committer: Prasanna Santhanam
Committed: Mon Jun 17 12:33:57 2013 +0530

----------------------------------------------------------------------
 test/integration/component/maint/__init__.py    |   21 +
 .../component/maint/test_high_availability.py   | 1079 ++++++++++++++++++
 .../maint/test_host_high_availability.py        |  810 +++++++++++++
 .../maint/test_vpc_host_maintenance.py          |  561 +++++++++
 .../maint/test_vpc_on_host_maintenance.py       |  323 ++++++
 .../component/test_high_availability.py         | 1079 ------------------
 .../component/test_host_high_availability.py    |  810 -------------
 test/integration/component/test_vpc.py          |  252 +---
 .../component/test_vpc_host_maintenance.py      |  889 ---------------
 9 files changed, 2795 insertions(+), 3029 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/cloudstack/blob/0587d3a4/test/integration/component/maint/__init__.py
----------------------------------------------------------------------
diff --git a/test/integration/component/maint/__init__.py b/test/integration/component/maint/__init__.py
new file mode 100644
index 0000000..f044f8b
--- /dev/null
+++ b/test/integration/component/maint/__init__.py
@@ -0,0 +1,21 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.
The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""
+Tests that put hosts, zones, and resources into maintenance mode are here.
+These will have to be run sequentially when resources are available, so as not to disrupt other tests.
+"""
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/cloudstack/blob/0587d3a4/test/integration/component/maint/test_high_availability.py
----------------------------------------------------------------------
diff --git a/test/integration/component/maint/test_high_availability.py b/test/integration/component/maint/test_high_availability.py
new file mode 100644
index 0000000..7b0f78e
--- /dev/null
+++ b/test/integration/component/maint/test_high_availability.py
@@ -0,0 +1,1079 @@
+#!/usr/bin/env python
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.
See the License for the +# specific language governing permissions and limitations +# under the License. + +""" P1 tests for high availability +""" +#Import Local Modules +import marvin +from nose.plugins.attrib import attr +from marvin.cloudstackTestCase import * +from marvin.cloudstackAPI import * +from marvin.integration.lib.utils import * +from marvin.integration.lib.base import * +from marvin.integration.lib.common import * +from marvin import remoteSSHClient +import datetime + + +class Services: + """Test network offering Services + """ + + def __init__(self): + self.services = { + "account": { + "email": "test@test.com", + "firstname": "HA", + "lastname": "HA", + "username": "HA", + # Random characters are appended for unique + # username + "password": "password", + }, + "service_offering": { + "name": "Tiny Instance", + "displaytext": "Tiny Instance", + "cpunumber": 1, + "cpuspeed": 100, # in MHz + "memory": 128, # In MBs + }, + "lbrule": { + "name": "SSH", + "alg": "roundrobin", + # Algorithm used for load balancing + "privateport": 22, + "publicport": 2222, + }, + "natrule": { + "privateport": 22, + "publicport": 22, + "protocol": "TCP" + }, + "fw_rule": { + "startport": 1, + "endport": 6000, + "cidr": '55.55.0.0/11', + # Any network (For creating FW rule) + }, + "virtual_machine": { + "displayname": "VM", + "username": "root", + "password": "password", + "ssh_port": 22, + "hypervisor": 'XenServer', + # Hypervisor type should be same as + # hypervisor type of cluster + "privateport": 22, + "publicport": 22, + "protocol": 'TCP', + }, + "templates": { + "displaytext": "Public Template", + "name": "Public template", + "ostype": 'CentOS 5.3 (64-bit)', + "url": "http://download.cloud.com/releases/2.0.0/UbuntuServer-10-04-64bit.vhd.bz2", + "hypervisor": 'XenServer', + "format": 'VHD', + "isfeatured": True, + "ispublic": True, + "isextractable": True, + "templatefilter": 'self', + }, + "ostype": 'CentOS 5.3 (64-bit)', + # Cent OS 5.3 (64 bit) + "sleep": 60, + 
"timeout": 100, + "mode": 'advanced' + } + + +class TestHighAvailability(cloudstackTestCase): + + @classmethod + def setUpClass(cls): + + cls.api_client = super( + TestHighAvailability, + cls + ).getClsTestClient().getApiClient() + cls.services = Services().services + # Get Zone, Domain and templates + cls.domain = get_domain( + cls.api_client, + cls.services + ) + cls.zone = get_zone( + cls.api_client, + cls.services + ) + cls.pod = get_pod( + cls.api_client, + zoneid=cls.zone.id, + services=cls.services + ) + cls.template = get_template( + cls.api_client, + cls.zone.id, + cls.services["ostype"] + ) + cls.services["virtual_machine"]["zoneid"] = cls.zone.id + cls.services["virtual_machine"]["template"] = cls.template.id + + cls.service_offering = ServiceOffering.create( + cls.api_client, + cls.services["service_offering"], + offerha=True + ) + cls._cleanup = [ + cls.service_offering, + ] + return + + @classmethod + def tearDownClass(cls): + try: + #Cleanup resources used + cleanup_resources(cls.api_client, cls._cleanup) + except Exception as e: + raise Exception("Warning: Exception during cleanup : %s" % e) + return + + def setUp(self): + self.apiclient = self.testClient.getApiClient() + self.dbclient = self.testClient.getDbConnection() + self.account = Account.create( + self.apiclient, + self.services["account"], + admin=True, + domainid=self.domain.id + ) + self.cleanup = [self.account] + return + + def tearDown(self): + try: + #Clean up, terminate the created accounts, domains etc + cleanup_resources(self.apiclient, self.cleanup) + except Exception as e: + raise Exception("Warning: Exception during cleanup : %s" % e) + return + + @attr(tags = ["advanced", "advancedns", "multihost"]) + def test_01_host_maintenance_mode(self): + """Test host maintenance mode + """ + + + # Validate the following + # 1. Create Vms. Acquire IP. Create port forwarding & load balancing + # rules for Vms. + # 2. Host 1: put to maintenance mode. 
All Vms should failover to Host + # 2 in cluster. Vms should be in running state. All port forwarding + # rules and load balancing Rules should work. + # 3. After failover to Host 2 succeeds, deploy Vms. Deploy Vms on host + # 2 should succeed. + # 4. Host 1: cancel maintenance mode. + # 5. Host 2 : put to maintenance mode. All Vms should failover to + # Host 1 in cluster. + # 6. After failover to Host 1 succeeds, deploy VMs. Deploy Vms on + # host 1 should succeed. + + hosts = Host.list( + self.apiclient, + zoneid=self.zone.id, + resourcestate='Enabled', + type='Routing' + ) + self.assertEqual( + isinstance(hosts, list), + True, + "List hosts should return valid host response" + ) + self.assertGreaterEqual( + len(hosts), + 2, + "There must be two hosts present in a cluster" + ) + self.debug("Checking HA with hosts: %s, %s" % ( + hosts[0].name, + hosts[1].name + )) + self.debug("Deploying VM in account: %s" % self.account.name) + # Spawn an instance in that network + virtual_machine = VirtualMachine.create( + self.apiclient, + self.services["virtual_machine"], + accountid=self.account.name, + domainid=self.account.domainid, + serviceofferingid=self.service_offering.id + ) + vms = VirtualMachine.list( + self.apiclient, + id=virtual_machine.id, + listall=True + ) + self.assertEqual( + isinstance(vms, list), + True, + "List VMs should return valid response for deployed VM" + ) + self.assertNotEqual( + len(vms), + 0, + "List VMs should return valid response for deployed VM" + ) + vm = vms[0] + self.debug("Deployed VM on host: %s" % vm.hostid) + self.assertEqual( + vm.state, + "Running", + "Deployed VM should be in RUnning state" + ) + networks = Network.list( + self.apiclient, + account=self.account.name, + domainid=self.account.domainid, + listall=True + ) + self.assertEqual( + isinstance(networks, list), + True, + "List networks should return valid list for the account" + ) + network = networks[0] + + self.debug("Associating public IP for account: %s" % + 
self.account.name) + public_ip = PublicIPAddress.create( + self.apiclient, + accountid=self.account.name, + zoneid=self.zone.id, + domainid=self.account.domainid, + networkid=network.id + ) + + self.debug("Associated %s with network %s" % ( + public_ip.ipaddress.ipaddress, + network.id + )) + self.debug("Creating PF rule for IP address: %s" % + public_ip.ipaddress.ipaddress) + nat_rule = NATRule.create( + self.apiclient, + virtual_machine, + self.services["natrule"], + ipaddressid=public_ip.ipaddress.id + ) + + self.debug("Creating LB rule on IP with NAT: %s" % + public_ip.ipaddress.ipaddress) + + # Create Load Balancer rule on IP already having NAT rule + lb_rule = LoadBalancerRule.create( + self.apiclient, + self.services["lbrule"], + ipaddressid=public_ip.ipaddress.id, + accountid=self.account.name + ) + self.debug("Created LB rule with ID: %s" % lb_rule.id) + + # Should be able to SSH VM + try: + self.debug("SSH into VM: %s" % virtual_machine.id) + ssh = virtual_machine.get_ssh_client( + ipaddress=public_ip.ipaddress.ipaddress) + except Exception as e: + self.fail("SSH Access failed for %s: %s" % \ + (virtual_machine.ipaddress, e) + ) + + first_host = vm.hostid + self.debug("Enabling maintenance mode for host %s" % vm.hostid) + cmd = prepareHostForMaintenance.prepareHostForMaintenanceCmd() + cmd.id = first_host + self.apiclient.prepareHostForMaintenance(cmd) + + self.debug("Waiting for SSVMs to come up") + wait_for_ssvms( + self.apiclient, + zoneid=self.zone.id, + podid=self.pod.id, + ) + + timeout = self.services["timeout"] + # Poll and check state of VM while it migrates from one host to another + while True: + vms = VirtualMachine.list( + self.apiclient, + id=virtual_machine.id, + listall=True + ) + self.assertEqual( + isinstance(vms, list), + True, + "List VMs should return valid response for deployed VM" + ) + self.assertNotEqual( + len(vms), + 0, + "List VMs should return valid response for deployed VM" + ) + vm = vms[0] + + self.debug("VM 1 state: %s" % 
vm.state) + if vm.state in ["Stopping", + "Stopped", + "Running", + "Starting", + "Migrating"]: + if vm.state == "Running": + break + else: + time.sleep(self.services["sleep"]) + timeout = timeout - 1 + else: + self.fail( + "VM migration from one-host-to-other failed while enabling maintenance" + ) + second_host = vm.hostid + self.assertEqual( + vm.state, + "Running", + "VM should be in Running state after enabling host maintenance" + ) + # Should be able to SSH VM + try: + self.debug("SSH into VM: %s" % virtual_machine.id) + ssh = virtual_machine.get_ssh_client( + ipaddress=public_ip.ipaddress.ipaddress) + except Exception as e: + self.fail("SSH Access failed for %s: %s" % \ + (virtual_machine.ipaddress, e) + ) + self.debug("Deploying VM in account: %s" % self.account.name) + # Spawn an instance on other host + virtual_machine_2 = VirtualMachine.create( + self.apiclient, + self.services["virtual_machine"], + accountid=self.account.name, + domainid=self.account.domainid, + serviceofferingid=self.service_offering.id + ) + vms = VirtualMachine.list( + self.apiclient, + id=virtual_machine_2.id, + listall=True + ) + self.assertEqual( + isinstance(vms, list), + True, + "List VMs should return valid response for deployed VM" + ) + self.assertNotEqual( + len(vms), + 0, + "List VMs should return valid response for deployed VM" + ) + vm = vms[0] + self.debug("Deployed VM on host: %s" % vm.hostid) + self.debug("VM 2 state: %s" % vm.state) + self.assertEqual( + vm.state, + "Running", + "Deployed VM should be in Running state" + ) + + self.debug("Canceling host maintenance for ID: %s" % first_host) + cmd = cancelHostMaintenance.cancelHostMaintenanceCmd() + cmd.id = first_host + self.apiclient.cancelHostMaintenance(cmd) + self.debug("Maintenance mode canceled for host: %s" % first_host) + + self.debug("Enabling maintenance mode for host %s" % second_host) + cmd = prepareHostForMaintenance.prepareHostForMaintenanceCmd() + cmd.id = second_host + 
self.apiclient.prepareHostForMaintenance(cmd) + self.debug("Maintenance mode enabled for host: %s" % second_host) + + self.debug("Waiting for SSVMs to come up") + wait_for_ssvms( + self.apiclient, + zoneid=self.zone.id, + podid=self.pod.id, + ) + + # Poll and check the status of VMs + timeout = self.services["timeout"] + while True: + vms = VirtualMachine.list( + self.apiclient, + account=self.account.name, + domainid=self.account.domainid, + listall=True + ) + self.assertEqual( + isinstance(vms, list), + True, + "List VMs should return valid response for deployed VM" + ) + self.assertNotEqual( + len(vms), + 0, + "List VMs should return valid response for deployed VM" + ) + vm = vms[0] + self.debug( + "VM state after enabling maintenance on first host: %s" % + vm.state) + if vm.state in [ + "Stopping", + "Stopped", + "Running", + "Starting", + "Migrating" + ]: + if vm.state == "Running": + break + else: + time.sleep(self.services["sleep"]) + timeout = timeout - 1 + else: + self.fail( + "VM migration from one-host-to-other failed while enabling maintenance" + ) + + # Poll and check the status of VMs + timeout = self.services["timeout"] + while True: + vms = VirtualMachine.list( + self.apiclient, + account=self.account.name, + domainid=self.account.domainid, + listall=True + ) + self.assertEqual( + isinstance(vms, list), + True, + "List VMs should return valid response for deployed VM" + ) + self.assertNotEqual( + len(vms), + 0, + "List VMs should return valid response for deployed VM" + ) + vm = vms[1] + self.debug( + "VM state after enabling maintenance on first host: %s" % + vm.state) + if vm.state in [ + "Stopping", + "Stopped", + "Running", + "Starting", + "Migrating" + ]: + if vm.state == "Running": + break + else: + time.sleep(self.services["sleep"]) + timeout = timeout - 1 + else: + self.fail( + "VM migration from one-host-to-other failed while enabling maintenance" + ) + + for vm in vms: + self.debug( + "VM states after enabling maintenance mode on host: %s 
- %s" % + (first_host, vm.state)) + self.assertEqual( + vm.state, + "Running", + "Deployed VM should be in Running state" + ) + + # Spawn an instance on other host + virtual_machine_3 = VirtualMachine.create( + self.apiclient, + self.services["virtual_machine"], + accountid=self.account.name, + domainid=self.account.domainid, + serviceofferingid=self.service_offering.id + ) + vms = VirtualMachine.list( + self.apiclient, + id=virtual_machine_3.id, + listall=True + ) + self.assertEqual( + isinstance(vms, list), + True, + "List VMs should return valid response for deployed VM" + ) + self.assertNotEqual( + len(vms), + 0, + "List VMs should return valid response for deployed VM" + ) + vm = vms[0] + + self.debug("Deployed VM on host: %s" % vm.hostid) + self.debug("VM 3 state: %s" % vm.state) + self.assertEqual( + vm.state, + "Running", + "Deployed VM should be in Running state" + ) + + # Should be able to SSH VM + try: + self.debug("SSH into VM: %s" % virtual_machine.id) + ssh = virtual_machine.get_ssh_client( + ipaddress=public_ip.ipaddress.ipaddress) + except Exception as e: + self.fail("SSH Access failed for %s: %s" % \ + (virtual_machine.ipaddress, e) + ) + + self.debug("Canceling host maintenance for ID: %s" % second_host) + cmd = cancelHostMaintenance.cancelHostMaintenanceCmd() + cmd.id = second_host + self.apiclient.cancelHostMaintenance(cmd) + self.debug("Maintenance mode canceled for host: %s" % second_host) + + self.debug("Waiting for SSVMs to come up") + wait_for_ssvms( + self.apiclient, + zoneid=self.zone.id, + podid=self.pod.id, + ) + return + + @attr(tags = ["advanced", "advancedns", "multihost"]) + def test_02_host_maintenance_mode_with_activities(self): + """Test host maintenance mode with activities + """ + + + # Validate the following + # 1. Create Vms. Acquire IP. Create port forwarding & load balancing + # rules for Vms. + # 2. 
While activities are ongoing: Create snapshots, recurring + # snapshots, create templates, download volumes, Host 1: put to + # maintenance mode. All Vms should failover to Host 2 in cluster + # Vms should be in running state. All port forwarding rules and + # load balancing Rules should work. + # 3. After failover to Host 2 succeeds, deploy Vms. Deploy Vms on host + # 2 should succeed. All ongoing activities in step 3 should succeed + # 4. Host 1: cancel maintenance mode. + # 5. While activities are ongoing: Create snapshots, recurring + # snapshots, create templates, download volumes, Host 2: put to + # maintenance mode. All Vms should failover to Host 1 in cluster. + # 6. After failover to Host 1 succeeds, deploy VMs. Deploy Vms on + # host 1 should succeed. All ongoing activities in step 6 should + # succeed. + + hosts = Host.list( + self.apiclient, + zoneid=self.zone.id, + resourcestate='Enabled', + type='Routing' + ) + self.assertEqual( + isinstance(hosts, list), + True, + "List hosts should return valid host response" + ) + self.assertGreaterEqual( + len(hosts), + 2, + "There must be two hosts present in a cluster" + ) + self.debug("Checking HA with hosts: %s, %s" % ( + hosts[0].name, + hosts[1].name + )) + self.debug("Deploying VM in account: %s" % self.account.name) + # Spawn an instance in that network + virtual_machine = VirtualMachine.create( + self.apiclient, + self.services["virtual_machine"], + accountid=self.account.name, + domainid=self.account.domainid, + serviceofferingid=self.service_offering.id + ) + vms = VirtualMachine.list( + self.apiclient, + id=virtual_machine.id, + listall=True + ) + self.assertEqual( + isinstance(vms, list), + True, + "List VMs should return valid response for deployed VM" + ) + self.assertNotEqual( + len(vms), + 0, + "List VMs should return valid response for deployed VM" + ) + vm = vms[0] + self.debug("Deployed VM on host: %s" % vm.hostid) + self.assertEqual( + vm.state, + "Running", + "Deployed VM should be in RUnning 
state" + ) + networks = Network.list( + self.apiclient, + account=self.account.name, + domainid=self.account.domainid, + listall=True + ) + self.assertEqual( + isinstance(networks, list), + True, + "List networks should return valid list for the account" + ) + network = networks[0] + + self.debug("Associating public IP for account: %s" % + self.account.name) + public_ip = PublicIPAddress.create( + self.apiclient, + accountid=self.account.name, + zoneid=self.zone.id, + domainid=self.account.domainid, + networkid=network.id + ) + + self.debug("Associated %s with network %s" % ( + public_ip.ipaddress.ipaddress, + network.id + )) + self.debug("Creating PF rule for IP address: %s" % + public_ip.ipaddress.ipaddress) + nat_rule = NATRule.create( + self.apiclient, + virtual_machine, + self.services["natrule"], + ipaddressid=public_ip.ipaddress.id + ) + + self.debug("Creating LB rule on IP with NAT: %s" % + public_ip.ipaddress.ipaddress) + + # Create Load Balancer rule on IP already having NAT rule + lb_rule = LoadBalancerRule.create( + self.apiclient, + self.services["lbrule"], + ipaddressid=public_ip.ipaddress.id, + accountid=self.account.name + ) + self.debug("Created LB rule with ID: %s" % lb_rule.id) + + # Should be able to SSH VM + try: + self.debug("SSH into VM: %s" % virtual_machine.id) + ssh = virtual_machine.get_ssh_client( + ipaddress=public_ip.ipaddress.ipaddress) + except Exception as e: + self.fail("SSH Access failed for %s: %s" % \ + (virtual_machine.ipaddress, e) + ) + # Get the Root disk of VM + volumes = list_volumes( + self.apiclient, + virtualmachineid=virtual_machine.id, + type='ROOT', + listall=True + ) + volume = volumes[0] + self.debug( + "Root volume of VM(%s): %s" % ( + virtual_machine.name, + volume.name + )) + # Create a snapshot from the ROOTDISK + self.debug("Creating snapshot on ROOT volume: %s" % volume.name) + snapshot = Snapshot.create(self.apiclient, volumes[0].id) + self.debug("Snapshot created: ID - %s" % snapshot.id) + + snapshots = 
list_snapshots( + self.apiclient, + id=snapshot.id, + listall=True + ) + self.assertEqual( + isinstance(snapshots, list), + True, + "Check list response returns a valid list" + ) + self.assertNotEqual( + snapshots, + None, + "Check if result exists in list snapshots call" + ) + self.assertEqual( + snapshots[0].id, + snapshot.id, + "Check snapshot id in list resources call" + ) + + # Generate template from the snapshot + self.debug("Generating template from snapshot: %s" % snapshot.name) + template = Template.create_from_snapshot( + self.apiclient, + snapshot, + self.services["templates"] + ) + self.debug("Created template from snapshot: %s" % template.id) + + templates = list_templates( + self.apiclient, + templatefilter=\ + self.services["templates"]["templatefilter"], + id=template.id + ) + + self.assertEqual( + isinstance(templates, list), + True, + "List template call should return the newly created template" + ) + + self.assertEqual( + templates[0].isready, + True, + "The newly created template should be in ready state" + ) + + first_host = vm.hostid + self.debug("Enabling maintenance mode for host %s" % vm.hostid) + cmd = prepareHostForMaintenance.prepareHostForMaintenanceCmd() + cmd.id = first_host + self.apiclient.prepareHostForMaintenance(cmd) + + self.debug("Waiting for SSVMs to come up") + wait_for_ssvms( + self.apiclient, + zoneid=self.zone.id, + podid=self.pod.id, + ) + + timeout = self.services["timeout"] + # Poll and check state of VM while it migrates from one host to another + while True: + vms = VirtualMachine.list( + self.apiclient, + id=virtual_machine.id, + listall=True + ) + self.assertEqual( + isinstance(vms, list), + True, + "List VMs should return valid response for deployed VM" + ) + self.assertNotEqual( + len(vms), + 0, + "List VMs should return valid response for deployed VM" + ) + vm = vms[0] + + self.debug("VM 1 state: %s" % vm.state) + if vm.state in ["Stopping", + "Stopped", + "Running", + "Starting", + "Migrating"]: + if vm.state == 
"Running": + break + else: + time.sleep(self.services["sleep"]) + timeout = timeout - 1 + else: + self.fail( + "VM migration from one-host-to-other failed while enabling maintenance" + ) + second_host = vm.hostid + self.assertEqual( + vm.state, + "Running", + "VM should be in Running state after enabling host maintenance" + ) + # Should be able to SSH VM + try: + self.debug("SSH into VM: %s" % virtual_machine.id) + ssh = virtual_machine.get_ssh_client( + ipaddress=public_ip.ipaddress.ipaddress) + except Exception as e: + self.fail("SSH Access failed for %s: %s" % \ + (virtual_machine.ipaddress, e) + ) + self.debug("Deploying VM in account: %s" % self.account.name) + # Spawn an instance on other host + virtual_machine_2 = VirtualMachine.create( + self.apiclient, + self.services["virtual_machine"], + accountid=self.account.name, + domainid=self.account.domainid, + serviceofferingid=self.service_offering.id + ) + vms = VirtualMachine.list( + self.apiclient, + id=virtual_machine_2.id, + listall=True + ) + self.assertEqual( + isinstance(vms, list), + True, + "List VMs should return valid response for deployed VM" + ) + self.assertNotEqual( + len(vms), + 0, + "List VMs should return valid response for deployed VM" + ) + vm = vms[0] + self.debug("Deployed VM on host: %s" % vm.hostid) + self.debug("VM 2 state: %s" % vm.state) + self.assertEqual( + vm.state, + "Running", + "Deployed VM should be in Running state" + ) + + self.debug("Canceling host maintenance for ID: %s" % first_host) + cmd = cancelHostMaintenance.cancelHostMaintenanceCmd() + cmd.id = first_host + self.apiclient.cancelHostMaintenance(cmd) + self.debug("Maintenance mode canceled for host: %s" % first_host) + + # Get the Root disk of VM + volumes = list_volumes( + self.apiclient, + virtualmachineid=virtual_machine_2.id, + type='ROOT', + listall=True + ) + volume = volumes[0] + self.debug( + "Root volume of VM(%s): %s" % ( + virtual_machine_2.name, + volume.name + )) + # Create a snapshot from the ROOTDISK + 
self.debug("Creating snapshot on ROOT volume: %s" % volume.name) + snapshot = Snapshot.create(self.apiclient, volumes[0].id) + self.debug("Snapshot created: ID - %s" % snapshot.id) + + snapshots = list_snapshots( + self.apiclient, + id=snapshot.id, + listall=True + ) + self.assertEqual( + isinstance(snapshots, list), + True, + "Check list response returns a valid list" + ) + self.assertNotEqual( + snapshots, + None, + "Check if result exists in list snapshots call" + ) + self.assertEqual( + snapshots[0].id, + snapshot.id, + "Check snapshot id in list resources call" + ) + + # Generate template from the snapshot + self.debug("Generating template from snapshot: %s" % snapshot.name) + template = Template.create_from_snapshot( + self.apiclient, + snapshot, + self.services["templates"] + ) + self.debug("Created template from snapshot: %s" % template.id) + + templates = list_templates( + self.apiclient, + templatefilter=\ + self.services["templates"]["templatefilter"], + id=template.id + ) + + self.assertEqual( + isinstance(templates, list), + True, + "List template call should return the newly created template" + ) + + self.assertEqual( + templates[0].isready, + True, + "The newly created template should be in ready state" + ) + + self.debug("Enabling maintenance mode for host %s" % second_host) + cmd = prepareHostForMaintenance.prepareHostForMaintenanceCmd() + cmd.id = second_host + self.apiclient.prepareHostForMaintenance(cmd) + self.debug("Maintenance mode enabled for host: %s" % second_host) + + self.debug("Waiting for SSVMs to come up") + wait_for_ssvms( + self.apiclient, + zoneid=self.zone.id, + podid=self.pod.id, + ) + + # Poll and check the status of VMs + timeout = self.services["timeout"] + while True: + vms = VirtualMachine.list( + self.apiclient, + account=self.account.name, + domainid=self.account.domainid, + listall=True + ) + self.assertEqual( + isinstance(vms, list), + True, + "List VMs should return valid response for deployed VM" + ) + 
self.assertNotEqual( + len(vms), + 0, + "List VMs should return valid response for deployed VM" + ) + vm = vms[0] + self.debug( + "VM state after enabling maintenance on first host: %s" % + vm.state) + if vm.state in ["Stopping", + "Stopped", + "Running", + "Starting", + "Migrating"]: + if vm.state == "Running": + break + else: + time.sleep(self.services["sleep"]) + timeout = timeout - 1 + else: + self.fail( + "VM migration from one-host-to-other failed while enabling maintenance" + ) + + # Poll and check the status of VMs + timeout = self.services["timeout"] + while True: + vms = VirtualMachine.list( + self.apiclient, + account=self.account.name, + domainid=self.account.domainid, + listall=True + ) + self.assertEqual( + isinstance(vms, list), + True, + "List VMs should return valid response for deployed VM" + ) + self.assertNotEqual( + len(vms), + 0, + "List VMs should return valid response for deployed VM" + ) + vm = vms[1] + self.debug( + "VM state after enabling maintenance on first host: %s" % + vm.state) + if vm.state in ["Stopping", + "Stopped", + "Running", + "Starting", + "Migrating"]: + if vm.state == "Running": + break + else: + time.sleep(self.services["sleep"]) + timeout = timeout - 1 + else: + self.fail( + "VM migration from one-host-to-other failed while enabling maintenance" + ) + + for vm in vms: + self.debug( + "VM states after enabling maintenance mode on host: %s - %s" % + (first_host, vm.state)) + self.assertEqual( + vm.state, + "Running", + "Deployed VM should be in Running state" + ) + + # Spawn an instance on other host + virtual_machine_3 = VirtualMachine.create( + self.apiclient, + self.services["virtual_machine"], + accountid=self.account.name, + domainid=self.account.domainid, + serviceofferingid=self.service_offering.id + ) + vms = VirtualMachine.list( + self.apiclient, + id=virtual_machine_3.id, + listall=True + ) + self.assertEqual( + isinstance(vms, list), + True, + "List VMs should return valid response for deployed VM" + ) + 
self.assertNotEqual( + len(vms), + 0, + "List VMs should return valid response for deployed VM" + ) + vm = vms[0] + + self.debug("Deployed VM on host: %s" % vm.hostid) + self.debug("VM 3 state: %s" % vm.state) + self.assertEqual( + vm.state, + "Running", + "Deployed VM should be in Running state" + ) + + self.debug("Canceling host maintenance for ID: %s" % second_host) + cmd = cancelHostMaintenance.cancelHostMaintenanceCmd() + cmd.id = second_host + self.apiclient.cancelHostMaintenance(cmd) + self.debug("Maintenance mode canceled for host: %s" % second_host) + + self.debug("Waiting for SSVMs to come up") + wait_for_ssvms( + self.apiclient, + zoneid=self.zone.id, + podid=self.pod.id, + ) + return http://git-wip-us.apache.org/repos/asf/cloudstack/blob/0587d3a4/test/integration/component/maint/test_host_high_availability.py ---------------------------------------------------------------------- diff --git a/test/integration/component/maint/test_host_high_availability.py b/test/integration/component/maint/test_host_high_availability.py new file mode 100644 index 0000000..5fb047b --- /dev/null +++ b/test/integration/component/maint/test_host_high_availability.py @@ -0,0 +1,810 @@ +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. 
+ +""" P1 tests for dedicated Host high availability +""" +#Import Local Modules +from nose.plugins.attrib import attr +from marvin.cloudstackTestCase import * +from marvin.cloudstackAPI import * +from marvin.integration.lib.utils import * +from marvin.integration.lib.base import * +from marvin.integration.lib.common import * + + +class Services: + """ Dedicated host HA test cases """ + + def __init__(self): + self.services = { + "account": { + "email": "test@test.com", + "firstname": "HA", + "lastname": "HA", + "username": "HA", + # Random characters are appended for unique + # username + "password": "password", + }, + "service_offering_with_ha": { + "name": "Tiny Instance With HA Enabled", + "displaytext": "Tiny Instance", + "cpunumber": 1, + "cpuspeed": 100, # in MHz + "memory": 128, # In MBs + }, + "service_offering_without_ha": { + "name": "Tiny Instance Without HA", + "displaytext": "Tiny Instance", + "cpunumber": 1, + "cpuspeed": 100, # in MHz + "memory": 128, # In MBs + }, + "virtual_machine": { + "displayname": "VM", + "username": "root", + "password": "password", + "ssh_port": 22, + "hypervisor": 'XenServer', + # Hypervisor type should be same as + # hypervisor type of cluster + "privateport": 22, + "publicport": 22, + "protocol": 'TCP', + }, + "ostype": 'CentOS 5.3 (64-bit)', + "timeout": 100, + } + + +class TestHostHighAvailability(cloudstackTestCase): + """ Dedicated host HA test cases """ + + @classmethod + def setUpClass(cls): + cls.api_client = super( + TestHostHighAvailability, + cls + ).getClsTestClient().getApiClient() + cls.services = Services().services + # Get Zone, Domain and templates + cls.domain = get_domain( + cls.api_client, + cls.services + ) + cls.zone = get_zone( + cls.api_client, + cls.services + ) + + cls.template = get_template( + cls.api_client, + cls.zone.id, + cls.services["ostype"] + ) + cls.services["virtual_machine"]["zoneid"] = cls.zone.id + cls.services["virtual_machine"]["template"] = cls.template.id + + 
cls.service_offering_with_ha = ServiceOffering.create( + cls.api_client, + cls.services["service_offering_with_ha"], + offerha=True + ) + + cls.service_offering_without_ha = ServiceOffering.create( + cls.api_client, + cls.services["service_offering_without_ha"], + offerha=False + ) + + cls._cleanup = [ + cls.service_offering_with_ha, + cls.service_offering_without_ha, + ] + return + + @classmethod + def tearDownClass(cls): + try: + #Cleanup resources used + cleanup_resources(cls.api_client, cls._cleanup) + except Exception as e: + raise Exception("Warning: Exception during cleanup : %s" % e) + return + + def setUp(self): + self.apiclient = self.testClient.getApiClient() + self.dbclient = self.testClient.getDbConnection() + self.account = Account.create( + self.apiclient, + self.services["account"], + admin=True, + domainid=self.domain.id + ) + self.cleanup = [self.account] + return + + def tearDown(self): + try: + #Clean up, terminate the created accounts, domains etc + cleanup_resources(self.apiclient, self.cleanup) + except Exception as e: + raise Exception("Warning: Exception during cleanup : %s" % e) + return + + @attr(configuration="ha.tag") + @attr(tags=["advanced", "advancedns", "sg", "basic", "eip", "simulator"]) + def test_01_vm_deployment_with_compute_offering_with_ha_enabled(self): + """ Test VM deployments (Create HA enabled Compute Service Offering and VM) """ + + # Steps, + #1. Create a Compute service offering with the 'Offer HA' option selected. + #2. Create a Guest VM with the compute service offering created above. + # Validations, + #1. Ensure that the offering is created and that in the UI the 'Offer HA' field is enabled (Yes) + #The listServiceOffering API should list 'offerha' as true. + #2. Select the newly created VM and ensure that the Compute offering field value lists the compute service offering that was selected. + # Also, check that the HA Enabled field is enabled 'Yes'. 
+ + #list and validate the above created service offering with HA enabled + list_service_response = list_service_offering( + self.apiclient, + id=self.service_offering_with_ha.id + ) + self.assertEqual( + isinstance(list_service_response, list), + True, + "listServiceOfferings returned invalid object in response." + ) + self.assertNotEqual( + len(list_service_response), + 0, + "listServiceOfferings returned empty list." + ) + self.assertEqual( + list_service_response[0].offerha, + True, + "The service offering is not HA enabled" + ) + + #create virtual machine with the service offering with HA enabled + virtual_machine = VirtualMachine.create( + self.apiclient, + self.services["virtual_machine"], + accountid=self.account.name, + domainid=self.account.domainid, + serviceofferingid=self.service_offering_with_ha.id + ) + vms = VirtualMachine.list( + self.apiclient, + id=virtual_machine.id, + listall=True + ) + self.assertEqual( + isinstance(vms, list), + True, + "listVirtualMachines returned invalid object in response." + ) + self.assertNotEqual( + len(vms), + 0, + "listVirtualMachines returned empty list." + ) + self.debug("Deployed VM on host: %s" % vms[0].hostid) + self.assertEqual( + vms[0].haenable, + True, + "VM not created with HA enable tag" + ) + + @attr(configuration="ha.tag") + @attr(tags=["advanced", "advancedns", "sg", "basic", "eip", "simulator", "multihost"]) + def test_02_no_vm_creation_on_host_with_haenabled(self): + """ Verify you cannot create new VMs on hosts with an ha.tag """ + + # Steps, + #1. Fresh install CS (Bonita) that supports this feature + #2. Create Basic zone, pod, cluster, add 3 hosts to cluster (host1, host2, host3), secondary & primary Storage + #3. When adding host3, assign the HA host tag. + #4. You should already have a compute service offering with HA created above. If not, create one for HA. + #5.
Create VMs with the service offering with and without the HA tag + # Validations, + #Check to make sure the newly created VM is not on any HA enabled hosts + #The VM should be created only on host1 or host2 and never host3 (HA enabled) + + #create and verify virtual machine with HA enabled service offering + virtual_machine_with_ha = VirtualMachine.create( + self.apiclient, + self.services["virtual_machine"], + accountid=self.account.name, + domainid=self.account.domainid, + serviceofferingid=self.service_offering_with_ha.id + ) + + vms = VirtualMachine.list( + self.apiclient, + id=virtual_machine_with_ha.id, + listall=True + ) + + self.assertEqual( + isinstance(vms, list), + True, + "listVirtualMachines returned invalid object in response." + ) + + self.assertNotEqual( + len(vms), + 0, + "listVirtualMachines returned empty list." + ) + + vm = vms[0] + + self.debug("Deployed VM on host: %s" % vm.hostid) + + #validate that the host the VM was created on is not HA enabled + list_hosts_response = list_hosts( + self.apiclient, + id=vm.hostid + ) + self.assertEqual( + isinstance(list_hosts_response, list), + True, + "listHosts returned invalid object in response." + ) + + self.assertNotEqual( + len(list_hosts_response), + 0, + "listHosts returned empty list in response." + ) + + self.assertEqual( + list_hosts_response[0].hahost, + False, + "VM created on HA enabled host." + ) + + #create and verify virtual machine with HA disabled service offering + virtual_machine_without_ha = VirtualMachine.create( + self.apiclient, + self.services["virtual_machine"], + accountid=self.account.name, + domainid=self.account.domainid, + serviceofferingid=self.service_offering_without_ha.id + ) + + vms = VirtualMachine.list( + self.apiclient, + id=virtual_machine_without_ha.id, + listall=True + ) + + self.assertEqual( + isinstance(vms, list), + True, + "listVirtualMachines returned invalid object in response." 
+ ) + + self.assertNotEqual( + len(vms), + 0, + "listVirtualMachines returned empty list." + ) + + vm = vms[0] + + self.debug("Deployed VM on host: %s" % vm.hostid) + + #verify that the host the VM was created on is not HA enabled + list_hosts_response = list_hosts( + self.apiclient, + id=vm.hostid + ) + self.assertEqual( + isinstance(list_hosts_response, list), + True, + "listHosts returned invalid object in response." + ) + + self.assertNotEqual( + len(list_hosts_response), + 0, + "listHosts returned empty list." + ) + + host = list_hosts_response[0] + + self.assertEqual( + host.hahost, + False, + "VM created on HA enabled host." + ) + + @attr(configuration="ha.tag") + @attr(tags=["advanced", "advancedns", "sg", "basic", "eip", "simulator", "multihost"]) + def test_03_cant_migrate_vm_to_host_with_ha_positive(self): + """ Verify you cannot migrate VMs to hosts with an ha.tag (positive) """ + + # Steps, + #1. Create a Compute service offering with the 'Offer HA' option selected. + #2. Create a Guest VM with the compute service offering created above. + #3. Select the VM and migrate VM to another host. Choose a 'Suitable' host (i.e. host2) + # Validations + #The option from the 'Migrate instance to another host' dialog box should list host3 as 'Not Suitable' for migration. + #Confirm that the VM is migrated to the 'Suitable' host you selected (i.e.
host2) + + #create and verify the virtual machine with HA enabled service offering + virtual_machine_with_ha = VirtualMachine.create( + self.apiclient, + self.services["virtual_machine"], + accountid=self.account.name, + domainid=self.account.domainid, + serviceofferingid=self.service_offering_with_ha.id + ) + + vms = VirtualMachine.list( + self.apiclient, + id=virtual_machine_with_ha.id, + listall=True, + ) + + self.assertEqual( + isinstance(vms, list), + True, + "List VMs should return valid response for deployed VM" + ) + + self.assertNotEqual( + len(vms), + 0, + "List VMs should return valid response for deployed VM" + ) + + vm = vms[0] + + self.debug("Deployed VM on host: %s" % vm.hostid) + + #Find out a Suitable host for VM migration + list_hosts_response = list_hosts( + self.apiclient, + ) + self.assertEqual( + isinstance(list_hosts_response, list), + True, + "listHosts returned an invalid list" + ) + + self.assertNotEqual( + len(list_hosts_response), + 0, + "listHosts returned an empty response." + ) + suitableHost = None + for host in list_hosts_response: + if host.suitableformigration and host.id != vm.hostid: + suitableHost = host + break + + self.assertTrue(suitableHost is not None, "suitablehost should not be None") + + #Migration of the VM to a suitable host + self.debug("Migrating VM-ID: %s to Host: %s" % (vm.id, suitableHost.id)) + + cmd = migrateVirtualMachine.migrateVirtualMachineCmd() + cmd.hostid = suitableHost.id + cmd.virtualmachineid = vm.id + self.apiclient.migrateVirtualMachine(cmd) + + #Verify that the VM migrated to a targeted Suitable host + list_vm_response = list_virtual_machines( + self.apiclient, + id=vm.id + ) + self.assertEqual( + isinstance(list_vm_response, list), + True, + "listVirtualMachines returned an invalid list." + ) + + self.assertNotEqual( + list_vm_response, + None, + "listVirtualMachines returned an empty response." 
+ ) + + vm_response = list_vm_response[0] + + self.assertEqual( + vm_response.id, + vm.id, + "The virtual machine id does not match the id returned by listVirtualMachines." + ) + + self.assertEqual( + vm_response.hostid, + suitableHost.id, + "The VM was not migrated to the targeted suitable host." + ) + + @attr(configuration="ha.tag") + @attr(tags=["advanced", "advancedns", "sg", "basic", "eip", "simulator", "multihost"]) + def test_04_cant_migrate_vm_to_host_with_ha_negative(self): + """ Verify you cannot migrate VMs to hosts with an ha.tag (negative) """ + + # Steps, + #1. Create a Compute service offering with the 'Offer HA' option selected. + #2. Create a Guest VM with the compute service offering created above. + #3. Select the VM and migrate VM to another host. Choose a 'Not Suitable' host. + # Validations, + #The option from the 'Migrate instance to another host' dialog box should list host3 as 'Not Suitable' for migration. + #By design, the Guest VM can STILL be migrated to host3 if the admin chooses to do so. + + #create and verify virtual machine with HA enabled service offering + virtual_machine_with_ha = VirtualMachine.create( + self.apiclient, + self.services["virtual_machine"], + accountid=self.account.name, + domainid=self.account.domainid, + serviceofferingid=self.service_offering_with_ha.id + ) + + vms = VirtualMachine.list( + self.apiclient, + id=virtual_machine_with_ha.id, + listall=True + ) + + self.assertEqual( + isinstance(vms, list), + True, + "The listVirtualMachines returned invalid object in response." + ) + + self.assertNotEqual( + len(vms), + 0, + "The listVirtualMachines returned empty response." + ) + + vm = vms[0] + + self.debug("Deployed VM on host: %s" % vm.hostid) + + #Find out a Non-Suitable host for VM migration + list_hosts_response = list_hosts( + self.apiclient, + ) + self.assertEqual( + isinstance(list_hosts_response, list), + True, + "listHosts returned invalid object in response." 
+ ) + + self.assertNotEqual( + len(list_hosts_response), + 0, + "listHosts returned empty response." + ) + + notSuitableHost = None + for host in list_hosts_response: + if not host.suitableformigration and host.id != vm.hostid: + notSuitableHost = host + break + + self.assertTrue(notSuitableHost is not None, "notsuitablehost should not be None") + + #Migrate VM to Non-Suitable host + self.debug("Migrating VM-ID: %s to Host: %s" % (vm.id, notSuitableHost.id)) + + cmd = migrateVirtualMachine.migrateVirtualMachineCmd() + cmd.hostid = notSuitableHost.id + cmd.virtualmachineid = vm.id + self.apiclient.migrateVirtualMachine(cmd) + + #Verify that the virtual machine got migrated to targeted Non-Suitable host + list_vm_response = list_virtual_machines( + self.apiclient, + id=vm.id + ) + self.assertEqual( + isinstance(list_vm_response, list), + True, + "listVirtualMachines returned invalid object in response." + ) + + self.assertNotEqual( + len(list_vm_response), + 0, + "listVirtualMachines returned empty response." + ) + + self.assertEqual( + list_vm_response[0].id, + vm.id, + "The virtual machine id does not match the id returned by listVirtualMachines." + ) + + self.assertEqual( + list_vm_response[0].hostid, + notSuitableHost.id, + "The destination host id of the migrated VM does not match." + ) + + @attr(configuration="ha.tag") + @attr(speed="slow") + @attr(tags=["advanced", "advancedns", "sg", "basic", "eip", "simulator", "multihost"]) + def test_05_no_vm_with_ha_gets_migrated_to_ha_host_in_live_migration(self): + """ Verify that none of the VMs with HA enabled migrate to an ha tagged host during live migration """ + + # Steps, + #1. Fresh install CS that supports this feature + #2. Create Basic zone, pod, cluster, add 3 hosts to cluster (host1, host2, host3), secondary & primary Storage + #3. When adding host3, assign the HA host tag. + #4. Create VMs with and without the Compute Service Offering with the HA tag. + #5.
Note the VMs on host1 and whether any of the VMs have their 'HA enabled' flags enabled. + #6. Put host1 into maintenance mode. + # Validations, + #1. Make sure the VMs are created on either host1 or host2 and not on host3 + #2. Putting host1 into maintenance mode should trigger a live migration. Make sure the VMs are not migrated to HA enabled host3. + + # create and verify virtual machine with HA enabled service offering + virtual_machine_with_ha = VirtualMachine.create( + self.apiclient, + self.services["virtual_machine"], + accountid=self.account.name, + domainid=self.account.domainid, + serviceofferingid=self.service_offering_with_ha.id + ) + + vms = VirtualMachine.list( + self.apiclient, + id=virtual_machine_with_ha.id, + listall=True + ) + + self.assertEqual( + isinstance(vms, list), + True, + "List VMs should return valid response for deployed VM" + ) + + self.assertNotEqual( + len(vms), + 0, + "List VMs should return valid response for deployed VM" + ) + + vm_with_ha_enabled = vms[0] + + #Verify the virtual machine got created on non HA host + list_hosts_response = list_hosts( + self.apiclient, + id=vm_with_ha_enabled.hostid + ) + self.assertEqual( + isinstance(list_hosts_response, list), + True, + "Check list response returns a valid list" + ) + + self.assertNotEqual( + len(list_hosts_response), + 0, + "Check Host is available" + ) + + self.assertEqual( + list_hosts_response[0].hahost, + False, + "The HA enabled VM should not be created on an HA tagged host" + ) + + #put the Host in maintenance mode + self.debug("Enabling maintenance mode for host %s" % vm_with_ha_enabled.hostid) + cmd = prepareHostForMaintenance.prepareHostForMaintenanceCmd() + cmd.id = vm_with_ha_enabled.hostid + self.apiclient.prepareHostForMaintenance(cmd) + + timeout = self.services["timeout"] + + #verify the VM live migration happened to another running host + self.debug("Waiting for VM to come up") + wait_for_vm( + self.apiclient, + 
virtualmachineid=vm_with_ha_enabled.id, + interval=timeout + ) + + vms = VirtualMachine.list( + self.apiclient, + id=vm_with_ha_enabled.id, + listall=True, + ) + + self.assertEqual( + isinstance(vms, list), + True, + "List VMs should return valid response for deployed VM" + ) + + self.assertNotEqual( + len(vms), + 0, + "List VMs should return valid response for deployed VM" + ) + + vm_with_ha_enabled1 = vms[0] + + list_hosts_response = list_hosts( + self.apiclient, + id=vm_with_ha_enabled1.hostid + ) + self.assertEqual( + isinstance(list_hosts_response, list), + True, + "Check list response returns a valid list" + ) + + self.assertNotEqual( + len(list_hosts_response), + 0, + "Check Host is available" + ) + + self.assertEqual( + list_hosts_response[0].hahost, + False, + "The HA enabled VM should not be migrated to an HA tagged host" + ) + + self.debug("Disabling the maintenance mode for host %s" % vm_with_ha_enabled.hostid) + cmd = cancelHostMaintenance.cancelHostMaintenanceCmd() + cmd.id = vm_with_ha_enabled.hostid + self.apiclient.cancelHostMaintenance(cmd) + + @attr(configuration="ha.tag") + @attr(speed="slow") + @attr(tags=["advanced", "advancedns", "sg", "basic", "eip", "simulator", "multihost"]) + def test_06_no_vm_without_ha_gets_migrated_to_ha_host_in_live_migration(self): + """ Verify that none of the VMs without HA enabled migrate to an ha tagged host during live migration """ + + # Steps, + #1. Fresh install CS that supports this feature + #2. Create Basic zone, pod, cluster, add 3 hosts to cluster (host1, host2, host3), secondary & primary Storage + #3. When adding host3, assign the HA host tag. + #4. Create VMs with and without the Compute Service Offering with the HA tag. + #5. Note the VMs on host1 and whether any of the VMs have their 'HA enabled' flags enabled. + #6. Put host1 into maintenance mode. + # Validations, + #1. Make sure the VMs are created on either host1 or host2 and not on host3 + #2.
Putting host1 into maintenance mode should trigger a live migration. Make sure the VMs are not migrated to HA enabled host3. + + # create and verify virtual machine with HA disabled service offering + virtual_machine_without_ha = VirtualMachine.create( + self.apiclient, + self.services["virtual_machine"], + accountid=self.account.name, + domainid=self.account.domainid, + serviceofferingid=self.service_offering_without_ha.id + ) + + vms = VirtualMachine.list( + self.apiclient, + id=virtual_machine_without_ha.id, + listall=True + ) + + self.assertEqual( + isinstance(vms, list), + True, + "List VMs should return valid response for deployed VM" + ) + + self.assertNotEqual( + len(vms), + 0, + "List VMs should return valid response for deployed VM" + ) + + vm_with_ha_disabled = vms[0] + + #Verify the virtual machine got created on non HA host + list_hosts_response = list_hosts( + self.apiclient, + id=vm_with_ha_disabled.hostid + ) + self.assertEqual( + isinstance(list_hosts_response, list), + True, + "Check list response returns a valid list" + ) + + self.assertNotEqual( + len(list_hosts_response), + 0, + "Check Host is available" + ) + + self.assertEqual( + list_hosts_response[0].hahost, + False, + "The VM is not HA enabled, so it should be created on a host that is also not HA tagged" + ) + + #put the Host in maintenance mode + self.debug("Enabling maintenance mode for host %s" % vm_with_ha_disabled.hostid) + cmd = prepareHostForMaintenance.prepareHostForMaintenanceCmd() + cmd.id = vm_with_ha_disabled.hostid + self.apiclient.prepareHostForMaintenance(cmd) + + timeout = self.services["timeout"] + + #verify the VM live migration happened to another running host + self.debug("Waiting for VM to come up") + wait_for_vm( + self.apiclient, + virtualmachineid=vm_with_ha_disabled.id, + interval=timeout + ) + + vms = VirtualMachine.list( + self.apiclient, + id=vm_with_ha_disabled.id, + listall=True + ) + + self.assertEqual( + isinstance(vms, list), + True, + "List 
VMs should return valid response for deployed VM" + ) + + self.assertNotEqual( + len(vms), + 0, + "List VMs should return valid response for deployed VM" + ) + + list_hosts_response = list_hosts( + self.apiclient, + id=vms[0].hostid + ) + self.assertEqual( + isinstance(list_hosts_response, list), + True, + "Check list response returns a valid list" + ) + + self.assertNotEqual( + len(list_hosts_response), + 0, + "Check Host is available" + ) + + self.assertEqual( + list_hosts_response[0].hahost, + False, + "The VM is not HA enabled, so it should not be migrated to an HA tagged host" + ) + + self.debug("Disabling the maintenance mode for host %s" % vm_with_ha_disabled.hostid) + cmd = cancelHostMaintenance.cancelHostMaintenanceCmd() + cmd.id = vm_with_ha_disabled.hostid + self.apiclient.cancelHostMaintenance(cmd)
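The maintenance-mode tests above all share one polling shape: list the VM, keep waiting while it is in a transient state, break once it is Running, and fail when the timeout budget is spent. The sketch below distills that pattern into a standalone helper; `list_vm_state` is a hypothetical stand-in for the Marvin `VirtualMachine.list()` call, and the state names match those used in the tests.

```python
# Minimal sketch of the polling pattern used in these tests (illustrative,
# not part of the Marvin library). `list_vm_state` is a hypothetical
# callable returning the VM's current state string.
import time

TRANSIENT_STATES = {"Stopping", "Stopped", "Starting", "Migrating"}

def wait_for_running(list_vm_state, timeout=10, interval=0):
    """Poll list_vm_state() until it reports 'Running' or timeout expires."""
    while timeout > 0:
        state = list_vm_state()
        if state == "Running":
            return True                     # migration completed
        if state not in TRANSIENT_STATES:
            raise AssertionError("Unexpected VM state: %s" % state)
        time.sleep(interval)                # wait before the next poll
        timeout -= 1
    raise AssertionError("VM did not reach Running state before timeout")

# Illustrative: a VM that is mid-migration and then comes up Running.
states = iter(["Migrating", "Migrating", "Running"])
assert wait_for_running(lambda: next(states))
```

Counting down `timeout` and raising on expiry avoids the open-ended `while True:` loops in the tests, which can spin forever if a VM never leaves a transient state.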