Subject: Re: Urgent: VMs not migrated after putting Xenserver host in maintenance mode
From: giljae o
To: users@cloudstack.apache.org
Date: Wed, 15 Jul 2015 05:38:37 +0900

What is this?
"f71666cc-2510-43f7-8748-6c693a4a0716")'] On Tuesday, July 14, 2015, Sonali Jadhav wrote: > Aha, this could be problem, I found this on pool master SMlog > > > Jul 14 13:53:32 SolXS01 SM: [12043] missing config for vdi: > f71666cc-2510-43f7-8748-6c693a4a0716 > Jul 14 13:53:32 SolXS01 SM: [12043] new VDIs on disk: > set(['f71666cc-2510-43f7-8748-6c693a4a0716']) > Jul 14 13:53:32 SolXS01 SM: [12043] Introducing VDI with > location=3Df71666cc-2510-43f7-8748-6c693a4a0716 > Jul 14 13:53:32 SolXS01 SM: [12049] lock: opening lock file > /var/lock/sm/e7d676cf-79ab-484a-8722-73d509b4c222/sr > Jul 14 13:53:32 SolXS01 SM: [12043] lock: released > /var/lock/sm/e7d676cf-79ab-484a-8722-73d509b4c222/sr > Jul 14 13:53:32 SolXS01 SM: [12043] ***** sr_scan: EXCEPTION > XenAPI.Failure, ['INTERNAL_ERROR', > 'Db_exn.Uniqueness_constraint_violation("VDI", "uuid", > "f71666cc-2510-43f7-8748-6c693a4a0716")'] > Jul 14 13:53:32 SolXS01 SM: [12043] File > "/opt/xensource/sm/SRCommand.py", line 110, in run > Jul 14 13:53:32 SolXS01 SM: [12043] return self._run_locked(sr) > Jul 14 13:53:32 SolXS01 SM: [12043] File > "/opt/xensource/sm/SRCommand.py", line 159, in _run_locked > Jul 14 13:53:32 SolXS01 SM: [12043] rv =3D self._run(sr, target) > Jul 14 13:53:32 SolXS01 SM: [12043] File > "/opt/xensource/sm/SRCommand.py", line 331, in _run > Jul 14 13:53:32 SolXS01 SM: [12043] return > sr.scan(self.params['sr_uuid']) > Jul 14 13:53:32 SolXS01 SM: [12043] File "/opt/xensource/sm/FileSR", > line 206, in scan > Jul 14 13:53:32 SolXS01 SM: [12043] return super(FileSR, > self).scan(sr_uuid) > Jul 14 13:53:32 SolXS01 SM: [12043] File "/opt/xensource/sm/SR.py", lin= e > 317, in scan > Jul 14 13:53:32 SolXS01 SM: [12043] scanrecord.synchronise() > Jul 14 13:53:32 SolXS01 SM: [12043] File "/opt/xensource/sm/SR.py", lin= e > 580, in synchronise > Jul 14 13:53:32 SolXS01 SM: [12043] self.synchronise_new() > Jul 14 13:53:32 SolXS01 SM: [12043] File "/opt/xensource/sm/SR.py", lin= e > 553, in synchronise_new > Jul 14 13:53:32 SolXS01 SM: [12043] vdi._db_introduce() > Jul 14 13:53:32 SolXS01 SM: [12043] File "/opt/xensource/sm/VDI.py", > line 302, in _db_introduce > Jul 14 13:53:32 SolXS01 SM: [12043] vdi =3D > self.sr.session.xenapi.VDI.db_introduce(uuid, self.label, self.descriptio= n, > self.sr.sr_ref, ty, self.shareable, self.read_only, {}, self.location, {}= , > sm_config, self.managed, str(self.size), str(self.utilisation), > metadata_of_pool, is_a_snapshot, xmlrpclib.DateTime(snapshot_time), > snapshot_of) > Jul 14 13:53:32 SolXS01 SM: [12043] File > "/usr/lib/python2.4/site-packages/XenAPI.py", line 245, in __call__ > Jul 14 13:53:32 SolXS01 SM: [12043] return self.__send(self.__name, > args) > Jul 14 13:53:32 SolXS01 SM: [12043] File > "/usr/lib/python2.4/site-packages/XenAPI.py", line 149, in xenapi_request > Jul 14 13:53:32 SolXS01 SM: [12043] result =3D > _parse_result(getattr(self, methodname)(*full_params)) > Jul 14 13:53:32 SolXS01 SM: [12043] File > "/usr/lib/python2.4/site-packages/XenAPI.py", line 219, in _parse_result > Jul 14 13:53:32 SolXS01 SM: [12043] raise > Failure(result['ErrorDescription']) > Jul 14 13:53:32 SolXS01 SM: [12043] > Jul 14 13:53:32 SolXS01 SMGC: [12049] Found 0 cache files > Jul 14 13:53:32 SolXS01 SM: [12049] lock: tried lock > /var/lock/sm/e7d676cf-79ab-484a-8722-73d509b4c222/sr, acquired: True > (exists: True) > Jul 14 13:53:32 SolXS01 SM: [12049] ['/usr/bin/vhd-util', 'scan', '-f', > '-c', '-m', '/var/run/sr-mount/e7d676cf-79ab-484a-8722-73d509b4c222/*.vhd= '] > Jul 14 13:53:32 SolXS01 SM: 
>
> -----Original Message-----
> From: giljae o [mailto:ogiljae@gmail.com]
> Sent: Tuesday, July 14, 2015 4:40 PM
> To: users@cloudstack.apache.org
> Subject: Re: Urgent: VMs not migrated after putting Xenserver host in maintenance mode
>
> Hi,
>
> The SM log lives on the XenServer itself, so from it you can tell which mount point is involved.
>
> It is at /var/log/SM.log.
>
> James
>
> On Tuesday, July 14, 2015, Sonali Jadhav wrote:
>
> > Any clue on this?
> >
> > I can understand that it's a problem while creating the new VR.
> >
> > Catch Exception: class com.xensource.xenapi.Types$UuidInvalid due to The uuid you supplied was invalid.
> > The uuid you supplied was invalid.
> >
> > I don't understand which uuid exactly is invalid; I need help tracing the issue.
> >
> > /Sonali
> >
> > -----Original Message-----
> > From: Sonali Jadhav [mailto:sonali@servercentralen.se]
> > Sent: Monday, July 13, 2015 1:42 PM
> > To: users@cloudstack.apache.org
> > Subject: RE: Urgent: VMs not migrated after putting Xenserver host in maintenance mode
> >
> > Hi,
> >
> > That helped. I migrated the VMs, and they were synced correctly in ACS. Now all my XenServers in the pool are 6.5.
> >
> > But I have a new problem. I am trying to create a new VM with an isolated network, and it's giving me the following error; it looks like a problem while creating the VR. I also observed that one host has 3 SRs which are disconnected, and I don't know why. It has been like that since I rebooted the server with the updated XS 6.5.
> >
> > 2015-07-13 08:36:47,975 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-3:ctx-58f77d9c job-4353/job-4357 ctx-83fe75fb) Creating monitoring services on VM[DomainRouter|r-97-VM] start...
> > 2015-07-13 08:36:47,982 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-3:ctx-58f77d9c job-4353/job-4357 ctx-83fe75fb) Reapplying dhcp entries as a part of domR VM[DomainRouter|r-97-VM] start...
> > 2015-07-13 08:36:47,984 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-3:ctx-58f77d9c job-4353/job-4357 ctx-83fe75fb) Reapplying vm data (userData and metaData) entries as a part of domR VM[DomainRouter|r-97-VM] start...
> > 2015-07-13 08:36:48,035 DEBUG [c.c.a.t.Request] (Work-Job-Executor-3:ctx-58f77d9c job-4353/job-4357 ctx-83fe75fb) Seq 4-5299892336484951126: Sending { Cmd , MgmtId: 59778234354585, via: 4(SeSolXS02), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.StartCommand":{"vm":{"id":97,"name":"r-97-VM","bootloader":"PyGrub","type":"DomainRouter","cpus":1,"minSpeed":500,"maxSpeed":500,"minRam":268435456,"maxRam":268435456,"arch":"x86_64","os":"Debian GNU/Linux 7(64-bit)","platformEmulator":"Debian Wheezy 7.0 (64-bit)","bootArgs":" template=domP name=r-97-VM eth2ip=100.65.36.119 eth2mask=255.255.255.192 gateway=100.65.36.65 eth0ip=10.1.1.1 eth0mask=255.255.255.0 domain=cs17cloud.internal cidrsize=24 dhcprange=10.1.1.1 eth1ip=169.254.0.120 eth1mask=255.255.0.0 type=router disable_rp_filter=true dns1=8.8.8.8 dns2=8.8.4.4","enableHA":true,"limitCpuUse":false,"enableDynamicallyScaleVm":false,"vncPassword":"0R3TO+O9g+kGxMdtFbt0rw==","params":{},"uuid":"80b6edf0-7301-4985-b2a6-fae64636c5e8","disks":[{"data":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"2fb465e2-f51f-4b46-8ec2-153fd843c6cf","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"876d490c-a1d4-3bfe-88b7-1bdb2479541b","id":1,"poolType":"NetworkFilesystem","host":"172.16.5.194","path":"/tank/primstore","port":2049,"url":"NetworkFilesystem://172.16.5.194/tank/primstore/?ROLE=Primary&STOREUUID=876d490c-a1d4-3bfe-88b7-1bdb2479541b"}},"name":"ROOT-97","size":2684354560,"path":"b9b23a67-9bfe-485a-906c-dfe8282fe868","volumeId":133,"vmName":"r-97-VM","accountId":23,"format":"VHD","provisioningType":"THIN","id":133,"deviceId":0,"hypervisorType":"XenServer"}},"diskSeq":0,"path":"b9b23a67-9bfe-485a-906c-dfe8282fe868","type":"ROOT","_details":{"managed":"false","storagePort":"2049","storageHost":"172.16.5.194","volumeSize":"2684354560"}}],"nics":[{"deviceId":2,"networkRateMbps":200,"defaultNic":true,"pxeDisable":true,"nicUuid":"f699a9b6-cc02-4e7e-805b-0005d69eadac","uuid":"1b5905ad-12b0-4594-be02-26aa753a640d","ip":"100.65.36.119","netmask":"255.255.255.192","gateway":"100.65.36.65","mac":"06:14:88:00:01:14","dns1":"8.8.8.8","dns2":"8.8.4.4","broadcastType":"Vlan","type":"Public","broadcastUri":"vlan://501","isolationUri":"vlan://501","isSecurityGroupEnabled":false,"name":"public"},{"deviceId":0,"networkRateMbps":200,"defaultNic":false,"pxeDisable":true,"nicUuid":"52f8e291-c671-4bfe-b37b-9a0af82f09fd","uuid":"2a9f3c45-cdcf-4f39-a97c-ac29f1c21888","ip":"10.1.1.1","netmask":"255.255.255.0","mac":"02:00:73:a2:00:02","dns1":"8.8.8.8","dns2":"8.8.4.4","broadcastType":"Vlan","type":"Guest","broadcastUri":"vlan://714","isolationUri":"vlan://714","isSecurityGroupEnabled":false,"name":"guest"},{"deviceId":1,"networkRateMbps":-1,"defaultNic":false,"pxeDisable":true,"nicUuid":"b1b8c3d6-e1e6-4575-8371-5c9b3e3a0c66","uuid":"527ed501-3b46-4d98-8e0a-d8d299870f32","ip":"169.254.0.120","netmask":"255.255.0.0","gateway":"169.254.0.1","mac":"0e:00:a9:fe:00:78","broadcastType":"LinkLocal","type":"Control","isSecurityGroupEnabled":false}]},"hostIp":"172.16.5.198","executeInSequence":false,"wait":0}},{"com.cloud.agent.api.check.CheckSshCommand":{"ip":"169.254.0.120","port":3922,"interval":6,"retries":100,"name":"r-97-VM","wait":0}},{"com.cloud.agent.api.GetDomRVersionCmd":{"accessDetails":{"router.name":"r-97-VM","router.ip":"169.254.0.120"},"wait":0}},{},{"com.cloud.agent.api.routing.AggregationControlCommand":{"action":"Start","accessDetails":{"router.guest.ip":"10.1.1.1","router.name":"r-97-VM","router.ip":"169.254.0.120"},"wait":0}},{"com.cloud.agent.api.routing.IpAssocCommand":{"ipAddresses":[{"accountId":23,"publicIp":"100.65.36.119","sourceNat":true,"add":true,"oneToOneNat":false,"firstIP":true,"broadcastUri":"vlan://501","vlanGateway":"100.65.36.65","vlanNetmask":"255.255.255.192","vifMacAddress":"06:af:70:00:01:14","networkRate":200,"trafficType":"Public","networkName":"public","newNic":false}],"accessDetails":{"zone.network.type":"Advanced","router.name":"r-97-VM","router.ip":"169.254.0.120","router.guest.ip":"10.1.1.1"},"wait":0}},{"com.cloud.agent.api.routing.SetFirewallRulesCommand":{"rules":[{"id":0,"srcIp":"","protocol":"all","revoked":false,"alreadyAdded":false,"sourceCidrList":[],"purpose":"Firewall","trafficType":"Egress","defaultEgressPolicy":false}],"accessDetails":{"router.guest.ip":"10.1.1.1","firewall.egress.default":"System","zone.network.type":"Advanced","router.ip":"169.254.0.120","router.name":"r-97-VM"},"wait":0}},{"com.cloud.agent.api.routing.SetMonitorServiceCommand":{"services":[{"id":0,"service":"dhcp","processname":"dnsmasq","serviceName":"dnsmasq","servicePath":"/var/run/dnsmasq/dnsmasq.pid","pidFile":"/var/run/dnsmasq/dnsmasq.pid","isDefault":false},{"id":0,"service":"loadbalancing","processname":"haproxy","serviceName":"haproxy","servicePath":"/var/run/haproxy.pid","pidFile":"/var/run/haproxy.pid","isDefault":false},{"id":0,"service":"ssh","processname":"sshd","serviceName":"ssh","servicePath":"/var/run/sshd.pid","pidFile":"/var/run/sshd.pid","isDefault":true},{"id":0,"service":"webserver","processname":"apache2","serviceName":"apache2","servicePath":"/var/run/apache2.pid","pidFile":"/var/run/apache2.pid","isDefault":true}],"accessDetails":{"router.name":"r-97-VM","router.ip":"169.254.0.120","router.guest.ip":"10.1.1.1"},"wait":0}},{"com.cloud.agent.api.routing.AggregationControlCommand":{"action":"Finish","accessDetails":{"router.guest.ip":"10.1.1.1","router.name":"r-97-VM","router.ip":"169.254.0.120"},"wait":0}}] }
> > 2015-07-13 08:36:48,036 DEBUG [c.c.a.t.Request] (Work-Job-Executor-3:ctx-58f77d9c job-4353/job-4357 ctx-83fe75fb) Seq 4-5299892336484951126: Executing: { Cmd , MgmtId: 59778234354585, via: 4(SeSolXS02), Ver: v1, Flags: 100011, [...] }
> > 2015-07-13 08:36:48,036 DEBUG [c.c.a.m.DirectAgentAttache] (DirectAgent-434:ctx-819aba7f) Seq 4-5299892336484951126: Executing request
> > 2015-07-13 08:36:48,043 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-434:ctx-819aba7f) 1. The VM r-97-VM is in Starting state.
> > 2015-07-13 08:36:48,065 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-434:ctx-819aba7f) Created VM 14e931b3-c51d-fa86-e2d4-2e25059de732 for r-97-VM
> > 2015-07-13 08:36:48,069 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-434:ctx-819aba7f) PV args are -- quiet console=hvc0%template=domP%name=r-97-VM%eth2ip=100.65.36.119%eth2mask=255.255.255.192%gateway=100.65.36.65%eth0ip=10.1.1.1%eth0mask=255.255.255.0%domain=cs17cloud.internal%cidrsize=24%dhcprange=10.1.1.1%eth1ip=169.254.0.120%eth1mask=255.255.0.0%type=router%disable_rp_filter=true%dns1=8.8.8.8%dns2=8.8.4.4
> > 2015-07-13 08:36:48,092 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-434:ctx-819aba7f) VBD e8612817-9d0c-2a6c-136f-5391831336e7 created for com.cloud.agent.api.to.DiskTO@5b2138b
> > 2015-07-13 08:36:48,101 WARN [c.c.h.x.r.CitrixResourceBase] (DirectAgent-434:ctx-819aba7f) Catch Exception: class com.xensource.xenapi.Types$UuidInvalid due to The uuid you supplied was invalid.
> > The uuid you supplied was invalid.
> >         at com.xensource.xenapi.Types.checkResponse(Types.java:1491)
> >         at com.xensource.xenapi.Connection.dispatch(Connection.java:395)
> >         at com.cloud.hypervisor.xenserver.resource.XenServerConnectionPool$XenServerConnection.dispatch(XenServerConnectionPool.java:462)
> >         at com.xensource.xenapi.VDI.getByUuid(VDI.java:341)
> >         at com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.createPatchVbd(CitrixResourceBase.java:1580)
> >         at com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.execute(CitrixResourceBase.java:1784)
> >         at com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.executeRequest(CitrixResourceBase.java:489)
> >         at com.cloud.hypervisor.xenserver.resource.XenServer56Resource.executeRequest(XenServer56Resource.java:64)
> >         at com.cloud.hypervisor.xenserver.resource.XenServer610Resource.executeRequest(XenServer610Resource.java:87)
> >         at com.cloud.hypervisor.xenserver.resource.XenServer620SP1Resource.executeRequest(XenServer620SP1Resource.java:65)
> >         at com.cloud.agent.manager.DirectAgentAttache$Task.runInContext(DirectAgentAttache.java:302)
> >         at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
> >         at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
> >         at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
> >         at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
> >         at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
> >         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
> >         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
> >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >         at java.lang.Thread.run(Thread.java:744)
> > 2015-07-13 08:36:48,102 WARN [c.c.h.x.r.CitrixResourceBase] (DirectAgent-434:ctx-819aba7f) Unable to start r-97-VM due to
> > The uuid you supplied was invalid.
> >         at com.xensource.xenapi.Types.checkResponse(Types.java:1491)
> >         [...]
> > 2015-07-13 08:36:48,124 WARN [c.c.h.x.r.CitrixResourceBase] (DirectAgent-434:ctx-819aba7f) Unable to clean up VBD due to
> > You gave an invalid object reference. The object may have recently been deleted. The class parameter gives the type of reference given, and the handle parameter echoes the bad value given.
> >         at com.xensource.xenapi.Types.checkResponse(Types.java:693)
> >         at com.xensource.xenapi.Connection.dispatch(Connection.java:395)
> >         at com.cloud.hypervisor.xenserver.resource.XenServerConnectionPool$XenServerConnection.dispatch(XenServerConnectionPool.java:462)
> >         at com.xensource.xenapi.VBD.unplug(VBD.java:1109)
> >         at com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.handleVmStartFailure(CitrixResourceBase.java:1520)
> >         at com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.execute(CitrixResourceBase.java:1871)
> >         at com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.executeRequest(CitrixResourceBase.java:489)
> >         at com.cloud.hypervisor.xenserver.resource.XenServer56Resource.executeRequest(XenServer56Resource.java:64)
> >         at com.cloud.hypervisor.xenserver.resource.XenServer610Resource.executeRequest(XenServer610Resource.java:87)
> >         at com.cloud.hypervisor.xenserver.resource.XenServer620SP1Resource.executeRequest(XenServer620SP1Resource.java:65)
> >         at com.cloud.agent.manager.DirectAgentAttache$Task.runInContext(DirectAgentAttache.java:302)
> >         at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
> >         at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
> >         at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
> >         at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
> >         at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
> >         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
> >         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
> >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >         at java.lang.Thread.run(Thread.java:744)
> > 2015-07-13 08:36:48,128 WARN [c.c.h.x.r.CitrixResourceBase] (DirectAgent-434:ctx-819aba7f) Unable to clean up VBD due to
> > You gave an invalid object reference. The object may have recently been deleted. The class parameter gives the type of reference given, and the handle parameter echoes the bad value given.
> >         at com.xensource.xenapi.Types.checkResponse(Types.java:693)
> >         [...]
> > 2015-07-13 08:36:48,129 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-434:ctx-819aba7f) The VM is in stopped state, detected problem during startup : r-97-VM
> > 2015-07-13 08:36:48,129 DEBUG [c.c.a.m.DirectAgentAttache] (DirectAgent-434:ctx-819aba7f) Seq 4-5299892336484951126: Cancelling because one of the answers is false and it is stop on error.
> > 2015-07-13 08:36:48,129 DEBUG [c.c.a.m.DirectAgentAttache] (DirectAgent-434:ctx-819aba7f) Seq 4-5299892336484951126: Response Received:
> > 2015-07-13 08:36:48,130 DEBUG [c.c.a.t.Request] (DirectAgent-434:ctx-819aba7f) Seq 4-5299892336484951126: Processing: { Ans: , MgmtId: 59778234354585, via: 4, Ver: v1, Flags: 10, [{"com.cloud.agent.api.StartAnswer":{"vm":{...},"_iqnToPath":{},"result":false,"details":"Unable to start r-97-VM due to ","wait":0}}] }
> > 2015-07-13 08:36:48,130 DEBUG [c.c.a.t.Request] (Work-Job-Executor-3:ctx-58f77d9c job-4353/job-4357 ctx-83fe75fb) Seq 4-5299892336484951126: Received: { Ans: , MgmtId: 59778234354585, via: 4, Ver: v1, Flags: 10, { StartAnswer } }
> > 2015-07-13 08:36:48,175 INFO [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-3:ctx-58f77d9c job-4353/job-4357 ctx-83fe75fb) Unable to start VM on Host[-4-Routing] due to Unable to start r-97-VM due to
> > 2015-07-13 08:36:48,223 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-3:ctx-58f77d9c job-4353/job-4357 ctx-83fe75fb) Cleaning up resources for the vm VM[DomainRouter|r-97-VM] in Starting state
> > 2015-07-13 08:36:48,230 DEBUG [c.c.a.t.Request] (Work-Job-Executor-3:ctx-58f77d9c job-4353/job-4357 ctx-83fe75fb) Seq 4-5299892336484951127: Sending { Cmd , MgmtId: 59778234354585, via: 4(SeSolXS02), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":false,"checkBeforeCleanup":false,"vmName":"r-97-VM","wait":0}}] }
> > 2015-07-13 08:36:48,230 DEBUG [c.c.a.t.Request] (Work-Job-Executor-3:ctx-58f77d9c job-4353/job-4357 ctx-83fe75fb) Seq 4-5299892336484951127: Executing: { Cmd , MgmtId: 59778234354585, via: 4(SeSolXS02), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":false,"checkBeforeCleanup":false,"vmName":"r-97-VM","wait":0}}] }
> > 2015-07-13 08:36:48,230 DEBUG [c.c.a.m.DirectAgentAttache] (DirectAgent-53:ctx-de9ca4c0) Seq 4-5299892336484951127: Executing request
> >
> > /Sonali
> >
> > -----Original Message-----
> > From: Remi Bergsma [mailto:remi@remi.nl]
> > Sent: Saturday, July 11, 2015 5:34 PM
> > To: users@cloudstack.apache.org
> > Subject: Re: Urgent: VMs not migrated after putting Xenserver host in maintenance mode
> >
> > Hi,
> >
> > Did you also set the 'removed' column back to NULL (instead of the date/time it was originally deleted)?
> >
> > You can migrate directly from XenServer in 4.5.1, no problem. When the hypervisor connects to CloudStack again it will report its running VMs and update the database. I guess there was a problem in 4.4.3 where out-of-band migrations would cause a reboot of a router; not sure if it is also in 4.5.1. It's fixed in 4.4.4 and also in the upcoming 4.5.2. If your remaining VMs are not routers, there is no issue. Otherwise you risk a reboot (which is quite fast anyway).
> >
> > I'd first double-check the disk offering, and also check its tags etc. If that works, then migrate in CloudStack (as it is supposed to work). If not, you can do it directly from XenServer in order to empty your host and proceed with the migration. Once the migration is done, fix any remaining issues.
> >
> > Hope this helps.
> >
> > Regards,
> > Remi
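A rough sketch of the undelete Remi asks about, assuming the 4.5 schema where cloud.disk_offering carries both the removed timestamp and the state column; take a database backup first, and substitute the real offering id for the placeholder 42:

    -- bring a deleted offering back so the planner can find it again
    UPDATE cloud.disk_offering
    SET    removed = NULL,
           state   = 'Active'
    WHERE  id = 42;  -- placeholder id

    -- print the row (tags included) to post back to the list
    SELECT id, name, type, state, tags, removed
    FROM   cloud.disk_offering
    WHERE  id = 42;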
> >
> > > On 11 jul. 2015, at 12:57, Sonali Jadhav wrote:
> > >
> > > Hi, I am using 4.5.1. That's why I am upgrading all XenServers to 6.5.
> > >
> > > I didn't know that I could migrate a VM from the XenServer host itself. I thought that would make the CloudStack database inconsistent, since the migration is not initiated from CloudStack.
> > >
> > > And like I said before, those VMs have a compute offering which was deleted, but I "undeleted" it by setting the status to "active" in the disk_offering table.
> > >
> > > Sent from my Sony Xperia(tm) smartphone
> > >
> > > ---- Remi Bergsma wrote ----
> > >
> > > Hi Sonali,
> > >
> > > What version of CloudStack do you use?
> > > We can then look at the source at line 292 of DeploymentPlanningManagerImpl.java. If I look at master, it indeed tries to do something with the compute offerings. Could you also post its specs (print the result of the select query where you set the field active)? We might be able to tell what's wrong with it.
> > >
> > > As plan B, assuming you use a recent CloudStack version, you can use 'xe vm-migrate' to migrate VMs directly off of the hypervisor from the command line on the XenServer. Like this: xe vm-migrate vm=i-12-345-VM host=xen3
> > >
> > > Recent versions of CloudStack will properly pick this up. When the VMs are gone, the hypervisor will enter maintenance mode just fine.
> > >
> > > Regards,
> > > Remi
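A sketch of that plan B for draining the whole host from dom0, assuming XenMotion between the pool members works; the host uuid and vm uuid below are placeholders, and live=true keeps the guests running during the move:

    # list the guests still resident on the host being drained
    xe vm-list resident-on=<host-uuid> is-control-domain=false params=uuid,name-label

    # migrate each guest to another pool member, e.g. xen3
    xe vm-migrate uuid=<vm-uuid> host=xen3 live=true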
> >> rverid":59778234354585,"clusterid":"fe15e305-5c11-4785-a13d-e4581e23= f > > >> 5e7","clustername":"SeSolCluster1","clustertype":"CloudManaged","isl= o > > >> calstorageactive":false,"created":"2015-01-27T10:55:13+0100","events= " > > >> :"ManagementServerDown; AgentConnected; Ping; Remove; > > >> AgentDisconnected; HostDown; ShutdownRequested; StartAgentRebalance; > > >> PingTimeout","resourcestate":"PrepareForMaintenance","hypervisorvers= i > > >> on":"6.2.0","hahost":false,"jobid":"7ad72023-a16f-4abf-84a3-83dd0e9f= 6 > > >> bfd","jobstatus":0} > > >> 2015-07-09 14:27:01,208 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] > > >> (API-Job-Executor-107:ctx-4f5d495d job-4088 ctx-5921f0d2) Publish > > >> async job-4088 complete on message bus > > >> 2015-07-09 14:27:01,208 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] > > >> (API-Job-Executor-107:ctx-4f5d495d job-4088 ctx-5921f0d2) Wake up > > >> jobs related to job-4088 > > >> 2015-07-09 14:27:01,209 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] > > >> (API-Job-Executor-107:ctx-4f5d495d job-4088 ctx-5921f0d2) Update db > > >> status for job-4088 > > >> 2015-07-09 14:27:01,211 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] > > >> (API-Job-Executor-107:ctx-4f5d495d job-4088 ctx-5921f0d2) Wake up > > >> jobs joined with job-4088 and disjoin all subjobs created from job- > > >> 4088 > > >> 2015-07-09 14:27:01,386 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] > > >> (API-Job-Executor-107:ctx-4f5d495d job-4088) Done executing > > >> org.apache.cloudstack.api.command.admin.host.PrepareForMaintenanceCm= d > > >> for job-4088 > > >> 2015-07-09 14:27:01,389 INFO [o.a.c.f.j.i.AsyncJobMonitor] > > >> (API-Job-Executor-107:ctx-4f5d495d job-4088) Remove job-4088 from jo= b > > >> monitoring > > >> 2015-07-09 14:27:02,755 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] > > >> (AsyncJobMgr-Heartbeat-1:ctx-1c99f7cd) Execute sync-queue item: > > >> SyncQueueItemVO {id:2326, queueId: 251, contentType: AsyncJob, > > >> contentId: 4091, lastProcessMsid: 59778234354585, lastprocessNumber: > > >> 193, lastProcessTime: Thu Jul 09 14:27:02 CEST 2015, created: Thu Ju= l > > >> 09 14:27:01 CEST 2015} > > >> 2015-07-09 14:27:02,758 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] > > >> (AsyncJobMgr-Heartbeat-1:ctx-1c99f7cd) Schedule queued job-4091 > > >> 2015-07-09 14:27:02,810 INFO [o.a.c.f.j.i.AsyncJobMonitor] > > >> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091) Add job-4091 > > >> into job monitoring > > >> 2015-07-09 14:27:02,819 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] > > >> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091) Executing > > >> AsyncJobVO {id:4091, userId: 1, accountId: 1, instanceType: null, > > >> instanceId: null, cmd: com.cloud.vm.VmWorkMigrateAway, cmdInfo: > > >> rO0ABXNyAB5jb20uY2xvdWQudm0uVm1Xb3JrTWlncmF0ZUF3YXmt4MX4jtcEmwIAAUoA= C > > >> XNyY0hvc3RJZHhyABNjb20uY2xvdWQudm0uVm1Xb3Jrn5m2VvAlZ2sCAARKAAlhY2Nvd= W > > >> 50SWRKAAZ1c2VySWRKAAR2bUlkTAALaGFuZGxlck5hbWV0ABJMamF2YS9sYW5nL1N0cm= l > > >> uZzt4cAAAAAAAAAABAAAAAAAAAAEAAAAAAAAAInQAGVZpcnR1YWxNYWNoaW5lTWFuYWd= l > > >> ckltcGwAAAAAAAAABQ, cmdVersion: 0, status: IN_PROGRESS, > > >> processStatus: 0, resultCode: 0, result: null, initMsid: > > >> 59778234354585, completeMsid: null, lastUpdated: null, lastPolled: > > >> null, created: Thu Jul 09 14:27:01 CEST 2015} > > >> 2015-07-09 14:27:02,820 DEBUG [c.c.v.VmWorkJobDispatcher] > > >> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091) Run VM work > > >> job: com.cloud.vm.VmWorkMigrateAway for VM 34, job origin: 3573 > > >> 2015-07-09 14:27:02,822 DEBUG 
[c.c.v.VmWorkJobHandlerProxy] > > >> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091 ctx-744a984e) > > >> Execute VM work job: > > >> com.cloud.vm.VmWorkMigrateAway{"srcHostId":5,"userId":1,"accountId":= 1 > > >> ,"vmId":34,"handlerName":"VirtualMachineManagerImpl"} > > >> 2015-07-09 14:27:02,852 DEBUG [c.c.d.DeploymentPlanningManagerImpl] > > >> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091 ctx-744a984e) > > >> Deploy avoids pods: [], clusters: [], hosts: [5] > > >> 2015-07-09 14:27:02,855 ERROR [c.c.v.VmWorkJobHandlerProxy] > > >> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091 ctx-744a984e) > > >> Invocation exception, caused by: java.lang.NullPointerException > > >> 2015-07-09 14:27:02,855 INFO [c.c.v.VmWorkJobHandlerProxy] > > >> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091 ctx-744a984e) > > >> Rethrow exception java.lang.NullPointerException > > >> 2015-07-09 14:27:02,855 DEBUG [c.c.v.VmWorkJobDispatcher] > > >> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091) Done with run > > >> of VM work job: com.cloud.vm.VmWorkMigrateAway for VM 34, job origin= : > > >> 3573 > > >> 2015-07-09 14:27:02,855 ERROR [c.c.v.VmWorkJobDispatcher] > > >> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091) Unable to > > complete AsyncJobVO {id:4091, userId: 1, accountId: 1, instanceType: > null, > > instanceId: null, cmd: com.cloud.vm.VmWorkMigrateAway, cmdInfo: > > > rO0ABXNyAB5jb20uY2xvdWQudm0uVm1Xb3JrTWlncmF0ZUF3YXmt4MX4jtcEmwIAAUoACXNyY= 0hvc3RJZHhyABNjb20uY2xvdWQudm0uVm1Xb3Jrn5m2VvAlZ2sCAARKAAlhY2NvdW50SWRKAAZ1= c2VySWRKAAR2bUlkTAALaGFuZGxlck5hbWV0ABJMamF2YS9sYW5nL1N0cmluZzt4cAAAAAAAAAA= BAAAAAAAAAAEAAAAAAAAAInQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAABQ, > > cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, > > result: null, initMsid: 59778234354585, completeMsid: null, lastUpdated= : > > null, lastPolled: null, created: Thu Jul 09 14:27:01 CEST 2015}, job > > origin:3573 java.lang.NullPointerException > > >> at > > > com.cloud.deploy.DeploymentPlanningManagerImpl.planDeployment(DeploymentP= lanningManagerImpl.java:292) > > >> at > > > com.cloud.vm.VirtualMachineManagerImpl.orchestrateMigrateAway(VirtualMach= ineManagerImpl.java:2376) > > >> at > > > com.cloud.vm.VirtualMachineManagerImpl.orchestrateMigrateAway(VirtualMach= ineManagerImpl.java:4517) > > >> at sun.reflect.GeneratedMethodAccessor563.invoke(Unknown Sourc= e) > > >> at > > > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorI= mpl.java:43) > > >> at java.lang.reflect.Method.invoke(Method.java:606) > > >> at > > > com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.= java:107) > > >> at > > > com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineMana= gerImpl.java:4636) > > >> at > > com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:103) > > >> at > > > org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInCont= ext(AsyncJobManagerImpl.java:537) > > >> at > > > org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(Manage= dContextRunnable.java:49) > > >> at > > > org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(D= efaultManagedContext.java:56) > > >> at > > > org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWith= Context(DefaultManagedContext.java:103) > > >> at > > > org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithC= ontext(DefaultManagedContext.java:53) > > >> at > > > 
org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedC= ontextRunnable.java:46) > > >> at > > > org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(Async= JobManagerImpl.java:494) > > >> at > > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) > > >> at java.util.concurrent.FutureTask.run(FutureTask.java:262) > > >> at > > > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java= :1145) > > >> at > > > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.jav= a:615) > > >> at java.lang.Thread.run(Thread.java:744) > > >> 2015-07-09 14:27:02,863 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] > > >> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091) Complete async > > >> job-4091, jobStatus: FAILED, resultCode: 0, result: > > >> rO0ABXNyAB5qYXZhLmxhbmcuTnVsbFBvaW50ZXJFeGNlcHRpb25HpaGO_zHhuAIAAHhy= A > > >> BpqYXZhLmxhbmcuUnVudGltZUV4Y2VwdGlvbp5fBkcKNIPlAgAAeHIAE2phdmEubGFuZ= y > > >> 5FeGNlcHRpb27Q_R8-GjscxAIAAHhyABNqYXZhLmxhbmcuVGhyb3dhYmxl1cY1Jzl3uM= s > > >> DAARMAAVjYXVzZXQAFUxqYXZhL2xhbmcvVGhyb3dhYmxlO0wADWRldGFpbE1lc3NhZ2V= 0 > > >> ABJMamF2YS9sYW5nL1N0cmluZztbAApzdGFja1RyYWNldAAeW0xqYXZhL2xhbmcvU3Rh= Y > > >> 2tUcmFjZUVsZW1lbnQ7TAAUc3VwcHJlc3NlZEV4Y2VwdGlvbnN0ABBMamF2YS91dGlsL= 0 > > >> xpc3Q7eHBxAH4ACHB1cgAeW0xqYXZhLmxhbmcuU3RhY2tUcmFjZUVsZW1lbnQ7AkYqPD= z > > >> 9IjkCAAB4cAAAABVzcgAbamF2YS5sYW5nLlN0YWNrVHJhY2VFbGVtZW50YQnFmiY23YU= C > > >> AARJAApsaW5lTnVtYmVyTAAOZGVjbGFyaW5nQ2xhc3NxAH4ABUwACGZpbGVOYW1lcQB-= A > > >> AVMAAptZXRob2ROYW1lcQB-AAV4cAAAASR0AC5jb20uY2xvdWQuZGVwbG95LkRlcGxve= W > > >> 1lbnRQbGFubmluZ01hbmFnZXJJbXBsdAAiRGVwbG95bWVudFBsYW5uaW5nTWFuYWdlck= l > > >> tcGwuamF2YXQADnBsYW5EZXBsb3ltZW50c3EAfgALAAAJSHQAJmNvbS5jbG91ZC52bS5= W > > >> aXJ0dWFsTWFjaGluZU1hbmFnZXJJbXBsdAAeVmlydHVhbE1hY2hpbmVNYW5hZ2VySW1w= b > > >> C5qYXZhdAAWb3JjaGVzdHJhdGVNaWdyYXRlQXdheXNxAH4ACwAAEaVxAH4AEXEAfgASc= Q > > >> B-ABNzcQB-AAv_____dAAmc3VuLnJlZmxlY3QuR2VuZXJhdGVkTWV0aG9kQWNjZXNzb3= I > > >> 1NjNwdAAGaW52b2tlc3EAfgALAAAAK3QAKHN1bi5yZWZsZWN0LkRlbGVnYXRpbmdNZXR= o > > >> b2RBY2Nlc3NvckltcGx0ACFEZWxlZ2F0aW5nTWV0aG9kQWNjZXNzb3JJbXBsLmphdmFx= A > > >> H4AF3NxAH4ACwAAAl50ABhqYXZhLmxhbmcucmVmbGVjdC5NZXRob2R0AAtNZXRob2Qua= m > > >> F2YXEAfgAXc3EAfgALAAAAa3QAImNvbS5jbG91ZC52bS5WbVdvcmtKb2JIYW5kbGVyUH= J > > >> veHl0ABpWbVdvcmtKb2JIYW5kbGVyUHJveHkuamF2YXQAD2hhbmRsZVZtV29ya0pvYnN= x > > >> AH4ACwAAEhxxAH4AEXEAfgAScQB-ACFzcQB-AAsAAABndAAgY29tLmNsb3VkLnZtLlZt= V > > >> 29ya0pvYkRpc3BhdGNoZXJ0ABhWbVdvcmtKb2JEaXNwYXRjaGVyLmphdmF0AAZydW5Kb= 2 > > >> JzcQB-AAsAAAIZdAA_b3JnLmFwYWNoZS5jbG91ZHN0YWNrLmZyYW1ld29yay5qb2JzLm= l > > >> tcGwuQXN5bmNKb2JNYW5hZ2VySW1wbCQ1dAAYQXN5bmNKb2JNYW5hZ2VySW1wbC5qYXZ= h > > >> dAAMcnVuSW5Db250ZXh0c3EAfgALAAAAMXQAPm9yZy5hcGFjaGUuY2xvdWRzdGFjay5t= Y > > >> W5hZ2VkLmNvbnRleHQuTWFuYWdlZENvbnRleHRSdW5uYWJsZSQxdAAbTWFuYWdlZENvb= n > > >> RleHRSdW5uYWJsZS5qYXZhdAADcnVuc3EAfgALAAAAOHQAQm9yZy5hcGFjaGUuY2xvdW= R > > >> zdGFjay5tYW5hZ2VkLmNvbnRleHQuaW1wbC5EZWZhdWx0TWFuYWdlZENvbnRleHQkMXQ= A > > >> GkRlZmF1bHRNYW5hZ2VkQ29udGV4dC5qYXZhdAAEY2FsbHNxAH4ACwAAAGd0AEBvcmcu= Y > > >> XBhY2hlLmNsb3Vkc3RhY2subWFuYWdlZC5jb250ZXh0LmltcGwuRGVmYXVsdE1hbmFnZ= W > > >> RDb250ZXh0cQB-ADF0AA9jYWxsV2l0aENvbnRleHRzcQB-AAsAAAA1cQB-ADRxAH4AMX= Q > > >> ADnJ1bldpdGhDb250ZXh0c3EAfgALAAAALnQAPG9yZy5hcGFjaGUuY2xvdWRzdGFjay5= t > > >> YW5hZ2VkLmNvbnRleHQuTWFuYWdlZENvbnRleHRSdW5uYWJsZXEAfgAtcQB-AC5zcQB-= A > > >> AsAAAHucQB-AChxAH4AKXEAfgAuc3EAfgALAAAB13QALmphdmEudXRpbC5jb25jdXJyZ= W > > >> 
50LkV4ZWN1dG9ycyRSdW5uYWJsZUFkYXB0ZXJ0AA5FeGVjdXRvcnMuamF2YXEAfgAyc3= E > > >> AfgALAAABBnQAH2phdmEudXRpbC5jb25jdXJyZW50LkZ1dHVyZVRhc2t0AA9GdXR1cmV= U > > >> YXNrLmphdmFxAH4ALnNxAH4ACwAABHl0ACdqYXZhLnV0aWwuY29uY3VycmVudC5UaHJl= Y > > >> WRQb29sRXhlY3V0b3J0ABdUaHJlYWRQb29sRXhlY3V0b3IuamF2YXQACXJ1bldvcmtlc= n > > >> NxAH4ACwAAAmd0AC5qYXZhLnV0aWwuY29uY3VycmVudC5UaHJlYWRQb29sRXhlY3V0b3= I > > >> kV29ya2VycQB-AENxAH4ALnNxAH4ACwAAAuh0ABBqYXZhLmxhbmcuVGhyZWFkdAALVGh= y > > >> ZWFkLmphdmFxAH4ALnNyACZqYXZhLnV0aWwuQ29sbGVjdGlvbnMkVW5tb2RpZmlhYmxl= T > > >> GlzdPwPJTG17I4QAgABTAAEbGlzdHEAfgAHeHIALGphdmEudXRpbC5Db2xsZWN0aW9uc= y > > >> RVbm1vZGlmaWFibGVDb2xsZWN0aW9uGUIAgMte9x4CAAFMAAFjdAAWTGphdmEvdXRpbC= 9 > > >> Db2xsZWN0aW9uO3hwc3IAE2phdmEudXRpbC5BcnJheUxpc3R4gdIdmcdhnQMAAUkABHN= p > > >> emV4cAAAAAB3BAAAAAB4cQB-AE94 > > >> 2015-07-09 14:27:02,866 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] > > >> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091) Publish async > > >> job-4091 complete on message bus > > >> 2015-07-09 14:27:02,866 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] > > >> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091) Wake up jobs > > >> related to job-4091 > > >> 2015-07-09 14:27:02,866 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] > > >> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091) Update db > > >> status for job-4091 > > >> 2015-07-09 14:27:02,868 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] > > >> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091) Wake up jobs > > >> joined with job-4091 and disjoin all subjobs created from job- 4091 > > >> 2015-07-09 14:27:02,918 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] > > >> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091) Done executing > > >> com.cloud.vm.VmWorkMigrateAway for job-4091 > > >> 2015-07-09 14:27:02,926 INFO [o.a.c.f.j.i.AsyncJobMonitor] > > >> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091) Remove job-409= 1 > > >> from job monitoring > > >> 2015-07-09 14:27:02,979 WARN [c.c.h.HighAvailabilityManagerImpl] > > >> (HA-Worker-3:ctx-6ee7e62f work-74) Encountered unhandled exception > > during HA process, reschedule retry java.lang.NullPointerException > > >> at > > > com.cloud.deploy.DeploymentPlanningManagerImpl.planDeployment(DeploymentP= lanningManagerImpl.java:292) > > >> at > > > com.cloud.vm.VirtualMachineManagerImpl.orchestrateMigrateAway(VirtualMach= ineManagerImpl.java:2376) > > >> at > > > com.cloud.vm.VirtualMachineManagerImpl.orchestrateMigrateAway(VirtualMach= ineManagerImpl.java:4517) > > >> at sun.reflect.GeneratedMethodAccessor563.invoke(Unknown Sourc= e) > > >> at > > > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorI= mpl.java:43) > > >> at java.lang.reflect.Method.invoke(Method.java:606) > > >> at > > > com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.= java:107) > > >> at > > > com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineMana= gerImpl.java:4636) > > >> at > > com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:103) > > >> at > > > org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInCont= ext(AsyncJobManagerImpl.java:537) > > >> at > > > org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(Manage= dContextRunnable.java:49) > > >> at > > > org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(D= efaultManagedContext.java:56) > > >> at > > > org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWith= Context(DefaultManagedContext.java:103) > > >> at > > > 
> > >> 2015-07-09 14:27:02,979 WARN  [c.c.h.HighAvailabilityManagerImpl] (HA-Worker-3:ctx-6ee7e62f work-74) Encountered unhandled exception during HA process, reschedule retry
> > >> java.lang.NullPointerException
> > >> at com.cloud.deploy.DeploymentPlanningManagerImpl.planDeployment(DeploymentPlanningManagerImpl.java:292)
> > >> at com.cloud.vm.VirtualMachineManagerImpl.orchestrateMigrateAway(VirtualMachineManagerImpl.java:2376)
> > >> at com.cloud.vm.VirtualMachineManagerImpl.orchestrateMigrateAway(VirtualMachineManagerImpl.java:4517)
> > >> at sun.reflect.GeneratedMethodAccessor563.invoke(Unknown Source)
> > >> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > >> at java.lang.reflect.Method.invoke(Method.java:606)
> > >> at com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
> > >> at com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:4636)
> > >> at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:103)
> > >> at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:537)
> > >> at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
> > >> at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
> > >> at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
> > >> at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
> > >> at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
> > >> at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:494)
> > >> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> > >> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> > >> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> > >> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> > >> at java.lang.Thread.run(Thread.java:744)
> > >> 2015-07-09 14:27:02,980 INFO  [c.c.h.HighAvailabilityManagerImpl] (HA-Worker-3:ctx-6ee7e62f work-74) Rescheduling HAWork[74-Migration-34-Running-Migrating] to try again at Thu Jul 09 14:37:16 CEST 2015
> > >> 2015-07-09 14:27:03,008 DEBUG [c.c.a.m.AgentManagerImpl] (AgentManager-Handler-14:null) SeqA 11-89048: Processing Seq 11-89048: { Cmd , MgmtId: -1, via: 11, Ver: v1, Flags: 11, [{"com.cloud.agent.api.ConsoleProxyLoadReportCommand":{"_proxyVmId":80,"_loadInfo":"{\n \"connections\": []\n}","wait":0}}] }
> > >> 2015-07-09 14:27:03,027 WARN  [c.c.h.HighAvailabilityManagerImpl] (HA-Worker-2:ctx-68459b74 work-73) Encountered unhandled exception during HA process, reschedule retry
> > >> java.lang.NullPointerException at com.cloud.deploy.DeploymentPlanningManagerImpl.planDeployment(DeploymentPlanningManagerImpl.java:292) [rest of the stack trace is identical to the one above]
> > >> 2015-07-09 14:27:03,030 INFO  [c.c.h.HighAvailabilityManagerImpl] (HA-Worker-2:ctx-68459b74 work-73) Rescheduling HAWork[73-Migration-32-Running-Migrating] to try again at Thu Jul 09 14:37:16 CEST 2015
> > >> 2015-07-09 14:27:03,075 WARN  [c.c.h.HighAvailabilityManagerImpl] (HA-Worker-1:ctx-105d205a work-72) Encountered unhandled exception during HA process, reschedule retry
> > >> java.lang.NullPointerException at com.cloud.deploy.DeploymentPlanningManagerImpl.planDeployment(DeploymentPlanningManagerImpl.java:292) [rest of the stack trace is identical to the one above]
> > >> 2015-07-09 14:27:03,076 INFO  [c.c.h.HighAvailabilityManagerImpl] (HA-Worker-1:ctx-105d205a work-72) Rescheduling HAWork[72-Migration-31-Running-Migrating] to try again at Thu Jul 09 14:37:16 CEST 2015
> > >> 2015-07-09 14:27:03,165 DEBUG [c.c.a.m.AgentManagerImpl] (AgentManager-Handler-14:null) SeqA 11-890
> > >>
> > >> /Sonali
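The "Rescheduling HAWork[...]" lines mean the three migrate-away work items stay queued and are simply retried (here ten minutes later). If you want to see what is still pending, the queue lives in the op_ha_work table of the management-server database. A read-only sketch follows; the table and column names are from memory of the 4.x schema, so verify with DESCRIBE op_ha_work first, and the JDBC URL and credentials are placeholders (assumes MySQL Connector/J on the classpath):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ShowPendingHaWork {
        public static void main(String[] args) throws Exception {
            // Placeholder URL/credentials -- point these at the "cloud" database.
            try (Connection c = DriverManager.getConnection(
                         "jdbc:mysql://127.0.0.1:3306/cloud", "cloud", "password");
                 Statement s = c.createStatement();
                 // Column names assumed from the 4.x schema; adjust after DESCRIBE op_ha_work.
                 ResultSet r = s.executeQuery(
                         "SELECT id, instance_id, type, step, tried, time_to_try "
                         + "FROM op_ha_work WHERE taken IS NULL")) {
                while (r.next()) {
                    System.out.printf("work-%d vm=%d type=%s step=%s tried=%d time_to_try=%d%n",
                            r.getLong("id"), r.getLong("instance_id"), r.getString("type"),
                            r.getString("step"), r.getInt("tried"), r.getLong("time_to_try"));
                }
            }
        }
    }

work-72/73/74 from the log should show up there with a growing tried count until planDeployment stops throwing.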
> > >>
> > >> -----Original Message-----
> > >> From: Sonali Jadhav [mailto:sonali@servercentralen.se]
> > >> Sent: Thursday, July 9, 2015 2:45 PM
> > >> To: users@cloudstack.apache.org
> > >> Subject: RE: VMs not migrated after putting Xenserver host in maintenance mode
> > >>
> > >> Ignore this, I found the problem.
> > >>
> > >> Though one question remains: from ACS, if I try to migrate an instance to another host, it doesn't show the upgraded host in the list. Why is that?
> > >>
> > >> /Sonali
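On the upgraded host not appearing in the migration list: the UI only offers hosts that the management server's allocator reports as suitable, so it helps to ask the API directly. As I recall, listHosts accepts a virtualmachineid parameter that restricts the result to valid migration targets for that VM (newer releases also have findHostsForMigration). A sketch of the call using the standard CloudStack request-signing scheme; the endpoint, keys, and VM id are placeholders:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.net.URLEncoder;
    import java.util.Base64;
    import java.util.Map;
    import java.util.TreeMap;
    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;

    public class ListMigrationTargets {
        // Usage: java ListMigrationTargets <vm-uuid>
        public static void main(String[] args) throws Exception {
            String endpoint  = "http://mgmt-server:8080/client/api"; // placeholder
            String apiKey    = "YOUR_API_KEY";                       // placeholder
            String secretKey = "YOUR_SECRET_KEY";                    // placeholder

            // Parameters sorted by name, as the signing scheme requires.
            Map<String, String> params = new TreeMap<>();
            params.put("command", "listHosts");
            params.put("virtualmachineid", args[0]);
            params.put("response", "json");
            params.put("apiKey", apiKey);

            StringBuilder qs = new StringBuilder();
            for (Map.Entry<String, String> e : params.entrySet()) {
                if (qs.length() > 0) qs.append('&');
                qs.append(e.getKey()).append('=').append(URLEncoder.encode(e.getValue(), "UTF-8"));
            }

            // Sign: lowercase the sorted query string, HMAC-SHA1 with the secret key, base64.
            Mac mac = Mac.getInstance("HmacSHA1");
            mac.init(new SecretKeySpec(secretKey.getBytes("UTF-8"), "HmacSHA1"));
            String signature = Base64.getEncoder()
                    .encodeToString(mac.doFinal(qs.toString().toLowerCase().getBytes("UTF-8")));

            URL url = new URL(endpoint + "?" + qs + "&signature=" + URLEncoder.encode(signature, "UTF-8"));
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
                for (String line; (line = in.readLine()) != null; ) {
                    System.out.println(line); // JSON list of hosts the allocator will offer
                }
            }
        }
    }

If the upgraded host is absent there as well, check that it is Up/Enabled and in the same cluster; note too that mid-rolling-upgrade XenServer itself generally refuses migrations from a newer host to an older one, so a thinner list during the upgrade is not automatically a CloudStack bug.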
> > >> -----Original Message-----
> > >> From: Sonali Jadhav [mailto:sonali@servercentralen.se]
> > >> Sent: Thursday, July 9, 2015 2:00 PM
> > >> To: users@cloudstack.apache.org
> > >> Subject: VMs not migrated after putting Xenserver host in maintenance mode
> > >>
> > >> Hi,
> > >>
> > >> I am upgrading my XenServer hosts from 6.2 to 6.5. I have a cluster of 4 hosts and have managed to upgrade two of them. I put the 3rd host into maintenance mode from ACS; some VMs were moved to another host, but 4 VMs did not get moved. I saw a few errors in the logs:
> > >>
> > >> http://pastebin.com/L7TjLHwq
> > >>
> > >> http://pastebin.com/i1EGnEJr
> > >>
> > >> One more thing I observed: from ACS, if I try to migrate a VM to another host, it doesn't show the upgraded host in the list. Why is that?
> > >>
> > >> /Sonali
> > >
> >

--001a113726aca4527d051adbd2b2--