Date: Wed, 4 Feb 2015 03:43:34 +0000 (UTC)
From: "Abhinandan Prateek (JIRA)"
To: cloudstack-issues@incubator.apache.org
Subject: [jira] [Commented] (CLOUDSTACK-8196) Local storage - Live VM migration fails

[ https://issues.apache.org/jira/browse/CLOUDSTACK-8196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14304608#comment-14304608 ]

Abhinandan Prateek commented on CLOUDSTACK-8196:
------------------------------------------------

Errors in SMLog:

ock: closed /var/lock/sm/f89b7dbb-f8af-fbcf-4dbc-13f7c921d151/sr
Jan 31 22:25:56 xen652 SM: [4038] lock: closed /var/lock/sm/f89b7dbb-f8af-fbcf-4dbc-13f7c921d151/running
Jan 31 22:28:38 xen652 SM: [3617] Warning: vdi_[de]activate present for dummy
Jan 31 22:28:44 xen652 SM: [3907] sr_attach {'sr_uuid': '30334f48-a747-4024-792b-f34741c5280f', 'subtask_of': 'DummyRef:|7aafcf09-7437-db39-d8b7-38e84a3c2f43|SR.attach', 'args': [], 'host_ref': 'OpaqueRef:9fe68e4b-6e5f-854a-cb22-7c6ee30ea9fc', 'session_ref': 'OpaqueRef:45efeeac-20d8-6eda-454c-7a8326152ec0', 'device_config': {'legacy_mode': 'true', 'SRmaster': 'true', 'location': '/opt/xensource/packages/iso'}, 'command': 'sr_attach', 'sr_ref': 'OpaqueRef:adfba5ae-fab1-37fd-911b-f7ebd3ce2168'}
Jan 31 22:28:44 xen652 SM: [3907] lock: opening lock file /var/lock/sm/30334f48-a747-4024-792b-f34741c5280f/running
Jan 31 22:28:44 xen652 SM: [3907] lock: opening lock file /var/lock/sm/30334f48-a747-4024-792b-f34741c5280f/sr
Jan 31 22:28:44 xen652 SM: [3907] lock: acquired /var/lock/sm/30334f48-a747-4024-792b-f34741c5280f/running
Jan 31 22:28:44 xen652 SM: [3907] lock: acquired /var/lock/sm/30334f48-a747-4024-792b-f34741c5280f/sr
Jan 31 22:28:44 xen652 SM: [3907] RESET for SR 30334f48-a747-4024-792b-f34741c5280f (master: True)
Jan 31 22:28:44 xen652 SM: [3907] lock: released /var/lock/sm/30334f48-a747-4024-792b-f34741c5280f/sr
Jan 31 22:28:44 xen652 SM: [3907] lock: released /var/lock/sm/30334f48-a747-4024-792b-f34741c5280f/running
Jan 31 22:28:44 xen652 SM: [3907] lock: closed /var/lock/sm/30334f48-a747-4024-792b-f34741c5280f/sr
Jan 31 22:28:44 xen652 SM: [3907] lock: closed /var/lock/sm/30334f48-a747-4024-792b-f34741c5280f/running
Jan 31 22:28:44 xen652 SM: [3907] set_dirty 'OpaqueRef:adfba5ae-fab1-37fd-911b-f7ebd3ce2168' succeeded
Jan 31 22:28:51 xen652 SM: [4062] lock: opening lock file /var/lock/sm/d3e92b9d-1ae5-616c-aa17-dc22edb614e6/sr
Jan 31 22:28:51 xen652 SM: [4062] LVMCache created for VG_XenStorage-d3e92b9d-1ae5-616c-aa17-dc22edb614e6
Jan 31 22:28:51 xen652 SM: [4062] ['/usr/sbin/vgs', 'VG_XenStorage-d3e92b9d-1ae5-616c-aa17-dc22edb614e6']
Jan 31 22:28:51 xen652 SM: [4062] FAILED in util.pread: (rc 5) stdout: '', stderr: ' Volume group "VG_XenStorage-d3e92b9d-1ae5-616c-aa17-dc22edb614e6" not found
Jan 31 22:28:51 xen652 SM: [4062] '
Jan 31 22:28:51 xen652 SM: [4062] LVMCache: will initialize now
Jan 31 22:28:51 xen652 SM: [4062] LVMCache: refreshing
Jan 31 22:28:51 xen652 SM: [4062] ['/usr/sbin/lvs', '--noheadings', '--units', 'b', '-o', '+lv_tags', '/dev/VG_XenStorage-d3e92b9d-1ae5-616c-aa17-dc22edb614e6']
Jan 31 22:28:52 xen652 SM: [4062] FAILED in util.pread: (rc 5) stdout: '', stderr: ' Volume group "VG_XenStorage-d3e92b9d-1ae5-616c-aa17-dc22edb614e6" not found
Jan 31 22:28:52 xen652 SM: [4062]   Skipping volume group VG_XenStorage-d3e92b9d-1ae5-616c-aa17-dc22edb614e6
Jan 31 22:28:52 xen652 SM: [4062] '
Jan 31 22:28:52 xen652 SM: [4062] lock: acquired /var/lock/sm/d3e92b9d-1ae5-616c-aa17-dc22edb614e6/sr
Jan 31 22:28:52 xen652 SM: [4062] sr_create {'sr_uuid': 'd3e92b9d-1ae5-616c-aa17-dc22edb614e6', 'subtask_of': 'DummyRef:|31b63d40-5758-e196-bb11-f7ec74ea21b3|SR.create', 'args': ['0'], 'host_ref': 'OpaqueRef:9fe68e4b-6e5f-854a-cb22-7c6ee30ea9fc', 'session_ref': 'OpaqueRef:349d0349-4b73-2464-c776-eee3b2119f68', 'device_config': {'device': '/dev/sda3', 'SRmaster': 'true'}, 'command': 'sr_create', 'sr_ref': 'OpaqueRef:073078b0-c6cd-d897-a2c4-42cc35d1c71d'}
Jan 31 22:28:52 xen652 SM: [4062] LVHDSR.create for d3e92b9d-1ae5-616c-aa17-dc22edb614e6
Jan 31 22:28:52 xen652 SM: [4062] ['/usr/sbin/vgs', 'VG_XenStorage-d3e92b9d-1ae5-616c-aa17-dc22edb614e6']
Jan 31 22:28:52 xen652 SM: [4062] FAILED in util.pread: (rc 5) stdout: '', stderr: ' Volume group "VG_XenStorage-d3e92b9d-1ae5-616c-aa17-dc22edb614e6" not found
Jan 31 22:28:52 xen652 SM: [4062] '
Jan 31 22:28:52 xen652 SM: [4062] ['sginfo', '-s', '/dev/sda3']
Jan 31 22:28:52 xen652 SM: [4062] pread SUCCESS
Jan 31 22:28:52 xen652 SM: [4062] ['scsi_id', '-g', '--device', '/dev/sda']
Jan 31 22:28:52 xen652 SM: [4062] FAILED in util.pread: (rc 1) stdout: '', stderr: 'scsi_id: invalid option -- -
Jan 31 22:28:52 xen652 SM: [4062] '
Jan 31 22:28:52 xen652 SM: [4062] ['scsi_id', '-g', '--device', '/dev/sda'] failed with ('Operation not permitted',)
Jan 31 22:28:52 xen652 SM: [4062] ['scsi_id', '-g', '-s', '/block/sda']
Jan 31 22:28:52 xen652 SM: [4062] FAILED in util.pread: (rc 1) stdout: '', stderr: ''
Jan 31 22:28:52 xen652 SM: [4062] ['scsi_id', '-g', '-s', '/block/sda'] failed with ('Operation not permitted',)
Jan 31 22:28:52 xen652 SM: [4062] util.test_scsiserial: Unable to verify serial or SCSIid of device: /dev/sda3
Jan 31 22:28:52 xen652 SM: [4062] ['/bin/dd', 'if=/dev/zero', 'of=/dev/sda3', 'bs=1M', 'count=10', 'oflag=direct']
Jan 31 22:28:52 xen652 SM: [4062] pread SUCCESS
Jan 31 22:28:52 xen652 SM: [4062] ['/usr/sbin/pvcreate', '--metadatasize', '10M', '/dev/sda3']
Jan 31 22:28:52 xen652 SM: [4062] pread SUCCESS
Jan 31 22:28:52 xen652 SM: [4062] ['/usr/sbin/vgcreate', 'VG_XenStorage-d3e92b9d-1ae5-616c-aa17-dc22edb614e6', '/dev/sda3']
Jan 31 22:28:52 xen652 SM: [4062] pread SUCCESS
Jan 31 22:28:52 xen652 SM: [4062] ['/usr/sbin/vgchange', '-an', '--master', 'VG_XenStorage-d3e92b9d-1ae5-616c-aa17-dc22edb614e6']
Jan 31 22:28:52 xen652 SM: [4062] pread SUCCESS
Jan 31 22:28:52 xen652 SM: [4062] ['scsi_id', '-g', '--device', '/dev/sda']
Jan 31 22:28:52 xen652 SM: [4062] FAILED in util.pread: (rc 1) stdout: '', stderr: 'scsi_id: invalid option -- -
Jan 31 22:28:52 xen652 SM: [4062] '
Jan 31 22:28:52 xen652 SM: [4062] ['scsi_id', '-g', '--device', '/dev/sda'] failed with ('Operation not permitted',)
Jan 31 22:28:52 xen652 SM: [4062] ['scsi_id', '-g', '-s', '/block/sda']
Jan 31 22:28:52 xen652 SM: [4062] FAILED in util.pread: (rc 1) stdout: '', stderr: ''
Jan 31 22:28:52 xen652 SM: [4062] ['scsi_id', '-g', '-s', '/block/sda'] failed with ('Operation not permitted',)
Jan 31 22:28:52 xen652 SM: [4062] lock: released /var/lock/sm/d3e92b9d-1ae5-616c-aa17-dc22edb614e6/sr
Jan 31 22:28:52 xen652 SM: [4062] lock: closed /var/lock/sm/d3e92b9d-1ae5-616c-aa17-dc22edb614e6/sr
Jan 31 22:28:52 xen652 SM: [4095] lock: opening lock file /var/lock/sm/d3e92b9d-1ae5-616c-aa17-dc22edb614e6/sr
Jan 31 22:28:52 xen652 SM: [4095] LVMCache created for VG_XenStorage-d3e92b9d-1ae5-616c-aa17-dc22edb614e6
Jan 31 22:28:52 xen652 SM: [4095] ['/usr/sbin/vgs', 'VG_XenStorage-d3e92b9d-1ae5-616c-aa17-dc22edb614e6']
Jan 31 22:28:52 xen652 SM: [4095] pread SUCCESS
Jan 31 22:28:52 xen652 SM: [4095] lock: acquired /var/lock/sm/d3e92b9d-1ae5-616c-aa17-dc22edb614e6/sr
Jan 31 22:28:52 xen652 SM: [4095] LVMCache: will initialize now
Jan 31 22:28:52 xen652 SM: [4095] LVMCache: refreshing
Jan 31 22:28:52 xen652 SM: [4095] ['/usr/sbin/lvs', '--noheadings', '--units', 'b', '-o', '+lv_tags', '/dev/VG_XenStorage-d3e92b9d-1ae5-616c-aa17-dc22edb614e6']
Jan 31 22:28:52 xen652 SM: [4095] pread SUCCESS
Jan 31 22:28:52 xen652 SM: [4095] lock: released /var/lock/sm/d3e92b9d-1ae5-616c-aa17-dc22edb614e6/sr
Jan 31 22:28:52 xen652 SM: [4095] lock: acquired /var/lock/sm/d3e92b9d-1ae5-616c-aa17-dc22edb614e6/sr
Jan 31 22:28:52 xen652 SM: [4095] sr_attach {'sr_uuid': 'd3e92b9d-1ae5-616c-aa17-dc22edb614e6', 'subtask_of': 'DummyRef:|e6bee5f4-4147-cffb-94bc-e7eba5f6b54f|SR.attach', 'args': [], 'host_ref': 'OpaqueRef:9fe68e4b-6e5f-854a-cb22-7c6ee30ea9fc', 'sessi

> Local storage - Live VM migration fails
> ---------------------------------------
>
>                 Key: CLOUDSTACK-8196
>                 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8196
>             Project: CloudStack
>          Issue Type: Bug
>      Security Level: Public(Anyone can view this level - this is the default.)
>          Components: Volumes
>    Affects Versions: 4.5.0
>        Environment: Xenserver 6.5
>            Reporter: Abhinandan Prateek
>            Priority: Blocker
>             Fix For: 4.5.0
>
>
> When you live migrate a VM with its root volume on local storage, it fails with the following in the logs:
> 2015-02-03 17:41:17,697 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-5:ctx-bf4fe907 job-29/job-30) Executing AsyncJobVO {id:30, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkMigrate, cmdInfo: rO0ABXNyACVjb20uY2xvdWQudm0uVm1Xb3JrTWlncmF0ZVdpdGhTdG9yYWdlsew9z6UxtXMCAANKAApkZXN0SG9zdElkSgAJc3JjSG9zdElkTAAMdm9sdW1lVG9Qb29sdAAPTGphdmEvdXRpbC9NYXA7eHIAE2NvbS5jbG91ZC52bS5WbVdvcmufmbZW8CVnawIABEoACWFjY291bnRJZEoABnVzZXJJZEoABHZtSWRMAAtoYW5kbGVyTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO3hwAAAAAAAAAAIAAAAAAAAAAgAAAAAAAAAEdAAZVmlydHVhbE1hY2hpbmVNYW5hZ2VySW1wbAAAAAAAAAAEAAAAAAAAAAFzcgARamF2YS51dGlsLkhhc2hNYXAFB9rBwxZg0QMAAkYACmxvYWRGYWN0b3JJAAl0aHJlc2hvbGR4cD9AAAAAAAAQdwgAAAAQAAAAAHg,
cmdVersion: 0, statu= s: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 34= 5043735628, completeMsid: null, lastUpdated: null, lastPolled: null, create= d: Tue Feb 03 17:41:16 IST 2015} > 2015-02-03 17:41:17,698 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Execu= tor-5:ctx-bf4fe907 job-29/job-30) Run VM work job: com.cloud.vm.VmWorkMigra= te for VM 4, job origin: 29 > 2015-02-03 17:41:17,699 DEBUG [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Exe= cutor-5:ctx-bf4fe907 job-29/job-30 ctx-4ddcb611) Execute VM work job: com.c= loud.vm.VmWorkMigrateWithStorage{"srcHostId":1,"destHostId":4,"volumeToPool= ":{},"userId":2,"accountId":2,"vmId":4,"handlerName":"VirtualMachineManager= Impl"} > 2015-02-03 17:41:17,699 DEBUG [o.a.c.s.SecondaryStorageManagerImpl] (secs= torage-1:ctx-821cceb5) Zone 1 is ready to launch secondary storage VM > 2015-02-03 17:41:17,706 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work= -Job-Executor-5:ctx-bf4fe907 job-29/job-30 ctx-4ddcb611) LocalStoragePoolAl= locator trying to find storage pool to fit the vm > 2015-02-03 17:41:17,710 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator] (W= ork-Job-Executor-5:ctx-bf4fe907 job-29/job-30 ctx-4ddcb611) Checking if sto= rage pool is suitable, name: null ,poolId: 2 > 2015-02-03 17:41:17,712 INFO [c.c.s.StorageManagerImpl] (Work-Job-Execut= or-5:ctx-bf4fe907 job-29/job-30 ctx-4ddcb611) Storage pool null (2) does no= t supply IOPS capacity, assuming enough capacity > 2015-02-03 17:41:17,714 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Execut= or-5:ctx-bf4fe907 job-29/job-30 ctx-4ddcb611) Checking pool 2 for storage, = totalSize: 34347155456, usedBytes: 4194304, usedPct: 1.2211503236048357E-4,= disable threshold: 0.99 > 2015-02-03 17:41:17,718 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Execut= or-5:ctx-bf4fe907 job-29/job-30 ctx-4ddcb611) Found storage pool xen652 Loc= al Storage of type LVM > 2015-02-03 17:41:17,718 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Execut= or-5:ctx-bf4fe907 job-29/job-30 ctx-4ddcb611) Total capacity of the pool xe= n652 Local Storage id: 2 is 34347155456 > 2015-02-03 17:41:17,718 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Execut= or-5:ctx-bf4fe907 job-29/job-30 ctx-4ddcb611) Checking pool: 2 for volume a= llocation [Vol[4|vm=3D4|ROOT]], maxSize : 34347155456, totalAllocatedSize := 0, askingSize : 0, allocated disable threshold: 0.99 > 2015-02-03 17:41:17,718 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work= -Job-Executor-5:ctx-bf4fe907 job-29/job-30 ctx-4ddcb611) Found suitable loc= al storage pool 2, adding to list > 2015-02-03 17:41:17,718 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work= -Job-Executor-5:ctx-bf4fe907 job-29/job-30 ctx-4ddcb611) LocalStoragePoolAl= locator returning 1 suitable storage pools > 2015-02-03 17:41:17,719 DEBUG [o.a.c.s.a.ClusterScopeStoragePoolAllocator= ] (Work-Job-Executor-5:ctx-bf4fe907 job-29/job-30 ctx-4ddcb611) ClusterScop= eStoragePoolAllocator looking for storage pool > 2015-02-03 17:41:17,719 DEBUG [o.a.c.s.a.ZoneWideStoragePoolAllocator] (W= ork-Job-Executor-5:ctx-bf4fe907 job-29/job-30 ctx-4ddcb611) ZoneWideStorage= PoolAllocator to find storage pool > 2015-02-03 17:41:17,723 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor= -5:ctx-bf4fe907 job-29/job-30 ctx-4ddcb611) Service SecurityGroup is not su= pported in the network id=3D204 > 2015-02-03 17:41:17,726 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Ex= ecutor-5:ctx-bf4fe907 job-29/job-30 ctx-4ddcb611) Preparing 1 volumes for V= M[User|i-2-4-VM] > 2015-02-03 17:41:17,747 DEBUG 
[c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-5:ctx-bf4fe907 job-29/job-30 ctx-4ddcb611) VM state transitted from :Ru= nning to Migrating with event: MigrationRequestedvm's original host id: 1 n= ew host id: 4 host id before state transition: 1 > 2015-02-03 17:41:17,754 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-5:ctx-bf4fe907 job-29/job-30 ctx-4ddcb611) Hosts's actual total CPU: 63= 82 and CPU after applying overprovisioning: 25528 > 2015-02-03 17:41:17,754 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-5:ctx-bf4fe907 job-29/job-30 ctx-4ddcb611) We are allocating VM, increa= sing the used capacity of this host:4 > 2015-02-03 17:41:17,754 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-5:ctx-bf4fe907 job-29/job-30 ctx-4ddcb611) Current Used CPU: 0 , Free C= PU:25528 ,Requested CPU: 500 > 2015-02-03 17:41:17,754 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-5:ctx-bf4fe907 job-29/job-30 ctx-4ddcb611) Current Used RAM: 0 , Free R= AM:3266390016 ,Requested RAM: 134217728 > 2015-02-03 17:41:17,754 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-5:ctx-bf4fe907 job-29/job-30 ctx-4ddcb611) CPU STATS after allocation: = for host: 4, old used: 0, old reserved: 0, actual total: 6382, total with o= verprovisioning: 25528; new used:500, reserved:0; requested cpu:500,alloc_f= rom_last:false > 2015-02-03 17:41:17,754 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-5:ctx-bf4fe907 job-29/job-30 ctx-4ddcb611) RAM STATS after allocation: = for host: 4, old used: 0, old reserved: 0, total: 3266390016; new used: 134= 217728, reserved: 0; requested mem: 134217728,alloc_from_last:false > 2015-02-03 17:41:17,777 DEBUG [c.c.c.ConsoleProxyManagerImpl] (consolepro= xy-1:ctx-656ffab6) Zone 1 is ready to launch console proxy > 2015-02-03 17:41:17,785 DEBUG [c.c.a.t.Request] (Work-Job-Executor-5:ctx-= bf4fe907 job-29/job-30 ctx-4ddcb611) Seq 4-379428268605964298: Sending { C= md , MgmtId: 345043735628, via: 4(xen652), Ver: v1, Flags: 100111, [{"com.c= loud.agent.api.MigrateWithStorageCommand":{"vm":{"id":4,"name":"i-2-4-VM","= bootloader":"PyGrub","type":"User","cpus":1,"minSpeed":500,"maxSpeed":500,"= minRam":134217728,"maxRam":134217728,"arch":"x86_64","os":"CentOS 5.6 (64-b= it)","platformEmulator":"CentOS 5 (64-bit)","bootArgs":"","enableHA":false,= "limitCpuUse":false,"enableDynamicallyScaleVm":true,"vncPassword":"2M0FMXjO= hAukbg729Jj05Q=3D=3D","params":{"memoryOvercommitRatio":"1.0","platform":"v= iridian:true;acpi:1;apic:true;pae:true;nx:true","cpuOvercommitRatio":"4.0",= "hypervisortoolsversion":"xenserver56"},"uuid":"e9b86cce-fdc2-4b26-bb6e-322= 118f30ddf","disks":[{"data":{"org.apache.cloudstack.storage.to.VolumeObject= TO":{"uuid":"cfae7b2b-d019-414a-8332-b87a59577392","volumeType":"ROOT","dat= aStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"70d= 1dc4e-b70c-0b44-9f87-794905e5863d","id":1,"poolType":"LVM","host":"192.168.= 217.12","path":"lvm","port":0,"url":"LVM://192.168.217.12/lvm/?ROLE=3DPrima= ry&STOREUUID=3D70d1dc4e-b70c-0b44-9f87-794905e5863d"}},"name":"ROOT-4","siz= e":21474836480,"path":"b2baca19-68e9-4b11-8091-48f95f11e9c7","volumeId":4,"= vmName":"i-2-4-VM","accountId":2,"format":"VHD","provisioningType":"THIN","= id":4,"deviceId":0,"hypervisorType":"XenServer"}},"diskSeq":0,"path":"b2bac= a19-68e9-4b11-8091-48f95f11e9c7","type":"ROOT","_details":{"managed":"false= ","storagePort":"0","storageHost":"192.168.217.12","volumeSize":"2147483648= 0"}},{"data":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"id":0,"= 
format":"ISO","accountId":0,"hvm":false}},"diskSeq":3,"type":"ISO"}],"nics"= :[{"deviceId":0,"networkRateMbps":-1,"defaultNic":true,"pxeDisable":false,"= nicUuid":"d868298c-70ce-4523-a61e-4d201a091d9f","uuid":"198d58c3-333b-43f1-= 92d1-7163a6207059","ip":"10.1.1.132","netmask":"255.255.255.0","gateway":"1= 0.1.1.1","mac":"02:00:39:b9:00:02","dns1":"192.168.100.1","broadcastType":"= Vlan","type":"Guest","broadcastUri":"vlan://646","isolationUri":"vlan://646= ","isSecurityGroupEnabled":false,"name":"GUEST"}],"vcpuMaxLimit":16},"volum= eToFiler":{"Vol[4|ROOT|b2baca19-68e9-4b11-8091-48f95f11e9c7|21474836480]":{= "id":2,"uuid":"d3e92b9d-1ae5-616c-aa17-dc22edb614e6","host":"192.168.217.14= ","path":"lvm","port":0,"type":"LVM"}},"wait":0}}] } > 2015-02-03 17:41:17,786 DEBUG [c.c.a.t.Request] (Work-Job-Executor-5:ctx-= bf4fe907 job-29/job-30 ctx-4ddcb611) Seq 4-379428268605964298: Executing: = { Cmd , MgmtId: 345043735628, via: 4(xen652), Ver: v1, Flags: 100111, [{"co= m.cloud.agent.api.MigrateWithStorageCommand":{"vm":{"id":4,"name":"i-2-4-VM= ","bootloader":"PyGrub","type":"User","cpus":1,"minSpeed":500,"maxSpeed":50= 0,"minRam":134217728,"maxRam":134217728,"arch":"x86_64","os":"CentOS 5.6 (6= 4-bit)","platformEmulator":"CentOS 5 (64-bit)","bootArgs":"","enableHA":fal= se,"limitCpuUse":false,"enableDynamicallyScaleVm":true,"vncPassword":"2M0FM= XjOhAukbg729Jj05Q=3D=3D","params":{"memoryOvercommitRatio":"1.0","platform"= :"viridian:true;acpi:1;apic:true;pae:true;nx:true","cpuOvercommitRatio":"4.= 0","hypervisortoolsversion":"xenserver56"},"uuid":"e9b86cce-fdc2-4b26-bb6e-= 322118f30ddf","disks":[{"data":{"org.apache.cloudstack.storage.to.VolumeObj= ectTO":{"uuid":"cfae7b2b-d019-414a-8332-b87a59577392","volumeType":"ROOT","= dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"= 70d1dc4e-b70c-0b44-9f87-794905e5863d","id":1,"poolType":"LVM","host":"192.1= 68.217.12","path":"lvm","port":0,"url":"LVM://192.168.217.12/lvm/?ROLE=3DPr= imary&STOREUUID=3D70d1dc4e-b70c-0b44-9f87-794905e5863d"}},"name":"ROOT-4","= size":21474836480,"path":"b2baca19-68e9-4b11-8091-48f95f11e9c7","volumeId":= 4,"vmName":"i-2-4-VM","accountId":2,"format":"VHD","provisioningType":"THIN= ","id":4,"deviceId":0,"hypervisorType":"XenServer"}},"diskSeq":0,"path":"b2= baca19-68e9-4b11-8091-48f95f11e9c7","type":"ROOT","_details":{"managed":"fa= lse","storagePort":"0","storageHost":"192.168.217.12","volumeSize":"2147483= 6480"}},{"data":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"id":= 0,"format":"ISO","accountId":0,"hvm":false}},"diskSeq":3,"type":"ISO"}],"ni= cs":[{"deviceId":0,"networkRateMbps":-1,"defaultNic":true,"pxeDisable":fals= e,"nicUuid":"d868298c-70ce-4523-a61e-4d201a091d9f","uuid":"198d58c3-333b-43= f1-92d1-7163a6207059","ip":"10.1.1.132","netmask":"255.255.255.0","gateway"= :"10.1.1.1","mac":"02:00:39:b9:00:02","dns1":"192.168.100.1","broadcastType= ":"Vlan","type":"Guest","broadcastUri":"vlan://646","isolationUri":"vlan://= 646","isSecurityGroupEnabled":false,"name":"GUEST"}],"vcpuMaxLimit":16},"vo= lumeToFiler":{"Vol[4|ROOT|b2baca19-68e9-4b11-8091-48f95f11e9c7|21474836480]= ":{"id":2,"uuid":"d3e92b9d-1ae5-616c-aa17-dc22edb614e6","host":"192.168.217= .14","path":"lvm","port":0,"type":"LVM"}},"wait":0}}] } > 2015-02-03 17:41:17,786 DEBUG [c.c.a.m.DirectAgentAttache] (DirectAgent-4= 7:ctx-d9cdee8c) Seq 4-379428268605964298: Executing request > 2015-02-03 17:41:17,838 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent= -47:ctx-d9cdee8c) SR retrieved for d3e92b9d-1ae5-616c-aa17-dc22edb614e6 > 
2015-02-03 17:41:17,843 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent= -47:ctx-d9cdee8c) Checking d3e92b9d-1ae5-616c-aa17-dc22edb614e6 or SR d3e92= b9d-1ae5-616c-aa17-dc22edb614e6 on XS[2df4562d-5f47-4345-9fed-95da6b798b45-= 192.168.217.14] > 2015-02-03 17:41:19,879 DEBUG [c.c.a.ApiServlet] (1369188023@qtp-20438506= 24-13:ctx-2054c360) =3D=3D=3DSTART=3D=3D=3D 192.168.100.30 -- GET command= =3DqueryAsyncJobResult&jobId=3De5f2660a-4c59-46f5-99ad-240e833354f9&respons= e=3Djson&sessionkey=3DsiNTLV8YaFC1WTwW5bGuCzZIKE8%3D&_=3D1422945678234 > 2015-02-03 17:41:19,896 DEBUG [c.c.a.ApiServlet] (1369188023@qtp-20438506= 24-13:ctx-2054c360 ctx-2341000e) =3D=3D=3DEND=3D=3D=3D 192.168.100.30 -- G= ET command=3DqueryAsyncJobResult&jobId=3De5f2660a-4c59-46f5-99ad-240e83335= 4f9&response=3Djson&sessionkey=3DsiNTLV8YaFC1WTwW5bGuCzZIKE8%3D&_=3D1422945= 678234 > 2015-02-03 17:41:21,092 DEBUG [c.c.a.m.AgentManagerImpl] (AgentManager-Ha= ndler-2:null) SeqA 2-248: Processing Seq 2-248: { Cmd , MgmtId: -1, via: 2= , Ver: v1, Flags: 11, [{"com.cloud.agent.api.ConsoleProxyLoadReportCommand"= :{"_proxyVmId":2,"_loadInfo":"{\n \"connections\": []\n}","wait":0}}] } > 2015-02-03 17:41:21,096 DEBUG [c.c.a.m.AgentManagerImpl] (AgentManager-Ha= ndler-2:null) SeqA 2-248: Sending Seq 2-248: { Ans: , MgmtId: 345043735628= , via: 2, Ver: v1, Flags: 100010, [{"com.cloud.agent.api.AgentControlAnswer= ":{"result":true,"wait":0}}] } > 2015-02-03 17:41:21,691 INFO [o.a.c.f.j.i.AsyncJobManagerImpl] (AsyncJob= Mgr-Heartbeat-1:ctx-66d27604) Begin cleanup expired async-jobs > 2015-02-03 17:41:21,694 INFO [o.a.c.f.j.i.AsyncJobManagerImpl] (AsyncJob= Mgr-Heartbeat-1:ctx-66d27604) End cleanup expired async-jobs > 2015-02-03 17:41:22,881 DEBUG [c.c.a.ApiServlet] (1369188023@qtp-20438506= 24-13:ctx-ab1c5d96) =3D=3D=3DSTART=3D=3D=3D 192.168.100.30 -- GET command= =3DqueryAsyncJobResult&jobId=3De5f2660a-4c59-46f5-99ad-240e833354f9&respons= e=3Djson&sessionkey=3DsiNTLV8YaFC1WTwW5bGuCzZIKE8%3D&_=3D1422945681235 > 2015-02-03 17:41:22,900 DEBUG [c.c.a.ApiServlet] (1369188023@qtp-20438506= 24-13:ctx-ab1c5d96 ctx-2593d89e) =3D=3D=3DEND=3D=3D=3D 192.168.100.30 -- G= ET command=3DqueryAsyncJobResult&jobId=3De5f2660a-4c59-46f5-99ad-240e83335= 4f9&response=3Djson&sessionkey=3DsiNTLV8YaFC1WTwW5bGuCzZIKE8%3D&_=3D1422945= 681235 > 2015-02-03 17:41:25,885 DEBUG [c.c.a.ApiServlet] (1369188023@qtp-20438506= 24-13:ctx-427870e5) =3D=3D=3DSTART=3D=3D=3D 192.168.100.30 -- GET command= =3DqueryAsyncJobResult&jobId=3De5f2660a-4c59-46f5-99ad-240e833354f9&respons= e=3Djson&sessionkey=3DsiNTLV8YaFC1WTwW5bGuCzZIKE8%3D&_=3D1422945684238 > 2015-02-03 17:41:25,894 DEBUG [c.c.a.ApiServlet] (1369188023@qtp-20438506= 24-13:ctx-427870e5 ctx-5d7c65ce) =3D=3D=3DEND=3D=3D=3D 192.168.100.30 -- G= ET command=3DqueryAsyncJobResult&jobId=3De5f2660a-4c59-46f5-99ad-240e83335= 4f9&response=3Djson&sessionkey=3DsiNTLV8YaFC1WTwW5bGuCzZIKE8%3D&_=3D1422945= 684238 > 2015-02-03 17:41:25,898 WARN [c.c.h.x.r.CitrixResourceBase] (DirectAgent= -47:ctx-d9cdee8c) Task failed! 
Task record: uuid: 80e2ab71-fa8d-ebbd-fdae-3edb3657f459
> nameLabel: Async.VM.migrate_send
> nameDescription:
> allowedOperations: []
> currentOperations: {}
> created: Sat Jan 31 11:38:59 IST 2015
> finished: Sat Jan 31 11:39:07 IST 2015
> status: failure
> residentOn: com.xensource.xenapi.Host@50b4f213
> progress: 1.0
> type:
> result:
> errorInfo: [SR_BACKEND_FAILURE_44, , There is insufficient space]
> otherConfig: {}
> subtaskOf: com.xensource.xenapi.Task@aaf13f6f
> subtasks: []
> 2015-02-03 17:41:25,904 WARN [c.c.h.x.r.XenServer610Resource] (DirectAgent-47:ctx-d9cdee8c) Catch Exception com.xensource.xenapi.Types$BadAsyncResult. Storage motion failed due to Task failed! Task record: uuid: 80e2ab71-fa8d-ebbd-fdae-3edb3657f459
> nameLabel: Async.VM.migrate_send
> nameDescription:
> allowedOperations: []
> currentOperations: {}
> created: Sat Jan 31 11:38:59 IST 2015
> finished: Sat Jan 31 11:39:07 IST 2015
> status: failure
> residentOn: com.xensource.xenapi.Host@50b4f213
> progress: 1.0
> type:
> result:
> errorInfo: [SR_BACKEND_FAILURE_44, , There is insufficient space]
> otherConfig: {}
> subtaskOf: com.xensource.xenapi.Task@aaf13f6f
> subtasks: []
> Task failed! Task record: uuid: 80e2ab71-fa8d-ebbd-fdae-3edb3657f459
> nameLabel: Async.VM.migrate_send
> nameDescription:
> allowedOperations: []
> currentOperations: {}
> created: Sat Jan 31 11:38:59 IST 2015
> finished: Sat Jan 31 11:39:07 IST 2015
> status: failure
> residentOn: com.xensource.xenapi.Host@50b4f213
> progress: 1.0
> type:
> result:
> errorInfo: [SR_BACKEND_FAILURE_44, , There is insufficient space]
> otherConfig: {}
> subtaskOf: com.xensource.xenapi.Task@aaf13f6f
> subtasks: []
>     at com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.checkForSuccess(CitrixResourceBase.java:3203)
>     at com.cloud.hypervisor.xenserver.resource.XenServer610Resource.execute(XenServer610Resource.java:170)
>     at com.cloud.hypervisor.xenserver.resource.XenServer610Resource.executeRequest(XenServer610Resource.java:77)
>     at com.cloud.hypervisor.xenserver.resource.XenServer620SP1Resource.executeRequest(XenServer620SP1Resource.java:65)
>     at com.cloud.agent.manager.DirectAgentAttache$Task.runInContext(DirectAgentAttache.java:304)
>     at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
>     at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
>     at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
>     at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
>     at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
>     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:745)
> 2015-02-03 17:41:25,910 DEBUG [c.c.h.x.r.XenServer610Resource] (DirectAge
nt-47:ctx-d9cdee8c) Unable to destroy task com.xensource.xenapi.Task@75fb5c= on host 2df4562d-5f47-4345-9fed-95da6b798b45 due to You gave an invalid ob= ject reference. The object may have recently been deleted. The class para= meter gives the type of reference given, and the handle parameter echoes th= e bad value given. > 2015-02-03 17:41:25,910 DEBUG [c.c.a.m.DirectAgentAttache] (DirectAgent-4= 7:ctx-d9cdee8c) Seq 4-379428268605964298: Response Received:=20 > 2015-02-03 17:41:25,910 DEBUG [c.c.a.t.Request] (DirectAgent-47:ctx-d9cde= e8c) Seq 4-379428268605964298: Processing: { Ans: , MgmtId: 345043735628, = via: 4, Ver: v1, Flags: 110, [{"com.cloud.agent.api.MigrateWithStorageAnswe= r":{"result":false,"details":"Exception: com.xensource.xenapi.Types$BadAsyn= cResult\nMessage: null\nStack: Task failed! Task record: uu= id: 80e2ab71-fa8d-ebbd-fdae-3edb3657f459\n nameLabel: Async.VM.mi= grate_send\n nameDescription: \n allowedOperations: []\n currentOpe= rations: {}\n created: Sat Jan 31 11:38:59 IST 2015\n = finished: Sat Jan 31 11:39:07 IST 2015\n status: failure\n = residentOn: com.xensource.xenapi.Host@50b4f213\n progres= s: 1.0\n type: \n result: \n e= rrorInfo: [SR_BACKEND_FAILURE_44, , There is insufficient space]\n = otherConfig: {}\n subtaskOf: com.xensource.xenapi.Task@aaf13f6f\n= subtasks: []\n\n\tat com.cloud.hypervisor.xenserver.resource.Ci= trixResourceBase.checkForSuccess(CitrixResourceBase.java:3203)\n\tat com.cl= oud.hypervisor.xenserver.resource.XenServer610Resource.execute(XenServer610= Resource.java:170)\n\tat com.cloud.hypervisor.xenserver.resource.XenServer6= 10Resource.executeRequest(XenServer610Resource.java:77)\n\tat com.cloud.hyp= ervisor.xenserver.resource.XenServer620SP1Resource.executeRequest(XenServer= 620SP1Resource.java:65)\n\tat com.cloud.agent.manager.DirectAgentAttache$Ta= sk.runInContext(DirectAgentAttache.java:304)\n\tat org.apache.cloudstack.ma= naged.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)\= n\tat org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.ca= ll(DefaultManagedContext.java:56)\n\tat org.apache.cloudstack.managed.conte= xt.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:10= 3)\n\tat org.apache.cloudstack.managed.context.impl.DefaultManagedContext.r= unWithContext(DefaultManagedContext.java:53)\n\tat org.apache.cloudstack.ma= naged.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)\n\= tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)= \n\tat java.util.concurrent.FutureTask.run(FutureTask.java:262)\n\tat java.= util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(= ScheduledThreadPoolExecutor.java:178)\n\tat java.util.concurrent.ScheduledT= hreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:= 292)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExec= utor.java:1145)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(Th= readPoolExecutor.java:615)\n\tat java.lang.Thread.run(Thread.java:745)\n","= wait":0}}] } > 2015-02-03 17:41:25,910 DEBUG [c.c.a.m.AgentAttache] (DirectAgent-47:ctx-= d9cdee8c) Seq 4-379428268605964298: No more commands found > 2015-02-03 17:41:25,910 DEBUG [c.c.a.t.Request] (Work-Job-Executor-5:ctx-= bf4fe907 job-29/job-30 ctx-4ddcb611) Seq 4-379428268605964298: Received: {= Ans: , MgmtId: 345043735628, via: 4, Ver: v1, Flags: 110, { MigrateWithSto= rageAnswer } } > 2015-02-03 17:41:25,910 ERROR [o.a.c.s.m.XenServerStorageMotionStrategy] = 
(Work-Job-Executor-5:ctx-bf4fe907 job-29/job-30 ctx-4ddcb611) Migration wit= h storage of vm VM[User|i-2-4-VM] failed. Details: Exception: com.xensource= .xenapi.Types$BadAsyncResult > Message: null > Stack: Task failed! Task record: uuid: 80e2ab71-fa8d-ebbd= -fdae-3edb3657f459 > nameLabel: Async.VM.migrate_send > nameDescription:=20 > allowedOperations: [] > currentOperations: {} > created: Sat Jan 31 11:38:59 IST 2015 > finished: Sat Jan 31 11:39:07 IST 2015 > status: failure > residentOn: com.xensource.xenapi.Host@50b4f213 > progress: 1.0 > type: > result:=20 > errorInfo: [SR_BACKEND_FAILURE_44, , There is insufficient spa= ce] > otherConfig: {} > subtaskOf: com.xensource.xenapi.Task@aaf13f6f > subtasks: [] > =09at com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.checkFor= Success(CitrixResourceBase.java:3203) > =09at com.cloud.hypervisor.xenserver.resource.XenServer610Resource.execut= e(XenServer610Resource.java:170) > =09at com.cloud.hypervisor.xenserver.resource.XenServer610Resource.execut= eRequest(XenServer610Resource.java:77) > =09at com.cloud.hypervisor.xenserver.resource.XenServer620SP1Resource.exe= cuteRequest(XenServer620SP1Resource.java:65) > =09at com.cloud.agent.manager.DirectAgentAttache$Task.runInContext(Direct= AgentAttache.java:304) > =09at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(= ManagedContextRunnable.java:49) > =09at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.= call(DefaultManagedContext.java:56) > =09at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.ca= llWithContext(DefaultManagedContext.java:103) > =09at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.ru= nWithContext(DefaultManagedContext.java:53) > =09at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(Ma= nagedContextRunnable.java:46) > =09at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:= 471) > =09at java.util.concurrent.FutureTask.run(FutureTask.java:262) > =09at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTas= k.access$201(ScheduledThreadPoolExecutor.java:178) > =09at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTas= k.run(ScheduledThreadPoolExecutor.java:292) > =09at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecuto= r.java:1145) > =09at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecut= or.java:615) > =09at java.lang.Thread.run(Thread.java:745) > 2015-02-03 17:41:25,911 ERROR [o.a.c.s.m.XenServerStorageMotionStrategy] = (Work-Job-Executor-5:ctx-bf4fe907 job-29/job-30 ctx-4ddcb611) copy failed > com.cloud.utils.exception.CloudRuntimeException: Error while migrating th= e vm VM[User|i-2-4-VM] to host Host[-4-Routing]. Exception: com.xensource.x= enapi.Types$BadAsyncResult > Message: null > Stack: Task failed! 
Task record: uuid: 80e2ab71-fa8d-ebbd= -fdae-3edb3657f459 > nameLabel: Async.VM.migrate_send > nameDescription:=20 > allowedOperations: [] > currentOperations: {} > created: Sat Jan 31 11:38:59 IST 2015 > finished: Sat Jan 31 11:39:07 IST 2015 > status: failure > residentOn: com.xensource.xenapi.Host@50b4f213 > progress: 1.0 > type: > result:=20 > errorInfo: [SR_BACKEND_FAILURE_44, , There is insufficient spa= ce] > otherConfig: {} > subtaskOf: com.xensource.xenapi.Task@aaf13f6f > subtasks: [] > =09at com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.checkFor= Success(CitrixResourceBase.java:3203) > =09at com.cloud.hypervisor.xenserver.resource.XenServer610Resource.execut= e(XenServer610Resource.java:170) > =09at com.cloud.hypervisor.xenserver.resource.XenServer610Resource.execut= eRequest(XenServer610Resource.java:77) > =09at com.cloud.hypervisor.xenserver.resource.XenServer620SP1Resource.exe= cuteRequest(XenServer620SP1Resource.java:65) > =09at com.cloud.agent.manager.DirectAgentAttache$Task.runInContext(Direct= AgentAttache.java:304) > =09at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(= ManagedContextRunnable.java:49) > =09at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.= call(DefaultManagedContext.java:56) > =09at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.ca= llWithContext(DefaultManagedContext.java:103) > =09at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.ru= nWithContext(DefaultManagedContext.java:53) > =09at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(Ma= nagedContextRunnable.java:46) > =09at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:= 471) > =09at java.util.concurrent.FutureTask.run(FutureTask.java:262) > =09at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTas= k.access$201(ScheduledThreadPoolExecutor.java:178) > =09at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTas= k.run(ScheduledThreadPoolExecutor.java:292) > =09at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecuto= r.java:1145) > =09at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecut= or.java:615) > =09at java.lang.Thread.run(Thread.java:745) > =09at org.apache.cloudstack.storage.motion.XenServerStorageMotionStrategy= .migrateVmWithVolumesWithinCluster(XenServerStorageMotionStrategy.java:211) > =09at org.apache.cloudstack.storage.motion.XenServerStorageMotionStrategy= .copyAsync(XenServerStorageMotionStrategy.java:115) > =09at org.apache.cloudstack.storage.motion.DataMotionServiceImpl.copyAsyn= c(DataMotionServiceImpl.java:90) > =09at org.apache.cloudstack.storage.volume.VolumeServiceImpl.migrateVolum= es(VolumeServiceImpl.java:1156) > =09at org.apache.cloudstack.engine.orchestration.VolumeOrchestrator.migra= teVolumes(VolumeOrchestrator.java:986) > =09at com.cloud.vm.VirtualMachineManagerImpl.orchestrateMigrateWithStorag= e(VirtualMachineManagerImpl.java:2169) > =09at com.cloud.vm.VirtualMachineManagerImpl.orchestrateMigrateWithStorag= e(VirtualMachineManagerImpl.java:4493) > =09at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > =09at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImp= l.java:57) > =09at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcc= essorImpl.java:43) > =09at java.lang.reflect.Method.invoke(Method.java:606) > =09at com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandler= Proxy.java:107) > =09at 
com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachi= neManagerImpl.java:4596) > =09at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:10= 2) > =09at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run= InContext(AsyncJobManagerImpl.java:536) > =09at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(= ManagedContextRunnable.java:49) > =09at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.= call(DefaultManagedContext.java:56) > =09at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.ca= llWithContext(DefaultManagedContext.java:103) > =09at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.ru= nWithContext(DefaultManagedContext.java:53) > =09at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(Ma= nagedContextRunnable.java:46) > =09at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run= (AsyncJobManagerImpl.java:493) > =09at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:= 471) > =09at java.util.concurrent.FutureTask.run(FutureTask.java:262) > =09at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecuto= r.java:1145) > =09at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecut= or.java:615) > =09at java.lang.Thread.run(Thread.java:745) > 2015-02-03 17:41:25,920 WARN [o.a.c.s.d.ObjectInDataStoreManagerImpl] (W= ork-Job-Executor-5:ctx-bf4fe907 job-29/job-30 ctx-4ddcb611) Unsupported dat= a object (VOLUME, org.apache.cloudstack.storage.datastore.PrimaryDataStoreI= mpl@739ed156), no need to delete from object in store ref table > 2015-02-03 17:41:25,920 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Ex= ecutor-5:ctx-bf4fe907 job-29/job-30 ctx-4ddcb611) Failed to migrated vm VM[= User|i-2-4-VM] along with its volumes. com.cloud.utils.exception.CloudRunti= meException: Error while migrating the vm VM[User|i-2-4-VM] to host Host[-4= -Routing]. Exception: com.xensource.xenapi.Types$BadAsyncResult > Message: null > Stack: Task failed! 
Task record: uuid: 80e2ab71-fa8d-ebbd= -fdae-3edb3657f459 > nameLabel: Async.VM.migrate_send > nameDescription:=20 > allowedOperations: [] > currentOperations: {} > created: Sat Jan 31 11:38:59 IST 2015 > finished: Sat Jan 31 11:39:07 IST 2015 > status: failure > residentOn: com.xensource.xenapi.Host@50b4f213 > progress: 1.0 > type: > result:=20 > errorInfo: [SR_BACKEND_FAILURE_44, , There is insufficient spa= ce] > otherConfig: {} > subtaskOf: com.xensource.xenapi.Task@aaf13f6f > subtasks: [] > =09at com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.checkFor= Success(CitrixResourceBase.java:3203) > =09at com.cloud.hypervisor.xenserver.resource.XenServer610Resource.execut= e(XenServer610Resource.java:170) > =09at com.cloud.hypervisor.xenserver.resource.XenServer610Resource.execut= eRequest(XenServer610Resource.java:77) > =09at com.cloud.hypervisor.xenserver.resource.XenServer620SP1Resource.exe= cuteRequest(XenServer620SP1Resource.java:65) > =09at com.cloud.agent.manager.DirectAgentAttache$Task.runInContext(Direct= AgentAttache.java:304) > =09at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(= ManagedContextRunnable.java:49) > =09at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.= call(DefaultManagedContext.java:56) > =09at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.ca= llWithContext(DefaultManagedContext.java:103) > =09at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.ru= nWithContext(DefaultManagedContext.java:53) > =09at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(Ma= nagedContextRunnable.java:46) > =09at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:= 471) > =09at java.util.concurrent.FutureTask.run(FutureTask.java:262) > =09at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTas= k.access$201(ScheduledThreadPoolExecutor.java:178) > =09at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTas= k.run(ScheduledThreadPoolExecutor.java:292) > =09at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecuto= r.java:1145) > =09at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecut= or.java:615) > =09at java.lang.Thread.run(Thread.java:745) > 2015-02-03 17:41:25,920 INFO [c.c.v.VirtualMachineManagerImpl] (Work-Job= -Executor-5:ctx-bf4fe907 job-29/job-30 ctx-4ddcb611) Migration was unsucces= sful. 
Cleaning up: VM[User|i-2-4-VM] > 2015-02-03 17:41:25,922 WARN [o.a.c.alerts] (Work-Job-Executor-5:ctx-bf4= fe907 job-29/job-30 ctx-4ddcb611) alertType:: 17 // dataCenterId:: 1 // po= dId:: 1 // clusterId:: null // message:: Unable to migrate vm i-2-4-VM from= host xen651 in zone ZoneA and pod ZoneA > 2015-02-03 17:41:25,928 DEBUG [c.c.a.m.AgentAttache] (Work-Job-Executor-5= :ctx-bf4fe907 job-29/job-30 ctx-4ddcb611) Seq 4-379428268605964299: Routed = from 345043735628 > 2015-02-03 17:41:25,929 DEBUG [c.c.a.t.Request] (Work-Job-Executor-5:ctx-= bf4fe907 job-29/job-30 ctx-4ddcb611) Seq 4-379428268605964299: Sending { C= md , MgmtId: 345043735628, via: 4(xen652), Ver: v1, Flags: 100011, [{"com.c= loud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":false,"che= ckBeforeCleanup":false,"vmName":"i-2-4-VM","wait":0}}] } > 2015-02-03 17:41:25,929 DEBUG [c.c.a.t.Request] (Work-Job-Executor-5:ctx-= bf4fe907 job-29/job-30 ctx-4ddcb611) Seq 4-379428268605964299: Executing: = { Cmd , MgmtId: 345043735628, via: 4(xen652), Ver: v1, Flags: 100011, [{"co= m.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":false,"= checkBeforeCleanup":false,"vmName":"i-2-4-VM","wait":0}}] } > 2015-02-03 17:41:25,929 DEBUG [c.c.a.m.DirectAgentAttache] (DirectAgent-2= 4:ctx-aa18b608) Seq 4-379428268605964299: Executing request > 2015-02-03 17:41:25,938 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-5:ctx-bf4fe907 job-29/job-30 ctx-4ddcb611) VM state transitted from :Mi= grating to Running with event: OperationFailedvm's original host id: 1 new = host id: 1 host id before state transition: 4 > 2015-02-03 17:41:25,947 DEBUG [c.c.a.m.DirectAgentAttache] (DirectAgent-2= 4:ctx-aa18b608) Seq 4-379428268605964299: Response Received:=20 > 2015-02-03 17:41:25,947 DEBUG [c.c.a.t.Request] (DirectAgent-24:ctx-aa18b= 608) Seq 4-379428268605964299: Processing: { Ans: , MgmtId: 345043735628, = via: 4, Ver: v1, Flags: 10, [{"com.cloud.agent.api.StopAnswer":{"result":tr= ue,"details":"VM does not exist","wait":0}}] } > 2015-02-03 17:41:25,947 DEBUG [c.c.a.m.AgentAttache] (DirectAgent-24:ctx-= aa18b608) Seq 4-379428268605964299: Unable to find listener. 
> 2015-02-03 17:41:25,950 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-5:ctx-bf4fe907 job-29/job-30 ctx-4ddcb611) Hosts's actual total CPU: 63= 82 and CPU after applying overprovisioning: 25528 > 2015-02-03 17:41:25,950 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-5:ctx-bf4fe907 job-29/job-30 ctx-4ddcb611) Hosts's actual total RAM: 32= 66390144 and RAM after applying overprovisioning: 3266390016 > 2015-02-03 17:41:25,950 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-5:ctx-bf4fe907 job-29/job-30 ctx-4ddcb611) release cpu from host: 4, ol= d used: 500,reserved: 0, actual total: 6382, total with overprovisioning: 2= 5528; new used: 0,reserved:0; movedfromreserved: false,moveToReserveredfals= e > 2015-02-03 17:41:25,950 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-5:ctx-bf4fe907 job-29/job-30 ctx-4ddcb611) release mem from host: 4, ol= d used: 134217728,reserved: 0, total: 3266390016; new used: 0,reserved:0; m= ovedfromreserved: false,moveToReserveredfalse > 2015-02-03 17:41:25,956 ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Exe= cutor-5:ctx-bf4fe907 job-29/job-30 ctx-4ddcb611) Invocation exception, caus= ed by: com.cloud.utils.exception.CloudRuntimeException: Failed to migrated = vm VM[User|i-2-4-VM] along with its volumes.=20 > 2015-02-03 17:41:25,956 INFO [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Exe= cutor-5:ctx-bf4fe907 job-29/job-30 ctx-4ddcb611) Rethrow exception com.clou= d.utils.exception.CloudRuntimeException: Failed to migrated vm VM[User|i-2-= 4-VM] along with its volumes.=20 > 2015-02-03 17:41:25,956 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Execu= tor-5:ctx-bf4fe907 job-29/job-30) Done with run of VM work job: com.cloud.v= m.VmWorkMigrate for VM 4, job origin: 29 > 2015-02-03 17:41:25,957 ERROR [c.c.v.VmWorkJobDispatcher] (Work-Job-Execu= tor-5:ctx-bf4fe907 job-29/job-30) Unable to complete AsyncJobVO {id:30, use= rId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.= vm.VmWorkMigrate, cmdInfo: rO0ABXNyACVjb20uY2xvdWQudm0uVm1Xb3JrTWlncmF0ZVdp= dGhTdG9yYWdlsew9z6UxtXMCAANKAApkZXN0SG9zdElkSgAJc3JjSG9zdElkTAAMdm9sdW1lVG9= Qb29sdAAPTGphdmEvdXRpbC9NYXA7eHIAE2NvbS5jbG91ZC52bS5WbVdvcmufmbZW8CVnawIABE= oACWFjY291bnRJZEoABnVzZXJJZEoABHZtSWRMAAtoYW5kbGVyTmFtZXQAEkxqYXZhL2xhbmcvU= 3RyaW5nO3hwAAAAAAAAAAIAAAAAAAAAAgAAAAAAAAAEdAAZVmlydHVhbE1hY2hpbmVNYW5hZ2Vy= SW1wbAAAAAAAAAAEAAAAAAAAAAFzcgARamF2YS51dGlsLkhhc2hNYXAFB9rBwxZg0QMAAkYACmx= vYWRGYWN0b3JJAAl0aHJlc2hvbGR4cD9AAAAAAAAQdwgAAAAQAAAAAHg, cmdVersion: 0, st= atus: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid:= 345043735628, completeMsid: null, lastUpdated: null, lastPolled: null, cre= ated: Tue Feb 03 17:41:16 IST 2015}, job origin:29 > com.cloud.utils.exception.CloudRuntimeException: Failed to migrated vm VM= [User|i-2-4-VM] along with its volumes.=20 > =09at org.apache.cloudstack.engine.orchestration.VolumeOrchestrator.migra= teVolumes(VolumeOrchestrator.java:991) > =09at com.cloud.vm.VirtualMachineManagerImpl.orchestrateMigrateWithStorag= e(VirtualMachineManagerImpl.java:2169) > =09at com.cloud.vm.VirtualMachineManagerImpl.orchestrateMigrateWithStorag= e(VirtualMachineManagerImpl.java:4493) > =09at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > =09at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImp= l.java:57) > =09at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcc= essorImpl.java:43) > =09at java.lang.reflect.Method.invoke(Method.java:606) > =09at 
com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandler= Proxy.java:107) > =09at com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachi= neManagerImpl.java:4596) > =09at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:10= 2) > =09at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run= InContext(AsyncJobManagerImpl.java:536) > =09at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(= ManagedContextRunnable.java:49) > =09at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.= call(DefaultManagedContext.java:56) > =09at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.ca= llWithContext(DefaultManagedContext.java:103) > =09at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.ru= nWithContext(DefaultManagedContext.java:53) > =09at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(Ma= nagedContextRunnable.java:46) > =09at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run= (AsyncJobManagerImpl.java:493) > =09at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:= 471) > =09at java.util.concurrent.FutureTask.run(FutureTask.java:262) > =09at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecuto= r.java:1145) > =09at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecut= or.java:615) > =09at java.lang.Thread.run(Thread.java:745) > 2015-02-03 17:41:25,968 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job= -Executor-5:ctx-bf4fe907 job-29/job-30) Complete async job-30, jobStatus: F= AILED, resultCode: 0, result: rO0ABXNyAC9jb20uY2xvdWQudXRpbHMuZXhjZXB0aW9uL= kNsb3VkUnVudGltZUV4Y2VwdGlvbgAAAABWTU9yAwABSQALY3NFcnJvckNvZGV4cgAaamF2YS5s= YW5nLlJ1bnRpbWVFeGNlcHRpb26eXwZHCjSD5QIAAHhyABNqYXZhLmxhbmcuRXhjZXB0aW9u0P0= fPho7HMQCAAB4cgATamF2YS5sYW5nLlRocm93YWJsZdXGNSc5d7jLAwAETAAFY2F1c2V0ABVMam= F2YS9sYW5nL1Rocm93YWJsZTtMAA1kZXRhaWxNZXNzYWdldAASTGphdmEvbGFuZy9TdHJpbmc7W= wAKc3RhY2tUcmFjZXQAHltMamF2YS9sYW5nL1N0YWNrVHJhY2VFbGVtZW50O0wAFHN1cHByZXNz= ZWRFeGNlcHRpb25zdAAQTGphdmEvdXRpbC9MaXN0O3hwcQB-AAh0AEBGYWlsZWQgdG8gbWlncmF= 0ZWQgdm0gVk1bVXNlcnxpLTItNC1WTV0gYWxvbmcgd2l0aCBpdHMgdm9sdW1lcy4gdXIAHltMam= F2YS5sYW5nLlN0YWNrVHJhY2VFbGVtZW50OwJGKjw8_SI5AgAAeHAAAAAWc3IAG2phdmEubGFuZ= y5TdGFja1RyYWNlRWxlbWVudGEJxZomNt2FAgAESQAKbGluZU51bWJlckwADmRlY2xhcmluZ0Ns= YXNzcQB-AAVMAAhmaWxlTmFtZXEAfgAFTAAKbWV0aG9kTmFtZXEAfgAFeHAAAAPfdAA9b3JnLmF= wYWNoZS5jbG91ZHN0YWNrLmVuZ2luZS5vcmNoZXN0cmF0aW9uLlZvbHVtZU9yY2hlc3RyYXRvcn= QAF1ZvbHVtZU9yY2hlc3RyYXRvci5qYXZhdAAObWlncmF0ZVZvbHVtZXNzcQB-AAwAAAh5dAAmY= 29tLmNsb3VkLnZtLlZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGx0AB5WaXJ0dWFsTWFjaGluZU1h= bmFnZXJJbXBsLmphdmF0AB1vcmNoZXN0cmF0ZU1pZ3JhdGVXaXRoU3RvcmFnZXNxAH4ADAAAEY1= xAH4AEnEAfgATcQB-ABRzcQB-AAz____-dAAkc3VuLnJlZmxlY3QuTmF0aXZlTWV0aG9kQWNjZX= Nzb3JJbXBsdAAdTmF0aXZlTWV0aG9kQWNjZXNzb3JJbXBsLmphdmF0AAdpbnZva2Uwc3EAfgAMA= AAAOXEAfgAXcQB-ABh0AAZpbnZva2VzcQB-AAwAAAArdAAoc3VuLnJlZmxlY3QuRGVsZWdhdGlu= Z01ldGhvZEFjY2Vzc29ySW1wbHQAIURlbGVnYXRpbmdNZXRob2RBY2Nlc3NvckltcGwuamF2YXE= AfgAbc3EAfgAMAAACXnQAGGphdmEubGFuZy5yZWZsZWN0Lk1ldGhvZHQAC01ldGhvZC5qYXZhcQ= B-ABtzcQB-AAwAAABrdAAiY29tLmNsb3VkLnZtLlZtV29ya0pvYkhhbmRsZXJQcm94eXQAGlZtV= 29ya0pvYkhhbmRsZXJQcm94eS5qYXZhdAAPaGFuZGxlVm1Xb3JrSm9ic3EAfgAMAAAR9HEAfgAS= cQB-ABNxAH4AJXNxAH4ADAAAAGZ0ACBjb20uY2xvdWQudm0uVm1Xb3JrSm9iRGlzcGF0Y2hlcnQ= AGFZtV29ya0pvYkRpc3BhdGNoZXIuamF2YXQABnJ1bkpvYnNxAH4ADAAAAhh0AD9vcmcuYXBhY2= hlLmNsb3Vkc3RhY2suZnJhbWV3b3JrLmpvYnMuaW1wbC5Bc3luY0pvYk1hbmFnZXJJbXBsJDV0A= 
BhBc3luY0pvYk1hbmFnZXJJbXBsLmphdmF0AAxydW5JbkNvbnRleHRzcQB-AAwAAAAxdAA-b3Jn= LmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5NYW5hZ2VkQ29udGV4dFJ1bm5hYmx= lJDF0ABtNYW5hZ2VkQ29udGV4dFJ1bm5hYmxlLmphdmF0AANydW5zcQB-AAwAAAA4dABCb3JnLm= FwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5pbXBsLkRlZmF1bHRNYW5hZ2VkQ29ud= GV4dCQxdAAaRGVmYXVsdE1hbmFnZWRDb250ZXh0LmphdmF0AARjYWxsc3EAfgAMAAAAZ3QAQG9y= Zy5hcGFjaGUuY2xvdWRzdGFjay5tYW5hZ2VkLmNvbnRleHQuaW1wbC5EZWZhdWx0TWFuYWdlZEN= vbnRleHRxAH4ANXQAD2NhbGxXaXRoQ29udGV4dHNxAH4ADAAAADVxAH4AOHEAfgA1dAAOcnVuV2= l0aENvbnRleHRzcQB-AAwAAAAudAA8b3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29ud= GV4dC5NYW5hZ2VkQ29udGV4dFJ1bm5hYmxlcQB-ADFxAH4AMnNxAH4ADAAAAe1xAH4ALHEAfgAt= cQB-ADJzcQB-AAwAAAHXdAAuamF2YS51dGlsLmNvbmN1cnJlbnQuRXhlY3V0b3JzJFJ1bm5hYmx= lQWRhcHRlcnQADkV4ZWN1dG9ycy5qYXZhcQB-ADZzcQB-AAwAAAEGdAAfamF2YS51dGlsLmNvbm= N1cnJlbnQuRnV0dXJlVGFza3QAD0Z1dHVyZVRhc2suamF2YXEAfgAyc3EAfgAMAAAEeXQAJ2phd= mEudXRpbC5jb25jdXJyZW50LlRocmVhZFBvb2xFeGVjdXRvcnQAF1RocmVhZFBvb2xFeGVjdXRv= ci5qYXZhdAAJcnVuV29ya2Vyc3EAfgAMAAACZ3QALmphdmEudXRpbC5jb25jdXJyZW50LlRocmV= hZFBvb2xFeGVjdXRvciRXb3JrZXJxAH4AR3EAfgAyc3EAfgAMAAAC6XQAEGphdmEubGFuZy5UaH= JlYWR0AAtUaHJlYWQuamF2YXEAfgAyc3IAJmphdmEudXRpbC5Db2xsZWN0aW9ucyRVbm1vZGlma= WFibGVMaXN0_A8lMbXsjhACAAFMAARsaXN0cQB-AAd4cgAsamF2YS51dGlsLkNvbGxlY3Rpb25z= JFVubW9kaWZpYWJsZUNvbGxlY3Rpb24ZQgCAy173HgIAAUwAAWN0ABZMamF2YS91dGlsL0NvbGx= lY3Rpb247eHBzcgATamF2YS51dGlsLkFycmF5TGlzdHiB0h2Zx2GdAwABSQAEc2l6ZXhwAAAAAH= cEAAAAAHhxAH4AU3gAABCadwgAAAAAAAAAAHg > 2015-02-03 17:41:25,970 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job= -Executor-5:ctx-bf4fe907 job-29/job-30) Publish async job-30 complete on me= ssage bus > 2015-02-03 17:41:25,970 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job= -Executor-5:ctx-bf4fe907 job-29/job-30) Wake up jobs related to job-30 > 2015-02-03 17:41:25,970 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job= -Executor-5:ctx-bf4fe907 job-29/job-30) Update db status for job-30 > 2015-02-03 17:41:25,972 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job= -Executor-5:ctx-bf4fe907 job-29/job-30) Wake up jobs joined with job-30 and= disjoin all subjobs created from job- 30 > 2015-02-03 17:41:25,998 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job= -Executor-5:ctx-bf4fe907 job-29/job-30) Done executing com.cloud.vm.VmWorkM= igrate for job-30 > 2015-02-03 17:41:25,999 DEBUG [o.a.c.f.j.i.SyncQueueManagerImpl] (Work-Jo= b-Executor-5:ctx-bf4fe907 job-29/job-30) Sync queue (4) is currently empty > 2015-02-03 17:41:26,000 INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Exe= cutor-5:ctx-bf4fe907 job-29/job-30) Remove job-30 from job monitoring > 2015-02-03 17:41:26,009 ERROR [c.c.a.ApiAsyncJobDispatcher] (API-Job-Exec= utor-17:ctx-d145a559 job-29) Unexpected exception while executing org.apach= e.cloudstack.api.command.admin.vm.MigrateVirtualMachineWithVolumeCmd > com.cloud.utils.exception.CloudRuntimeException: Failed to migrated vm VM= [User|i-2-4-VM] along with its volumes.=20 > =09at org.apache.cloudstack.engine.orchestration.VolumeOrchestrator.migra= teVolumes(VolumeOrchestrator.java:991) > =09at com.cloud.vm.VirtualMachineManagerImpl.orchestrateMigrateWithStorag= e(VirtualMachineManagerImpl.java:2169) > =09at com.cloud.vm.VirtualMachineManagerImpl.orchestrateMigrateWithStorag= e(VirtualMachineManagerImpl.java:4493) > =09at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > =09at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImp= l.java:57) > =09at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcc= 
essorImpl.java:43) > =09at java.lang.reflect.Method.invoke(Method.java:606) > =09at com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandler= Proxy.java:107) > =09at com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachi= neManagerImpl.java:4596) > =09at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:10= 2) > =09at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run= InContext(AsyncJobManagerImpl.java:536) > =09at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(= ManagedContextRunnable.java:49) > =09at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.= call(DefaultManagedContext.java:56) > =09at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.ca= llWithContext(DefaultManagedContext.java:103) > =09at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.ru= nWithContext(DefaultManagedContext.java:53) > =09at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(Ma= nagedContextRunnable.java:46) > =09at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run= (AsyncJobManagerImpl.java:493) > =09at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:= 471) > =09at java.util.concurrent.FutureTask.run(FutureTask.java:262) > =09at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecuto= r.java:1145) > =09at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecut= or.java:615) > =09at java.lang.Thread.run(Thread.java:745) > 2015-02-03 17:41:26,011 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-= Executor-17:ctx-d145a559 job-29) Complete async job-29, jobStatus: FAILED, = resultCode: 530, result: org.apache.cloudstack.api.response.ExceptionRespon= se/null/{"uuidList":[],"errorcode":530,"errortext":"Failed to migrated vm V= M[User|i-2-4-VM] along with its volumes. 
"} > 2015-02-03 17:41:26,012 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-= Executor-17:ctx-d145a559 job-29) Publish async job-29 complete on message b= us > 2015-02-03 17:41:26,012 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-= Executor-17:ctx-d145a559 job-29) Wake up jobs related to job-29 > 2015-02-03 17:41:26,012 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-= Executor-17:ctx-d145a559 job-29) Update db status for job-29 > 2015-02-03 17:41:26,013 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-= Executor-17:ctx-d145a559 job-29) Wake up jobs joined with job-29 and disjoi= n all subjobs created from job- 29 > 2015-02-03 17:41:26,017 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-= Executor-17:ctx-d145a559 job-29) Done executing org.apache.cloudstack.api.c= ommand.admin.vm.MigrateVirtualMachineWithVolumeCmd for job-29 > 2015-02-03 17:41:26,017 INFO [o.a.c.f.j.i.AsyncJobMonitor] (API-Job-Exec= utor-17:ctx-d145a559 job-29) Remove job-29 from job monitoring > 2015-02-03 17:41:28,888 DEBUG [c.c.a.ApiServlet] (1369188023@qtp-20438506= 24-13:ctx-55cb7646) =3D=3D=3DSTART=3D=3D=3D 192.168.100.30 -- GET command= =3DqueryAsyncJobResult&jobId=3De5f2660a-4c59-46f5-99ad-240e833354f9&respons= e=3Djson&sessionkey=3DsiNTLV8YaFC1WTwW5bGuCzZIKE8%3D&_=3D1422945687242 > 2015-02-03 17:41:28,906 DEBUG [c.c.a.ApiServlet] (1369188023@qtp-20438506= 24-13:ctx-55cb7646 ctx-eeffe1a8) =3D=3D=3DEND=3D=3D=3D 192.168.100.30 -- G= ET command=3DqueryAsyncJobResult&jobId=3De5f2660a-4c59-46f5-99ad-240e83335= 4f9&response=3Djson&sessionkey=3DsiNTLV8YaFC1WTwW5bGuCzZIKE8%3D&_=3D1422945= 687242 > 2015-02-03 17:41:31,091 DEBUG [c.c.a.m.AgentManagerImpl] (AgentManager-Ha= ndler-3:null) SeqA 2-249: Processing Seq 2-249: { Cmd , MgmtId: -1, via: 2= , Ver: v1, Flags: 11, [{"com.cloud.agent.api.ConsoleProxyLoadReportCommand"= :{"_proxyVmId":2,"_loadInfo":"{\n \"connections\": []\n}","wait":0}}] } > 2015-02-03 17:41:31,096 DEBUG [c.c.a.m.AgentManagerImpl] (AgentManager-Ha= ndler-3:null) SeqA 2-249: Sending Seq 2-249: { Ans: , MgmtId: 345043735628= , via: 2, Ver: v1, Flags: 100010, [{"com.cloud.agent.api.AgentControlAnswer= ":{"result":true,"wait":0}}] } > 2015-02-03 17:41:31,691 INFO [o.a.c.f.j.i.AsyncJobManagerImpl] (AsyncJob= Mgr-Heartbeat-1:ctx-c2a39692) Begin cleanup expired async-jobs > 2015-02-03 17:41:31,694 INFO [o.a.c.f.j.i.AsyncJobManagerImpl] (AsyncJob= Mgr-Heartbeat-1:ctx-c2a39692) End cleanup expired async-jobs > 2015-02-03 17:41:31,721 INFO [c.c.n.v.VpcManagerImpl] (VpcChecker-1:ctx-= 21a431b7) Found 0 removed VPCs to cleanup > 2015-02-03 17:41:31,759 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl= ] (RouterStatusMonitor-1:ctx-ce1bdd15) Found 1 routers to update status.=20 > 2015-02-03 17:41:31,759 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl= ] (RouterStatusMonitor-1:ctx-ce1bdd15) Found 0 networks to update RvR statu= s.=20 > 2015-02-03 17:41:31,864 INFO [c.c.u.AccountManagerImpl] (AccountChecker-= 1:ctx-387c4fa4) Found 0 removed accounts to cleanup > 2015-02-03 17:41:31,865 INFO [c.c.u.AccountManagerImpl] (AccountChecker-= 1:ctx-387c4fa4) Found 0 disabled accounts to cleanup > 2015-02-03 17:41:31,865 INFO [c.c.u.AccountManagerImpl] (AccountChecker-= 1:ctx-387c4fa4) Found 0 inactive domains to cleanup > 2015-02-03 17:41:31,866 INFO [c.c.u.AccountManagerImpl] (AccountChecker-= 1:ctx-387c4fa4) Found 0 disabled projects to cleanup > 2015-02-03 17:41:36,092 DEBUG [c.c.a.m.AgentManagerImpl] (AgentManager-Ha= ndler-4:null) SeqA 2-250: Processing Seq 2-250: { Cmd , MgmtId: -1, via: 2= , Ver: v1, Flags: 11, 
[{"com.cloud.agent.api.ConsoleProxyLoadReportCommand":{"_proxyVmId":2,"_loadInfo":"{\n \"connections\": []\n}","wait":0}}] }
> 2015-02-03 17:41:36,094 DEBUG [c.c.a.m.AgentManagerImpl] (AgentManager-Handler-4:null) SeqA 2-250: Sending Seq 2-250: { Ans: , MgmtId: 345043735628, via: 2, Ver: v1, Flags: 100010, [{"com.cloud.agent.api.AgentControlAnswer":{"result":true,"wait":0}}] }
> 2015-02-03 17:41:37,331 DEBUG [c.c.a.m.DirectAgentAttache] (DirectAgentCronJob-12:ctx-c0c8a6a8) Ping from 1(xen651)
> 2015-02-03 17:41:37,332 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] (DirectAgentCronJob-12:ctx-c0c8a6a8) Process host VM state report from ping process. host: 1
> 2015-02-03 17:41:37,335 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] (DirectAgentCronJob-12:ctx-c0c8a6a8) Process VM state report. host: 1, number of records in report: 4
> 2015-02-03 17:41:37,335 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] (DirectAgentCronJob-12:ctx-c0c8a6a8) VM state report. host: 1, vm id: 1, power state: PowerOn
> 2015-02-03 17:41:37,335 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] (DirectAgentCronJob-12:ctx-c0c8a6a8) VM power state does not change, skip DB writing. vm id: 1
> 2015-02-03 17:41:37,335 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] (DirectAgentCronJob-12:ctx-c0c8a6a8) VM state report. host: 1, vm id: 2, power state: PowerOn
> 2015-02-03 17:41:37,336 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] (DirectAgentCronJob-12:ctx-c0c8a6a8) VM power state does not change, skip DB writing. vm id: 2
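For reference, the allocator-side numbers above show CloudStack treating the destination pool as nearly empty (pool 2: totalSize 34347155456, usedBytes 4194304), while the migrate_send task fails on the destination with SR_BACKEND_FAILURE_44 / "There is insufficient space". It would be worth capturing what the destination local SR (d3e92b9d-1ae5-616c-aa17-dc22edb614e6) itself reports around the time of the migration, since storage XenMotion may need more free space on the target SR than the volume's virtual size alone. The snippet below is only a diagnostic sketch, assuming the XenAPI Python bindings that ship with XenServer; the host URL and credentials are placeholders, and the 21474836480-byte size is the ROOT-4 volume from the log above.

import XenAPI

# Diagnostic sketch only: compare the free physical space reported by the
# destination local SR with the virtual size of the ROOT volume being moved.
# Host URL and credentials are placeholders; the SR uuid and volume size are
# taken from the log in this report.
DEST_HOST = "https://192.168.217.14"
DEST_SR_UUID = "d3e92b9d-1ae5-616c-aa17-dc22edb614e6"
ROOT_VOLUME_SIZE = 21474836480  # bytes, virtual size of ROOT-4

session = XenAPI.Session(DEST_HOST)
session.xenapi.login_with_password("root", "<password>")
try:
    sr = session.xenapi.SR.get_by_uuid(DEST_SR_UUID)
    physical_size = int(session.xenapi.SR.get_physical_size(sr))
    physical_utilisation = int(session.xenapi.SR.get_physical_utilisation(sr))
    free = physical_size - physical_utilisation
    print("SR %s: physical-size=%d physical-utilisation=%d free=%d"
          % (DEST_SR_UUID, physical_size, physical_utilisation, free))
    if free < ROOT_VOLUME_SIZE:
        print("free space on the destination SR is already below the root volume size")
finally:
    session.xenapi.session.logout()

The same two values are also visible from dom0 with "xe sr-param-list uuid=d3e92b9d-1ae5-616c-aa17-dc22edb614e6" (physical-size vs. physical-utilisation).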