cloudstack-issues mailing list archives

From "ASF subversion and git services (JIRA)" <>
Subject [jira] [Commented] (CLOUDSTACK-8714) Restore VM (Re-install VM) with set to false fails, later fails to start up VM too
Date Fri, 07 Aug 2015 14:39:46 GMT


ASF subversion and git services commented on CLOUDSTACK-8714:

Commit 90feab18e028b8291169ad5171c999f9d8be3ec0 in cloudstack's branch refs/heads/master from

Merge pull request #659 from @manuiiit

CLOUDSTACK-8714 Restore VM (Re-install VM) with set to false fails

* pr/659:
  Bug-ID:CS-27160: Restore VM (Re-install VM) with set to false fails,
later fails to start up VM too

Signed-off-by: Remi Bergsma <>

> Restore VM (Re-install VM) with set to false fails, later fails to start up VM too
> -----------------------------------------------------------------------------------------------------------
>                 Key: CLOUDSTACK-8714
>                 URL:
>             Project: CloudStack
>          Issue Type: Bug
>      Security Level: Public(Anyone can view this level - this is the default.) 
>    Affects Versions: 4.5.1
>            Reporter: Maneesha
>             Fix For: 4.6.0
> Environment: 
> ===========
> Advanced zone, 
> Hypervisor: XS, 
> Shared storage - multiple pools
> API: restoreVirtualMachine
> When we fire a Re-install VM, the allocator logic kicks in for the data disks as well, possibly causing them to be migrated to different storage. If the global config is set to false, the migration fails and Reset VM fails too, even though there is enough space left in the existing storage pools to keep the data disks running.
> Later, when I try to start up the VM (which has now gone into the stopped state), that also fails.
> The question is, why should we move the data disks around when we do a Re-install VM? Only the ROOT disk should be re-installed and re-deployed; the data disks should remain as they are. But the allocator logic kicks in for all the disks attached to the VM, so in effect they may get migrated to different pools in the cluster. We also add new entries in the DB for the data disks that got migrated.
> If there are many data disks attached to the VM, we spend a lot of time unnecessarily moving disks between pools, when only the ROOT disk actually needs to be re-installed.
> Finally, the VM also becomes unrecoverable, since start VM fails.
> Steps:
> =====
> Have multiple pools in the cluster.
> Set = false
> Deploy a VM with data disk
> Re-install VM
> Watch the result. The data disks might get migrated to different storage if the allocator decides to deploy them in a different pool. It may take a couple of attempts to reproduce the issue; you may not see it the first time.
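The behaviour the reporter asks for can be sketched as a simple filter over the VM's attached volumes: on re-install, only the ROOT volume goes back through the allocator, while DATADISK volumes stay on their current pools. This is a minimal illustrative sketch; the class and method names are invented for the example and are not CloudStack's actual API.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the desired restoreVirtualMachine behaviour:
// only the ROOT volume is re-created (and hence re-allocated); data
// disks keep their existing storage pool, so the allocator never
// considers migrating them. Names are illustrative, not CloudStack's.
public class RestoreVmSketch {
    enum VolumeType { ROOT, DATADISK }

    static class Volume {
        final String name;
        final VolumeType type;
        Volume(String name, VolumeType type) {
            this.name = name;
            this.type = type;
        }
    }

    // Returns the volumes that actually need re-allocation on re-install.
    static List<Volume> volumesToReallocate(List<Volume> attached) {
        List<Volume> result = new ArrayList<>();
        for (Volume v : attached) {
            if (v.type == VolumeType.ROOT) {
                result.add(v); // only ROOT is re-installed and re-deployed
            }
            // DATADISK volumes are deliberately skipped: they remain
            // on their current pool and get no new DB entries.
        }
        return result;
    }

    public static void main(String[] args) {
        List<Volume> attached = new ArrayList<>();
        attached.add(new Volume("ROOT-42", VolumeType.ROOT));
        attached.add(new Volume("DATA-42-0", VolumeType.DATADISK));
        attached.add(new Volume("DATA-42-1", VolumeType.DATADISK));
        System.out.println(volumesToReallocate(attached).size()); // prints 1
    }
}
```

With this shape, the number of data disks no longer affects how long a Re-install VM takes, and a false value for the capacity-related global setting cannot fail the operation on behalf of disks that were never supposed to move.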

This message was sent by Atlassian JIRA
