cloudstack-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CLOUDSTACK-8714) Restore VM (Re-install VM) with enable.storage.migration set to false fails, later fails to start up VM too
Date Fri, 07 Aug 2015 14:39:47 GMT

    [ https://issues.apache.org/jira/browse/CLOUDSTACK-8714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14661930#comment-14661930 ]

ASF GitHub Bot commented on CLOUDSTACK-8714:
--------------------------------------------

Github user asfgit closed the pull request at:

    https://github.com/apache/cloudstack/pull/659


> Restore VM (Re-install VM) with enable.storage.migration set to false fails, later fails to start up VM too
> -----------------------------------------------------------------------------------------------------------
>
>                 Key: CLOUDSTACK-8714
>                 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8714
>             Project: CloudStack
>          Issue Type: Bug
>      Security Level: Public(Anyone can view this level - this is the default.) 
>    Affects Versions: 4.5.1
>            Reporter: Maneesha
>             Fix For: 4.6.0
>
>
> Environment: 
> ===========
> Advanced zone, 
> Hypervisor: XS, 
> Shared storage - multiple pools
> API: restoreVirtualMachine
> When we fire a Re-install VM, the allocator logic is kicking in for data disks as well
causing the data disks to possibly get migrated to a different storage. If global config enable.storage.migration
is set to false, the migration would fail and Reset VM would also fail, although there's enough
space left in the existing storage pools to run the data disks.
> Later, when I try to start up the VM, (which has now gone into stopped state), that also
fails.
> The question is, why should we move the data disks around at all when we do a Re-install VM? Only the ROOT disk should be re-installed and re-deployed; the data disks should remain as they are. Instead, the allocator logic runs for all the disks attached to the VM, so they may get migrated to different pools in the cluster, and we also add new DB entries for the data disks that got migrated.
> If many data disks are attached to the VM, we spend a lot of time unnecessarily moving disks to different pools, when only the ROOT disk actually needs to be re-installed.
> Finally, the VM becomes unrecoverable since starting it fails.
> Steps:
> =====
> Have multiple pools in the cluster.
> Set enable.storage.migration = false
> Deploy a VM with a data disk
> Re-install the VM
> Watch the result. The data disks may get migrated to different storage if the allocator decides to place them in a different pool. It may take a couple of attempts to reproduce the issue; it may not show up on the first try.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
