From: "ASF GitHub Bot (JIRA)"
To: cloudstack-issues@incubator.apache.org
Reply-To: dev@cloudstack.apache.org
Date: Fri, 7 Aug 2015 14:39:47 +0000 (UTC)
Subject: [jira] [Commented] (CLOUDSTACK-8714) Restore VM (Re-install VM) with enable.storage.migration set to false fails, later fails to start up VM too

    [ https://issues.apache.org/jira/browse/CLOUDSTACK-8714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14661930#comment-14661930 ]

ASF GitHub Bot commented on CLOUDSTACK-8714:
--------------------------------------------

GitHub user asfgit closed the pull request at:

    https://github.com/apache/cloudstack/pull/659

> Restore VM (Re-install VM) with enable.storage.migration set to false fails, later fails to start up VM too
> -------------------------------------------------------------------------------------------------------------
>
>                 Key: CLOUDSTACK-8714
>                 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8714
>             Project: CloudStack
>          Issue Type: Bug
>   Security Level: Public (Anyone can view this level - this is the default.)
> Affects Versions: 4.5.1
>         Reporter: Maneesha
>          Fix For: 4.6.0
>
> Environment:
> ===========
> Advanced zone
> Hypervisor: XenServer (XS)
> Shared storage - multiple pools
> API: restoreVirtualMachine
>
> When we fire a Re-install VM, the allocator logic kicks in for the data disks as well, so the data disks may be migrated to a different storage pool. If the global config enable.storage.migration is set to false, that migration fails and the Reset VM operation fails too, even though there is enough space left in the existing storage pools to keep the data disks where they are.
> Later, when I try to start the VM (which has now gone into the Stopped state), that also fails.
> The question is: why should we move the data disks around at all on a Re-install VM? Only the ROOT disk should be re-installed and re-deployed; the data disks should remain as they are. Instead, the allocator logic runs for every disk attached to the VM, so the data disks may end up migrated to different pools in the cluster, and new DB entries are added for each data disk that was migrated.
> If many data disks are attached to the VM, we spend a lot of time unnecessarily moving disks between pools when only the ROOT disk actually needs to be re-installed.
> Finally, the VM also becomes unrecoverable, since starting it fails.
>
> Steps:
> =====
> 1. Have multiple primary storage pools in the cluster.
> 2. Set enable.storage.migration = false.
> 3. Deploy a VM with a data disk.
> 4. Re-install the VM.
> 5. Watch the result: the data disks may get migrated to different storage if the allocator decides to place them in a different pool. It may take a couple of attempts to reproduce; the issue may not show up on the first try.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
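A minimal illustration of the behaviour the reporter is asking for: on a Re-install VM, only the ROOT volume goes back through the storage allocator, while data disks keep their current pool. The types and method names below (Volume, VolumeType, selectVolumesToReprovision) are hypothetical, simplified stand-ins, not CloudStack's actual classes:

    import java.util.List;
    import java.util.stream.Collectors;

    // Hypothetical, simplified model -- not CloudStack's real volume classes.
    class RestoreVolumeSelection {

        enum VolumeType { ROOT, DATADISK }

        record Volume(String name, VolumeType type, String currentPoolId) { }

        // Only the ROOT volume should go back through the storage allocator on
        // a Re-install VM; data disks keep whatever pool they already occupy.
        static List<Volume> selectVolumesToReprovision(List<Volume> attached) {
            return attached.stream()
                    .filter(v -> v.type() == VolumeType.ROOT)
                    .collect(Collectors.toList());
        }

        public static void main(String[] args) {
            List<Volume> attached = List.of(
                    new Volume("ROOT-42", VolumeType.ROOT, "pool-1"),
                    new Volume("DATA-42-a", VolumeType.DATADISK, "pool-2"),
                    new Volume("DATA-42-b", VolumeType.DATADISK, "pool-3"));

            // Expected: only ROOT-42 is re-created/re-allocated; the data disks are
            // untouched, so enable.storage.migration=false never comes into play.
            System.out.println(selectVolumesToReprovision(attached));
        }
    }

Restricting the selection this way would also avoid the extra DB entries the report mentions being added for each migrated data disk.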
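For completeness, a sketch of driving the reproduction steps through the CloudStack HTTP API (updateConfiguration, then restoreVirtualMachine) instead of the UI. The endpoint, key pair, and VM UUID are placeholders; the request signing follows the documented sort/lowercase/HMAC-SHA1 scheme, but this is illustrative code, not a tested client:

    import java.net.URI;
    import java.net.URLEncoder;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;
    import java.util.Map;
    import java.util.TreeMap;
    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;

    public class Cloudstack8714Repro {

        // Placeholders -- substitute a real management server and key pair.
        static final String ENDPOINT = "http://mgmt-server:8080/client/api";
        static final String API_KEY = "YOUR_API_KEY";
        static final String SECRET_KEY = "YOUR_SECRET_KEY";

        static String call(String command, Map<String, String> params) throws Exception {
            // CloudStack signs the alphabetically sorted, URL-encoded, lower-cased
            // query string with HMAC-SHA1 over the account's secret key.
            TreeMap<String, String> sorted = new TreeMap<>(params);
            sorted.put("command", command);
            sorted.put("apiKey", API_KEY);
            sorted.put("response", "json");

            StringBuilder query = new StringBuilder();
            for (Map.Entry<String, String> e : sorted.entrySet()) {
                if (query.length() > 0) query.append('&');
                // Values here contain no spaces, so URLEncoder's '+' handling is not an issue.
                query.append(e.getKey()).append('=')
                     .append(URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8));
            }

            Mac mac = Mac.getInstance("HmacSHA1");
            mac.init(new SecretKeySpec(SECRET_KEY.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
            String signature = Base64.getEncoder().encodeToString(
                    mac.doFinal(query.toString().toLowerCase().getBytes(StandardCharsets.UTF_8)));

            String url = ENDPOINT + "?" + query + "&signature="
                    + URLEncoder.encode(signature, StandardCharsets.UTF_8);

            HttpResponse<String> resp = HttpClient.newHttpClient().send(
                    HttpRequest.newBuilder(URI.create(url)).GET().build(),
                    HttpResponse.BodyHandlers.ofString());
            return resp.body();
        }

        public static void main(String[] args) throws Exception {
            // 1. Disable storage migration globally.
            System.out.println(call("updateConfiguration",
                    Map.of("name", "enable.storage.migration", "value", "false")));

            // 2. Re-install an existing VM that has data disks attached; with several
            //    primary pools in the cluster, the restore may fail when the allocator
            //    tries to place a data disk on another pool.
            System.out.println(call("restoreVirtualMachine",
                    Map.of("virtualmachineid", "REPLACE-WITH-VM-UUID")));
        }
    }

Whether a data disk actually moves depends on the allocator's pool choice, which is why the report notes it can take a couple of attempts to reproduce.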