cloudstack-dev mailing list archives

From "Tutkowski, Mike" <>
Subject Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)
Date Tue, 17 Jul 2018 03:27:50 GMT
Another comment here: the part that is broken is letting CloudStack pick the primary
storage on the destination side. That code no longer exists in 4.11.1.
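As a workaround for the broken destination-pool selection described above, the destination primary storage can be specified explicitly through CloudStack's `migrateVirtualMachineWithVolume` API. Below is a minimal sketch of building such a call by hand, using CloudStack's standard request-signing scheme (sort parameters, lowercase, HMAC-SHA1, base64). All IDs, keys, and the management-server URL are placeholders, not values from this thread.

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign_request(params: dict, secret_key: str) -> str:
    """Build a signed query string for a CloudStack API call.

    CloudStack signs requests by sorting parameters by name,
    URL-encoding the values, lowercasing the resulting query string,
    and appending a base64-encoded HMAC-SHA1 made with the account's
    secret key.
    """
    sorted_params = sorted(params.items())
    query = "&".join(
        f"{k}={urllib.parse.quote(str(v), safe='*')}" for k, v in sorted_params
    )
    digest = hmac.new(
        secret_key.encode(), query.lower().encode(), hashlib.sha1
    ).digest()
    signature = base64.b64encode(digest).decode()
    return query + "&signature=" + urllib.parse.quote(signature, safe="")

# Migrate a VM and explicitly map each volume to a destination pool,
# instead of letting CloudStack pick the destination primary storage
# (the code path reported broken in 4.11.1). All IDs are hypothetical.
params = {
    "command": "migrateVirtualMachineWithVolume",
    "virtualmachineid": "vm-uuid",
    "hostid": "dest-host-uuid",
    "migrateto[0].volume": "root-vol-uuid",
    "migrateto[0].pool": "dest-pool-uuid",
    "apikey": "my-api-key",
    "response": "json",
}
signed_query = sign_request(params, "my-secret-key")
# The full request would then be a GET against the management server,
# e.g. http://<mgmt-server>:8080/client/api?<signed_query>
```

The same call can be issued from CloudMonkey or the web GUI; the point of the sketch is only that the `migrateto` mapping names the destination pool per volume, so nothing relies on the automatic pool-selection code.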

On 7/16/18, 9:24 PM, "Tutkowski, Mike" <> wrote:

    To follow up on this a bit: Yes, you should be able to migrate a VM and its storage from
one cluster to another today using non-managed (traditional) primary storage with XenServer
(both the source and destination primary storages would be cluster scoped). However, that
is one of the features broken in 4.11.1 that we are discussing in this thread.
    On 7/16/18, 9:20 PM, "Tutkowski, Mike" <> wrote:
        For a bit of info on what managed storage is, please take a look at this document:
        The short answer is that you can have zone-wide managed storage (for XenServer, VMware,
and KVM). However, there is currently no zone-wide non-managed storage for XenServer.
        On 7/16/18, 6:20 PM, "Yiping Zhang" <> wrote:
            I assume by "managed storage", you guys mean primary storage, either zone-wide
or cluster-wide.
            For the Xen hypervisor, ACS does not support "zone-wide" primary storage yet. Still,
I can live migrate a VM with data disks between clusters, with storage migration, from the web
GUI today. So, your statement below does not reflect the current behavior of the code.
                       - If I want to migrate a VM across clusters, but at least one of the
                       volumes is placed in cluster-wide managed storage, the migration is
                       not allowed. Is that it?
                [Mike] Correct
