cloudstack-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CLOUDSTACK-9620) Improvements for Managed Storage
Date Wed, 10 Jan 2018 21:45:00 GMT

    [ https://issues.apache.org/jira/browse/CLOUDSTACK-9620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321158#comment-16321158 ]

ASF GitHub Bot commented on CLOUDSTACK-9620:
--------------------------------------------

mike-tutkowski commented on issue #2298: CLOUDSTACK-9620: Enhancements for managed storage
URL: https://github.com/apache/cloudstack/pull/2298#issuecomment-356746715
 
 
   @rafaelweingartner Yeah, sorry if I wasn't clear about that back when we were looking at
your PR.
   
   For XenServer + managed storage, there are two approaches to taking volume snapshots:
   
   1) The volume snapshot resides on primary storage.
   
   2) The volume snapshot goes through a SAN snapshot and ends up on secondary storage.
   
   In use case 1, a snapshot is created on the SAN (fast and space efficient, but it's not
technically a backup). To make use of it on XenServer (i.e., to create a volume or a template
from the volume snapshot), we can leverage UUID resigning (if the applicable XenServer service
pack is installed). If UUID resigning is not available (because the service pack isn't installed),
then we just perform a copy of the snapshot when creating a volume or a template from it.
XenServer has a UUID for each SR and each VDI, and you cannot mount two SRs or VDIs with the same
UUID at the same time. If you're interested, here's a video of mine demoing this feature:
   
   https://www.youtube.com/watch?v=YQ3pBeL-WaA&index=13&list=PLqOXKM0Bt13DFnQnwUx8ZtJzoyDV0Uuye&t=1362s
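   
   To make the resign-vs-copy decision concrete, here's a rough Java sketch. It's hypothetical,
illustrative code only; the class and method names below are not from the actual CloudStack codebase.
   
       // Hypothetical sketch: producing a usable VDI from a SAN-based volume snapshot on XenServer.
       public class XenServerSnapshotCloneSketch {
   
           /** Stand-in for checking whether the host's XenServer has the resigning service pack. */
           boolean supportsResigning(String hostUuid) {
               return true; // placeholder; a real check would query the host's capabilities
           }
   
           /** Returns the UUID of a VDI, built from the SAN snapshot, that can safely be attached. */
           String cloneFromSanSnapshot(String hostUuid, String snapshotTarget) {
               if (supportsResigning(hostUuid)) {
                   // Fast path: introduce the SAN clone with fresh SR/VDI UUIDs so it can be
                   // mounted alongside the original (XenServer rejects duplicate UUIDs).
                   return resignAndIntroduce(hostUuid, snapshotTarget);
               }
               // Fallback: copy the snapshot's contents into a brand-new VDI.
               return copyToNewVdi(hostUuid, snapshotTarget);
           }
   
           String resignAndIntroduce(String hostUuid, String snapshotTarget) {
               return "resigned-vdi-uuid"; // placeholder for the resigned VDI's UUID
           }
   
           String copyToNewVdi(String hostUuid, String snapshotTarget) {
               return "copied-vdi-uuid"; // placeholder for the copied VDI's UUID
           }
       }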
   
   In any event, use case 2 allows you to temporarily use a SAN snapshot, but end up with
a standard volume snapshot on secondary storage when all is said and done. It is this use
case that is failing.
   
   I believe that if you just look at the hypervisor type of the volume snapshot and pick any
host in the snapshot's applicable zone to perform the operation, all should be well.
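   
   To illustrate that host-selection idea, a minimal sketch (again, hypothetical types and names,
not the actual CloudStack code):
   
       // Hypothetical sketch: pick any running host in the snapshot's zone whose hypervisor
       // type matches the hypervisor type recorded on the volume snapshot.
       import java.util.List;
       import java.util.Optional;
   
       public class SnapshotHostPickerSketch {
   
           record Host(long id, long zoneId, String hypervisorType, boolean up) {}
   
           Optional<Host> pickHostForSnapshotOperation(List<Host> candidateHosts, long snapshotZoneId,
                   String snapshotHypervisorType) {
               return candidateHosts.stream()
                       .filter(Host::up)
                       .filter(h -> h.zoneId() == snapshotZoneId)
                       .filter(h -> h.hypervisorType().equalsIgnoreCase(snapshotHypervisorType))
                       .findAny();
           }
       }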

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


> Improvements for Managed Storage
> --------------------------------
>
>                 Key: CLOUDSTACK-9620
>                 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9620
>             Project: CloudStack
>          Issue Type: Improvement
>      Security Level: Public (Anyone can view this level - this is the default.)
>          Components: KVM, Management Server, VMware, XenServer
>    Affects Versions: 4.11.0.0
>         Environment: KVM, vSphere, and XenServer
>            Reporter: Mike Tutkowski
>            Assignee: Mike Tutkowski
>             Fix For: 4.11.0.0
>
>
> Allowed zone-wide primary storage based on a custom plug-in to be added via the GUI in a KVM-only environment (previously this only worked for XenServer and VMware)
> Added support for root disks on managed storage with KVM
> Added support for volume snapshots with managed storage on KVM
> Enabled creating a template directly from a volume (i.e. without having to go through a volume snapshot) on KVM with managed storage
> Only allowed the resizing of a volume for managed storage on KVM if the volume in question is either not attached to a VM or is attached to a VM in the Stopped state
> Included support for Reinstall VM on KVM with managed storage
> Enabled offline migration on KVM from non-managed storage to managed storage and vice versa
> Included support for online storage migration on KVM with managed storage (NFS and Ceph to managed storage)
> Added support to download (extract) a managed-storage volume to a QCOW2 file
> When uploading a file from outside of CloudStack to CloudStack, set the min and max IOPS, if applicable.
> Included support for the KVM auto-convergence feature
> The compression flag was actually added in version 1.0.3 (1000003) as opposed to version 1.3.0 (1003000); changed this to reflect the correct version (see the version-encoding sketch after this description)
> On KVM when using iSCSI-based managed storage, if the user shuts a VM down from the guest OS (as opposed to doing so from CloudStack), we need to pass to the KVM agent a list of applicable iSCSI volumes that need to be disconnected.
> Added a new Global Setting: kvm.storage.live.migration.wait
> For XenServer, added a check to enforce that only volumes from zone-wide managed storage can be storage motioned from a host in one cluster to a host in another cluster (cannot do so at the time being with volumes from cluster-scoped managed storage)
> Don’t allow Storage XenMotion on a VM that has any managed-storage volume with one or more snapshots.
> Enabled for managed storage with VMware: Template caching, create snapshot, delete snapshot, create volume from snapshot, and create template from snapshot
> Added an SIOC API plug-in to support VMware SIOC
> When starting a VM that uses managed storage in a cluster other than the one it last was running in, we need to remove the reference to the iSCSI volume from the original cluster.
> Added the ability to revert a volume to a snapshot
> Enabled cluster-scoped managed storage
> Added support for VMware dynamic discovery
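
Regarding the compression-flag item in the description above: the parenthesized numbers follow a
major * 1,000,000 + minor * 1,000 + micro version encoding (the scheme libvirt uses), which is why
1.0.3 maps to 1000003 and 1.3.0 maps to 1003000. A minimal sketch of that arithmetic (hypothetical
class name, not CloudStack code):

    // Hypothetical sketch of the version encoding behind the numbers quoted above.
    public class VersionEncodingSketch {
        static long encode(int major, int minor, int micro) {
            return major * 1_000_000L + minor * 1_000L + micro;
        }

        public static void main(String[] args) {
            System.out.println(encode(1, 0, 3)); // prints 1000003
            System.out.println(encode(1, 3, 0)); // prints 1003000
        }
    }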



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
