cloudstack-dev mailing list archives

From "Star Guo" <st...@ceph.me>
Subject Re: [Feature] Cloudstack KVM with RBD
Date Fri, 29 May 2015 01:15:21 GMT
+1, wait for it.

Best Regards,
Star Guo

===============

+2 :)

On 28 May 2015 at 17:21, Andrei Mikhailovsky <andrei@arhont.com> wrote:

> +1 for this
> ----- Original Message -----
>
> From: "Logan Barfield" <lbarfield@tqhosting.com>
> To: dev@cloudstack.apache.org
> Sent: Thursday, 28 May, 2015 3:48:09 PM
> Subject: Re: [Feature] Cloudstack KVM with RBD
>
> Hi Star,
>
> I'll +1 this. I would like to see support for RBD snapshots as well, 
> and maybe a method to "back up" the snapshots to secondary storage. 
> Right now, for large volumes, it can take an hour or more to finish 
> the snapshot.
>
> I have already discussed this with Wido, and was able to determine 
> that even without using native RBD snapshots we could improve the copy 
> time by saving the snaps as thin volumes instead of full raw files.
> Right now the snapshot code, when using RBD, specifically converts the 
> volume to a full raw file, whereas saving it as a qcow2 image would use 
> less space. When restoring a snapshot, the code currently specifies the 
> source image as being a raw file, but if we change the code to omit the 
> source image type, qemu-img should detect it automatically. We just need 
> to verify that this works with all of the supported versions of 
> libvirt/qemu before submitting a pull 
> request.
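A minimal sketch of the change Logan describes, as qemu-img invocations. The pool, image, and snapshot names and the secondary-storage paths below are hypothetical, and the exact flags CloudStack passes may differ:

```shell
# Current behaviour: the RBD snapshot is dumped to a full raw file on
# secondary storage, allocating the volume's entire virtual size.
qemu-img convert -f raw -O raw \
    rbd:cloudstack/volume-1234@snap-1 \
    /mnt/secondary/snapshots/snap-1.raw

# Proposed behaviour: write a thin qcow2 instead, and omit the source
# format flag (-f) so qemu-img autodetects the format on restore.
qemu-img convert -O qcow2 \
    rbd:cloudstack/volume-1234@snap-1 \
    /mnt/secondary/snapshots/snap-1.qcow2
```

Since qcow2 only allocates written clusters, the saved snapshot is roughly the size of the data actually in the volume rather than its full virtual size.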
>
> Thank You,
>
> Logan Barfield
> Tranquil Hosting
>
>
> On Wed, May 27, 2015 at 9:18 PM, Star Guo <starg@ceph.me> wrote:
> > Hi everyone,
> >
> > Since I have tested CloudStack 4.4.2 + KVM + RBD, deploying an instance 
> > is very fast apart from the first deployment, because the template is 
> > copied from secondary storage (NFS) to primary storage (RBD). That is no 
> > problem. However, when I do some volume operations, such as creating a 
> > snapshot, creating a template, deploying from a template, etc., it also 
> > takes some time to finish because data is copied between primary storage 
> > and secondary storage.
> > So I think that if we support the same RBD pool as secondary storage and 
> > use the Ceph COW feature, it may reduce the time to just a few seconds. 
> > (OpenStack can point Glance and Cinder at the same RBD pool.)
> >
> > Best Regards,
> > Star Guo
> >
>
>
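Star's suggestion relies on Ceph's copy-on-write cloning: a protected snapshot of a template image can be cloned in seconds, with the clone sharing data with its parent until written. A sketch with the rbd CLI (pool and image names are hypothetical):

```shell
# Snapshot the template image, protect the snapshot (required before
# cloning), then clone it to create a new volume. The clone is
# copy-on-write, so creation takes seconds regardless of image size.
rbd snap create cloudstack/template-centos7@base
rbd snap protect cloudstack/template-centos7@base
rbd clone cloudstack/template-centos7@base cloudstack/volume-new-vm
```

This is what makes the "template and volumes in the same RBD cluster" layout attractive: the expensive copy between NFS secondary storage and RBD primary storage is replaced by a metadata-only clone.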


-- 

Andrija Panić

