cloudstack-dev mailing list archives

From Logan Barfield <lbarfi...@tqhosting.com>
Subject Re: [Feature] Cloudstack KVM with RBD
Date Fri, 29 May 2015 13:26:39 GMT
Wido,

I completely understand your position.  I hope I didn't give the
impression that I thought this was your responsibility.  I just
mentioned your name because you're more or less the "authority" on the
Ceph/Cloudstack integration components if anyone is.

As I mentioned before, I think just changing the current method to
support thin snapshot images would help quite a bit.  I've already
determined that this is possible by simply not explicitly stating the
source image type in the snapshot code and allowing qemu-img to
auto-detect it.  I just haven't had time to test it with all of the
supported versions of qemu yet.  If I can ever get through my current
work backlog I'm hoping to spend more time working on this myself.  I
don't think I can write "production quality" code, but I can
definitely help come up with proof-of-concept implementations.
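To make the proposed change concrete, here is a minimal sketch of the qemu-img command lines being discussed. The helper functions and paths are hypothetical illustrations, not the actual CloudStack KVM plugin code: the idea is simply to write the snapshot out as a thin qcow2 image, and to omit the explicit source-format flag on restore so qemu-img probes the image header itself.

```python
# Sketch of the qemu-img invocations under discussion. These are
# illustrative command lines, not the actual CloudStack snapshot code.

def backup_snapshot_cmd(src, dst):
    # Save the snapshot as a thin qcow2 image instead of a full raw
    # file; unallocated regions are not written out, so the copy on
    # secondary storage stays small for sparse volumes.
    return ["qemu-img", "convert", "-O", "qcow2", src, dst]

def restore_snapshot_cmd(src, dst):
    # No "-f <format>" flag: qemu-img probes the source image header
    # (qcow2 files carry a magic header; anything unrecognized falls
    # back to raw), so the same restore path should handle both old
    # raw backups and new qcow2 backups.
    return ["qemu-img", "convert", "-O", "raw", src, dst]

# Example command lines (paths are made up for illustration):
backup = backup_snapshot_cmd("/mnt/primary/vol-1234", "/mnt/secondary/snap-1234.qcow2")
restore = restore_snapshot_cmd("/mnt/secondary/snap-1234.qcow2", "/mnt/primary/vol-1234")
```

Whether the format probe behaves identically across all the qemu/libvirt versions CloudStack supports is exactly the open question above, so this would need testing before any pull request.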

Thank You,

Logan Barfield
Tranquil Hosting


On Thu, May 28, 2015 at 9:15 PM, Star Guo <starg@ceph.me> wrote:
> +1 , wait for it.
>
> Best Regards,
> Star Guo
>
> ===============
>
> +2 :)
>
> On 28 May 2015 at 17:21, Andrei Mikhailovsky <andrei@arhont.com> wrote:
>
>> +1 for this
>> ----- Original Message -----
>>
>> From: "Logan Barfield" <lbarfield@tqhosting.com>
>> To: dev@cloudstack.apache.org
>> Sent: Thursday, 28 May, 2015 3:48:09 PM
>> Subject: Re: [Feature] Cloudstack KVM with RBD
>>
>> Hi Star,
>>
>> I'll +1 this. I would like to see support for RBD snapshots as well,
>> and maybe a method to "back up" the snapshots to secondary storage.
>> Right now, for large volumes, it can take an hour or more to finish a
>> snapshot.
>>
>> I have already discussed this with Wido, and was able to determine
>> that even without using native RBD snapshots we could improve the copy
>> time by saving the snaps as thin volumes instead of full raw files.
>> Right now, when using RBD, the snapshot code explicitly converts the
>> volume to a full raw file, whereas saving it as a qcow2 image would
>> use less space. When restoring a snapshot the code currently
>> specifies that the source image is a raw file, but if we change the
>> code to not indicate the source image type, qemu-img should detect it
>> automatically. We just need to confirm that this works with all of
>> the supported versions of libvirt/qemu before submitting a pull
>> request.
>>
>> Thank You,
>>
>> Logan Barfield
>> Tranquil Hosting
>>
>>
>> On Wed, May 27, 2015 at 9:18 PM, Star Guo <starg@ceph.me> wrote:
>> > Hi everyone,
>> >
>> > I have been testing CloudStack 4.4.2 + KVM + RBD, and deploying an
>> > instance is very fast, apart from the first deployment, since the
>> > template has to be copied from secondary storage (NFS) to primary
>> > storage (RBD). That is no problem.
>> > However, volume operations such as creating a snapshot, creating a
>> > template, or deploying from a template also take some time to
>> > finish, because data is copied between primary storage and
>> > secondary storage.
>> > So I think that if we supported the same RBD pool as secondary
>> > storage and used Ceph's copy-on-write (COW) cloning feature, these
>> > operations might be reduced to just a few seconds. (OpenStack can
>> > point Glance and Cinder at the same RBD cluster.)
>> >
>> > Best Regards,
>> > Star Guo
>> >
>>
>>
>
>
> --
>
> Andrija Panić
>
