cloudstack-dev mailing list archives

From Edison Su <Edison...@citrix.com>
Subject RE: First review of RBD support for primary storage
Date Thu, 05 Jul 2012 23:06:15 GMT


> -----Original Message-----
> From: Chiradeep Vittal [mailto:Chiradeep.Vittal@citrix.com]
> Sent: Thursday, July 05, 2012 3:54 PM
> To: CloudStack DeveloperList
> Subject: Re: First review of RBD support for primary storage
> 
> I took a first glance at this. Really pleased about this feature. EBS-like
> scalable primary storage is within reach!
> 
> A few comments:
>  1. I see quite a few blocks of code ( > 20 times?) that are like
>      if (pool.getType() == StoragePoolType.RBD)
>     I realize that there is existing code that does these kinds of checks
> as well. To me this can be solved simply by the "chain of responsibility"
> pattern: you hand over the operation to a configured chain of handlers.
> Usually the first handler that says it can handle the operation terminates
> the chain.

It's on my to-do list to refactor the storage code, to make adding a new storage type to
CloudStack much easier.
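As a rough illustration of the chain-of-responsibility idea Chiradeep describes, the type checks could be replaced by a configured list of handlers. Note this is only a sketch: the handler interface and class names below are hypothetical, not CloudStack's actual API.

```java
import java.util.Arrays;
import java.util.List;

public class ChainDemo {
    enum StoragePoolType { RBD, NetworkFilesystem, CLVM }

    // Hypothetical handler interface, not CloudStack's real API.
    interface VolumeHandler {
        boolean canHandle(StoragePoolType type);
        String createVolume(StoragePoolType type, String name);
    }

    static class RbdVolumeHandler implements VolumeHandler {
        public boolean canHandle(StoragePoolType type) {
            return type == StoragePoolType.RBD;
        }
        public String createVolume(StoragePoolType type, String name) {
            return "rbd:" + name; // a real handler would talk to librbd here
        }
    }

    static class DefaultVolumeHandler implements VolumeHandler {
        public boolean canHandle(StoragePoolType type) {
            return true; // fallback for all other pool types
        }
        public String createVolume(StoragePoolType type, String name) {
            return "file:" + name; // a real handler would use qemu-img here
        }
    }

    // The configured chain; the first handler that accepts the pool
    // type terminates the walk, so callers never branch on the type.
    static final List<VolumeHandler> CHAIN =
        Arrays.asList(new RbdVolumeHandler(), new DefaultVolumeHandler());

    static String createVolume(StoragePoolType type, String name) {
        for (VolumeHandler h : CHAIN) {
            if (h.canHandle(type)) {
                return h.createVolume(type, name);
            }
        }
        throw new IllegalStateException("no handler for " + type);
    }

    public static void main(String[] args) {
        System.out.println(createVolume(StoragePoolType.RBD, "vol-1"));
        System.out.println(createVolume(StoragePoolType.NetworkFilesystem, "vol-2"));
    }
}
```

With something like this, adding a new storage type means registering one new handler instead of touching every `if (pool.getType() == ...)` site.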

>  2. 'user_info' can actually be pushed into the 'storage_pool_details'
> table. Generally we avoid modifying existing tables if we can.
>  3. Copying a snapshot to secondary storage is desirable: to be consistent
> with other storage types, and to be able to instantiate new volumes in
> other zones (when S3 support is available across the region). I'd like to
> understand the blockers here.
> 
> 
> On 7/2/12 5:59 AM, "Wido den Hollander" <wido@widodh.nl> wrote:
> 
> >Hi,
> >
> >On 29-06-12 17:59, Wido den Hollander wrote:
> >> Now, the RBD support for primary storage knows limitations:
> >>
> >> - It only works with KVM
> >>
> >> - You are NOT able to snapshot RBD volumes. This is because CloudStack
> >> wants to back up snapshots to secondary storage and uses 'qemu-img
> >> convert' for this. That doesn't work with RBD, and it's also very
> >> inefficient.
> >>
> >> RBD supports native snapshots inside the Ceph cluster. RBD disks also
> >> have the potential to reach very large sizes. Disks of 1TB won't be the
> >> exception. It would stress your network heavily. I'm thinking about
> >> implementing "internal snapshots", but that is step #2. For now no
> >> snapshots.
> >>
> >> - You are able to create a template from an RBD volume, but creating a
> >> new instance with RBD storage from a template is still hit-and-miss.
> >> Working on that one.
> >>
> >
> >I just pushed a fix for creating instances from a template. That should
> >work now!
> >
> >Wido

