From: Edison Su
To: CloudStack DeveloperList
Date: Thu, 5 Jul 2012 16:06:15 -0700
Subject: RE: First review of RBD support for primary storage

> -----Original Message-----
> From: Chiradeep Vittal [mailto:Chiradeep.Vittal@citrix.com]
> Sent: Thursday, July 05, 2012 3:54 PM
> To: CloudStack DeveloperList
> Subject: Re: First review of RBD support for primary storage
>
> I took a first glance at this. Really pleased about this feature.
> EBS-like scalable primary storage is within reach!
>
> A few comments:
> 1. I see quite a few blocks of code (> 20 times?) that are like
>        if (pool.getType() == StoragePoolType.RBD)
> I realize that there is existing code that does these kinds of checks
> as well. To me this can be solved simply by the "chain of
> responsibility" pattern: you hand the operation over to a configured
> chain of handlers, and the first handler that says it can handle it
> (usually) terminates the chain.

It's on my to-do list to refactor the storage code so that adding a new
storage type to CloudStack is much easier.
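For illustration, here is a minimal sketch of what such a handler chain
could look like. StoragePool and StoragePoolType stand for the existing
CloudStack types from the snippet above; the handler interface, the class
names, and the createVolume operation are hypothetical, not code that
exists today.

import java.util.List;

// StoragePool and StoragePoolType refer to the existing CloudStack types
// used in the snippet above; the import paths are assumed here.
import com.cloud.storage.StoragePool;
import com.cloud.storage.Storage.StoragePoolType;

// Hypothetical handler interface: each storage type gets its own implementation.
interface StoragePoolHandler {
    boolean canHandle(StoragePool pool);
    void createVolume(StoragePool pool, String volumeName, long sizeInBytes);
}

// Example handler for RBD pools; other types (NFS, CLVM, ...) would get their own.
class RbdPoolHandler implements StoragePoolHandler {
    public boolean canHandle(StoragePool pool) {
        return pool.getType() == StoragePoolType.RBD;
    }

    public void createVolume(StoragePool pool, String volumeName, long sizeInBytes) {
        // RBD-specific volume creation would go here.
    }
}

// The chain: callers invoke this instead of scattering per-type if-checks.
class StoragePoolHandlerChain {
    private final List<StoragePoolHandler> handlers;

    StoragePoolHandlerChain(List<StoragePoolHandler> handlers) {
        this.handlers = handlers;
    }

    void createVolume(StoragePool pool, String volumeName, long sizeInBytes) {
        // The first handler that accepts the pool terminates the chain.
        for (StoragePoolHandler handler : handlers) {
            if (handler.canHandle(pool)) {
                handler.createVolume(pool, volumeName, sizeInBytes);
                return;
            }
        }
        throw new IllegalArgumentException("No handler for pool type " + pool.getType());
    }
}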
> 2. 'user_info' can actually be pushed into the 'storage_pool_details'
> table. Generally we avoid modifying existing tables if we can.
> 3. Copying a snapshot to secondary storage is desirable: to be
> consistent with other storage types, and to be able to instantiate new
> volumes in other zones (when S3 support is available across the
> region). I'd like to understand the blockers here.
>
>
> On 7/2/12 5:59 AM, "Wido den Hollander" wrote:
>
> >Hi,
> >
> >On 29-06-12 17:59, Wido den Hollander wrote:
> >> Now, the RBD support for primary storage has some limitations:
> >>
> >> - It only works with KVM.
> >>
> >> - You are NOT able to snapshot RBD volumes. This is because CloudStack
> >> wants to back up snapshots to secondary storage and uses 'qemu-img
> >> convert' for that. That doesn't work with RBD, and it would also be
> >> very inefficient.
> >>
> >> RBD supports native snapshots inside the Ceph cluster. RBD disks also
> >> have the potential to reach very large sizes; disks of 1TB won't be
> >> the exception, and copying them would stress your network heavily. I'm
> >> thinking about implementing "internal snapshots", but that is step #2.
> >> For now, no snapshots.
> >>
> >> - You are able to create a template from an RBD volume, but creating a
> >> new instance with RBD storage from a template is still hit-and-miss.
> >> Working on that one.
> >>
> >
> >I just pushed a fix for creating instances from a template. That should
> >work now!
> >
> >Wido
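For illustration, a minimal sketch of the two snapshot paths discussed in
the quoted mail. The qemu-img and rbd command lines follow the stock CLIs,
but the surrounding class, the method names, and the pool/image naming are
assumptions made for the sketch, not CloudStack's actual snapshot code.

import java.io.IOException;
import java.util.Arrays;

// Illustrative only: contrasts the existing copy-to-secondary-storage backup
// (qemu-img convert) with a native snapshot taken inside the Ceph cluster.
class SnapshotSketch {

    // Current approach for file-based pools: convert/copy the whole disk.
    // For a 1TB RBD image this would push the full volume over the network.
    static void backupToSecondaryStorage(String srcPath, String destPath)
            throws IOException, InterruptedException {
        run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", srcPath, destPath);
    }

    // Possible "internal snapshot" approach: the snapshot stays inside the
    // Ceph cluster, so nothing is copied to secondary storage.
    static void createNativeRbdSnapshot(String pool, String image, String snapName)
            throws IOException, InterruptedException {
        run("rbd", "snap", "create", pool + "/" + image + "@" + snapName);
    }

    private static void run(String... cmd) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(cmd).inheritIO().start();
        if (p.waitFor() != 0) {
            throw new IOException("Command failed: " + Arrays.toString(cmd));
        }
    }
}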