From: Mike Tutkowski <mike.tutkowski@solidfire.com>
To: Marcus Sorensen
Cc: "dev@cloudstack.apache.org", Edison Su
Date: Sat, 25 Jan 2014 21:30:22 -0700
Subject: Re: Root-disk support for managed storage

Thanks for your input, Marcus.

Yeah, the SolidFire SAN has the ability to clone, but I can't use it in
this case.

A quick note first: I'm going to put some words below in capital letters
to stress important details. I know all caps can be annoying, so please
understand that I am only using them to call out the key points. :)

For managed storage (SolidFire is an example of this), this is what
happens when a user attaches a volume to a VM for the first time (so this
is for Disk Offerings...not root disks):

1) A volume (LUN) is created on the SolidFire SAN that is ONLY ever used
by this ONE CloudStack volume. This volume has QoS settings like Min,
Max, and Burst IOPS.

2) An SR is created in the XenServer resource pool (cluster) that makes
use of the SolidFire volume that was just created.

3) A VDI that represents the disk is created on the SR (this VDI
essentially consumes as much of the SR as it can*).
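To make the 1:1 mapping concrete, here is a minimal, purely illustrative
Java sketch of those three steps. Every type and method name in it
(SanClient, XenServerPool, createIscsiSr, and so on) is a hypothetical
stand-in, not the actual SolidFire plug-in or XenServer API.

    // Illustrative only: hypothetical stand-ins, not the real plug-in or XAPI types.
    final class ManagedStorageAttachSketch {

        record IopsQos(long minIops, long maxIops, long burstIops) {}
        record SanVolume(String iqn, long sizeBytes) {}
        record StorageRepository(String uuid) {}
        record Vdi(String uuid) {}

        interface SanClient {
            // one SAN volume (LUN) per CloudStack volume, with its own QoS
            SanVolume createVolume(long sizeBytes, IopsQos qos);
        }

        interface XenServerPool {
            // one SR per SAN volume; only this one CloudStack volume ever uses it
            StorageRepository createIscsiSr(String iqn);
            // the VDI consumes (nearly) the whole SR
            Vdi createVdi(StorageRepository sr, long sizeBytes);
        }

        Vdi attachFirstTime(SanClient san, XenServerPool pool, long sizeBytes, IopsQos qos) {
            SanVolume lun = san.createVolume(sizeBytes, qos);     // step 1
            StorageRepository sr = pool.createIscsiSr(lun.iqn()); // step 2
            return pool.createVdi(sr, lun.sizeBytes());           // step 3
        }
    }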
If the user wants to create a new CloudStack volume to attach to a VM,
that leads to a NEW SolidFire volume being created (with its own QoS), a
NEW SR, and a new VDI inside of that SR.

The same idea will exist for root volumes. A NEW SolidFire volume will be
created for each one. A NEW SR will consume the SolidFire volume, and
only ONE root disk will EVER use this SR (so there is never a need to
clone the template we download to this SR). The next time a root disk of
this type is requested, that leads to a NEW SolidFire volume (with its
own QoS), a NEW SR, and a new VDI.

In the situation you describe (which is non-managed, meaning the SR was
created ahead of time outside of CloudStack), you can have multiple root
disks that leverage the same template on the same SR. This will never be
the case for managed storage, so there will never be a need for a
downloaded template to be cloned multiple times into multiple root disks.

By the way, I just want to clarify that although I am talking in terms of
"SolidFire this and SolidFire that," the functionality I have been adding
to CloudStack (outside of the SolidFire plug-in) can be leveraged by any
storage vendor that wants a 1:1 mapping between a CloudStack volume and
one of their volumes. This is, in fact, how OpenStack handles storage by
default.

Does that clarify my question? I was not aware of how CLVM handled
templates. Perhaps I should look into that.

By the way, I am currently focused on XenServer, but I also plan to
implement support for this on KVM and ESX (although those may be outside
the scope of 4.4).

Thanks!

* It consumes as much of the SR as it can unless you want extra space set
aside for hypervisor snapshots.

On Sat, Jan 25, 2014 at 3:43 AM, Marcus Sorensen wrote:

> In other words, if you can't clone, then createDiskFromTemplate should
> copy the template from secondary storage directly onto the root disk
> every time, and copyPhysicalDisk really does nothing. If you can clone,
> then copyPhysicalDisk should copy the template to primary, and
> createDiskFromTemplate should clone. Unless there's template cloning in
> the storage driver now, and if so, put the createDiskFromTemplate logic
> there, but you still probably need copyPhysicalDisk to do its thing on
> the agent.
>
> This is all from a KVM perspective, of course.
>
> On Sat, Jan 25, 2014 at 3:40 AM, Marcus Sorensen wrote:
> > I'm not quite following. With our storage, the template gets copied
> > to the storage pool upon first use, and then cloned upon subsequent
> > uses. I don't remember all of the methods immediately, but there's
> > one called to copy the template to primary storage, and once that's
> > done, as you mention, it's tracked in template_spool_ref, and when
> > root disks are created, that's passed as the source to copy.
> >
> > Are you saying that you don't have clone capabilities to clone the
> > template when root disks are created? If so, you'd be more like CLVM
> > storage, where the template copy actually does nothing, and you
> > initiate a template copy *in place* of the clone (or you do a
> > template copy to the primary pool whenever the clone normally would
> > happen). CLVM creates a fresh root disk and copies the template from
> > secondary storage directly to that whenever a root disk is deployed,
> > bypassing templates altogether. This is because it can't efficiently
> > clone, and if we let the template copy to primary, it would then do a
> > full copy of that template from primary to primary every time, which
> > is pretty heavy since it's also not thin provisioned.
> >
> > If you *can* clone, then just copy the template to your primary
> > storage as normal in your storage adaptor (copyPhysicalDisk); it will
> > be tracked in template_spool_ref, and then when root disks are
> > created it will be passed to createDiskFromTemplate in your storage
> > adaptor (for KVM), where you can call a clone of that and return it
> > as the root volume. There was once going to be template clone
> > capability at the storage driver level on the mgmt server, but I
> > believe that was still work-in-progress last I checked (4 months or
> > so ago), so we still have to call clone to our storage server from
> > the agent side as of now, but that call doesn't have to do any work
> > on the agent side, really.
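Putting the two paths above into rough code, the split might look
something like the sketch below. The signatures here are simplified
stand-ins, not the actual KVM StorageAdaptor interface; only the method
names copyPhysicalDisk and createDiskFromTemplate come from the
discussion above.

    // Rough, simplified signatures; not CloudStack's actual KVM StorageAdaptor interface.
    interface TemplateHandlingSketch {

        // Copy the template from secondary storage onto primary storage.
        // Storage that can clone: do the real copy here; the result is
        // tracked (template_spool_ref) and reused as the clone source later.
        // Storage that cannot clone (e.g. CLVM): effectively a no-op.
        String copyPhysicalDisk(String templateOnSecondary, String primaryPool);

        // Create a root disk for a VM.
        // Can clone: issue a (thin) clone of the template already on primary.
        // Cannot clone: copy the template from secondary storage directly
        // onto the freshly created root disk, every time.
        String createDiskFromTemplate(String templateSource, String primaryPool, long sizeBytes);
    }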
> > On Sat, Jan 25, 2014 at 12:47 AM, Mike Tutkowski wrote:
> >> Just wanted to throw this out there before I went to bed:
> >>
> >> Since each root volume that belongs to managed storage will get its
> >> own copy of some template (assuming we're dealing with templates
> >> here and not an ISO), it is possible I may be able to circumvent a
> >> new table (or any existing table like template_spool_ref) entirely
> >> for managed storage.
> >>
> >> The purpose of a table like template_spool_ref appears to be mainly
> >> to make sure we're not downloading the same template to an SR
> >> multiple times (and this doesn't apply in the case of managed
> >> storage since each root volume should have at most one template
> >> downloaded to it).
> >>
> >> Thoughts on that?
> >>
> >> Thanks!
> >>
> >> On Sat, Jan 25, 2014 at 12:39 AM, Mike Tutkowski wrote:
> >>> Hi Edison and Marcus (and anyone else this may be of interest to),
> >>>
> >>> So, as of 4.3 I have added support for data disks on managed
> >>> storage for XenServer, VMware, and KVM (a 1:1 mapping between a
> >>> CloudStack volume and a volume on a storage system). One of the
> >>> most useful abilities this enables is support for guaranteed
> >>> storage quality of service in CloudStack.
> >>>
> >>> One of the areas I'm working on for CS 4.4 is root-disk support for
> >>> managed storage (both with templates and ISOs).
> >>>
> >>> I'd like to get your opinion about something.
> >>>
> >>> I noticed when we download a template to a XenServer SR that we
> >>> leverage a table in the DB called template_spool_ref.
> >>>
> >>> This table keeps track of whether or not we've already downloaded
> >>> the template in question to the SR in question.
> >>>
> >>> The problem for managed storage is that the storage pool itself can
> >>> be associated with many SRs (not all necessarily even in the same
> >>> cluster): one SR per volume that belongs to the managed storage.
> >>>
> >>> What this means is that every time a user wants to place a root
> >>> disk (that uses a template) on managed storage, I will need to
> >>> download a template to the applicable SR (the template will never
> >>> be there in advance).
> >>>
> >>> That is fine. The issue is that I cannot use the template_spool_ref
> >>> table because it is intended to map a template to a storage pool (a
> >>> 1:1 mapping between the two), and managed storage can download the
> >>> same template many times.
> >>>
> >>> It seems I will need to add a new table to the DB to support this
> >>> feature.
> >>>
> >>> My table would allow a mapping between a template and a volume from
> >>> managed storage.
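For illustration only, such a mapping might look roughly like the
following CloudStack-style VO. The table and column names here are
invented for the sketch; they are not taken from any actual patch.

    // Hypothetical sketch; the table and column names are placeholders.
    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.GenerationType;
    import javax.persistence.Id;
    import javax.persistence.Table;

    @Entity
    @Table(name = "template_managed_volume_ref")
    public class TemplateManagedVolumeRefVO {

        @Id
        @GeneratedValue(strategy = GenerationType.IDENTITY)
        @Column(name = "id")
        private long id;

        // the template that was downloaded
        @Column(name = "template_id")
        private long templateId;

        // the managed-storage volume (and thus the SR) the template was
        // downloaded to; unlike template_spool_ref, many rows can
        // reference the same template, one per root volume
        @Column(name = "volume_id")
        private long volumeId;

        // e.g. DOWNLOADING / DOWNLOADED, mirroring template_spool_ref's intent
        @Column(name = "download_state")
        private String downloadState;
    }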
> >>>
> >>> Do you see an easier way around this, or is this how you recommend
> >>> I proceed?
> >>>
> >>> Thanks!

--
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the cloud *™*