cloudstack-dev mailing list archives

From Mike Tutkowski <mike.tutkow...@solidfire.com>
Subject Re: Root-disk support for managed storage
Date Sun, 26 Jan 2014 18:44:03 GMT
To be clear, the cloned SAN volume will have a unique name and IQN;
however, the problem is that the data the cloned volume contains (the SR)
includes SR metadata, such as the UUID, that is supposed to be unique but
is immutable.
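
As a rough, hypothetical illustration of why the duplicate UUID is a problem
(this snippet is not part of the original message), the XenServer Java SDK
(com.xensource.xenapi) can be asked whether the pool already knows an SR with
the UUID carried by the cloned volume:

    import java.util.Map;

    import com.xensource.xenapi.Connection;
    import com.xensource.xenapi.SR;

    public class SrUuidCheck {
        // Returns true if an SR with the given UUID is already known to the pool --
        // the case XenServer rejects when a cloned SAN volume is introduced while
        // it still carries the original SR's metadata.
        public static boolean srUuidAlreadyKnown(Connection conn, String srUuid) throws Exception {
            Map<SR, SR.Record> all = SR.getAllRecords(conn);
            for (SR.Record rec : all.values()) {
                if (srUuid.equalsIgnoreCase(rec.uuid)) {
                    return true;
                }
            }
            return false;
        }
    }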


On Sun, Jan 26, 2014 at 11:39 AM, Mike Tutkowski <mike.tutkowski@solidfire.com> wrote:

> So, this is my thinking on how this cloning would work (and why it would
> be a problem for an SR):
>
> 1) An SR is created on a SAN volume. The SR is essentially a clustered
> file system. The VDI on the SR represents a template we downloaded from
> secondary storage. The SR itself contains metadata like a UUID, the name of
> the SR, a description for the SR, etc. The SAN volume containing the SR has
> a unique name.
>
> 2) When we need to spin up a VM based on the template that exists on our
> SR, we clone the applicable SAN volume. The cloned SAN volume does have a
> unique name on the SAN, but the data the cloned volume has is, as expected,
> identical to the original SAN volume. This means the SR on the cloned
> volume has the same UUID as the original SR. XenServer will not like it if
> I introduce multiple SRs to its compute cluster (what it calls a resource
> pool) that have the same UUID.
>
> Thoughts on this?
>
> Thanks
>
>
> On Sun, Jan 26, 2014 at 10:07 AM, Marcus Sorensen <shadowsor@gmail.com> wrote:
>
>> In fact I'd recommend removing the SR of the template from anything
>> XenServer knows of. It just needs to exist on the SAN so it can be
>> cloned for new SR root volumes.
>>
>> On Sun, Jan 26, 2014 at 9:18 AM, Marcus Sorensen <shadowsor@gmail.com>
>> wrote:
>> > Hm, well I guess that's dependent on how your SAN clone works. Ours
>> > allows you to have a unique name for each clone, so we just name the
>> > clone with the new root volume UUID.
>> >
>> > On Sun, Jan 26, 2014 at 9:00 AM, Mike Tutkowski
>> > <mike.tutkowski@solidfire.com> wrote:
>> >> Hey Marcus,
>> >>
>> >> One thing I thought of late last night was that an SR has a UUID
>> >> associated with it.
>> >>
>> >> If I clone the SAN volume that houses the SR each time, I'll be giving
>> >> XenServer SRs that have the same UUID.
>> >>
>> >> I guess I'll need to look into if there is some way to assign a new
>> >> UUID to an existing SR.
>> >>
>> >> Perhaps you or Edison (or someone else) know about this off hand?
>> >>
>> >> Thanks
>> >>
>> >>
>> >> On Sat, Jan 25, 2014 at 11:42 PM, Mike Tutkowski
>> >> <mike.tutkowski@solidfire.com> wrote:
>> >>>
>> >>> Yeah, I see that now. I was thinking my situation here would be different
>> >>> because in the XenServer case I was looking at, CloudStack was copying the
>> >>> template down to an SR and that same SR was later used to also house the
>> >>> root disk (two different SRs were not involved).
>> >>>
>> >>> Even though my case is different, with the approach you outlined about
>> >>> creating a "special" SR for the template itself, it ends up amounting to a
>> >>> similar concept.
>> >>>
>> >>>
>> >>> On Sat, Jan 25, 2014 at 11:27 PM, Marcus Sorensen <shadowsor@gmail.com>
>> >>> wrote:
>> >>>>
>> >>>> Yes. That's what all of the storage types do, they save the name of
>> >>>> the file, volume, etc (along with path if necessary) on primary
>> >>>> storage where the template is copied.
>> >>>>
>> >>>> On Sat, Jan 25, 2014 at 11:01 PM, Mike Tutkowski
>> >>>> <mike.tutkowski@solidfire.com> wrote:
>> >>>> > It looks like I could use the template_spool_ref table's local path
>> >>>> > and/or install path to point to the name of the volume on the SAN that is
>> >>>> > to be cloned when we need a root volume from this template.
>> >>>> >
>> >>>> > Is that what you do?
>> >>>> >
>> >>>> >
>> >>>> > On Sat, Jan 25, 2014 at 10:46 PM, Mike Tutkowski
>> >>>> > <mike.tutkowski@solidfire.com> wrote:
>> >>>> >>
>> >>>> >> Maybe 2) can be made to work in the template_spool_ref table...I need
>> >>>> >> to think about it a bit.
>> >>>> >>
>> >>>> >>
>> >>>> >> On Sat, Jan 25, 2014 at 10:42 PM, Mike Tutkowski
>> >>>> >> <mike.tutkowski@solidfire.com> wrote:
>> >>>> >>>
>> >>>> >>> Data disks are easier than root disks.
>> >>>> >>>
>> >>>> >>> To be more clear, I should say data disks are easier than root disks that
>> >>>> >>> use templates (root disks that use ISOs are about the same level of
>> >>>> >>> difficulty as data disks).
>> >>>> >>>
>> >>>> >>> I could see it going either way:
>> >>>> >>>
>> >>>> >>> 1) Copy the template down once for each root disk
>> >>>> >>>
>> >>>> >>> or
>> >>>> >>>
>> >>>> >>> 2) Copy the template to an SR and clone the SAN volume the SR is on as
>> >>>> >>> needed
>> >>>> >>>
>> >>>> >>> 2) has the advantage of speed, but where do you store knowledge of this
>> >>>> >>> special SR (in the DB somewhere)?
>> >>>> >>>
>> >>>> >>>
>> >>>> >>> On Sat, Jan 25, 2014 at 10:39 PM, Mike Tutkowski
>> >>>> >>> <mike.tutkowski@solidfire.com> wrote:
>> >>>> >>>>
>> >>>> >>>> Do you still send your SAN commands from the KVM agent?
>> >>>> >>>>
>> >>>> >>>> I don't have any SolidFire-specific commands outside of the
>> >>>> >>>> SolidFire plug-in.
>> >>>> >>>>
>> >>>> >>>>
>> >>>> >>>> On Sat, Jan 25, 2014 at 10:38 PM, Marcus Sorensen
>> >>>> >>>> <shadowsor@gmail.com>
>> >>>> >>>> wrote:
>> >>>> >>>>>
>> >>>> >>>>> Actually, I shouldn't take the liberty to speak as though I understand
>> >>>> >>>>> the details about how you use SRs and VDIs. My point though is
>> >>>> >>>>> basically that you probably can and should treat them the same as
>> >>>> >>>>> whatever you currently do with data disks. Either create a new one
>> >>>> >>>>> with every root volume create and copy the template contents to it
>> >>>> >>>>> (like CLVM does), or create one on the SAN when the template copy is
>> >>>> >>>>> called, prepopulate it with the template, and send a clone command
>> >>>> >>>>> against that one to your storage to generate new root disks as
>> >>>> >>>>> needed.
>> >>>> >>>>>
>> >>>> >>>>> On Sat, Jan 25, 2014 at 10:30 PM, Marcus Sorensen
>> >>>> >>>>> <shadowsor@gmail.com>
>> >>>> >>>>> wrote:
>> >>>> >>>>> > And when I say 'the first time the template is used, we create an
>> >>>> >>>>> > SR', I mean cloudstack does it automatically.
>> >>>> >>>>> >
>> >>>> >>>>> > On Sat, Jan 25, 2014 at 10:29 PM, Marcus Sorensen
>> >>>> >>>>> > <shadowsor@gmail.com> wrote:
>> >>>> >>>>> >> That's not really what I was describing, or that's not how we do it
>> >>>> >>>>> >> at least. The first time a template is used, we create an SR with one
>> >>>> >>>>> >> VDI (using your terminology as we don't do it in Xen, but it should map
>> >>>> >>>>> >> to essentially the same thing) and copy the template contents into it.
>> >>>> >>>>> >> Then we remove the SR. When a root disk is requested, we send a clone
>> >>>> >>>>> >> command to the SAN, and then register the new clone as a new volume,
>> >>>> >>>>> >> then attach that as a new SR dedicated to that root volume. Every root
>> >>>> >>>>> >> disk that makes use of that template is its own SR.
>> >>>> >>>>> >>
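
A minimal sketch of the flow Marcus describes above (the SAN client and
hypervisor helper interfaces are invented names for illustration, not real
CloudStack or SolidFire APIs):

    // One-time: copy the template into a "golden" volume on the SAN, then forget
    // the SR from the hypervisor.  Per root disk: clone the golden volume on the
    // SAN, register the clone, and attach it as an SR dedicated to that root disk.
    public final class ClonePerRootDiskSketch {

        public interface SanClientSketch {                 // hypothetical SAN API
            String cloneVolume(String sourceVolume, String newName);
        }

        public interface HypervisorHelperSketch {          // hypothetical hypervisor helper
            String attachAsNewSr(String sanVolumeName);    // returns the new SR's UUID
        }

        private final SanClientSketch san;
        private final HypervisorHelperSketch hypervisor;

        public ClonePerRootDiskSketch(SanClientSketch san, HypervisorHelperSketch hypervisor) {
            this.san = san;
            this.hypervisor = hypervisor;
        }

        // Called each time a root disk is requested from a template that has
        // already been copied to its golden volume on the SAN.
        public String createRootDisk(String goldenTemplateVolume, String rootVolumeUuid) {
            String clonedVolume = san.cloneVolume(goldenTemplateVolume, rootVolumeUuid);
            return hypervisor.attachAsNewSr(clonedVolume);  // every root disk gets its own SR
        }
    }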
>> >>>> >>>>> >> On Sat, Jan 25, 2014 at 9:30 PM, Mike Tutkowski
>> >>>> >>>>> >> <mike.tutkowski@solidfire.com> wrote:
>> >>>> >>>>> >>> Thanks for your input, Marcus.
>> >>>> >>>>> >>>
>> >>>> >>>>> >>> Yeah, the SolidFire SAN has the ability to clone, but I can't use
>> >>>> >>>>> >>> it in this case.
>> >>>> >>>>> >>>
>> >>>> >>>>> >>> Little note first: I'm going to put some words below in capital
>> >>>> >>>>> >>> letters to stress some important details. All caps for some words can
>> >>>> >>>>> >>> be annoying to some, so please understand that I am only using them
>> >>>> >>>>> >>> here to highlight important details. :)
>> >>>> >>>>> >>>
>> >>>> >>>>> >>> For managed storage (SolidFire is an example of this), this is what
>> >>>> >>>>> >>> happens when a user attaches a volume to a VM for the first time (so
>> >>>> >>>>> >>> this is for Disk Offerings...not root disks; a short sketch in code
>> >>>> >>>>> >>> follows the three steps):
>> >>>> >>>>> >>>
>> >>>> >>>>> >>> 1) A volume (LUN) is created on the SolidFire SAN that is ONLY ever
>> >>>> >>>>> >>> used by this ONE CloudStack volume. This volume has QoS settings like
>> >>>> >>>>> >>> Min, Max, and Burst IOPS.
>> >>>> >>>>> >>>
>> >>>> >>>>> >>> 2) An SR is created in the XenServer resource pool (cluster) that
>> >>>> >>>>> >>> makes use of the SolidFire volume that was just created.
>> >>>> >>>>> >>>
>> >>>> >>>>> >>> 3) A VDI that represents the disk is created on the SR (this VDI
>> >>>> >>>>> >>> essentially consumes as much of the SR as it can*).
>> >>>> >>>>> >>>
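
A purely illustrative outline of the three steps above; the helper interfaces
below are invented names standing in for the SolidFire plug-in and XenServer
resource calls, not actual APIs:

    public final class ManagedAttachSketch {

        public interface SanApiSketch {                    // hypothetical SolidFire-style API
            String createLun(String name, long sizeBytes, long minIops, long maxIops, long burstIops);
        }

        public interface XenHelperSketch {                 // hypothetical XenServer helper
            String createSrOn(String lunIqn);              // returns the SR UUID
            String createVdiOn(String srUuid, long size);  // returns the VDI UUID
        }

        // First-time attach of one CloudStack volume: one LUN, one SR, one VDI.
        public String attach(SanApiSketch san, XenHelperSketch xen, String csVolumeUuid,
                             long sizeBytes, long minIops, long maxIops, long burstIops) {
            String iqn = san.createLun(csVolumeUuid, sizeBytes, minIops, maxIops, burstIops); // step 1
            String srUuid = xen.createSrOn(iqn);                                              // step 2
            return xen.createVdiOn(srUuid, sizeBytes);                                        // step 3
        }
    }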
>> >>>> >>>>> >>> If the user wants to create a new CloudStack volume to attach to a
>> >>>> >>>>> >>> VM, that leads to a NEW SolidFire volume being created (with its own
>> >>>> >>>>> >>> QoS), a NEW SR, and a new VDI inside of that SR.
>> >>>> >>>>> >>>
>> >>>> >>>>> >>> The same idea will exist for root volumes. A NEW SolidFire volume
>> >>>> >>>>> >>> will be created for it. A NEW SR will consume the SolidFire volume, and
>> >>>> >>>>> >>> only ONE root disk will EVER use this SR (so there is never a need to
>> >>>> >>>>> >>> clone the template we download to this SR).
>> >>>> >>>>> >>>
>> >>>> >>>>> >>> The next time a root disk of this type is requested, this leads to
>> >>>> >>>>> >>> a NEW SolidFire volume (with its own QoS), a NEW SR, and a new VDI.
>> >>>> >>>>> >>>
>> >>>> >>>>> >>> In the situation you describe (which is called non-managed (meaning
>> >>>> >>>>> >>> the SR was created ahead of time outside of CloudStack)), you can have
>> >>>> >>>>> >>> multiple root disks that leverage the same template on the same SR. This
>> >>>> >>>>> >>> will never be the case for managed storage, so there will never be a need
>> >>>> >>>>> >>> for a downloaded template to be cloned multiple times into multiple root
>> >>>> >>>>> >>> disks.
>> >>>> >>>>> >>>
>> >>>> >>>>> >>> By the way, I just want to clarify, as well, that although I am
>> >>>> >>>>> >>> talking in terms of "SolidFire this and SolidFire that," the
>> >>>> >>>>> >>> functionality I have been adding to CloudStack (outside of the SolidFire
>> >>>> >>>>> >>> plug-in) can be leveraged by any storage vendor that wants a 1:1 mapping
>> >>>> >>>>> >>> between a CloudStack volume and one of their volumes. This is, in fact,
>> >>>> >>>>> >>> how OpenStack handles storage by default.
>> >>>> >>>>> >>>
>> >>>> >>>>> >>> Does that clarify my question?
>> >>>> >>>>> >>>
>> >>>> >>>>> >>> I was not aware of how CLVM handled templates. Perhaps I should
>> >>>> >>>>> >>> look into that.
>> >>>> >>>>> >>>
>> >>>> >>>>> >>> By the way, I am currently focused on XenServer, but also plan to
>> >>>> >>>>> >>> implement support for this on KVM and ESX (although those may be outside
>> >>>> >>>>> >>> of the scope of 4.4).
>> >>>> >>>>> >>>
>> >>>> >>>>> >>> Thanks!
>> >>>> >>>>> >>>
>> >>>> >>>>> >>> * It consumes as much of the SR as it can unless you want extra
>> >>>> >>>>> >>> space put aside for hypervisor snapshots.
>> >>>> >>>>> >>>
>> >>>> >>>>> >>>
>> >>>> >>>>> >>> On Sat, Jan 25, 2014 at 3:43 AM, Marcus Sorensen
>> >>>> >>>>> >>> <shadowsor@gmail.com> wrote:
>> >>>> >>>>> >>>>
>> >>>> >>>>> >>>> In other words, if you can't clone, then createDiskFromTemplate should
>> >>>> >>>>> >>>> copy the template from secondary storage directly onto the root disk every
>> >>>> >>>>> >>>> time, and copyPhysicalDisk really does nothing. If you can clone, then
>> >>>> >>>>> >>>> copyPhysicalDisk should copy the template to primary, and
>> >>>> >>>>> >>>> createDiskFromTemplate should clone. Unless there's template cloning
>> >>>> >>>>> >>>> in the storage driver now, and if so put the createDiskFromTemplate
>> >>>> >>>>> >>>> logic there, but you still probably need copyPhysicalDisk to do its
>> >>>> >>>>> >>>> thing on the agent.
>> >>>> >>>>> >>>>
>> >>>> >>>>> >>>> This is all from a KVM perspective, of course.
>> >>>> >>>>> >>>>
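
A rough sketch of that copyPhysicalDisk / createDiskFromTemplate split, with
simplified, hypothetical signatures (the real KVM StorageAdaptor methods take
more parameters than shown here):

    public interface ManagedStorageAdaptorSketch {

        // Copy the template from secondary storage onto primary storage.  If the
        // SAN can clone, this runs once per template and the result is tracked
        // (e.g. in template_spool_ref); if it cannot clone, this can be a no-op
        // and the per-root-disk copy happens in createDiskFromTemplate instead.
        String copyPhysicalDisk(String templateOnSecondary, String primaryPoolUuid);

        // Produce a root disk from the template already on primary storage.  With
        // clone support this issues a SAN-side clone of the template volume;
        // without it, it copies the template bits directly onto a fresh root disk
        // every time.
        String createDiskFromTemplate(String templateOnPrimary, String rootDiskName, long sizeBytes);
    }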
>> >>>> >>>>> >>>> On Sat, Jan 25, 2014 at 3:40 AM, Marcus Sorensen
>> >>>> >>>>> >>>> <shadowsor@gmail.com> wrote:
>> >>>> >>>>> >>>> > I'm not quite following. With our storage, the template gets copied
>> >>>> >>>>> >>>> > to the storage pool upon first use, and then cloned upon subsequent
>> >>>> >>>>> >>>> > uses. I don't remember all of the methods immediately, but there's one
>> >>>> >>>>> >>>> > called to copy the template to primary storage, and once that's done
>> >>>> >>>>> >>>> > as you mention it's tracked in template_spool_ref and when root disks
>> >>>> >>>>> >>>> > are created that's passed as the source to copy when creating root
>> >>>> >>>>> >>>> > disks.
>> >>>> >>>>> >>>> >
>> >>>> >>>>> >>>> > Are you saying that you don't have clone capabilities to clone the
>> >>>> >>>>> >>>> > template when root disks are created? If so, you'd be more like CLVM
>> >>>> >>>>> >>>> > storage, where the template copy actually does nothing, and you
>> >>>> >>>>> >>>> > initiate a template copy *in place* of the clone (or you do a template
>> >>>> >>>>> >>>> > copy to primary pool whenever the clone normally would happen). CLVM
>> >>>> >>>>> >>>> > creates a fresh root disk and copies the template from secondary
>> >>>> >>>>> >>>> > storage directly to that whenever a root disk is deployed, bypassing
>> >>>> >>>>> >>>> > templates altogether. This is because it can't efficiently clone, and
>> >>>> >>>>> >>>> > if we let the template copy to primary, it will then do a full copy of
>> >>>> >>>>> >>>> > that template from primary to primary every time, which is pretty
>> >>>> >>>>> >>>> > heavy since it's also not thin provisioned.
>> >>>> >>>>> >>>> >
>> >>>> >>>>> >>>> > If you *can* clone, then just copy the template to your primary
>> >>>> >>>>> >>>> > storage as normal in your storage adaptor (copyPhysicalDisk), it will
>> >>>> >>>>> >>>> > be tracked in template_spool_ref, and then when root disks are created
>> >>>> >>>>> >>>> > it will be passed to createDiskFromTemplate in your storage adaptor
>> >>>> >>>>> >>>> > (for KVM), where you can call a clone of that and return it as the
>> >>>> >>>>> >>>> > root volume. There was once going to be template clone capabilities
>> >>>> >>>>> >>>> > in the storage driver level on the mgmt server, but I believe that was
>> >>>> >>>>> >>>> > work-in-progress last I checked (4 months ago or so), so we still have
>> >>>> >>>>> >>>> > to call clone to our storage server from the agent side as of now, but
>> >>>> >>>>> >>>> > that call doesn't have to do any work on the agent-side, really.
>> >>>> >>>>> >>>> >
>> >>>> >>>>> >>>> >
>> >>>> >>>>> >>>> > On Sat, Jan 25, 2014 at 12:47 AM, Mike Tutkowski
>> >>>> >>>>> >>>> > <mike.tutkowski@solidfire.com> wrote:
>> >>>> >>>>> >>>> >> Just wanted to throw this out there before I went to bed:
>> >>>> >>>>> >>>> >>
>> >>>> >>>>> >>>> >> Since each root volume that belongs to managed storage will get
>> >>>> >>>>> >>>> >> its own copy of some template (assuming we're dealing with templates
>> >>>> >>>>> >>>> >> here and not an ISO), it is possible I may be able to circumvent a new
>> >>>> >>>>> >>>> >> table (or any existing table like template_spool_ref) entirely for
>> >>>> >>>>> >>>> >> managed storage.
>> >>>> >>>>> >>>> >>
>> >>>> >>>>> >>>> >> The purpose of a table like template_spool_ref appears to be mainly
>> >>>> >>>>> >>>> >> to make sure we're not downloading the same template to an SR multiple
>> >>>> >>>>> >>>> >> times (and this doesn't apply in the case of managed storage since each
>> >>>> >>>>> >>>> >> root volume should have at most one template downloaded to it).
>> >>>> >>>>> >>>> >>
>> >>>> >>>>> >>>> >> Thoughts on that?
>> >>>> >>>>> >>>> >>
>> >>>> >>>>> >>>> >> Thanks!
>> >>>> >>>>> >>>> >>
>> >>>> >>>>> >>>> >>
>> >>>> >>>>> >>>> >> On Sat, Jan 25, 2014 at 12:39 AM, Mike Tutkowski
>> >>>> >>>>> >>>> >> <mike.tutkowski@solidfire.com> wrote:
>> >>>> >>>>> >>>> >>>
>> >>>> >>>>> >>>> >>> Hi Edison and Marcus (and anyone else this may be of interest to),
>> >>>> >>>>> >>>> >>>
>> >>>> >>>>> >>>> >>> So, as of 4.3 I have added support for data disks for managed storage
>> >>>> >>>>> >>>> >>> for XenServer, VMware, and KVM (a 1:1 mapping between a CloudStack volume
>> >>>> >>>>> >>>> >>> and a volume on a storage system). One of the most useful abilities this
>> >>>> >>>>> >>>> >>> enables is support for guaranteed storage quality of service in
>> >>>> >>>>> >>>> >>> CloudStack.
>> >>>> >>>>> >>>> >>>
>> >>>> >>>>> >>>> >>> One of the areas I'm working on for CS 4.4 is root-disk support for
>> >>>> >>>>> >>>> >>> managed storage (both with templates and ISOs).
>> >>>> >>>>> >>>> >>>
>> >>>> >>>>> >>>> >>> I'd like to get your opinion about something.
>> >>>> >>>>> >>>> >>>
>> >>>> >>>>> >>>> >>> I noticed when we download a template to a XenServer SR that we
>> >>>> >>>>> >>>> >>> leverage a table in the DB called template_spool_ref.
>> >>>> >>>>> >>>> >>>
>> >>>> >>>>> >>>> >>> This table keeps track of whether or not we've downloaded the
>> >>>> >>>>> >>>> >>> template in question to the SR in question already.
>> >>>> >>>>> >>>> >>>
>> >>>> >>>>> >>>> >>> The problem for managed storage is that the storage pool itself can be
>> >>>> >>>>> >>>> >>> associated with many SRs (not all necessarily in the same cluster even):
>> >>>> >>>>> >>>> >>> one SR per volume that belongs to the managed storage.
>> >>>> >>>>> >>>> >>>
>> >>>> >>>>> >>>> >>> What this means is every time a user wants to place a root disk (that
>> >>>> >>>>> >>>> >>> uses a template) on managed storage, I will need to download a template to
>> >>>> >>>>> >>>> >>> the applicable SR (the template will never be there in advance).
>> >>>> >>>>> >>>> >>>
>> >>>> >>>>> >>>> >>> That is fine. The issue is that I cannot use the template_spool_ref
>> >>>> >>>>> >>>> >>> table because it is intended to map a template to a storage pool (1:1
>> >>>> >>>>> >>>> >>> mapping between the two) and managed storage can download the same
>> >>>> >>>>> >>>> >>> template many times.
>> >>>> >>>>> >>>> >>>
>> >>>> >>>>> >>>> >>> It seems I will need to add a new table to the DB to support this
>> >>>> >>>>> >>>> >>> feature.
>> >>>> >>>>> >>>> >>>
>> >>>> >>>>> >>>> >>> My table would allow a mapping between a template and a volume from
>> >>>> >>>>> >>>> >>> managed storage.
>> >>>> >>>>> >>>> >>>
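
Purely as an illustration of the mapping being proposed, here is what such a
table could look like, sketched as a plain JPA-style entity rather than
CloudStack's actual DAO framework (the table, column, and class names are
invented for illustration, not an actual schema proposal):

    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.GenerationType;
    import javax.persistence.Id;
    import javax.persistence.Table;

    // One row per (template, managed-storage volume) pair, instead of the
    // 1:1 template-to-storage-pool mapping that template_spool_ref assumes.
    @Entity
    @Table(name = "template_managed_storage_ref")
    public class TemplateManagedStorageRefVO {

        @Id
        @GeneratedValue(strategy = GenerationType.IDENTITY)
        @Column(name = "id")
        private long id;

        @Column(name = "template_id")
        private long templateId;      // the template that was downloaded

        @Column(name = "volume_id")
        private long volumeId;        // the managed-storage (root) volume it was downloaded to

        @Column(name = "pool_id")
        private long poolId;          // the managed primary storage pool

        @Column(name = "download_state")
        private String downloadState; // e.g. "DOWNLOADED" once the copy has completed
    }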
>> >>>> >>>>> >>>> >>> Do you see an easier way around this or is this how you recommend I
>> >>>> >>>>> >>>> >>> proceed?
>> >>>> >>>>> >>>> >>>
>> >>>> >>>>> >>>> >>> Thanks!
>> >>>> >>>>> >>>> >>>
>> >>>> >>>>> >>>> >>> --
>> >>>> >>>>> >>>> >>> Mike Tutkowski
>> >>>> >>>>> >>>> >>> Senior CloudStack Developer, SolidFire Inc.
>> >>>> >>>>> >>>> >>> e: mike.tutkowski@solidfire.com
>> >>>> >>>>> >>>> >>> o: 303.746.7302
>> >>>> >>>>> >>>> >>> Advancing the way the world uses the cloud™
>> >>>> >>>>> >>>> >>
>> >>>> >>>>> >>>> >>
>> >>>> >>>>> >>>> >>
>> >>>> >>>>> >>>> >>
>> >>>> >>>>> >>>> >> --
>> >>>> >>>>> >>>> >> Mike Tutkowski
>> >>>> >>>>> >>>> >> Senior CloudStack Developer, SolidFire Inc.
>> >>>> >>>>> >>>> >> e: mike.tutkowski@solidfire.com
>> >>>> >>>>> >>>> >> o: 303.746.7302
>> >>>> >>>>> >>>> >> Advancing the way the world uses the cloud™
>> >>>> >>>>> >>>
>> >>>> >>>>> >>>
>> >>>> >>>>> >>>
>> >>>> >>>>> >>>
>> >>>> >>>>> >>> --
>> >>>> >>>>> >>> Mike Tutkowski
>> >>>> >>>>> >>> Senior CloudStack Developer, SolidFire Inc.
>> >>>> >>>>> >>> e: mike.tutkowski@solidfire.com
>> >>>> >>>>> >>> o: 303.746.7302
>> >>>> >>>>> >>> Advancing the way the world uses the cloud™
>> >>>> >>>>
>> >>>> >>>>
>> >>>> >>>>
>> >>>> >>>>
>> >>>> >>>> --
>> >>>> >>>> Mike Tutkowski
>> >>>> >>>> Senior CloudStack Developer, SolidFire Inc.
>> >>>> >>>> e: mike.tutkowski@solidfire.com
>> >>>> >>>> o: 303.746.7302
>> >>>> >>>> Advancing the way the world uses the cloud™
>> >>>> >>>
>> >>>> >>>
>> >>>> >>>
>> >>>> >>>
>> >>>> >>> --
>> >>>> >>> Mike Tutkowski
>> >>>> >>> Senior CloudStack Developer, SolidFire Inc.
>> >>>> >>> e: mike.tutkowski@solidfire.com
>> >>>> >>> o: 303.746.7302
>> >>>> >>> Advancing the way the world uses the cloud™
>> >>>> >>
>> >>>> >>
>> >>>> >>
>> >>>> >>
>> >>>> >> --
>> >>>> >> Mike Tutkowski
>> >>>> >> Senior CloudStack Developer, SolidFire Inc.
>> >>>> >> e: mike.tutkowski@solidfire.com
>> >>>> >> o: 303.746.7302
>> >>>> >> Advancing the way the world uses the cloud™
>> >>>> >
>> >>>> >
>> >>>> >
>> >>>> >
>> >>>> > --
>> >>>> > Mike Tutkowski
>> >>>> > Senior CloudStack Developer, SolidFire Inc.
>> >>>> > e: mike.tutkowski@solidfire.com
>> >>>> > o: 303.746.7302
>> >>>> > Advancing the way the world uses the cloud™
>> >>>
>> >>>
>> >>>
>> >>>
>> >>> --
>> >>> Mike Tutkowski
>> >>> Senior CloudStack Developer, SolidFire Inc.
>> >>> e: mike.tutkowski@solidfire.com
>> >>> o: 303.746.7302
>> >>> Advancing the way the world uses the cloud™
>> >>
>> >>
>> >>
>> >>
>> >> --
>> >> Mike Tutkowski
>> >> Senior CloudStack Developer, SolidFire Inc.
>> >> e: mike.tutkowski@solidfire.com
>> >> o: 303.746.7302
>> >> Advancing the way the world uses the cloud™
>>
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the cloud™ <http://solidfire.com/solution/overview/?video=play>
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the cloud™ <http://solidfire.com/solution/overview/?video=play>
