cloudstack-dev mailing list archives

From Mike Tutkowski <mike.tutkow...@solidfire.com>
Subject Re: Storage Quality-of-Service Question
Date Fri, 01 Feb 2013 22:13:10 GMT
Thanks for all the effort and the info, Marcus!


On Fri, Feb 1, 2013 at 2:36 PM, Marcus Sorensen <shadowsor@gmail.com> wrote:

> Edison has some documentation on the work he's done. I did a search in
> the wiki for storage and refactor, and skimmed the 4.1 design docs,
> but couldn't find it. Maybe someone knows where that is?
>
> He also did a presentation at the cloudstack conference, and I think
> there's a youtube video of that somewhere.
>
> http://www.youtube.com/watch?v=HWtzvcprOyI
>
> There were slides but the slideshare link isn't provided on this video
> like it is on some of the others.
>
> On Fri, Feb 1, 2013 at 12:55 PM, Mike Tutkowski
> <mike.tutkowski@solidfire.com> wrote:
> > Hey Marcus,
> >
> > So, before I get too involved in the Max/Min IOPS part of this work, I'd
> > like to first understand more about the way CS is changing to enable
> > dynamic creation of a single volume (LUN) for a VM Instance or Data Disk.
> >
> > Is there somewhere you might be able to point me to where I could learn
> > about the code I would need to write to leverage this new architecture?
> >
> > Thanks!!
> >
> >
> > On Fri, Feb 1, 2013 at 9:55 AM, Mike Tutkowski
> > <mike.tutkowski@solidfire.com> wrote:
> >
> >> I see...that makes sense.
> >>
> >>
> >> On Fri, Feb 1, 2013 at 9:50 AM, Marcus Sorensen
> >> <shadowsor@gmail.com> wrote:
> >>
> >>> Well, the offerings are up to the admin to create; the user just gets
> >>> to choose them. So we leave it up to the admin to create sane
> >>> offerings (not specify CPU MHz that can't be satisfied, storage sizes
> >>> that can't be supported, etc.). We should make sure the documentation
> >>> and functional spec state how the feature is implemented (i.e. an
> >>> admin can't assume that CloudStack will just 'make it work'; it has
> >>> to be supported by their primary storage).
> >>>
> >>> On Fri, Feb 1, 2013 at 8:13 AM, Mike Tutkowski
> >>> <mike.tutkowski@solidfire.com> wrote:
> >>> > Ah, yeah, now that I think of it, I didn't really phrase that
> >>> > question all that well.
> >>> >
> >>> > What I meant to ask, Marcus, was whether there is some way a user
> >>> > knows that these fields (in this case, Max and Min IOPS) may or may
> >>> > not be honored, since that depends on the underlying storage's
> >>> > capabilities.
> >>> >
> >>> > Thanks!
> >>> >
> >>> >
> >>> > On Thu, Jan 31, 2013 at 10:31 PM, Marcus Sorensen
> >>> > <shadowsor@gmail.com> wrote:
> >>> >
> >>> >> Yes, there are optional fields. For example, if you register a new
> >>> >> compute offering you will see that some of them have red stars, but
> >>> >> network rate, for example, is optional.
> >>> >>
> >>> >> On Thu, Jan 31, 2013 at 10:07 PM, Mike Tutkowski
> >>> >> <mike.tutkowski@solidfire.com> wrote:
> >>> >> > So, Marcus, you're thinking these values would be available for
> >>> >> > any Compute or Disk Offerings regardless of the type of Primary
> >>> >> > Storage that backs them, right?
> >>> >> >
> >>> >> > Is there a way we denote optional fields of this nature in CS
> >>> >> > today (a way in which the end user would understand that these
> >>> >> > fields are not necessarily honored by all Primary Storage types)?
> >>> >> >
> >>> >> > Thanks for the info!
> >>> >> >
> >>> >> >
> >>> >> > On Thu, Jan 31, 2013 at 4:46 PM, Marcus Sorensen
> >>> >> > <shadowsor@gmail.com> wrote:
> >>> >> >
> >>> >> >> I would start by creating a functional spec; then people can
> >>> >> >> give input and help solidify exactly how it's implemented. There
> >>> >> >> are examples on the wiki. Or perhaps there is already one
> >>> >> >> describing the feature that you can comment on or add to. I
> >>> >> >> think a good place to start is simply trying to get the values
> >>> >> >> into the offerings, and adjusting any database schemas necessary
> >>> >> >> to accommodate that. Once the values are in the offerings, it
> >>> >> >> can be up to the various storage pool types to implement them
> >>> >> >> or not.
> >>> >> >>
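The approach Marcus suggests, getting Max/Min IOPS into the offerings first and letting each storage pool type decide whether to honor them, could be sketched roughly like this. All names here are hypothetical illustrations, not actual CloudStack classes or schema:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class DiskOffering:
    """Hypothetical disk offering with optional IOPS fields.

    min_iops/max_iops are None when the admin leaves them blank,
    mirroring how other optional offering fields (e.g. network rate)
    behave today.
    """
    name: str
    size_gb: int
    min_iops: Optional[int] = None
    max_iops: Optional[int] = None


def validate_offering(offering: DiskOffering) -> None:
    # Reject offerings whose guaranteed floor exceeds the cap.
    if offering.min_iops is not None and offering.max_iops is not None:
        if offering.min_iops > offering.max_iops:
            raise ValueError("min_iops must not exceed max_iops")


# Offerings without IOPS values behave exactly as before; storage pool
# types that cannot honor the limits simply ignore them.
basic = DiskOffering(name="basic", size_gb=100)
fast = DiskOffering(name="fast", size_gb=100, min_iops=2000, max_iops=5000)
validate_offering(fast)
```

The corresponding schema change would just be two nullable integer columns on the disk-offering table, so existing offerings keep working unchanged.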
> >>> >> >> On Thu, Jan 31, 2013 at 4:42 PM, Mike Tutkowski
> >>> >> >> <mike.tutkowski@solidfire.com> wrote:
> >>> >> >> > Cool...thanks, Marcus.
> >>> >> >> >
> >>> >> >> > So, how do you recommend I go about this?  Although I've got
> >>> >> >> > recent CS code on my machine and I've built and run it, I've
> >>> >> >> > not yet made any changes.  Do you know of any documentation I
> >>> >> >> > could look at to learn the process involved in making CS
> >>> >> >> > changes?
> >>> >> >> >
> >>> >> >> >
> >>> >> >> > On Thu, Jan 31, 2013 at 4:36 PM, Marcus Sorensen
> >>> >> >> > <shadowsor@gmail.com> wrote:
> >>> >> >> >
> >>> >> >> >> Yes, it would need to be a part of the compute offering
> >>> >> >> >> separately, along with the CPU/RAM and network limits. Then
> >>> >> >> >> theoretically they could provision an OS drive with
> >>> >> >> >> relatively slow limits, and a database volume with higher
> >>> >> >> >> limits (and a higher price tag or something).
> >>> >> >> >>
> >>> >> >> >> On Thu, Jan 31, 2013 at 4:33 PM, Mike Tutkowski
> >>> >> >> >> <mike.tutkowski@solidfire.com> wrote:
> >>> >> >> >> > Thanks for the info, Marcus!
> >>> >> >> >> >
> >>> >> >> >> > So, you are thinking that when the user creates a new Disk
> >>> >> >> >> > Offering, he or she would be given the option of specifying
> >>> >> >> >> > Max and Min IOPS?  That makes sense when I think of Data
> >>> >> >> >> > Disks, but how does that figure into the kind of storage a
> >>> >> >> >> > VM Instance runs off of?  I thought the way that works
> >>> >> >> >> > today is by specifying in the Compute Offering a Storage
> >>> >> >> >> > Tag.
> >>> >> >> >> >
> >>> >> >> >> > Thanks!
> >>> >> >> >> >
> >>> >> >> >> >
> >>> >> >> >> > On Thu, Jan 31, 2013 at 4:25 PM, Marcus Sorensen
> >>> >> >> >> > <shadowsor@gmail.com> wrote:
> >>> >> >> >> >
> >>> >> >> >> >> So, this is what Edison's storage refactor is designed to
> >>> >> >> >> >> accomplish. Instead of the storage working the way it
> >>> >> >> >> >> currently does, creating a volume for a VM would consist
> >>> >> >> >> >> of the CloudStack server (or volume service, as he has
> >>> >> >> >> >> created) talking to your SolidFire appliance, creating a
> >>> >> >> >> >> new LUN, and using that. Now, instead of a giant pool/LUN
> >>> >> >> >> >> that each VM shares, each VM has its own LUN that is
> >>> >> >> >> >> provisioned on the fly by CloudStack.
> >>> >> >> >> >>
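The per-VM LUN model described above can be sketched as follows. Nothing here is a real CloudStack or SolidFire API; the class and function names are invented purely to illustrate the flow of the volume service asking a storage driver for a dedicated LUN per volume:

```python
class SolidFireDriverSketch:
    """Stand-in for a storage driver; a real one would call the SAN's API."""

    def __init__(self):
        self._next_lun = 0
        self.luns = {}  # volume name -> LUN id

    def create_lun(self, volume_name: str, size_gb: int) -> int:
        # In reality this would be an API call to the appliance that
        # creates the LUN (and could attach per-volume QoS settings).
        lun_id = self._next_lun
        self._next_lun += 1
        self.luns[volume_name] = lun_id
        return lun_id


def provision_volume(driver, vm_name: str, size_gb: int) -> int:
    # One LUN per VM volume, created on the fly instead of carving
    # space out of one big shared pool/LUN.
    return driver.create_lun(f"{vm_name}-root", size_gb)


driver = SolidFireDriverSketch()
lun_a = provision_volume(driver, "vm-a", 100)
lun_b = provision_volume(driver, "vm-b", 100)
# Each VM ends up with its own LUN rather than sharing one.
```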
> >>> >> >> >> >> It sounds like maybe this will make it into 4.1 (I have
> >>> >> >> >> >> to go through my email today, but it sounded close).
> >>> >> >> >> >>
> >>> >> >> >> >> Either way, it would be a good idea to add this into the
> >>> >> >> >> >> disk offering as a basic IO and throughput limit; then,
> >>> >> >> >> >> whether you implement it through cgroups on the Linux
> >>> >> >> >> >> server, or at the SAN level, or through some other means
> >>> >> >> >> >> on VMware or Xen, the values are there to use.
> >>> >> >> >> >>
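The "values are there to use" idea above, where the offering carries the limits and each backend chooses its own enforcement mechanism, might look like the dispatch below. The backend names and the strings they produce are illustrative sketches, not verified commands:

```python
def enforce_limits(backend: str, max_iops: int) -> str:
    """Return a sketch of how a given backend might apply an IOPS cap.

    The offering only stores max_iops; enforcement is backend-specific.
    """
    if backend == "cgroups":
        # On a Linux host this could translate to a blkio throttle rule
        # (device numbers here are a made-up example).
        return f"echo '253:0 {max_iops}' > blkio.throttle.write_iops_device"
    if backend == "san":
        # On a QoS-capable SAN, pass the limit straight to the appliance
        # (hypothetical call, not a real SolidFire API).
        return f"san.set_volume_qos(max_iops={max_iops})"
    # VMware, Xen, etc. would each have their own mechanism.
    raise NotImplementedError(backend)


print(enforce_limits("cgroups", 3000))
print(enforce_limits("san", 3000))
```

The design point is that the offering schema stays backend-agnostic; a pool type that cannot enforce the values simply declines to.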
> >>> >> >> >> >> On Thu, Jan 31, 2013 at 4:19 PM, Mike Tutkowski
> >>> >> >> >> >> <mike.tutkowski@solidfire.com> wrote:
> >>> >> >> >> >> > Hi everyone,
> >>> >> >> >> >> >
> >>> >> >> >> >> > A while back, I had sent out a question regarding
> >>> >> >> >> >> > storage quality of service.  A few of you chimed in
> >>> >> >> >> >> > with some good ideas.
> >>> >> >> >> >> >
> >>> >> >> >> >> > Now that I have a little more experience with
> >>> >> >> >> >> > CloudStack (these past couple of weeks, I've been able
> >>> >> >> >> >> > to get a real CS system up and running, create an iSCSI
> >>> >> >> >> >> > target, and make use of it from XenServer), I would
> >>> >> >> >> >> > like to pose my question again, but in a more refined
> >>> >> >> >> >> > way.
> >>> >> >> >> >> >
> >>> >> >> >> >> > A little background: I work for a data-storage company
> >>> >> >> >> >> > in Boulder, CO called SolidFire (http://solidfire.com).
> >>> >> >> >> >> > We build a highly fault-tolerant, clustered SAN
> >>> >> >> >> >> > technology consisting exclusively of SSDs.  One of our
> >>> >> >> >> >> > main features is hard quality of service (QoS).  You
> >>> >> >> >> >> > may have heard of QoS before.  In our case, we refer to
> >>> >> >> >> >> > it as hard QoS because the end user has the ability to
> >>> >> >> >> >> > specify, on a volume-by-volume basis, what the maximum
> >>> >> >> >> >> > and minimum IOPS for a given volume should be.  In
> >>> >> >> >> >> > other words, we do not have the user assign relative
> >>> >> >> >> >> > high, medium, and low priorities to volumes (the way
> >>> >> >> >> >> > you might do with thread priorities), but rather hard
> >>> >> >> >> >> > IOPS limits.
> >>> >> >> >> >> >
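The hard-QoS model described above, a per-volume IOPS floor and ceiling rather than relative priorities, might be modeled like this. These are hypothetical names for illustration, not the SolidFire API:

```python
class VolumeQos:
    """Hard per-volume QoS: an IOPS floor and ceiling, not a priority."""

    def __init__(self, min_iops: int, max_iops: int):
        if not 0 < min_iops <= max_iops:
            raise ValueError("require 0 < min_iops <= max_iops")
        self.min_iops = min_iops
        self.max_iops = max_iops

    def clamp(self, requested_iops: int) -> int:
        # Conceptually: the SAN never lets the volume exceed max_iops,
        # and guarantees at least min_iops even under contention.
        return max(self.min_iops, min(requested_iops, self.max_iops))


qos = VolumeQos(min_iops=1000, max_iops=5000)
```

The contrast with priority schemes is that these are absolute numbers per volume: a "greedy" neighbor cannot push another volume below its floor.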
> >>> >> >> >> >> > With this in mind, I would like to know how you would
> >>> >> >> >> >> > recommend I go about enabling CloudStack to support
> >>> >> >> >> >> > this feature.
> >>> >> >> >> >> >
> >>> >> >> >> >> > In my previous e-mail discussion, people suggested
> >>> >> >> >> >> > using the Storage Tag field.  This is a good idea, but
> >>> >> >> >> >> > it does not fully satisfy my requirements.
> >>> >> >> >> >> >
> >>> >> >> >> >> > For example, if I created two large SolidFire volumes
> >>> >> >> >> >> > (by the way, one SolidFire volume equals one LUN), I
> >>> >> >> >> >> > could create two Primary Storage types to map onto
> >>> >> >> >> >> > them.  One Primary Storage type could have the tag
> >>> >> >> >> >> > "high_perf" and the other the tag "normal_perf".
> >>> >> >> >> >> >
> >>> >> >> >> >> > I could then create Compute Offerings and Disk
> >>> >> >> >> >> > Offerings that referenced one Storage Tag or the other.
> >>> >> >> >> >> >
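The storage-tag mechanism described above amounts to simple set matching between an offering's tags and each Primary Storage's tags. An illustrative sketch (not CloudStack's actual allocator logic):

```python
def eligible_pools(offering_tags: set, pools: dict) -> list:
    """Return pools whose tags cover every tag the offering requires.

    offering_tags: tags on the Compute/Disk Offering.
    pools: mapping of pool name -> set of tags on that Primary Storage.
    """
    return [name for name, tags in pools.items() if offering_tags <= tags]


pools = {
    "sf-volume-1": {"high_perf"},
    "sf-volume-2": {"normal_perf"},
}
# An offering tagged "high_perf" can only land on sf-volume-1, but note
# that every VM using that offering still shares the same SolidFire
# volume, which is the limitation this thread is about.
```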
> >>> >> >> >> >> > This would guarantee that a VM Instance or Data Disk
> >>> >> >> >> >> > would run from one SolidFire volume or the other.
> >>> >> >> >> >> >
> >>> >> >> >> >> > The problem is that one SolidFire volume could be
> >>> >> >> >> >> > servicing multiple VM Instances and/or Data Disks.
> >>> >> >> >> >> > This may not seem like a problem, but it is, because
> >>> >> >> >> >> > in such a configuration our SAN can no longer
> >>> >> >> >> >> > guarantee IOPS on a VM-by-VM basis (or a data
> >>> >> >> >> >> > disk-by-data disk basis).  This is called the Noisy
> >>> >> >> >> >> > Neighbor problem.  If, for example, one VM Instance
> >>> >> >> >> >> > starts getting "greedy," it can degrade the
> >>> >> >> >> >> > performance of the other VM Instances (or Data Disks)
> >>> >> >> >> >> > that share that SolidFire volume.
> >>> >> >> >> >> >
> >>> >> >> >> >> > Ideally, we would like to have a single VM Instance
> >>> >> >> >> >> > run on a single SolidFire volume and a single Data
> >>> >> >> >> >> > Disk be associated with a single SolidFire volume.
> >>> >> >> >> >> >
> >>> >> >> >> >> > How might I go about accomplishing this design goal?
> >>> >> >> >> >> >
> >>> >> >> >> >> > Thanks!!
> >>> >> >> >> >> >
> >>> >> >> >> >>
> >>> >> >> >> >
> >>> >> >> >> >
> >>> >> >> >> >
> >>> >> >> >>
> >>> >> >> >
> >>> >> >> >
> >>> >> >> >
> >>> >> >>
> >>> >> >
> >>> >> >
> >>> >> >
> >>> >>
> >>> >
> >>> >
> >>> >
> >>>
> >>
> >>
> >>
> >>
> >
> >
> >
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the cloud <http://solidfire.com/solution/overview/?video=play>
*™*
