incubator-cloudstack-dev mailing list archives

From Mike Tutkowski <mike.tutkow...@solidfire.com>
Subject Storage Quality-of-Service Question
Date Thu, 31 Jan 2013 23:19:02 GMT
Hi everyone,

A while back, I had sent out a question regarding storage quality of
service.  A few of you chimed in with some good ideas.

Now that I have a little more experience with CloudStack (over the past
couple of weeks, I've been able to get a real CS system up and running,
create an iSCSI target, and make use of it from XenServer), I would like to
pose my question again, but in a more refined way.

A little background:  I work for a data-storage company in Boulder, CO,
called SolidFire (http://solidfire.com).  We build a highly fault-tolerant,
clustered SAN technology consisting exclusively of SSDs.  One of our main
features is hard quality of service (QoS).  You may have heard of QoS
before.  In our case, we refer to it as hard QoS because the end user has
the ability to specify on a volume-by-volume basis what the maximum and
minimum IOPS for a given volume should be.  In other words, we do not have
the user assign relative high, medium, and low priorities to volumes (the
way you might with thread priorities); instead, the user sets hard IOPS limits.
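
To make that concrete, here is a minimal sketch (in Python, and not our
official client; the method and field names like CreateVolume, minIOPS, and
maxIOPS are illustrative rather than copied from our API docs) of what
creating a volume with hard per-volume QoS looks like against a JSON-RPC
style SAN API:

import json
import urllib.request

def create_volume_with_qos(endpoint, name, size_gb, min_iops, max_iops):
    # One volume equals one LUN; the QoS settings travel with the volume.
    payload = {
        "method": "CreateVolume",
        "params": {
            "name": name,
            "totalSize": size_gb * 1024 ** 3,  # bytes
            "qos": {"minIOPS": min_iops, "maxIOPS": max_iops},
        },
        "id": 1,
    }
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# For example, a volume guaranteed at least 1,000 IOPS and capped at 5,000:
# create_volume_with_qos("https://san.example/json-rpc", "vm-001-root",
#                        size_gb=100, min_iops=1000, max_iops=5000)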

With this in mind, I would like to know how you would recommend I go about
enabling CloudStack to support this feature.

In my previous e-mail discussion, people suggested using the Storage Tag
field.  This is a good idea, but does not fully satisfy my requirements.

For example, if I created two large SolidFire volumes (by the way, one
SolidFire volume equals one LUN), I could create two Primary Storage types
to map onto them.  One Primary Storage type could have the tag "high_perf"
and the other the tag "normal_perf".

I could then create Compute Offerings and Disk Offerings that referenced
one Storage Tag or the other.

This would guarantee that a VM Instance or Data Disk would run from one
SolidFire volume or the other.
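
Concretely, the tag-based setup would look roughly like the following
(CloudStack API command and parameter names are from memory and may not be
exact; request signing and most other parameters are omitted):

# Primary Storage backed by one large SolidFire volume, tagged high_perf.
high_perf_pool = {
    "command": "createStoragePool",
    "zoneid": "ZONE-UUID",                  # placeholder
    "name": "SolidFire-HighPerf",
    "url": "iscsi://san.example/...",       # the LUN backing this pool
    "tags": "high_perf",
}

# A Disk Offering whose tag matches the pool above, so any data disk created
# from it lands on that SolidFire volume.
high_perf_disk_offering = {
    "command": "createDiskOffering",
    "name": "HighPerfData",
    "displaytext": "Data disk placed on storage tagged high_perf",
    "disksize": 100,                        # GB
    "tags": "high_perf",
}

A Compute Offering with the "normal_perf" tag would be defined the same way
against the other pool.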

The problem is that one SolidFire volume could be servicing multiple VM
Instances and/or Data Disks.  This may not seem like a problem, but it is,
because in that configuration our SAN can no longer guarantee IOPS on a
per-VM (or per-Data Disk) basis.  This is called the
Noisy Neighbor problem.  If, for example, one VM Instance starts getting
"greedy," it can degrade the performance of the other VM Instances (or Data
Disks) that share that SolidFire volume.

Ideally we would like to have a single VM Instance run on a single
SolidFire volume and a single Data Disk be associated with a single
SolidFire volume.
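
Sketched in the same style as the snippet earlier in this mail (the function
and offering field names here are hypothetical; this is only the shape of
the mapping, not an implementation), that one-to-one design looks like:

def provision_dedicated_disk(disk_offering, vm_name):
    # 1. Carve out a dedicated SolidFire volume so no other VM or Data Disk
    #    shares the LUN, carrying the offering's own QoS settings.
    volume = create_volume_with_qos(
        "https://san.example/json-rpc",
        name=vm_name + "-data",
        size_gb=disk_offering["size_gb"],
        min_iops=disk_offering["min_iops"],
        max_iops=disk_offering["max_iops"],
    )
    # 2. Attach the resulting iSCSI target to exactly one VM on the
    #    hypervisor (XenServer in my testing), preserving the 1:1 mapping.
    return volume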

How might I go about accomplishing this design goal?

Thanks!!

-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the cloud™
<http://solidfire.com/solution/overview/?video=play>
