Subject: Re: Storage Quality-of-Service Question
From: Mike Tutkowski <mike.tutkowski@solidfire.com>
To: cloudstack-dev@incubator.apache.org
Date: Fri, 1 Feb 2013 12:55:08 -0700

Hey Marcus,

So, before I get too involved in the Max/Min IOPS part of this work, I'd
like to first understand more about the way CloudStack is changing to
enable dynamic creation of a single volume (LUN) for a VM Instance or
Data Disk.

Is there somewhere you could point me to learn about the code I would
need to write to leverage this new architecture?

Thanks!!
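For context: the architecture in question (Edison's storage refactor,
described further down-thread) would have CloudStack ask a storage plugin
to provision one LUN per volume, rather than carving volumes out of a
shared pool. Below is a minimal sketch of what such a plugin hook might
look like; the interface, method names, and SolidFireClient helper are
all hypothetical, not the actual refactor API:

    // Hypothetical SAN client; a real one would wrap SolidFire's REST API.
    interface SolidFireClient {
        /** Creates a LUN and returns an identifier (e.g., an iSCSI IQN). */
        String createVolume(String name, long sizeInBytes,
                            long minIops, long maxIops);
    }

    // Plugin-side hook: CloudStack asks the driver for backing storage
    // for a single volume, and the driver answers with a dedicated LUN,
    // carrying the offering's QoS values straight through to the SAN.
    public class SolidFireDriverSketch {
        private final SolidFireClient sfClient;

        public SolidFireDriverSketch(SolidFireClient sfClient) {
            this.sfClient = sfClient;
        }

        /** One CloudStack volume maps to one SolidFire volume (one LUN). */
        public String createVolume(String volumeName, long sizeInBytes,
                                   long minIops, long maxIops) {
            return sfClient.createVolume(volumeName, sizeInBytes,
                                         minIops, maxIops);
        }
    }

Because each VM Instance or Data Disk gets its own LUN, the SAN can
enforce Min/Max IOPS per volume, which is what avoids the noisy-neighbor
problem discussed below.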
On Fri, Feb 1, 2013 at 9:55 AM, Mike Tutkowski wrote:

> I see...that makes sense.
>
> On Fri, Feb 1, 2013 at 9:50 AM, Marcus Sorensen <shadowsor@gmail.com> wrote:
>
>> Well, the offerings are up to the admin to create; the user just gets
>> to choose them. So we leave it up to the admin to create sane
>> offerings (not specifying CPU MHz that can't be satisfied, storage
>> sizes that can't be supported, etc.). We should make sure the
>> documentation and functional spec state how the feature is implemented
>> (i.e., an admin can't assume that CloudStack will just 'make it work';
>> it has to be supported by their primary storage).
>>
>> On Fri, Feb 1, 2013 at 8:13 AM, Mike Tutkowski wrote:
>>
>>> Ah, yeah, now that I think of it, I didn't really phrase that
>>> question all that well.
>>>
>>> What I meant to ask, Marcus, was: is there some way a user knows
>>> these fields (in this case, Max and Min IOPS) may or may not be
>>> honored, since that depends on the underlying storage's capabilities?
>>>
>>> Thanks!
>>>
>>> On Thu, Jan 31, 2013 at 10:31 PM, Marcus Sorensen <shadowsor@gmail.com> wrote:
>>>
>>>> Yes, there are optional fields. For example, if you register a new
>>>> compute offering you will see that some of the fields have red
>>>> stars (required), but network rate, for example, is optional.
>>>>
>>>> On Thu, Jan 31, 2013 at 10:07 PM, Mike Tutkowski wrote:
>>>>
>>>>> So, Marcus, you're thinking these values would be available for
>>>>> any Compute or Disk Offering regardless of the type of Primary
>>>>> Storage backing them, right?
>>>>>
>>>>> Is there a way we denote optional fields of this nature in
>>>>> CloudStack today (a way in which the end user would understand
>>>>> that these fields are not necessarily honored by all Primary
>>>>> Storage types)?
>>>>>
>>>>> Thanks for the info!
>>>>>
>>>>> On Thu, Jan 31, 2013 at 4:46 PM, Marcus Sorensen <shadowsor@gmail.com> wrote:
>>>>>
>>>>>> I would start by creating a functional spec; then people can
>>>>>> give input and help solidify exactly how it's implemented. There
>>>>>> are examples on the wiki. Or perhaps there is already one
>>>>>> describing the feature that you can comment on or add to. I
>>>>>> think a good place to start is simply getting the values into
>>>>>> the offerings, and adjusting any database schemas necessary to
>>>>>> accommodate that. Once the values are in the offerings, it can
>>>>>> be up to the various storage pool types to implement them or not.
>>>>>>
>>>>>> On Thu, Jan 31, 2013 at 4:42 PM, Mike Tutkowski wrote:
>>>>>>
>>>>>>> Cool...thanks, Marcus.
>>>>>>>
>>>>>>> So, how do you recommend I go about this? Although I've got
>>>>>>> recent CloudStack code on my machine and I've built and run it,
>>>>>>> I've not yet made any changes. Do you know of any documentation
>>>>>>> I could look at to learn the process involved in making
>>>>>>> CloudStack changes?
>>>>>>>
>>>>>>> On Thu, Jan 31, 2013 at 4:36 PM, Marcus Sorensen <shadowsor@gmail.com> wrote:
>>>>>>>
>>>>>>>> Yes, it would need to be part of the compute offering as well,
>>>>>>>> alongside the CPU/RAM and network limits. Then theoretically
>>>>>>>> users could provision an OS drive with relatively slow limits
>>>>>>>> and a database volume with higher limits (and a higher price
>>>>>>>> tag or something).
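Marcus's suggestion above (get the Max/Min IOPS values into the compute
and disk offerings first, adjusting the database schema as needed) might
look roughly like the following. This is a sketch, not CloudStack's
actual offering classes; the field and column names (min_iops, max_iops)
are assumptions:

    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.persistence.Table;

    // Hypothetical additions to a disk-offering entity: two optional
    // IOPS fields persisted alongside the existing offering columns.
    // Null means "no QoS requested," so storage types that can't honor
    // the values can simply ignore them, as discussed above.
    @Entity
    @Table(name = "disk_offering")
    public class DiskOfferingQosSketch {

        @Id
        @Column(name = "id")
        private long id;

        @Column(name = "min_iops") // assumed column name, nullable
        private Long minIops;

        @Column(name = "max_iops") // assumed column name, nullable
        private Long maxIops;

        public Long getMinIops() { return minIops; }
        public void setMinIops(Long minIops) { this.minIops = minIops; }

        public Long getMaxIops() { return maxIops; }
        public void setMaxIops(Long maxIops) { this.maxIops = maxIops; }
    }

The matching schema change would be a pair of nullable columns on the
disk_offering table (and similarly on the service-offering side, per
Marcus's note just above about compute offerings).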
>>>>>>>> On Thu, Jan 31, 2013 at 4:33 PM, Mike Tutkowski wrote:
>>>>>>>>
>>>>>>>>> Thanks for the info, Marcus!
>>>>>>>>>
>>>>>>>>> So, you are thinking that when the user creates a new Disk
>>>>>>>>> Offering, he or she would be given the option of specifying
>>>>>>>>> Max and Min IOPS? That makes sense when I think of Data Disks,
>>>>>>>>> but how does that figure into the kind of storage a VM
>>>>>>>>> Instance runs off of? I thought the way that works today is
>>>>>>>>> by specifying a Storage Tag in the Compute Offering.
>>>>>>>>>
>>>>>>>>> Thanks!
>>>>>>>>>
>>>>>>>>> On Thu, Jan 31, 2013 at 4:25 PM, Marcus Sorensen <shadowsor@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> So, this is what Edison's storage refactor is designed to
>>>>>>>>>> accomplish. Instead of the storage working the way it
>>>>>>>>>> currently does, creating a volume for a VM would consist of
>>>>>>>>>> the CloudStack server (or the volume service he has created)
>>>>>>>>>> talking to your SolidFire appliance, creating a new LUN, and
>>>>>>>>>> using that. Instead of a giant pool/LUN that all the VMs
>>>>>>>>>> share, each VM has its own LUN that is provisioned on the
>>>>>>>>>> fly by CloudStack.
>>>>>>>>>>
>>>>>>>>>> It sounds like maybe this will make it into 4.1 (I have to
>>>>>>>>>> go through my email today, but it sounded close).
>>>>>>>>>>
>>>>>>>>>> Either way, it would be a good idea to add a basic IOPS and
>>>>>>>>>> throughput limit to the disk offering. Then, whether you
>>>>>>>>>> implement it through cgroups on the Linux server, at the SAN
>>>>>>>>>> level, or through some other means on VMware or Xen, the
>>>>>>>>>> values are there to use.
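Marcus names cgroups on a Linux (KVM) host as one possible enforcement
point for these values. Here is a sketch of what that might look like
against the cgroup v1 blkio throttle interface; the cgroup path and the
device's major:minor numbers are assumptions, and real code would derive
both at runtime (libvirt can also manage these settings itself):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    // Illustrative only: cap a VM block device's IOPS via the cgroup v1
    // blkio controller. The kernel expects lines of the form
    // "<major>:<minor> <iops>" written to the throttle files.
    public class BlkioThrottleSketch {

        public static void throttleIops(String cgroupName, String majorMinor,
                                        long maxIops) throws IOException {
            // Assumed mount point; hosts may mount blkio elsewhere.
            Path cgroup = Paths.get("/sys/fs/cgroup/blkio", cgroupName);
            byte[] rule = (majorMinor + " " + maxIops + "\n").getBytes();
            Files.write(cgroup.resolve("blkio.throttle.read_iops_device"), rule);
            Files.write(cgroup.resolve("blkio.throttle.write_iops_device"), rule);
        }

        public static void main(String[] args) throws IOException {
            // Example: limit device 253:0 to 500 IOPS for cgroup "vm-i-2-17".
            throttleIops("vm-i-2-17", "253:0", 500L);
        }
    }

Note that host-side throttling like this can only enforce a maximum; a
guaranteed minimum (the "hard QoS" Mike describes below) has to come from
the storage side, e.g., the SAN itself.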
>>>>>>>>>> On Thu, Jan 31, 2013 at 4:19 PM, Mike Tutkowski wrote:
>>>>>>>>>>
>>>>>>>>>>> Hi everyone,
>>>>>>>>>>>
>>>>>>>>>>> A while back, I sent out a question regarding storage
>>>>>>>>>>> quality of service. A few of you chimed in with some good
>>>>>>>>>>> ideas.
>>>>>>>>>>>
>>>>>>>>>>> Now that I have a little more experience with CloudStack
>>>>>>>>>>> (these past couple of weeks, I've been able to get a real
>>>>>>>>>>> CS system up and running, create an iSCSI target, and make
>>>>>>>>>>> use of it from XenServer), I would like to pose my question
>>>>>>>>>>> again, but in a more refined way.
>>>>>>>>>>>
>>>>>>>>>>> A little background: I work for a data-storage company in
>>>>>>>>>>> Boulder, CO called SolidFire (http://solidfire.com). We
>>>>>>>>>>> build a highly fault-tolerant, clustered SAN technology
>>>>>>>>>>> consisting exclusively of SSDs. One of our main features is
>>>>>>>>>>> hard quality of service (QoS). You may have heard of QoS
>>>>>>>>>>> before. In our case, we refer to it as hard QoS because the
>>>>>>>>>>> end user has the ability to specify, on a volume-by-volume
>>>>>>>>>>> basis, what the maximum and minimum IOPS for a given volume
>>>>>>>>>>> should be. In other words, we do not have the user assign
>>>>>>>>>>> relative high, medium, and low priorities to volumes (the
>>>>>>>>>>> way you might with thread priorities), but rather hard IOPS
>>>>>>>>>>> limits.
>>>>>>>>>>>
>>>>>>>>>>> With this in mind, I would like to know how you would
>>>>>>>>>>> recommend I go about enabling CloudStack to support this
>>>>>>>>>>> feature.
>>>>>>>>>>>
>>>>>>>>>>> In my previous e-mail discussion, people suggested using
>>>>>>>>>>> the Storage Tag field. This is a good idea, but it does not
>>>>>>>>>>> fully satisfy my requirements.
>>>>>>>>>>>
>>>>>>>>>>> For example, if I created two large SolidFire volumes (by
>>>>>>>>>>> the way, one SolidFire volume equals one LUN), I could
>>>>>>>>>>> create two Primary Storage types to map onto them. One
>>>>>>>>>>> Primary Storage type could have the tag "high_perf" and the
>>>>>>>>>>> other the tag "normal_perf".
>>>>>>>>>>>
>>>>>>>>>>> I could then create Compute Offerings and Disk Offerings
>>>>>>>>>>> that referenced one Storage Tag or the other.
>>>>>>>>>>>
>>>>>>>>>>> This would guarantee that a VM Instance or Data Disk would
>>>>>>>>>>> run from one SolidFire volume or the other.
>>>>>>>>>>>
>>>>>>>>>>> The problem is that one SolidFire volume could be servicing
>>>>>>>>>>> multiple VM Instances and/or Data Disks. This may not seem
>>>>>>>>>>> like a problem, but it is, because in such a configuration
>>>>>>>>>>> our SAN can no longer guarantee IOPS on a VM-by-VM basis
>>>>>>>>>>> (or a data disk-by-data disk basis). This is called the
>>>>>>>>>>> noisy-neighbor problem. If, for example, one VM Instance
>>>>>>>>>>> starts getting "greedy," it can degrade the performance of
>>>>>>>>>>> the other VM Instances (or Data Disks) that share that
>>>>>>>>>>> SolidFire volume.
>>>>>>>>>>>
>>>>>>>>>>> Ideally, we would like a single VM Instance to run on a
>>>>>>>>>>> single SolidFire volume and a single Data Disk to be
>>>>>>>>>>> associated with a single SolidFire volume.
>>>>>>>>>>>
>>>>>>>>>>> How might I go about accomplishing this design goal?
>>>>>>>>>>>
>>>>>>>>>>> Thanks!!
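To make the limitation Mike describes concrete: storage tags only narrow
which primary storage a volume lands on; nothing stops many volumes from
sharing the one pool (one SolidFire LUN) that carries a given tag. A toy
sketch of that tag-matching logic follows; the class and field names are
illustrative, not CloudStack's actual allocator:

    import java.util.List;
    import java.util.Set;

    // Toy model of tag-based pool selection. Every volume whose offering
    // carries the "high_perf" tag lands on the same tagged pool, so one
    // greedy volume can still starve its neighbors on that shared LUN,
    // which is exactly the noisy-neighbor problem described above.
    public class TagAllocationSketch {

        record Pool(String name, Set<String> tags) {}

        static Pool pickPool(List<Pool> pools, Set<String> offeringTags) {
            return pools.stream()
                        .filter(p -> p.tags().containsAll(offeringTags))
                        .findFirst()        // many volumes, one pool
                        .orElseThrow();
        }

        public static void main(String[] args) {
            List<Pool> pools = List.of(
                new Pool("solidfire-lun-1", Set.of("high_perf")),
                new Pool("solidfire-lun-2", Set.of("normal_perf")));
            // Two different volumes asking for "high_perf" get the same LUN:
            System.out.println(pickPool(pools, Set.of("high_perf")).name());
            System.out.println(pickPool(pools, Set.of("high_perf")).name());
        }
    }

Provisioning one LUN per volume, as in the refactor discussed above,
removes this sharing entirely.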
--
Mike Tutkowski
Senior CloudStack Developer, SolidFire Inc.
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the cloud™