From: Mike Tutkowski <mike.tutkowski@solidfire.com>
To: Edison Su
Cc: cloudstack-dev@incubator.apache.org
Date: Tue, 5 Feb 2013 16:22:36 -0700
Subject: Re: Storage Quality-of-Service Question

Good to know. Thanks, Edison!

On Tue, Feb 5, 2013 at 4:20 PM, Edison Su wrote:
> Yes, grantAccess returning an IQN should be enough, and yes, it's called
> after createAsync.
>
> BTW, is the iSCSI LUN accessible to all the hypervisor hosts? grantAccess
> has a second parameter, EndPoint, which has the IP address of a client who
> wants to access this LUN. Whenever CloudStack wants to access the LUN, it
> will call grantAccess first. For example, in the attach-volume-to-a-VM
> case, the CloudStack management server will send a command to the
> hypervisor host where the VM is created; before doing that, the management
> server will call grantAccess with the hypervisor host's IP address.
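As a rough illustration of that grantAccess flow: every type and method name below is an invented stand-in for this sketch, not the actual interface from the storage_refactor branch.

```java
// Sketch only: before a hypervisor host touches a LUN, the driver is handed
// that host's endpoint, can whitelist it on the storage box, and returns an
// identifier (here, an IQN) that is passed down to the hypervisor.
// None of these types are the real CloudStack interfaces.
public class GrantAccessSketch {

    // Stand-in for the EndPoint parameter (carries the client's IP address).
    public static class EndPoint {
        public final String ipAddress;
        public EndPoint(String ipAddress) { this.ipAddress = ipAddress; }
    }

    // Stand-in for a vendor storage-box API client; allowInitiator is a
    // hypothetical call, not a real SolidFire API method.
    public interface StorageBoxClient {
        void allowInitiator(String lunIqn, String initiatorIp);
    }

    // Called by the management server before it sends the attach command to
    // the hypervisor host; the returned string is what the host will use.
    public static String grantAccess(String lunIqn, EndPoint ep,
                                     StorageBoxClient box) {
        // If the LUN is not already visible to every host, grant access
        // for this specific endpoint.
        box.allowInitiator(lunIqn, ep.ipAddress);
        return lunIqn;
    }
}
```

If the box exposes the LUN to all hosts anyway, the whitelist call becomes a no-op and grantAccess reduces to returning the identifier.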
> If the LUN is not accessible to everyone, then you may need to call the
> storage box's API to grant access for the specified end point.
>
> From: Mike Tutkowski [mailto:mike.tutkowski@solidfire.com]
> Sent: Tuesday, February 05, 2013 2:52 PM
> To: Edison Su
> Cc: cloudstack-dev@incubator.apache.org
> Subject: Re: Storage Quality-of-Service Question
>
> Thanks for all the info, Edison!
>
> I've been playing around with createAsync and deleteAsync today. I tried
> to pattern these off of DefaultPrimaryDataStoreDriverImpl.
>
> So, for grantAccess, since I am dealing with an iSCSI volume (a single
> LUN, in our case), I could return an IQN? Is that correct?
>
> I assume grantAccess is called after createAsync (otherwise I wouldn't
> have an IQN to provide)?
>
> On Tue, Feb 5, 2013 at 3:34 PM, Edison Su wrote:
>
> > -----Original Message-----
> > From: Mike Tutkowski [mailto:mike.tutkowski@solidfire.com]
> > Sent: Friday, February 01, 2013 9:18 PM
> > To: cloudstack-dev@incubator.apache.org
> > Subject: Re: Storage Quality-of-Service Question
> >
> > Hi Edison,
> >
> > Thanks for the info!! I'm excited to start developing that plug-in. :)
> >
> > I'm not sure if there is any documentation on what I'm about to ask
> > here, so I'll just ask:
> >
> > From a usability standpoint, how does this plug-in architecture manifest
> > itself? For example, today an admin has to create a Primary Storage
> > type, tag it, then reference the tag from a Compute and/or Disk
> > Offering.
> >
> > How will this user interaction look when plug-ins are available?
> > Does the user have a special option when creating a Compute and/or Disk
> > Offering that will trigger the execution of the plug-in at some point to
> > dynamically create a volume?
>
> The user doesn't need to know, and shouldn't need to know, the underlying
> storage system; all the user wants is to create a data disk or root disk
> with a certain disk offering. Right now, you can specify local or shared
> storage, or storage tags, in a disk offering. In the future, we can add
> IOPS to the disk offering, if that's what you are looking for.
> Let's go through the code, taking creating a data disk as an example:
> 1. The admin creates a disk offering with IOPS 10000, named
> "media-performance-disk".
> 2. The user selects the above disk offering when creating a data disk
> from the UI.
> 3. The UI will call the CloudStack management server via CreateVolumeCmd,
> which will create a DB entry in the volumes table; the code is in
> CreateVolumeCmd.java and VolumeManagerImpl.java's createVolume method.
> 4. The user then attaches the volume to a VM via AttachVolumeCmd, which
> will:
>    4.1 create the volume on the primary storage first:
>    VolumeManagerImpl -> attachVolumeToVM -> createVolumeOnPrimaryStorage
>    -> createVolume -> VolumeServiceImpl -> createVolumeAsync, which calls
>    the storage driver's createAsync to actually create a volume on
>    primary storage;
>    4.2 then send a command to the hypervisor host to attach the above
>    volume to the VM.
>
> In the 4.1 procedure above, the CloudStack management server will decide,
> based on the disk offering and where the VM is created, which primary
> storage to use. The storage pool selection algorithms are implementations
> of StoragePoolAllocator. Currently, these algorithms don't take IOPS into
> consideration; we can add that in the future.
>
> 5. Your driver's createAsync method is the place to actually create
> something on the storage. You can call the storage box's API directly
> here, or you can send a command to the hypervisor host.
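Step 5 could be sketched roughly as follows; every type and method name here is an invented stand-in for illustration, not the real storage_refactor interface.

```java
// Sketch only: createAsync creates the LUN via the storage box's API and
// records the resulting identifier so it can be written back to the
// volumes table. None of these types are the real CloudStack interfaces.
public class CreateAsyncSketch {

    // Stand-in for a row in the volumes table.
    public static class VolumeEntry {
        public final String name;
        public final long sizeBytes;
        public String path; // identifier (UUID or path) set on creation

        public VolumeEntry(String name, long sizeBytes) {
            this.name = name;
            this.sizeBytes = sizeBytes;
        }
    }

    // Stand-in for the vendor API; createLun is hypothetical and returns
    // the new LUN's path/IQN.
    public interface StorageBoxClient {
        String createLun(String name, long sizeBytes);
    }

    // In the real flow this is invoked from createVolumeAsync and completes
    // an async callback; a plain return keeps the sketch short.
    public static VolumeEntry createAsync(VolumeEntry vol,
                                          StorageBoxClient box) {
        // Actually create something on the storage (alternatively, send a
        // command to the hypervisor host to do it).
        String path = box.createLun(vol.name, vol.sizeBytes);
        // Persist the identifier so later calls (e.g. grantAccess) can
        // find the volume again.
        vol.path = path;
        return vol;
    }
}
```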
> After the volume creation finishes, you need to update the volume DB; for
> example, set an identifier, either a UUID or a path of the volume, in the
> DB.
>
> 6. The driver's grantAccess method will return a string that represents
> the volume; the string will be passed down to the hypervisor so that the
> hypervisor can access the volume. In your case, the string can be
> something like iscsi://target/path, if your storage box exports the
> volume as a LUN.
>
> > Just trying to get a feel for how this will work from both a
> > programming and a user point of view.
> >
> > Thanks!
> >
> > On Fri, Feb 1, 2013 at 3:57 PM, Edison Su wrote:
> >
> > > Hi Mike, sorry for the late reply to your email. I created a branch,
> > > "storage_refactor", to hack on the storage code; it has a simple
> > > framework to fit your requirements: zone-wide primary storage, and
> > > one LUN per data disk.
> > > There is even a Maven project called
> > > cloud-plugin-storage-volume-solidfire; you can add your code to that
> > > project.
> > > In order to write a plugin for CloudStack storage, you need to write
> > > a storage provider, which provides implementations of
> > > PrimaryDataStoreLifeCycle and PrimaryDataStoreDriver.
> > > You can take a look at DefaultPrimaryDatastoreProviderImpl and
> > > AncientPrimaryDataStoreProviderImpl as examples. If you have any
> > > questions about the code, please let me know.
> > >
> > > > -----Original Message-----
> > > > From: Mike Tutkowski [mailto:mike.tutkowski@solidfire.com]
> > > > Sent: Friday, February 01, 2013 11:55 AM
> > > > To: cloudstack-dev@incubator.apache.org
> > > > Subject: Re: Storage Quality-of-Service Question
> > > >
> > > > Hey Marcus,
> > > >
> > > > So, before I get too involved in the Max/Min IOPS part of this
> > > > work, I'd like to first understand more about the way CS is
> > > > changing to enable dynamic creation of a single volume (LUN) for a
> > > > VM Instance or Data Disk.
> > > > Is there somewhere you might be able to point me to where I could
> > > > learn about the code I would need to write to leverage this new
> > > > architecture?
> > > >
> > > > Thanks!!
> > > >
> > > > On Fri, Feb 1, 2013 at 9:55 AM, Mike Tutkowski wrote:
> > > >
> > > > > I see...that makes sense.
> > > > >
> > > > > On Fri, Feb 1, 2013 at 9:50 AM, Marcus Sorensen wrote:
> > > > >
> > > > >> Well, the offerings are up to the admin to create; the user just
> > > > >> gets to choose them. So we leave it up to the admin to create
> > > > >> sane offerings (not specifying CPU MHz that can't be satisfied,
> > > > >> storage sizes that can't be supported, etc.). We should make
> > > > >> sure the documentation and functional spec state how the feature
> > > > >> is implemented (i.e. an admin can't assume that CloudStack will
> > > > >> just 'make it work'; it has to be supported by their primary
> > > > >> storage).
> > > > >>
> > > > >> On Fri, Feb 1, 2013 at 8:13 AM, Mike Tutkowski wrote:
> > > > >> > Ah, yeah, now that I think of it, I didn't really phrase that
> > > > >> > question all that well.
> > > > >> >
> > > > >> > What I meant to ask, Marcus, was whether there is some way a
> > > > >> > user knows the fields (in this case, Max and Min IOPS) may or
> > > > >> > may not be honored, since that depends on the underlying
> > > > >> > storage's capabilities?
> > > > >> >
> > > > >> > Thanks!
> > > > >> >
> > > > >> > On Thu, Jan 31, 2013 at 10:31 PM, Marcus Sorensen wrote:
> > > > >> >> Yes, there are optional fields. For example, if you register
> > > > >> >> a new compute offering you will see that some of the fields
> > > > >> >> have red stars, but network rate, for example, is optional.
> > > > >> >> On Thu, Jan 31, 2013 at 10:07 PM, Mike Tutkowski wrote:
> > > > >> >> > So, Marcus, you're thinking these values would be available
> > > > >> >> > for any Compute or Disk Offering regardless of the type of
> > > > >> >> > Primary Storage that backs them, right?
> > > > >> >> >
> > > > >> >> > Is there a way we denote optional fields of this nature in
> > > > >> >> > CS today (a way in which the end user would understand that
> > > > >> >> > these fields are not necessarily honored by all Primary
> > > > >> >> > Storage types)?
> > > > >> >> >
> > > > >> >> > Thanks for the info!
> > > > >> >> >
> > > > >> >> > On Thu, Jan 31, 2013 at 4:46 PM, Marcus Sorensen wrote:
> > > > >> >> >> I would start by creating a functional spec; then people
> > > > >> >> >> can give input and help solidify exactly how it's
> > > > >> >> >> implemented. There are examples on the wiki. Or perhaps
> > > > >> >> >> there is already one describing the feature that you can
> > > > >> >> >> comment on or add to. I think a good place to start is
> > > > >> >> >> simply trying to get the values into the offerings, and
> > > > >> >> >> adjusting any database schemas necessary to accommodate
> > > > >> >> >> that. Once the values are in the offerings, it can be up
> > > > >> >> >> to the various storage pool types to implement them or
> > > > >> >> >> not.
> > > > >> >> >>
> > > > >> >> >> On Thu, Jan 31, 2013 at 4:42 PM, Mike Tutkowski wrote:
> > > > >> >> >> > Cool...thanks, Marcus.
> > > > >> >> >> >
> > > > >> >> >> > So, how do you recommend I go about this?
Although I've > > > > >> >> >> > got > > > > >> recent CS > > > > >> >> >> code > > > > >> >> >> > on my machine and I've built and run it, I've not yet > > > > >> >> >> > made any > > > > >> >> changes. > > > > >> >> >> Do > > > > >> >> >> > you know of any documentation I could look at to learn > > > > >> >> >> > the process > > > > >> >> >> involved > > > > >> >> >> > in making CS changes? > > > > >> >> >> > > > > > >> >> >> > > > > > >> >> >> > On Thu, Jan 31, 2013 at 4:36 PM, Marcus Sorensen < > > > > >> shadowsor@gmail.com > > > > >> >> >> >wrote: > > > > >> >> >> > > > > > >> >> >> >> Yes, it would need to be a part of compute offering > > > > >> >> >> >> separately, > > > > >> along > > > > >> >> >> >> the CPU/RAM and network limits. Then theoretically they > > > > >> >> >> >> could provision OS drive with relatively slow limits, > > > > >> >> >> >> and a database > > > > >> volume > > > > >> >> >> >> with higher limits (and higher pricetag or something). > > > > >> >> >> >> > > > > >> >> >> >> On Thu, Jan 31, 2013 at 4:33 PM, Mike Tutkowski > > > > >> >> >> >> wrote: > > > > >> >> >> >> > Thanks for the info, Marcus! > > > > >> >> >> >> > > > > > >> >> >> >> > So, you are thinking that when the user creates a new > > > > >> >> >> >> > Disk > > > > >> Offering > > > > >> >> >> that > > > > >> >> >> >> he > > > > >> >> >> >> > or she would be given the option of specifying Max an= d > > > > >> >> >> >> > Min > > > > >> IOPS? > > > > >> >> That > > > > >> >> >> >> > makes sense when I think of Data Disks, but how does > > > > >> >> >> >> > that > > > > >> figure > > > > >> >> into > > > > >> >> >> the > > > > >> >> >> >> > kind of storage a VM Instance runs off of? I thought > > > > >> >> >> >> > the way > > > > >> that > > > > >> >> >> works > > > > >> >> >> >> > today is by specifying in the Compute Offering a > > > > >> >> >> >> > Storage > > > Tag. > > > > >> >> >> >> > > > > > >> >> >> >> > Thanks! 
> > > > >> >> >> >> > On Thu, Jan 31, 2013 at 4:25 PM, Marcus Sorensen
> > > > >> >> >> >> > wrote:
> > > > >> >> >> >> >> So, this is what Edison's storage refactor is
> > > > >> >> >> >> >> designed to accomplish. Instead of the storage
> > > > >> >> >> >> >> working the way it currently does, creating a volume
> > > > >> >> >> >> >> for a VM would consist of the CloudStack server (or
> > > > >> >> >> >> >> volume service, as he has created) talking to your
> > > > >> >> >> >> >> SolidFire appliance, creating a new LUN, and using
> > > > >> >> >> >> >> that. Now, instead of a giant pool/LUN that all VMs
> > > > >> >> >> >> >> share, each VM has its own LUN that is provisioned
> > > > >> >> >> >> >> on the fly by CloudStack.
> > > > >> >> >> >> >>
> > > > >> >> >> >> >> It sounds like maybe this will make it into 4.1 (I
> > > > >> >> >> >> >> have to go through my email today, but it sounded
> > > > >> >> >> >> >> close).
> > > > >> >> >> >> >>
> > > > >> >> >> >> >> Either way, it would be a good idea to add this to
> > > > >> >> >> >> >> the disk offering: a basic IO and throughput limit.
> > > > >> >> >> >> >> Then, whether you implement it through cgroups on
> > > > >> >> >> >> >> the Linux server, at the SAN level, or through some
> > > > >> >> >> >> >> other means on VMware or Xen, the values are there
> > > > >> >> >> >> >> to use.
> > > > >> >> >> >> >> On Thu, Jan 31, 2013 at 4:19 PM, Mike Tutkowski
> > > > >> >> >> >> >> wrote:
> > > > >> >> >> >> >> > Hi everyone,
> > > > >> >> >> >> >> >
> > > > >> >> >> >> >> > A while back, I sent out a question regarding
> > > > >> >> >> >> >> > storage quality of service. A few of you chimed in
> > > > >> >> >> >> >> > with some good ideas.
> > > > >> >> >> >> >> >
> > > > >> >> >> >> >> > Now that I have a little more experience with
> > > > >> >> >> >> >> > CloudStack (these past couple of weeks, I've been
> > > > >> >> >> >> >> > able to get a real CS system up and running,
> > > > >> >> >> >> >> > create an iSCSI target, and make use of it from
> > > > >> >> >> >> >> > XenServer), I would like to pose my question
> > > > >> >> >> >> >> > again, but in a more refined way.
> > > > >> >> >> >> >> >
> > > > >> >> >> >> >> > A little background: I work for a data-storage
> > > > >> >> >> >> >> > company in Boulder, CO called SolidFire
> > > > >> >> >> >> >> > (http://solidfire.com). We build a highly
> > > > >> >> >> >> >> > fault-tolerant, clustered SAN technology
> > > > >> >> >> >> >> > consisting exclusively of SSDs. One of our main
> > > > >> >> >> >> >> > features is hard quality of service (QoS). You may
> > > > >> >> >> >> >> > have heard of QoS before.
> > > > >> >> >> >> >> > In our case, we refer to it as hard QoS because
> > > > >> >> >> >> >> > the end user has the ability to specify, on a
> > > > >> >> >> >> >> > volume-by-volume basis, what the maximum and
> > > > >> >> >> >> >> > minimum IOPS for a given volume should be. In
> > > > >> >> >> >> >> > other words, we do not have the user assign
> > > > >> >> >> >> >> > relative high, medium, and low priorities to
> > > > >> >> >> >> >> > volumes (the way you might do with thread
> > > > >> >> >> >> >> > priorities), but rather hard IOPS limits.
> > > > >> >> >> >> >> >
> > > > >> >> >> >> >> > With this in mind, I would like to know how you
> > > > >> >> >> >> >> > would recommend I go about enabling CloudStack to
> > > > >> >> >> >> >> > support this feature.
> > > > >> >> >> >> >> >
> > > > >> >> >> >> >> > In my previous e-mail discussion, people suggested
> > > > >> >> >> >> >> > using the Storage Tag field. This is a good idea,
> > > > >> >> >> >> >> > but it does not fully satisfy my requirements.
> > > > >> >> >> >> >> >
> > > > >> >> >> >> >> > For example, if I created two large SolidFire
> > > > >> >> >> >> >> > volumes (by the way, one SolidFire volume equals
> > > > >> >> >> >> >> > one LUN), I could create two Primary Storage types
> > > > >> >> >> >> >> > to map onto them. One Primary Storage type could
> > > > >> >> >> >> >> > have the tag "high_perf" and the other the tag
> > > > >> >> >> >> >> > "normal_perf".
> > > > >> >> >> >> >> > I could then create Compute Offerings and Disk
> > > > >> >> >> >> >> > Offerings that referenced one Storage Tag or the
> > > > >> >> >> >> >> > other.
> > > > >> >> >> >> >> >
> > > > >> >> >> >> >> > This would guarantee that a VM Instance or Data
> > > > >> >> >> >> >> > Disk would run from one SolidFire volume or the
> > > > >> >> >> >> >> > other.
> > > > >> >> >> >> >> >
> > > > >> >> >> >> >> > The problem is that one SolidFire volume could be
> > > > >> >> >> >> >> > servicing multiple VM Instances and/or Data Disks.
> > > > >> >> >> >> >> > This may not seem like a problem, but it is,
> > > > >> >> >> >> >> > because in such a configuration our SAN can no
> > > > >> >> >> >> >> > longer guarantee IOPS on a VM-by-VM basis (or a
> > > > >> >> >> >> >> > data disk-by-data disk basis). This is called the
> > > > >> >> >> >> >> > Noisy Neighbor problem. If, for example, one VM
> > > > >> >> >> >> >> > Instance starts getting "greedy," it can degrade
> > > > >> >> >> >> >> > the performance of the other VM Instances (or Data
> > > > >> >> >> >> >> > Disks) that share that SolidFire volume.
> > > > >> >> >> >> >> >
> > > > >> >> >> >> >> > Ideally, we would like to have a single VM
> > > > >> >> >> >> >> > Instance run on a single SolidFire volume and a
> > > > >> >> >> >> >> > single Data Disk be associated with a single
> > > > >> >> >> >> >> > SolidFire volume.
> > > > >> >> >> >> >> >
> > > > >> >> >> >> >> > How might I go about accomplishing this design
> > > > >> >> >> >> >> > goal?
> > > > >> >> >> >> >> > Thanks!!
> > > > >> >> >> >> >> >
> > > > >> >> >> >> >> > --
> > > > >> >> >> >> >> > *Mike Tutkowski*
> > > > >> >> >> >> >> > *Senior CloudStack Developer, SolidFire Inc.*
> > > > >> >> >> >> >> > e: mike.tutkowski@solidfire.com
> > > > >> >> >> >> >> > o: 303.746.7302
> > > > >> >> >> >> >> > Advancing the way the world uses the cloud
> > > > >> >> >> >> >> > *(tm)*

--
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the cloud
*™*