cloudstack-dev mailing list archives

From Mike Tutkowski <mike.tutkow...@solidfire.com>
Subject Re: Storage Quality-of-Service Question
Date Tue, 05 Feb 2013 23:22:36 GMT
Good to know.

Thanks, Edison!


On Tue, Feb 5, 2013 at 4:20 PM, Edison Su <Edison.su@citrix.com> wrote:

> Yes, having grantAccess return an IQN should be enough, and yes, it's called
> after createAsync.
>
> BTW, is the iSCSI LUN accessible to all of the hypervisor hosts? grantAccess
> has a second parameter, EndPoint, which has the IP address of the client that
> wants to access the LUN. Whenever CloudStack wants to access the LUN, it will
> call grantAccess first. For example, in the attach-volume-to-a-VM case, the
> CloudStack mgt server will send a command to the hypervisor host where the VM
> is created; before doing that, the mgt server will call grantAccess with the
> hypervisor host's IP address. If the LUN is not accessible to everyone, then
> you may need to call the storage box's API to grant access for the specified
> end point.
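
A minimal sketch of how a driver might react to that grantAccess call, assuming
a hypothetical SolidFireClient API and simplified method signatures (the real
PrimaryDataStoreDriver and EndPoint types in the storage_refactor branch may
differ):

    // Sketch only, not the actual CloudStack interface. SolidFireClient and its
    // addInitiatorToVolumeAccessGroup() method are hypothetical stand-ins for
    // the storage box's API mentioned above.
    public class GrantAccessSketch {

        // Hypothetical client for the SAN's management API.
        interface SolidFireClient {
            void addInitiatorToVolumeAccessGroup(String volumeIqn, String hostIp);
        }

        private final SolidFireClient sfClient;

        public GrantAccessSketch(SolidFireClient sfClient) {
            this.sfClient = sfClient;
        }

        // Called before a hypervisor host touches the LUN. hostIp plays the
        // role of the EndPoint's IP address; the returned string is what the
        // hypervisor uses to reach the volume (here, the iSCSI IQN).
        public String grantAccess(String volumeIqn, String hostIp) {
            // If the LUN is not already visible to every host, whitelist this
            // one on the array before the mgt server sends the attach command.
            sfClient.addInitiatorToVolumeAccessGroup(volumeIqn, hostIp);
            return volumeIqn;
        }
    }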
>
>
> *From:* Mike Tutkowski [mailto:mike.tutkowski@solidfire.com]
> *Sent:* Tuesday, February 05, 2013 2:52 PM
> *To:* Edison Su
> *Cc:* cloudstack-dev@incubator.apache.org
>
> *Subject:* Re: Storage Quality-of-Service Question
>
> Thanks for all the info, Edison!
>
> I've been playing around with createAsync and deleteAsync today.  I tried
> to pattern these off of DefaultPrimaryDataStoreDriverImpl.
>
> So, for grantAccess, since I am dealing with an iSCSI volume (single LUN,
> in our case), I could return an IQN?  Is that correct?
>
> I assume grantAccess is called after createAsync (otherwise I wouldn't
> have an IQN to provide)?
>
> On Tue, Feb 5, 2013 at 3:34 PM, Edison Su <Edison.su@citrix.com> wrote:
>
>
>
> > -----Original Message-----
> > From: Mike Tutkowski [mailto:mike.tutkowski@solidfire.com]
>
> > Sent: Friday, February 01, 2013 9:18 PM
> > To: cloudstack-dev@incubator.apache.org
> > Subject: Re: Storage Quality-of-Service Question
> >
>
> > Hi Edison,
> >
> > Thanks for the info!!  I'm excited to start developing that plug-in.  :)
> >
> > I'm not sure if there is any documentation on what I'm about to ask here,
> > so I'll just ask:
> >
> > From a usability standpoint, how does this plug-in architecture manifest
> > itself?  For example, today an admin has to create a Primary Storage type,
> > tag it, then reference the tag from a Compute and/or Disk Offering.
> >
> > How will this user interaction look when plug-ins are available?  Does the
> > user have a special option when creating a Compute and/or Disk Offering
> > that will trigger the execution of the plug-in at some point to dynamically
> > create a volume?
>
> The user doesn't need to know, and shouldn't need to know, the underlying
> storage system; all users want is to create a data disk or root disk with a
> certain disk offering. Right now, you can specify local or shared storage, or
> storage tags, in a disk offering. In the future, we can add IOPS to the disk
> offering, if that's what you are looking for.
> Let's go through the code, taking the creation of a data disk as an example:
> 1. Admin creates a disk offering with IOPS 10000 and names it
> "media-performance-disk".
> 2. User selects the above disk offering when creating a data disk from the UI.
> 3. The UI calls the cloudstack mgt server via CreateVolumeCmd, which creates a
> DB entry in the volumes table: the code is in CreateVolumeCmd.java and in
> volumemanagerimpl.java's createVolume method.
> 4. The user then attaches the volume to a VM via AttachVolumeCmd, which will:
>     4.1 first create the volume on primary storage:
> volumemanagerimpl -> attachVolumeToVM -> createVolumeOnPrimaryStorage ->
> createVolume -> volumeserviceImpl -> createVolumeAsync, which calls the
> storage driver's createAsync to actually create a volume on primary storage.
>     4.2 then send a command to the hypervisor host to attach the above volume
> to the VM.
>
>     In the 4.1 procedure above, the cloudstack mgt server decides which
> primary storage to use based on the disk offering and on where the VM is
> created. The storage pool selection algorithms are implementations of
> StoragePoolAllocator. Currently, these algorithms don't take IOPS into
> consideration. We can add that in the future.
>
> 5. Your driver's createAsync method is the place to actually create something
> on the storage. You can call the storage box's API directly here, or you can
> send a command to a hypervisor host. After the volume is created, you need to
> update the volume DB, for example by setting an identifier, either a UUID or a
> path for the volume, in the DB.
>
> 6. The driver's grantAccess method returns a string that represents the
> volume; the string is passed down to the hypervisor so that the hypervisor can
> access the volume. In your case, the string can be something like
> iscsi://target/path, if your storage box exports the volume as a LUN.
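
A rough illustration of steps 5 and 6 above; every type and method name here is
a placeholder, not the actual storage_refactor interface:

    // Sketch of the createAsync/grantAccess flow from steps 5 and 6. SanClient
    // and VolumeDao are illustrative stand-ins, not CloudStack types.
    public class CreateVolumeFlowSketch {

        interface SanClient {                  // stands in for the storage box's API
            String createLun(String name, long sizeInBytes);
            String getIscsiPath(String lunId); // e.g. "iscsi://<target>/<path>"
        }

        interface VolumeDao {                  // stands in for access to the volumes table
            void updatePath(long volumeId, String path);
        }

        private final SanClient san;
        private final VolumeDao volumeDao;

        public CreateVolumeFlowSketch(SanClient san, VolumeDao volumeDao) {
            this.san = san;
            this.volumeDao = volumeDao;
        }

        // Step 5: actually create something on the storage, then record an
        // identifier (a UUID or path) for the volume in the DB.
        public String createAsync(long volumeId, String name, long sizeInBytes) {
            String lunId = san.createLun(name, sizeInBytes);
            volumeDao.updatePath(volumeId, lunId);
            return lunId;
        }

        // Step 6: return the string the hypervisor will use to reach the volume.
        public String grantAccess(String lunId) {
            return san.getIscsiPath(lunId);
        }
    }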
>
>
>
>
> >
> > Just trying to get a feel for how this will work from both a programming
> > and a user point of view.
> >
> > Thanks!
> >
> >
> > On Fri, Feb 1, 2013 at 3:57 PM, Edison Su <Edison.su@citrix.com> wrote:
> >
> > > Hi Mike, sorry for the late reply to your email. I created a branch,
> > > "storage_refactor", to hack on the storage code; it has a simple
> > > framework to fit your requirements: zone-wide primary storage, and one
> > > LUN per data disk.
> > > There is even a maven project called
> > > cloud-plugin-storage-volume-solidfire; you can add your code into that
> > > project.
> > > In order to write a plugin for cloudstack storage, you need to write a
> > > storage provider, which provides implementations of
> > > PrimaryDataStoreLifeCycle and PrimaryDataStoreDriver.
> > > You can take a look at DefaultPrimaryDatastoreProviderImpl and
> > > AncientPrimaryDataStoreProviderImpl as examples. If you have any
> > > questions about the code, please let me know.
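
For orientation only, a bare skeleton of such a provider might look roughly
like the following; the two interface names come from this thread, but the
stubs and accessor methods are assumptions that would need to match the actual
classes in the storage_refactor branch:

    // Skeleton sketch. PrimaryDataStoreLifeCycle and PrimaryDataStoreDriver are
    // named in the thread but stubbed here; the provider shape is an assumption.
    public class SolidFireProviderSketch {

        // Stand-ins for the real interfaces in the storage_refactor branch.
        interface PrimaryDataStoreLifeCycle { /* add/attach/delete primary storage */ }
        interface PrimaryDataStoreDriver { /* createAsync, deleteAsync, grantAccess, ... */ }

        private final PrimaryDataStoreLifeCycle lifeCycle;
        private final PrimaryDataStoreDriver driver;

        public SolidFireProviderSketch(PrimaryDataStoreLifeCycle lifeCycle,
                                       PrimaryDataStoreDriver driver) {
            this.lifeCycle = lifeCycle;
            this.driver = driver;
        }

        public String getName() {
            return "SolidFire";   // how the provider identifies itself to the framework
        }

        public PrimaryDataStoreLifeCycle getLifeCycle() {
            return lifeCycle;
        }

        public PrimaryDataStoreDriver getDriver() {
            return driver;
        }
    }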
> > >
> > > > -----Original Message-----
> > > > From: Mike Tutkowski [mailto:mike.tutkowski@solidfire.com]
> > > > Sent: Friday, February 01, 2013 11:55 AM
> > > > To: cloudstack-dev@incubator.apache.org
> > > > Subject: Re: Storage Quality-of-Service Question
> > > >
> > > > Hey Marcus,
> > > >
> > > > So, before I get too involved in the Max/Min IOPS part of this work,
> > > > I'd like to first understand more about the way CS is changing to
> > > > enable dynamic creation of a single volume (LUN) for a VM Instance or
> > > > Data Disk.
> > > >
> > > > Is there somewhere you might be able to point me to where I could
> > > > learn about the code I would need to write to leverage this new
> > > > architecture?
> > > >
> > > > Thanks!!
> > > >
> > > >
> > > > On Fri, Feb 1, 2013 at 9:55 AM, Mike Tutkowski
> > > > <mike.tutkowski@solidfire.com> wrote:
> > > >
> > > > > I see...that makes sense.
> > > > >
> > > > >
> > > > > On Fri, Feb 1, 2013 at 9:50 AM, Marcus Sorensen
> > > > > <shadowsor@gmail.com> wrote:
> > > > >
> > > > >> Well, the offerings are up to the admin to create; the user just
> > > > >> gets to choose them. So we leave it up to the admin to create
> > > > >> sane offerings (not specify CPU MHz that can't be satisfied,
> > > > >> storage sizes that can't be supported, etc.). We should make sure
> > > > >> the documentation and functional spec state how the feature is
> > > > >> implemented (i.e. an admin can't assume that CloudStack will just
> > > > >> 'make it work'; it has to be supported by their primary storage).
> > > > >>
> > > > >> On Fri, Feb 1, 2013 at 8:13 AM, Mike Tutkowski
> > > > >> <mike.tutkowski@solidfire.com> wrote:
> > > > >> > Ah, yeah, now that I think of it, I didn't really phrase that
> > > > >> > question all that well.
> > > > >> >
> > > > >> > What I meant to ask, Marcus, was whether there is some way a user
> > > > >> > knows that the fields (in this case, Max and Min IOPS) may or may
> > > > >> > not be honored, because it depends on the underlying storage's
> > > > >> > capabilities?
> > > > >> >
> > > > >> > Thanks!
> > > > >> >
> > > > >> >
> > > > >> > On Thu, Jan 31, 2013 at 10:31 PM, Marcus Sorensen
> > > > >> > <shadowsor@gmail.com> wrote:
> > > > >> >
> > > > >> >> Yes, there are optional fields. For example, if you register a
> > > > >> >> new compute offering you will see that some of them have red
> > > > >> >> stars, but network rate, for example, is optional.
> > > > >> >>
> > > > >> >> On Thu, Jan 31, 2013 at 10:07 PM, Mike Tutkowski
> > > > >> >> <mike.tutkowski@solidfire.com> wrote:
> > > > >> >> > So, Marcus, you're thinking these values would be available
> > > > >> >> > for any Compute or Disk Offerings regardless of the type of
> > > > >> >> > Primary Storage that backs them, right?
> > > > >> >> >
> > > > >> >> > Is there a way we denote optional fields of this nature in CS
> > > > >> >> > today (a way in which the end user would understand that these
> > > > >> >> > fields are not necessarily honored by all Primary Storage
> > > > >> >> > types)?
> > > > >> >> >
> > > > >> >> > Thanks for the info!
> > > > >> >> >
> > > > >> >> >
> > > > >> >> > On Thu, Jan 31, 2013 at 4:46 PM, Marcus Sorensen
> > > > >> >> > <shadowsor@gmail.com> wrote:
> > > > >> >> >
> > > > >> >> >> I would start by creating a functional spec; then people
> > > > >> >> >> can give input and help solidify exactly how it's
> > > > >> >> >> implemented. There are examples on the wiki. Or perhaps
> > > > >> >> >> there is already one describing the feature that you can
> > > > >> >> >> comment on or add to. I think a good place to start is
> > > > >> >> >> simply trying to get the values into the offerings, and
> > > > >> >> >> adjusting any database schemas necessary to accommodate
> > > > >> >> >> that. Once the values are in the offerings, it can be up to
> > > > >> >> >> the various storage pool types to implement them or not.
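
As one possible shape for getting the values into the offerings, a sketch
follows; the column names, the nullable convention, and the JPA-style
annotations are assumptions and would need to follow CloudStack's actual
DiskOfferingVO and schema-upgrade conventions:

    // Sketch: nullable min/max IOPS fields mirroring hypothetical min_iops and
    // max_iops columns on the disk_offering table.
    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.persistence.Table;

    @Entity
    @Table(name = "disk_offering")
    public class DiskOfferingIopsSketch {

        @Id
        @Column(name = "id")
        private Long id;

        @Column(name = "min_iops")
        private Long minIops;       // null = no minimum guarantee requested

        @Column(name = "max_iops")
        private Long maxIops;       // null = no cap requested

        public Long getMinIops() { return minIops; }
        public void setMinIops(Long minIops) { this.minIops = minIops; }

        public Long getMaxIops() { return maxIops; }
        public void setMaxIops(Long maxIops) { this.maxIops = maxIops; }
    }

Once the fields exist, a storage pool type that cannot honor them can simply
ignore them, which matches the point above about leaving it up to the various
pool types.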
> > > > >> >> >>
> > > > >> >> >> On Thu, Jan 31, 2013 at 4:42 PM, Mike Tutkowski
> > > > >> >> >> <mike.tutkowski@solidfire.com> wrote:
> > > > >> >> >> > Cool...thanks, Marcus.
> > > > >> >> >> >
> > > > >> >> >> > So, how do you recommend I go about this?  Although I've
> > > > >> >> >> > got recent CS code on my machine and I've built and run
> > > > >> >> >> > it, I've not yet made any changes.  Do you know of any
> > > > >> >> >> > documentation I could look at to learn the process
> > > > >> >> >> > involved in making CS changes?
> > > > >> >> >> >
> > > > >> >> >> >
> > > > >> >> >> > On Thu, Jan 31, 2013 at 4:36 PM, Marcus Sorensen
> > > > >> >> >> > <shadowsor@gmail.com> wrote:
> > > > >> >> >> >
> > > > >> >> >> >> Yes, it would need to be a part of the compute offering
> > > > >> >> >> >> separately, alongside the CPU/RAM and network limits.
> > > > >> >> >> >> Then theoretically they could provision an OS drive with
> > > > >> >> >> >> relatively slow limits, and a database volume with higher
> > > > >> >> >> >> limits (and a higher price tag or something).
> > > > >> >> >> >>
> > > > >> >> >> >> On Thu, Jan 31, 2013 at 4:33 PM, Mike Tutkowski
> > > > >> >> >> >> <mike.tutkowski@solidfire.com> wrote:
> > > > >> >> >> >> > Thanks for the info, Marcus!
> > > > >> >> >> >> >
> > > > >> >> >> >> > So, you are thinking that when the user creates a new
> > > > >> >> >> >> > Disk Offering, he or she would be given the option of
> > > > >> >> >> >> > specifying Max and Min IOPS?  That makes sense when I
> > > > >> >> >> >> > think of Data Disks, but how does that figure into the
> > > > >> >> >> >> > kind of storage a VM Instance runs off of?  I thought
> > > > >> >> >> >> > the way that works today is by specifying a Storage Tag
> > > > >> >> >> >> > in the Compute Offering.
> > > > >> >> >> >> >
> > > > >> >> >> >> > Thanks!
> > > > >> >> >> >> >
> > > > >> >> >> >> >
> > > > >> >> >> >> > On Thu, Jan 31, 2013 at 4:25 PM, Marcus Sorensen
> > > > >> >> >> >> > <shadowsor@gmail.com> wrote:
> > > > >> >> >> >> >
> > > > >> >> >> >> >> So, this is what Edison's storage refactor is designed
> > > > >> >> >> >> >> to accomplish. Instead of the storage working the way
> > > > >> >> >> >> >> it currently does, creating a volume for a VM would
> > > > >> >> >> >> >> consist of the cloudstack server (or the volume service
> > > > >> >> >> >> >> he has created) talking to your solidfire appliance,
> > > > >> >> >> >> >> creating a new LUN, and using that. Now, instead of a
> > > > >> >> >> >> >> giant pool/LUN that each VM shares, each VM has its own
> > > > >> >> >> >> >> LUN that is provisioned on the fly by cloudstack.
> > > > >> >> >> >> >>
> > > > >> >> >> >> >> It sounds like maybe this will make it into 4.1 (I have
> > > > >> >> >> >> >> to go through my email today, but it sounded close).
> > > > >> >> >> >> >>
> > > > >> >> >> >> >> Either way, it would be a good idea to add this to the
> > > > >> >> >> >> >> disk offering as a basic IO and throughput limit; then,
> > > > >> >> >> >> >> whether you implement it through cgroups on the Linux
> > > > >> >> >> >> >> server, at the SAN level, or through some other means
> > > > >> >> >> >> >> on VMware or Xen, the values are there to use.
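
A minimal sketch of the cgroups route on a Linux host, assuming the cgroup v1
blkio controller is mounted at /sys/fs/cgroup/blkio and that the VM's cgroup
path and the device's major:minor numbers are already known (both values below
are hypothetical):

    // Sketch: writing a disk offering's IOPS limits into the cgroup v1 blkio
    // throttle files for a VM's block device. Paths and device numbers are
    // illustrative only.
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class BlkioThrottleSketch {

        public static void limitIops(String vmCgroup, String majorMinor,
                                     long readIops, long writeIops) throws IOException {
            Path base = Paths.get("/sys/fs/cgroup/blkio", vmCgroup);
            // The kernel expects lines of the form "<major>:<minor> <value>".
            Files.write(base.resolve("blkio.throttle.read_iops_device"),
                    (majorMinor + " " + readIops + "\n").getBytes());
            Files.write(base.resolve("blkio.throttle.write_iops_device"),
                    (majorMinor + " " + writeIops + "\n").getBytes());
        }

        public static void main(String[] args) throws IOException {
            // e.g. cap a VM's disk at 1000 read IOPS and 500 write IOPS on device 8:16
            limitIops("machine/vm-1234", "8:16", 1000, 500);
        }
    }

The SAN-level equivalent would be a call to the array's own QoS API instead of
these cgroup writes.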
> > > > >> >> >> >> >>
> > > > >> >> >> >> >> On Thu, Jan 31, 2013 at 4:19 PM, Mike Tutkowski
> > > > >> >> >> >> >> <mike.tutkowski@solidfire.com> wrote:
> > > > >> >> >> >> >> > Hi everyone,
> > > > >> >> >> >> >> >
> > > > >> >> >> >> >> > A while back, I had sent out a question regarding
> > > > >> >> >> >> >> > storage quality of service.  A few of you chimed in
> > > > >> >> >> >> >> > with some good ideas.
> > > > >> >> >> >> >> >
> > > > >> >> >> >> >> > Now that I have a little more experience with
> > > > >> >> >> >> >> > CloudStack (these past couple weeks, I've been able
> > > > >> >> >> >> >> > to get a real CS system up and running, create an
> > > > >> >> >> >> >> > iSCSI target, and make use of it from XenServer), I
> > > > >> >> >> >> >> > would like to pose my question again, but in a more
> > > > >> >> >> >> >> > refined way.
> > > > >> >> >> >> >> >
> > > > >> >> >> >> >> > A little background:  I work for a data-storage
> > > > >> >> >> >> >> > company in Boulder, CO called SolidFire
> > > > >> >> >> >> >> > (http://solidfire.com).  We build a highly
> > > > >> >> >> >> >> > fault-tolerant, clustered SAN technology consisting
> > > > >> >> >> >> >> > exclusively of SSDs.  One of our main features is
> > > > >> >> >> >> >> > hard quality of service (QoS).  You may have heard
> > > > >> >> >> >> >> > of QoS before.  In our case, we refer to it as hard
> > > > >> >> >> >> >> > QoS because the end user has the ability to specify,
> > > > >> >> >> >> >> > on a volume-by-volume basis, what the maximum and
> > > > >> >> >> >> >> > minimum IOPS for a given volume should be.  In other
> > > > >> >> >> >> >> > words, we do not have the user assign relative high,
> > > > >> >> >> >> >> > medium, and low priorities to volumes (the way you
> > > > >> >> >> >> >> > might do with thread priorities), but rather hard
> > > > >> >> >> >> >> > IOPS limits.
> > > > >> >> >> >> >> >
> > > > >> >> >> >> >> > With this in mind, I would like to know how you
> > > > >> >> >> >> >> > would recommend I go about enabling CloudStack to
> > > > >> >> >> >> >> > support this feature.
> > > > >> >> >> >> >> >
> > > > >> >> >> >> >> > In my previous e-mail discussion, people suggested
> > > > >> >> >> >> >> > using the Storage Tag field.  This is a good idea,
> > > > >> >> >> >> >> > but does not fully satisfy my requirements.
> > > > >> >> >> >> >> >
> > > > >> >> >> >> >> > For example, if I created two large SolidFire
> > > > >> >> >> >> >> > volumes (by the way, one SolidFire volume equals one
> > > > >> >> >> >> >> > LUN), I could create two Primary Storage types to
> > > > >> >> >> >> >> > map onto them.  One Primary Storage type could have
> > > > >> >> >> >> >> > the tag "high_perf" and the other the tag
> > > > >> >> >> >> >> > "normal_perf".
> > > > >> >> >> >> >> >
> > > > >> >> >> >> >> > I could then create Compute Offerings and Disk
> > > > >> >> >> >> >> > Offerings that referenced one Storage Tag or the
> > > > >> >> >> >> >> > other.
> > > > >> >> >> >> >> >
> > > > >> >> >> >> >> > This would guarantee that a VM Instance or Data Disk
> > > > >> >> >> >> >> > would run from one SolidFire volume or the other.
> > > > >> >> >> >> >> >
> > > > >> >> >> >> >> > The problem is that one SolidFire volume could be
> > > > >> >> >> >> >> > servicing multiple VM Instances and/or Data Disks.
> > > > >> >> >> >> >> > This may not seem like a problem, but it is, because
> > > > >> >> >> >> >> > in such a configuration our SAN can no longer
> > > > >> >> >> >> >> > guarantee IOPS on a VM-by-VM basis (or a data
> > > > >> >> >> >> >> > disk-by-data disk basis).  This is called the Noisy
> > > > >> >> >> >> >> > Neighbor problem.  If, for example, one VM Instance
> > > > >> >> >> >> >> > starts getting "greedy," it can degrade the
> > > > >> >> >> >> >> > performance of the other VM Instances (or Data
> > > > >> >> >> >> >> > Disks) that share that SolidFire volume.
> > > > >> >> >> >> >> >
> > > > >> >> >> >> >> > Ideally we would like to have a single VM Instance
> > > > >> >> >> >> >> > run on a single SolidFire volume and a single Data
> > > > >> >> >> >> >> > Disk be associated with a single SolidFire volume.
> > > > >> >> >> >> >> >
> > > > >> >> >> >> >> > How might I go about accomplishing this design goal?
> > > > >> >> >> >> >> >
> > > > >> >> >> >> >> > Thanks!!
> > > > >> >> >> >> >> >
> > > > >> >> >> >> >> > --
> > > > >> >> >> >> >> > *Mike Tutkowski*
> > > > >> >> >> >> >> > *Senior CloudStack Developer, SolidFire Inc.*
> > > > >> >> >> >> >> > e: mike.tutkowski@solidfire.com
> > > > >> >> >> >> >> > o: 303.746.7302
> > > > >> >> >> >> >> > Advancing the way the world uses the
> > > > >> >> >> >> >> > cloud<http://solidfire.com/solution/overview/?video=play>
> > > > >> >> >> >> >> > *(tm)*
> > > > >> >> >> >> >>
> > > > >> >> >> >> >
> > > > >> >> >> >> >
> > > > >> >> >> >> >
> > > > >> >> >> >> > --
> > > > >> >> >> >> > *Mike Tutkowski*
> > > > >> >> >> >> > *Senior CloudStack Developer, SolidFire Inc.*
> > > > >> >> >> >> > e: mike.tutkowski@solidfire.com
> > > > >> >> >> >> > o: 303.746.7302
> > > > >> >> >> >> > Advancing the way the world uses the
> > > > >> >> >> >> > cloud<http://solidfire.com/solution/overview/?video=play>
> > > > >> >> >> >> > *(tm)*
> > > > >> >> >> >>
> > > > >> >> >> >
> > > > >> >> >> >
> > > > >> >> >> >
> > > > >> >> >> > --
> > > > >> >> >> > *Mike Tutkowski*
> > > > >> >> >> > *Senior CloudStack Developer, SolidFire Inc.*
> > > > >> >> >> > e: mike.tutkowski@solidfire.com
> > > > >> >> >> > o: 303.746.7302
> > > > >> >> >> > Advancing the way the world uses the
> > > > >> >> >> > cloud<http://solidfire.com/solution/overview/?video=play>
> > > > >> >> >> > *(tm)*
> > > > >> >> >>
> > > > >> >> >
> > > > >> >> >
> > > > >> >> >
> > > > >> >> > --
> > > > >> >> > *Mike Tutkowski*
> > > > >> >> > *Senior CloudStack Developer, SolidFire Inc.*
> > > > >> >> > e: mike.tutkowski@solidfire.com
> > > > >> >> > o: 303.746.7302
> > > > >> >> > Advancing the way the world uses the
> > > > >> >> > cloud<http://solidfire.com/solution/overview/?video=play>
> > > > >> >> > *(tm)*
> > > > >> >>
> > > > >> >
> > > > >> >
> > > > >> >
> > > > >> > --
> > > > >> > *Mike Tutkowski*
> > > > >> > *Senior CloudStack Developer, SolidFire Inc.*
> > > > >> > e: mike.tutkowski@solidfire.com
> > > > >> > o: 303.746.7302
> > > > >> > Advancing the way the world uses the
> > > > >> > cloud<http://solidfire.com/solution/overview/?video=play>
> > > > >> > *(tm)*
> > > > >>
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > *Mike Tutkowski*
> > > > > *Senior CloudStack Developer, SolidFire Inc.*
> > > > > e: mike.tutkowski@solidfire.com
> > > > > o: 303.746.7302
> > > > > Advancing the way the world uses the
> > > > > cloud<http://solidfire.com/solution/overview/?video=play>
> > > > > *(tm)*
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > *Mike Tutkowski*
> > > > *Senior CloudStack Developer, SolidFire Inc.*
> > > > e: mike.tutkowski@solidfire.com
> > > > o: 303.746.7302
> > > > Advancing the way the world uses the
> > > > cloud<http://solidfire.com/solution/overview/?video=play>
> > > > *(tm)*
> > >
> >
> >
> >
> > --
> > *Mike Tutkowski*
> > *Senior CloudStack Developer, SolidFire Inc.*
> > e: mike.tutkowski@solidfire.com
> > o: 303.746.7302
> > Advancing the way the world uses the
> > cloud<http://solidfire.com/solution/overview/?video=play>
> > *(tm)*
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
> *™*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*
