cloudstack-dev mailing list archives

From Mike Tutkowski <mike.tutkow...@solidfire.com>
Subject Re: Storage Quality-of-Service Question
Date Mon, 04 Feb 2013 20:27:53 GMT
As I delve into the code a bit more, it looks like I should start with
SolidfirePrimaryDataStoreProvider and override the configure method to look
something like this (the two commented lines are where configure differs
from the version inherited from DefaultPrimaryDatastoreProviderImpl):

    public boolean configure(Map<String, Object> params) {
        // These two lines differ from the inherited implementation:
        lifeCycle = ComponentContext.inject(SolidFirePrimaryDataStoreLifeCycle.class);
        driver = ComponentContext.inject(SolidfirePrimaryDataStoreDriver.class);

        HypervisorHostListener listener = ComponentContext.inject(DefaultHostListener.class);

        uuid = (String) params.get("uuid");
        id = (Long) params.get("id");

        storeMgr.registerDriver(uuid, this.driver);
        storeMgr.registerHostListener(uuid, listener);

        return true;
    }
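The injection-then-register pattern above boils down to keying a driver and a
host listener off the storage pool's uuid. A stripped-down, self-contained
sketch of that registration pattern (the types here are simplified stand-ins,
not the real CloudStack interfaces):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical, simplified stand-ins for the CloudStack types; the real
// interfaces live in the storage_refactor branch.
interface PrimaryDataStoreDriver { }
interface HypervisorHostListener { }

// The manager keeps one driver and one host listener per storage pool,
// keyed by the pool's uuid -- the same idea as storeMgr.registerDriver()
// and storeMgr.registerHostListener() above.
class DataStoreManager {
    private final Map<String, PrimaryDataStoreDriver> drivers = new HashMap<>();
    private final Map<String, HypervisorHostListener> listeners = new HashMap<>();

    void registerDriver(String uuid, PrimaryDataStoreDriver d) { drivers.put(uuid, d); }
    void registerHostListener(String uuid, HypervisorHostListener l) { listeners.put(uuid, l); }
    PrimaryDataStoreDriver getDriver(String uuid) { return drivers.get(uuid); }
}

public class RegistryDemo {
    public static boolean demo() {
        DataStoreManager mgr = new DataStoreManager();
        PrimaryDataStoreDriver solidfireDriver = new PrimaryDataStoreDriver() { };
        mgr.registerDriver("pool-uuid-1", solidfireDriver);
        mgr.registerHostListener("pool-uuid-1", new HypervisorHostListener() { });
        // Later storage operations look the driver up by the same uuid.
        return mgr.getDriver("pool-uuid-1") == solidfireDriver;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints "true"
    }
}
```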


On Mon, Feb 4, 2013 at 12:52 PM, Mike Tutkowski
<mike.tutkowski@solidfire.com> wrote:

> Hi Edison!
>
> So, I updated my local Git repository today and switched over to the
> storage_refactor branch.
>
> I located the SolidfirePrimaryDataStoreDriver and
> SolidfirePrimaryDataStoreProvider classes.
>
> I used the following URL to try to figure out how I should implement the
> appropriate methods:
>
>
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Storage+subsystem+2.0
>
> It looks like the parameters to certain methods have changed since the
> design went up on the Wiki.
>
> Either way, is there a particular approach you'd recommend?  For example,
> should I just start at grantAccess and make my way through the required
> methods (that's probably the way to go)?
>
> Assuming I should do that, then I'm looking at the following to start:
>
>     public String grantAccess(DataObject data, EndPoint ep) {
>
>         // TODO Auto-generated method stub
>
>         return null;
>
>     }
>
>
> So, I'm not really sure what the String I return should look like.  Do we
> have any examples for this?  I'm also not sure of the exact purpose of the
> method.  For example, what kind of work on my end is expected?  SolidFire
> has a really robust API I can call, but I'm not sure what's expected of me
> on the CloudStack side.
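Purely as a guess at the kind of thing grantAccess might return (its actual
contract is exactly the open question above): a SAN-backed driver would
plausibly hand back a string the endpoint can use to attach the volume, such
as an iSCSI target/LUN identifier. Everything in this sketch is made up for
illustration:

```java
// Illustrative only -- not the real grantAccess() contract. This just
// shows the kind of value a SAN-backed driver *might* return: a string
// the endpoint can use to attach the storage, e.g. an iSCSI target plus
// LUN number. All names and the format are invented.
public class GrantAccessSketch {

    // Hypothetical helper: compose an attachable identifier from fields
    // the driver would have looked up (or created) via the SAN's API.
    public static String buildAccessString(String targetIqn, int lun) {
        return targetIqn + "/" + lun;
    }

    public static void main(String[] args) {
        System.out.println(buildAccessString("iqn.2010-01.com.solidfire:vol-7", 0));
    }
}
```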
>
> Thanks!
>
>
> On Fri, Feb 1, 2013 at 10:17 PM, Mike Tutkowski
> <mike.tutkowski@solidfire.com> wrote:
>
>> Hi Edison,
>>
>> Thanks for the info!!  I'm excited to start developing that plug-in.  :)
>>
>> I'm not sure if there is any documentation on what I'm about to ask here,
>> so I'll just ask:
>>
>> From a usability standpoint, how does this plug-in architecture manifest
>> itself?  For example, today an admin has to create a Primary Storage type,
>> tag it, then reference the tag from a Compute and/or Disk Offering.
>>
>> How will this user interaction look when plug-ins are available?  Does
>> the user have a special option when creating a Compute and/or Disk Offering
>> that will trigger the execution of the plug-in at some point to dynamically
>> create a volume?
>>
>> Just trying to get a feel for how this will work from both a programming
>> and a user point of view.
>>
>> Thanks!
>>
>>
>> On Fri, Feb 1, 2013 at 3:57 PM, Edison Su <Edison.su@citrix.com> wrote:
>>
>>> Hi Mike, sorry for the late reply to your email. I created a branch
>>> "storage_refactor" to hack on the storage code; it has a simple framework
>>> to fit your requirements: zone-wide primary storage, and one LUN per data
>>> disk. There is even a maven project called
>>> cloud-plugin-storage-volume-solidfire; you can add your code to that
>>> project.
>>> In order to write a storage plugin for cloudstack, you need to write a
>>> storage provider, which provides implementations of
>>> PrimaryDataStoreLifeCycle and PrimaryDataStoreDriver.
>>> You can take a look at DefaultPrimaryDatastoreProviderImpl and
>>> AncientPrimaryDataStoreProviderImpl as examples. If you have any
>>> questions about the code, please let me know.
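Edison's description reduces to supplying two implementations and bundling
them in a provider. A minimal self-contained sketch of that shape (the
interface signatures here are assumed for illustration; the real ones live
in the storage_refactor branch and will differ):

```java
import java.util.Map;

// Hypothetical, simplified shapes of the two interfaces named above.
interface PrimaryDataStoreLifeCycle {
    boolean initialize(Map<String, Object> dsInfos);
}

interface PrimaryDataStoreDriver {
    String grantAccess(Object data, Object endPoint);
}

// A provider simply bundles one implementation of each, which is what the
// DefaultPrimaryDatastoreProviderImpl example wires up in configure().
public class ProviderSkeleton {
    private final PrimaryDataStoreLifeCycle lifeCycle;
    private final PrimaryDataStoreDriver driver;

    public ProviderSkeleton(PrimaryDataStoreLifeCycle lc, PrimaryDataStoreDriver d) {
        this.lifeCycle = lc;
        this.driver = d;
    }

    public PrimaryDataStoreDriver getDriver() { return driver; }
    public PrimaryDataStoreLifeCycle getLifeCycle() { return lifeCycle; }

    public static void main(String[] args) {
        // Stub implementations stand in for the SolidFire-specific classes.
        ProviderSkeleton p = new ProviderSkeleton(
                dsInfos -> true,
                (data, ep) -> null);
        System.out.println(p.getDriver() != null && p.getLifeCycle() != null);
    }
}
```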
>>>
>>> > -----Original Message-----
>>> > From: Mike Tutkowski [mailto:mike.tutkowski@solidfire.com]
>>> > Sent: Friday, February 01, 2013 11:55 AM
>>> > To: cloudstack-dev@incubator.apache.org
>>> > Subject: Re: Storage Quality-of-Service Question
>>> >
>>> > Hey Marcus,
>>> >
>>> > So, before I get too involved in the Max/Min IOPS part of this work,
>>> > I'd like to first understand more about the way CS is changing to
>>> > enable dynamic creation of a single volume (LUN) for a VM Instance or
>>> > Data Disk.
>>> >
>>> > Is there somewhere you might be able to point me to where I could
>>> > learn about the code I would need to write to leverage this new
>>> > architecture?
>>> >
>>> > Thanks!!
>>> >
>>> >
>>> > On Fri, Feb 1, 2013 at 9:55 AM, Mike Tutkowski
>>> > <mike.tutkowski@solidfire.com> wrote:
>>> >
>>> > > I see...that makes sense.
>>> > >
>>> > >
>>> > > On Fri, Feb 1, 2013 at 9:50 AM, Marcus Sorensen
>>> > > <shadowsor@gmail.com> wrote:
>>> > >
>>> > >> well, the offerings are up to the admin to create; the user just
>>> > >> gets to choose them. So we leave it up to the admin to create sane
>>> > >> offerings (not specifying CPU MHz that can't be satisfied, storage
>>> > >> sizes that can't be supported, etc.). We should make sure the
>>> > >> documentation and functional spec state how the feature is
>>> > >> implemented (i.e. an admin can't assume that cloudstack will just
>>> > >> 'make it work'; it has to be supported by their primary storage).
>>> > >>
>>> > >> On Fri, Feb 1, 2013 at 8:13 AM, Mike Tutkowski
>>> > >> <mike.tutkowski@solidfire.com> wrote:
>>> > >> > Ah, yeah, now that I think of it, I didn't really phrase that
>>> > >> > question all that well.
>>> > >> >
>>> > >> > What I meant to ask, Marcus, was whether there is some way a user
>>> > >> > knows the fields (in this case, Max and Min IOPS) may or may not
>>> > >> > be honored, because that depends on the underlying storage's
>>> > >> > capabilities.
>>> > >> >
>>> > >> > Thanks!
>>> > >> >
>>> > >> >
>>> > >> > On Thu, Jan 31, 2013 at 10:31 PM, Marcus Sorensen
>>> > >> > <shadowsor@gmail.com> wrote:
>>> > >> >
>>> > >> >> Yes, there are optional fields. For example, if you register a
>>> > >> >> new compute offering you will see that some of the fields have
>>> > >> >> red stars (required), but network rate, for example, is
>>> > >> >> optional.
>>> > >> >>
>>> > >> >> On Thu, Jan 31, 2013 at 10:07 PM, Mike Tutkowski
>>> > >> >> <mike.tutkowski@solidfire.com> wrote:
>>> > >> >> > So, Marcus, you're thinking these values would be available
>>> > >> >> > for any Compute or Disk Offerings regardless of the type of
>>> > >> >> > Primary Storage that backs them, right?
>>> > >> >> >
>>> > >> >> > Is there a way we denote optional fields of this nature in CS
>>> > >> >> > today (a way in which the end user would understand that these
>>> > >> >> > fields are not necessarily honored by all Primary Storage
>>> > >> >> > types)?
>>> > >> >> >
>>> > >> >> > Thanks for the info!
>>> > >> >> >
>>> > >> >> >
>>> > >> >> > On Thu, Jan 31, 2013 at 4:46 PM, Marcus Sorensen
>>> > >> >> > <shadowsor@gmail.com> wrote:
>>> > >> >> >> I would start by creating a functional spec; then people can
>>> > >> >> >> give input and help solidify exactly how it's implemented.
>>> > >> >> >> There are examples on the wiki. Or perhaps there is already
>>> > >> >> >> one describing the feature that you can comment on or add to.
>>> > >> >> >> I think a good place to start is simply trying to get the
>>> > >> >> >> values into the offerings, and adjusting any database schemas
>>> > >> >> >> necessary to accommodate that. Once the values are in the
>>> > >> >> >> offerings, then it can be up to the various storage pool
>>> > >> >> >> types to implement or not.
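Marcus's suggestion (the values live in the offering; each storage pool type
decides whether to honor them) can be sketched like this. The field and
method names are invented for illustration, not CloudStack's actual schema:

```java
import java.util.Optional;

// Invented field names: an offering may or may not carry IOPS limits.
class DiskOffering {
    final Optional<Long> minIops;
    final Optional<Long> maxIops;

    DiskOffering(Long min, Long max) {
        this.minIops = Optional.ofNullable(min);
        this.maxIops = Optional.ofNullable(max);
    }
}

public class OfferingDemo {
    // A pool that supports QoS reads the values; one that doesn't (or an
    // offering that never set them) just falls back to best-effort.
    public static String describe(DiskOffering o, boolean poolSupportsQos) {
        if (!poolSupportsQos || !o.maxIops.isPresent()) {
            return "best-effort";
        }
        return "min=" + o.minIops.orElse(0L) + ",max=" + o.maxIops.get();
    }

    public static void main(String[] args) {
        System.out.println(describe(new DiskOffering(500L, 2000L), true));
        System.out.println(describe(new DiskOffering(null, null), true));
    }
}
```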
>>> > >> >> >>
>>> > >> >> >> On Thu, Jan 31, 2013 at 4:42 PM, Mike Tutkowski
>>> > >> >> >> <mike.tutkowski@solidfire.com> wrote:
>>> > >> >> >> > Cool...thanks, Marcus.
>>> > >> >> >> >
>>> > >> >> >> > So, how do you recommend I go about this?  Although I've
>>> > >> >> >> > got recent CS code on my machine and I've built and run it,
>>> > >> >> >> > I've not yet made any changes.  Do you know of any
>>> > >> >> >> > documentation I could look at to learn the process involved
>>> > >> >> >> > in making CS changes?
>>> > >> >> >> >
>>> > >> >> >> >
>>> > >> >> >> > On Thu, Jan 31, 2013 at 4:36 PM, Marcus Sorensen
>>> > >> >> >> > <shadowsor@gmail.com> wrote:
>>> > >> >> >> >
>>> > >> >> >> >> Yes, it would need to be a part of the compute offering
>>> > >> >> >> >> separately, along with the CPU/RAM and network limits.
>>> > >> >> >> >> Then theoretically they could provision an OS drive with
>>> > >> >> >> >> relatively slow limits, and a database volume with higher
>>> > >> >> >> >> limits (and a higher price tag or something).
>>> > >> >> >> >>
>>> > >> >> >> >> On Thu, Jan 31, 2013 at 4:33 PM, Mike Tutkowski
>>> > >> >> >> >> <mike.tutkowski@solidfire.com> wrote:
>>> > >> >> >> >> > Thanks for the info, Marcus!
>>> > >> >> >> >> >
>>> > >> >> >> >> > So, you are thinking that when the user creates a new
>>> > >> >> >> >> > Disk Offering, he or she would be given the option of
>>> > >> >> >> >> > specifying Max and Min IOPS?  That makes sense when I
>>> > >> >> >> >> > think of Data Disks, but how does that figure into the
>>> > >> >> >> >> > kind of storage a VM Instance runs off of?  I thought
>>> > >> >> >> >> > the way that works today is by specifying a Storage Tag
>>> > >> >> >> >> > in the Compute Offering.
>>> > >> >> >> >> >
>>> > >> >> >> >> > Thanks!
>>> > >> >> >> >> >
>>> > >> >> >> >> >
>>> > >> >> >> >> > On Thu, Jan 31, 2013 at 4:25 PM, Marcus Sorensen
>>> > >> >> >> >> > <shadowsor@gmail.com> wrote:
>>> > >> >> >> >> >
>>> > >> >> >> >> >> So, this is what Edison's storage refactor is designed
>>> > >> >> >> >> >> to accomplish. Instead of the storage working the way
>>> > >> >> >> >> >> it currently does, creating a volume for a VM would
>>> > >> >> >> >> >> consist of the cloudstack server (or volume service, as
>>> > >> >> >> >> >> he has created) talking to your solidfire appliance,
>>> > >> >> >> >> >> creating a new LUN, and using that. Now instead of a
>>> > >> >> >> >> >> giant pool/LUN that each VM shares, each VM has its own
>>> > >> >> >> >> >> LUN that is provisioned on the fly by cloudstack.
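The per-VM-LUN flow Marcus describes can be sketched as follows. SanClient
here is a made-up stand-in for the SAN's API, not SolidFire's real client:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

// Stand-in for the SAN appliance's API: each createLun() call provisions
// a fresh LUN instead of carving space out of one big shared pool.
class SanClient {
    private final AtomicInteger nextLun = new AtomicInteger(0);
    private final Map<String, Integer> lunsByVolume = new HashMap<>();

    int createLun(String volumeName) {
        int lun = nextLun.getAndIncrement();
        lunsByVolume.put(volumeName, lun);
        return lun;
    }
}

public class PerVolumeLunDemo {
    public static boolean demo() {
        SanClient san = new SanClient();
        // Two VM volumes -> two distinct LUNs; no noisy neighbors on a
        // shared LUN because each volume is isolated at the SAN.
        return san.createLun("vm1-root") != san.createLun("vm2-root");
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints "true"
    }
}
```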
>>> > >> >> >> >> >>
>>> > >> >> >> >> >> It sounds like maybe this will make it into 4.1 (I
>>> > >> >> >> >> >> have to go through my email today, but it sounded
>>> > >> >> >> >> >> close).
>>> > >> >> >> >> >>
>>> > >> >> >> >> >> Either way, it would be a good idea to add this into
>>> > >> >> >> >> >> the disk offering: a basic IO and throughput limit.
>>> > >> >> >> >> >> Then whether you implement it through cgroups on the
>>> > >> >> >> >> >> Linux server, or at the SAN level, or through some
>>> > >> >> >> >> >> other means on VMware or Xen, the values are there to
>>> > >> >> >> >> >> use.
>>> > >> >> >> >> >>
>>> > >> >> >> >> >> On Thu, Jan 31, 2013 at 4:19 PM, Mike Tutkowski
>>> > >> >> >> >> >> <mike.tutkowski@solidfire.com> wrote:
>>> > >> >> >> >> >> > Hi everyone,
>>> > >> >> >> >> >> >
>>> > >> >> >> >> >> > A while back, I had sent out a question regarding
>>> > >> >> >> >> >> > storage quality of service.  A few of you chimed in
>>> > >> >> >> >> >> > with some good ideas.
>>> > >> >> >> >> >> >
>>> > >> >> >> >> >> > Now that I have a little more experience with
>>> > >> >> >> >> >> > CloudStack (these past couple weeks, I've been able
>>> > >> >> >> >> >> > to get a real CS system up and running, create an
>>> > >> >> >> >> >> > iSCSI target, and make use of it from XenServer), I
>>> > >> >> >> >> >> > would like to pose my question again, but in a more
>>> > >> >> >> >> >> > refined way.
>>> > >> >> >> >> >> >
>>> > >> >> >> >> >> > A little background: I work for a data-storage
>>> > >> >> >> >> >> > company in Boulder, CO called SolidFire
>>> > >> >> >> >> >> > (http://solidfire.com).  We build a highly
>>> > >> >> >> >> >> > fault-tolerant, clustered SAN technology consisting
>>> > >> >> >> >> >> > exclusively of SSDs.  One of our main features is
>>> > >> >> >> >> >> > hard quality of service (QoS).  You may have heard of
>>> > >> >> >> >> >> > QoS before.  In our case, we refer to it as hard QoS
>>> > >> >> >> >> >> > because the end user has the ability to specify, on a
>>> > >> >> >> >> >> > volume-by-volume basis, what the maximum and minimum
>>> > >> >> >> >> >> > IOPS for a given volume should be.  In other words,
>>> > >> >> >> >> >> > we do not have the user assign relative high, medium,
>>> > >> >> >> >> >> > and low priorities to volumes (the way you might do
>>> > >> >> >> >> >> > with thread priorities), but rather hard IOPS limits.
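The hard-QoS model described above (a min/max IOPS pair per volume rather
than a relative priority) can be sketched as follows; the class, field
names, and admission check are illustrative, not SolidFire's actual API:

```java
// Illustrative per-volume QoS spec: hard limits, not relative priorities.
public class VolumeQos {
    final long minIops;
    final long maxIops;

    public VolumeQos(long minIops, long maxIops) {
        if (minIops < 0 || maxIops < minIops) {
            throw new IllegalArgumentException("require 0 <= minIops <= maxIops");
        }
        this.minIops = minIops;
        this.maxIops = maxIops;
    }

    // Hypothetical admission check: the SAN accepts the volume only if it
    // can still reserve the guaranteed minimum; the maximum is what caps
    // a "noisy neighbor".
    public boolean fitsOn(long remainingGuaranteedIops) {
        return remainingGuaranteedIops >= minIops;
    }

    public static void main(String[] args) {
        VolumeQos qos = new VolumeQos(500, 2000);
        System.out.println(qos.fitsOn(1000)); // prints "true"
    }
}
```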
>>> > >> >> >> >> >> >
>>> > >> >> >> >> >> > With this in mind, I would like to know how you
>>> > >> >> >> >> >> > would recommend I go about enabling CloudStack to
>>> > >> >> >> >> >> > support this feature.
>>> > >> >> >> >> >> >
>>> > >> >> >> >> >> > In my previous e-mail discussion, people suggested
>>> > >> >> >> >> >> > using the Storage Tag field.  This is a good idea,
>>> > >> >> >> >> >> > but does not fully satisfy my requirements.
>>> > >> >> >> >> >> >
>>> > >> >> >> >> >> > For example, if I created two large SolidFire
>>> > >> >> >> >> >> > volumes (by the way, one SolidFire volume equals one
>>> > >> >> >> >> >> > LUN), I could create two Primary Storage types to map
>>> > >> >> >> >> >> > onto them.  One Primary Storage type could have the
>>> > >> >> >> >> >> > tag "high_perf" and the other the tag "normal_perf".
>>> > >> >> >> >> >> >
>>> > >> >> >> >> >> > I could then create Compute Offerings and Disk
>>> > >> >> >> >> >> > Offerings that referenced one Storage Tag or the
>>> > >> >> >> >> >> > other.
>>> > >> >> >> >> >> >
>>> > >> >> >> >> >> > This would guarantee that a VM Instance or Data Disk
>>> > >> >> >> >> >> > would run from one SolidFire volume or the other.
>>> > >> >> >> >> >> >
>>> > >> >> >> >> >> > The problem is that one SolidFire volume could be
>>> > >> >> >> >> >> > servicing multiple VM Instances and/or Data Disks.
>>> > >> >> >> >> >> > This may not seem like a problem, but it is, because
>>> > >> >> >> >> >> > in such a configuration our SAN can no longer
>>> > >> >> >> >> >> > guarantee IOPS on a VM-by-VM basis (or a
>>> > >> >> >> >> >> > data-disk-by-data-disk basis).  This is called the
>>> > >> >> >> >> >> > Noisy Neighbor problem.  If, for example, one VM
>>> > >> >> >> >> >> > Instance starts getting "greedy," it can degrade the
>>> > >> >> >> >> >> > performance of the other VM Instances (or Data Disks)
>>> > >> >> >> >> >> > that share that SolidFire volume.
>>> > >> >> >> >> >> >
>>> > >> >> >> >> >> > Ideally, we would like to have a single VM Instance
>>> > >> >> >> >> >> > run on a single SolidFire volume and a single Data
>>> > >> >> >> >> >> > Disk be associated with a single SolidFire volume.
>>> > >> >> >> >> >> >
>>> > >> >> >> >> >> > How might I go about accomplishing this design goal?
>>> > >> >> >> >> >> >
>>> > >> >> >> >> >> > Thanks!!
>>> > >> >> >> >> >> >
>>> > >> >> >> >> >>
>>> > >> >> >> >> >
>>> > >> >> >> >> >
>>> > >> >> >> >> >
>>> > >> >> >> >>
>>> > >> >> >> >
>>> > >> >> >> >
>>> > >> >> >> >
>>> > >> >> >>
>>> > >> >> >
>>> > >> >> >
>>> > >> >> >
>>> > >> >>
>>> > >> >
>>> > >> >
>>> > >> >
>>> > >>
>>> > >
>>> > >
>>> > >
>>> > >
>>> >
>>> >
>>> >
>>>
>>
>>
>>
>>
>
>
>
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*
