cloudstack-dev mailing list archives

From Marcus Sorensen <shadow...@gmail.com>
Subject Re: Managed storage with KVM
Date Sat, 14 Sep 2013 02:10:02 GMT
If you wire up the block device directly, you won't have to require users to
manage a clustered filesystem or LVM, or any of the work of maintaining those
clustered services and quorum management; CloudStack will ensure only one VM
is using the disk at any given time, and where. It would be cake compared to
dealing with mounts and filesystems.
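
For reference, a rough sketch (placeholder names, not the actual agent code) of
hot-attaching a host-side block device for a LUN to a running guest through the
libvirt Java bindings:

    import org.libvirt.Connect;
    import org.libvirt.Domain;
    import org.libvirt.LibvirtException;

    public class AttachLunSketch {
        public static void main(String[] args) throws LibvirtException {
            // Connect to the local qemu/KVM hypervisor.
            Connect conn = new Connect("qemu:///system");

            // "i-2-10-VM" and /dev/sdx are placeholders for the guest name
            // and the host block device produced by the iSCSI login.
            Domain dom = conn.domainLookupByName("i-2-10-VM");

            String diskXml =
                "<disk type='block' device='disk'>" +
                "  <driver name='qemu' type='raw' cache='none'/>" +
                "  <source dev='/dev/sdx'/>" +
                "  <target dev='vdb' bus='virtio'/>" +
                "</disk>";

            // Hot-plug the raw block device into the running VM; no host-side
            // filesystem or image file is involved.
            dom.attachDevice(diskXml);

            conn.close();
        }
    }
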
On Sep 13, 2013 8:07 PM, "Mike Tutkowski" <mike.tutkowski@solidfire.com>
wrote:

> Yeah, I think it would be nice if it supported Live Migration.
>
> That's kind of why I was initially leaning toward SharedMountPoint and
> just doing the work ahead of time to get things in a state where the
> current code could run with it.
>
>
> On Fri, Sep 13, 2013 at 8:00 PM, Marcus Sorensen <shadowsor@gmail.com> wrote:
>
>> No, as that would rely on a virtualized network/iSCSI initiator inside the
>> VM, which also sucks. I mean attach /dev/sdx (your LUN on the hypervisor) as
>> a disk to the VM, rather than attaching some image file that resides on a
>> filesystem, mounted on the host, living on a target.
>>
>> Actually, if you plan on the storage supporting live migration, I think
>> this is the only way. You can't put a filesystem on it and mount it in two
>> places to facilitate migration unless it's a clustered filesystem, in which
>> case you're back to shared mount point.
>>
>> As far as I'm aware, the XenServer SR style is basically LVM with a
>> Xen-specific cluster management, a custom CLVM. They don't use a filesystem
>> either.
>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski" <mike.tutkowski@solidfire.com>
>> wrote:
>>
>>> When you say, "wire up the lun directly to the vm," do you mean
>>> circumventing the hypervisor? I didn't think we could do that in CS.
>>> OpenStack, on the other hand, always circumvents the hypervisor, as far as
>>> I know.
>>>
>>>
>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen <shadowsor@gmail.com> wrote:
>>>
>>>> Better to wire up the lun directly to the vm unless there is a good
>>>> reason not to.
>>>>  On Sep 13, 2013 7:40 PM, "Marcus Sorensen" <shadowsor@gmail.com>
>>>> wrote:
>>>>
>>>>> You could do that, but as mentioned I think it's a mistake to go to the
>>>>> trouble of creating a 1:1 mapping of CS volumes to LUNs and then putting a
>>>>> filesystem on it, mounting it, and then putting a QCOW2 or even RAW disk
>>>>> image on that filesystem. You'll lose a lot of IOPS along the way, and have
>>>>> more overhead with the filesystem and its journaling, etc.
>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski" <
>>>>> mike.tutkowski@solidfire.com> wrote:
>>>>>
>>>>>> Ah, OK, I didn't know that was such new ground in KVM with CS.
>>>>>>
>>>>>> So, the way people use our SAN with KVM and CS today is by selecting
>>>>>> SharedMountPoint and specifying the location of the share.
>>>>>>
>>>>>> They can set up their share using Open iSCSI by discovering their
>>>>>> iSCSI target, logging in to it, then mounting it somewhere on their
>>>>>> file system.
>>>>>>
>>>>>> Would it make sense for me to just do that discovery, logging in, and
>>>>>> mounting behind the scenes for them and letting the current code
>>>>>> manage the rest as it currently does?
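>>>>>>
>>>>>> If it helps picture it, here is a rough sketch of that behind-the-scenes
>>>>>> step, shelling out to the standard iscsiadm utility from Java; the portal
>>>>>> address and IQN are made-up examples, and the real agent code would of
>>>>>> course need proper error handling:
>>>>>>
>>>>>>     import java.io.IOException;
>>>>>>
>>>>>>     public class IscsiLoginSketch {
>>>>>>         // Hypothetical helper: run a command and wait for it to finish.
>>>>>>         static void run(String... cmd) throws IOException, InterruptedException {
>>>>>>             new ProcessBuilder(cmd).inheritIO().start().waitFor();
>>>>>>         }
>>>>>>
>>>>>>         public static void main(String[] args) throws Exception {
>>>>>>             String portal = "192.168.1.10:3260";          // SAN portal (example)
>>>>>>             String iqn = "iqn.2013-09.com.example:vol-1"; // target IQN (example)
>>>>>>
>>>>>>             // Discover the target and log in so the LUN appears as a
>>>>>>             // local block device under /dev/disk/by-path/.
>>>>>>             run("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal);
>>>>>>             run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--login");
>>>>>>
>>>>>>             // From here the device could be mounted (SharedMountPoint style)
>>>>>>             // or handed straight to the VM as a block device.
>>>>>>         }
>>>>>>     }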
>>>>>>
>>>>>>
>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen <shadowsor@gmail.com
>>>>>> > wrote:
>>>>>>
>>>>>>> Oh, hypervisor snapshots are a bit different. I need to catch up on
>>>>>>> the work done in KVM, but this is basically just disk snapshots +
>>>>>>> memory dump. I still think disk snapshots would preferably be handled
>>>>>>> by the SAN, and then memory dumps can go to secondary storage or
>>>>>>> something else. This is relatively new ground with CS and KVM, so we
>>>>>>> will want to see how others are planning theirs.
>>>>>>>  On Sep 13, 2013 7:20 PM, "Marcus Sorensen" <shadowsor@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Let me back up and say I don't think you'd use a VDI-style image on
>>>>>>>> an iSCSI LUN. I think you'd want to treat it as a RAW format.
>>>>>>>> Otherwise you're putting a filesystem on your LUN, mounting it,
>>>>>>>> creating a QCOW2 disk image, and that seems unnecessary and a
>>>>>>>> performance killer.
>>>>>>>>
>>>>>>>> So probably attaching the raw iSCSI LUN as a disk to the VM, and
>>>>>>>> handling snapshots on the SAN side via the storage plugin, is best.
>>>>>>>> My impression from the storage plugin refactor was that there was a
>>>>>>>> snapshot service that would allow the SAN to handle snapshots.
>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen" <shadowsor@gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> Ideally volume snapshots can be handled by the SAN back end, if
>>>>>>>>> the SAN supports it. The CloudStack mgmt server could call your
>>>>>>>>> plugin for volume snapshots and it would be hypervisor agnostic.
>>>>>>>>> As far as space, that would depend on how your SAN handles it.
>>>>>>>>> With ours, we carve out LUNs from a pool, and the snapshot space
>>>>>>>>> comes from the pool and is independent of the LUN size the host
>>>>>>>>> sees.
>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski" <
>>>>>>>>> mike.tutkowski@solidfire.com> wrote:
>>>>>>>>>
>>>>>>>>>> Hey Marcus,
>>>>>>>>>>
>>>>>>>>>> I wonder if the iSCSI storage pool type for libvirt won't work
>>>>>>>>>> when you take into consideration hypervisor snapshots?
>>>>>>>>>>
>>>>>>>>>> On XenServer, when you take a hypervisor snapshot, the VDI for
>>>>>>>>>> the snapshot is placed on the same storage repository as the
>>>>>>>>>> volume is on.
>>>>>>>>>>
>>>>>>>>>> Same idea for VMware, I believe.
>>>>>>>>>>
>>>>>>>>>> So, what would happen in my case (let's say for XenServer and
>>>>>>>>>> VMware for 4.3 because I don't support hypervisor snapshots in
>>>>>>>>>> 4.2) is I'd make an iSCSI target that is larger than what the
>>>>>>>>>> user requested for the CloudStack volume (which is fine because
>>>>>>>>>> our SAN thinly provisions volumes, so the space is not actually
>>>>>>>>>> used unless it needs to be). The CloudStack volume would be the
>>>>>>>>>> only "object" on the SAN volume until a hypervisor snapshot is
>>>>>>>>>> taken. This snapshot would also reside on the SAN volume.
>>>>>>>>>>
>>>>>>>>>> If this is also how KVM behaves and there is no creation of LUNs
>>>>>>>>>> within an iSCSI target from libvirt (which, even if there were
>>>>>>>>>> support for this, our SAN currently only allows one LUN per iSCSI
>>>>>>>>>> target), then I don't see how using this model will work.
>>>>>>>>>>
>>>>>>>>>> Perhaps I will have to go enhance the current way this works with
>>>>>>>>>> DIR?
>>>>>>>>>>
>>>>>>>>>> What do you think?
>>>>>>>>>>
>>>>>>>>>> Thanks
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski <
>>>>>>>>>> mike.tutkowski@solidfire.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> That appears to be the way it's used for iSCSI access today.
>>>>>>>>>>>
>>>>>>>>>>> I suppose I could go that route, too, but I might as well
>>>>>>>>>>> leverage what libvirt has for iSCSI instead.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus Sorensen <
>>>>>>>>>>> shadowsor@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> To your question about SharedMountPoint, I believe it just acts
>>>>>>>>>>>> like a 'DIR' storage type or something similar to that. The
>>>>>>>>>>>> end-user is responsible for mounting a file system that all KVM
>>>>>>>>>>>> hosts can access, and CloudStack is oblivious to what is
>>>>>>>>>>>> providing the storage. It could be NFS, or OCFS2, or some other
>>>>>>>>>>>> clustered filesystem; CloudStack just knows that the provided
>>>>>>>>>>>> directory path has VM images.
>>>>>>>>>>>>
>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus Sorensen <
>>>>>>>>>>>> shadowsor@gmail.com> wrote:
>>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and iSCSI all at the same time.
>>>>>>>>>>>> > Multiples, in fact.
>>>>>>>>>>>> >
>>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike Tutkowski
>>>>>>>>>>>> > <mike.tutkowski@solidfire.com> wrote:
>>>>>>>>>>>> >> Looks like you can have multiple storage pools:
>>>>>>>>>>>> >>
>>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list
>>>>>>>>>>>> >> Name                 State      Autostart
>>>>>>>>>>>> >> -----------------------------------------
>>>>>>>>>>>> >> default              active     yes
>>>>>>>>>>>> >> iSCSI                active     no
>>>>>>>>>>>> >>
>>>>>>>>>>>> >>
>>>>>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike Tutkowski
>>>>>>>>>>>> >> <mike.tutkowski@solidfire.com> wrote:
>>>>>>>>>>>> >>>
>>>>>>>>>>>> >>> Reading through the docs you pointed out.
>>>>>>>>>>>> >>>
>>>>>>>>>>>> >>> I see what you're saying now.
>>>>>>>>>>>> >>>
>>>>>>>>>>>> >>> You can create an iSCSI (libvirt) storage pool based on an
>>>>>>>>>>>> >>> iSCSI target.
>>>>>>>>>>>> >>>
>>>>>>>>>>>> >>> In my case, the iSCSI target would only have one LUN, so
>>>>>>>>>>>> >>> there would only be one iSCSI (libvirt) storage volume in
>>>>>>>>>>>> >>> the (libvirt) storage pool.
>>>>>>>>>>>> >>>
>>>>>>>>>>>> >>> As you say, my plug-in creates and destroys iSCSI
>>>>>>>>>>>> >>> targets/LUNs on the SolidFire SAN, so it is not a problem
>>>>>>>>>>>> >>> that libvirt does not support creating/deleting iSCSI
>>>>>>>>>>>> >>> targets/LUNs.
>>>>>>>>>>>> >>>
>>>>>>>>>>>> >>> It looks like I need to test this a bit to see if libvirt
>>>>>>>>>>>> >>> supports multiple iSCSI storage pools (as you mentioned,
>>>>>>>>>>>> >>> since each one of its storage pools would map to one of my
>>>>>>>>>>>> >>> iSCSI targets/LUNs).
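>>>>>>>>>>>> >>>
>>>>>>>>>>>> >>> (For the record, my understanding is that the libvirt pool
>>>>>>>>>>>> >>> definition for one of these targets would look roughly like
>>>>>>>>>>>> >>> the XML below; the pool name, portal address, and IQN are
>>>>>>>>>>>> >>> placeholders I'd fill in from the plug-in:)
>>>>>>>>>>>> >>>
>>>>>>>>>>>> >>>     <pool type='iscsi'>
>>>>>>>>>>>> >>>       <name>cloudstack-vol-1</name>
>>>>>>>>>>>> >>>       <source>
>>>>>>>>>>>> >>>         <host name='192.168.1.10'/>
>>>>>>>>>>>> >>>         <device path='iqn.2013-09.com.example:vol-1'/>
>>>>>>>>>>>> >>>       </source>
>>>>>>>>>>>> >>>       <target>
>>>>>>>>>>>> >>>         <path>/dev/disk/by-path</path>
>>>>>>>>>>>> >>>       </target>
>>>>>>>>>>>> >>>     </pool>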
>>>>>>>>>>>> >>>
>>>>>>>>>>>> >>>
>>>>>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM, Mike Tutkowski
>>>>>>>>>>>> >>> <mike.tutkowski@solidfire.com> wrote:
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>> LibvirtStoragePoolDef has this type:
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>     public enum poolType {
>>>>>>>>>>>> >>>>         ISCSI("iscsi"), NETFS("netfs"), LOGICAL("logical"),
>>>>>>>>>>>> >>>>         DIR("dir"), RBD("rbd");
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>         String _poolType;
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>         poolType(String poolType) {
>>>>>>>>>>>> >>>>             _poolType = poolType;
>>>>>>>>>>>> >>>>         }
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>         @Override
>>>>>>>>>>>> >>>>         public String toString() {
>>>>>>>>>>>> >>>>             return _poolType;
>>>>>>>>>>>> >>>>         }
>>>>>>>>>>>> >>>>     }
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>> It doesn't look like the iSCSI type is currently being
>>>>>>>>>>>> >>>> used, but I'm understanding more what you were getting at.
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>> Can you tell me for today (say, 4.2), when someone selects
>>>>>>>>>>>> >>>> the SharedMountPoint option and uses it with iSCSI, is that
>>>>>>>>>>>> >>>> the "netfs" option above or is that just for NFS?
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>> Thanks!
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM, Marcus Sorensen <
>>>>>>>>>>>> >>>> shadowsor@gmail.com> wrote:
>>>>>>>>>>>> >>>>>
>>>>>>>>>>>> >>>>> Take a look at this:
>>>>>>>>>>>> >>>>>
>>>>>>>>>>>> >>>>> http://libvirt.org/storage.html#StorageBackendISCSI
>>>>>>>>>>>> >>>>>
>>>>>>>>>>>> >>>>> "Volumes must be pre-allocated on the iSCSI server, and
>>>>>>>>>>>> >>>>> cannot be created via the libvirt APIs.", which I believe
>>>>>>>>>>>> >>>>> your plugin will take care of. Libvirt just does the work
>>>>>>>>>>>> >>>>> of logging in and hooking it up to the VM (I believe the
>>>>>>>>>>>> >>>>> Xen API does that work in the Xen stuff).
>>>>>>>>>>>> >>>>>
>>>>>>>>>>>> >>>>> What I'm not sure about is whether this provides a 1:1
>>>>>>>>>>>> >>>>> mapping, or if it just allows you to register one iSCSI
>>>>>>>>>>>> >>>>> device as a pool. You may need to write some test code or
>>>>>>>>>>>> >>>>> read up a bit more about this. Let us know. If it doesn't,
>>>>>>>>>>>> >>>>> you may just have to write your own storage adaptor rather
>>>>>>>>>>>> >>>>> than changing LibvirtStorageAdaptor.java. We can cross
>>>>>>>>>>>> >>>>> that bridge when we get there.
>>>>>>>>>>>> >>>>>
>>>>>>>>>>>> >>>>> As far as interfacing with libvirt, see the Java bindings
>>>>>>>>>>>> >>>>> doc: http://libvirt.org/sources/java/javadoc/  Normally,
>>>>>>>>>>>> >>>>> you'll see a connection object be made, then calls made to
>>>>>>>>>>>> >>>>> that 'conn' object. You can look at the
>>>>>>>>>>>> >>>>> LibvirtStorageAdaptor to see how that is done for other
>>>>>>>>>>>> >>>>> pool types, and maybe write some test Java code to see if
>>>>>>>>>>>> >>>>> you can interface with libvirt and register iSCSI storage
>>>>>>>>>>>> >>>>> pools before you get started.
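>>>>>>>>>>>> >>>>>
>>>>>>>>>>>> >>>>> Something along these lines might work as a first test with
>>>>>>>>>>>> >>>>> the Java bindings (the pool XML, host, and IQN are
>>>>>>>>>>>> >>>>> placeholders; I haven't run this against a real SAN):
>>>>>>>>>>>> >>>>>
>>>>>>>>>>>> >>>>>     import org.libvirt.Connect;
>>>>>>>>>>>> >>>>>     import org.libvirt.LibvirtException;
>>>>>>>>>>>> >>>>>     import org.libvirt.StoragePool;
>>>>>>>>>>>> >>>>>     import org.libvirt.StorageVol;
>>>>>>>>>>>> >>>>>
>>>>>>>>>>>> >>>>>     public class IscsiPoolTest {
>>>>>>>>>>>> >>>>>         public static void main(String[] args) throws LibvirtException {
>>>>>>>>>>>> >>>>>             Connect conn = new Connect("qemu:///system");
>>>>>>>>>>>> >>>>>
>>>>>>>>>>>> >>>>>             // Placeholder pool XML; the host and IQN would come
>>>>>>>>>>>> >>>>>             // from the storage plug-in for each CloudStack volume.
>>>>>>>>>>>> >>>>>             String poolXml =
>>>>>>>>>>>> >>>>>                 "<pool type='iscsi'>" +
>>>>>>>>>>>> >>>>>                 "  <name>sf-vol-1</name>" +
>>>>>>>>>>>> >>>>>                 "  <source>" +
>>>>>>>>>>>> >>>>>                 "    <host name='192.168.1.10'/>" +
>>>>>>>>>>>> >>>>>                 "    <device path='iqn.2013-09.com.example:vol-1'/>" +
>>>>>>>>>>>> >>>>>                 "  </source>" +
>>>>>>>>>>>> >>>>>                 "  <target><path>/dev/disk/by-path</path></target>" +
>>>>>>>>>>>> >>>>>                 "</pool>";
>>>>>>>>>>>> >>>>>
>>>>>>>>>>>> >>>>>             // Create a transient pool; libvirt performs the iSCSI login.
>>>>>>>>>>>> >>>>>             StoragePool pool = conn.storagePoolCreateXML(poolXml, 0);
>>>>>>>>>>>> >>>>>             pool.refresh(0);
>>>>>>>>>>>> >>>>>
>>>>>>>>>>>> >>>>>             // Each LUN on the target shows up as a volume in the pool.
>>>>>>>>>>>> >>>>>             for (String volName : pool.listVolumes()) {
>>>>>>>>>>>> >>>>>                 StorageVol vol = pool.storageVolLookupByName(volName);
>>>>>>>>>>>> >>>>>                 System.out.println(volName + " -> " + vol.getPath());
>>>>>>>>>>>> >>>>>             }
>>>>>>>>>>>> >>>>>         }
>>>>>>>>>>>> >>>>>     }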
>>>>>>>>>>>> >>>>>
>>>>>>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM, Mike Tutkowski
>>>>>>>>>>>> >>>>> <mike.tutkowski@solidfire.com> wrote:
>>>>>>>>>>>> >>>>> > So, Marcus, I need to investigate libvirt more, but you
>>>>>>>>>>>> >>>>> > figure it supports connecting to/disconnecting from
>>>>>>>>>>>> >>>>> > iSCSI targets, right?
>>>>>>>>>>>> >>>>> >
>>>>>>>>>>>> >>>>> >
>>>>>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM, Mike Tutkowski
>>>>>>>>>>>> >>>>> > <mike.tutkowski@solidfire.com> wrote:
>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>> >>>>> >> OK, thanks, Marcus
>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>> >>>>> >> I am currently looking through some of the classes you
>>>>>>>>>>>> >>>>> >> pointed out last week or so.
>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26 PM, Marcus Sorensen
>>>>>>>>>>>> >>>>> >> <shadowsor@gmail.com> wrote:
>>>>>>>>>>>> >>>>> >>>
>>>>>>>>>>>> >>>>> >>> Yes, my guess is that you will need the iSCSI
>>>>>>>>>>>> >>>>> >>> initiator utilities installed. There should be
>>>>>>>>>>>> >>>>> >>> standard packages for any distro. Then you'd call an
>>>>>>>>>>>> >>>>> >>> agent storage adaptor to do the initiator login. See
>>>>>>>>>>>> >>>>> >>> the info I sent previously about
>>>>>>>>>>>> >>>>> >>> LibvirtStorageAdaptor.java and the libvirt iSCSI
>>>>>>>>>>>> >>>>> >>> storage type to see if that fits your need.
>>>>>>>>>>>> >>>>> >>>
>>>>>>>>>>>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike Tutkowski"
>>>>>>>>>>>> >>>>> >>> <mike.tutkowski@solidfire.com> wrote:
>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>> >>>>> >>>> Hi,
>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>> >>>>> >>>> As you may remember, during the 4.2 release I
>>>>>>>>>>>> >>>>> >>>> developed a SolidFire (storage) plug-in for
>>>>>>>>>>>> >>>>> >>>> CloudStack.
>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>> >>>>> >>>> This plug-in was invoked by the storage framework at
>>>>>>>>>>>> >>>>> >>>> the necessary times so that I could dynamically
>>>>>>>>>>>> >>>>> >>>> create and delete volumes on the SolidFire SAN (among
>>>>>>>>>>>> >>>>> >>>> other activities).
>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>> >>>>> >>>> This is necessary so I can establish a 1:1 mapping
>>>>>>>>>>>> >>>>> >>>> between a CloudStack volume and a SolidFire volume
>>>>>>>>>>>> >>>>> >>>> for QoS.
>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>> >>>>> >>>> In the past, CloudStack always expected the admin to
>>>>>>>>>>>> >>>>> >>>> create large volumes ahead of time and those volumes
>>>>>>>>>>>> >>>>> >>>> would likely house many root and data disks (which is
>>>>>>>>>>>> >>>>> >>>> not QoS friendly).
>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>> >>>>> >>>> To make this 1:1 mapping scheme work, I needed to
>>>>>>>>>>>> >>>>> >>>> modify logic in the XenServer and VMware plug-ins so
>>>>>>>>>>>> >>>>> >>>> they could create/delete storage
>>>>>>>>>>>> >>>>> >>>> repositories/datastores as needed.
>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>> >>>>> >>>> For 4.3 I want to make this happen with KVM.
>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>> >>>>> >>>> I'm coming up to speed with how this might work on
>>>>>>>>>>>> >>>>> >>>> KVM, but I'm still pretty new to KVM.
>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>> >>>>> >>>> Does anyone familiar with KVM know how I will need to
>>>>>>>>>>>> >>>>> >>>> interact with the iSCSI target? For example, will I
>>>>>>>>>>>> >>>>> >>>> have to expect Open iSCSI will be installed on the
>>>>>>>>>>>> >>>>> >>>> KVM host and use it for this to work?
>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>> >>>>> >>>> Thanks for any suggestions,
>>>>>>>>>>>> >>>>> >>>> Mike
>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>> >>>>> >>>> --
>>>>>>>>>>>> >>>>> >>>> Mike Tutkowski
>>>>>>>>>>>> >>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>>> >>>>> >>>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>> >>>>> >>>> o: 303.746.7302
>>>>>>>>>>>> >>>>> >>>> Advancing the way the world uses the cloud™
>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>> >>>>> >> --
>>>>>>>>>>>> >>>>> >> Mike Tutkowski
>>>>>>>>>>>> >>>>> >> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>>> >>>>> >> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>> >>>>> >> o: 303.746.7302
>>>>>>>>>>>> >>>>> >> Advancing the way the world uses the cloud™
>>>>>>>>>>>> >>>>> >
>>>>>>>>>>>> >>>>> >
>>>>>>>>>>>> >>>>> >
>>>>>>>>>>>> >>>>> >
>>>>>>>>>>>> >>>>> > --
>>>>>>>>>>>> >>>>> > Mike Tutkowski
>>>>>>>>>>>> >>>>> > Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>>> >>>>> > e: mike.tutkowski@solidfire.com
>>>>>>>>>>>> >>>>> > o: 303.746.7302
>>>>>>>>>>>> >>>>> > Advancing the way the world uses the cloud™
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>> --
>>>>>>>>>>>> >>>> Mike Tutkowski
>>>>>>>>>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>>> >>>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>> >>>> o: 303.746.7302
>>>>>>>>>>>> >>>> Advancing the way the world uses the cloud™
>>>>>>>>>>>> >>>
>>>>>>>>>>>> >>>
>>>>>>>>>>>> >>>
>>>>>>>>>>>> >>>
>>>>>>>>>>>> >>> --
>>>>>>>>>>>> >>> Mike Tutkowski
>>>>>>>>>>>> >>> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>>> >>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>> >>> o: 303.746.7302
>>>>>>>>>>>> >>> Advancing the way the world uses the cloud™
>>>>>>>>>>>> >>
>>>>>>>>>>>> >>
>>>>>>>>>>>> >>
>>>>>>>>>>>> >>
>>>>>>>>>>>> >> --
>>>>>>>>>>>> >> Mike Tutkowski
>>>>>>>>>>>> >> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>>> >> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>> >> o: 303.746.7302
>>>>>>>>>>>> >> Advancing the way the world uses the cloud™
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> *Mike Tutkowski*
>>>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>>>>>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>> o: 303.746.7302
>>>>>>>>>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>>>>>>>>>> *™*
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> *Mike Tutkowski*
>>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>>>>> e: mike.tutkowski@solidfire.com
>>>>>>>>>> o: 303.746.7302
>>>>>>>>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>>>>>>>>> *™*
>>>>>>>>>>
>>>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> *Mike Tutkowski*
>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>> e: mike.tutkowski@solidfire.com
>>>>>> o: 303.746.7302
>>>>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>>>>> *™*
>>>>>>
>>>>>
>>>
>>>
>>> --
>>> *Mike Tutkowski*
>>> *Senior CloudStack Developer, SolidFire Inc.*
>>> e: mike.tutkowski@solidfire.com
>>> o: 303.746.7302
>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>> *™*
>>>
>>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
> *™*
>
