cloudstack-dev mailing list archives

From Marcus Sorensen <shadow...@gmail.com>
Subject Re: add connect method on KVM storage code
Date Sat, 28 Sep 2013 06:16:03 GMT
On Sat, Sep 28, 2013 at 12:09 AM, Mike Tutkowski
<mike.tutkowski@solidfire.com> wrote:
> Yes, you are correct that our SAN requires CHAP or for the host and volume
> to be in a Volume Access Group.
>
> If CHAP is provided, it is used. If it fails for whatever reason, the
> Volume Access Group is NOT leveraged to grant permissions.
>
> If CHAP is not provided, the host and volume must be in a common Volume
> Access Group.
>
> As you say, CHAP only grants access...it will not save us from a rogue
> compute node that has access to the volume.

Ok, I figured the CHAP credentials were predetermined, but it sounds
like they're per-lun.

>
> If we can't get grantAccess and revokeAccess working in the storage
> plug-ins for 4.3, I can just use CHAP. I'd prefer to have no SAN calls
> coming in from the compute nodes.

Yes, we only do it because we have to, but now with the new framework
I'm trying to refactor things as they add support for doing things via
the plugin.

>
> Also, I don't think we can grant all hosts in the KVM cluster access to the
> volume ahead of time because the volume won't exist until the first time we
> try to attach it. We have to grant access programmatically.
>
> Plus you might want to disconnect the volume from a KVM host in one cluster
> and connect it up in another KVM cluster (zone-wide storage).

I was thinking more along the lines of some group that you'd add your
hosts to, regardless of which cluster they're in, as you deploy them.
But it sounds like those would need to be done dynamically? i.e. I
can't create a group, add a host, and then later add a lun and have
access to it?

>
>
> On Sat, Sep 28, 2013 at 12:03 AM, Marcus Sorensen <shadowsor@gmail.com>wrote:
>
>> On Fri, Sep 27, 2013 at 11:53 PM, Mike Tutkowski
>> <mike.tutkowski@solidfire.com> wrote:
>> > At first glance, the code you have on Review Board looks good (the
>> > attach/detach logic being very similar to what I currently have on
>> GitHub).
>> > Let me take a closer look tomorrow, though.
>> >
>> > Can you tell me how I'd pass in CHAP credentials?
>>
>> We can do a generic Map<String, String> connectDetails so that anyone
>> can pass what they need to their StorageAdaptor implementation,
>> without everyone having to add their own signature to
>> connectPhysicalDisk. I'm not sure if that bugs someone or not, I guess
>> it depends on how tightly coupled people prefer their code.
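>>
>> A rough sketch of the shape I have in mind (the method names and exact
>> parameter list here are assumptions, not final code):
>>
>>     public interface StorageAdaptor {
>>         // ... existing methods ...
>>
>>         // Prepare a disk on this host before a VM uses it. The details
>>         // map carries adaptor-specific data (e.g. CHAP credentials for
>>         // an iSCSI adaptor); adaptors that need no setup just return true.
>>         boolean connectPhysicalDisk(String volPath, KVMStoragePool pool,
>>                 Map<String, String> details);
>>
>>         boolean disconnectPhysicalDisk(String volPath, KVMStoragePool pool);
>>     }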
>>
>> >
>> > There's kind of an asterisk by CHAP credentials, though.
>> >
>> > If we have some way of informing a storage plug-in when we want to let a
>> > host access a volume and when we want to disallow that host access from
>> the
>> > volume, then I don't need to use CHAP credentials (I can use what we call
>> > Volume Access Groups on the SAN to say host x can access volume y).
>> >
>> > If I don't have such a mechanism, though, then I need CHAP credentials to
>> > access the volume in question.
>>
>> Yeah, I'm not sure what CHAP buys you. It doesn't disconnect other
>> hosts. I assume you mean that your SAN either requires CHAP or that
>> the host be a member of the Volume Access Group. Could you just
>> require the host be added to the group during hypervisor setup?  The
>> only other alternative I can think of is what we're doing, where we
>> make an API call to the SAN in connectPhysicalDisk, until they
>> implement the grantAccess method.
>>
>> >
>> >
>> > On Fri, Sep 27, 2013 at 11:39 PM, Mike Tutkowski <
>> > mike.tutkowski@solidfire.com> wrote:
>> >
>> >> That was the only reason I was passing size into the connect method.
>> >>
>> >> I agree, though, that it shouldn't have to be passed in there.
>> >>
>> >>
>> >> On Fri, Sep 27, 2013 at 11:38 PM, Mike Tutkowski <
>> >> mike.tutkowski@solidfire.com> wrote:
>> >>
>> >>> One problem I need to solve, though, is how to implement the
>> >>> getPhysicalVolume method.
>> >>>
>> >>> In your case, you just ask Libvirt for the StoragePool by name. This
>> >>> gives you details about the volume like its size.
>> >>>
>> >>> In my case, I don't have a decent way (right now) to produce a
>> >>> KVMPhysicalDisk that has a size.
>> >>>
>> >>> Where do you recommend I get this information? Do you know if iscsiadm
>> >>> has the ability to query the LUN for its size? Maybe I can look it up
>> >>> by-path on disk with another tool to get its size?
>> >>>
>> >>> I can fill in values like its path (by-path name), its name, and its
>> >>> format (PhysicalDiskFormat.RAW), but I don't know how to retrieve the
>> size
>> >>> of the disk.
>> >>>
>> >>> Maybe you have a recommendation here.
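>> >>>
>> >>> (Sketching the by-path idea, with a made-up helper name, purely as an
>> >>> illustration: resolve the /dev/disk/by-path symlink and read the size
>> >>> from sysfs, which reports it in 512-byte sectors.
>> >>>
>> >>>     long getDeviceSizeBytes(String byPathName) throws IOException {
>> >>>         // e.g. /dev/disk/by-path/ip-1.2.3.4:3260-iscsi-<iqn>-lun-0
>> >>>         Path dev = Paths.get(byPathName).toRealPath(); // -> /dev/sdX
>> >>>         String sectors = new String(Files.readAllBytes(Paths.get(
>> >>>                 "/sys/class/block/" + dev.getFileName() + "/size"))).trim();
>> >>>         return Long.parseLong(sectors) * 512L;
>> >>>     }
>> >>>
>> >>> "blockdev --getsize64" against the resolved device gives the same
>> >>> number from the command line.)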
>> >>>
>> >>> Thanks!
>> >>>
>> >>>
>> >>> On Fri, Sep 27, 2013 at 11:22 PM, Mike Tutkowski <
>> >>> mike.tutkowski@solidfire.com> wrote:
>> >>>
>> >>>> Woops...didn't see that you had already sent out the Review Board
>> update.
>> >>>>
>> >>>>
>> >>>> On Fri, Sep 27, 2013 at 11:21 PM, Mike Tutkowski <
>> >>>> mike.tutkowski@solidfire.com> wrote:
>> >>>>
>> >>>>> OK, let's not worry about that other stuff (at least now).
>> >>>>>
>> >>>>> When your code is available, send out a message and I can take a
>> look.
>> >>>>>
>> >>>>> Talk to you later!
>> >>>>>
>> >>>>>
>> >>>>> On Fri, Sep 27, 2013 at 8:53 PM, Marcus Sorensen <
>> shadowsor@gmail.com>wrote:
>> >>>>>
>> >>>>>> On a side note, I'm actually a proponent of the storage plugin only
>> >>>>>> talking
>> >>>>>> to its storage API. Best to be hypervisor agnostic. It's just that in the
>> >>>>>> case of
>> >>>>>> the default storage plugin, the storage API is the CloudStack agent.
>> >>>>>>
>> >>>>>> But enough about that, we can spin that off into another thread if
>> we
>> >>>>>> need
>> >>>>>> to. This is about adding connect calls to the StorageAdaptor
>> interface
>> >>>>>> so
>> >>>>>> custom storage pool types can prepare disks prior to use. I need to
>> >>>>>> know
>> >>>>>> how it will look going forward so I can continue my work.
>> >>>>>> On Sep 27, 2013 8:00 PM, "Marcus Sorensen" <shadowsor@gmail.com>
>> >>>>>> wrote:
>> >>>>>>
>> >>>>>> > I actually think it's irrelevant to this discussion, since it's
>> really
>> >>>>>> only
>> >>>>>> > a contention point for the default KVM plugin. I did post this
>> with
>> >>>>>> the
>> >>>>>> > intention of getting feedback from anyone who cares about the KVM
>> >>>>>> side and
>> >>>>>> > how it handles things.
>> >>>>>> >
>> >>>>>> > What you describe with Xen sounds identical to what we are doing
>> >>>>>> here.
>> >>>>>> > That process is what I've been trying to explain. Your/my plugin
>> >>>>>> creates a
>> >>>>>> > volume on the SAN the first time it's used, and then CS calls
>> attach
>> >>>>>> volume
>> >>>>>> > in KVMStorageProcessor or XenStorageProcessor to make the lun
>> >>>>>> available to
>> >>>>>> > the hypervisor. On KVM this calls your storage adaptor to run
>> >>>>>> iscsiadm
>> >>>>>> > commands (connectPhysicalDisk, the subject of this thread). I
>> think
>> >>>>>> we were
>> >>>>>> > good on all of it up to the point where you mentioned
>> >>>>>> createPhysicalDisk,
>> >>>>>> > which isn't necessary to implement since your plugin did it way
>> back
>> >>>>>> when.
>> >>>>>> > On Sep 27, 2013 7:51 PM, "Mike Tutkowski" <
>> >>>>>> mike.tutkowski@solidfire.com>
>> >>>>>> > wrote:
>> >>>>>> >
>> >>>>>> >> I think we should get John Burwell and Edison Su involved in these
>> >>>>>> >> discussions.
>> >>>>>> >>
>> >>>>>> >> John was a proponent for not having storage plug-ins talk to
>> >>>>>> hypervisors
>> >>>>>> >> and Edison felt it was fine.
>> >>>>>> >>
>> >>>>>> >> Until we resolve that design issue, we might just be talking in
>> >>>>>> circles
>> >>>>>> >> here. :)
>> >>>>>> >>
>> >>>>>> >>
>> >>>>>> >> On Fri, Sep 27, 2013 at 7:29 PM, Marcus Sorensen <
>> >>>>>> shadowsor@gmail.com
>> >>>>>> >> >wrote:
>> >>>>>> >>
>> >>>>>> >> > The plugin itself doesn't talk to the hypervisor.
>> >>>>>> >> > On Sep 27, 2013 7:28 PM, "Mike Tutkowski" <
>> >>>>>> mike.tutkowski@solidfire.com
>> >>>>>> >> >
>> >>>>>> >> > wrote:
>> >>>>>> >> >
>> >>>>>> >> > > Yeah, we should bring John Burwell into this conversation. He
>> >>>>>> had a
>> >>>>>> >> > strong
>> >>>>>> >> > > opposition to storage plug-ins talking to hypervisors.
>> Perhaps
>> >>>>>> he was
>> >>>>>> >> > just
>> >>>>>> >> > > referring to shared storage, though (not local disks).
>> >>>>>> >> > >
>> >>>>>> >> > > Anyways, prior to 4.2 for XenServer and VMware, you had to
>> >>>>>> preallocate
>> >>>>>> >> > your
>> >>>>>> >> > > SR/datastore (there was a minor exception for XenServer).
>> >>>>>> CloudStack
>> >>>>>> >> > wasn't
>> >>>>>> >> > > involved in that process at all. It was a manual admin task.
>> >>>>>> >> > >
>> >>>>>> >> > > The storage plug-in framework didn't really solve that
>> problem,
>> >>>>>> >> though.
>> >>>>>> >> > All
>> >>>>>> >> > > it did was allow you to create your storage volume and make
>> it
>> >>>>>> >> available
>> >>>>>> >> > to
>> >>>>>> >> > > CloudStack. There was no way to get XenServer or VMware to
>> >>>>>> actually
>> >>>>>> >> use
>> >>>>>> >> > it
>> >>>>>> >> > > because they didn't want an IQN or whatever, they wanted an
>> >>>>>> >> SR/datastore.
>> >>>>>> >> > >
>> >>>>>> >> > > That being the case, I had to write code to detect this
>> >>>>>> "managed"
>> >>>>>> >> > situation
>> >>>>>> >> > > in XenServer and VMware (in the attach/detach methods).
>> >>>>>> >> > >
>> >>>>>> >> > > For example, for XenServer I had to first create an SR and a
>> VDI
>> >>>>>> >> (which
>> >>>>>> >> > the
>> >>>>>> >> > > SolidFire plug-in probably should have no knowledge of). Then
>> >>>>>> the
>> >>>>>> >> attach
>> >>>>>> >> > > logic would work.
>> >>>>>> >> > >
>> >>>>>> >> > > After a detach, I had to have XenServer "forget" the SR.
>> >>>>>> >> > >
>> >>>>>> >> > > What I was thinking is that the attach/detach methods aren't
>> >>>>>> really
>> >>>>>> >> the
>> >>>>>> >> > > right place for this special logic.
>> >>>>>> >> > >
>> >>>>>> >> > > We could have VolumeApiServiceImpl know about the special
>> needs
>> >>>>>> of
>> >>>>>> >> > > "managed" storage.
>> >>>>>> >> > >
>> >>>>>> >> > > If you want to do an attach of a volume and you're managed
>> >>>>>> storage,
>> >>>>>> >> > > VolumeApiServiceImpl could first tell the hypervisor to
>> >>>>>> create the
>> >>>>>> >> > > necessary objects (XenServer = SR/VDI, VMware =
>> datastore/VMDK,
>> >>>>>> KVM =
>> >>>>>> >> > just
>> >>>>>> >> > > establish the connection to the iSCSI target).
>> >>>>>> >> > >
>> >>>>>> >> > > It's true that there are commands like CreateObjectCommand in
>> >>>>>> XenServer
>> >>>>>> >> and
>> >>>>>> >> > > VMware now, but they are only used when the storage is
>> >>>>>> preallocated
>> >>>>>> >> (not
>> >>>>>> >> > > when the storage is managed by CloudStack).
>> >>>>>> >> > >
>> >>>>>> >> > > In the end, we kind of have two different storage models in
>> >>>>>> >> CloudStack:
>> >>>>>> >> > > Non-managed (preallocated storage by admin) and managed
>> (plug-in
>> >>>>>> >> > > dynamically creates storage on a storage system).
>> >>>>>> >> > >
>> >>>>>> >> > >
>> >>>>>> >> > > On Fri, Sep 27, 2013 at 6:46 PM, Marcus Sorensen <
>> >>>>>> shadowsor@gmail.com
>> >>>>>> >> > > >wrote:
>> >>>>>> >> > >
>> >>>>>> >> > > > On Fri, Sep 27, 2013 at 6:03 PM, Marcus Sorensen <
>> >>>>>> >> shadowsor@gmail.com>
>> >>>>>> >> > > > wrote:
>> >>>>>> >> > > > > On Fri, Sep 27, 2013 at 5:16 PM, Mike Tutkowski
>> >>>>>> >> > > > > <mike.tutkowski@solidfire.com> wrote:
>> >>>>>> >> > > > >> createAsync is just for creating the SAN (or whatever
>> >>>>>> storage)
>> >>>>>> >> > volume.
>> >>>>>> >> > > > >> deleteAsync is the reverse.
>> >>>>>> >> > > > >
>> >>>>>> >> > > > > Exactly. It used to be that the hypervisor created the
>> >>>>>> >> disk/lun/file
>> >>>>>> >> > > > > volume via createPhysicalDisk. Now it's done on the SAN
>> if
>> >>>>>> the
>> >>>>>> >> plugin
>> >>>>>> >> > > > > supports it. Ideally, only calls that are required for
>> >>>>>> utilizing
>> >>>>>> >> the
>> >>>>>> >> > > > > storage (or perhaps things like copy to NFS, where a
>> server
>> >>>>>> need
>> >>>>>> >> be
>> >>>>>> >> > > > > involved if your SAN can't do it directly) will go to the
>> >>>>>> >> hypervisor,
>> >>>>>> >> > > > > for external plugins.
>> >>>>>> >> > > > >
>> >>>>>> >> > > > > So mgmt server creates the LUN on the SAN, then calls the
>> >>>>>> >> hypervisor
>> >>>>>> >> > > > > to attach it to the host, so that a VM can make use of
>> it.
>> >>>>>> The
>> >>>>>> >> > > > > createAsync is hypervisor agnostic, it just creates a
>> LUN,
>> >>>>>> and
>> >>>>>> >> then
>> >>>>>> >> > > > > when you go to start up a VM or attach it to one it calls
>> >>>>>> the
>> >>>>>> >> > > > > hypervisor-specific code to make it available.
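>> >>>>>> >> > > > >
>> >>>>>> >> > > > > For reference, the plugin-side half of that is just the
>> >>>>>> >> > > > > driver callback (a sketch against the 4.2 interfaces;
>> >>>>>> >> > > > > treat the signature as approximate):
>> >>>>>> >> > > > >
>> >>>>>> >> > > > >     // mgmt server side, in the plugin's driver
>> >>>>>> >> > > > >     public void createAsync(DataStore store, DataObject vol,
>> >>>>>> >> > > > >             AsyncCompletionCallback<CreateCmdResult> callback) {
>> >>>>>> >> > > > >         // SAN API call to create the LUN goes here; no
>> >>>>>> >> > > > >         // hypervisor involvement at all
>> >>>>>> >> > > > >     }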
>> >>>>>> >> > > >
>> >>>>>> >> > > > I think this is the same with the other hypervisors as
>> well.
>> >>>>>> For
>> >>>>>> >> > > > example with Xen, you create the LUN via your plugin, and
>> >>>>>> then a
>> >>>>>> >> call
>> >>>>>> >> > > > is made to Xen to register it as an SR. It's basically the
>> >>>>>> same,
>> >>>>>> >> only
>> >>>>>> >> > > > you're now coding the 'register' part yourself (and it's
>> >>>>>> ephemeral,
>> >>>>>> >> > > > it's re-registered whenever it's used). You basically take
>> >>>>>> over some
>> >>>>>> >> > > > of the API coding that already exists on the other
>> platforms,
>> >>>>>> which
>> >>>>>> >> > > > means you can do whatever you want instead of just what's
>> >>>>>> supported,
>> >>>>>> >> > > > but you have to do the work as well.
>> >>>>>> >> > > >
>> >>>>>> >> > > > >
>> >>>>>> >> > > > >>
>> >>>>>> >> > > > >> Technically even the default plug-in should not call
>> into
>> >>>>>> the
>> >>>>>> >> > > hypervisor
>> >>>>>> >> > > > >> layer.
>> >>>>>> >> > > > >
>> >>>>>> >> > > > > There's no way you can create a local storage file to use
>> >>>>>> as a
>> >>>>>> >> > volume,
>> >>>>>> >> > > > > or CLVM volume, or other types of libvirt storage without
>> >>>>>> calling
>> >>>>>> >> a
>> >>>>>> >> > > > > service that runs on the hypervisor. Those things exist
>> >>>>>> only on
>> >>>>>> >> the
>> >>>>>> >> > > > > hypervisor, and are controlled by the hypervisor. For
>> NFS,
>> >>>>>> you
>> >>>>>> >> could
>> >>>>>> >> > > > > create a separate API for your NFS server that creates
>> qcow2
>> >>>>>> >> images
>> >>>>>> >> > on
>> >>>>>> >> > > > > your NFS primary, I suppose.
>> >>>>>> >> > > > >
>> >>>>>> >> > > > > One of the really nice things about KVM is that we can do
>> >>>>>> >> whatever a
>> >>>>>> >> > > > > Linux box is capable of, it was one of the draws we had
>> to
>> >>>>>> it. We
>> >>>>>> >> > > > > wouldn't be able to do the storage we do with Xen or
>> VMWare.
>> >>>>>> >> > > > >
>> >>>>>> >> > > > >>
>> >>>>>> >> > > > >> The storage layer should probably not be aware of the
>> >>>>>> hypervisor
>> >>>>>> >> > > layer.
>> >>>>>> >> > > > >
>> >>>>>> >> > > > > That's fine, but there's no reason why a storage plugin
>> >>>>>> can't
>> >>>>>> >> talk to
>> >>>>>> >> > > > > the agent that happens to be running on the hypervisor
>> for
>> >>>>>> >> > > > > implementation, if that's what the plugin intends.  I
>> don't
>> >>>>>> see
>> >>>>>> >> the
>> >>>>>> >> > > > > distinction between utilizing the KVM agent as your
>> storage
>> >>>>>> API or
>> >>>>>> >> > > > > talking to a custom SAN API, or some other concocted
>> >>>>>> service.
>> >>>>>> >> That's
>> >>>>>> >> > > > > sort of the point of the plugin, people can do whatever
>> >>>>>> they want.
>> >>>>>> >> > > > >
>> >>>>>> >> > > > >>
>> >>>>>> >> > > > >> On Fri, Sep 27, 2013 at 5:14 PM, Mike Tutkowski <
>> >>>>>> >> > > > >> mike.tutkowski@solidfire.com> wrote:
>> >>>>>> >> > > > >>
>> >>>>>> >> > > > >>> Well, from what I saw with XenServer and VMware, that
>> >>>>>> hypervisor
>> >>>>>> >> > > > logic's
>> >>>>>> >> > > > >>> attachVolume command also assumed a VDI/VMDK was
>> created
>> >>>>>> in
>> >>>>>> >> > advance.
>> >>>>>> >> > > > >>>
>> >>>>>> >> > > > >>> I had to put logic in those attachVolume methods to
>> >>>>>> create the
>> >>>>>> >> > SR/VDI
>> >>>>>> >> > > > or
>> >>>>>> >> > > > >>> datastore/VMDK.
>> >>>>>> >> > > > >>>
>> >>>>>> >> > > > >>> However, thinking back on it, it might have made more
>> >>>>>> sense for
>> >>>>>> >> the
>> >>>>>> >> > > > >>> storage framework to detect if the storage in question
>> was
>> >>>>>> >> managed
>> >>>>>> >> > > and
>> >>>>>> >> > > > -
>> >>>>>> >> > > > >>> before calling attach - call create.
>> >>>>>> >> > > > >>>
>> >>>>>> >> > > > >>> If that logic was in place, I could have left
>> >>>>>> >> attach/detachVolume
>> >>>>>> >> > > alone
>> >>>>>> >> > > > >>> and implemented create and delete in the hypervisor
>> code
>> >>>>>> to
>> >>>>>> >> create
>> >>>>>> >> > my
>> >>>>>> >> > > > >>> SR/VDI or datastore/VMDK.
>> >>>>>> >> > > > >>>
>> >>>>>> >> > > > >>> That makes sense to me because it is CloudStack-managed
>> >>>>>> storage
>> >>>>>> >> (so
>> >>>>>> >> > > > >>> CloudStack is calling into the hypervisor to create and
>> >>>>>> delete
>> >>>>>> >> > these
>> >>>>>> >> > > > types
>> >>>>>> >> > > > >>> of objects...it's managing them).
>> >>>>>> >> > > > >>>
>> >>>>>> >> > > > >>>
>> >>>>>> >> > > > >>> On Fri, Sep 27, 2013 at 5:00 PM, Marcus Sorensen <
>> >>>>>> >> > > shadowsor@gmail.com
>> >>>>>> >> > > > >wrote:
>> >>>>>> >> > > > >>>
>> >>>>>> >> > > > >>>> On Fri, Sep 27, 2013 at 4:22 PM, Mike Tutkowski
>> >>>>>> >> > > > >>>> <mike.tutkowski@solidfire.com> wrote:
>> >>>>>> >> > > > >>>> > Sure, sounds good - let me know when it's up on
>> Review
>> >>>>>> Board
>> >>>>>> >> > and I
>> >>>>>> >> > > > can
>> >>>>>> >> > > > >>>> take
>> >>>>>> >> > > > >>>> > a look.
>> >>>>>> >> > > > >>>> >
>> >>>>>> >> > > > >>>> > I made most of the changes you and I talked about:
>> >>>>>> >> > > > >>>> >
>> >>>>>> >> > > > >>>> >
>> >>>>>> >> > > > >>>>
>> >>>>>> >> > > >
>> >>>>>> >> > >
>> >>>>>> >> >
>> >>>>>> >>
>> >>>>>>
>> https://github.com/mike-tutkowski/incubator-cloudstack/commit/eb9b2edfc9062f9ca7961fecd5379b180ca3aed1
>> >>>>>> >> > > > >>>> >
>> >>>>>> >> > > > >>>> > I have a new idea, though, that I think will
>> simplify
>> >>>>>> this:
>> >>>>>> >> > > > >>>> >
>> >>>>>> >> > > > >>>> > The main "weirdness" right now is that when attachVolume
>> >>>>>> >> > > > >>>> > is called, the original logic assumed createVolume had
>> >>>>>> >> > > > >>>> > been called already.
>> >>>>>> >> > > > >>>> >
>> >>>>>> >> > > > >>>> > In my case, this doesn't apply, so we had to place
>> >>>>>> extra
>> >>>>>> >> logic
>> >>>>>> >> > in
>> >>>>>> >> > > > >>>> > attachVolume to essentially "create" a volume. We
>> >>>>>> decided to
>> >>>>>> >> > make
>> >>>>>> >> > > a
>> >>>>>> >> > > > >>>> connect
>> >>>>>> >> > > > >>>> > method, which establishes the iSCSI connection and
>> >>>>>> creates a
>> >>>>>> >> > > > >>>> > KVMPhysicalDisk that can be returned when
>> attachVolume
>> >>>>>> calls
>> >>>>>> >> > > > >>>> > getPhysicalDisk.
>> >>>>>> >> > > > >>>> >
>> >>>>>> >> > > > >>>> > The "normal" place where you'd create a
>> >>>>>> KVMPhysicalDisk,
>> >>>>>> >> > however,
>> >>>>>> >> > > > would
>> >>>>>> >> > > > >>>> be
>> >>>>>> >> > > > >>>> > in the createVolume method. Since I don't currently
>> >>>>>> "create"
>> >>>>>> >> a
>> >>>>>> >> > > > volume,
>> >>>>>> >> > > > >>>> my
>> >>>>>> >> > > > >>>> > only chance to note the size of the volume is in the
>> >>>>>> connect
>> >>>>>> >> > > method.
>> >>>>>> >> > > > >>>>
>> >>>>>> >> > > > >>>> I don't think createVolume applies to plugins. My
>> >>>>>> >> > > > >>>> impression was that
>> >>>>>> >> > > > >>>> createAsync is called on the mgmt server side. If
>> >>>>>> createVolume
>> >>>>>> >> IS
>> >>>>>> >> > > > >>>> being called, that's weird. The idea here is that mgmt
>> >>>>>> server
>> >>>>>> >> > > creates
>> >>>>>> >> > > > >>>> the LUN, and then on the KVM side attach is called (or
>> >>>>>> >> > StartCommand
>> >>>>>> >> > > if
>> >>>>>> >> > > > >>>> it's a root volume and vm is being started), and it
>> >>>>>> assumes
>> >>>>>> >> that
>> >>>>>> >> > the
>> >>>>>> >> > > > >>>> LUN is already there, so we call connectPhysicalDisk
>> to
>> >>>>>> attach
>> >>>>>> >> it
>> >>>>>> >> > to
>> >>>>>> >> > > > >>>> the KVM host.
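>> >>>>>> >> > > > >>>>
>> >>>>>> >> > > > >>>> In rough code terms the attach path would be (a sketch,
>> >>>>>> >> > > > >>>> not the literal patch; a details map for adaptor-specific
>> >>>>>> >> > > > >>>> data is one option for the last argument):
>> >>>>>> >> > > > >>>>
>> >>>>>> >> > > > >>>>     _storagePoolMgr.connectPhysicalDisk(store.getPoolType(),
>> >>>>>> >> > > > >>>>             store.getUuid(), vol.getPath(), details);
>> >>>>>> >> > > > >>>>     KVMPhysicalDisk disk = _storagePoolMgr.getPhysicalDisk(
>> >>>>>> >> > > > >>>>             store.getPoolType(), store.getUuid(), vol.getPath());
>> >>>>>> >> > > > >>>>     // then build the libvirt <disk> XML from disk.getPath()
>> >>>>>> >> > > > >>>>     // and attach it to the domain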
>> >>>>>> >> > > > >>>>
>> >>>>>> >> > > > >>>> >
>> >>>>>> >> > > > >>>> > It ends up being kind of weird to pass a size into
>> the
>> >>>>>> >> connect
>> >>>>>> >> > > > method,
>> >>>>>> >> > > > >>>> as
>> >>>>>> >> > > > >>>> > you've noted.
>> >>>>>> >> > > > >>>> >
>> >>>>>> >> > > > >>>> > What if we essentially left the attachVolume and
>> >>>>>> detachVolume
>> >>>>>> >> > > > methods
>> >>>>>> >> > > > >>>> alone
>> >>>>>> >> > > > >>>> > (as in how they were before my changes)? We could
>> have
>> >>>>>> >> > > > >>>> > VolumeApiServiceImpl, before sending the
>> AttachCommand,
>> >>>>>> >> detect
>> >>>>>> >> > if
>> >>>>>> >> > > > the
>> >>>>>> >> > > > >>>> > storage in question is managed. If it is,
>> >>>>>> >> VolumeApiServiceImpl
>> >>>>>> >> > > could
>> >>>>>> >> > > > >>>> send a
>> >>>>>> >> > > > >>>> > CreateObjectCommand. I would then implement
>> >>>>>> >> createPhysicalDisk
>> >>>>>> >> > to
>> >>>>>> >> > > > >>>> connect
>> >>>>>> >> > > > >>>> > my iSCSI target and create a KVMPhysicalDisk.
>> >>>>>> >> > > > >>>> >
>> >>>>>> >> > > > >>>> > On the reverse side, VolumeApiServiceImpl, after
>> >>>>>> sending the
>> >>>>>> >> > > > >>>> DetachCommand,
>> >>>>>> >> > > > >>>> > could detect if the storage in question is managed.
>> If
>> >>>>>> it is,
>> >>>>>> >> > > > >>>> > VolumeApiServiceImpl could send a DeleteCommand. I
>> >>>>>> would then
>> >>>>>> >> > > > implement
>> >>>>>> >> > > > >>>> the
>> >>>>>> >> > > > >>>> > deletePhysicalDisk method to disconnect my iSCSI
>> >>>>>> session.
>> >>>>>> >> > > > >>>> >
>> >>>>>> >> > > > >>>> > What do you think?
>> >>>>>> >> > > > >>>>
>> >>>>>> >> > > > >>>> Maybe I'm just confused, but I thought the create and
>> >>>>>> delete on
>> >>>>>> >> > the
>> >>>>>> >> > > > >>>> KVM side only apply to the default storage plugin,
>> which
>> >>>>>> has to
>> >>>>>> >> > pass
>> >>>>>> >> > > > >>>> everything on the agent. I thought the
>> creation/deletion
>> >>>>>> of
>> >>>>>> >> LUNs
>> >>>>>> >> > > > >>>> occurred via createAsync and deleteAsync in your
>> plugin.
>> >>>>>> >> > > > >>>>
>> >>>>>> >> > > > >>>> >
>> >>>>>> >> > > > >>>> >
>> >>>>>> >> > > > >>>> > On Fri, Sep 27, 2013 at 3:21 PM, Marcus Sorensen <
>> >>>>>> >> > > > shadowsor@gmail.com
>> >>>>>> >> > > > >>>> >wrote:
>> >>>>>> >> > > > >>>> >
>> >>>>>> >> > > > >>>> >> Ok, I've got our plugin working against 4.2. Tested
>> >>>>>> start
>> >>>>>> >> vm,
>> >>>>>> >> > > stop
>> >>>>>> >> > > > vm,
>> >>>>>> >> > > > >>>> >> migrate vm, attach volume, detach volume.  Other
>> >>>>>> functions
>> >>>>>> >> that
>> >>>>>> >> > > we
>> >>>>>> >> > > > >>>> >> already had in our StorageAdaptor implementation,
>> >>>>>> such as
>> >>>>>> >> > copying
>> >>>>>> >> > > > >>>> >> templates to primary storage, just worked without
>> any
>> >>>>>> >> > > modification
>> >>>>>> >> > > > >>>> >> from our 4.1 version.
>> >>>>>> >> > > > >>>> >>
>> >>>>>> >> > > > >>>> >> I'll post a patch to reviewboard with the
>> applicable
>> >>>>>> >> changes. I
>> >>>>>> >> > > was
>> >>>>>> >> > > > >>>> >> correct that attachVolume and dettachVolume only
>> >>>>>> apply to
>> >>>>>> >> > > > >>>> >> adding/removing disks from running VMs, so there
>> were
>> >>>>>> some
>> >>>>>> >> more
>> >>>>>> >> > > > >>>> >> changes to LibvirtComputingResource. I don't intend
>> >>>>>> for this
>> >>>>>> >> > > patch
>> >>>>>> >> > > > to
>> >>>>>> >> > > > >>>> >> be applied (for one it's against 4.2), but I want
>> you
>> >>>>>> to
>> >>>>>> >> take a
>> >>>>>> >> > > > look
>> >>>>>> >> > > > >>>> >> and see if it will work for you as well. If it
>> does,
>> >>>>>> then
>> >>>>>> >> it's
>> >>>>>> >> > a
>> >>>>>> >> > > > good
>> >>>>>> >> > > > >>>> >> indicator that it should work for other plugins
>> too,
>> >>>>>> or if
>> >>>>>> >> it
>> >>>>>> >> > > > needs to
>> >>>>>> >> > > > >>>> >> be tweaked we can work it out.
>> >>>>>> >> > > > >>>> >>
>> >>>>>> >> > > > >>>> >> The gist is that we needed a connectPhysicalDisk
>> call
>> >>>>>> that
>> >>>>>> >> can
>> >>>>>> >> > > > accept
>> >>>>>> >> > > > >>>> >> the pool/volume info (which we've discussed), but
>> >>>>>> also a
>> >>>>>> >> > version
>> >>>>>> >> > > of
>> >>>>>> >> > > > >>>> >> connectPhysicalDisk that can take a vm
>> specification
>> >>>>>> >> > > > >>>> >> (VirtualMachineTO) and figure out which pools/disks
>> >>>>>> are
>> >>>>>> >> needed
>> >>>>>> >> > > and
>> >>>>>> >> > > > >>>> >> attach them. I largely copied the code we had
>> custom
>> >>>>>> >> inserted
>> >>>>>> >> > > into
>> >>>>>> >> > > > our
>> >>>>>> >> > > > >>>> >> 4.1 and put it into KVMStoragePoolManager so that
>> it
>> >>>>>> will be
>> >>>>>> >> > > > adaptor
>> >>>>>> >> > > > >>>> >> agnostic.
>> >>>>>> >> > > > >>>> >>
>> >>>>>> >> > > > >>>> >> Same goes for disconnectPhysicalDisk.
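>> >>>>>> >> > > > >>>> >>
>> >>>>>> >> > > > >>>> >> The VM-wide variant is roughly this shape (signatures
>> >>>>>> >> > > > >>>> >> paraphrased from the patch, so treat as approximate):
>> >>>>>> >> > > > >>>> >>
>> >>>>>> >> > > > >>>> >>     // KVMStoragePoolManager
>> >>>>>> >> > > > >>>> >>     public boolean connectPhysicalDisksViaVmSpec(
>> >>>>>> >> > > > >>>> >>             VirtualMachineTO vmSpec) {
>> >>>>>> >> > > > >>>> >>         boolean ok = true;
>> >>>>>> >> > > > >>>> >>         for (DiskTO disk : vmSpec.getDisks()) {
>> >>>>>> >> > > > >>>> >>             if (disk.getType() == Volume.Type.ISO) {
>> >>>>>> >> > > > >>>> >>                 continue; // nothing to connect for ISOs
>> >>>>>> >> > > > >>>> >>             }
>> >>>>>> >> > > > >>>> >>             // find the pool and adaptor for this disk,
>> >>>>>> >> > > > >>>> >>             // then hand the connect off to the adaptor
>> >>>>>> >> > > > >>>> >>             PrimaryDataStoreTO store = (PrimaryDataStoreTO)
>> >>>>>> >> > > > >>>> >>                     disk.getData().getDataStore();
>> >>>>>> >> > > > >>>> >>             KVMStoragePool pool = getStoragePool(
>> >>>>>> >> > > > >>>> >>                     store.getPoolType(), store.getUuid());
>> >>>>>> >> > > > >>>> >>             StorageAdaptor adaptor =
>> >>>>>> >> > > > >>>> >>                     getStorageAdaptor(pool.getType());
>> >>>>>> >> > > > >>>> >>             ok &= adaptor.connectPhysicalDisk(
>> >>>>>> >> > > > >>>> >>                     disk.getData().getPath(), pool, null);
>> >>>>>> >> > > > >>>> >>         }
>> >>>>>> >> > > > >>>> >>         return ok;
>> >>>>>> >> > > > >>>> >>     }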
>> >>>>>> >> > > > >>>> >>
>> >>>>>> >> > > > >>>> >> We also needed to pass the VirtualMachineTO in a
>> few
>> >>>>>> other
>> >>>>>> >> > places
>> >>>>>> >> > > > like
>> >>>>>> >> > > > >>>> >> MigrateCommand and StopCommand, it's otherwise hard
>> >>>>>> to know
>> >>>>>> >> > which
>> >>>>>> >> > > > >>>> >> storage adaptors we need to deal with when all we
>> >>>>>> have is a
>> >>>>>> >> vm
>> >>>>>> >> > > > name or
>> >>>>>> >> > > > >>>> >> something like that.
>> >>>>>> >> > > > >>>> >>
>> >>>>>> >> > > > >>>> >> On Fri, Sep 27, 2013 at 12:56 AM, Mike Tutkowski
>> >>>>>> >> > > > >>>> >> <mike.tutkowski@solidfire.com> wrote:
>> >>>>>> >> > > > >>>> >> > Thanks for the clarification on how that works.
>> >>>>>> >> > > > >>>> >> >
>> >>>>>> >> > > > >>>> >> > Also, yeah, I think CHAP only grants you access
>> to a
>> >>>>>> >> volume.
>> >>>>>> >> > If
>> >>>>>> >> > > > >>>> multiple
>> >>>>>> >> > > > >>>> >> > hosts are using the CHAP credentials for a single
>> >>>>>> volume,
>> >>>>>> >> > it's
>> >>>>>> >> > > > up to
>> >>>>>> >> > > > >>>> >> those
>> >>>>>> >> > > > >>>> >> > hosts to make sure they don't step on each
>> other's
>> >>>>>> toes
>> >>>>>> >> (and
>> >>>>>> >> > > > this is
>> >>>>>> >> > > > >>>> - to
>> >>>>>> >> > > > >>>> >> > my understanding - how it works with XenServer
>> and
>> >>>>>> >> VMware).
>> >>>>>> >> > > > >>>> >> >
>> >>>>>> >> > > > >>>> >> >
>> >>>>>> >> > > > >>>> >> > On Fri, Sep 27, 2013 at 12:45 AM, Marcus
>> Sorensen <
>> >>>>>> >> > > > >>>> shadowsor@gmail.com
>> >>>>>> >> > > > >>>> >> >wrote:
>> >>>>>> >> > > > >>>> >> >
>> >>>>>> >> > > > >>>> >> >> On Fri, Sep 27, 2013 at 12:21 AM, Mike Tutkowski
>> >>>>>> >> > > > >>>> >> >> <mike.tutkowski@solidfire.com> wrote:
>> >>>>>> >> > > > >>>> >> >> > Maybe I should seek a little clarification as
>> to
>> >>>>>> how
>> >>>>>> >> live
>> >>>>>> >> > > > >>>> migration
>> >>>>>> >> > > > >>>> >> works
>> >>>>>> >> > > > >>>> >> >> > in CS with KVM.
>> >>>>>> >> > > > >>>> >> >> >
>> >>>>>> >> > > > >>>> >> >> > Before we do a live migration of VM 1 from
>> Host
>> >>>>>> 1 to
>> >>>>>> >> Host
>> >>>>>> >> > 2,
>> >>>>>> >> > > > do we
>> >>>>>> >> > > > >>>> >> detach
>> >>>>>> >> > > > >>>> >> >> > all disks from VM1?
>> >>>>>> >> > > > >>>> >> >> >
>> >>>>>> >> > > > >>>> >> >> > If so, then we're good to go there.
>> >>>>>> >> > > > >>>> >> >> >
>> >>>>>> >> > > > >>>> >> >> > I'm not as clear with HA.
>> >>>>>> >> > > > >>>> >> >>
>> >>>>>> >> > > > >>>> >> >> During live migration this is what we currently
>> do
>> >>>>>> in our
>> >>>>>> >> > > > modified
>> >>>>>> >> > > > >>>> >> >> 4.1, I'm not sure if the new framework is set up
>> >>>>>> for
>> >>>>>> >> this,
>> >>>>>> >> > but
>> >>>>>> >> > > > it
>> >>>>>> >> > > > >>>> >> >> should be made to do this if not.
>> >>>>>> >> > > > >>>> >> >>
>> >>>>>> >> > > > >>>> >> >> PrepareForMigrationCommand is called on
>> destination
>> >>>>>> >> host. In
>> >>>>>> >> > > > >>>> >> >> PrepareForMigrationCommand we added a few lines
>> to
>> >>>>>> call
>> >>>>>> >> > > > >>>> >> >> connectPhysicalDisk. This host connects the SAN
>> >>>>>> disks to
>> >>>>>> >> > this
>> >>>>>> >> > > > new
>> >>>>>> >> > > > >>>> >> >> host, then creates a paused VM.
>> >>>>>> >> > > > >>>> >> >>
>> >>>>>> >> > > > >>>> >> >> MigrateCommand is called on the source host.
>> This
>> >>>>>> sends
>> >>>>>> >> the
>> >>>>>> >> > > > proper
>> >>>>>> >> > > > >>>> >> >> command to transfer VM memory, then atomically
>> >>>>>> cuts over
>> >>>>>> >> to
>> >>>>>> >> > > the
>> >>>>>> >> > > > >>>> >> >> destination host. During this time, the disks
>> are
>> >>>>>> >> attached
>> >>>>>> >> > on
>> >>>>>> >> > > > both
>> >>>>>> >> > > > >>>> >> >> sides, but the VM is still the only thing that
>> is
>> >>>>>> using
>> >>>>>> >> > them,
>> >>>>>> >> > > > and it
>> >>>>>> >> > > > >>>> >> >> atomically cuts over. There's no caching on the
>> >>>>>> host
>> >>>>>> >> (qemu
>> >>>>>> >> > is
>> >>>>>> >> > > > using
>> >>>>>> >> > > > >>>> >> >> directio), so this is safe.
>> >>>>>> >> > > > >>>> >> >>
>> >>>>>> >> > > > >>>> >> >> After MigrateCommand completes its VM passoff, we
>> >>>>>> >> > > > >>>> >> >> detach the disks before returning.
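>> >>>>>> >> > > > >>>> >> >>
>> >>>>>> >> > > > >>>> >> >> The destination-side hook is small. Going by our 4.1
>> >>>>>> >> > > > >>>> >> >> tree, it is something like this (illustrative, not the
>> >>>>>> >> > > > >>>> >> >> exact diff):
>> >>>>>> >> > > > >>>> >> >>
>> >>>>>> >> > > > >>>> >> >>     // in execute(PrepareForMigrationCommand cmd),
>> >>>>>> >> > > > >>>> >> >>     // before defining the paused VM
>> >>>>>> >> > > > >>>> >> >>     VirtualMachineTO vm = cmd.getVirtualMachine();
>> >>>>>> >> > > > >>>> >> >>     if (!_storagePoolMgr.connectPhysicalDisksViaVmSpec(vm)) {
>> >>>>>> >> > > > >>>> >> >>         return new PrepareForMigrationAnswer(cmd,
>> >>>>>> >> > > > >>>> >> >>                 "failed to connect disks on destination");
>> >>>>>> >> > > > >>>> >> >>     }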
>> >>>>>> >> > > > >>>> >> >>
>> >>>>>> >> > > > >>>> >> >> >
>> >>>>>> >> > > > >>>> >> >> > If VM 1 goes down because Host 1 crashes, is
>> the
>> >>>>>> >> > > attach-volume
>> >>>>>> >> > > > >>>> command
>> >>>>>> >> > > > >>>> >> >> > invoked as many times as need be (depending on
>> >>>>>> how many
>> >>>>>> >> > > > volumes
>> >>>>>> >> > > > >>>> need
>> >>>>>> >> > > > >>>> >> to
>> >>>>>> >> > > > >>>> >> >> be
>> >>>>>> >> > > > >>>> >> >> > attached) when VM 1 is restarted on Host 2?
>> >>>>>> >> > > > >>>> >> >>
>> >>>>>> >> > > > >>>> >> >> From what I can tell, the attachVolume and
>> >>>>>> dettachVolume
>> >>>>>> >> > > seemed
>> >>>>>> >> > > > to
>> >>>>>> >> > > > >>>> >> >> only be for attaching disks to existing, running
>> >>>>>> VMs
>> >>>>>> >> (i.e.
>> >>>>>> >> > > > inserting
>> >>>>>> >> > > > >>>> >> >> new XML into an existing domain definition).
>> >>>>>>  Normally
>> >>>>>> >> when
>> >>>>>> >> > > > >>>> starting a
>> >>>>>> >> > > > >>>> >> >> vm from scratch the vm definition, along with
>> any
>> >>>>>> >> currently
>> >>>>>> >> > > > attached
>> >>>>>> >> > > > >>>> >> >> disks, is passed in to StartCommand (which would
>> >>>>>> also be
>> >>>>>> >> > > called
>> >>>>>> >> > > > >>>> during
>> >>>>>> >> > > > >>>> >> >> HA restart of a VM). In our 4.1 branch we also
>> >>>>>> have a
>> >>>>>> >> call
>> >>>>>> >> > to
>> >>>>>> >> > > > >>>> >> >> connectPhysicalDisk here, where we loop through
>> >>>>>> the disk
>> >>>>>> >> > > > definitions
>> >>>>>> >> > > > >>>> >> >> that were passed.
>> >>>>>> >> > > > >>>> >> >>
>> >>>>>> >> > > > >>>> >> >> Again, I should be able to flesh out the
>> >>>>>> differences in
>> >>>>>> >> 4.2
>> >>>>>> >> > > and
>> >>>>>> >> > > > how
>> >>>>>> >> > > > >>>> to
>> >>>>>> >> > > > >>>> >> >> go about making this suitable for everyone in
>> the
>> >>>>>> coming
>> >>>>>> >> > days,
>> >>>>>> >> > > > so
>> >>>>>> >> > > > >>>> long
>> >>>>>> >> > > > >>>> >> >> as you and anyone else writing plugins agree
>> with
>> >>>>>> the
>> >>>>>> >> > changes.
>> >>>>>> >> > > > >>>> >> >>
>> >>>>>> >> > > > >>>> >> >> These processes would make sure the disks are
>> >>>>>> available
>> >>>>>> >> on
>> >>>>>> >> > the
>> >>>>>> >> > > > hosts
>> >>>>>> >> > > > >>>> >> >> they need to be, but they don't really provide
>> >>>>>> locking or
>> >>>>>> >> > > ensure
>> >>>>>> >> > > > >>>> that
>> >>>>>> >> > > > >>>> >> >> only the necessary hosts can write to or see the
>> >>>>>> disks at
>> >>>>>> >> > any
>> >>>>>> >> > > > given
>> >>>>>> >> > > > >>>> >> >> time. I don't think CHAP does that either. We
>> >>>>>> currently
>> >>>>>> >> > > generate
>> >>>>>> >> > > > >>>> ACLs
>> >>>>>> >> > > > >>>> >> >> via our SAN api during connectPhysicalDisk as a
>> >>>>>> safety
>> >>>>>> >> > > measure,
>> >>>>>> >> > > > but
>> >>>>>> >> > > > >>>> if
>> >>>>>> >> > > > >>>> >> >> CloudStack is working properly it will be in
>> >>>>>> charge of
>> >>>>>> >> > > > controlling
>> >>>>>> >> > > > >>>> >> >> that the disks are only being used where they
>> >>>>>> should be.
>> >>>>>> >> The
>> >>>>>> >> > > > ACLs
>> >>>>>> >> > > > >>>> just
>> >>>>>> >> > > > >>>> >> >> ensure that if the VM somehow gets started in
>> two
>> >>>>>> >> different
>> >>>>>> >> > > > places
>> >>>>>> >> > > > >>>> >> >> (e.g. HA malfunction), only one of them will
>> have
>> >>>>>> access
>> >>>>>> >> to
>> >>>>>> >> > > the
>> >>>>>> >> > > > >>>> disks.
>> >>>>>> >> > > > >>>> >> >>
>> >>>>>> >> > > > >>>> >> >> >
>> >>>>>> >> > > > >>>> >> >> >
>> >>>>>> >> > > > >>>> >> >> > On Fri, Sep 27, 2013 at 12:17 AM, Mike
>> Tutkowski
>> >>>>>> <
>> >>>>>> >> > > > >>>> >> >> > mike.tutkowski@solidfire.com> wrote:
>> >>>>>> >> > > > >>>> >> >> >
>> >>>>>> >> > > > >>>> >> >> >> Let me clarify this line a bit:
>> >>>>>> >> > > > >>>> >> >> >>
>> >>>>>> >> > > > >>>> >> >> >> "We get away without this with XenServer and
>> >>>>>> VMware
>> >>>>>> >> > because
>> >>>>>> >> > > > - as
>> >>>>>> >> > > > >>>> far
>> >>>>>> >> > > > >>>> >> as
>> >>>>>> >> > > > >>>> >> >> I
>> >>>>>> >> > > > >>>> >> >> >> know - CS delegates HA and live migration to
>> >>>>>> those
>> >>>>>> >> > clusters
>> >>>>>> >> > > > and
>> >>>>>> >> > > > >>>> they
>> >>>>>> >> > > > >>>> >> >> handle
>> >>>>>> >> > > > >>>> >> >> >> it most likely with some kind of locking
>> >>>>>> protocol on
>> >>>>>> >> the
>> >>>>>> >> > > > >>>> >> SR/datastore."
>> >>>>>> >> > > > >>>> >> >> >>
>> >>>>>> >> > > > >>>> >> >> >> When I set up a XenServer or a VMware
>> cluster,
>> >>>>>> all
>> >>>>>> >> nodes
>> >>>>>> >> > in
>> >>>>>> >> > > > the
>> >>>>>> >> > > > >>>> >> cluster
>> >>>>>> >> > > > >>>> >> >> >> have the proper CHAP credentials and can
>> access
>> >>>>>> a
>> >>>>>> >> shared
>> >>>>>> >> > > > >>>> >> SR/datastore.
>> >>>>>> >> > > > >>>> >> >> >>
>> >>>>>> >> > > > >>>> >> >> >> HA and live migrations are OK here because
>> the
>> >>>>>> cluster
>> >>>>>> >> > > > controls
>> >>>>>> >> > > > >>>> >> access
>> >>>>>> >> > > > >>>> >> >> to
>> >>>>>> >> > > > >>>> >> >> >> the VDI on the SR (or VMDK on the datastore)
>> >>>>>> with some
>> >>>>>> >> > kind
>> >>>>>> >> > > > of
>> >>>>>> >> > > > >>>> >> locking
>> >>>>>> >> > > > >>>> >> >> >> protocol, I expect.
>> >>>>>> >> > > > >>>> >> >> >>
>> >>>>>> >> > > > >>>> >> >> >> Since KVM isn't really in a cluster outside
>> of
>> >>>>>> the
>> >>>>>> >> > > CloudStack
>> >>>>>> >> > > > >>>> world,
>> >>>>>> >> > > > >>>> >> >> >> CloudStack has to handle these intricacies.
>> >>>>>> >> > > > >>>> >> >> >>
>> >>>>>> >> > > > >>>> >> >> >> In my case, I'm just presenting a raw disk
>> to a
>> >>>>>> VM on
>> >>>>>> >> a
>> >>>>>> >> > KVM
>> >>>>>> >> > > > host.
>> >>>>>> >> > > > >>>> >> >> >>
>> >>>>>> >> > > > >>>> >> >> >> In that case, HA and live migration depend on
>> >>>>>> the
>> >>>>>> >> storage
>> >>>>>> >> > > > plug-in
>> >>>>>> >> > > > >>>> >> being
>> >>>>>> >> > > > >>>> >> >> >> able to grant and revoke access to the volume
>> >>>>>> for
>> >>>>>> >> hosts
>> >>>>>> >> > as
>> >>>>>> >> > > > >>>> needed.
>> >>>>>> >> > > > >>>> >> >> >>
>> >>>>>> >> > > > >>>> >> >> >> I'd actually rather not even bother with CHAP
>> >>>>>> when
>> >>>>>> >> using
>> >>>>>> >> > > KVM.
>> >>>>>> >> > > > >>>> >> >> >>
>> >>>>>> >> > > > >>>> >> >> >>
>> >>>>>> >> > > > >>>> >> >> >> On Fri, Sep 27, 2013 at 12:06 AM, Mike
>> >>>>>> Tutkowski <
>> >>>>>> >> > > > >>>> >> >> >> mike.tutkowski@solidfire.com> wrote:
>> >>>>>> >> > > > >>>> >> >> >>
>> >>>>>> >> > > > >>>> >> >> >>> Hey Marcus,
>> >>>>>> >> > > > >>>> >> >> >>>
>> >>>>>> >> > > > >>>> >> >> >>> I agree that CHAP does not fulfill the same
>> >>>>>> role as
>> >>>>>> >> > > fencing.
>> >>>>>> >> > > > >>>> >> >> >>>
>> >>>>>> >> > > > >>>> >> >> >>> I think we're going to have trouble with HA
>> >>>>>> and live
>> >>>>>> >> > > > migrations
>> >>>>>> >> > > > >>>> on
>> >>>>>> >> > > > >>>> >> KVM
>> >>>>>> >> > > > >>>> >> >> if
>> >>>>>> >> > > > >>>> >> >> >>> the storage plug-in doesn't have a way of
>> >>>>>> knowing
>> >>>>>> >> when a
>> >>>>>> >> > > > host
>> >>>>>> >> > > > >>>> wants
>> >>>>>> >> > > > >>>> >> to
>> >>>>>> >> > > > >>>> >> >> >>> access a volume and when we want to revoke
>> >>>>>> access to
>> >>>>>> >> > that
>> >>>>>> >> > > > >>>> volume.
>> >>>>>> >> > > > >>>> >> >> >>>
>> >>>>>> >> > > > >>>> >> >> >>> We get away without this with XenServer and
>> >>>>>> VMware
>> >>>>>> >> > because
>> >>>>>> >> > > > - as
>> >>>>>> >> > > > >>>> far
>> >>>>>> >> > > > >>>> >> as
>> >>>>>> >> > > > >>>> >> >> I
>> >>>>>> >> > > > >>>> >> >> >>> know - CS delegates HA and live migration to
>> >>>>>> those
>> >>>>>> >> > > clusters
>> >>>>>> >> > > > and
>> >>>>>> >> > > > >>>> they
>> >>>>>> >> > > > >>>> >> >> handle
>> >>>>>> >> > > > >>>> >> >> >>> it most likely with some kind of locking
>> >>>>>> protocol on
>> >>>>>> >> the
>> >>>>>> >> > > > >>>> >> SR/datastore.
>> >>>>>> >> > > > >>>> >> >> >>>
>> >>>>>> >> > > > >>>> >> >> >>> As far as the path field is concerned, I
>> >>>>>> should be
>> >>>>>> >> able
>> >>>>>> >> > to
>> >>>>>> >> > > > >>>> populate
>> >>>>>> >> > > > >>>> >> it
>> >>>>>> >> > > > >>>> >> >> >>> with the IQN of the volume in question.
>> >>>>>> >> > > > >>>> >> >> >>>
>> >>>>>> >> > > > >>>> >> >> >>> One problem I do see, however, is in the
>> >>>>>> >> getPhysicalDisk
>> >>>>>> >> > > > method.
>> >>>>>> >> > > > >>>> >> >> >>>
>> >>>>>> >> > > > >>>> >> >> >>> How are you envisioning I keep track of
>> >>>>>> >> KVMPhysicalDisks
>> >>>>>> >> > > > that I
>> >>>>>> >> > > > >>>> >> create
>> >>>>>> >> > > > >>>> >> >> in
>> >>>>>> >> > > > >>>> >> >> >>> my connect method?
>> >>>>>> >> > > > >>>> >> >> >>>
>> >>>>>> >> > > > >>>> >> >> >>> Initially I was thinking I'd just keep them
>> in
>> >>>>>> a map.
>> >>>>>> >> > > > Storage
>> >>>>>> >> > > > >>>> pool
>> >>>>>> >> > > > >>>> >> UUID
>> >>>>>> >> > > > >>>> >> >> >>> to KVMPhysicalDisks.
>> >>>>>> >> > > > >>>> >> >> >>>
>> >>>>>> >> > > > >>>> >> >> >>> The problem is, how do I reconstruct that
>> map
>> >>>>>> if the
>> >>>>>> >> > agent
>> >>>>>> >> > > > is
>> >>>>>> >> > > > >>>> >> restarted
>> >>>>>> >> > > > >>>> >> >> >>> (say the host crashes or is restarted).
>> >>>>>> >> > > > >>>> >> >> >>>
>> >>>>>> >> > > > >>>> >> >> >>> For storage pools, we get a message
>> >>>>>> >> > > > (ModifyStoragePoolCommand)
>> >>>>>> >> > > > >>>> from
>> >>>>>> >> > > > >>>> >> the
>> >>>>>> >> > > > >>>> >> >> >>> CS MS to tell us about all of the relevant
>> >>>>>> storage
>> >>>>>> >> > pools.
>> >>>>>> >> > > > With
>> >>>>>> >> > > > >>>> this
>> >>>>>> >> > > > >>>> >> >> >>> message, I can reconstruct my cache of
>> storage
>> >>>>>> >> pools. No
>> >>>>>> >> > > > problem
>> >>>>>> >> > > > >>>> >> there.
>> >>>>>> >> > > > >>>> >> >> >>>
>> >>>>>> >> > > > >>>> >> >> >>> But how will I know which volumes belong to
>> a
>> >>>>>> given
>> >>>>>> >> > > storage
>> >>>>>> >> > > > >>>> pool if
>> >>>>>> >> > > > >>>> >> I
>> >>>>>> >> > > > >>>> >> >> >>> have to rebuild that map? How will I even
>> know
>> >>>>>> which
>> >>>>>> >> > > volumes
>> >>>>>> >> > > > >>>> are in
>> >>>>>> >> > > > >>>> >> >> use at
>> >>>>>> >> > > > >>>> >> >> >>> all?
>> >>>>>> >> > > > >>>> >> >> >>>
>> >>>>>> >> > > > >>>> >> >> >>> Thanks
>> >>>>>> >> > > > >>>> >> >> >>>
>> >>>>>> >> > > > >>>> >> >> >>>
>> >>>>>> >> > > > >>>> >> >> >>> On Thu, Sep 26, 2013 at 11:37 PM, Marcus
>> >>>>>> Sorensen <
>> >>>>>> >> > > > >>>> >> shadowsor@gmail.com
>> >>>>>> >> > > > >>>> >> >> >wrote:
>> >>>>>> >> > > > >>>> >> >> >>>
>> >>>>>> >> > > > >>>> >> >> >>>> On Thu, Sep 26, 2013 at 10:23 PM, Mike
>> >>>>>> Tutkowski
>> >>>>>> >> > > > >>>> >> >> >>>> <mike.tutkowski@solidfire.com> wrote:
>> >>>>>> >> > > > >>>> >> >> >>>> > My comments are inline:
>> >>>>>> >> > > > >>>> >> >> >>>> >
>> >>>>>> >> > > > >>>> >> >> >>>> >
>> >>>>>> >> > > > >>>> >> >> >>>> > On Thu, Sep 26, 2013 at 9:10 PM, Marcus
>> >>>>>> Sorensen <
>> >>>>>> >> > > > >>>> >> >> shadowsor@gmail.com
>> >>>>>> >> > > > >>>> >> >> >>>> >wrote:
>> >>>>>> >> > > > >>>> >> >> >>>> >
>> >>>>>> >> > > > >>>> >> >> >>>> >> Ok, let me digest this a bit. I got the
>> >>>>>> github
>> >>>>>> >> > > responses
>> >>>>>> >> > > > >>>> but I'd
>> >>>>>> >> > > > >>>> >> >> also
>> >>>>>> >> > > > >>>> >> >> >>>> >> like to keep it on-list as well.
>> >>>>>> >> > > > >>>> >> >> >>>> >>
>> >>>>>> >> > > > >>>> >> >> >>>> >>  My initial thoughts are:
>> >>>>>> >> > > > >>>> >> >> >>>> >>
>> >>>>>> >> > > > >>>> >> >> >>>> >> 1) I don't think disk format and size
>> are
>> >>>>>> >> necessary
>> >>>>>> >> > > > >>>> parameters
>> >>>>>> >> > > > >>>> >> for
>> >>>>>> >> > > > >>>> >> >> >>>> >> connectPhysicalDisk, as the format can
>> be
>> >>>>>> >> determined
>> >>>>>> >> > > by
>> >>>>>> >> > > > the
>> >>>>>> >> > > > >>>> >> >> adaptor,
>> >>>>>> >> > > > >>>> >> >> >>>> >> and the size is set during the
>> createAsync
>> >>>>>> call
>> >>>>>> >> in
>> >>>>>> >> > the
>> >>>>>> >> > > > >>>> plugin.
>> >>>>>> >> > > > >>>> >> We
>> >>>>>> >> > > > >>>> >> >> >>>> >> really just need the disk path and the
>> >>>>>> pool.
>> >>>>>> >> > > > >>>> >> >> >>>> >>
>> >>>>>> >> > > > >>>> >> >> >>>> > [Mike T.] I agree, format is not needed.
>> >>>>>> The only
>> >>>>>> >> > > reason
>> >>>>>> >> > > > I
>> >>>>>> >> > > > >>>> have
>> >>>>>> >> > > > >>>> >> size
>> >>>>>> >> > > > >>>> >> >> >>>> passed
>> >>>>>> >> > > > >>>> >> >> >>>> > in is because I need to create a
>> >>>>>> KVMPhysicalDisk
>> >>>>>> >> at
>> >>>>>> >> > the
>> >>>>>> >> > > > end
>> >>>>>> >> > > > >>>> of
>> >>>>>> >> > > > >>>> >> the
>> >>>>>> >> > > > >>>> >> >> >>>> connect
>> >>>>>> >> > > > >>>> >> >> >>>> > method. Since this KVMPhysicalDisk is (in
>> >>>>>> the
>> >>>>>> >> code on
>> >>>>>> >> > > > GitHub)
>> >>>>>> >> > > > >>>> >> being
>> >>>>>> >> > > > >>>> >> >> >>>> used to
>> >>>>>> >> > > > >>>> >> >> >>>> > create our XML to attach the disk, I
>> >>>>>> figured we'd
>> >>>>>> >> > need
>> >>>>>> >> > > > that
>> >>>>>> >> > > > >>>> size.
>> >>>>>> >> > > > >>>> >> >> The
>> >>>>>> >> > > > >>>> >> >> >>>> > KVMPhysicalDisk I produce from my
>> >>>>>> implementation
>> >>>>>> >> of
>> >>>>>> >> > > > >>>> >> getPhysicalDisk
>> >>>>>> >> > > > >>>> >> >> is
>> >>>>>> >> > > > >>>> >> >> >>>> not
>> >>>>>> >> > > > >>>> >> >> >>>> > as good because I don't know the size of
>> >>>>>> the disk
>> >>>>>> >> at
>> >>>>>> >> > > this
>> >>>>>> >> > > > >>>> point
>> >>>>>> >> > > > >>>> >> (I
>> >>>>>> >> > > > >>>> >> >> >>>> don't
>> >>>>>> >> > > > >>>> >> >> >>>> > keep that information around). The
>> reason I
>> >>>>>> don't
>> >>>>>> >> > keep
>> >>>>>> >> > > > that
>> >>>>>> >> > > > >>>> info
>> >>>>>> >> > > > >>>> >> >> >>>> around is
>> >>>>>> >> > > > >>>> >> >> >>>> > because I don't have a good way to
>> >>>>>> reproduce that
>> >>>>>> >> > info
>> >>>>>> >> > > if
>> >>>>>> >> > > > >>>> the KVM
>> >>>>>> >> > > > >>>> >> >> host
>> >>>>>> >> > > > >>>> >> >> >>>> is
>> >>>>>> >> > > > >>>> >> >> >>>> > rebooted. We get info about storage pools
>> >>>>>> in the
>> >>>>>> >> form
>> >>>>>> >> > > of
>> >>>>>> >> > > > a
>> >>>>>> >> > > > >>>> >> >> >>>> > ModifyStoragePoolCommand, but nothing
>> about
>> >>>>>> the
>> >>>>>> >> > volumes
>> >>>>>> >> > > > >>>> inside of
>> >>>>>> >> > > > >>>> >> >> the
>> >>>>>> >> > > > >>>> >> >> >>>> > storage pool.
>> >>>>>> >> > > > >>>> >> >> >>>>
>> >>>>>> >> > > > >>>> >> >> >>>> getPhysicalDisk is called in a bunch of
>> >>>>>> places. I'd
>> >>>>>> >> > rely
>> >>>>>> >> > > > on the
>> >>>>>> >> > > > >>>> >> >> >>>> connectPhysicalDisk to only make the block
>> >>>>>> device
>> >>>>>> >> > appear
>> >>>>>> >> > > > on the
>> >>>>>> >> > > > >>>> >> host,
>> >>>>>> >> > > > >>>> >> >> >>>> and then getPhysicalDisk to find that block
>> >>>>>> device
>> >>>>>> >> and
>> >>>>>> >> > > > fill out
>> >>>>>> >> > > > >>>> >> things
>> >>>>>> >> > > > >>>> >> >> >>>> like disk size and path (the real path to
>> the
>> >>>>>> local
>> >>>>>> >> > block
>> >>>>>> >> > > > >>>> device)
>> >>>>>> >> > > > >>>> >> for
>> >>>>>> >> > > > >>>> >> >> >>>> passing and creating the disk XML. Trust
>> me,
>> >>>>>> unless
>> >>>>>> >> > > things
>> >>>>>> >> > > > have
>> >>>>>> >> > > > >>>> >> >> >>>> changed significantly you need to be able
>> to
>> >>>>>> >> identify a
>> >>>>>> >> > > > given
>> >>>>>> >> > > > >>>> >> device
>> >>>>>> >> > > > >>>> >> >> >>>> as a specific local disk by whatever you
>> are
>> >>>>>> setting
>> >>>>>> >> > the
>> >>>>>> >> > > > 'path'
>> >>>>>> >> > > > >>>> >> >> >>>> attribute to be.  getPhysicalDisk will be
>> >>>>>> called on
>> >>>>>> >> > your
>> >>>>>> >> > > > >>>> storage
>> >>>>>> >> > > > >>>> >> pool
>> >>>>>> >> > > > >>>> >> >> >>>> with simply the path attribute, and via
>> your
>> >>>>>> adaptor
>> >>>>>> >> > with
>> >>>>>> >> > > > the
>> >>>>>> >> > > > >>>> pool
>> >>>>>> >> > > > >>>> >> and
>> >>>>>> >> > > > >>>> >> >> >>>> path.
>> >>>>>> >> > > > >>>> >> >> >>>>
>> >>>>>> >> > > > >>>> >> >> >>>> So you may set path as some combination of
>> >>>>>> iqn and
>> >>>>>> >> > > > target/pool
>> >>>>>> >> > > > >>>> >> info,
>> >>>>>> >> > > > >>>> >> >> >>>> or if iqn is enough to identify a unique
>> block
>> >>>>>> >> device
>> >>>>>> >> > (in
>> >>>>>> >> > > > >>>> >> >> >>>> /dev/disk/by-id maybe?) on a host then just
>> >>>>>> use
>> >>>>>> >> that.
>> >>>>>> >> > > Path
>> >>>>>> >> > > > just
>> >>>>>> >> > > > >>>> >> needs
>> >>>>>> >> > > > >>>> >> >> >>>> to be something, anything, to identify the
>> >>>>>> disk on
>> >>>>>> >> the
>> >>>>>> >> > > > host. In
>> >>>>>> >> > > > >>>> >> >> >>>> getPhysicalDisk, identify the local block
>> >>>>>> device
>> >>>>>> >> > matching
>> >>>>>> >> > > > the
>> >>>>>> >> > > > >>>> info,
>> >>>>>> >> > > > >>>> >> >> >>>> create a new KVMPhysicalDisk with the local
>> >>>>>> path,
>> >>>>>> >> size,
>> >>>>>> >> > > > etc,
>> >>>>>> >> > > > >>>> and
>> >>>>>> >> > > > >>>> >> >> >>>> return it.
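>> >>>>>> >> > > > >>>> >> >> >>>>
>> >>>>>> >> > > > >>>> >> >> >>>> Shape-wise, something like this (a sketch; the two
>> >>>>>> >> > > > >>>> >> >> >>>> helpers are made-up names, the setters are the usual
>> >>>>>> >> > > > >>>> >> >> >>>> KVMPhysicalDisk ones):
>> >>>>>> >> > > > >>>> >> >> >>>>
>> >>>>>> >> > > > >>>> >> >> >>>>     public KVMPhysicalDisk getPhysicalDisk(String path,
>> >>>>>> >> > > > >>>> >> >> >>>>             KVMStoragePool pool) {
>> >>>>>> >> > > > >>>> >> >> >>>>         // 'path' is whatever you stored: enough to find
>> >>>>>> >> > > > >>>> >> >> >>>>         // the local /dev/disk/by-path link for the LUN
>> >>>>>> >> > > > >>>> >> >> >>>>         String dev = resolveLocalDevice(path);  // hypothetical
>> >>>>>> >> > > > >>>> >> >> >>>>         KVMPhysicalDisk disk = new KVMPhysicalDisk(dev, path, pool);
>> >>>>>> >> > > > >>>> >> >> >>>>         disk.setFormat(PhysicalDiskFormat.RAW);
>> >>>>>> >> > > > >>>> >> >> >>>>         long size = getDeviceSizeBytes(dev);    // hypothetical
>> >>>>>> >> > > > >>>> >> >> >>>>         disk.setSize(size);
>> >>>>>> >> > > > >>>> >> >> >>>>         disk.setVirtualSize(size);
>> >>>>>> >> > > > >>>> >> >> >>>>         return disk;
>> >>>>>> >> > > > >>>> >> >> >>>>     }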
>> >>>>>> >> > > > >>>> >> >> >>>>
>> >>>>>> >> > > > >>>> >> >> >>>> >
>> >>>>>> >> > > > >>>> >> >> >>>> >>
>> >>>>>> >> > > > >>>> >> >> >>>> >> 2) I thought this access group thing you
>> >>>>>> mention
>> >>>>>> >> are
>> >>>>>> >> > > the
>> >>>>>> >> > > > >>>> >> >> grantAccess
>> >>>>>> >> > > > >>>> >> >> >>>> >> and revokeAccess calls in the storage
>> >>>>>> plugin 2.0
>> >>>>>> >> > > design
>> >>>>>> >> > > > >>>> doc. Was
>> >>>>>> >> > > > >>>> >> >> that
>> >>>>>> >> > > > >>>> >> >> >>>> >> not implemented?
>> >>>>>> >> > > > >>>> >> >> >>>> >>
>> >>>>>> >> > > > >>>> >> >> >>>> > [Mike T.] Yeah, as I mentioned in an
>> e-mail
>> >>>>>> way
>> >>>>>> >> back,
>> >>>>>> >> > > > those
>> >>>>>> >> > > > >>>> >> methods
>> >>>>>> >> > > > >>>> >> >> >>>> were
>> >>>>>> >> > > > >>>> >> >> >>>> > never implemented in 4.2. I think you
>> said
>> >>>>>> you
>> >>>>>> >> were
>> >>>>>> >> > > > going to
>> >>>>>> >> > > > >>>> get
>> >>>>>> >> > > > >>>> >> >> around
>> >>>>>> >> > > > >>>> >> >> >>>> > them not being implemented by keeping
>> >>>>>> certain
>> >>>>>> >> logic
>> >>>>>> >> > > that
>> >>>>>> >> > > > >>>> talks to
>> >>>>>> >> > > > >>>> >> >> the
>> >>>>>> >> > > > >>>> >> >> >>>> SAN
>> >>>>>> >> > > > >>>> >> >> >>>> > in the agent. I don't think we want any
>> >>>>>> >> > > > SolidFire-specific
>> >>>>>> >> > > > >>>> code
>> >>>>>> >> > > > >>>> >> in
>> >>>>>> >> > > > >>>> >> >> the
>> >>>>>> >> > > > >>>> >> >> >>>> > agent, however, so I can't go that route.
>> >>>>>> If those
>> >>>>>> >> > > > methods
>> >>>>>> >> > > > >>>> do not
>> >>>>>> >> > > > >>>> >> >> get
>> >>>>>> >> > > > >>>> >> >> >>>> > implemented in 4.3, then I will need to
>> use
>> >>>>>> CHAP
>> >>>>>> >> > > > credentials
>> >>>>>> >> > > > >>>> for
>> >>>>>> >> > > > >>>> >> KVM
>> >>>>>> >> > > > >>>> >> >> >>>> (just
>> >>>>>> >> > > > >>>> >> >> >>>> > like I did with XenServer and VMware).
>> >>>>>> >> > > > >>>> >> >> >>>>
>> >>>>>> >> > > > >>>> >> >> >>>> I initially figured your StorageAdaptor
>> >>>>>> >> implementation
>> >>>>>> >> > > > would
>> >>>>>> >> > > > >>>> be all
>> >>>>>> >> > > > >>>> >> >> >>>> solidfire specific, just like the mgmt
>> server
>> >>>>>> side
>> >>>>>> >> > plugin
>> >>>>>> >> > > > is.
>> >>>>>> >> > > > >>>> If it
>> >>>>>> >> > > > >>>> >> >> >>>> can be generic to all iscsi storage then
>> >>>>>> that's
>> >>>>>> >> great.
>> >>>>>> >> > I
>> >>>>>> >> > > > agree
>> >>>>>> >> > > > >>>> that
>> >>>>>> >> > > > >>>> >> >> >>>> ideally the agent wouldn't be making API
>> >>>>>> calls to
>> >>>>>> >> your
>> >>>>>> >> > > > SAN. I
>> >>>>>> >> > > > >>>> don't
>> >>>>>> >> > > > >>>> >> >> >>>> think it should be necessary given that
>> >>>>>> you're not
>> >>>>>> >> > going
>> >>>>>> >> > > to
>> >>>>>> >> > > > >>>> use the
>> >>>>>> >> > > > >>>> >> >> >>>> ACL route. I'm not sure CHAP fills the same
>> >>>>>> purpose
>> >>>>>> >> of
>> >>>>>> >> > > > fencing.
>> >>>>>> >> > > > >>>> >> >> >>>>
>> >>>>>> >> > > > >>>> >> >> >>>> >
>> >>>>>> >> > > > >>>> >> >> >>>> >>
>> >>>>>> >> > > > >>>> >> >> >>>> >> I see you've  added getters/setters for
>> the
>> >>>>>> >> attach
>> >>>>>> >> > cmd
>> >>>>>> >> > > > to
>> >>>>>> >> > > > >>>> pass
>> >>>>>> >> > > > >>>> >> the
>> >>>>>> >> > > > >>>> >> >> >>>> >> iscsi info you need. Would it perhaps be
>> >>>>>> >> possible to
>> >>>>>> >> > > > send a
>> >>>>>> >> > > > >>>> >> details
>> >>>>>> >> > > > >>>> >> >> >>>> >> Map<String, String> instead? Then any
>> >>>>>> plugin
>> >>>>>> >> > > implementer
>> >>>>>> >> > > > >>>> could
>> >>>>>> >> > > > >>>> >> >> attach
>> >>>>>> >> > > > >>>> >> >> >>>> >> arbitrary data they need. So it might be
>> >>>>>> >> > > > >>>> >> >> >>>> >> connectPhysicalDisk(StoragePoolType
>> type,
>> >>>>>> String
>> >>>>>> >> > > > poolUuid,
>> >>>>>> >> > > > >>>> >> String
>> >>>>>> >> > > > >>>> >> >> >>>> >> volPath, Map<String, String> details)?
>> >>>>>>  I'll
>> >>>>>> >> have to
>> >>>>>> >> > > > look
>> >>>>>> >> > > > >>>> and
>> >>>>>> >> > > > >>>> >> see
>> >>>>>> >> > > > >>>> >> >> >>>> >> where those cmd. attributes are set,
>> >>>>>> ideally it
>> >>>>>> >> > would
>> >>>>>> >> > > be
>> >>>>>> >> > > > >>>> all the
>> >>>>>> >> > > > >>>> >> >> way
>> >>>>>> >> > > > >>>> >> >> >>>> >> back in the plugin to avoid custom code
>> >>>>>> for every
>> >>>>>> >> > > > adaptor
>> >>>>>> >> > > > >>>> that
>> >>>>>> >> > > > >>>> >> >> wants
>> >>>>>> >> > > > >>>> >> >> >>>> >> to set details.
>> >>>>>> >> > > > >>>> >> >> >>>> >>
>> >>>>>> >> > > > >>>> >> >> >>>> > [Mike T.] If I'm not using the
>> volumes.path
>> >>>>>> field
>> >>>>>> >> for
>> >>>>>> >> > > > >>>> anything, I
>> >>>>>> >> > > > >>>> >> >> can
>> >>>>>> >> > > > >>>> >> >> >>>> stick
>> >>>>>> >> > > > >>>> >> >> >>>> > the IQN in volumes.path (as well as
>> leaving
>> >>>>>> it in
>> >>>>>> >> > > > >>>> >> >> volumes.iscsi_name,
>> >>>>>> >> > > > >>>> >> >> >>>> which
>> >>>>>> >> > > > >>>> >> >> >>>> > is used elsewhere). That way we only have
>> >>>>>> to ask
>> >>>>>> >> for
>> >>>>>> >> > > > >>>> getPath().
>> >>>>>> >> > > > >>>> >> >> >>>>
>> >>>>>> >> > > > >>>> >> >> >>>> Yeah, whatever it is that you'd need to
>> find
>> >>>>>> the
>> >>>>>> >> right
>> >>>>>> >> > > > block
>> >>>>>> >> > > > >>>> device
>> >>>>>> >> > > > >>>> >> >> >>>> should go in the path. If you look through
>> >>>>>> >> > > > >>>> LibvirtComputingResource
>> >>>>>> >> > > > >>>> >> >> >>>> you'll see stuff like this sprinkled
>> around:
>> >>>>>> >> > > > >>>> >> >> >>>>
>> >>>>>> >> > > > >>>> >> >> >>>>                 KVMPhysicalDisk volume =
>> >>>>>> >> > > > >>>> >> >> primaryPool.getPhysicalDisk(cmd
>> >>>>>> >> > > > >>>> >> >> >>>>                         .getVolumePath());
>> >>>>>> >> > > > >>>> >> >> >>>>
>> >>>>>> >> > > > >>>> >> >> >>>>
>> >>>>>> >> > > > >>>> >> >> >>>> or:
>> >>>>>> >> > > > >>>> >> >> >>>>         String volid = cmd.getPath();
>> >>>>>> >> > > > >>>> >> >> >>>>          KVMPhysicalDisk vol =
>> >>>>>> >> > > pool.getPhysicalDisk(volid);
>> >>>>>> >> > > > >>>> >> >> >>>>
>> >>>>>> >> > > > >>>> >> >> >>>> or:
>> >>>>>> >> > > > >>>> >> >> >>>>
>> >>>>>> >> > > > >>>> >> >> >>>>                     KVMPhysicalDisk physicalDisk =
>> >>>>>> >> > > > >>>> >> >> >>>>                             _storagePoolMgr.getPhysicalDisk(
>> >>>>>> >> > > > >>>> >> >> >>>>                                     store.getPoolType(),
>> >>>>>> >> > > > >>>> >> >> >>>>                                     store.getUuid(),
>> >>>>>> >> > > > >>>> >> >> >>>>                                     data.getPath());
>> >>>>>> >> > > > >>>> >> >> >>>>
>> >>>>>> >> > > > >>>> >> >> >>>> Maybe some of it is short-circuited by the
>> new
>> >>>>>> >> > > > >>>> KVMStorageProcessor,
>> >>>>>> >> > > > >>>> >> >> >>>> but I'd still implement a working one, and
>> >>>>>> then
>> >>>>>> >> > > > attachVolume
>> >>>>>> >> > > > >>>> can
>> >>>>>> >> > > > >>>> >> call
>> >>>>>> >> > > > >>>> >> >> >>>> getPhysicalDisk after connectPhysicalDisk,
>> >>>>>> even on
>> >>>>>> >> your
>> >>>>>> >> > > > >>>> >> adaptor/pool.
>> >>>>>> >> > > > >>>> >> >> >>>>
>> >>>>>> >> > > > >>>> >> >> >>>> >
>> >>>>>> >> > > > >>>> >> >> >>>> >>
>> >>>>>> >> > > > >>>> >> >> >>>> >> On Thu, Sep 26, 2013 at 7:35 PM, Mike
>> >>>>>> Tutkowski
>> >>>>>> >> > > > >>>> >> >> >>>> >> <mike.tutkowski@solidfire.com> wrote:
>> >>>>>> >> > > > >>>> >> >> >>>> >> > Also, if we went the non-CHAP route,
>> >>>>>> before
>> >>>>>> >> > > attaching
>> >>>>>> >> > > > a
>> >>>>>> >> > > > >>>> volume
>> >>>>>> >> > > > >>>> >> >> to a
>> >>>>>> >> > > > >>>> >> >> >>>> VM,
>> >>>>>> >> > > > >>>> >> >> >>>> >> > we'd have to tell the plug-in to set
>> up a
>> >>>>>> >> volume
>> >>>>>> >> > > > access
>> >>>>>> >> > > > >>>> group.
>> >>>>>> >> > > > >>>> >> >> >>>> >> >
>> >>>>>> >> > > > >>>> >> >> >>>> >> > When a volume is detached from a VM,
>> >>>>>> we'd have
>> >>>>>> >> to
>> >>>>>> >> > > > tell the
>> >>>>>> >> > > > >>>> >> >> plug-in
>> >>>>>> >> > > > >>>> >> >> >>>> to
>> >>>>>> >> > > > >>>> >> >> >>>> >> > delete the volume access group.
>> >>>>>> >> > > > >>>> >> >> >>>> >> >
>> >>>>>> >> > > > >>>> >> >> >>>> >> >
>> > On Thu, Sep 26, 2013 at 7:32 PM, Mike Tutkowski
>> > <mike.tutkowski@solidfire.com> wrote:
>> > > I mention this in my comments on GitHub, as well, but CHAP info is
>> > > associated with an account - not a storage pool.
>> > >
>> > > Ideally we could do without CHAP info entirely if we had a reliable
>> > > way to tell the storage plug-in that a given host wants to access a
>> > > given volume. In this case, my storage plug-in could create what we
>> > > call a Volume Access Group on the SAN. It would essentially say, "The
>> > > host with IQN <x> can access the volume with IQN <y> without using
>> > > CHAP credentials." Of course we'd need a way to revoke this privilege
>> > > in the event of a live migration of a VM. Right now, I do not believe
>> > > such a facility is supported with the storage plug-ins.
>> > >
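Nothing in the 4.2 framework exposes such a grant/revoke facility yet. Purely as a sketch of the hook being asked for here, it might look like the interface below; the interface name and method signatures are invented for illustration, and a SolidFire-style plug-in could map them onto creating and removing a Volume Access Group on the SAN.

    // Hypothetical host-level access-control hooks for a storage plug-in;
    // nothing like this exists in the 4.2 storage framework.
    public interface VolumeAccessControl {
        // Allow the host with initiator IQN hostIqn to see volume volumeIqn,
        // e.g. by placing both in a SAN-side Volume Access Group.
        boolean grantAccess(String hostIqn, String volumeIqn);

        // Revoke that permission, e.g. on detach or after a live migration.
        boolean revokeAccess(String hostIqn, String volumeIqn);
    }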
>> > > On Thu, Sep 26, 2013 at 4:56 PM, Marcus Sorensen
>> > > <shadowsor@gmail.com> wrote:
>> > > > Looking at your code, is the CHAP info stored with the pool, so we
>> > > > could pass the pool to the adaptor? That would be more agnostic:
>> > > > anyone implementing a plugin could pull the specifics they need out
>> > > > of the pool on the adaptor side, rather than creating custom
>> > > > signatures.
>> > > >
>> > > > Also, I think we may want to consider implementing connect/disconnect
>> > > > as just dummy methods in LibvirtStorageAdaptor, so we don't have to
>> > > > be picky about which adaptors/types in every single place we may want
>> > > > to connect/disconnect (in 4.1 there were several; I'm not sure if
>> > > > everything goes through this in 4.2). We can just call
>> > > > adaptor.connectPhysicalDisk and the adaptor can decide if it needs to
>> > > > do anything.
>> > > >
>> > > > Comments are attached to your commit; I just wanted to echo them here
>> > > > on-list.
>> > > >
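The "dummy methods" idea amounts to no-op implementations in LibvirtStorageAdaptor, along the lines of the sketch below; the signatures are assumed to match the StorageAdaptor proposal quoted further down, and the bodies are illustrative.

    // Sketch: no-op connect/disconnect in LibvirtStorageAdaptor, so callers
    // can invoke them unconditionally for every pool type. Pools that libvirt
    // manages (NFS, CLVM, RBD, ...) need no per-attach connect step.
    public boolean connectPhysicalDisk(String volumeUuid, KVMStoragePool pool) {
        return true; // nothing to do; libvirt already exposes the disk
    }

    public boolean disconnectPhysicalDisk(String volumeUuid, KVMStoragePool pool) {
        return true; // nothing to tear down
    }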
>> > > > On Thu, Sep 26, 2013 at 4:32 PM, Mike Tutkowski
>> > > > <mike.tutkowski@solidfire.com> wrote:
>> > > > > Oh, SnapshotTestWithFakeData is just modified because the code
>> > > > > wasn't building until I corrected this. It has nothing really to
>> > > > > do with my real changes.
>> > > > >
>> > > > > On Thu, Sep 26, 2013 at 4:31 PM, Mike Tutkowski
>> > > > > <mike.tutkowski@solidfire.com> wrote:
>> > > > > > Hey Marcus,
>> > > > > >
>> > > > > > I implemented your recommendations regarding adding connect and
>> > > > > > disconnect methods. It is not yet checked in (as you know, I'm
>> > > > > > having trouble with my KVM environment), but it is on GitHub here:
>> > > > > >
>> > > > > > https://github.com/mike-tutkowski/incubator-cloudstack/commit/f2897c65689012e6157c0a0c2ed7e5355900c59a
>> > > > > >
>> > > > > > Please let me know if you have any more comments.
>> > > > > >
>> > > > > > Thanks!
>> > > > > >
>> > > > > > On Thu, Sep 26, 2013 at 4:05 PM, Marcus Sorensen
>> > > > > > <shadowsor@gmail.com> wrote:
>> > > > > > > Mike, everyone,
>> > > > > > >    As I've mentioned on the board, I'm working on getting our
>> > > > > > > own internal KVM storage plugin working on 4.2. In the interest
>> > > > > > > of making it forward compatible, I just wanted to confirm what
>> > > > > > > you were doing with the SolidFire plugin as far as attaching
>> > > > > > > your iSCSI LUNs. We had discussed a new connectPhysicalDisk
>> > > > > > > method for the StorageAdaptor class, something perhaps like:
>> > > > > > >
>> > > > > > > public boolean connectPhysicalDisk(String volumeUuid, KVMStoragePool pool);
>> > > > > > >
>> > > > > > > then added to KVMStoragePoolManager:
>> > > > > > >
>> > > > > > > public boolean connectPhysicalDisk(StoragePoolType type,
>> > > > > > >         String poolUuid, String volPath) {
>> > > > > > >     StorageAdaptor adaptor = getStorageAdaptor(type);
>> > > > > > >     KVMStoragePool pool = adaptor.getStoragePool(poolUuid);
>> > > > > > >     return adaptor.connectPhysicalDisk(volPath, pool);
>> > > > > > > }
>> > > > > > >
>> > > > > > > Something similar to this for disconnect as well. Then in the
>> > > > > > > KVMStorageProcessor these can be called as needed for
>> > > > > > > attach/detach. We can probably stub out one in
>> > > > > > > LibvirtStorageAdaptor so there's no need to switch or if/else
>> > > > > > > for pool types, for example in KVMStorageProcessor.attachVolume.
>> > > > > > >
>> > > > > > > I have debated whether or not it should just be rolled into
>> > > > > > > getPhysicalDisk, having it connect the disk if it's not already
>> > > > > > > connected. getPhysicalDisk is called a lot, and I'm not sure it
>> > > > > > > always needs to connect the disk when it does. In past
>> > > > > > > iterations getPhysicalDisk has simply spoken to our SAN API and
>> > > > > > > returned the disk details, nothing more. So it seemed more
>> > > > > > > flexible and granular to do the connectPhysicalDisk (we have
>> > > > > > > one now in our 4.1 version).
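The "something similar to this for disconnect" would presumably mirror that dispatch one-for-one; a minimal sketch, assuming StorageAdaptor also gains a disconnectPhysicalDisk(String, KVMStoragePool) method:

    // Sketch of the matching disconnect dispatch in KVMStoragePoolManager,
    // mirroring the proposed connectPhysicalDisk above.
    public boolean disconnectPhysicalDisk(StoragePoolType type, String poolUuid,
            String volPath) {
        StorageAdaptor adaptor = getStorageAdaptor(type);
        KVMStoragePool pool = adaptor.getStoragePool(poolUuid);
        return adaptor.disconnectPhysicalDisk(volPath, pool);
    }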
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the
> cloud<http://solidfire.com/solution/overview/?video=play>
> *™*
