cloudstack-dev mailing list archives

From Marcus Sorensen <shadow...@gmail.com>
Subject Re: add connect method on KVM storage code
Date Sun, 29 Sep 2013 15:38:41 GMT
If you're tired of trying to get master to work, you could develop against
4.2. The majority of the code changes will be in your adaptor and my patch.
One line in KVMStoragePoolManager to register the adaptor, and whatever
changes you were going to make to pass CHAP auth as a Map. It should not
be hard to port that to master afterward once you know your code works.

I guess you eventually need to get it working in master anyway, but if
you're blocked on someone else's work there, this may be an option.
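For reference, the "one line" registration just means teaching
KVMStoragePoolManager to hand back your adaptor for your pool type. A
minimal sketch, assuming 4.2-era names (the SolidFireStorageAdaptor class
here is hypothetical, and field/method names are approximate):

    // inside KVMStoragePoolManager
    private StorageAdaptor getStorageAdaptor(StoragePoolType type) {
        if (type == StoragePoolType.Iscsi) {
            return new SolidFireStorageAdaptor(_storageLayer); // your adaptor
        }
        return _storageAdaptor; // the default LibvirtStorageAdaptor
    }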
On Sep 28, 2013 11:32 PM, "Mike Tutkowski" <mike.tutkowski@solidfire.com>
wrote:

> Turns out, it's the same problem as last time (from agent.err).
>
> Wei might know about this.
>
> log4j:WARN No appenders could be found for logger
> (org.apache.commons.httpclient.params.DefaultHttpParams).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for
> more info.
> java.lang.reflect.InvocationTargetException
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
> Caused by: java.lang.NoSuchMethodError: com.sun.jna.Native.free(J)V
> at org.libvirt.Library.free(Unknown Source)
> at org.libvirt.Connect.getCapabilities(Unknown Source)
> at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.IsHVMEnabled(LibvirtComputingResource.java:4533)
> at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.configure(LibvirtComputingResource.java:747)
> at com.cloud.agent.Agent.<init>(Agent.java:161)
> at com.cloud.agent.AgentShell.launchAgent(AgentShell.java:421)
> at com.cloud.agent.AgentShell.launchAgentFromClassInfo(AgentShell.java:376)
> at com.cloud.agent.AgentShell.launchAgent(AgentShell.java:357)
> at com.cloud.agent.AgentShell.start(AgentShell.java:454)
> ... 5 more
> Cannot start daemon
> Service exit with a return value of 5
>
>
> On Sat, Sep 28, 2013 at 11:12 PM, Mike Tutkowski <
> mike.tutkowski@solidfire.com> wrote:
>
> > Hey Marcus,
> >
> > Looks like there was a Libvirt issue in master.
> >
> > I updated everything and got the following message in agent.log. It's not
> > long. Can you take a look and see if you notice anything odd, or is this
> > about what you'd expect if I booted up Ubuntu, logged in, then came
> > straight over to look at this file? (This was the entire content of the
> > file):
> >
> > 2013-09-28 22:54:40,624 INFO  [cloud.agent.AgentShell] (main:null) Agent started
> > 2013-09-28 22:54:40,625 INFO  [cloud.agent.AgentShell] (main:null) Implementation Version is 4.3.0-SNAPSHOT
> > 2013-09-28 22:54:40,628 INFO  [cloud.agent.AgentShell] (main:null) agent.properties found at /etc/cloudstack/agent/agent.properties
> > 2013-09-28 22:54:40,636 INFO  [cloud.agent.AgentShell] (main:null) Defaulting to using properties file for storage
> > 2013-09-28 22:54:40,637 INFO  [cloud.agent.AgentShell] (main:null) Defaulting to the constant time backoff algorithm
> > 2013-09-28 22:54:40,640 INFO  [cloud.utils.LogUtils] (main:null) log4j configuration found at /etc/cloudstack/agent/log4j-cloud.xml
> > 2013-09-28 22:54:40,737 INFO  [cloud.agent.Agent] (main:null) id is
> > 2013-09-28 22:54:40,740 WARN  [cloud.resource.ServerResourceBase] (main:null) Nics are not specified in properties file/db, will try to autodiscover
> > 2013-09-28 22:54:40,748 INFO  [cloud.resource.ServerResourceBase] (main:null) Designating private to be nic cloudbr0
> > 2013-09-28 22:54:40,763 INFO  [resource.virtualnetwork.VirtualRoutingResource] (main:null) VirtualRoutingResource _scriptDir to use: scripts/network/domr/kvm
> > 2013-09-28 22:54:44,240 INFO  [cloud.agent.AgentShell] (main:null) Agent started
> > 2013-09-28 22:54:44,242 INFO  [cloud.agent.AgentShell] (main:null) Implementation Version is 4.3.0-SNAPSHOT
> > 2013-09-28 22:54:44,245 INFO  [cloud.agent.AgentShell] (main:null) agent.properties found at /etc/cloudstack/agent/agent.properties
> > 2013-09-28 22:54:44,252 INFO  [cloud.agent.AgentShell] (main:null) Defaulting to using properties file for storage
> > 2013-09-28 22:54:44,254 INFO  [cloud.agent.AgentShell] (main:null) Defaulting to the constant time backoff algorithm
> > 2013-09-28 22:54:44,256 INFO  [cloud.utils.LogUtils] (main:null) log4j configuration found at /etc/cloudstack/agent/log4j-cloud.xml
> > 2013-09-28 22:54:44,356 INFO  [cloud.agent.Agent] (main:null) id is
> > 2013-09-28 22:54:44,359 WARN  [cloud.resource.ServerResourceBase] (main:null) Nics are not specified in properties file/db, will try to autodiscover
> > 2013-09-28 22:54:44,367 INFO  [cloud.resource.ServerResourceBase] (main:null) Designating private to be nic cloudbr0
> > 2013-09-28 22:54:44,385 INFO  [resource.virtualnetwork.VirtualRoutingResource] (main:null) VirtualRoutingResource _scriptDir to use: scripts/network/domr/kvm
> > 2013-09-28 22:58:26,438 INFO  [cloud.agent.AgentShell] (main:null) Agent started
> > 2013-09-28 22:58:26,465 INFO  [cloud.agent.AgentShell] (main:null) Implementation Version is 4.3.0-SNAPSHOT
> > 2013-09-28 22:58:26,473 INFO  [cloud.agent.AgentShell] (main:null) agent.properties found at /etc/cloudstack/agent/agent.properties
> > 2013-09-28 22:58:26,475 INFO  [cloud.agent.AgentShell] (main:null) Defaulting to using properties file for storage
> > 2013-09-28 22:58:26,476 INFO  [cloud.agent.AgentShell] (main:null) Defaulting to the constant time backoff algorithm
> > 2013-09-28 22:58:26,562 INFO  [cloud.utils.LogUtils] (main:null) log4j configuration found at /etc/cloudstack/agent/log4j-cloud.xml
> > 2013-09-28 22:58:27,263 INFO  [cloud.agent.Agent] (main:null) id is
> > 2013-09-28 22:58:27,280 WARN  [cloud.resource.ServerResourceBase] (main:null) Nics are not specified in properties file/db, will try to autodiscover
> > 2013-09-28 22:58:27,309 INFO  [cloud.resource.ServerResourceBase] (main:null) Designating private to be nic cloudbr0
> > 2013-09-28 22:58:27,404 INFO  [resource.virtualnetwork.VirtualRoutingResource] (main:null) VirtualRoutingResource _scriptDir to use: scripts/network/domr/kvm
> >
> >
> > On Sat, Sep 28, 2013 at 12:35 AM, Mike Tutkowski <
> > mike.tutkowski@solidfire.com> wrote:
> >
> >> OK, sounds good.
> >>
> >>
> >> On Sat, Sep 28, 2013 at 12:32 AM, Marcus Sorensen <shadowsor@gmail.com
> >wrote:
> >>
> >>> Our setup is kind of strange. We create all of our volumes as 5TB
> >>> disks (thin provisioned), and then use a virtual device on top to do
> >>> multipathing and mirroring, and that device is set to the usable size.
> >>> This facilitates resizing because we don't have to call back to the
> >>> SAN; it can just be expanded locally on the hypervisor, among other
> >>> things. So it would be a bad example, but the spirit is the same: just
> >>> looking at the local device to create/return a KVMPhysicalDisk object.
> >>>
> >>> On Sat, Sep 28, 2013 at 12:03 AM, Mike Tutkowski
> >>> <mike.tutkowski@solidfire.com> wrote:
> >>> > Oh, I see...the code you're referring to is in your personal 4.1
> >>> > branch.
> >>> >
> >>> > When I was scanning the implementation of getPhysicalDisk in
> >>> > LibvirtStorageAdaptor, it - of course - looked up the info in Libvirt.
> >>> >
> >>> > You and I should probably query this info the same way.
> >>> >
> >>> > Can you send me the code you use to query the disk size? We should
> >>> > just make a common method of it.
> >>> >
> >>> >
> >>> > On Sat, Sep 28, 2013 at 12:00 AM, Mike Tutkowski <
> >>> > mike.tutkowski@solidfire.com> wrote:
> >>> >
> >>> >> Perfect...that solves that problem.
> >>> >>
> >>> >> I was assuming you were querying Libvirt for your data, but it turns
> >>> >> out you're doing exactly what I'll need to do.
> >>> >>
> >>> >> Thanks!
> >>> >>
> >>> >>
> >>> >> On Fri, Sep 27, 2013 at 11:55 PM, Marcus Sorensen <
> >>> shadowsor@gmail.com>wrote:
> >>> >>
> >>> >>> On Fri, Sep 27, 2013 at 11:38 PM, Mike Tutkowski
> >>> >>> <mike.tutkowski@solidfire.com> wrote:
> >>> >>> > One problem I need to solve, though, is how to implement the
> >>> >>> > getPhysicalVolume method.
> >>> >>> >
> >>> >>> > In your case, you just ask Libvirt for the StoragePool by name.
> >>> >>> > This gives you details about the volume like its size.
> >>> >>>
> >>> >>> Actually, in my case (as in my plugin) I don't use libvirt-supported
> >>> >>> storage either. I take the disk info passed to me (a volume identifier
> >>> >>> via 'path'), find the local disk that matches it (say /dev/sdc or
> >>> >>> /dev/disk/by-id/whatever), query its size, create a new
> >>> >>> KVMPhysicalDisk object, use the setters to fill it in, and return the
> >>> >>> KVMPhysicalDisk object. Since this happens after it has been connected
> >>> >>> locally, we just need to look at the disk we have to return the info
> >>> >>> we need.
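> >>> >>>
> >>> >>> In other words, the whole method can stay small. A rough sketch (the
> >>> >>> two helper calls here are made-up names for the lookup logic):
> >>> >>>
> >>> >>>     public KVMPhysicalDisk getPhysicalDisk(String volumeUuid,
> >>> >>>             KVMStoragePool pool) {
> >>> >>>         String devPath = findLocalDevice(volumeUuid);  // e.g. /dev/sdc
> >>> >>>         KVMPhysicalDisk disk = new KVMPhysicalDisk(devPath, volumeUuid, pool);
> >>> >>>         disk.setFormat(PhysicalDiskFormat.RAW);
> >>> >>>         long sizeInBytes = getDeviceSize(devPath);     // see below
> >>> >>>         disk.setSize(sizeInBytes);
> >>> >>>         disk.setVirtualSize(sizeInBytes);
> >>> >>>         return disk;
> >>> >>>     }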
> >>> >>>
> >>> >>> >
> >>> >>> > In my case, I don't have a decent way (right now) to produce a
> >>> >>> > KVMPhysicalDisk that has a size.
> >>> >>>
> >>> >>> If you can identify the disk by the IQN, or whatever you're putting in
> >>> >>> the volume's path attribute in the database, you can do:
> >>> >>>
> >>> >>> [root@devcloud-kvm cloudstack-bs]# blockdev --getsz
> >>> >>> /dev/disk/by-path/pci-0000\:00\:07.1-scsi-0\:0\:0\:0
> >>> >>> 134217728
> >>> >>>
> >>> >>> Or even
> >>> >>>
> >>> >>> [root@devcloud-kvm cloudstack-bs]# cat /sys/block/sda/size
> >>> >>> 134217728
> >>> >>>
> >>> >>> Lots of ways to find the size of a block device on a Linux system.
> >>> >>> Probably some native Java methods, too.
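> >>> >>>
> >>> >>> For the native-Java route, a minimal sketch (sysfs reports the size in
> >>> >>> 512-byte sectors, the same units as blockdev --getsz):
> >>> >>>
> >>> >>>     // assumes java.io.*; dev is e.g. "sda"
> >>> >>>     static long blockDevSizeBytes(String dev) throws IOException {
> >>> >>>         try (BufferedReader r = new BufferedReader(
> >>> >>>                 new FileReader("/sys/block/" + dev + "/size"))) {
> >>> >>>             return Long.parseLong(r.readLine().trim()) * 512L;
> >>> >>>         }
> >>> >>>     }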
> >>> >>>
> >>> >>> >
> >>> >>> > Where do you recommend I get this information? Do you know if
> >>> >>> > iscsiadm has the ability to query the LUN for its size? Maybe I can
> >>> >>> > look it up by-path on disk with another tool to get its size?
> >>> >>> >
> >>> >>> > I can fill in values like its path (by-path name), its name, and its
> >>> >>> > format (PhysicalDiskFormat.RAW), but I don't know how to retrieve the
> >>> >>> > size of the disk.
> >>> >>> >
> >>> >>> > Maybe you have a recommendation here.
> >>> >>> >
> >>> >>> > Thanks!
> >>> >>> >
> >>> >>> >
> >>> >>> > On Fri, Sep 27, 2013 at 11:22 PM, Mike Tutkowski <
> >>> >>> > mike.tutkowski@solidfire.com> wrote:
> >>> >>> >
> >>> >>> >> Woops...didn't see that you had already sent out the Review Board
> >>> >>> >> update.
> >>> >>> >>
> >>> >>> >>
> >>> >>> >> On Fri, Sep 27, 2013 at 11:21 PM, Mike Tutkowski <
> >>> >>> >> mike.tutkowski@solidfire.com> wrote:
> >>> >>> >>
> >>> >>> >>> OK, let's not worry about that other stuff (at least now).
> >>> >>> >>>
> >>> >>> >>> When your code is available, send out a message and I can take a
> >>> >>> >>> look.
> >>> >>> >>>
> >>> >>> >>> Talk to you later!
> >>> >>> >>>
> >>> >>> >>>
> >>> >>> >>> On Fri, Sep 27, 2013 at 8:53 PM, Marcus Sorensen <
> >>> shadowsor@gmail.com
> >>> >>> >wrote:
> >>> >>> >>>
> >>> >>> >>>> On a side note, I'm actually a proponent of the storage plugin
> >>> >>> >>>> only talking to its storage API. Best to be hypervisor agnostic.
> >>> >>> >>>> It's just in the case of the default storage plugin that the
> >>> >>> >>>> storage API is the CloudStack agent.
> >>> >>> >>>>
> >>> >>> >>>> But enough about that; we can spin that off into another thread if
> >>> >>> >>>> we need to. This is about adding connect calls to the
> >>> >>> >>>> StorageAdaptor interface so custom storage pool types can prepare
> >>> >>> >>>> disks prior to use. I need to know how it will look going forward
> >>> >>> >>>> so I can continue my work.
> >>> >>> >>>> On Sep 27, 2013 8:00 PM, "Marcus Sorensen" <shadowsor@gmail.com> wrote:
> >>> >>> >>>>
> >>> >>>> >>>> > I actually think it's irrelevant to this discussion, since
> >>> >>>> >>>> > it's really only a contention point for the default KVM plugin.
> >>> >>>> >>>> > I did post this with the intention of getting feedback from
> >>> >>>> >>>> > anyone who cares about the KVM side and how it handles things.
> >>> >>>> >>>> >
> >>> >>>> >>>> > What you describe with Xen sounds identical to what we are doing
> >>> >>>> >>>> > here. That process is what I've been trying to explain. Your/my
> >>> >>>> >>>> > plugin creates a volume on the SAN the first time it's used, and
> >>> >>>> >>>> > then CS calls attach volume in KVMStorageProcessor or
> >>> >>>> >>>> > XenStorageProcessor to make the LUN available to the hypervisor.
> >>> >>>> >>>> > On KVM this calls your storage adaptor to run iscsiadm commands
> >>> >>>> >>>> > (connectPhysicalDisk, the subject of this thread). I think we
> >>> >>>> >>>> > were good on all of it up to the point where you mentioned
> >>> >>>> >>>> > createPhysicalDisk, which isn't necessary to implement since
> >>> >>>> >>>> > your plugin did it way back when.
> >>> >>>> >>>> > On Sep 27, 2013 7:51 PM, "Mike Tutkowski" <mike.tutkowski@solidfire.com> wrote:
> >>> >>> >>>> >
> >>> >>>> >> I think we should get John Burwell and Edison Su involved in
> >>> >>>> >> these discussions.
> >>> >>>> >>
> >>> >>>> >> John was a proponent for not having storage plug-ins talk to
> >>> >>>> >> hypervisors and Edison felt it was fine.
> >>> >>>> >>
> >>> >>>> >> Until we resolve that design issue, we might just be talking in
> >>> >>>> >> circles here. :)
> >>> >>> >>>> >>
> >>> >>> >>>> >>
> >>> >>> >>>> >> On Fri, Sep 27, 2013 at 7:29 PM, Marcus Sorensen <
> >>> >>> shadowsor@gmail.com
> >>> >>> >>>> >> >wrote:
> >>> >>> >>>> >>
> >>> >>> >>>> >> > The plugin itself doesn't talk to the hypervisor.
> >>> >>> >>>> >> > On Sep 27, 2013 7:28 PM, "Mike Tutkowski" <
> >>> >>> >>>> mike.tutkowski@solidfire.com
> >>> >>> >>>> >> >
> >>> >>> >>>> >> > wrote:
> >>> >>> >>>> >> >
> >>> >>>> >> > > Yeah, we should bring John Burwell into this conversation. He
> >>> >>>> >> > > had a strong opposition to storage plug-ins talking to
> >>> >>>> >> > > hypervisors. Perhaps he was just referring to shared storage,
> >>> >>>> >> > > though (not local disks).
> >>> >>>> >> > >
> >>> >>>> >> > > Anyways, prior to 4.2 for XenServer and VMware, you had to
> >>> >>>> >> > > preallocate your SR/datastore (there was a minor exception for
> >>> >>>> >> > > XenServer). CloudStack wasn't involved in that process at all. It
> >>> >>>> >> > > was a manual admin task.
> >>> >>>> >> > >
> >>> >>>> >> > > The storage plug-in framework didn't really solve that problem,
> >>> >>>> >> > > though. All it did was allow you to create your storage volume and
> >>> >>>> >> > > make it available to CloudStack. There was no way to get XenServer
> >>> >>>> >> > > or VMware to actually use it, because they didn't want an IQN or
> >>> >>>> >> > > whatever; they wanted an SR/datastore.
> >>> >>>> >> > >
> >>> >>>> >> > > That being the case, I had to write code to detect this "managed"
> >>> >>>> >> > > situation in XenServer and VMware (in the attach/detach methods).
> >>> >>>> >> > >
> >>> >>>> >> > > For example, for XenServer I had to first create an SR and a VDI
> >>> >>>> >> > > (which the SolidFire plug-in probably should have no knowledge
> >>> >>>> >> > > of). Then the attach logic would work.
> >>> >>>> >> > >
> >>> >>>> >> > > After a detach, I had to have XenServer "forget" the SR.
> >>> >>>> >> > >
> >>> >>>> >> > > What I was thinking is that the attach/detach methods aren't
> >>> >>>> >> > > really the right place for this special logic.
> >>> >>>> >> > >
> >>> >>>> >> > > We could have VolumeApiServiceImpl know about the special needs of
> >>> >>>> >> > > "managed" storage.
> >>> >>>> >> > >
> >>> >>>> >> > > If you want to do an attach of a volume and you're using managed
> >>> >>>> >> > > storage, VolumeApiServiceImpl could first tell the hypervisor to
> >>> >>>> >> > > create the necessary objects (XenServer = SR/VDI, VMware =
> >>> >>>> >> > > datastore/VMDK, KVM = just establish the connection to the iSCSI
> >>> >>>> >> > > target).
> >>> >>>> >> > >
> >>> >>>> >> > > It's true that there are commands like CreateObjectCommand in
> >>> >>>> >> > > XenServer and VMware now, but they are only used when the storage
> >>> >>>> >> > > is preallocated (not when the storage is managed by CloudStack).
> >>> >>>> >> > >
> >>> >>>> >> > > In the end, we kind of have two different storage models in
> >>> >>>> >> > > CloudStack: non-managed (storage preallocated by the admin) and
> >>> >>>> >> > > managed (the plug-in dynamically creates storage on a storage
> >>> >>>> >> > > system).
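> >>> >>>> >> > >
> >>> >>>> >> > > To make that concrete, the check I'm imagining in
> >>> >>>> >> > > VolumeApiServiceImpl would be something like this (hypothetical -
> >>> >>>> >> > > none of this exists today, and the helper names are made up):
> >>> >>>> >> > >
> >>> >>>> >> > >     // before sending the AttachCommand to the host:
> >>> >>>> >> > >     if (storagePool.isManaged()) {
> >>> >>>> >> > >         // have the hypervisor build the SR/VDI, datastore/VMDK,
> >>> >>>> >> > >         // or iSCSI connection first, then do the normal attach
> >>> >>>> >> > >         sendToHypervisor(hostId, new CreateObjectCommand(volumeTO));
> >>> >>>> >> > >     }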
> >>> >>> >>>> >> > >
> >>> >>> >>>> >> > >
> >>> >>> >>>> >> > > On Fri, Sep 27, 2013 at 6:46 PM, Marcus Sorensen <
> >>> >>> >>>> shadowsor@gmail.com
> >>> >>> >>>> >> > > >wrote:
> >>> >>> >>>> >> > >
> >>> >>> >>>> >> > > > On Fri, Sep 27, 2013 at 6:03 PM, Marcus Sorensen <
> >>> >>> >>>> >> shadowsor@gmail.com>
> >>> >>> >>>> >> > > > wrote:
> >>> >>> >>>> >> > > > > On Fri, Sep 27, 2013 at 5:16 PM, Mike Tutkowski
> >>> >>> >>>> >> > > > > <mike.tutkowski@solidfire.com> wrote:
> >>> >>>> >> > > > >> createAsync is just for creating the SAN (or whatever
> >>> >>>> >> > > > >> storage) volume. deleteAsync is the reverse.
> >>> >>>> >> > > > >
> >>> >>>> >> > > > > Exactly. It used to be that the hypervisor created the
> >>> >>>> >> > > > > disk/lun/file volume via createPhysicalDisk. Now it's done on
> >>> >>>> >> > > > > the SAN if the plugin supports it. Ideally, for external
> >>> >>>> >> > > > > plugins, only calls that are required for utilizing the
> >>> >>>> >> > > > > storage (or perhaps things like copy to NFS, where a server
> >>> >>>> >> > > > > needs to be involved if your SAN can't do it directly) will go
> >>> >>>> >> > > > > to the hypervisor.
> >>> >>>> >> > > > >
> >>> >>>> >> > > > > So the mgmt server creates the LUN on the SAN, then calls the
> >>> >>>> >> > > > > hypervisor to attach it to the host, so that a VM can make use
> >>> >>>> >> > > > > of it. createAsync is hypervisor agnostic; it just creates a
> >>> >>>> >> > > > > LUN, and then when you go to start up a VM or attach the
> >>> >>>> >> > > > > volume to one, it calls the hypervisor-specific code to make
> >>> >>>> >> > > > > it available.
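> >>> >>>> >> > > > >
> >>> >>>> >> > > > > A minimal sketch of the mgmt-server half, assuming the
> >>> >>>> >> > > > > 4.2-era driver callback shape (the sanApi call is a
> >>> >>>> >> > > > > placeholder for whatever your SAN exposes):
> >>> >>>> >> > > > >
> >>> >>>> >> > > > >     // runs on the mgmt server; no hypervisor involved yet
> >>> >>>> >> > > > >     public void createAsync(DataObject data,
> >>> >>>> >> > > > >             AsyncCompletionCallback<CreateCmdResult> callback) {
> >>> >>>> >> > > > >         String iqn = sanApi.createLun(data.getSize());
> >>> >>>> >> > > > >         // the IQN lands in the volume's 'path' for later connects
> >>> >>>> >> > > > >         callback.complete(new CreateCmdResult(iqn, null));
> >>> >>>> >> > > > >     }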
> >>> >>> >>>> >> > > >
> >>> >>>> >> > > > I think this is the same with the other hypervisors as well.
> >>> >>>> >> > > > For example, with Xen you create the LUN via your plugin, and
> >>> >>>> >> > > > then a call is made to Xen to register it as an SR. It's
> >>> >>>> >> > > > basically the same, only you're now coding the 'register' part
> >>> >>>> >> > > > yourself (and it's ephemeral; it's re-registered whenever it's
> >>> >>>> >> > > > used). You basically take over some of the API coding that
> >>> >>>> >> > > > already exists on the other platforms, which means you can do
> >>> >>>> >> > > > whatever you want instead of just what's supported, but you have
> >>> >>>> >> > > > to do the work as well.
> >>> >>> >>>> >> > > >
> >>> >>> >>>> >> > > > >
> >>> >>> >>>> >> > > > >>
> >>> >>>> >> > > > >> Technically even the default plug-in should not call into
> >>> >>>> >> > > > >> the hypervisor layer.
> >>> >>>> >> > > > >
> >>> >>>> >> > > > > There's no way you can create a local storage file to use as a
> >>> >>>> >> > > > > volume, or a CLVM volume, or other types of libvirt storage,
> >>> >>>> >> > > > > without calling a service that runs on the hypervisor. Those
> >>> >>>> >> > > > > things exist only on the hypervisor, and are controlled by the
> >>> >>>> >> > > > > hypervisor. For NFS, you could create a separate API for your
> >>> >>>> >> > > > > NFS server that creates qcow2 images on your NFS primary, I
> >>> >>>> >> > > > > suppose.
> >>> >>>> >> > > > >
> >>> >>>> >> > > > > One of the really nice things about KVM is that we can do
> >>> >>>> >> > > > > whatever a Linux box is capable of; it was one of the draws we
> >>> >>>> >> > > > > had to it. We wouldn't be able to do the storage we do with
> >>> >>>> >> > > > > Xen or VMware.
> >>> >>>> >> > > > >
> >>> >>>> >> > > > >> The storage layer should probably not be aware of the
> >>> >>>> >> > > > >> hypervisor layer.
> >>> >>>> >> > > > >
> >>> >>>> >> > > > > That's fine, but there's no reason why a storage plugin can't
> >>> >>>> >> > > > > talk to the agent that happens to be running on the hypervisor
> >>> >>>> >> > > > > for implementation, if that's what the plugin intends. I don't
> >>> >>>> >> > > > > see the distinction between utilizing the KVM agent as your
> >>> >>>> >> > > > > storage API, talking to a custom SAN API, or some other
> >>> >>>> >> > > > > concocted service. That's sort of the point of the plugin:
> >>> >>>> >> > > > > people can do whatever they want.
> >>> >>> >>>> >> > > > >
> >>> >>> >>>> >> > > > >>
> >>> >>> >>>> >> > > > >> On Fri, Sep 27, 2013 at 5:14 PM, Mike Tutkowski <
> >>> >>> >>>> >> > > > >> mike.tutkowski@solidfire.com> wrote:
> >>> >>> >>>> >> > > > >>
> >>> >>>> >> > > > >>> Well, from what I saw with XenServer and VMware, that
> >>> >>>> >> > > > >>> hypervisor logic's attachVolume command also assumed a
> >>> >>>> >> > > > >>> VDI/VMDK was created in advance.
> >>> >>>> >> > > > >>>
> >>> >>>> >> > > > >>> I had to put logic in those attachVolume methods to create
> >>> >>>> >> > > > >>> the SR/VDI or datastore/VMDK.
> >>> >>>> >> > > > >>>
> >>> >>>> >> > > > >>> However, thinking back on it, it might have made more sense
> >>> >>>> >> > > > >>> for the storage framework to detect if the storage in
> >>> >>>> >> > > > >>> question was managed and - before calling attach - call
> >>> >>>> >> > > > >>> create.
> >>> >>>> >> > > > >>>
> >>> >>>> >> > > > >>> If that logic were in place, I could have left
> >>> >>>> >> > > > >>> attach/detachVolume alone and implemented create and delete
> >>> >>>> >> > > > >>> in the hypervisor code to create my SR/VDI or
> >>> >>>> >> > > > >>> datastore/VMDK.
> >>> >>>> >> > > > >>>
> >>> >>>> >> > > > >>> That makes sense to me because it is CloudStack-managed
> >>> >>>> >> > > > >>> storage (so CloudStack is calling into the hypervisor to
> >>> >>>> >> > > > >>> create and delete these types of objects...it's managing
> >>> >>>> >> > > > >>> them).
> >>> >>> >>>> >> > > > >>>
> >>> >>> >>>> >> > > > >>>
> >>> >>> >>>> >> > > > >>> On Fri, Sep 27, 2013 at 5:00 PM, Marcus Sorensen
> <
> >>> >>> >>>> >> > > shadowsor@gmail.com
> >>> >>> >>>> >> > > > >wrote:
> >>> >>> >>>> >> > > > >>>
> >>> >>> >>>> >> > > > >>>> On Fri, Sep 27, 2013 at 4:22 PM, Mike Tutkowski
> >>> >>> >>>> >> > > > >>>> <mike.tutkowski@solidfire.com> wrote:
> >>> >>>> >> > > > >>>> > Sure, sounds good - let me know when it's up on Review
> >>> >>>> >> > > > >>>> > Board and I can take a look.
> >>> >>>> >> > > > >>>> >
> >>> >>>> >> > > > >>>> > I made most of the changes you and I talked about:
> >>> >>>> >> > > > >>>> >
> >>> >>>> >> > > > >>>> > https://github.com/mike-tutkowski/incubator-cloudstack/commit/eb9b2edfc9062f9ca7961fecd5379b180ca3aed1
> >>> >>>> >> > > > >>>> >
> >>> >>>> >> > > > >>>> > I have a new idea, though, that I think will simplify
> >>> >>>> >> > > > >>>> > this:
> >>> >>>> >> > > > >>>> >
> >>> >>>> >> > > > >>>> > The main "weirdness" right now is that when attachVolume
> >>> >>>> >> > > > >>>> > is called, the original logic assumed createVolume had
> >>> >>>> >> > > > >>>> > been called already.
> >>> >>>> >> > > > >>>> >
> >>> >>>> >> > > > >>>> > In my case, this doesn't apply, so we had to place extra
> >>> >>>> >> > > > >>>> > logic in attachVolume to essentially "create" a volume.
> >>> >>>> >> > > > >>>> > We decided to make a connect method, which establishes
> >>> >>>> >> > > > >>>> > the iSCSI connection and creates a KVMPhysicalDisk that
> >>> >>>> >> > > > >>>> > can be returned when attachVolume calls getPhysicalDisk.
> >>> >>>> >> > > > >>>> >
> >>> >>>> >> > > > >>>> > The "normal" place where you'd create a KVMPhysicalDisk,
> >>> >>>> >> > > > >>>> > however, would be in the createVolume method. Since I
> >>> >>>> >> > > > >>>> > don't currently "create" a volume, my only chance to note
> >>> >>>> >> > > > >>>> > the size of the volume is in the connect method.
> >>> >>> >>>> >> > > > >>>>
> >>> >>>> >> > > > >>>> I don't think createVolume applies to plugins. My
> >>> >>>> >> > > > >>>> impression was that createAsync is called on the mgmt
> >>> >>>> >> > > > >>>> server side. If createVolume IS being called, that's
> >>> >>>> >> > > > >>>> weird. The idea here is that the mgmt server creates the
> >>> >>>> >> > > > >>>> LUN, and then on the KVM side attach is called (or
> >>> >>>> >> > > > >>>> StartCommand if it's a root volume and the VM is being
> >>> >>>> >> > > > >>>> started), and it assumes that the LUN is already there, so
> >>> >>>> >> > > > >>>> we call connectPhysicalDisk to attach it to the KVM host.
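> >>> >>>> >> > > > >>>>
> >>> >>>> >> > > > >>>> The attach path then looks roughly like this (a sketch;
> >>> >>>> >> > > > >>>> method names approximate):
> >>> >>>> >> > > > >>>>
> >>> >>>> >> > > > >>>>     // KVM side, attach or StartCommand; the LUN already exists
> >>> >>>> >> > > > >>>>     KVMStoragePool pool = storagePoolMgr.getStoragePool(poolUuid);
> >>> >>>> >> > > > >>>>     adaptor.connectPhysicalDisk(volPath, pool); // iscsiadm login
> >>> >>>> >> > > > >>>>     KVMPhysicalDisk disk = pool.getPhysicalDisk(volPath);
> >>> >>>> >> > > > >>>>     // build the <disk> XML from disk.getPath() and attach it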
> >>> >>> >>>> >> > > > >>>>
> >>> >>> >>>> >> > > > >>>> >
> >>> >>>> >> > > > >>>> > It ends up being kind of weird to pass a size into
> >>> >>>> >> > > > >>>> > the connect method, as you've noted.
> >>> >>>> >> > > > >>>> >
> >>> >>>> >> > > > >>>> > What if we essentially left the attachVolume and
> >>> >>>> >> > > > >>>> > detachVolume methods alone (as in how they were before my
> >>> >>>> >> > > > >>>> > changes)? We could have VolumeApiServiceImpl, before
> >>> >>>> >> > > > >>>> > sending the AttachCommand, detect if the storage in
> >>> >>>> >> > > > >>>> > question is managed. If it is, VolumeApiServiceImpl could
> >>> >>>> >> > > > >>>> > send a CreateObjectCommand. I would then implement
> >>> >>>> >> > > > >>>> > createPhysicalDisk to connect my iSCSI target and create
> >>> >>>> >> > > > >>>> > a KVMPhysicalDisk.
> >>> >>>> >> > > > >>>> >
> >>> >>>> >> > > > >>>> > On the reverse side, VolumeApiServiceImpl, after sending
> >>> >>>> >> > > > >>>> > the DetachCommand, could detect if the storage in
> >>> >>>> >> > > > >>>> > question is managed. If it is, VolumeApiServiceImpl could
> >>> >>>> >> > > > >>>> > send a DeleteCommand. I would then implement the
> >>> >>>> >> > > > >>>> > deletePhysicalDisk method to disconnect my iSCSI session.
> >>> >>>> >> > > > >>>> >
> >>> >>> >>>> >> > > > >>>> > What do you think?
> >>> >>> >>>> >> > > > >>>>
> >>> >>>> >> > > > >>>> Maybe I'm just confused, but I thought the create and
> >>> >>>> >> > > > >>>> delete on the KVM side only apply to the default storage
> >>> >>>> >> > > > >>>> plugin, which has to pass everything on to the agent. I
> >>> >>>> >> > > > >>>> thought the creation/deletion of LUNs occurred via
> >>> >>>> >> > > > >>>> createAsync and deleteAsync in your plugin.
> >>> >>> >>>> >> > > > >>>>
> >>> >>> >>>> >> > > > >>>> >
> >>> >>> >>>> >> > > > >>>> >
> >>> >>> >>>> >> > > > >>>> > On Fri, Sep 27, 2013 at 3:21 PM, Marcus
> >>> Sorensen <
> >>> >>> >>>> >> > > > shadowsor@gmail.com
> >>> >>> >>>> >> > > > >>>> >wrote:
> >>> >>> >>>> >> > > > >>>> >
> >>> >>>> >> > > > >>>> >> Ok, I've got our plugin working against 4.2. Tested
> >>> >>>> >> > > > >>>> >> start VM, stop VM, migrate VM, attach volume, detach
> >>> >>>> >> > > > >>>> >> volume. Other functions that we already had in our
> >>> >>>> >> > > > >>>> >> StorageAdaptor implementation, such as copying templates
> >>> >>>> >> > > > >>>> >> to primary storage, just worked without any modification
> >>> >>>> >> > > > >>>> >> from our 4.1 version.
> >>> >>> >>>> >> > > > >>>> >>
> >>> >>>> >> > > > >>>> >> I'll post a patch to Review Board with the applicable
> >>> >>>> >> > > > >>>> >> changes. I was correct that attachVolume and
> >>> >>>> >> > > > >>>> >> dettachVolume only apply to adding/removing disks from
> >>> >>>> >> > > > >>>> >> running VMs, so there were some more changes to
> >>> >>>> >> > > > >>>> >> LibvirtComputingResource. I don't intend for this patch
> >>> >>>> >> > > > >>>> >> to be applied (for one, it's against 4.2), but I want
> >>> >>>> >> > > > >>>> >> you to take a look and see if it will work for you as
> >>> >>>> >> > > > >>>> >> well. If it does, then it's a good indicator that it
> >>> >>>> >> > > > >>>> >> should work for other plugins too, or if it needs to be
> >>> >>>> >> > > > >>>> >> tweaked we can work it out.
> >>> >>> >>>> >> > > > >>>> >>
> >>> >>>> >> > > > >>>> >> The gist is that we needed a connectPhysicalDisk call
> >>> >>>> >> > > > >>>> >> that can accept the pool/volume info (which we've
> >>> >>>> >> > > > >>>> >> discussed), but also a version of connectPhysicalDisk
> >>> >>>> >> > > > >>>> >> that can take a VM specification (VirtualMachineTO) and
> >>> >>>> >> > > > >>>> >> figure out which pools/disks are needed and attach them.
> >>> >>>> >> > > > >>>> >> I largely copied the code we had custom-inserted into
> >>> >>>> >> > > > >>>> >> our 4.1 branch and put it into KVMStoragePoolManager so
> >>> >>>> >> > > > >>>> >> that it will be adaptor agnostic.
> >>> >>> >>>> >> > > > >>>> >>
> >>> >>> >>>> >> > > > >>>> >> Same goes for disconnectPhysicalDisk.
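> >>> >>>> >> > > > >>>> >>
> >>> >>>> >> > > > >>>> >> Roughly, the additions look like this (signatures
> >>> >>>> >> > > > >>>> >> approximate until the patch is up on Review Board):
> >>> >>>> >> > > > >>>> >>
> >>> >>>> >> > > > >>>> >>     // StorageAdaptor, per-volume:
> >>> >>>> >> > > > >>>> >>     boolean connectPhysicalDisk(String volPath, KVMStoragePool pool);
> >>> >>>> >> > > > >>>> >>     boolean disconnectPhysicalDisk(String volPath, KVMStoragePool pool);
> >>> >>>> >> > > > >>>> >>
> >>> >>>> >> > > > >>>> >>     // KVMStoragePoolManager, adaptor-agnostic VM-wide version:
> >>> >>>> >> > > > >>>> >>     boolean connectPhysicalDisksViaVmSpec(VirtualMachineTO vmSpec);
> >>> >>>> >> > > > >>>> >>     boolean disconnectPhysicalDisksViaVmSpec(VirtualMachineTO vmSpec);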
> >>> >>> >>>> >> > > > >>>> >>
> >>> >>>> >> > > > >>>> >> We also needed to pass the VirtualMachineTO in a few
> >>> >>>> >> > > > >>>> >> other places like MigrateCommand and StopCommand; it's
> >>> >>>> >> > > > >>>> >> otherwise hard to know which storage adaptors we need to
> >>> >>>> >> > > > >>>> >> deal with when all we have is a VM name or something
> >>> >>>> >> > > > >>>> >> like that.
> >>> >>> >>>> >> > > > >>>> >>
> >>> >>> >>>> >> > > > >>>> >> On Fri, Sep 27, 2013 at 12:56 AM, Mike
> >>> Tutkowski
> >>> >>> >>>> >> > > > >>>> >> <mike.tutkowski@solidfire.com> wrote:
> >>> >>> >>>> >> > > > >>>> >> > Thanks for the clarification on how that
> >>> works.
> >>> >>> >>>> >> > > > >>>> >> >
> >>> >>>> >> > > > >>>> >> > Also, yeah, I think CHAP only grants you access to
> >>> >>>> >> > > > >>>> >> > a volume. If multiple hosts are using the CHAP
> >>> >>>> >> > > > >>>> >> > credentials for a single volume, it's up to those
> >>> >>>> >> > > > >>>> >> > hosts to make sure they don't step on each other's
> >>> >>>> >> > > > >>>> >> > toes (and this is - to my understanding - how it works
> >>> >>>> >> > > > >>>> >> > with XenServer and VMware).
> >>> >>> >>>> >> > > > >>>> >> >
> >>> >>> >>>> >> > > > >>>> >> >
> >>> >>> >>>> >> > > > >>>> >> > On Fri, Sep 27, 2013 at 12:45 AM, Marcus
> >>> Sorensen
> >>> >>> <
> >>> >>> >>>> >> > > > >>>> shadowsor@gmail.com
> >>> >>> >>>> >> > > > >>>> >> >wrote:
> >>> >>> >>>> >> > > > >>>> >> >
> >>> >>> >>>> >> > > > >>>> >> >> On Fri, Sep 27, 2013 at 12:21 AM, Mike
> >>> Tutkowski
> >>> >>> >>>> >> > > > >>>> >> >> <mike.tutkowski@solidfire.com> wrote:
> >>> >>>> >> > > > >>>> >> >> > Maybe I should seek a little clarification as
> >>> >>>> >> > > > >>>> >> >> > to how live migration works in CS with KVM.
> >>> >>>> >> > > > >>>> >> >> >
> >>> >>>> >> > > > >>>> >> >> > Before we do a live migration of VM 1 from Host 1
> >>> >>>> >> > > > >>>> >> >> > to Host 2, do we detach all disks from VM 1?
> >>> >>> >>>> >> > > > >>>> >> >> >
> >>> >>> >>>> >> > > > >>>> >> >> > If so, then we're good to go there.
> >>> >>> >>>> >> > > > >>>> >> >> >
> >>> >>> >>>> >> > > > >>>> >> >> > I'm not as clear with HA.
> >>> >>> >>>> >> > > > >>>> >> >>
> >>> >>>> >> > > > >>>> >> >> During live migration, this is what we currently
> >>> >>>> >> > > > >>>> >> >> do in our modified 4.1. I'm not sure if the new
> >>> >>>> >> > > > >>>> >> >> framework is set up for this, but it should be made
> >>> >>>> >> > > > >>>> >> >> to do this if not.
> >>> >>>> >> > > > >>>> >> >>
> >>> >>>> >> > > > >>>> >> >> PrepareForMigrationCommand is called on the
> >>> >>>> >> > > > >>>> >> >> destination host. In PrepareForMigrationCommand we
> >>> >>>> >> > > > >>>> >> >> added a few lines to call connectPhysicalDisk. This
> >>> >>>> >> > > > >>>> >> >> connects the SAN disks to the new host, which then
> >>> >>>> >> > > > >>>> >> >> creates a paused VM.
> >>> >>> >>>> >> > > > >>>> >> >>
> >>> >>>> >> > > > >>>> >> >> MigrateCommand is called on the source host. This
> >>> >>>> >> > > > >>>> >> >> sends the proper command to transfer VM memory, then
> >>> >>>> >> > > > >>>> >> >> atomically cuts over to the destination host. During
> >>> >>>> >> > > > >>>> >> >> this time, the disks are attached on both sides, but
> >>> >>>> >> > > > >>>> >> >> the VM is still the only thing that is using them,
> >>> >>>> >> > > > >>>> >> >> and it atomically cuts over. There's no caching on
> >>> >>>> >> > > > >>>> >> >> the host (qemu is using directio), so this is safe.
> >>> >>> >>>> >> > > > >>>> >> >>
> >>> >>>> >> > > > >>>> >> >> After MigrateCommand completes its VM handoff, we
> >>> >>>> >> > > > >>>> >> >> detach the disks before returning.
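> >>> >>>> >> > > > >>>> >> >>
> >>> >>>> >> > > > >>>> >> >> The "few lines" in PrepareForMigrationCommand amount
> >>> >>>> >> > > > >>>> >> >> to something like this (a sketch, using the VM-wide
> >>> >>>> >> > > > >>>> >> >> connect discussed earlier in this thread):
> >>> >>>> >> > > > >>>> >> >>
> >>> >>>> >> > > > >>>> >> >>     // destination host, before defining the paused VM
> >>> >>>> >> > > > >>>> >> >>     VirtualMachineTO vm = cmd.getVirtualMachine();
> >>> >>>> >> > > > >>>> >> >>     if (!storagePoolMgr.connectPhysicalDisksViaVmSpec(vm)) {
> >>> >>>> >> > > > >>>> >> >>         return new PrepareForMigrationAnswer(cmd,
> >>> >>>> >> > > > >>>> >> >>                 "failed to connect physical disks to host");
> >>> >>>> >> > > > >>>> >> >>     }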
> >>> >>> >>>> >> > > > >>>> >> >>
> >>> >>> >>>> >> > > > >>>> >> >> >
> >>> >>>> >> > > > >>>> >> >> > If VM 1 goes down because Host 1 crashes, is the
> >>> >>>> >> > > > >>>> >> >> > attach-volume command invoked as many times as
> >>> >>>> >> > > > >>>> >> >> > need be (depending on how many volumes need to be
> >>> >>>> >> > > > >>>> >> >> > attached) when VM 1 is restarted on Host 2?
> >>> >>> >>>> >> > > > >>>> >> >>
> >>> >>>> >> > > > >>>> >> >> From what I can tell, attachVolume and
> >>> >>>> >> > > > >>>> >> >> dettachVolume seemed to only be for attaching disks
> >>> >>>> >> > > > >>>> >> >> to existing, running VMs (i.e. inserting new XML into
> >>> >>>> >> > > > >>>> >> >> an existing domain definition). Normally, when
> >>> >>>> >> > > > >>>> >> >> starting a VM from scratch, the VM definition, along
> >>> >>>> >> > > > >>>> >> >> with any currently attached disks, is passed in to
> >>> >>>> >> > > > >>>> >> >> StartCommand (which would also be called during HA
> >>> >>>> >> > > > >>>> >> >> restart of a VM). In our 4.1 branch we also have a
> >>> >>>> >> > > > >>>> >> >> call to connectPhysicalDisk here, where we loop
> >>> >>>> >> > > > >>>> >> >> through the disk definitions that were passed.
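> >>> >>>> >> > > > >>>> >> >>
> >>> >>>> >> > > > >>>> >> >> That loop is short - a sketch with 4.2-era types
> >>> >>>> >> > > > >>>> >> >> (approximate):
> >>> >>>> >> > > > >>>> >> >>
> >>> >>>> >> > > > >>>> >> >>     // StartCommand handling, before the domain is defined
> >>> >>>> >> > > > >>>> >> >>     for (DiskTO disk : vmSpec.getDisks()) {
> >>> >>>> >> > > > >>>> >> >>         if (disk.getType() != Volume.Type.ISO) {
> >>> >>>> >> > > > >>>> >> >>             VolumeObjectTO vol = (VolumeObjectTO) disk.getData();
> >>> >>>> >> > > > >>>> >> >>             adaptor.connectPhysicalDisk(vol.getPath(), pool);
> >>> >>>> >> > > > >>>> >> >>         }
> >>> >>>> >> > > > >>>> >> >>     }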
> >>> >>> >>>> >> > > > >>>> >> >>
> >>> >>>> >> > > > >>>> >> >> Again, I should be able to flesh out the
> >>> >>>> >> > > > >>>> >> >> differences in 4.2 and how to go about making this
> >>> >>>> >> > > > >>>> >> >> suitable for everyone in the coming days, so long as
> >>> >>>> >> > > > >>>> >> >> you and anyone else writing plugins agree with the
> >>> >>>> >> > > > >>>> >> >> changes.
> >>> >>> >>>> >> > > > >>>> >> >>
> >>> >>>> >> > > > >>>> >> >> These processes would make sure the disks are
> >>> >>>> >> > > > >>>> >> >> available on the hosts they need to be, but they
> >>> >>>> >> > > > >>>> >> >> don't really provide locking or ensure that only the
> >>> >>>> >> > > > >>>> >> >> necessary hosts can write to or see the disks at any
> >>> >>>> >> > > > >>>> >> >> given time. I don't think CHAP does that either. We
> >>> >>>> >> > > > >>>> >> >> currently generate ACLs via our SAN API during
> >>> >>>> >> > > > >>>> >> >> connectPhysicalDisk as a safety measure, but if
> >>> >>>> >> > > > >>>> >> >> CloudStack is working properly it will be in charge
> >>> >>>> >> > > > >>>> >> >> of controlling that the disks are only being used
> >>> >>>> >> > > > >>>> >> >> where they should be. The ACLs just ensure that if
> >>> >>>> >> > > > >>>> >> >> the VM somehow gets started in two different places
> >>> >>>> >> > > > >>>> >> >> (e.g. HA malfunction), only one of them will have
> >>> >>>> >> > > > >>>> >> >> access to the disks.
> >>> >>> >>>> >> > > > >>>> >> >>
> >>> >>> >>>> >> > > > >>>> >> >> >
> >>> >>> >>>> >> > > > >>>> >> >> >
> >>> >>> >>>> >> > > > >>>> >> >> > On Fri, Sep 27, 2013 at 12:17 AM, Mike
> >>> >>> Tutkowski <
> >>> >>> >>>> >> > > > >>>> >> >> > mike.tutkowski@solidfire.com> wrote:
> >>> >>> >>>> >> > > > >>>> >> >> >
> >>> >>> >>>> >> > > > >>>> >> >> >> Let me clarify this line a bit:
> >>> >>> >>>> >> > > > >>>> >> >> >>
> >>> >>>> >> > > > >>>> >> >> >> "We get away without this with XenServer and
> >>> >>>> >> > > > >>>> >> >> >> VMware because - as far as I know - CS delegates
> >>> >>>> >> > > > >>>> >> >> >> HA and live migration to those clusters and they
> >>> >>>> >> > > > >>>> >> >> >> handle it most likely with some kind of locking
> >>> >>>> >> > > > >>>> >> >> >> protocol on the SR/datastore."
> >>> >>> >>>> >> > > > >>>> >> >> >>
> >>> >>>> >> > > > >>>> >> >> >> When I set up a XenServer or a VMware cluster,
> >>> >>>> >> > > > >>>> >> >> >> all nodes in the cluster have the proper CHAP
> >>> >>>> >> > > > >>>> >> >> >> credentials and can access a shared SR/datastore.
> >>> >>>> >> > > > >>>> >> >> >>
> >>> >>>> >> > > > >>>> >> >> >> HA and live migrations are OK here because the
> >>> >>>> >> > > > >>>> >> >> >> cluster controls access to the VDI on the SR (or
> >>> >>>> >> > > > >>>> >> >> >> the VMDK on the datastore) with some kind of
> >>> >>>> >> > > > >>>> >> >> >> locking protocol, I expect.
> >>> >>>> >> > > > >>>> >> >> >>
> >>> >>>> >> > > > >>>> >> >> >> Since KVM isn't really in a cluster outside of
> >>> >>>> >> > > > >>>> >> >> >> the CloudStack world, CloudStack has to handle
> >>> >>>> >> > > > >>>> >> >> >> these intricacies.
> >>> >>>> >> > > > >>>> >> >> >>
> >>> >>>> >> > > > >>>> >> >> >> In my case, I'm just presenting a raw disk to a
> >>> >>>> >> > > > >>>> >> >> >> VM on a KVM host.
> >>> >>>> >> > > > >>>> >> >> >>
> >>> >>>> >> > > > >>>> >> >> >> In that case, HA and live migration depend on the
> >>> >>>> >> > > > >>>> >> >> >> storage plug-in being able to grant and revoke
> >>> >>>> >> > > > >>>> >> >> >> access to the volume for hosts as needed.
> >>> >>>> >> > > > >>>> >> >> >>
> >>> >>>> >> > > > >>>> >> >> >> I'd actually rather not even bother with CHAP
> >>> >>>> >> > > > >>>> >> >> >> when using KVM.
> >>> >>> >>>> >> > > > >>>> >> >> >>
> >>> >>> >>>> >> > > > >>>> >> >> >>
> >>> >>> >>>> >> > > > >>>> >> >> >> On Fri, Sep 27, 2013 at 12:06 AM, Mike
> >>> >>> Tutkowski <
> >>> >>> >>>> >> > > > >>>> >> >> >> mike.tutkowski@solidfire.com> wrote:
> >>> >>> >>>> >> > > > >>>> >> >> >>
> >>> >>> >>>> >> > > > >>>> >> >> >>> Hey Marcus,
> >>> >>> >>>> >> > > > >>>> >> >> >>>
> >>> >>>> >> > > > >>>> >> >> >>> I agree that CHAP does not fulfill the same
> >>> >>>> >> > > > >>>> >> >> >>> role as fencing.
> >>> >>>> >> > > > >>>> >> >> >>>
> >>> >>>> >> > > > >>>> >> >> >>> I think we're going to have trouble with HA and
> >>> >>>> >> > > > >>>> >> >> >>> live migrations on KVM if the storage plug-in
> >>> >>>> >> > > > >>>> >> >> >>> doesn't have a way of knowing when a host wants
> >>> >>>> >> > > > >>>> >> >> >>> to access a volume and when we want to revoke
> >>> >>>> >> > > > >>>> >> >> >>> access to that volume.
> >>> >>> >>>> >> > > > >>>> >> >> >>>
> >>> >>>> >> > > > >>>> >> >> >>> We get away without this with XenServer and
> >>> >>>> >> > > > >>>> >> >> >>> VMware because - as far as I know - CS delegates
> >>> >>>> >> > > > >>>> >> >> >>> HA and live migration to those clusters and they
> >>> >>>> >> > > > >>>> >> >> >>> handle it most likely with some kind of locking
> >>> >>>> >> > > > >>>> >> >> >>> protocol on the SR/datastore.
> >>> >>> >>>> >> > > > >>>> >> >> >>>
> >>> >>>> >> > > > >>>> >> >> >>> As far as the path field is concerned, I
> >>> >>>> >> > > > >>>> >> >> >>> should be able to populate it with the IQN of
> >>> >>>> >> > > > >>>> >> >> >>> the volume in question.
> >>> >>>> >> > > > >>>> >> >> >>>
> >>> >>>> >> > > > >>>> >> >> >>> One problem I do see, however, is in the
> >>> >>>> >> > > > >>>> >> >> >>> getPhysicalDisk method.
> >>> >>>> >> > > > >>>> >> >> >>>
> >>> >>>> >> > > > >>>> >> >> >>> How are you envisioning I keep track of the
> >>> >>>> >> > > > >>>> >> >> >>> KVMPhysicalDisks that I create in my connect
> >>> >>>> >> > > > >>>> >> >> >>> method?
> >>> >>> >>>> >> > > > >>>> >> >> >>>
> >>> >>>> >> > > > >>>> >> >> >>> Initially I was thinking I'd just keep them in
> >>> >>>> >> > > > >>>> >> >> >>> a map: storage pool UUID to KVMPhysicalDisks.
> >>> >>>> >> > > > >>>> >> >> >>>
> >>> >>>> >> > > > >>>> >> >> >>> The problem is, how do I reconstruct that map if
> >>> >>>> >> > > > >>>> >> >> >>> the agent is restarted (say the host crashes or
> >>> >>>> >> > > > >>>> >> >> >>> is restarted)?
> >>> >>>> >> > > > >>>> >> >> >>>
> >>> >>>> >> > > > >>>> >> >> >>> For storage pools, we get a message
> >>> >>>> >> > > > >>>> >> >> >>> (ModifyStoragePoolCommand) from the CS MS to
> >>> >>>> >> > > > >>>> >> >> >>> tell us about all of the relevant storage pools.
> >>> >>>> >> > > > >>>> >> >> >>> With this message, I can reconstruct my cache of
> >>> >>>> >> > > > >>>> >> >> >>> storage pools. No problem there.
> >>> >>>> >> > > > >>>> >> >> >>>
> >>> >>>> >> > > > >>>> >> >> >>> But how will I know which volumes belong to a
> >>> >>>> >> > > > >>>> >> >> >>> given storage pool if I have to rebuild that map?
> >>> >>>> >> > > > >>>> >> >> >>> How will I even know which volumes are in use at
> >>> >>>> >> > > > >>>> >> >> >>> all?
> >>> >>> >>>> >> > > > >>>> >> >> >>>
> >>> >>> >>>> >> > > > >>>> >> >> >>> Thanks
> >>> >>> >>>> >> > > > >>>> >> >> >>>
> >>> >>> >>>> >> > > > >>>> >> >> >>>
> >>> >>> >>>> >> > > > >>>> >> >> >>> On Thu, Sep 26, 2013 at 11:37 PM,
> Marcus
> >>> >>> >>>> Sorensen <
> >>> >>> >>>> >> > > > >>>> >> shadowsor@gmail.com
> >>> >>> >>>> >> > > > >>>> >> >> >wrote:
> >>> >>> >>>> >> > > > >>>> >> >> >>>
> >>> >>> >>>> >> > > > >>>> >> >> >>>> On Thu, Sep 26, 2013 at 10:23 PM,
> Mike
> >>> >>> Tutkowski
> >>> >>> >>>> >> > > > >>>> >> >> >>>> <mike.tutkowski@solidfire.com>
> wrote:
> >>> >>> >>>> >> > > > >>>> >> >> >>>> > My comments are inline:
> >>> >>> >>>> >> > > > >>>> >> >> >>>> >
> >>> >>> >>>> >> > > > >>>> >> >> >>>> >
> >>> >>> >>>> >> > > > >>>> >> >> >>>> > On Thu, Sep 26, 2013 at 9:10 PM,
> >>> Marcus
> >>> >>> >>>> Sorensen <
> >>> >>> >>>> >> > > > >>>> >> >> shadowsor@gmail.com
> >>> >>> >>>> >> > > > >>>> >> >> >>>> >wrote:
> >>> >>> >>>> >> > > > >>>> >> >> >>>> >
> >>> >>> >>>> >> > > > >>>> >> >> >>>> >> Ok, let me digest this a bit. I
> got
> >>> the
> >>> >>> >>>> github
> >>> >>> >>>> >> > > responses
> >>> >>> >>>> >> > > > >>>> but I'd
> >>> >>> >>>> >> > > > >>>> >> >> also
> >>> >>> >>>> >> > > > >>>> >> >> >>>> >> like to keep it on-list as well.
> >>> >>> >>>> >> > > > >>>> >> >> >>>> >>
> >>> >>> >>>> >> > > > >>>> >> >> >>>> >>  My initial thoughts are:
> >>> >>> >>>> >> > > > >>>> >> >> >>>> >>
> >>> >>>> >> > > > >>>> >> >> >>>> >> 1) I don't think disk format and size are
> >>> >>>> >> > > > >>>> >> >> >>>> >> necessary parameters for
> >>> >>>> >> > > > >>>> >> >> >>>> >> connectPhysicalDisk, as the format can be
> >>> >>>> >> > > > >>>> >> >> >>>> >> determined by the adaptor, and the size is
> >>> >>>> >> > > > >>>> >> >> >>>> >> set during the createAsync call in the
> >>> >>>> >> > > > >>>> >> >> >>>> >> plugin. We really just need the disk path and
> >>> >>>> >> > > > >>>> >> >> >>>> >> the pool.
> >>> >>>> >> > > > >>>> >> >> >>>> >>
> >>> >>> >>>> >> > > > >>>> >> >> >>>> >>
> >>> >>>> >> > > > >>>> >> >> >>>> > [Mike T.] I agree, format is not needed. The
> >>> >>>> >> > > > >>>> >> >> >>>> > only reason I have size passed in is because I
> >>> >>>> >> > > > >>>> >> >> >>>> > need to create a KVMPhysicalDisk at the end of
> >>> >>>> >> > > > >>>> >> >> >>>> > the connect method. Since this KVMPhysicalDisk
> >>> >>>> >> > > > >>>> >> >> >>>> > is (in the code on GitHub) being used to
> >>> >>>> >> > > > >>>> >> >> >>>> > create our XML to attach the disk, I figured
> >>> >>>> >> > > > >>>> >> >> >>>> > we'd need that size. The KVMPhysicalDisk I
> >>> >>>> >> > > > >>>> >> >> >>>> > produce from my implementation of
> >>> >>>> >> > > > >>>> >> >> >>>> > getPhysicalDisk is not as good because I don't
> >>> >>>> >> > > > >>>> >> >> >>>> > know the size of the disk at that point (I
> >>> >>>> >> > > > >>>> >> >> >>>> > don't keep that information around). The
> >>> >>>> >> > > > >>>> >> >> >>>> > reason I don't keep that info around is
> >>> >>>> >> > > > >>>> >> >> >>>> > because I don't have a good way to reproduce
> >>> >>>> >> > > > >>>> >> >> >>>> > it if the KVM host is rebooted. We get info
> >>> >>>> >> > > > >>>> >> >> >>>> > about storage pools in the form of a
> >>> >>>> >> > > > >>>> >> >> >>>> > ModifyStoragePoolCommand, but nothing about
> >>> >>>> >> > > > >>>> >> >> >>>> > the volumes inside of the storage pool.
> >>> >>> >>>> >> > > > >>>> >> >> >>>>
> >>> >>>> >> > > > >>>> >> >> >>>> getPhysicalDisk is called in a bunch of
> >>> >>>> >> > > > >>>> >> >> >>>> places. I'd rely on connectPhysicalDisk to only
> >>> >>>> >> > > > >>>> >> >> >>>> make the block device appear on the host, and
> >>> >>>> >> > > > >>>> >> >> >>>> then on getPhysicalDisk to find that block
> >>> >>>> >> > > > >>>> >> >> >>>> device and fill out things like disk size and
> >>> >>>> >> > > > >>>> >> >> >>>> path (the real path to the local block device)
> >>> >>>> >> > > > >>>> >> >> >>>> for passing on and creating the disk XML. Trust
> >>> >>>> >> > > > >>>> >> >> >>>> me, unless things have changed significantly,
> >>> >>>> >> > > > >>>> >> >> >>>> you need to be able to identify a given device
> >>> >>>> >> > > > >>>> >> >> >>>> as a specific local disk by whatever you are
> >>> >>>> >> > > > >>>> >> >> >>>> setting the 'path' attribute to be.
> >>> >>>> >> > > > >>>> >> >> >>>> getPhysicalDisk will be called on your storage
> >>> >>>> >> > > > >>>> >> >> >>>> pool with simply the path attribute, and via
> >>> >>>> >> > > > >>>> >> >> >>>> your adaptor with the pool and path.
> >>> >>> >>>> >> > > > >>>> >> >> >>>>
> >>> >>> >>>> >> > > > >>>> >> >> >>>> So you may set path as some
> >>> combination of
> >>> >>> iqn
> >>> >>> >>>> and
> >>> >>> >>>> >> > > > target/pool
> >>> >>> >>>> >> > > > >>>> >> info,
> >>> >>> >>>> >> > > > >>>> >> >> >>>> or if iqn is enough to identify a
> >>> unique
> >>> >>> block
> >>> >>> >>>> >> device
> >>> >>> >>>> >> > (in
> >>> >>> >>>> >> > > > >>>> >> >> >>>> /dev/disk/by-id maybe?) on a host
> then
> >>> just
> >>> >>> use
> >>> >>> >>>> >> that.
> >>> >>> >>>> >> > > Path
> >>> >>> >>>> >> > > > just
> >>> >>> >>>> >> > > > >>>> >> needs
> >>> >>> >>>> >> > > > >>>> >> >> >>>> to be something, anything, to
> identify
> >>> the
> >>> >>> disk
> >>> >>> >>>> on
> >>> >>> >>>> >> the
> >>> >>> >>>> >> > > > host. In
> >>> >>> >>>> >> > > > >>>> >> >> >>>> getPhysicalDisk, identify the local
> >>> block
> >>> >>> device
> >>> >>> >>>> >> > matching
> >>> >>> >>>> >> > > > the
> >>> >>> >>>> >> > > > >>>> info,
> >>> >>> >>>> >> > > > >>>> >> >> >>>> create a new KVMPhysicalDisk with the
> >>> local
> >>> >>> >>>> path,
> >>> >>> >>>> >> size,
> >>> >>> >>>> >> > > > etc,
> >>> >>> >>>> >> > > > >>>> and
> >>> >>> >>>> >> > > > >>>> >> >> >>>> return it.
> >>> >>> >>>> >> > > > >>>> >> >> >>>>
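
A minimal sketch of the split described above: connectPhysicalDisk makes the
LUN visible on the host, and getPhysicalDisk only resolves it and fills in
size. The by-path device naming, the getSourceHost()/getSourcePort()
accessors, and the RAW-format choice are assumptions for illustration, not
the actual SolidFire adaptor code:

    import com.cloud.hypervisor.kvm.storage.KVMPhysicalDisk;
    import com.cloud.hypervisor.kvm.storage.KVMPhysicalDisk.PhysicalDiskFormat;
    import com.cloud.hypervisor.kvm.storage.KVMStoragePool;
    import com.cloud.utils.script.Script;

    // Assumes the volume's 'path' attribute stores the target IQN and that
    // connectPhysicalDisk has already logged the host into the target.
    public KVMPhysicalDisk getPhysicalDisk(String volumePath, KVMStoragePool pool) {
        // Resolve the IQN to the local block device created by the login
        // (open-iscsi by-path naming; host/port accessors are assumed here).
        String localPath = "/dev/disk/by-path/ip-" + pool.getSourceHost() + ":"
                + pool.getSourcePort() + "-iscsi-" + volumePath + "-lun-0";

        KVMPhysicalDisk disk = new KVMPhysicalDisk(localPath, volumePath, pool);
        disk.setFormat(PhysicalDiskFormat.RAW);

        // Size is read from the host rather than kept as agent state, so a
        // host reboot loses nothing.
        String size = Script.runSimpleBashScript("blockdev --getsize64 " + localPath);
        if (size != null) {
            disk.setSize(Long.parseLong(size.trim()));
            disk.setVirtualSize(Long.parseLong(size.trim()));
        }
        return disk;
    }
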
> > > 2) I thought this access group thing you mention refers to the
> > > grantAccess and revokeAccess calls in the storage plugin 2.0 design doc.
> > > Was that not implemented?
> >
> > [Mike T.] Yeah, as I mentioned in an e-mail way back, those methods were
> > never implemented in 4.2. I think you said you were going to get around
> > them not being implemented by keeping certain logic that talks to the SAN
> > in the agent. I don't think we want any SolidFire-specific code in the
> > agent, however, so I can't go that route. If those methods do not get
> > implemented in 4.3, then I will need to use CHAP credentials for KVM (just
> > like I did with XenServer and VMware).
>
> I initially figured your StorageAdaptor implementation would be all
> solidfire specific, just like the mgmt server side plugin is. If it can be
> generic to all iscsi storage then that's great. I agree that ideally the
> agent wouldn't be making API calls to your SAN. I don't think it should be
> necessary given that you're not going to use the ACL route. I'm not sure
> CHAP fills the same purpose of fencing.
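
For reference, a sketch of what the design doc's grantAccess/revokeAccess
calls could look like on the management-server side. Per the thread these
were never implemented in 4.2, so the interface and signatures below are
illustrative only:

    // The management server tells the storage plug-in which host may see
    // which volume; a SAN-backed plug-in could implement these by creating
    // and deleting a Volume Access Group keyed on the host's IQN.
    // Hypothetical interface; not present in the 4.2 code base.
    public interface VolumeAccessControl {
        boolean grantAccess(DataObject volume, Host host, DataStore dataStore);
        boolean revokeAccess(DataObject volume, Host host, DataStore dataStore);
    }
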
> > > I see you've added getters/setters for the attach cmd to pass the iscsi
> > > info you need. Would it perhaps be possible to send a details
> > > Map<String, String> instead? Then any plugin implementer could attach
> > > arbitrary data they need. So it might be
> > > connectPhysicalDisk(StoragePoolType type, String poolUuid, String
> > > volPath, Map<String, String> details)? I'll have to look and see where
> > > those cmd attributes are set; ideally it would be all the way back in
> > > the plugin to avoid custom code for every adaptor that wants to set
> > > details.
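
A sketch of the details-map variant proposed here; the map key names and the
adaptor-side signature are made up for illustration, not committed code:

    import java.util.Map;
    import com.cloud.storage.Storage.StoragePoolType;

    // KVMStoragePoolManager side (sketch): arbitrary per-plugin data rides
    // along in a details map instead of one-off getters/setters on the
    // command object.
    public boolean connectPhysicalDisk(StoragePoolType type, String poolUuid,
            String volPath, Map<String, String> details) {
        StorageAdaptor adaptor = getStorageAdaptor(type);
        KVMStoragePool pool = adaptor.getStoragePool(poolUuid);
        return adaptor.connectPhysicalDisk(volPath, pool, details);
    }

    // Adaptor side (sketch): read back whatever the plugin stuffed in.
    // The key names here are hypothetical, not an established convention.
    public boolean connectPhysicalDisk(String volPath, KVMStoragePool pool,
            Map<String, String> details) {
        String chapUser = details.get("chapInitiatorUsername");
        String chapSecret = details.get("chapInitiatorSecret");
        // ... perform the iscsiadm login to volPath with these credentials ...
        return true;
    }
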
> > [Mike T.] If I'm not using the volumes.path field for anything, I can
> > stick the IQN in volumes.path (as well as leaving it in
> > volumes.iscsi_name, which is used elsewhere). That way we only have to ask
> > for getPath().
>
> Yeah, whatever it is that you'd need to find the right block device should
> go in the path. If you look through LibvirtComputingResource you'll see
> stuff like this sprinkled around:
>
>     KVMPhysicalDisk volume = primaryPool.getPhysicalDisk(cmd.getVolumePath());
>
> or:
>
>     String volid = cmd.getPath();
>     KVMPhysicalDisk vol = pool.getPhysicalDisk(volid);
>
> or:
>
>     KVMPhysicalDisk physicalDisk = _storagePoolMgr.getPhysicalDisk(
>             store.getPoolType(), store.getUuid(), data.getPath());
>
> Maybe some of it is short-circuited by the new KVMStorageProcessor, but I'd
> still implement a working one, and then attachVolume can call
> getPhysicalDisk after connectPhysicalDisk, even on your adaptor/pool.
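
Putting the two calls together, the attach path would then be a connect
followed by a lookup. A sketch, reusing the names from the snippets quoted
above:

    // In KVMStorageProcessor.attachVolume (sketch): first make the device
    // appear on the host, then resolve it to build the disk XML.
    _storagePoolMgr.connectPhysicalDisk(store.getPoolType(), store.getUuid(),
            data.getPath());
    KVMPhysicalDisk physicalDisk = _storagePoolMgr.getPhysicalDisk(
            store.getPoolType(), store.getUuid(), data.getPath());
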
> > > On Thu, Sep 26, 2013 at 7:35 PM, Mike Tutkowski
> > > <mike.tutkowski@solidfire.com> wrote:
> > > > Also, if we went the non-CHAP route, before attaching a volume to a
> > > > VM, we'd have to tell the plug-in to set up a volume access group.
> > > >
> > > > When a volume is detached from a VM, we'd have to tell the plug-in to
> > > > delete the volume access group.
> > > >
> > > > On Thu, Sep 26, 2013 at 7:32 PM, Mike Tutkowski
> > > > <mike.tutkowski@solidfire.com> wrote:
> > > > > I mention this in my comments on GitHub, as well, but CHAP info is
> > > > > associated with an account - not a storage pool.
> > > > >
> > > > > Ideally we could do without CHAP info entirely if we had a reliable
> > > > > way to tell the storage plug-in that a given host wants to access a
> > > > > given volume. In this case, my storage plug-in could create what we
> > > > > call a Volume Access Group on the SAN. It would essentially say,
> > > > > "The host with IQN <x> can access the volume with IQN <y> without
> > > > > using CHAP credentials." Of course we'd need a way to revoke this
> > > > > privilege in the event of a live migration of a VM. Right now, I do
> > > > > not believe such a facility is supported with the storage plug-ins.
> > > > >
> > > > > On Thu, Sep 26, 2013 at 4:56 PM, Marcus Sorensen
> > > > > <shadowsor@gmail.com> wrote:
> > > > > > Looking at your code, is the chap info stored with the pool, so we
> > > > > > could pass the pool to the adaptor? That would be more agnostic,
> > > > > > anyone implementing a plugin could pull the specifics they need
> > > > > > for their stuff out of the pool on the adaptor side, rather than
> > > > > > creating custom signatures.
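
A sketch of that pool-based approach, assuming the pool object carries auth
fields (the getAuthUserName()/getAuthSecret() accessors below follow the
pattern RBD pools use and should be treated as assumptions here):

    // Adaptor side (sketch): pull plugin-specific settings out of the pool
    // rather than threading custom parameters through every signature.
    public boolean connectPhysicalDisk(String volPath, KVMStoragePool pool) {
        String chapUser = pool.getAuthUserName();   // assumed accessor
        String chapSecret = pool.getAuthSecret();   // assumed accessor
        // ... iscsiadm login to volPath using these CHAP credentials ...
        return true;
    }
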
> > > > > > Also, I think we may want to consider implementing
> > > > > > connect/disconnect as just dummy methods in LibvirtStorageAdaptor,
> > > > > > so we don't have to be picky about which adaptors/types in every
> > > > > > single place we may want to connect/disconnect (in 4.1 there were
> > > > > > several, I'm not sure if everything goes through this in 4.2). We
> > > > > > can just call adaptor.connectPhysicalDisk and the adaptor can
> > > > > > decide if it needs to do anything.
> > > > > >
> > > > > > Comments are attached to your commit, I just wanted to echo them
> > > > > > here on-list.
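
The dummy-method idea keeps call sites uniform across pool types. A minimal
sketch of what the stub might look like:

    // LibvirtStorageAdaptor (sketch): libvirt-managed pools need no
    // per-disk login, so connect is a no-op and callers can invoke it
    // unconditionally; only adaptors that need it (e.g. iSCSI) do work.
    public boolean connectPhysicalDisk(String volumeUuid, KVMStoragePool pool) {
        return true; // nothing to do; report success
    }
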
> > > > > > On Thu, Sep 26, 2013 at 4:32 PM, Mike Tutkowski
> > > > > > <mike.tutkowski@solidfire.com> wrote:
> > > > > > > Oh, SnapshotTestWithFakeData is just modified because the code
> > > > > > > wasn't building until I corrected this. It has nothing really to
> > > > > > > do with my real changes.
> > > > > > >
> > > > > > > On Thu, Sep 26, 2013 at 4:31 PM, Mike Tutkowski
> > > > > > > <mike.tutkowski@solidfire.com> wrote:
> > > > > > > > Hey Marcus,
> > > > > > > >
> > > > > > > > I implemented your recommendations regarding adding connect
> > > > > > > > and disconnect methods. It is not yet checked in (as you know,
> > > > > > > > having trouble with my KVM environment), but it is on GitHub
> > > > > > > > here:
> > > > > > > >
> > > > > > > > https://github.com/mike-tutkowski/incubator-cloudstack/commit/f2897c65689012e6157c0a0c2ed7e5355900c59a
> > > > > > > >
> > > > > > > > Please let me know if you have any more comments.
> > > > > > > >
> > > > > > > > Thanks!
> > > > > > > >
> > > > > > > > On Thu, Sep 26, 2013 at 4:05 PM, Marcus Sorensen
> > > > > > > > <shadowsor@gmail.com> wrote:
> > > > > > > > > Mike, everyone,
> > > > > > > > >
> > > > > > > > > As I've mentioned on the board, I'm working on getting our
> > > > > > > > > own internal KVM storage plugin working on 4.2. In the
> > > > > > > > > interest of making it forward compatible, I just wanted to
> > > > > > > > > confirm what you were doing with the solidfire plugin as far
> > > > > > > > > as attaching your iscsi LUNs. We had discussed a new
> > > > > > > > > connectPhysicalDisk method for the StorageAdaptor class,
> > > > > > > > > something perhaps like:
> > > > > > > > >
> > > > > > > > > public boolean connectPhysicalDisk(String volumeUuid,
> > > > > > > > >         KVMStoragePool pool);
> > > > > > > > >
> > > > > > > > > then added to KVMStoragePoolManager:
> > > > > > > > >
> > > > > > > > > public boolean connectPhysicalDisk(StoragePoolType type,
> > > > > > > > >         String poolUuid, String volPath) {
> > > > > > > > >     StorageAdaptor adaptor = getStorageAdaptor(type);
> > > > > > > > >     KVMStoragePool pool = adaptor.getStoragePool(poolUuid);
> > > > > > > > >     return adaptor.connectPhysicalDisk(volPath, pool);
> > > > > > > > > }
> > > > > > > > >
> > > > > > > > > Something similar to this for disconnect as well. Then in
> > > > > > > > > the KVMStorageProcessor these can be called as needed for
> > > > > > > > > attach/detach. We can probably stub out one in
> > > > > > > > > LibvirtStorageAdaptor so there's no need to switch or
> > > > > > > > > if/else for pool types, for example in
> > > > > > > > > KVMStorageProcessor.attachVolume.
> > > > > > > > >
> > > > > > > > > I have debated on whether or not it should just be rolled
> > > > > > > > > into getPhysicalDisk, having it connect the disk if it's not
> > > > > > > > > already connected. getPhysicalDisk is called a lot, and I'm
> > > > > > > > > not sure it always needs to connect the disk when it does.
> > > > > > > > > In past iterations getPhysicalDisk has simply spoken to our
> > > > > > > > > SAN api and returned the disk details, nothing more. So it
> > > > > > > > > seemed more flexible and granular to do the
> > > > > > > > > connectPhysicalDisk (we have one now in our 4.1 version).
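
For completeness, the disconnect counterpart alluded to above might mirror
the connect method (a sketch, not committed code):

    // KVMStoragePoolManager (sketch): route disconnect through the same
    // adaptor lookup used for connect.
    public boolean disconnectPhysicalDisk(StoragePoolType type,
            String poolUuid, String volPath) {
        StorageAdaptor adaptor = getStorageAdaptor(type);
        KVMStoragePool pool = adaptor.getStoragePool(poolUuid);
        return adaptor.disconnectPhysicalDisk(volPath, pool);
    }
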
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the
> cloud<http://solidfire.com/solution/overview/?video=play>
> *™*
>
