Subject: Re: add connect method on KVM storage code
From: Marcus Sorensen <shadowsor@gmail.com>
To: dev@cloudstack.apache.org
Date: Fri, 27 Sep 2013 19:45:29 -0600

Yeah, OK. That's kind of what I'm talking about. With KVM we have storage
adaptors, so they can just pass the VM info to the hypervisor and it can
select the right way to handle the storage on the fly, which is a bit
easier, even though ultimately you still have to write the hypervisor-side
code.

On Sep 27, 2013 7:28 PM, "Mike Tutkowski" wrote:

> Yeah, we should bring John Burwell into this conversation. He had a strong
> opposition to storage plug-ins talking to hypervisors. Perhaps he was just
> referring to shared storage, though (not local disks).
>
> Anyways, prior to 4.2 for XenServer and VMware, you had to preallocate your
> SR/datastore (there was a minor exception for XenServer). CloudStack wasn't
> involved in that process at all. It was a manual admin task.
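[For reference, a minimal sketch of the per-pool-type adaptor dispatch Marcus
refers to at the top of this message. The field name, registration map, and
the Filesystem fallback are illustrative assumptions, not the actual
KVMStoragePoolManager code; it assumes the agent's StorageAdaptor and
Storage.StoragePoolType types plus java.util.Map/HashMap.]

    // Assumed sketch: one StorageAdaptor registered per StoragePoolType, so the
    // agent can route storage work for a given pool either to the default
    // libvirt handling or to a vendor plugin, without the caller caring which.
    private final Map<StoragePoolType, StorageAdaptor> _storageAdaptors =
            new HashMap<StoragePoolType, StorageAdaptor>();

    public StorageAdaptor getStorageAdaptor(StoragePoolType type) {
        StorageAdaptor adaptor = _storageAdaptors.get(type);
        if (adaptor == null) {
            // fall back to the libvirt adaptor for pool types without a plugin
            adaptor = _storageAdaptors.get(StoragePoolType.Filesystem);
        }
        return adaptor;
    }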
>
> The storage plug-in framework didn't really solve that problem, though. All
> it did was allow you to create your storage volume and make it available to
> CloudStack. There was no way to get XenServer or VMware to actually use it
> because they didn't want an IQN or whatever, they wanted an SR/datastore.
>
> That being the case, I had to write code to detect this "managed" situation
> in XenServer and VMware (in the attach/detach methods).
>
> For example, for XenServer I had to first create an SR and a VDI (which the
> SolidFire plug-in probably should have no knowledge of). Then the attach
> logic would work.
>
> After a detach, I had to have XenServer "forget" the SR.
>
> What I was thinking is that the attach/detach methods aren't really the
> right place for this special logic.
>
> We could have VolumeApiServiceImpl know about the special needs of
> "managed" storage.
>
> If you want to do an attach of a volume and you're managed storage,
> VolumeApiServiceImpl could first tell the hypervisor to create the
> necessary objects (XenServer = SR/VDI, VMware = datastore/VMDK, KVM = just
> establish the connection to the iSCSI target).
>
> It's true that there are commands like CreateObjectCommand in XenServer and
> VMware now, but they are only used when the storage is preallocated (not
> when the storage is managed by CloudStack).
>
> In the end, we kind of have two different storage models in CloudStack:
> non-managed (storage preallocated by the admin) and managed (the plug-in
> dynamically creates storage on a storage system).
>
>
> On Fri, Sep 27, 2013 at 6:46 PM, Marcus Sorensen wrote:
>
> > On Fri, Sep 27, 2013 at 6:03 PM, Marcus Sorensen wrote:
> > > On Fri, Sep 27, 2013 at 5:16 PM, Mike Tutkowski wrote:
> > >> createAsync is just for creating the SAN (or whatever storage) volume.
> > >> deleteAsync is the reverse.
> > >
> > > Exactly. It used to be that the hypervisor created the disk/lun/file
> > > volume via createPhysicalDisk. Now it's done on the SAN if the plugin
> > > supports it. Ideally, only calls that are required for utilizing the
> > > storage (or perhaps things like copy to NFS, where a server need be
> > > involved if your SAN can't do it directly) will go to the hypervisor,
> > > for external plugins.
> > >
> > > So mgmt server creates the LUN on the SAN, then calls the hypervisor
> > > to attach it to the host, so that a VM can make use of it. The
> > > createAsync is hypervisor agnostic, it just creates a LUN, and then
> > > when you go to start up a VM or attach it to one it calls the
> > > hypervisor-specific code to make it available.
> >
> > I think this is the same with the other hypervisors as well. For
> > example with Xen, you create the LUN via your plugin, and then a call
> > is made to Xen to register it as an SR. It's basically the same, only
> > you're now coding the 'register' part yourself (and it's ephemeral,
> > it's re-registered whenever it's used). You basically take over some
> > of the API coding that already exists on the other platforms, which
> > means you can do whatever you want instead of just what's supported,
> > but you have to do the work as well.
> >
> > >
> > >> Technically even the default plug-in should not call into the
> > >> hypervisor layer.
> > >
> > > There's no way you can create a local storage file to use as a volume,
> > > or CLVM volume, or other types of libvirt storage without calling a
> > > service that runs on the hypervisor.
Those things exist only on the > > > hypervisor, and are controlled by the hypervisor. For NFS, you could > > > create a separate API for your NFS server that creates qcow2 images o= n > > > your NFS primary, I suppose. > > > > > > One of the really nice things about KVM is that we can do whatever a > > > Linux box is capable of, it was one of the draws we had to it. We > > > wouldn't be able to do the storage we do with Xen or VMWare. > > > > > >> > > >> The storage layer should probably not be aware of the hypervisor > layer. > > > > > > That's fine, but there's no reason why a storage plugin can't talk to > > > the agent that happens to be running on the hypervisor for > > > implementation, if that's what the plugin intends. I don't see the > > > distinction between utilizing the kvm agent as you storage API or > > > talking to a custom SAN API, or some other concocted service. That's > > > sort of the point of the plugin, people can do whatever they want. > > > > > >> > > >> On Fri, Sep 27, 2013 at 5:14 PM, Mike Tutkowski < > > >> mike.tutkowski@solidfire.com> wrote: > > >> > > >>> Well, from what I saw with XenServer and VMware, that hypervisor > > logic's > > >>> attachVolume command also assumed a VDI/VMDK was created in advance= . > > >>> > > >>> I had to put logic in those attachVolume methods to create the SR/V= DI > > or > > >>> datastore/VMDK. > > >>> > > >>> However, thinking back on it, it might have made more sense for the > > >>> storage framework to detect if the storage in question was managed > and > > - > > >>> before calling attach - call create. > > >>> > > >>> If that logic was in place, I could have left attach/detachVolume > alone > > >>> and implemented create and delete in the hypervisor code to create = my > > >>> SR/VDI or datastore/VMDK. > > >>> > > >>> That makes sense to me because it is CloudStack-managed storage (so > > >>> CloudStack is calling into the hypervisor to create and delete thes= e > > types > > >>> of objects...it's managing them). > > >>> > > >>> > > >>> On Fri, Sep 27, 2013 at 5:00 PM, Marcus Sorensen < > shadowsor@gmail.com > > >wrote: > > >>> > > >>>> On Fri, Sep 27, 2013 at 4:22 PM, Mike Tutkowski > > >>>> wrote: > > >>>> > Sure, sounds good - let me know when it's up on Review Board and= I > > can > > >>>> take > > >>>> > a look. > > >>>> > > > >>>> > I made most of the changes you and I talked about: > > >>>> > > > >>>> > > > >>>> > > > https://github.com/mike-tutkowski/incubator-cloudstack/commit/eb9b2edfc90= 62f9ca7961fecd5379b180ca3aed1 > > >>>> > > > >>>> > I have a new idea, though, that I think will simplify this: > > >>>> > > > >>>> > The main "weirdness" right now is when attachVolume is called th= at > > the > > >>>> > original logic assumed createVolume had been called already. > > >>>> > > > >>>> > In my case, this doesn't apply, so we had to place extra logic i= n > > >>>> > attachVolume to essentially "create" a volume. We decided to mak= e > a > > >>>> connect > > >>>> > method, which establishes the iSCSI connection and creates a > > >>>> > KVMPhysicalDisk that can be returned when attachVolume calls > > >>>> > getPhysicalDisk. > > >>>> > > > >>>> > The "normal" place where you'd create a KVMPhysicalDisk, however= , > > would > > >>>> be > > >>>> > in the createVolume method. Since I don't currently "create" a > > volume, > > >>>> my > > >>>> > only chance to note the size of the volume is in the connect > method. > > >>>> > > >>>> I don't think createVolume applies to plugins. 
My impression wash > that > > >>>> createAsync is called on the mgmt server side. If createVolume IS > > >>>> being called, that's weird. The idea here is that mgmt server > creates > > >>>> the LUN, and then on the KVM side attach is called (or StartComman= d > if > > >>>> it's a root volume and vm is being started), and it assumes that t= he > > >>>> LUN is already there, so we call connectPhysicalDisk to attach it = to > > >>>> the KVM host. > > >>>> > > >>>> > > > >>>> > It ends up being kind of weird to pass a size into the connect > > method, > > >>>> as > > >>>> > you've noted. > > >>>> > > > >>>> > What if we essentially left the attachVolume and detachVolume > > methods > > >>>> alone > > >>>> > (as in how they were before my changes)? We could have > > >>>> > VolumeApiServiceImpl, before sending the AttachCommand, detect i= f > > the > > >>>> > storage in question is managed. If it is, VolumeApiServiceImpl > could > > >>>> send a > > >>>> > CreateObjectCommand. I would then implement createPhysicalDisk t= o > > >>>> connect > > >>>> > my iSCSI target and create a KVMPhysicalDisk. > > >>>> > > > >>>> > On the reverse side, VolumeApiServiceImpl, after sending the > > >>>> DetachCommand, > > >>>> > could detect if the storage in question is managed. If it is, > > >>>> > VolumeApiServiceImpl could send a DeleteCommand. I would then > > implement > > >>>> the > > >>>> > deletePhysicalDisk method to disconnect my iSCSI session. > > >>>> > > > >>>> > What do you think? > > >>>> > > >>>> Maybe I'm just confused, but I thought the create and delete on th= e > > >>>> KVM side only apply to the default storage plugin, which has to pa= ss > > >>>> everything on the agent. I thought the creation/deletion of LUNs > > >>>> occured via createAsync and deleteAsync in your plugin. > > >>>> > > >>>> > > > >>>> > > > >>>> > On Fri, Sep 27, 2013 at 3:21 PM, Marcus Sorensen < > > shadowsor@gmail.com > > >>>> >wrote: > > >>>> > > > >>>> >> Ok, I've got our plugin working against 4.2. Tested start vm, > stop > > vm, > > >>>> >> migrate vm, attach volume, detach volume. Other functions that > we > > >>>> >> already had in our StorageAdaptor implementation, such as copyi= ng > > >>>> >> templates to primary storage, just worked without any > modification > > >>>> >> from our 4.1 version. > > >>>> >> > > >>>> >> I'll post a patch to reviewboard with the applicable changes. I > was > > >>>> >> correct that attachVolume and dettachVolume only apply to > > >>>> >> adding/removing disks from running VMs, so there were some more > > >>>> >> changes to LibvirtComputingResource. I don't intend for this > patch > > to > > >>>> >> be applied (for one it's against 4.2), but I want you to take a > > look > > >>>> >> and see if it will work for you as well. If it does, then it's = a > > good > > >>>> >> indicator that it should work for other plugins too, or if it > > needs to > > >>>> >> be tweaked we can work it out. > > >>>> >> > > >>>> >> The gist is that we needed a connectPhysicalDisk call that can > > accept > > >>>> >> the pool/volume info (which we've discussed), but also a versio= n > of > > >>>> >> connectPhysicalDisk that can take a vm specification > > >>>> >> (VirtualMachineTO) and figure out which pools/disks are needed > and > > >>>> >> attach them. I largely copied the code we had custom inserted > into > > our > > >>>> >> 4.1 and put it into KVMStoragePoolManager so that it will be > > adaptor > > >>>> >> agnostic. > > >>>> >> > > >>>> >> Same goes for disconnectPhysicalDisk. 
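[As a reference point, a rough sketch of the two adaptor-agnostic helpers
described above as they might sit in KVMStoragePoolManager. The pool/volume
form is the signature proposed earlier in this thread; the VM-spec method
name (connectPhysicalDisksViaVmSpec) and the exact TO accessors are
assumptions based on this discussion, not the final patch.]

    // Pool/volume form: look up the adaptor by pool type and let it make the
    // block device appear on this host.
    public boolean connectPhysicalDisk(StoragePoolType type, String poolUuid, String volPath) {
        StorageAdaptor adaptor = getStorageAdaptor(type);
        KVMStoragePool pool = adaptor.getStoragePool(poolUuid);
        return adaptor.connectPhysicalDisk(volPath, pool);
    }

    // VM-spec form (hypothetical name): walk the disks in the VirtualMachineTO
    // so StartCommand, PrepareForMigrationCommand, etc. can connect everything
    // a VM needs without knowing which adaptor owns each volume.
    public boolean connectPhysicalDisksViaVmSpec(VirtualMachineTO vmSpec) {
        for (DiskTO disk : vmSpec.getDisks()) {
            DataTO data = disk.getData();
            if (!(data instanceof VolumeObjectTO)) {
                continue; // ISOs and the like have nothing to connect
            }
            PrimaryDataStoreTO store = (PrimaryDataStoreTO) data.getDataStore();
            StorageAdaptor adaptor = getStorageAdaptor(store.getPoolType());
            KVMStoragePool pool = adaptor.getStoragePool(store.getUuid());
            if (!adaptor.connectPhysicalDisk(data.getPath(), pool)) {
                return false; // fail fast; the caller reports the attach/start failure
            }
        }
        return true;
    }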
> > >>>> >> > > >>>> >> We also needed to pass the VirtualMachineTO in a few other plac= es > > like > > >>>> >> MigrateCommand and StopCommand, it's otherwise hard to know whi= ch > > >>>> >> storage adaptors we need to deal with when all we have is a vm > > name or > > >>>> >> something like that. > > >>>> >> > > >>>> >> On Fri, Sep 27, 2013 at 12:56 AM, Mike Tutkowski > > >>>> >> wrote: > > >>>> >> > Thanks for the clarification on how that works. > > >>>> >> > > > >>>> >> > Also, yeah, I think CHAP only grants you access to a volume. = If > > >>>> multiple > > >>>> >> > hosts are using the CHAP credentials for a single volume, it'= s > > up to > > >>>> >> those > > >>>> >> > hosts to make sure they don't step on each other's toes (and > > this is > > >>>> - to > > >>>> >> > my understanding - how it works with XenServer and VMware). > > >>>> >> > > > >>>> >> > > > >>>> >> > On Fri, Sep 27, 2013 at 12:45 AM, Marcus Sorensen < > > >>>> shadowsor@gmail.com > > >>>> >> >wrote: > > >>>> >> > > > >>>> >> >> On Fri, Sep 27, 2013 at 12:21 AM, Mike Tutkowski > > >>>> >> >> wrote: > > >>>> >> >> > Maybe I should seek a little clarification as to how live > > >>>> migration > > >>>> >> works > > >>>> >> >> > in CS with KVM. > > >>>> >> >> > > > >>>> >> >> > Before we do a live migration of VM 1 from Host 1 to Host = 2, > > do we > > >>>> >> detach > > >>>> >> >> > all disks from VM1? > > >>>> >> >> > > > >>>> >> >> > If so, then we're good to go there. > > >>>> >> >> > > > >>>> >> >> > I'm not as clear with HA. > > >>>> >> >> > > >>>> >> >> During live migration this is what we currently do in our > > modified > > >>>> >> >> 4.1, I'm not sure if the new framework is set up for this, b= ut > > it > > >>>> >> >> should be made to do this if not. > > >>>> >> >> > > >>>> >> >> PrepareForMigrationCommand is called on destination host. In > > >>>> >> >> PrepareForMigrationCommand we added a few lines to call > > >>>> >> >> connectPhysicalDisk. This host connects the SAN disks to thi= s > > new > > >>>> >> >> host, then creates a paused VM. > > >>>> >> >> > > >>>> >> >> MigrateCommand is called on the source host. This sends the > > proper > > >>>> >> >> command to transfer VM memory, then atomically cuts over to > the > > >>>> >> >> destination host. During this time, the disks are attached o= n > > both > > >>>> >> >> sides, but the VM is still the only thing that is using them= , > > and it > > >>>> >> >> atomically cuts over. There's no caching on the host (qemu i= s > > using > > >>>> >> >> directio), so this is safe. > > >>>> >> >> > > >>>> >> >> After MigrateCommand completes it's VM passoff, we detach th= e > > disks > > >>>> >> >> before returning. > > >>>> >> >> > > >>>> >> >> > > > >>>> >> >> > If VM 1 goes down because Host 1 crashes, is the > attach-volume > > >>>> command > > >>>> >> >> > invoked as many times as need be (depending on how many > > volumes > > >>>> need > > >>>> >> to > > >>>> >> >> be > > >>>> >> >> > attached) when VM 1 is restarted on Host 2? > > >>>> >> >> > > >>>> >> >> From what I can tell, the attachVolume and dettachVolume > seemed > > to > > >>>> >> >> only be for attaching disks to existing, running VMs (i.e. > > inserting > > >>>> >> >> new XML into an existing domain definition). Normally when > > >>>> starting a > > >>>> >> >> vm from scratch the vm definition, along with any currently > > attached > > >>>> >> >> disks, is passed in to StartCommand (which would also be > called > > >>>> during > > >>>> >> >> HA restart of a VM). 
In our 4.1 branch we also have a call t= o > > >>>> >> >> connectPhysicalDisk here, where we loop through the disk > > definitions > > >>>> >> >> that were passed. > > >>>> >> >> > > >>>> >> >> Again, I should be able to flesh out the differences in 4.2 > and > > how > > >>>> to > > >>>> >> >> go about making this suitable for everyone in the coming day= s, > > so > > >>>> long > > >>>> >> >> as you and anyone else writing plugins agree with the change= s. > > >>>> >> >> > > >>>> >> >> These processes would make sure the disks are available on t= he > > hosts > > >>>> >> >> they need to be, but they don't really provide locking or > ensure > > >>>> that > > >>>> >> >> only the necessary hosts can write to or see the disks at an= y > > given > > >>>> >> >> time. I don't think CHAP does that either. We currently > generate > > >>>> ACLs > > >>>> >> >> via our SAN api during connectPhysicalDisk as a safety > measure, > > but > > >>>> if > > >>>> >> >> CloudStack is working properly it will be in charge of > > controlling > > >>>> >> >> that the disks are only being used where they should be. The > > ACLs > > >>>> just > > >>>> >> >> ensure that if the VM somehow gets started in two different > > places > > >>>> >> >> (e.g. HA malfunction), only one of them will have access to > the > > >>>> disks. > > >>>> >> >> > > >>>> >> >> > > > >>>> >> >> > > > >>>> >> >> > On Fri, Sep 27, 2013 at 12:17 AM, Mike Tutkowski < > > >>>> >> >> > mike.tutkowski@solidfire.com> wrote: > > >>>> >> >> > > > >>>> >> >> >> Let me clarify this line a bit: > > >>>> >> >> >> > > >>>> >> >> >> "We get away without this with XenServer and VMware becau= se > > - as > > >>>> far > > >>>> >> as > > >>>> >> >> I > > >>>> >> >> >> know - CS delegates HA and live migration to those cluste= rs > > and > > >>>> they > > >>>> >> >> handle > > >>>> >> >> >> it most likely with some kind of locking protocol on the > > >>>> >> SR/datastore." > > >>>> >> >> >> > > >>>> >> >> >> When I set up a XenServer or a VMware cluster, all nodes = in > > the > > >>>> >> cluster > > >>>> >> >> >> have the proper CHAP credentials and can access a shared > > >>>> >> SR/datastore. > > >>>> >> >> >> > > >>>> >> >> >> HA and live migrations are OK here because the cluster > > controls > > >>>> >> access > > >>>> >> >> to > > >>>> >> >> >> the VDI on the SR (or VMDK on the datastore) with some ki= nd > > of > > >>>> >> locking > > >>>> >> >> >> protocol, I expect. > > >>>> >> >> >> > > >>>> >> >> >> Since KVM isn't really in a cluster outside of the > CloudStack > > >>>> world, > > >>>> >> >> >> CloudStack has to handle these intricacies. > > >>>> >> >> >> > > >>>> >> >> >> In my case, I'm just presenting a raw disk to a VM on a K= VM > > host. > > >>>> >> >> >> > > >>>> >> >> >> In that case, HA and live migration depend on the storage > > plug-in > > >>>> >> being > > >>>> >> >> >> able to grant and revoke access to the volume for hosts a= s > > >>>> needed. > > >>>> >> >> >> > > >>>> >> >> >> I'd actually rather not even bother with CHAP when using > KVM. > > >>>> >> >> >> > > >>>> >> >> >> > > >>>> >> >> >> On Fri, Sep 27, 2013 at 12:06 AM, Mike Tutkowski < > > >>>> >> >> >> mike.tutkowski@solidfire.com> wrote: > > >>>> >> >> >> > > >>>> >> >> >>> Hey Marcus, > > >>>> >> >> >>> > > >>>> >> >> >>> I agree that CHAP does not fulfill the same role as > fencing. 
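[A compressed sketch of the migration ordering described above. The method
names are simplified stand-ins for the LibvirtComputingResource command
handlers, and connect/disconnectPhysicalDisksViaVmSpec are the assumed
helpers from earlier in this thread, not shipped API.]

    // Destination host, while handling PrepareForMigrationCommand: make the SAN
    // volumes in the VM definition visible before libvirt defines the paused domain.
    boolean prepareForIncomingMigration(VirtualMachineTO vm) {
        return _storagePoolMgr.connectPhysicalDisksViaVmSpec(vm);
    }

    // Source host, at the tail of MigrateCommand handling: once qemu has cut the
    // VM over atomically, drop this host's connections to the same LUNs. For the
    // window in between, the disks are visible on both hosts, but only the single
    // running VM touches them (qemu uses directio, so there is no host-side
    // caching to corrupt).
    void finishOutgoingMigration(VirtualMachineTO vm) {
        _storagePoolMgr.disconnectPhysicalDisksViaVmSpec(vm);
    }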
> > >>>> >> >> >>> > > >>>> >> >> >>> I think we're going to have trouble with HA and live > > migrations > > >>>> on > > >>>> >> KVM > > >>>> >> >> if > > >>>> >> >> >>> the storage plug-in doesn't have a way of knowing when a > > host > > >>>> wants > > >>>> >> to > > >>>> >> >> >>> access a volume and when we want to revoke access to tha= t > > >>>> volume. > > >>>> >> >> >>> > > >>>> >> >> >>> We get away without this with XenServer and VMware becau= se > > - as > > >>>> far > > >>>> >> as > > >>>> >> >> I > > >>>> >> >> >>> know - CS delegates HA and live migration to those > clusters > > and > > >>>> they > > >>>> >> >> handle > > >>>> >> >> >>> it most likely with some kind of locking protocol on the > > >>>> >> SR/datastore. > > >>>> >> >> >>> > > >>>> >> >> >>> As far as the path field is concerned, I should be able = to > > >>>> populate > > >>>> >> it > > >>>> >> >> >>> with the IQN of the volume in question. > > >>>> >> >> >>> > > >>>> >> >> >>> One problem I do see, however, is in the getPhysicalDisk > > method. > > >>>> >> >> >>> > > >>>> >> >> >>> How are you envisioning I keep track of KVMPhysicalDisks > > that I > > >>>> >> create > > >>>> >> >> in > > >>>> >> >> >>> my connect method? > > >>>> >> >> >>> > > >>>> >> >> >>> Initially I was thinking I'd just keep them in a map. > > Storage > > >>>> pool > > >>>> >> UUID > > >>>> >> >> >>> to KVMPhysicalDisks. > > >>>> >> >> >>> > > >>>> >> >> >>> The problem is, how do I reconstruct that map if the age= nt > > is > > >>>> >> restarted > > >>>> >> >> >>> (say the host crashes or is restarted). > > >>>> >> >> >>> > > >>>> >> >> >>> For storage pools, we get a message > > (ModifyStoragePoolCommand) > > >>>> from > > >>>> >> the > > >>>> >> >> >>> CS MS to tell us about all of the relevant storage pools= . > > With > > >>>> this > > >>>> >> >> >>> message, I can reconstruct my cache of storage pools. No > > problem > > >>>> >> there. > > >>>> >> >> >>> > > >>>> >> >> >>> But how will I know which volumes belong to a given > storage > > >>>> pool if > > >>>> >> I > > >>>> >> >> >>> have to rebuild that map? How will I even know which > volumes > > >>>> are in > > >>>> >> >> use at > > >>>> >> >> >>> all? > > >>>> >> >> >>> > > >>>> >> >> >>> Thanks > > >>>> >> >> >>> > > >>>> >> >> >>> > > >>>> >> >> >>> On Thu, Sep 26, 2013 at 11:37 PM, Marcus Sorensen < > > >>>> >> shadowsor@gmail.com > > >>>> >> >> >wrote: > > >>>> >> >> >>> > > >>>> >> >> >>>> On Thu, Sep 26, 2013 at 10:23 PM, Mike Tutkowski > > >>>> >> >> >>>> wrote: > > >>>> >> >> >>>> > My comments are inline: > > >>>> >> >> >>>> > > > >>>> >> >> >>>> > > > >>>> >> >> >>>> > On Thu, Sep 26, 2013 at 9:10 PM, Marcus Sorensen < > > >>>> >> >> shadowsor@gmail.com > > >>>> >> >> >>>> >wrote: > > >>>> >> >> >>>> > > > >>>> >> >> >>>> >> Ok, let me digest this a bit. I got the github > responses > > >>>> but I'd > > >>>> >> >> also > > >>>> >> >> >>>> >> like to keep it on-list as well. > > >>>> >> >> >>>> >> > > >>>> >> >> >>>> >> My initial thoughts are: > > >>>> >> >> >>>> >> > > >>>> >> >> >>>> >> 1) I don't think disk format and size are necessary > > >>>> parameters > > >>>> >> for > > >>>> >> >> >>>> >> connectPhysicalDisk, as the format can be determined > by > > the > > >>>> >> >> adaptor, > > >>>> >> >> >>>> >> and the size is set during the createAsync call in t= he > > >>>> plugin. > > >>>> >> We > > >>>> >> >> >>>> >> really just need the disk path and the pool. > > >>>> >> >> >>>> >> > > >>>> >> >> >>>> > [Mike T.] I agree, format is not needed. 
The only > reason > > I > > >>>> have > > >>>> >> size > > >>>> >> >> >>>> passed > > >>>> >> >> >>>> > in is because I need to create a KVMPhysicalDisk at t= he > > end > > >>>> of > > >>>> >> the > > >>>> >> >> >>>> connect > > >>>> >> >> >>>> > method. Since this KVMPhysicalDisk is (in the code on > > GitHub) > > >>>> >> being > > >>>> >> >> >>>> used to > > >>>> >> >> >>>> > create our XML to attach the disk, I figured we'd nee= d > > that > > >>>> size. > > >>>> >> >> The > > >>>> >> >> >>>> > KVMPhysicalDisk I produce from my implementation of > > >>>> >> getPhysicalDisk > > >>>> >> >> is > > >>>> >> >> >>>> not > > >>>> >> >> >>>> > as good because I don't know the size of the disk at > this > > >>>> point > > >>>> >> (I > > >>>> >> >> >>>> don't > > >>>> >> >> >>>> > keep that information around). The reason I don't kee= p > > that > > >>>> info > > >>>> >> >> >>>> around is > > >>>> >> >> >>>> > because I don't have a good way to reproduce that inf= o > if > > >>>> the KVM > > >>>> >> >> host > > >>>> >> >> >>>> is > > >>>> >> >> >>>> > rebooted. We get info about storage pools in the form > of > > a > > >>>> >> >> >>>> > ModifyStoragePoolCommand, but nothing about the volum= es > > >>>> inside of > > >>>> >> >> the > > >>>> >> >> >>>> > storage pool. > > >>>> >> >> >>>> > > >>>> >> >> >>>> getPhysicalDisk is called in a bunch of places. I'd rel= y > > on the > > >>>> >> >> >>>> connectPhysicalDisk to only make the block device appea= r > > on the > > >>>> >> host, > > >>>> >> >> >>>> and then getPhysicalDisk to find that block device and > > fill out > > >>>> >> things > > >>>> >> >> >>>> like disk size and path (the real path to the local blo= ck > > >>>> device) > > >>>> >> for > > >>>> >> >> >>>> passing and creating the disk XML. Trust me, unless > things > > have > > >>>> >> >> >>>> changed significantly you need to be able to identify a > > given > > >>>> >> device > > >>>> >> >> >>>> as a specific local disk by whatever you are setting th= e > > 'path' > > >>>> >> >> >>>> attribute to be. getPhysicalDisk will be called on you= r > > >>>> storage > > >>>> >> pool > > >>>> >> >> >>>> with simply the path attribute, and via your adaptor wi= th > > the > > >>>> pool > > >>>> >> and > > >>>> >> >> >>>> path. > > >>>> >> >> >>>> > > >>>> >> >> >>>> So you may set path as some combination of iqn and > > target/pool > > >>>> >> info, > > >>>> >> >> >>>> or if iqn is enough to identify a unique block device (= in > > >>>> >> >> >>>> /dev/disk/by-id maybe?) on a host then just use that. > Path > > just > > >>>> >> needs > > >>>> >> >> >>>> to be something, anything, to identify the disk on the > > host. In > > >>>> >> >> >>>> getPhysicalDisk, identify the local block device matchi= ng > > the > > >>>> info, > > >>>> >> >> >>>> create a new KVMPhysicalDisk with the local path, size, > > etc, > > >>>> and > > >>>> >> >> >>>> return it. > > >>>> >> >> >>>> > > >>>> >> >> >>>> > > > >>>> >> >> >>>> >> > > >>>> >> >> >>>> >> 2) I thought this access group thing you mention are > the > > >>>> >> >> grantAccess > > >>>> >> >> >>>> >> and revokeAccess calls in the storage plugin 2.0 > design > > >>>> doc. Was > > >>>> >> >> that > > >>>> >> >> >>>> >> not implemented? > > >>>> >> >> >>>> >> > > >>>> >> >> >>>> > [Mike T.] Yeah, as I mentioned in an e-mail way back, > > those > > >>>> >> methods > > >>>> >> >> >>>> were > > >>>> >> >> >>>> > never implemented in 4.2. 
I think you said you were > > going to > > >>>> get > > >>>> >> >> around > > >>>> >> >> >>>> > them not being implemented by keeping certain logic > that > > >>>> talks to > > >>>> >> >> the > > >>>> >> >> >>>> SAN > > >>>> >> >> >>>> > in the agent. I don't think we want any > > SolidFire-specific > > >>>> code > > >>>> >> in > > >>>> >> >> the > > >>>> >> >> >>>> > agent, however, so I can't go that route. If those > > methods > > >>>> do not > > >>>> >> >> get > > >>>> >> >> >>>> > implemented in 4.3, then I will need to use CHAP > > credentials > > >>>> for > > >>>> >> KVM > > >>>> >> >> >>>> (just > > >>>> >> >> >>>> > like I did with XenServer and VMware). > > >>>> >> >> >>>> > > >>>> >> >> >>>> I initially figured your StorageAdaptor implementation > > would > > >>>> be all > > >>>> >> >> >>>> solidfire specific, just like the mgmt server side plug= in > > is. > > >>>> If it > > >>>> >> >> >>>> can be generic to all iscsi storage then that's great. = I > > agree > > >>>> that > > >>>> >> >> >>>> ideally the agent wouldn't be making API calls to your > > SAN. I > > >>>> don't > > >>>> >> >> >>>> think it should be necessary given that you're not goin= g > to > > >>>> use the > > >>>> >> >> >>>> ACL route. I'm not sure CHAP fills the same purpose of > > fencing. > > >>>> >> >> >>>> > > >>>> >> >> >>>> > > > >>>> >> >> >>>> >> > > >>>> >> >> >>>> >> I see you've added getters/setters for the attach c= md > > to > > >>>> pass > > >>>> >> the > > >>>> >> >> >>>> >> iscsi info you need. Would it perhaps be possible to > > send a > > >>>> >> details > > >>>> >> >> >>>> >> Map instead? Then any plugin > implementer > > >>>> could > > >>>> >> >> attach > > >>>> >> >> >>>> >> arbitrary data they need. So it might be > > >>>> >> >> >>>> >> connectPhysicalDisk(StoragePoolType type, String > > poolUuid, > > >>>> >> String > > >>>> >> >> >>>> >> volPath, Map details)? I'll have to > > look > > >>>> and > > >>>> >> see > > >>>> >> >> >>>> >> where those cmd. attributes are set, ideally it woul= d > be > > >>>> all the > > >>>> >> >> way > > >>>> >> >> >>>> >> back in the plugin to avoid custom code for every > > adaptor > > >>>> that > > >>>> >> >> wants > > >>>> >> >> >>>> >> to set details. > > >>>> >> >> >>>> >> > > >>>> >> >> >>>> > [Mike T.] If I'm not using the volumes.path field for > > >>>> anything, I > > >>>> >> >> can > > >>>> >> >> >>>> stick > > >>>> >> >> >>>> > the IQN in volumes.path (as well as leaving it in > > >>>> >> >> volumes.iscsi_name, > > >>>> >> >> >>>> which > > >>>> >> >> >>>> > is used elsewhere). That way we only have to ask for > > >>>> getPath(). > > >>>> >> >> >>>> > > >>>> >> >> >>>> Yeah, whatever it is that you'd need to find the right > > block > > >>>> device > > >>>> >> >> >>>> should go in the path. 
If you look through > > >>>> LibvirtComputingResource > > >>>> >> >> >>>> you'll see stuff like this sprinkled around: > > >>>> >> >> >>>> > > >>>> >> >> >>>> KVMPhysicalDisk volume =3D > > >>>> >> >> primaryPool.getPhysicalDisk(cmd > > >>>> >> >> >>>> .getVolumePath()); > > >>>> >> >> >>>> > > >>>> >> >> >>>> > > >>>> >> >> >>>> or: > > >>>> >> >> >>>> String volid =3D cmd.getPath(); > > >>>> >> >> >>>> KVMPhysicalDisk vol =3D > pool.getPhysicalDisk(volid); > > >>>> >> >> >>>> > > >>>> >> >> >>>> or: > > >>>> >> >> >>>> > > >>>> >> >> >>>> KVMPhysicalDisk physicalDisk =3D > > >>>> >> >> >>>> _storagePoolMgr.getPhysicalDisk( store.getPoolType(), > > >>>> >> >> >>>> store.getUuid(), > > >>>> >> >> >>>> data.getPath()); > > >>>> >> >> >>>> > > >>>> >> >> >>>> Maybe some of it is short-circuited by the new > > >>>> KVMStorageProcessor, > > >>>> >> >> >>>> but I'd still implement a working one, and then > > attachVolume > > >>>> can > > >>>> >> call > > >>>> >> >> >>>> getPhysicalDisk after connectPhysicalDisk, even on your > > >>>> >> adaptor/pool. > > >>>> >> >> >>>> > > >>>> >> >> >>>> > > > >>>> >> >> >>>> >> > > >>>> >> >> >>>> >> On Thu, Sep 26, 2013 at 7:35 PM, Mike Tutkowski > > >>>> >> >> >>>> >> wrote: > > >>>> >> >> >>>> >> > Also, if we went the non-CHAP route, before > attaching > > a > > >>>> volume > > >>>> >> >> to a > > >>>> >> >> >>>> VM, > > >>>> >> >> >>>> >> > we'd have to tell the plug-in to set up a volume > > access > > >>>> group. > > >>>> >> >> >>>> >> > > > >>>> >> >> >>>> >> > When a volume is detached from a VM, we'd have to > > tell the > > >>>> >> >> plug-in > > >>>> >> >> >>>> to > > >>>> >> >> >>>> >> > delete the volume access group. > > >>>> >> >> >>>> >> > > > >>>> >> >> >>>> >> > > > >>>> >> >> >>>> >> > On Thu, Sep 26, 2013 at 7:32 PM, Mike Tutkowski < > > >>>> >> >> >>>> >> > mike.tutkowski@solidfire.com> wrote: > > >>>> >> >> >>>> >> > > > >>>> >> >> >>>> >> >> I mention this is my comments on GitHub, as well, > but > > >>>> CHAP > > >>>> >> info > > >>>> >> >> is > > >>>> >> >> >>>> >> >> associated with an account - not a storage pool. > > >>>> >> >> >>>> >> >> > > >>>> >> >> >>>> >> >> Ideally we could do without CHAP info entirely if > we > > had > > >>>> a > > >>>> >> >> >>>> reliable way > > >>>> >> >> >>>> >> to > > >>>> >> >> >>>> >> >> tell the storage plug-in that a given host wants = to > > >>>> access a > > >>>> >> >> given > > >>>> >> >> >>>> >> volume. > > >>>> >> >> >>>> >> >> In this case, my storage plug-in could create wha= t > we > > >>>> call a > > >>>> >> >> Volume > > >>>> >> >> >>>> >> Access > > >>>> >> >> >>>> >> >> Group on the SAN. It would essentially say, "The > host > > >>>> with > > >>>> >> IQN > > >>>> >> >> > > >>>> >> >> >>>> can > > >>>> >> >> >>>> >> >> access the volume with IQN without using CHAP > > >>>> >> credentials." > > >>>> >> >> Of > > >>>> >> >> >>>> >> course > > >>>> >> >> >>>> >> >> we'd need a way to revoke this privilege in the > > event of > > >>>> a > > >>>> >> live > > >>>> >> >> >>>> >> migration > > >>>> >> >> >>>> >> >> of a VM. Right now, I do not believe such a > facility > > is > > >>>> >> >> supported > > >>>> >> >> >>>> with > > >>>> >> >> >>>> >> the > > >>>> >> >> >>>> >> >> storage plug-ins. 
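[One possible shape for that lookup, if the volume 'path' is the IQN and the
adaptor has already logged the target in. This is an illustrative guess, not
the SolidFire code; the /dev/disk/by-path naming and the KVMPhysicalDisk
constructor/setters are the assumptions here. Needs java.io.File,
java.io.IOException and java.nio.file.*.]

    // Resolve an already-connected iSCSI LUN to its local block device and report
    // its real size, so the caller can build the <disk> XML from disk.getPath().
    public KVMPhysicalDisk getPhysicalDisk(String volumeIqn, KVMStoragePool pool) {
        File[] links = new File("/dev/disk/by-path").listFiles();
        if (links == null) {
            return null;
        }
        for (File link : links) {
            // iscsiadm sessions show up as ip-<target>:3260-iscsi-<iqn>-lun-<n>
            if (!link.getName().contains(volumeIqn)) {
                continue;
            }
            try {
                Path device = link.toPath().toRealPath(); // e.g. /dev/sdc
                String sectors = new String(Files.readAllBytes(
                        Paths.get("/sys/class/block", device.getFileName().toString(), "size"))).trim();
                long sizeBytes = Long.parseLong(sectors) * 512L;
                KVMPhysicalDisk disk = new KVMPhysicalDisk(device.toString(), volumeIqn, pool);
                disk.setFormat(PhysicalDiskFormat.RAW);
                disk.setSize(sizeBytes);
                disk.setVirtualSize(sizeBytes);
                return disk;
            } catch (IOException | NumberFormatException e) {
                return null; // device vanished or sysfs was unreadable
            }
        }
        return null;
    }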
> > >>>> >> >> >>>> >> >> > > >>>> >> >> >>>> >> >> > > >>>> >> >> >>>> >> >> On Thu, Sep 26, 2013 at 4:56 PM, Marcus Sorensen = < > > >>>> >> >> >>>> shadowsor@gmail.com > > >>>> >> >> >>>> >> >wrote: > > >>>> >> >> >>>> >> >> > > >>>> >> >> >>>> >> >>> Looking at your code, is the chap info stored wi= th > > the > > >>>> pool, > > >>>> >> >> so we > > >>>> >> >> >>>> >> >>> could pass the pool to the adaptor? That would b= e > > more > > >>>> >> >> agnostic, > > >>>> >> >> >>>> >> >>> anyone implementing a plugin could pull the > > specifics > > >>>> they > > >>>> >> need > > >>>> >> >> >>>> for > > >>>> >> >> >>>> >> >>> their stuff out of the pool on the adaptor side, > > rather > > >>>> than > > >>>> >> >> >>>> creating > > >>>> >> >> >>>> >> >>> custom signatures. > > >>>> >> >> >>>> >> >>> > > >>>> >> >> >>>> >> >>> Also, I think we may want to consider implementi= ng > > >>>> >> >> >>>> connect/disconnect > > >>>> >> >> >>>> >> >>> as just dummy methods in LibvirtStorageAdaptor, = so > > we > > >>>> don't > > >>>> >> >> have > > >>>> >> >> >>>> to be > > >>>> >> >> >>>> >> >>> picky about which adaptors/types in every single > > place > > >>>> we > > >>>> >> may > > >>>> >> >> >>>> want to > > >>>> >> >> >>>> >> >>> connect/disconnect (in 4.1 there were several, I= 'm > > not > > >>>> sure > > >>>> >> if > > >>>> >> >> >>>> >> >>> everything goes through this in 4.2). We can jus= t > > call > > >>>> >> >> >>>> >> >>> adaptor.connectPhysicalDisk and the adaptor can > > decide > > >>>> if it > > >>>> >> >> >>>> needs to > > >>>> >> >> >>>> >> >>> do anything. > > >>>> >> >> >>>> >> >>> > > >>>> >> >> >>>> >> >>> Comments are attached to your commit, I just > wanted > > to > > >>>> echo > > >>>> >> >> them > > >>>> >> >> >>>> here > > >>>> >> >> >>>> >> >>> on-list. > > >>>> >> >> >>>> >> >>> > > >>>> >> >> >>>> >> >>> On Thu, Sep 26, 2013 at 4:32 PM, Mike Tutkowski > > >>>> >> >> >>>> >> >>> wrote: > > >>>> >> >> >>>> >> >>> > Oh, SnapshotTestWithFakeData is just modified > > because > > >>>> the > > >>>> >> >> code > > >>>> >> >> >>>> wasn't > > >>>> >> >> >>>> >> >>> > building until I corrected this. It has nothin= g > > >>>> really to > > >>>> >> do > > >>>> >> >> >>>> with my > > >>>> >> >> >>>> >> >>> real > > >>>> >> >> >>>> >> >>> > changes. > > >>>> >> >> >>>> >> >>> > > > >>>> >> >> >>>> >> >>> > > > >>>> >> >> >>>> >> >>> > On Thu, Sep 26, 2013 at 4:31 PM, Mike Tutkowsk= i > < > > >>>> >> >> >>>> >> >>> > mike.tutkowski@solidfire.com> wrote: > > >>>> >> >> >>>> >> >>> > > > >>>> >> >> >>>> >> >>> >> Hey Marcus, > > >>>> >> >> >>>> >> >>> >> > > >>>> >> >> >>>> >> >>> >> I implemented your recommendations regarding > > adding > > >>>> >> connect > > >>>> >> >> and > > >>>> >> >> >>>> >> >>> disconnect > > >>>> >> >> >>>> >> >>> >> methods. It is not yet checked in (as you kno= w, > > >>>> having > > >>>> >> >> trouble > > >>>> >> >> >>>> with > > >>>> >> >> >>>> >> my > > >>>> >> >> >>>> >> >>> KVM > > >>>> >> >> >>>> >> >>> >> environment), but it is on GitHub here: > > >>>> >> >> >>>> >> >>> >> > > >>>> >> >> >>>> >> >>> >> > > >>>> >> >> >>>> >> >>> >> > > >>>> >> >> >>>> >> >>> > > >>>> >> >> >>>> >> > > >>>> >> >> >>>> > > >>>> >> >> > > >>>> >> > > >>>> > > > https://github.com/mike-tutkowski/incubator-cloudstack/commit/f2897c65689= 012e6157c0a0c2ed7e5355900c59a > > >>>> >> >> >>>> >> >>> >> > > >>>> >> >> >>>> >> >>> >> Please let me know if you have any more > comments. > > >>>> >> >> >>>> >> >>> >> > > >>>> >> >> >>>> >> >>> >> Thanks! 
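[For the LibvirtStorageAdaptor side, the "dummy methods" idea above could be
as small as the following sketch; it assumes the connectPhysicalDisk /
disconnectPhysicalDisk signatures quoted below in this message.]

    // No-op implementations for libvirt-managed pool types (NFS, CLVM, local
    // disk): those volumes are always visible to the host, so callers can invoke
    // connect/disconnect unconditionally and only plugin adaptors do real work.
    @Override
    public boolean connectPhysicalDisk(String volumeUuid, KVMStoragePool pool) {
        return true;
    }

    @Override
    public boolean disconnectPhysicalDisk(String volumeUuid, KVMStoragePool pool) {
        return true;
    }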
> > >>>> >> >> >>>> >> >>> >> > > >>>> >> >> >>>> >> >>> >> > > >>>> >> >> >>>> >> >>> >> On Thu, Sep 26, 2013 at 4:05 PM, Marcus > Sorensen > > < > > >>>> >> >> >>>> >> shadowsor@gmail.com > > >>>> >> >> >>>> >> >>> >wrote: > > >>>> >> >> >>>> >> >>> >> > > >>>> >> >> >>>> >> >>> >>> Mike, everyone, > > >>>> >> >> >>>> >> >>> >>> As I've mentioned on the board, I'm worki= ng > > on > > >>>> >> getting > > >>>> >> >> our > > >>>> >> >> >>>> own > > >>>> >> >> >>>> >> >>> >>> internal KVM storage plugin working on 4.2. = In > > the > > >>>> >> >> interest of > > >>>> >> >> >>>> >> making > > >>>> >> >> >>>> >> >>> >>> it forward compatible, I just wanted to > confirm > > >>>> what you > > >>>> >> >> were > > >>>> >> >> >>>> doing > > >>>> >> >> >>>> >> >>> >>> with the solidfire plugin as far as attachin= g > > your > > >>>> iscsi > > >>>> >> >> >>>> LUNs. We > > >>>> >> >> >>>> >> had > > >>>> >> >> >>>> >> >>> >>> discussed a new connectPhysicalDisk method f= or > > the > > >>>> >> >> >>>> StorageAdaptor > > >>>> >> >> >>>> >> >>> >>> class, something perhaps like: > > >>>> >> >> >>>> >> >>> >>> > > >>>> >> >> >>>> >> >>> >>> public boolean connectPhysicalDisk(String > > >>>> volumeUuid, > > >>>> >> >> >>>> >> KVMStoragePool > > >>>> >> >> >>>> >> >>> >>> pool); > > >>>> >> >> >>>> >> >>> >>> > > >>>> >> >> >>>> >> >>> >>> then added to KVMStoragePoolManager: > > >>>> >> >> >>>> >> >>> >>> > > >>>> >> >> >>>> >> >>> >>> public boolean > > connectPhysicalDisk(StoragePoolType > > >>>> type, > > >>>> >> >> >>>> String > > >>>> >> >> >>>> >> >>> >>> poolUuid, String volPath) { > > >>>> >> >> >>>> >> >>> >>> StorageAdaptor adaptor =3D > > >>>> >> getStorageAdaptor(type); > > >>>> >> >> >>>> >> >>> >>> KVMStoragePool pool =3D > > >>>> >> >> >>>> adaptor.getStoragePool(poolUuid); > > >>>> >> >> >>>> >> >>> >>> return > > adaptor.connectPhysicalDisk(volPath, > > >>>> >> pool); > > >>>> >> >> >>>> >> >>> >>> } > > >>>> >> >> >>>> >> >>> >>> > > >>>> >> >> >>>> >> >>> >>> Something similar to this for disconnect as > > well. > > >>>> Then > > >>>> >> in > > >>>> >> >> the > > >>>> >> >> >>>> >> >>> >>> KVMStorageProcessor these can be called as > > needed > > >>>> for > > >>>> >> >> >>>> >> attach/detach. > > >>>> >> >> >>>> >> >>> >>> We can probably stub out one in > > >>>> LibvirtStorageAdaptor so > > >>>> >> >> >>>> there's no > > >>>> >> >> >>>> >> >>> >>> need to switch or if/else for pool types, fo= r > > >>>> example in > > >>>> >> >> >>>> >> >>> >>> KVMStorageProcessor.attachVolume. > > >>>> >> >> >>>> >> >>> >>> > > >>>> >> >> >>>> >> >>> >>> I have debated on whether or not it should > just > > be > > >>>> >> rolled > > >>>> >> >> into > > >>>> >> >> >>>> >> >>> >>> getPhysicalDisk, having it connect the disk = if > > it's > > >>>> not > > >>>> >> >> >>>> already > > >>>> >> >> >>>> >> >>> >>> connected. getPhysicalDisk is called a lot, > and > > I'm > > >>>> not > > >>>> >> >> sure > > >>>> >> >> >>>> it > > >>>> >> >> >>>> >> always > > >>>> >> >> >>>> >> >>> >>> needs to connect the disk when it does. In > past > > >>>> >> iterations > > >>>> >> >> >>>> >> >>> >>> getPhysicalDisk has simply spoken to our SAN > > api and > > >>>> >> >> returned > > >>>> >> >> >>>> the > > >>>> >> >> >>>> >> disk > > >>>> >> >> >>>> >> >>> >>> details, nothing more. So it seemed more > > flexible > > >>>> and > > >>>> >> >> >>>> granular to > > >>>> >> >> >>>> >> do > > >>>> >> >> >>>> >> >>> >>> the connectPhysicalDisk (we have one now in > our > > 4.1 > > >>>> >> >> version). 
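[Putting the pieces of this thread together, the attach path in
KVMStorageProcessor would then boil down to roughly the following. This is a
paraphrase: the attachManagedVolume wrapper is invented for illustration, and
the attachOrDetachDisk helper name and exact arguments are assumptions.]

    // Sketch: connect first, then resolve the disk, then hand the local device
    // path to libvirt as the VM's new <disk> element. 'vol' is the VolumeObjectTO
    // carried by the AttachCommand.
    private void attachManagedVolume(Connect conn, String vmName, VolumeObjectTO vol, int deviceId)
            throws LibvirtException, InternalErrorException {
        PrimaryDataStoreTO store = (PrimaryDataStoreTO) vol.getDataStore();
        // 1) let the adaptor log in / make the block device appear on this host
        _storagePoolMgr.connectPhysicalDisk(store.getPoolType(), store.getUuid(), vol.getPath());
        // 2) resolve it to a KVMPhysicalDisk whose getPath() is the local device
        KVMPhysicalDisk disk = _storagePoolMgr.getPhysicalDisk(store.getPoolType(),
                store.getUuid(), vol.getPath());
        // 3) build and attach the disk XML to the running domain
        attachOrDetachDisk(conn, true, vmName, disk, deviceId);
    }

The detach path would mirror it: detach the XML from the domain first, then
call disconnectPhysicalDisk so the adaptor can tear down the iSCSI session.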