Subject: Re: Managed storage with KVM
From: Mike Tutkowski
To: Marcus Sorensen
Cc: "dev@cloudstack.apache.org"
Date: Mon, 16 Sep 2013 12:09:34 -0600

Hey Marcus,

Thanks for that clarification.

Sorry if this is a redundant question: When the AttachVolumeCommand comes in, it sounds like we thought the best approach would be for me to discover and log in to the iSCSI target using iscsiadm. This will create a new device: /dev/sdX. We would then pass this new device into the VM (passing XML into the appropriate Libvirt API).

If this is an accurate understanding, can you tell me: Do you think we should be using a Disk Storage Pool or an iSCSI Storage Pool? I believe I recall you leaning toward a Disk Storage Pool because we will have already discovered the iSCSI target and, as such, will already have a device to pass into the VM. It seems like either way would work.
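Just to make sure I'm picturing the flow correctly, here is a rough sketch of what I think the agent-side steps would look like. The portal address, IQN, device path, and VM name are placeholders for illustration only; this is not meant to be the final adaptor code, just the shape of the iscsiadm login plus the libvirt attach:

    // Rough sketch only: discover/log in to the iSCSI target with iscsiadm,
    // then attach the resulting block device to the VM through libvirt.
    // All names below (portal, IQN, VM name) are placeholders.
    import org.libvirt.Connect;
    import org.libvirt.Domain;

    public class AttachIscsiLunSketch {

        private static void run(String... cmd) throws Exception {
            Process p = new ProcessBuilder(cmd).inheritIO().start();
            if (p.waitFor() != 0) {
                throw new RuntimeException("command failed: " + cmd[0]);
            }
        }

        public static void main(String[] args) throws Exception {
            String portal = "192.168.1.10:3260";             // placeholder SAN portal
            String iqn = "iqn.2013-09.com.example:volume-1"; // placeholder target IQN

            // 1) Discover the target and log in (this is what creates /dev/sdX).
            run("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal);
            run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--login");

            // 2) Hand the block device to the VM as a raw disk. Using the
            //    /dev/disk/by-path name keeps it stable across reboots.
            String diskXml =
                  "<disk type='block' device='disk'>"
                + "  <driver name='qemu' type='raw' cache='none'/>"
                + "  <source dev='/dev/disk/by-path/ip-" + portal
                + "-iscsi-" + iqn + "-lun-0'/>"
                + "  <target dev='vdb' bus='virtio'/>"
                + "</disk>";

            Connect conn = new Connect("qemu:///system");
            Domain vm = conn.domainLookupByName("i-2-3-VM"); // placeholder VM name
            vm.attachDevice(diskXml);
        }
    }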
Maybe I need to study Libvirt's iSCSI Storage Pools more to understand if they would do the work of discovering the iSCSI target for me (and maybe avoid me having to use iscsiadm).

Thanks for the clarification! :)

On Mon, Sep 16, 2013 at 11:08 AM, Marcus Sorensen wrote:
> It will still register the pool. You still have a primary storage
> pool that you registered, whether it's local, cluster or zone wide.
> NFS is optionally zone wide as well (I'm assuming customers can launch
> your storage only cluster-wide if they choose for resource
> partitioning), but it registers the pool in Libvirt prior to use.
>
> Here's a better explanation of what I meant. AttachVolumeCommand gets
> both pool and volume info. It first looks up the pool:
>
>     KVMStoragePool primary = _storagePoolMgr.getStoragePool(
>                 cmd.getPooltype(),
>                 cmd.getPoolUuid());
>
> Then it looks up the disk from that pool:
>
>     KVMPhysicalDisk disk = primary.getPhysicalDisk(cmd.getVolumePath());
>
> Most of the commands only pass volume info like this (getVolumePath
> generally means the uuid of the volume), since it looks up the pool
> separately. If you don't save the pool info in a map in your custom
> class when createStoragePool is called, then getStoragePool won't be
> able to find it. This is a simple thing in your implementation of
> createStoragePool, just thought I'd mention it because it is key. Just
> create a map of pool uuid and pool object and save them so they're
> available across all implementations of that class.
>
> On Mon, Sep 16, 2013 at 10:43 AM, Mike Tutkowski wrote:
> > Thanks, Marcus
> >
> > About this:
> >
> > "When the agent connects to the
> > management server, it registers all pools in the cluster with the
> > agent."
> >
> > So, my plug-in allows you to create zone-wide primary storage. This just
> > means that any cluster can use the SAN (the SAN was registered as primary
> > storage as opposed to a preallocated volume from the SAN). Once you create a
> > primary storage based on this plug-in, the storage framework will invoke the
> > plug-in, as needed, to create and delete volumes on the SAN. For example,
> > you could have one SolidFire primary storage (zone wide) and currently have
> > 100 volumes created on the SAN to support it.
> >
> > In this case, what will the management server be registering with the agent
> > in ModifyStoragePool? If only the storage pool (primary storage) is passed
> > in, that will be too vague as it does not contain information on what
> > volumes have been created for the agent.
> >
> > Thanks
> >
> >
> > On Sun, Sep 15, 2013 at 11:53 PM, Marcus Sorensen wrote:
> >>
> >> Yes, see my previous email from the 13th. You can create your own
> >> KVMStoragePool class, and StorageAdaptor class, like the libvirt ones
> >> have. The previous email outlines how to add your own StorageAdaptor
> >> alongside LibvirtStorageAdaptor to take over all of the calls
> >> (createStoragePool, getStoragePool, etc). As mentioned,
> >> getPhysicalDisk I believe will be the one you use to actually attach a
> >> lun.
> >>
> >> Ignore CreateStoragePoolCommand. When the agent connects to the
> >> management server, it registers all pools in the cluster with the
> >> agent. It will call ModifyStoragePoolCommand, passing your storage
> >> pool object (with all of the settings for your SAN). This in turn
> >> calls _storagePoolMgr.createStoragePool, which will route through
> >> KVMStoragePoolManager to your storage adapter that you've registered.
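(To make sure I follow the pool-map suggestion above, here is a minimal sketch of what I have in mind for my adaptor. The class name, the PoolInfo holder, and the method shapes are placeholders of my own, not the real plug-in code or the actual StorageAdaptor interface:)

    // Minimal sketch, assuming a custom StorageAdaptor-style class that
    // tracks its own pools; everything here is a placeholder.
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class SolidFireStorageAdaptorSketch {

        // Placeholder stand-in for a custom KVMStoragePool implementation.
        static class PoolInfo {
            final String uuid;
            final String host;
            final int port;
            final String path;

            PoolInfo(String uuid, String host, int port, String path) {
                this.uuid = uuid;
                this.host = host;
                this.port = port;
                this.path = path;
            }
        }

        // Keyed by pool uuid so later volume calls (which only carry the
        // pool uuid plus the volume path) can find the pool again.
        private static final Map<String, PoolInfo> pools =
                new ConcurrentHashMap<String, PoolInfo>();

        public PoolInfo createStoragePool(String uuid, String host, int port, String path) {
            // createStoragePool can be called repeatedly, so treat it as "get or create".
            PoolInfo pool = pools.get(uuid);
            if (pool == null) {
                pool = new PoolInfo(uuid, host, port, path);
                pools.put(uuid, pool);
                // SAN login / iscsiadm discovery could happen here as well.
            }
            return pool;
        }

        public PoolInfo getStoragePool(String uuid) {
            return pools.get(uuid);
        }
    }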
> >> The last argument to createStoragePool is the pool type, which is used
> >> to select a StorageAdaptor.
> >>
> >> From then on, most calls will only pass the volume info, and the
> >> volume will have the uuid of the storage pool. For this reason, your
> >> adaptor class needs to have a static Map variable that contains pool
> >> uuid and pool object. Whenever they call createStoragePool on your
> >> adaptor you add that pool to the map so that subsequent volume calls
> >> can look up the pool details for the volume by pool uuid. With the
> >> Libvirt adaptor, libvirt keeps track of that for you.
> >>
> >> When createStoragePool is called, you can log into the iscsi target
> >> (or make sure you are already logged in, as it can be called over
> >> again at any time), and when attach volume commands are fired off, you
> >> can attach individual LUNs that are asked for, or rescan (say that the
> >> plugin created a new ACL just prior to calling attach), or whatever is
> >> necessary.
> >>
> >> KVM is a bit more work, but you can do anything you want. Actually, I
> >> think you can call host scripts with Xen, but having the agent there
> >> that runs your own code gives you the flexibility to do whatever.
> >>
> >> On Sun, Sep 15, 2013 at 10:44 PM, Mike Tutkowski wrote:
> >> > I see right now LibvirtComputingResource.java has the following method
> >> > that I might be able to leverage (it's probably not called at present and
> >> > would need to be implemented in my case to discover my iSCSI target and
> >> > log in to it):
> >> >
> >> >     protected Answer execute(CreateStoragePoolCommand cmd) {
> >> >
> >> >         return new Answer(cmd, true, "success");
> >> >
> >> >     }
> >> >
> >> > I would probably be able to call the KVMStorageManager to have it use my
> >> > StorageAdaptor to do what's necessary here.
> >> >
> >> >
> >> > On Sun, Sep 15, 2013 at 10:37 PM, Mike Tutkowski wrote:
> >> >>
> >> >> Hey Marcus,
> >> >>
> >> >> When I implemented support in the XenServer and VMware plug-ins for
> >> >> "managed" storage, I started at the execute(AttachVolumeCommand)
> >> >> methods in both plug-ins.
> >> >>
> >> >> The code there was changed to check the AttachVolumeCommand instance
> >> >> for a "managed" property.
> >> >>
> >> >> If managed was false, the normal attach/detach logic would just run and
> >> >> the volume would be attached or detached.
> >> >>
> >> >> If managed was true, new 4.2 logic would run to create (let's talk
> >> >> XenServer here) a new SR and a new VDI inside of that SR (or to
> >> >> reattach an existing VDI inside an existing SR, if this wasn't the first
> >> >> time the volume was attached). If managed was true and we were detaching
> >> >> the volume, the SR would be detached from the XenServer hosts.
> >> >>
> >> >> I am currently walking through the execute(AttachVolumeCommand) in
> >> >> LibvirtComputingResource.java.
> >> >>
> >> >> I see how the XML is constructed to describe whether a disk should be
> >> >> attached or detached. I also see how we call in to get a StorageAdapter
> >> >> (and how I will likely need to write a new one of these).
> >> >>
> >> >> So, talking in XenServer terminology again, I was wondering if you think
> >> >> the approach we took in 4.2 with creating and deleting SRs in the
> >> >> execute(AttachVolumeCommand) method would work here or if there is some
> >> >> other way I should be looking at this for KVM?
> >> >> > >> >> As it is right now for KVM, storage has to be set up ahead of time. > >> >> Assuming this is the case, there probably isn't currently a place I > can > >> >> easily inject my logic to discover and log in to iSCSI targets. Thi= s > is > >> >> why > >> >> we did it as needed in the execute(AttachVolumeCommand) for XenServ= er > >> >> and > >> >> VMware, but I wanted to see if you have an alternative way that mig= ht > >> >> be > >> >> better for KVM. > >> >> > >> >> One possible way to do this would be to modify VolumeManagerImpl (o= r > >> >> whatever its equivalent is in 4.3) before it issues an attach-volum= e > >> >> command > >> >> to KVM to check to see if the volume is to be attached to managed > >> >> storage. > >> >> If it is, then (before calling the attach-volume command in KVM) ca= ll > >> >> the > >> >> create-storage-pool command in KVM (or whatever it might be called)= . > >> >> > >> >> Just wanted to get some of your thoughts on this. > >> >> > >> >> Thanks! > >> >> > >> >> > >> >> On Sat, Sep 14, 2013 at 12:07 AM, Mike Tutkowski > >> >> wrote: > >> >>> > >> >>> Yeah, I remember that StorageProcessor stuff being put in the > codebase > >> >>> and having to merge my code into it in 4.2. > >> >>> > >> >>> Thanks for all the details, Marcus! :) > >> >>> > >> >>> I can start digging into what you were talking about now. > >> >>> > >> >>> > >> >>> On Sat, Sep 14, 2013 at 12:02 AM, Marcus Sorensen > >> >>> > >> >>> wrote: > >> >>>> > >> >>>> Looks like things might be slightly different now in 4.2, with > >> >>>> KVMStorageProcessor.java in the mix.This looks more or less like > some > >> >>>> of the commands were ripped out verbatim from > >> >>>> LibvirtComputingResource > >> >>>> and placed here, so in general what I've said is probably still > true, > >> >>>> just that the location of things like AttachVolumeCommand might b= e > >> >>>> different, in this file rather than LibvirtComputingResource.java= . > >> >>>> > >> >>>> On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen > >> >>>> > >> >>>> wrote: > >> >>>> > Ok, KVM will be close to that, of course, because only the > >> >>>> > hypervisor > >> >>>> > classes differ, the rest is all mgmt server. Creating a volume = is > >> >>>> > just > >> >>>> > a db entry until it's deployed for the first time. > >> >>>> > AttachVolumeCommand > >> >>>> > on the agent side (LibvirtStorageAdaptor.java is analogous to > >> >>>> > CitrixResourceBase.java) will do the iscsiadm commands (via a K= VM > >> >>>> > StorageAdaptor) to log in the host to the target and then you > have > >> >>>> > a > >> >>>> > block device. Maybe libvirt will do that for you, but my quick > >> >>>> > read > >> >>>> > made it sound like the iscsi libvirt pool type is actually a > pool, > >> >>>> > not > >> >>>> > a lun or volume, so you'll need to figure out if that works or = if > >> >>>> > you'll have to use iscsiadm commands. > >> >>>> > > >> >>>> > If you're NOT going to use LibvirtStorageAdaptor (because Libvi= rt > >> >>>> > doesn't really manage your pool the way you want), you're going > to > >> >>>> > have to create a version of KVMStoragePool class and a > >> >>>> > StorageAdaptor > >> >>>> > class (see LibvirtStoragePool.java and > LibvirtStorageAdaptor.java), > >> >>>> > implementing all of the methods, then in KVMStorageManager.java > >> >>>> > there's a "_storageMapper" map. 
This is used to select the > correct > >> >>>> > adaptor, you can see in this file that every call first pulls t= he > >> >>>> > correct adaptor out of this map via getStorageAdaptor. So you c= an > >> >>>> > see > >> >>>> > a comment in this file that says "add other storage adaptors > here", > >> >>>> > where it puts to this map, this is where you'd register your > >> >>>> > adaptor. > >> >>>> > > >> >>>> > So, referencing StorageAdaptor.java, createStoragePool accepts > all > >> >>>> > of > >> >>>> > the pool data (host, port, name, path) which would be used to l= og > >> >>>> > the > >> >>>> > host into the initiator. I *believe* the method getPhysicalDisk > >> >>>> > will > >> >>>> > need to do the work of attaching the lun. AttachVolumeCommand > >> >>>> > calls > >> >>>> > this and then creates the XML diskdef and attaches it to the VM= . > >> >>>> > Now, > >> >>>> > one thing you need to know is that createStoragePool is called > >> >>>> > often, > >> >>>> > sometimes just to make sure the pool is there. You may want to > >> >>>> > create > >> >>>> > a map in your adaptor class and keep track of pools that have > been > >> >>>> > created, LibvirtStorageAdaptor doesn't have to do this because = it > >> >>>> > asks > >> >>>> > libvirt about which storage pools exist. There are also calls t= o > >> >>>> > refresh the pool stats, and all of the other calls can be seen = in > >> >>>> > the > >> >>>> > StorageAdaptor as well. There's a createPhysical disk, clone, > etc, > >> >>>> > but > >> >>>> > it's probably a hold-over from 4.1, as I have the vague idea th= at > >> >>>> > volumes are created on the mgmt server via the plugin now, so > >> >>>> > whatever > >> >>>> > doesn't apply can just be stubbed out (or optionally > >> >>>> > extended/reimplemented here, if you don't mind the hosts talkin= g > to > >> >>>> > the san api). > >> >>>> > > >> >>>> > There is a difference between attaching new volumes and > launching a > >> >>>> > VM > >> >>>> > with existing volumes. In the latter case, the VM definition > that > >> >>>> > was > >> >>>> > passed to the KVM agent includes the disks, (StartCommand). > >> >>>> > > >> >>>> > I'd be interested in how your pool is defined for Xen, I imagin= e > it > >> >>>> > would need to be kept the same. Is it just a definition to the > SAN > >> >>>> > (ip address or some such, port number) and perhaps a volume poo= l > >> >>>> > name? > >> >>>> > > >> >>>> >> If there is a way for me to update the ACL list on the SAN to > have > >> >>>> >> only a > >> >>>> >> single KVM host have access to the volume, that would be ideal= . > >> >>>> > > >> >>>> > That depends on your SAN API. I was under the impression that > the > >> >>>> > storage plugin framework allowed for acls, or for you to do > >> >>>> > whatever > >> >>>> > you want for create/attach/delete/snapshot, etc. You'd just cal= l > >> >>>> > your > >> >>>> > SAN API with the host info for the ACLs prior to when the disk = is > >> >>>> > attached (or the VM is started). I'd have to look more at the > >> >>>> > framework to know the details, in 4.1 I would do this in > >> >>>> > getPhysicalDisk just prior to connecting up the LUN. > >> >>>> > > >> >>>> > > >> >>>> > On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski > >> >>>> > wrote: > >> >>>> >> OK, yeah, the ACL part will be interesting. That is a bit > >> >>>> >> different > >> >>>> >> from how > >> >>>> >> it works with XenServer and VMware. 
> >> >>>> >> > >> >>>> >> Just to give you an idea how it works in 4.2 with XenServer: > >> >>>> >> > >> >>>> >> * The user creates a CS volume (this is just recorded in the > >> >>>> >> cloud.volumes > >> >>>> >> table). > >> >>>> >> > >> >>>> >> * The user attaches the volume as a disk to a VM for the first > >> >>>> >> time > >> >>>> >> (if the > >> >>>> >> storage allocator picks the SolidFire plug-in, the storage > >> >>>> >> framework > >> >>>> >> invokes > >> >>>> >> a method on the plug-in that creates a volume on the SAN...inf= o > >> >>>> >> like > >> >>>> >> the IQN > >> >>>> >> of the SAN volume is recorded in the DB). > >> >>>> >> > >> >>>> >> * CitrixResourceBase's execute(AttachVolumeCommand) is execute= d. > >> >>>> >> It > >> >>>> >> determines based on a flag passed in that the storage in > question > >> >>>> >> is > >> >>>> >> "CloudStack-managed" storage (as opposed to "traditional" > >> >>>> >> preallocated > >> >>>> >> storage). This tells it to discover the iSCSI target. Once > >> >>>> >> discovered > >> >>>> >> it > >> >>>> >> determines if the iSCSI target already contains a storage > >> >>>> >> repository > >> >>>> >> (it > >> >>>> >> would if this were a re-attach situation). If it does contain = an > >> >>>> >> SR > >> >>>> >> already, > >> >>>> >> then there should already be one VDI, as well. If there is no > SR, > >> >>>> >> an > >> >>>> >> SR is > >> >>>> >> created and a single VDI is created within it (that takes up > about > >> >>>> >> as > >> >>>> >> much > >> >>>> >> space as was requested for the CloudStack volume). > >> >>>> >> > >> >>>> >> * The normal attach-volume logic continues (it depends on the > >> >>>> >> existence of > >> >>>> >> an SR and a VDI). > >> >>>> >> > >> >>>> >> The VMware case is essentially the same (mainly just substitut= e > >> >>>> >> datastore > >> >>>> >> for SR and VMDK for VDI). > >> >>>> >> > >> >>>> >> In both cases, all hosts in the cluster have discovered the > iSCSI > >> >>>> >> target, > >> >>>> >> but only the host that is currently running the VM that is usi= ng > >> >>>> >> the > >> >>>> >> VDI (or > >> >>>> >> VMKD) is actually using the disk. > >> >>>> >> > >> >>>> >> Live Migration should be OK because the hypervisors communicat= e > >> >>>> >> with > >> >>>> >> whatever metadata they have on the SR (or datastore). > >> >>>> >> > >> >>>> >> I see what you're saying with KVM, though. > >> >>>> >> > >> >>>> >> In that case, the hosts are clustered only in CloudStack's eye= s. > >> >>>> >> CS > >> >>>> >> controls > >> >>>> >> Live Migration. You don't really need a clustered filesystem o= n > >> >>>> >> the > >> >>>> >> LUN. The > >> >>>> >> LUN could be handed over raw to the VM using it. > >> >>>> >> > >> >>>> >> If there is a way for me to update the ACL list on the SAN to > have > >> >>>> >> only a > >> >>>> >> single KVM host have access to the volume, that would be ideal= . > >> >>>> >> > >> >>>> >> Also, I agree I'll need to use iscsiadm to discover and log in > to > >> >>>> >> the > >> >>>> >> iSCSI > >> >>>> >> target. I'll also need to take the resultant new device and pa= ss > >> >>>> >> it > >> >>>> >> into the > >> >>>> >> VM. > >> >>>> >> > >> >>>> >> Does this sound reasonable? Please call me out on anything I > seem > >> >>>> >> incorrect > >> >>>> >> about. :) > >> >>>> >> > >> >>>> >> Thanks for all the thought on this, Marcus! > >> >>>> >> > >> >>>> >> > >> >>>> >> On Fri, Sep 13, 2013 at 8:25 PM, Marcus Sorensen > >> >>>> >> > >> >>>> >> wrote: > >> >>>> >>> > >> >>>> >>> Perfect. 
You'll have a domain def ( the VM), a disk def, and > the > >> >>>> >>> attach > >> >>>> >>> the disk def to the vm. You may need to do your own > >> >>>> >>> StorageAdaptor > >> >>>> >>> and run > >> >>>> >>> iscsiadm commands to accomplish that, depending on how the > >> >>>> >>> libvirt > >> >>>> >>> iscsi > >> >>>> >>> works. My impression is that a 1:1:1 pool/lun/volume isn't ho= w > it > >> >>>> >>> works on > >> >>>> >>> xen at the momen., nor is it ideal. > >> >>>> >>> > >> >>>> >>> Your plugin will handle acls as far as which host can see whi= ch > >> >>>> >>> luns > >> >>>> >>> as > >> >>>> >>> well, I remember discussing that months ago, so that a disk > won't > >> >>>> >>> be > >> >>>> >>> connected until the hypervisor has exclusive access, so it wi= ll > >> >>>> >>> be > >> >>>> >>> safe and > >> >>>> >>> fence the disk from rogue nodes that cloudstack loses > >> >>>> >>> connectivity > >> >>>> >>> with. It > >> >>>> >>> should revoke access to everything but the target host... > Except > >> >>>> >>> for > >> >>>> >>> during > >> >>>> >>> migration but we can discuss that later, there's a migration > prep > >> >>>> >>> process > >> >>>> >>> where the new host can be added to the acls, and the old host > can > >> >>>> >>> be > >> >>>> >>> removed > >> >>>> >>> post migration. > >> >>>> >>> > >> >>>> >>> On Sep 13, 2013 8:16 PM, "Mike Tutkowski" > >> >>>> >>> > >> >>>> >>> wrote: > >> >>>> >>>> > >> >>>> >>>> Yeah, that would be ideal. > >> >>>> >>>> > >> >>>> >>>> So, I would still need to discover the iSCSI target, log in = to > >> >>>> >>>> it, > >> >>>> >>>> then > >> >>>> >>>> figure out what /dev/sdX was created as a result (and leave = it > >> >>>> >>>> as > >> >>>> >>>> is - do > >> >>>> >>>> not format it with any file system...clustered or not). I > would > >> >>>> >>>> pass that > >> >>>> >>>> device into the VM. > >> >>>> >>>> > >> >>>> >>>> Kind of accurate? > >> >>>> >>>> > >> >>>> >>>> > >> >>>> >>>> On Fri, Sep 13, 2013 at 8:07 PM, Marcus Sorensen > >> >>>> >>>> > >> >>>> >>>> wrote: > >> >>>> >>>>> > >> >>>> >>>>> Look in LibvirtVMDef.java (I think) for the disk definition= s. > >> >>>> >>>>> There are > >> >>>> >>>>> ones that work for block devices rather than files. You can > >> >>>> >>>>> piggy > >> >>>> >>>>> back off > >> >>>> >>>>> of the existing disk definitions and attach it to the vm as= a > >> >>>> >>>>> block device. > >> >>>> >>>>> The definition is an XML string per libvirt XML format. You > may > >> >>>> >>>>> want to use > >> >>>> >>>>> an alternate path to the disk rather than just /dev/sdx lik= e > I > >> >>>> >>>>> mentioned, > >> >>>> >>>>> there are by-id paths to the block devices, as well as othe= r > >> >>>> >>>>> ones > >> >>>> >>>>> that will > >> >>>> >>>>> be consistent and easier for management, not sure how > familiar > >> >>>> >>>>> you > >> >>>> >>>>> are with > >> >>>> >>>>> device naming on Linux. > >> >>>> >>>>> > >> >>>> >>>>> On Sep 13, 2013 8:00 PM, "Marcus Sorensen" > >> >>>> >>>>> > >> >>>> >>>>> wrote: > >> >>>> >>>>>> > >> >>>> >>>>>> No, as that would rely on virtualized network/iscsi > initiator > >> >>>> >>>>>> inside > >> >>>> >>>>>> the vm, which also sucks. I mean attach /dev/sdx (your lun > on > >> >>>> >>>>>> hypervisor) as > >> >>>> >>>>>> a disk to the VM, rather than attaching some image file th= at > >> >>>> >>>>>> resides on a > >> >>>> >>>>>> filesystem, mounted on the host, living on a target. 
> >> >>>> >>>>>> > >> >>>> >>>>>> Actually, if you plan on the storage supporting live > migration > >> >>>> >>>>>> I > >> >>>> >>>>>> think > >> >>>> >>>>>> this is the only way. You can't put a filesystem on it and > >> >>>> >>>>>> mount > >> >>>> >>>>>> it in two > >> >>>> >>>>>> places to facilitate migration unless its a clustered > >> >>>> >>>>>> filesystem, > >> >>>> >>>>>> in which > >> >>>> >>>>>> case you're back to shared mount point. > >> >>>> >>>>>> > >> >>>> >>>>>> As far as I'm aware, the xenserver SR style is basically L= VM > >> >>>> >>>>>> with > >> >>>> >>>>>> a xen > >> >>>> >>>>>> specific cluster management, a custom CLVM. They don't use= a > >> >>>> >>>>>> filesystem > >> >>>> >>>>>> either. > >> >>>> >>>>>> > >> >>>> >>>>>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski" > >> >>>> >>>>>> wrote: > >> >>>> >>>>>>> > >> >>>> >>>>>>> When you say, "wire up the lun directly to the vm," do yo= u > >> >>>> >>>>>>> mean > >> >>>> >>>>>>> circumventing the hypervisor? I didn't think we could do > that > >> >>>> >>>>>>> in > >> >>>> >>>>>>> CS. > >> >>>> >>>>>>> OpenStack, on the other hand, always circumvents the > >> >>>> >>>>>>> hypervisor, > >> >>>> >>>>>>> as far as I > >> >>>> >>>>>>> know. > >> >>>> >>>>>>> > >> >>>> >>>>>>> > >> >>>> >>>>>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen > >> >>>> >>>>>>> > >> >>>> >>>>>>> wrote: > >> >>>> >>>>>>>> > >> >>>> >>>>>>>> Better to wire up the lun directly to the vm unless ther= e > is > >> >>>> >>>>>>>> a > >> >>>> >>>>>>>> good > >> >>>> >>>>>>>> reason not to. > >> >>>> >>>>>>>> > >> >>>> >>>>>>>> On Sep 13, 2013 7:40 PM, "Marcus Sorensen" > >> >>>> >>>>>>>> > >> >>>> >>>>>>>> wrote: > >> >>>> >>>>>>>>> > >> >>>> >>>>>>>>> You could do that, but as mentioned I think its a mista= ke > >> >>>> >>>>>>>>> to > >> >>>> >>>>>>>>> go to > >> >>>> >>>>>>>>> the trouble of creating a 1:1 mapping of CS volumes to > luns > >> >>>> >>>>>>>>> and then putting > >> >>>> >>>>>>>>> a filesystem on it, mounting it, and then putting a QCO= W2 > >> >>>> >>>>>>>>> or > >> >>>> >>>>>>>>> even RAW disk > >> >>>> >>>>>>>>> image on that filesystem. You'll lose a lot of iops alo= ng > >> >>>> >>>>>>>>> the > >> >>>> >>>>>>>>> way, and have > >> >>>> >>>>>>>>> more overhead with the filesystem and its journaling, > etc. > >> >>>> >>>>>>>>> > >> >>>> >>>>>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski" > >> >>>> >>>>>>>>> wrote: > >> >>>> >>>>>>>>>> > >> >>>> >>>>>>>>>> Ah, OK, I didn't know that was such new ground in KVM > with > >> >>>> >>>>>>>>>> CS. > >> >>>> >>>>>>>>>> > >> >>>> >>>>>>>>>> So, the way people use our SAN with KVM and CS today i= s > by > >> >>>> >>>>>>>>>> selecting SharedMountPoint and specifying the location > of > >> >>>> >>>>>>>>>> the > >> >>>> >>>>>>>>>> share. > >> >>>> >>>>>>>>>> > >> >>>> >>>>>>>>>> They can set up their share using Open iSCSI by > >> >>>> >>>>>>>>>> discovering > >> >>>> >>>>>>>>>> their > >> >>>> >>>>>>>>>> iSCSI target, logging in to it, then mounting it > somewhere > >> >>>> >>>>>>>>>> on > >> >>>> >>>>>>>>>> their file > >> >>>> >>>>>>>>>> system. > >> >>>> >>>>>>>>>> > >> >>>> >>>>>>>>>> Would it make sense for me to just do that discovery, > >> >>>> >>>>>>>>>> logging > >> >>>> >>>>>>>>>> in, > >> >>>> >>>>>>>>>> and mounting behind the scenes for them and letting th= e > >> >>>> >>>>>>>>>> current code manage > >> >>>> >>>>>>>>>> the rest as it currently does? 
> >> >>>> >>>>>>>>>> > >> >>>> >>>>>>>>>> > >> >>>> >>>>>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen > >> >>>> >>>>>>>>>> wrote: > >> >>>> >>>>>>>>>>> > >> >>>> >>>>>>>>>>> Oh, hypervisor snapshots are a bit different. I need = to > >> >>>> >>>>>>>>>>> catch up > >> >>>> >>>>>>>>>>> on the work done in KVM, but this is basically just > disk > >> >>>> >>>>>>>>>>> snapshots + memory > >> >>>> >>>>>>>>>>> dump. I still think disk snapshots would preferably b= e > >> >>>> >>>>>>>>>>> handled by the SAN, > >> >>>> >>>>>>>>>>> and then memory dumps can go to secondary storage or > >> >>>> >>>>>>>>>>> something else. This is > >> >>>> >>>>>>>>>>> relatively new ground with CS and KVM, so we will wan= t > to > >> >>>> >>>>>>>>>>> see how others are > >> >>>> >>>>>>>>>>> planning theirs. > >> >>>> >>>>>>>>>>> > >> >>>> >>>>>>>>>>> On Sep 13, 2013 7:20 PM, "Marcus Sorensen" > >> >>>> >>>>>>>>>>> > >> >>>> >>>>>>>>>>> wrote: > >> >>>> >>>>>>>>>>>> > >> >>>> >>>>>>>>>>>> Let me back up and say I don't think you'd use a vdi > >> >>>> >>>>>>>>>>>> style > >> >>>> >>>>>>>>>>>> on an > >> >>>> >>>>>>>>>>>> iscsi lun. I think you'd want to treat it as a RAW > >> >>>> >>>>>>>>>>>> format. > >> >>>> >>>>>>>>>>>> Otherwise you're > >> >>>> >>>>>>>>>>>> putting a filesystem on your lun, mounting it, > creating > >> >>>> >>>>>>>>>>>> a > >> >>>> >>>>>>>>>>>> QCOW2 disk image, > >> >>>> >>>>>>>>>>>> and that seems unnecessary and a performance killer. > >> >>>> >>>>>>>>>>>> > >> >>>> >>>>>>>>>>>> So probably attaching the raw iscsi lun as a disk to > the > >> >>>> >>>>>>>>>>>> VM, and > >> >>>> >>>>>>>>>>>> handling snapshots on the San side via the storage > >> >>>> >>>>>>>>>>>> plugin > >> >>>> >>>>>>>>>>>> is best. My > >> >>>> >>>>>>>>>>>> impression from the storage plugin refactor was that > >> >>>> >>>>>>>>>>>> there > >> >>>> >>>>>>>>>>>> was a snapshot > >> >>>> >>>>>>>>>>>> service that would allow the San to handle snapshots= . > >> >>>> >>>>>>>>>>>> > >> >>>> >>>>>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen" > >> >>>> >>>>>>>>>>>> > >> >>>> >>>>>>>>>>>> wrote: > >> >>>> >>>>>>>>>>>>> > >> >>>> >>>>>>>>>>>>> Ideally volume snapshots can be handled by the SAN > back > >> >>>> >>>>>>>>>>>>> end, if > >> >>>> >>>>>>>>>>>>> the SAN supports it. The cloudstack mgmt server cou= ld > >> >>>> >>>>>>>>>>>>> call > >> >>>> >>>>>>>>>>>>> your plugin for > >> >>>> >>>>>>>>>>>>> volume snapshot and it would be hypervisor agnostic= . > As > >> >>>> >>>>>>>>>>>>> far as space, that > >> >>>> >>>>>>>>>>>>> would depend on how your SAN handles it. With ours, > we > >> >>>> >>>>>>>>>>>>> carve out luns from a > >> >>>> >>>>>>>>>>>>> pool, and the snapshot spave comes from the pool an= d > is > >> >>>> >>>>>>>>>>>>> independent of the > >> >>>> >>>>>>>>>>>>> LUN size the host sees. > >> >>>> >>>>>>>>>>>>> > >> >>>> >>>>>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski" > >> >>>> >>>>>>>>>>>>> wrote: > >> >>>> >>>>>>>>>>>>>> > >> >>>> >>>>>>>>>>>>>> Hey Marcus, > >> >>>> >>>>>>>>>>>>>> > >> >>>> >>>>>>>>>>>>>> I wonder if the iSCSI storage pool type for libvir= t > >> >>>> >>>>>>>>>>>>>> won't > >> >>>> >>>>>>>>>>>>>> work > >> >>>> >>>>>>>>>>>>>> when you take into consideration hypervisor > snapshots? > >> >>>> >>>>>>>>>>>>>> > >> >>>> >>>>>>>>>>>>>> On XenServer, when you take a hypervisor snapshot, > the > >> >>>> >>>>>>>>>>>>>> VDI for > >> >>>> >>>>>>>>>>>>>> the snapshot is placed on the same storage > repository > >> >>>> >>>>>>>>>>>>>> as > >> >>>> >>>>>>>>>>>>>> the volume is on. 
> >> >>>> >>>>>>>>>>>>>> > >> >>>> >>>>>>>>>>>>>> Same idea for VMware, I believe. > >> >>>> >>>>>>>>>>>>>> > >> >>>> >>>>>>>>>>>>>> So, what would happen in my case (let's say for > >> >>>> >>>>>>>>>>>>>> XenServer > >> >>>> >>>>>>>>>>>>>> and > >> >>>> >>>>>>>>>>>>>> VMware for 4.3 because I don't support hypervisor > >> >>>> >>>>>>>>>>>>>> snapshots in 4.2) is I'd > >> >>>> >>>>>>>>>>>>>> make an iSCSI target that is larger than what the > user > >> >>>> >>>>>>>>>>>>>> requested for the > >> >>>> >>>>>>>>>>>>>> CloudStack volume (which is fine because our SAN > >> >>>> >>>>>>>>>>>>>> thinly > >> >>>> >>>>>>>>>>>>>> provisions volumes, > >> >>>> >>>>>>>>>>>>>> so the space is not actually used unless it needs = to > >> >>>> >>>>>>>>>>>>>> be). > >> >>>> >>>>>>>>>>>>>> The CloudStack > >> >>>> >>>>>>>>>>>>>> volume would be the only "object" on the SAN volum= e > >> >>>> >>>>>>>>>>>>>> until > >> >>>> >>>>>>>>>>>>>> a hypervisor > >> >>>> >>>>>>>>>>>>>> snapshot is taken. This snapshot would also reside > on > >> >>>> >>>>>>>>>>>>>> the > >> >>>> >>>>>>>>>>>>>> SAN volume. > >> >>>> >>>>>>>>>>>>>> > >> >>>> >>>>>>>>>>>>>> If this is also how KVM behaves and there is no > >> >>>> >>>>>>>>>>>>>> creation > >> >>>> >>>>>>>>>>>>>> of > >> >>>> >>>>>>>>>>>>>> LUNs within an iSCSI target from libvirt (which, > even > >> >>>> >>>>>>>>>>>>>> if > >> >>>> >>>>>>>>>>>>>> there were support > >> >>>> >>>>>>>>>>>>>> for this, our SAN currently only allows one LUN pe= r > >> >>>> >>>>>>>>>>>>>> iSCSI > >> >>>> >>>>>>>>>>>>>> target), then I > >> >>>> >>>>>>>>>>>>>> don't see how using this model will work. > >> >>>> >>>>>>>>>>>>>> > >> >>>> >>>>>>>>>>>>>> Perhaps I will have to go enhance the current way > this > >> >>>> >>>>>>>>>>>>>> works > >> >>>> >>>>>>>>>>>>>> with DIR? > >> >>>> >>>>>>>>>>>>>> > >> >>>> >>>>>>>>>>>>>> What do you think? > >> >>>> >>>>>>>>>>>>>> > >> >>>> >>>>>>>>>>>>>> Thanks > >> >>>> >>>>>>>>>>>>>> > >> >>>> >>>>>>>>>>>>>> > >> >>>> >>>>>>>>>>>>>> > >> >>>> >>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski > >> >>>> >>>>>>>>>>>>>> wrote: > >> >>>> >>>>>>>>>>>>>>> > >> >>>> >>>>>>>>>>>>>>> That appears to be the way it's used for iSCSI > access > >> >>>> >>>>>>>>>>>>>>> today. > >> >>>> >>>>>>>>>>>>>>> > >> >>>> >>>>>>>>>>>>>>> I suppose I could go that route, too, but I might > as > >> >>>> >>>>>>>>>>>>>>> well > >> >>>> >>>>>>>>>>>>>>> leverage what libvirt has for iSCSI instead. > >> >>>> >>>>>>>>>>>>>>> > >> >>>> >>>>>>>>>>>>>>> > >> >>>> >>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus Sorensen > >> >>>> >>>>>>>>>>>>>>> wrote: > >> >>>> >>>>>>>>>>>>>>>> > >> >>>> >>>>>>>>>>>>>>>> To your question about SharedMountPoint, I belie= ve > >> >>>> >>>>>>>>>>>>>>>> it > >> >>>> >>>>>>>>>>>>>>>> just > >> >>>> >>>>>>>>>>>>>>>> acts like a > >> >>>> >>>>>>>>>>>>>>>> 'DIR' storage type or something similar to that. > The > >> >>>> >>>>>>>>>>>>>>>> end-user > >> >>>> >>>>>>>>>>>>>>>> is > >> >>>> >>>>>>>>>>>>>>>> responsible for mounting a file system that all > KVM > >> >>>> >>>>>>>>>>>>>>>> hosts can > >> >>>> >>>>>>>>>>>>>>>> access, > >> >>>> >>>>>>>>>>>>>>>> and CloudStack is oblivious to what is providing > the > >> >>>> >>>>>>>>>>>>>>>> storage. > >> >>>> >>>>>>>>>>>>>>>> It could > >> >>>> >>>>>>>>>>>>>>>> be NFS, or OCFS2, or some other clustered > >> >>>> >>>>>>>>>>>>>>>> filesystem, > >> >>>> >>>>>>>>>>>>>>>> cloudstack just > >> >>>> >>>>>>>>>>>>>>>> knows that the provided directory path has VM > >> >>>> >>>>>>>>>>>>>>>> images. 
> >> >>>> >>>>>>>>>>>>>>>> > >> >>>> >>>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus Sorensen > >> >>>> >>>>>>>>>>>>>>>> wrote: > >> >>>> >>>>>>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and iSCSI all at > the > >> >>>> >>>>>>>>>>>>>>>> > same > >> >>>> >>>>>>>>>>>>>>>> > time. > >> >>>> >>>>>>>>>>>>>>>> > Multiples, in fact. > >> >>>> >>>>>>>>>>>>>>>> > > >> >>>> >>>>>>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike Tutkowsk= i > >> >>>> >>>>>>>>>>>>>>>> > wrote: > >> >>>> >>>>>>>>>>>>>>>> >> Looks like you can have multiple storage pool= s: > >> >>>> >>>>>>>>>>>>>>>> >> > >> >>>> >>>>>>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list > >> >>>> >>>>>>>>>>>>>>>> >> Name State Autostart > >> >>>> >>>>>>>>>>>>>>>> >> ----------------------------------------- > >> >>>> >>>>>>>>>>>>>>>> >> default active yes > >> >>>> >>>>>>>>>>>>>>>> >> iSCSI active no > >> >>>> >>>>>>>>>>>>>>>> >> > >> >>>> >>>>>>>>>>>>>>>> >> > >> >>>> >>>>>>>>>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike Tutkows= ki > >> >>>> >>>>>>>>>>>>>>>> >> wrote: > >> >>>> >>>>>>>>>>>>>>>> >>> > >> >>>> >>>>>>>>>>>>>>>> >>> Reading through the docs you pointed out. > >> >>>> >>>>>>>>>>>>>>>> >>> > >> >>>> >>>>>>>>>>>>>>>> >>> I see what you're saying now. > >> >>>> >>>>>>>>>>>>>>>> >>> > >> >>>> >>>>>>>>>>>>>>>> >>> You can create an iSCSI (libvirt) storage po= ol > >> >>>> >>>>>>>>>>>>>>>> >>> based on > >> >>>> >>>>>>>>>>>>>>>> >>> an iSCSI target. > >> >>>> >>>>>>>>>>>>>>>> >>> > >> >>>> >>>>>>>>>>>>>>>> >>> In my case, the iSCSI target would only have > one > >> >>>> >>>>>>>>>>>>>>>> >>> LUN, so > >> >>>> >>>>>>>>>>>>>>>> >>> there would only > >> >>>> >>>>>>>>>>>>>>>> >>> be one iSCSI (libvirt) storage volume in the > >> >>>> >>>>>>>>>>>>>>>> >>> (libvirt) > >> >>>> >>>>>>>>>>>>>>>> >>> storage pool. > >> >>>> >>>>>>>>>>>>>>>> >>> > >> >>>> >>>>>>>>>>>>>>>> >>> As you say, my plug-in creates and destroys > >> >>>> >>>>>>>>>>>>>>>> >>> iSCSI > >> >>>> >>>>>>>>>>>>>>>> >>> targets/LUNs on the > >> >>>> >>>>>>>>>>>>>>>> >>> SolidFire SAN, so it is not a problem that > >> >>>> >>>>>>>>>>>>>>>> >>> libvirt > >> >>>> >>>>>>>>>>>>>>>> >>> does > >> >>>> >>>>>>>>>>>>>>>> >>> not support > >> >>>> >>>>>>>>>>>>>>>> >>> creating/deleting iSCSI targets/LUNs. > >> >>>> >>>>>>>>>>>>>>>> >>> > >> >>>> >>>>>>>>>>>>>>>> >>> It looks like I need to test this a bit to s= ee > >> >>>> >>>>>>>>>>>>>>>> >>> if > >> >>>> >>>>>>>>>>>>>>>> >>> libvirt > >> >>>> >>>>>>>>>>>>>>>> >>> supports > >> >>>> >>>>>>>>>>>>>>>> >>> multiple iSCSI storage pools (as you > mentioned, > >> >>>> >>>>>>>>>>>>>>>> >>> since > >> >>>> >>>>>>>>>>>>>>>> >>> each one of its > >> >>>> >>>>>>>>>>>>>>>> >>> storage pools would map to one of my iSCSI > >> >>>> >>>>>>>>>>>>>>>> >>> targets/LUNs). 
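(As a first test of whether multiple iSCSI libvirt pools behave the way I need, something along these lines is probably what I'll try with the Java bindings. The host, IQN, and pool name below are placeholders and this is just a throwaway experiment, not plug-in code:)

    // Throwaway test: define/start one libvirt iSCSI pool per SAN target
    // and list the LUNs it exposes. Host, IQN, and pool name are placeholders.
    import org.libvirt.Connect;
    import org.libvirt.StoragePool;

    public class IscsiPoolTest {
        public static void main(String[] args) throws Exception {
            String poolXml =
                  "<pool type='iscsi'>"
                + "  <name>sf-volume-1</name>"
                + "  <source>"
                + "    <host name='192.168.1.10'/>"
                + "    <device path='iqn.2013-09.com.example:volume-1'/>"
                + "  </source>"
                + "  <target><path>/dev/disk/by-path</path></target>"
                + "</pool>";

            Connect conn = new Connect("qemu:///system");
            StoragePool pool = conn.storagePoolCreateXML(poolXml, 0);
            pool.refresh(0);
            for (String vol : pool.listVolumes()) {
                System.out.println("LUN exposed as volume: " + vol);
            }
        }
    }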
> >> >>>> >>>>>>>>>>>>>>>> >>> > >> >>>> >>>>>>>>>>>>>>>> >>> > >> >>>> >>>>>>>>>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM, Mike > Tutkowski > >> >>>> >>>>>>>>>>>>>>>> >>> wrote: > >> >>>> >>>>>>>>>>>>>>>> >>>> > >> >>>> >>>>>>>>>>>>>>>> >>>> LibvirtStoragePoolDef has this type: > >> >>>> >>>>>>>>>>>>>>>> >>>> > >> >>>> >>>>>>>>>>>>>>>> >>>> public enum poolType { > >> >>>> >>>>>>>>>>>>>>>> >>>> > >> >>>> >>>>>>>>>>>>>>>> >>>> ISCSI("iscsi"), NETFS("netfs"), > >> >>>> >>>>>>>>>>>>>>>> >>>> LOGICAL("logical"), DIR("dir"), > >> >>>> >>>>>>>>>>>>>>>> >>>> RBD("rbd"); > >> >>>> >>>>>>>>>>>>>>>> >>>> > >> >>>> >>>>>>>>>>>>>>>> >>>> String _poolType; > >> >>>> >>>>>>>>>>>>>>>> >>>> > >> >>>> >>>>>>>>>>>>>>>> >>>> poolType(String poolType) { > >> >>>> >>>>>>>>>>>>>>>> >>>> > >> >>>> >>>>>>>>>>>>>>>> >>>> _poolType =3D poolType; > >> >>>> >>>>>>>>>>>>>>>> >>>> > >> >>>> >>>>>>>>>>>>>>>> >>>> } > >> >>>> >>>>>>>>>>>>>>>> >>>> > >> >>>> >>>>>>>>>>>>>>>> >>>> @Override > >> >>>> >>>>>>>>>>>>>>>> >>>> > >> >>>> >>>>>>>>>>>>>>>> >>>> public String toString() { > >> >>>> >>>>>>>>>>>>>>>> >>>> > >> >>>> >>>>>>>>>>>>>>>> >>>> return _poolType; > >> >>>> >>>>>>>>>>>>>>>> >>>> > >> >>>> >>>>>>>>>>>>>>>> >>>> } > >> >>>> >>>>>>>>>>>>>>>> >>>> > >> >>>> >>>>>>>>>>>>>>>> >>>> } > >> >>>> >>>>>>>>>>>>>>>> >>>> > >> >>>> >>>>>>>>>>>>>>>> >>>> > >> >>>> >>>>>>>>>>>>>>>> >>>> It doesn't look like the iSCSI type is > >> >>>> >>>>>>>>>>>>>>>> >>>> currently > >> >>>> >>>>>>>>>>>>>>>> >>>> being > >> >>>> >>>>>>>>>>>>>>>> >>>> used, but I'm > >> >>>> >>>>>>>>>>>>>>>> >>>> understanding more what you were getting at= . > >> >>>> >>>>>>>>>>>>>>>> >>>> > >> >>>> >>>>>>>>>>>>>>>> >>>> > >> >>>> >>>>>>>>>>>>>>>> >>>> Can you tell me for today (say, 4.2), when > >> >>>> >>>>>>>>>>>>>>>> >>>> someone > >> >>>> >>>>>>>>>>>>>>>> >>>> selects the > >> >>>> >>>>>>>>>>>>>>>> >>>> SharedMountPoint option and uses it with > iSCSI, > >> >>>> >>>>>>>>>>>>>>>> >>>> is > >> >>>> >>>>>>>>>>>>>>>> >>>> that > >> >>>> >>>>>>>>>>>>>>>> >>>> the "netfs" option > >> >>>> >>>>>>>>>>>>>>>> >>>> above or is that just for NFS? > >> >>>> >>>>>>>>>>>>>>>> >>>> > >> >>>> >>>>>>>>>>>>>>>> >>>> > >> >>>> >>>>>>>>>>>>>>>> >>>> Thanks! > >> >>>> >>>>>>>>>>>>>>>> >>>> > >> >>>> >>>>>>>>>>>>>>>> >>>> > >> >>>> >>>>>>>>>>>>>>>> >>>> > >> >>>> >>>>>>>>>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM, Marcus > >> >>>> >>>>>>>>>>>>>>>> >>>> Sorensen > >> >>>> >>>>>>>>>>>>>>>> >>>> > >> >>>> >>>>>>>>>>>>>>>> >>>> wrote: > >> >>>> >>>>>>>>>>>>>>>> >>>>> > >> >>>> >>>>>>>>>>>>>>>> >>>>> Take a look at this: > >> >>>> >>>>>>>>>>>>>>>> >>>>> > >> >>>> >>>>>>>>>>>>>>>> >>>>> > >> >>>> >>>>>>>>>>>>>>>> >>>>> > >> >>>> >>>>>>>>>>>>>>>> >>>>> > http://libvirt.org/storage.html#StorageBackendISCSI > >> >>>> >>>>>>>>>>>>>>>> >>>>> > >> >>>> >>>>>>>>>>>>>>>> >>>>> "Volumes must be pre-allocated on the iSCS= I > >> >>>> >>>>>>>>>>>>>>>> >>>>> server, and > >> >>>> >>>>>>>>>>>>>>>> >>>>> cannot be > >> >>>> >>>>>>>>>>>>>>>> >>>>> created via the libvirt APIs.", which I > >> >>>> >>>>>>>>>>>>>>>> >>>>> believe > >> >>>> >>>>>>>>>>>>>>>> >>>>> your > >> >>>> >>>>>>>>>>>>>>>> >>>>> plugin will take > >> >>>> >>>>>>>>>>>>>>>> >>>>> care of. 
Libvirt just does the work of > logging > >> >>>> >>>>>>>>>>>>>>>> >>>>> in > >> >>>> >>>>>>>>>>>>>>>> >>>>> and > >> >>>> >>>>>>>>>>>>>>>> >>>>> hooking it up to > >> >>>> >>>>>>>>>>>>>>>> >>>>> the VM (I believe the Xen api does that wo= rk > >> >>>> >>>>>>>>>>>>>>>> >>>>> in > >> >>>> >>>>>>>>>>>>>>>> >>>>> the Xen > >> >>>> >>>>>>>>>>>>>>>> >>>>> stuff). > >> >>>> >>>>>>>>>>>>>>>> >>>>> > >> >>>> >>>>>>>>>>>>>>>> >>>>> What I'm not sure about is whether this > >> >>>> >>>>>>>>>>>>>>>> >>>>> provides > >> >>>> >>>>>>>>>>>>>>>> >>>>> a 1:1 > >> >>>> >>>>>>>>>>>>>>>> >>>>> mapping, or if > >> >>>> >>>>>>>>>>>>>>>> >>>>> it just allows you to register 1 iscsi > device > >> >>>> >>>>>>>>>>>>>>>> >>>>> as > >> >>>> >>>>>>>>>>>>>>>> >>>>> a > >> >>>> >>>>>>>>>>>>>>>> >>>>> pool. You may need > >> >>>> >>>>>>>>>>>>>>>> >>>>> to write some test code or read up a bit > more > >> >>>> >>>>>>>>>>>>>>>> >>>>> about > >> >>>> >>>>>>>>>>>>>>>> >>>>> this. Let us know. > >> >>>> >>>>>>>>>>>>>>>> >>>>> If it doesn't, you may just have to write > your > >> >>>> >>>>>>>>>>>>>>>> >>>>> own > >> >>>> >>>>>>>>>>>>>>>> >>>>> storage adaptor > >> >>>> >>>>>>>>>>>>>>>> >>>>> rather than changing > >> >>>> >>>>>>>>>>>>>>>> >>>>> LibvirtStorageAdaptor.java. > >> >>>> >>>>>>>>>>>>>>>> >>>>> We > >> >>>> >>>>>>>>>>>>>>>> >>>>> can cross that > >> >>>> >>>>>>>>>>>>>>>> >>>>> bridge when we get there. > >> >>>> >>>>>>>>>>>>>>>> >>>>> > >> >>>> >>>>>>>>>>>>>>>> >>>>> As far as interfacing with libvirt, see th= e > >> >>>> >>>>>>>>>>>>>>>> >>>>> java > >> >>>> >>>>>>>>>>>>>>>> >>>>> bindings doc. > >> >>>> >>>>>>>>>>>>>>>> >>>>> http://libvirt.org/sources/java/javadoc/ > >> >>>> >>>>>>>>>>>>>>>> >>>>> Normally, > >> >>>> >>>>>>>>>>>>>>>> >>>>> you'll see a > >> >>>> >>>>>>>>>>>>>>>> >>>>> connection object be made, then calls made > to > >> >>>> >>>>>>>>>>>>>>>> >>>>> that > >> >>>> >>>>>>>>>>>>>>>> >>>>> 'conn' object. You > >> >>>> >>>>>>>>>>>>>>>> >>>>> can look at the LibvirtStorageAdaptor to s= ee > >> >>>> >>>>>>>>>>>>>>>> >>>>> how > >> >>>> >>>>>>>>>>>>>>>> >>>>> that > >> >>>> >>>>>>>>>>>>>>>> >>>>> is done for > >> >>>> >>>>>>>>>>>>>>>> >>>>> other pool types, and maybe write some tes= t > >> >>>> >>>>>>>>>>>>>>>> >>>>> java > >> >>>> >>>>>>>>>>>>>>>> >>>>> code > >> >>>> >>>>>>>>>>>>>>>> >>>>> to see if you > >> >>>> >>>>>>>>>>>>>>>> >>>>> can interface with libvirt and register > iscsi > >> >>>> >>>>>>>>>>>>>>>> >>>>> storage > >> >>>> >>>>>>>>>>>>>>>> >>>>> pools before you > >> >>>> >>>>>>>>>>>>>>>> >>>>> get started. > >> >>>> >>>>>>>>>>>>>>>> >>>>> > >> >>>> >>>>>>>>>>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM, Mike > >> >>>> >>>>>>>>>>>>>>>> >>>>> Tutkowski > >> >>>> >>>>>>>>>>>>>>>> >>>>> wrote: > >> >>>> >>>>>>>>>>>>>>>> >>>>> > So, Marcus, I need to investigate libvir= t > >> >>>> >>>>>>>>>>>>>>>> >>>>> > more, > >> >>>> >>>>>>>>>>>>>>>> >>>>> > but > >> >>>> >>>>>>>>>>>>>>>> >>>>> > you figure it > >> >>>> >>>>>>>>>>>>>>>> >>>>> > supports > >> >>>> >>>>>>>>>>>>>>>> >>>>> > connecting to/disconnecting from iSCSI > >> >>>> >>>>>>>>>>>>>>>> >>>>> > targets, > >> >>>> >>>>>>>>>>>>>>>> >>>>> > right? 
> >> >>>> >>>>>>>>>>>>>>>> >>>>> > > >> >>>> >>>>>>>>>>>>>>>> >>>>> > > >> >>>> >>>>>>>>>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM, Mike > >> >>>> >>>>>>>>>>>>>>>> >>>>> > Tutkowski > >> >>>> >>>>>>>>>>>>>>>> >>>>> > wrote: > >> >>>> >>>>>>>>>>>>>>>> >>>>> >> > >> >>>> >>>>>>>>>>>>>>>> >>>>> >> OK, thanks, Marcus > >> >>>> >>>>>>>>>>>>>>>> >>>>> >> > >> >>>> >>>>>>>>>>>>>>>> >>>>> >> I am currently looking through some of > the > >> >>>> >>>>>>>>>>>>>>>> >>>>> >> classes > >> >>>> >>>>>>>>>>>>>>>> >>>>> >> you pointed out > >> >>>> >>>>>>>>>>>>>>>> >>>>> >> last > >> >>>> >>>>>>>>>>>>>>>> >>>>> >> week or so. > >> >>>> >>>>>>>>>>>>>>>> >>>>> >> > >> >>>> >>>>>>>>>>>>>>>> >>>>> >> > >> >>>> >>>>>>>>>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26 PM, Marcus > >> >>>> >>>>>>>>>>>>>>>> >>>>> >> Sorensen > >> >>>> >>>>>>>>>>>>>>>> >>>>> >> > >> >>>> >>>>>>>>>>>>>>>> >>>>> >> wrote: > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> Yes, my guess is that you will need th= e > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> iscsi > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> initiator utilities > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> installed. There should be standard > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> packages > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> for > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> any distro. Then > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> you'd call > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> an agent storage adaptor to do the > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> initiator > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> login. > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> See the info I > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> sent > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> previously about > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> LibvirtStorageAdaptor.java > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> and > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> libvirt iscsi > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> storage type > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> to see if that fits your need. > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike > Tutkowski" > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> wrote: > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Hi, > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> As you may remember, during the 4.2 > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> release > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> I > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> developed a SolidFire > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> (storage) plug-in for CloudStack. > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> This plug-in was invoked by the stora= ge > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> framework > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> at the necessary > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> times > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> so that I could dynamically create an= d > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> delete > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> volumes on the > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> SolidFire SAN > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> (among other activities). > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> This is necessary so I can establish = a > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> 1:1 > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> mapping > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> between a > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> CloudStack > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> volume and a SolidFire volume for QoS= . 
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> In the past, CloudStack always expect= ed > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> the > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> admin > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> to create large > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> volumes ahead of time and those volum= es > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> would > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> likely house many > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> root and > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> data disks (which is not QoS friendly= ). > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> To make this 1:1 mapping scheme work,= I > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> needed to > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> modify logic in > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> the > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> XenServer and VMware plug-ins so they > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> could > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> create/delete storage > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> repositories/datastores as needed. > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> For 4.3 I want to make this happen wi= th > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> KVM. > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> I'm coming up to speed with how this > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> might > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> work on > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> KVM, but I'm > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> still > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> pretty new to KVM. > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Does anyone familiar with KVM know ho= w > I > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> will need > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> to interact with > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> the > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> iSCSI target? For example, will I hav= e > to > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> expect > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Open iSCSI will be > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> installed on the KVM host and use it > for > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> this to > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> work? > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Thanks for any suggestions, > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Mike > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> -- > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Mike Tutkowski > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Senior CloudStack Developer, SolidFir= e > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Inc. > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> e: mike.tutkowski@solidfire.com > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> o: 303.746.7302 > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Advancing the way the world uses the > >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> cloud=99 > >> >>>> >>>>>>>>>>>>>>>> >>>>> >> > >> >>>> >>>>>>>>>>>>>>>> >>>>> >> > >> >>>> >>>>>>>>>>>>>>>> >>>>> >> > >> >>>> >>>>>>>>>>>>>>>> >>>>> >> > >> >>>> >>>>>>>>>>>>>>>> >>>>> >> -- > >> >>>> >>>>>>>>>>>>>>>> >>>>> >> Mike Tutkowski > >> >>>> >>>>>>>>>>>>>>>> >>>>> >> Senior CloudStack Developer, SolidFire > Inc. 
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> e: mike.tutkowski@solidfire.com > >> >>>> >>>>>>>>>>>>>>>> >>>>> >> o: 303.746.7302 > >> >>>> >>>>>>>>>>>>>>>> >>>>> >> Advancing the way the world uses the > cloud=99 > >> >>>> >>>>>>>>>>>>>>>> >>>>> > > >> >>>> >>>>>>>>>>>>>>>> >>>>> > > >> >>>> >>>>>>>>>>>>>>>> >>>>> > > >> >>>> >>>>>>>>>>>>>>>> >>>>> > > >> >>>> >>>>>>>>>>>>>>>> >>>>> > -- > >> >>>> >>>>>>>>>>>>>>>> >>>>> > Mike Tutkowski > >> >>>> >>>>>>>>>>>>>>>> >>>>> > Senior CloudStack Developer, SolidFire > Inc. > >> >>>> >>>>>>>>>>>>>>>> >>>>> > e: mike.tutkowski@solidfire.com > >> >>>> >>>>>>>>>>>>>>>> >>>>> > o: 303.746.7302 > >> >>>> >>>>>>>>>>>>>>>> >>>>> > Advancing the way the world uses the > cloud=99 > >> >>>> >>>>>>>>>>>>>>>> >>>> > >> >>>> >>>>>>>>>>>>>>>> >>>> > >> >>>> >>>>>>>>>>>>>>>> >>>> > >> >>>> >>>>>>>>>>>>>>>> >>>> > >> >>>> >>>>>>>>>>>>>>>> >>>> -- > >> >>>> >>>>>>>>>>>>>>>> >>>> Mike Tutkowski > >> >>>> >>>>>>>>>>>>>>>> >>>> Senior CloudStack Developer, SolidFire Inc. > >> >>>> >>>>>>>>>>>>>>>> >>>> e: mike.tutkowski@solidfire.com > >> >>>> >>>>>>>>>>>>>>>> >>>> o: 303.746.7302 > >> >>>> >>>>>>>>>>>>>>>> >>>> Advancing the way the world uses the cloud= =99 > >> >>>> >>>>>>>>>>>>>>>> >>> > >> >>>> >>>>>>>>>>>>>>>> >>> > >> >>>> >>>>>>>>>>>>>>>> >>> > >> >>>> >>>>>>>>>>>>>>>> >>> > >> >>>> >>>>>>>>>>>>>>>> >>> -- > >> >>>> >>>>>>>>>>>>>>>> >>> Mike Tutkowski > >> >>>> >>>>>>>>>>>>>>>> >>> Senior CloudStack Developer, SolidFire Inc. > >> >>>> >>>>>>>>>>>>>>>> >>> e: mike.tutkowski@solidfire.com > >> >>>> >>>>>>>>>>>>>>>> >>> o: 303.746.7302 > >> >>>> >>>>>>>>>>>>>>>> >>> Advancing the way the world uses the cloud= =99 > >> >>>> >>>>>>>>>>>>>>>> >> > >> >>>> >>>>>>>>>>>>>>>> >> > >> >>>> >>>>>>>>>>>>>>>> >> > >> >>>> >>>>>>>>>>>>>>>> >> > >> >>>> >>>>>>>>>>>>>>>> >> -- > >> >>>> >>>>>>>>>>>>>>>> >> Mike Tutkowski > >> >>>> >>>>>>>>>>>>>>>> >> Senior CloudStack Developer, SolidFire Inc. > >> >>>> >>>>>>>>>>>>>>>> >> e: mike.tutkowski@solidfire.com > >> >>>> >>>>>>>>>>>>>>>> >> o: 303.746.7302 > >> >>>> >>>>>>>>>>>>>>>> >> Advancing the way the world uses the cloud=99 > >> >>>> >>>>>>>>>>>>>>> > >> >>>> >>>>>>>>>>>>>>> > >> >>>> >>>>>>>>>>>>>>> > >> >>>> >>>>>>>>>>>>>>> > >> >>>> >>>>>>>>>>>>>>> -- > >> >>>> >>>>>>>>>>>>>>> Mike Tutkowski > >> >>>> >>>>>>>>>>>>>>> Senior CloudStack Developer, SolidFire Inc. > >> >>>> >>>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com > >> >>>> >>>>>>>>>>>>>>> o: 303.746.7302 > >> >>>> >>>>>>>>>>>>>>> Advancing the way the world uses the cloud=99 > >> >>>> >>>>>>>>>>>>>> > >> >>>> >>>>>>>>>>>>>> > >> >>>> >>>>>>>>>>>>>> > >> >>>> >>>>>>>>>>>>>> > >> >>>> >>>>>>>>>>>>>> -- > >> >>>> >>>>>>>>>>>>>> Mike Tutkowski > >> >>>> >>>>>>>>>>>>>> Senior CloudStack Developer, SolidFire Inc. > >> >>>> >>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com > >> >>>> >>>>>>>>>>>>>> o: 303.746.7302 > >> >>>> >>>>>>>>>>>>>> Advancing the way the world uses the cloud=99 > >> >>>> >>>>>>>>>> > >> >>>> >>>>>>>>>> > >> >>>> >>>>>>>>>> > >> >>>> >>>>>>>>>> > >> >>>> >>>>>>>>>> -- > >> >>>> >>>>>>>>>> Mike Tutkowski > >> >>>> >>>>>>>>>> Senior CloudStack Developer, SolidFire Inc. > >> >>>> >>>>>>>>>> e: mike.tutkowski@solidfire.com > >> >>>> >>>>>>>>>> o: 303.746.7302 > >> >>>> >>>>>>>>>> Advancing the way the world uses the cloud=99 > >> >>>> >>>>>>> > >> >>>> >>>>>>> > >> >>>> >>>>>>> > >> >>>> >>>>>>> > >> >>>> >>>>>>> -- > >> >>>> >>>>>>> Mike Tutkowski > >> >>>> >>>>>>> Senior CloudStack Developer, SolidFire Inc. 
> >> >>>> >>>>>>> e: mike.tutkowski@solidfire.com
> >> >>>> >>>>>>> o: 303.746.7302
> >> >>>> >>>>>>> Advancing the way the world uses the cloud™

--
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the cloud *™*