Subject: Re: Managed storage with KVM
From: Mike Tutkowski <mike.tutkowski@solidfire.com>
To: dev@cloudstack.apache.org
Date: Tue, 17 Sep 2013 23:52:39 -0600

Ah, I think I see the miscommunication. I should have gone into a bit more
detail about the SolidFire SAN.

It is built from the ground up to support QoS on a LUN-by-LUN basis. Every
LUN is assigned a Min, Max, and Burst number of IOPS. The Min IOPS are a
guaranteed number (as long as the SAN itself is not over-provisioned).
Capacity and IOPS are provisioned independently. Multiple volumes and
multiple tenants using the same SAN do not suffer from the Noisy Neighbor
effect.

When you create a Disk Offering in CS that is storage-tagged to use
SolidFire primary storage, you specify a Min, Max, and Burst number of IOPS
to provision from the SAN for volumes created from that Disk Offering.

There is no notion of RAID groups that you see in more traditional SANs.
The SAN is built from clusters of storage nodes, and data is replicated
amongst all SSDs in all storage nodes (this is an SSD-only SAN) in the
cluster to avoid hot spots and to protect the data should drives and/or
nodes fail. You then scale the SAN by adding new storage nodes. Data is
compressed and de-duplicated inline across the cluster, and all volumes
are thinly provisioned.
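To make that concrete, the provisioning path described above could be
sketched roughly like this (illustrative only; SanClient and createLun()
are invented stand-ins for the SAN's API, not the actual plug-in code):

// Hypothetical thin client for the SAN's API.
interface SanClient {
    /** Returns the IQN of the newly created LUN. */
    String createLun(String name, long sizeBytes,
                     long minIops, long maxIops, long burstIops);
}

class QosLunProvisioner {
    private final SanClient san;

    QosLunProvisioner(SanClient san) {
        this.san = san;
    }

    // One LUN per CloudStack volume; capacity and IOPS are provisioned
    // independently on the SAN, so both are passed explicitly. The
    // Min/Max/Burst triple mirrors the Disk Offering fields.
    String createVolume(String volumeUuid, long sizeBytes,
                        long minIops, long maxIops, long burstIops) {
        return san.createLun(volumeUuid, sizeBytes,
                             minIops, maxIops, burstIops);
    }
}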
On Tue, Sep 17, 2013 at 11:27 PM, Marcus Sorensen wrote:

I'm surprised there's no mention of pool on the SAN in your description of
the framework. I had assumed this was specific to your implementation,
because normally SANs host multiple disk pools, maybe multiple RAID 50s and
10s, or however the SAN admin wants to split it up. Maybe a pool intended
for root disks and a separate one for data disks. Or one pool for
cloudstack and one dedicated to some other internal db application. But it
sounds as though there's no place to specify which disks or pool on the SAN
to use.

We implemented our own internal storage SAN plugin based on 4.1. We used
the 'path' attribute of the primary storage pool object to specify which
pool name on the back-end SAN to use, so we could create all-SSD pools and
slower spindle pools, then differentiate between them based on storage
tags. Normally the path attribute would be the mount point for NFS, but
it's just a string. So when registering ours we enter the SAN DNS host
name, the SAN's REST API port, and the pool name. Then LUNs created from
that primary storage come from the matching disk pool on the SAN. We can
create and register multiple pools of different types and purposes on the
same SAN. We haven't yet gotten to porting it to the 4.2 framework, so it
will be interesting to see what we can come up with to make it work
similarly.

On Sep 17, 2013 10:43 PM, "Mike Tutkowski" wrote:

What you're saying here is definitely something we should talk about.

Hopefully my previous e-mail has clarified how this works a bit.

It mainly comes down to this:

For the first time in CS history, primary storage is no longer required to
be preallocated by the admin and then handed to CS. CS volumes don't have
to share a preallocated volume anymore.

As of 4.2, primary storage can be based on a SAN (or some other storage
device). You can tell CS how many bytes and IOPS to use from this storage
device and CS invokes the appropriate plug-in to carve out LUNs
dynamically.

Each LUN is home to one and only one data disk. Data disks - in this
model - never share a LUN.

The main use case for this is so a CS volume can deliver guaranteed IOPS
if the storage device (ex. SolidFire SAN) delivers guaranteed IOPS on a
LUN-by-LUN basis.

On Tue, Sep 17, 2013 at 10:16 PM, Marcus Sorensen wrote:

I guess whether or not a solidfire device is capable of hosting multiple
disk pools is irrelevant; we'd hope that we could get the stats (maybe
30TB available, and 15TB allocated in LUNs). But if these stats aren't
collected, I can't as an admin define multiple pools and expect cloudstack
to allocate evenly from them or fill one up and move to the next, because
it doesn't know how big it is.

Ultimately this discussion has nothing to do with the KVM stuff itself,
just a tangent, but something to think about.
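For reference, the 'path' trick Marcus describes above might look roughly
like this (field and class names are invented; 'path' is normally the NFS
mount point but is just a string to CloudStack):

class SanPoolInfo {
    final String sanHost;   // SAN DNS host name
    final int restPort;     // the SAN's REST API port
    final String diskPool;  // back-end disk pool name, carried in 'path'

    SanPoolInfo(String host, int port, String path) {
        this.sanHost = host;
        this.restPort = port;
        // e.g. path = "/all-ssd-pool"; strip the leading slash to get
        // the SAN-side pool name.
        this.diskPool = path.startsWith("/") ? path.substring(1) : path;
    }
}

LUNs for that primary storage then come out of diskPool, and storage tags
on Disk Offerings pick between, say, the all-SSD pool and the spindle pool.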
On Tue, Sep 17, 2013 at 10:13 PM, Marcus Sorensen wrote:

Ok, on most storage pools it shows how many GB free/used when listing the
pool, both via API and in the UI. I'm guessing those are empty then for
the SolidFire storage, but it seems like the user should have to define
some sort of pool that the LUNs get carved out of, and you should be able
to get the stats for that, right? Or is a SolidFire appliance only one
pool per appliance? This isn't about billing, but just so cloudstack
itself knows whether or not there is space left on the storage device, so
cloudstack can go on allocating from a different primary storage as this
one fills up. There are also notifications and things. It seems like there
should be a call you can handle for this, maybe Edison knows.

On Tue, Sep 17, 2013 at 8:46 PM, Marcus Sorensen <shadowsor@gmail.com> wrote:

You respond to more than attach and detach, right? Don't you create LUNs
as well? Or are you just referring to the hypervisor stuff?

On Sep 17, 2013 7:51 PM, "Mike Tutkowski" <mike.tutkowski@solidfire.com> wrote:

Hi Marcus,

I never need to respond to a CreateStoragePool call for either XenServer
or VMware.

What happens is I respond only to the Attach- and Detach-volume commands.

Let's say an attach comes in:

In this case, I check to see if the storage is "managed." Talking
XenServer here, if it is, I log in to the LUN that is the disk we want to
attach. Then, if this is the first time attaching this disk, I create an
SR and a VDI within the SR. If it is not the first time attaching this
disk, the LUN already has the SR and VDI on it.

Once this is done, I let the normal "attach" logic run because this logic
expects an SR and a VDI, and now it has them.

It's the same thing for VMware: just substitute datastore for SR and VMDK
for VDI.

Does that make sense?

Thanks!
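Compressed into pseudo-Java, that attach path looks something like the
following (none of these helper names are the real CitrixResourceBase
code; they just mirror the steps described above):

class ManagedAttachSketch {
    void attachVolume(String iqn, boolean managed, long sizeBytes) {
        if (managed) {
            loginToIscsiTarget(iqn);       // discover + log in to the LUN
            String sr = findSrOnLun(iqn);  // non-null on a re-attach
            if (sr == null) {
                sr = createSr(iqn);        // first attach: create the SR...
                createVdi(sr, sizeBytes);  // ...and a single VDI in it
            }
        }
        runNormalAttachLogic(iqn);         // expects the SR/VDI to exist
    }

    // Stubs standing in for hypervisor/SAN calls.
    void loginToIscsiTarget(String iqn) {}
    String findSrOnLun(String iqn) { return null; }
    String createSr(String iqn) { return "sr-uuid"; }
    void createVdi(String sr, long sizeBytes) {}
    void runNormalAttachLogic(String iqn) {}
}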
On Tue, Sep 17, 2013 at 7:34 PM, Marcus Sorensen wrote:

What do you do with Xen? I imagine the user enters the SAN details when
registering the pool? And the pool details are basically just instructions
on how to log into a target, correct?

You can choose to log a KVM host in to the target during createStoragePool
and save the pool in a map, or just save the pool info in a map for future
reference by uuid, for when you do need to log in. The createStoragePool
then just becomes a way to save the pool info to the agent. Personally,
I'd log in on the pool create and look/scan for specific LUNs when they're
needed, but I haven't thought it through thoroughly. I just say that
mainly because login only happens once, the first time the pool is used,
and every other storage command is about discovering new LUNs or maybe
deleting/disconnecting LUNs no longer needed. On the other hand, you could
do all of the above: log in on pool create, then also check if you're
logged in on other commands and log in if you've lost connection.

With Xen, what does your registered pool show in the UI for avail/used
capacity, and how does it get that info? I assume there is some sort of
disk pool that the LUNs are carved from, and that your plugin is called to
talk to the SAN and expose to the user how much of that pool has been
allocated. Knowing how you already solve these problems with Xen will help
figure out what to do with KVM.

If this is the case, I think the plugin can continue to handle it rather
than getting details from the agent. I'm not sure if that means nulls are
OK for these on the agent side or what; I need to look at the storage
plugin arch more closely.

On Sep 17, 2013 7:08 PM, "Mike Tutkowski" <mike.tutkowski@solidfire.com> wrote:

Hey Marcus,

I'm reviewing your e-mails as I implement the necessary methods in new
classes.

"So, referencing StorageAdaptor.java, createStoragePool accepts all of the
pool data (host, port, name, path) which would be used to log the host
into the initiator."

Can you tell me, in my case, since a storage pool (primary storage) is
actually the SAN, I wouldn't really be logging into anything at this
point, correct?

Also, what kind of capacity, available, and used bytes make sense to
report for KVMStoragePool (since KVMStoragePool represents the SAN in my
case and not an individual LUN)?

Thanks!

On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen <shadowsor@gmail.com> wrote:

Ok, KVM will be close to that, of course, because only the hypervisor
classes differ; the rest is all mgmt server. Creating a volume is just a
db entry until it's deployed for the first time. AttachVolumeCommand on
the agent side (LibvirtStorageAdaptor.java is analogous to
CitrixResourceBase.java) will do the iscsiadm commands (via a KVM
StorageAdaptor) to log the host in to the target, and then you have a
block device. Maybe libvirt will do that for you, but my quick read made
it sound like the iscsi libvirt pool type is actually a pool, not a LUN or
volume, so you'll need to figure out if that works or if you'll have to
use iscsiadm commands.
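From the agent, those iscsiadm calls would look roughly like this (a
bare-bones sketch using ProcessBuilder; the real agent shells out through
CloudStack's Script utility, and the portal/IQN here are examples):

import java.io.IOException;

class IscsiAdmSketch {
    static void run(String... cmd) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(cmd).inheritIO().start();
        if (p.waitFor() != 0) {
            throw new IOException("command failed: " + String.join(" ", cmd));
        }
    }

    public static void main(String[] args) throws Exception {
        String portal = "192.168.1.10:3260";
        String iqn = "iqn.2013-09.com.example:vol1";
        // Discover targets on the portal, then log in to the one we want.
        run("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal);
        run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--login");
        // ... use the block device, then log out when the disk detaches:
        run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--logout");
    }
}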
If you're NOT going to use LibvirtStorageAdaptor (because libvirt doesn't
really manage your pool the way you want), you're going to have to create
a version of the KVMStoragePool class and a StorageAdaptor class (see
LibvirtStoragePool.java and LibvirtStorageAdaptor.java), implementing all
of the methods. Then, in KVMStorageManager.java, there's a
"_storageMapper" map. This is used to select the correct adaptor; you can
see in this file that every call first pulls the correct adaptor out of
this map via getStorageAdaptor. There's a comment in this file that says
"add other storage adaptors here", where it puts to this map; this is
where you'd register your adaptor.

So, referencing StorageAdaptor.java, createStoragePool accepts all of the
pool data (host, port, name, path) which would be used to log the host
into the initiator. I *believe* the method getPhysicalDisk will need to do
the work of attaching the LUN. AttachVolumeCommand calls this and then
creates the XML diskdef and attaches it to the VM. Now, one thing you need
to know is that createStoragePool is called often, sometimes just to make
sure the pool is there. You may want to create a map in your adaptor class
and keep track of pools that have been created; LibvirtStorageAdaptor
doesn't have to do this because it asks libvirt about which storage pools
exist. There are also calls to refresh the pool stats, and all of the
other calls can be seen in the StorageAdaptor as well. There's a
createPhysicalDisk, clone, etc., but it's probably a hold-over from 4.1,
as I have the vague idea that volumes are created on the mgmt server via
the plugin now, so whatever doesn't apply can just be stubbed out (or
optionally extended/reimplemented here, if you don't mind the hosts
talking to the SAN API).

There is a difference between attaching new volumes and launching a VM
with existing volumes. In the latter case, the VM definition that was
passed to the KVM agent includes the disks (StartCommand).

I'd be interested in how your pool is defined for Xen; I imagine it would
need to be kept the same. Is it just a definition to the SAN (IP address
or some such, port number) and perhaps a volume pool name?
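Pulling those pieces together, a custom adaptor might start out like this
(method names follow the discussion; the real StorageAdaptor interface in
the agent may differ, so treat this as a shape, not a contract):

import java.util.HashMap;
import java.util.Map;

class SolidFireStorageAdaptorSketch {
    // uuid -> remembered pool info, since libvirt isn't asked about
    // these pools and can't enumerate them for us.
    private final Map<String, SanPool> pools = new HashMap<>();

    static class SanPool {
        final String host; final int port; final String name; final String path;
        SanPool(String host, int port, String name, String path) {
            this.host = host; this.port = port;
            this.name = name; this.path = path;
        }
    }

    // Called often, sometimes just to verify the pool exists, so keep it
    // cheap and idempotent: record the SAN details and return.
    SanPool createStoragePool(String uuid, String host, int port,
                              String name, String path) {
        return pools.computeIfAbsent(uuid,
            u -> new SanPool(host, port, name, path));
    }

    // Where the per-LUN work (iscsiadm login, SAN ACLs) would happen,
    // just before the disk XML is built and attached to the VM.
    String getPhysicalDisk(String volumeIqn, SanPool pool) {
        // e.g. log in to volumeIqn against pool.host:pool.port here,
        // then hand back a stable device path for the diskdef.
        return "/dev/disk/by-path/ip-" + pool.host + ":" + pool.port
             + "-iscsi-" + volumeIqn + "-lun-0";
    }
}

The registration step would then be the one-line put into
KVMStorageManager's _storageMapper at the "add other storage adaptors
here" comment.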
> If there is a way for me to update the ACL list on the SAN to have only
> a single KVM host have access to the volume, that would be ideal.

That depends on your SAN API. I was under the impression that the storage
plugin framework allowed for ACLs, or for you to do whatever you want for
create/attach/delete/snapshot, etc. You'd just call your SAN API with the
host info for the ACLs prior to when the disk is attached (or the VM is
started). I'd have to look more at the framework to know the details; in
4.1 I would do this in getPhysicalDisk just prior to connecting up the
LUN.

On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski wrote:

OK, yeah, the ACL part will be interesting. That is a bit different from
how it works with XenServer and VMware.

Just to give you an idea how it works in 4.2 with XenServer:

* The user creates a CS volume (this is just recorded in the
cloud.volumes table).

* The user attaches the volume as a disk to a VM for the first time (if
the storage allocator picks the SolidFire plug-in, the storage framework
invokes a method on the plug-in that creates a volume on the SAN...info
like the IQN of the SAN volume is recorded in the DB).

* CitrixResourceBase's execute(AttachVolumeCommand) is executed. It
determines, based on a flag passed in, that the storage in question is
"CloudStack-managed" storage (as opposed to "traditional" preallocated
storage). This tells it to discover the iSCSI target. Once discovered, it
determines if the iSCSI target already contains a storage repository (it
would if this were a re-attach situation). If it does contain an SR
already, then there should already be one VDI as well. If there is no SR,
an SR is created and a single VDI is created within it (that takes up
about as much space as was requested for the CloudStack volume).

* The normal attach-volume logic continues (it depends on the existence
of an SR and a VDI).

The VMware case is essentially the same (mainly just substitute datastore
for SR and VMDK for VDI).

In both cases, all hosts in the cluster have discovered the iSCSI target,
but only the host that is currently running the VM that is using the VDI
(or VMDK) is actually using the disk.
Live Migration should be OK because the hypervisors communicate with
whatever metadata they have on the SR (or datastore).

I see what you're saying with KVM, though.

In that case, the hosts are clustered only in CloudStack's eyes. CS
controls Live Migration. You don't really need a clustered filesystem on
the LUN. The LUN could be handed over raw to the VM using it.

If there is a way for me to update the ACL list on the SAN to have only a
single KVM host have access to the volume, that would be ideal.

Also, I agree I'll need to use iscsiadm to discover and log in to the
iSCSI target. I'll also need to take the resultant new device and pass it
into the VM.

Does this sound reasonable? Please call me out on anything I seem
incorrect about. :)

Thanks for all the thought on this, Marcus!

On Fri, Sep 13, 2013 at 8:25 PM, Marcus Sorensen <shadowsor@gmail.com> wrote:

Perfect. You'll have a domain def (the VM), a disk def, and then attach
the disk def to the VM. You may need to do your own StorageAdaptor and run
iscsiadm commands to accomplish that, depending on how the libvirt iscsi
works. My impression is that a 1:1:1 pool/lun/volume isn't how it works on
Xen at the moment, nor is it ideal.

Your plugin will handle ACLs as far as which host can see which LUNs as
well; I remember discussing that months ago. A disk won't be connected
until the hypervisor has exclusive access, so it will be safe and fence
the disk from rogue nodes that cloudstack loses connectivity with. It
should revoke access to everything but the target host... except during
migration, but we can discuss that later; there's a migration prep process
where the new host can be added to the ACLs, and the old host can be
removed post migration.
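A rough sketch of that ACL discipline (SanAclClient and setLunAcl() are
invented stand-ins for whatever the real SAN API exposes):

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

class LunFencingSketch {
    interface SanAclClient {
        void setLunAcl(String lunIqn, Set<String> allowedHostIqns);
    }

    private final SanAclClient san;

    LunFencingSketch(SanAclClient san) { this.san = san; }

    void beforeAttach(String lunIqn, String hostIqn) {
        // Exclusive access fences the disk from rogue nodes that
        // cloudstack has lost connectivity with.
        san.setLunAcl(lunIqn, new HashSet<>(Arrays.asList(hostIqn)));
    }

    void prepareMigration(String lunIqn, String oldHost, String newHost) {
        // Both hosts can see the LUN only while the VM moves...
        san.setLunAcl(lunIqn, new HashSet<>(Arrays.asList(oldHost, newHost)));
    }

    void finishMigration(String lunIqn, String newHost) {
        // ...then access is revoked from everything but the new host.
        san.setLunAcl(lunIqn, new HashSet<>(Arrays.asList(newHost)));
    }
}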
On Sep 13, 2013 8:16 PM, "Mike Tutkowski" <mike.tutkowski@solidfire.com> wrote:

Yeah, that would be ideal.

So, I would still need to discover the iSCSI target, log in to it, then
figure out what /dev/sdX was created as a result (and leave it as is - do
not format it with any file system...clustered or not). I would pass that
device into the VM.

Kind of accurate?

On Fri, Sep 13, 2013 at 8:07 PM, Marcus Sorensen <shadowsor@gmail.com> wrote:

Look in LibvirtVMDef.java (I think) for the disk definitions. There are
ones that work for block devices rather than files. You can piggyback off
of the existing disk definitions and attach it to the VM as a block
device. The definition is an XML string per the libvirt XML format. You
may want to use an alternate path to the disk rather than just /dev/sdX
like I mentioned; there are by-id paths to the block devices, as well as
other ones that will be consistent and easier for management. Not sure how
familiar you are with device naming on Linux.

On Sep 13, 2013 8:00 PM, "Marcus Sorensen" <shadowsor@gmail.com> wrote:

No, as that would rely on a virtualized network/iscsi initiator inside the
VM, which also sucks. I mean attach /dev/sdX (your LUN on the hypervisor)
as a disk to the VM, rather than attaching some image file that resides on
a filesystem, mounted on the host, living on a target.

Actually, if you plan on the storage supporting live migration, I think
this is the only way. You can't put a filesystem on it and mount it in two
places to facilitate migration unless it's a clustered filesystem, in
which case you're back to shared mount point.

As far as I'm aware, the XenServer SR style is basically LVM with a
Xen-specific cluster management, a custom CLVM. They don't use a
filesystem either.

On Sep 13, 2013 7:44 PM, "Mike Tutkowski" <mike.tutkowski@solidfire.com> wrote:

When you say, "wire up the LUN directly to the VM," do you mean
circumventing the hypervisor? I didn't think we could do that in CS.
OpenStack, on the other hand, always circumvents the hypervisor, as far as
I know.
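For the record, the kind of disk XML Marcus refers to, attaching the
host's block device straight to the guest via a stable by-path name, looks
like this (the IP, IQN, and target device are invented for illustration):

class BlockDiskDefSketch {
    public static void main(String[] args) {
        // Stable by-path name for the LUN instead of a bare /dev/sdX.
        String dev = "/dev/disk/by-path/ip-192.168.1.10:3260-iscsi-"
                   + "iqn.2013-09.com.example:vol1-lun-0";
        String diskXml =
              "<disk type='block' device='disk'>\n"
            + "  <driver name='qemu' type='raw' cache='none'/>\n"
            + "  <source dev='" + dev + "'/>\n"
            + "  <target dev='vdb' bus='virtio'/>\n"
            + "</disk>\n";
        System.out.println(diskXml);
    }
}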
On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen <shadowsor@gmail.com> wrote:

Better to wire up the LUN directly to the VM unless there is a good reason
not to.

On Sep 13, 2013 7:40 PM, "Marcus Sorensen" <shadowsor@gmail.com> wrote:

You could do that, but as mentioned, I think it's a mistake to go to the
trouble of creating a 1:1 mapping of CS volumes to LUNs and then putting a
filesystem on it, mounting it, and then putting a QCOW2 or even RAW disk
image on that filesystem. You'll lose a lot of IOPS along the way, and
have more overhead with the filesystem and its journaling, etc.

On Sep 13, 2013 7:33 PM, "Mike Tutkowski" <mike.tutkowski@solidfire.com> wrote:

Ah, OK, I didn't know that was such new ground in KVM with CS.

So, the way people use our SAN with KVM and CS today is by selecting
SharedMountPoint and specifying the location of the share.

They can set up their share using Open iSCSI by discovering their iSCSI
target, logging in to it, then mounting it somewhere on their file system.

Would it make sense for me to just do that discovery, logging in, and
mounting behind the scenes for them and let the current code manage the
rest as it currently does?

On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen wrote:

Oh, hypervisor snapshots are a bit different. I need to catch up on the
work done in KVM, but this is basically just disk snapshots + memory dump.
I still think disk snapshots would preferably be handled by the SAN, and
then memory dumps can go to secondary storage or something else. This is
relatively new ground with CS and KVM, so we will want to see how others
are planning theirs.
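If the SAN does handle the disk snapshots, the mgmt-server-side plugin can
expose them hypervisor-agnostically, roughly as below (takeLunSnapshot()
is a made-up stand-in for a vendor snapshot call; memory state would
travel separately, e.g. to secondary storage):

class SanSnapshotSketch {
    interface SanSnapshotClient {
        String takeLunSnapshot(String lunIqn, String snapshotName);
    }

    private final SanSnapshotClient san;

    SanSnapshotSketch(SanSnapshotClient san) { this.san = san; }

    // No hypervisor involvement: the same call works whether the LUN is
    // attached to KVM, XenServer, or VMware. Snapshot space comes out of
    // the SAN's pool, independent of the LUN size the host sees.
    String snapshotVolume(String lunIqn, String volumeUuid) {
        return san.takeLunSnapshot(lunIqn, "snap-" + volumeUuid);
    }
}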
On Sep 13, 2013 7:20 PM, "Marcus Sorensen" <shadowsor@gmail.com> wrote:

Let me back up and say I don't think you'd use a VDI style on an iSCSI
LUN. I think you'd want to treat it as a RAW format. Otherwise you're
putting a filesystem on your LUN, mounting it, creating a QCOW2 disk
image, and that seems unnecessary and a performance killer.

So probably attaching the raw iSCSI LUN as a disk to the VM, and handling
snapshots on the SAN side via the storage plugin, is best. My impression
from the storage plugin refactor was that there was a snapshot service
that would allow the SAN to handle snapshots.

On Sep 13, 2013 7:15 PM, "Marcus Sorensen" <shadowsor@gmail.com> wrote:

Ideally volume snapshots can be handled by the SAN back end, if the SAN
supports it. The cloudstack mgmt server could call your plugin for volume
snapshot and it would be hypervisor agnostic. As far as space, that would
depend on how your SAN handles it. With ours, we carve out LUNs from a
pool, and the snapshot space comes from the pool and is independent of the
LUN size the host sees.

On Sep 13, 2013 7:10 PM, "Mike Tutkowski" <mike.tutkowski@solidfire.com> wrote:

Hey Marcus,

I wonder if the iSCSI storage pool type for libvirt won't work when you
take into consideration hypervisor snapshots?

On XenServer, when you take a hypervisor snapshot, the VDI for the
snapshot is placed on the same storage repository as the volume is on.

Same idea for VMware, I believe.
So, what would happen in my case (let's say for XenServer and VMware for
4.3, because I don't support hypervisor snapshots in 4.2) is I'd make an
iSCSI target that is larger than what the user requested for the
CloudStack volume (which is fine because our SAN thinly provisions
volumes, so the space is not actually used unless it needs to be). The
CloudStack volume would be the only "object" on the SAN volume until a
hypervisor snapshot is taken. This snapshot would also reside on the SAN
volume.

If this is also how KVM behaves and there is no creation of LUNs within an
iSCSI target from libvirt (which, even if there were support for this, our
SAN currently only allows one LUN per iSCSI target), then I don't see how
using this model will work.

Perhaps I will have to go enhance the current way this works with DIR?

What do you think?

Thanks

On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski wrote:

That appears to be the way it's used for iSCSI access today.

I suppose I could go that route, too, but I might as well leverage what
libvirt has for iSCSI instead.

On Fri, Sep 13, 2013 at 6:26 PM, Marcus Sorensen wrote:

To your question about SharedMountPoint, I believe it just acts like a
'DIR' storage type or something similar to that. The end-user is
responsible for mounting a file system that all KVM hosts can access, and
CloudStack is oblivious to what is providing the storage. It could be NFS,
or OCFS2, or some other clustered filesystem; cloudstack just knows that
the provided directory path has VM images.
On Fri, Sep 13, 2013 at 6:23 PM, Marcus Sorensen wrote:

Oh yes, you can use NFS, LVM, and iSCSI all at the same time. Multiples,
in fact.

On Fri, Sep 13, 2013 at 6:19 PM, Mike Tutkowski wrote:

Looks like you can have multiple storage pools:

mtutkowski@ubuntu:~$ virsh pool-list
Name                 State      Autostart
-----------------------------------------
default              active     yes
iSCSI                active     no

On Fri, Sep 13, 2013 at 6:12 PM, Mike Tutkowski wrote:

Reading through the docs you pointed out.

I see what you're saying now.

You can create an iSCSI (libvirt) storage pool based on an iSCSI target.

In my case, the iSCSI target would only have one LUN, so there would only
be one iSCSI (libvirt) storage volume in the (libvirt) storage pool.

As you say, my plug-in creates and destroys iSCSI targets/LUNs on the
SolidFire SAN, so it is not a problem that libvirt does not support
creating/deleting iSCSI targets/LUNs.

It looks like I need to test this a bit to see if libvirt supports
multiple iSCSI storage pools (as you mentioned, since each one of its
storage pools would map to one of my iSCSI targets/LUNs).
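A quick way to test that is a throwaway program against the libvirt Java
bindings, along these lines (a minimal sketch; the URI, pool name, host,
and IQN are made up, and the binding signatures should be double-checked
against the javadoc):

import org.libvirt.Connect;
import org.libvirt.StoragePool;

class IscsiPoolTest {
    public static void main(String[] args) throws Exception {
        Connect conn = new Connect("qemu:///system");
        String poolXml =
              "<pool type='iscsi'>"
            + "  <name>sf-vol-1</name>"
            + "  <source>"
            + "    <host name='192.168.1.10' port='3260'/>"
            + "    <device path='iqn.2013-09.com.example:vol1'/>"
            + "  </source>"
            + "  <target><path>/dev/disk/by-path</path></target>"
            + "</pool>";
        // Transient pool: libvirt logs the host in to the target; each
        // LUN on the target should show up as a volume in the pool.
        StoragePool pool = conn.storagePoolCreateXML(poolXml, 0);
        for (String vol : pool.listVolumes()) {
            System.out.println("found volume: " + vol);
        }
        pool.destroy();
        conn.close();
    }
}

Defining a second pool of the same type against another target would
answer the multiple-pools question directly.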
On Fri, Sep 13, 2013 at 5:58 PM, Mike Tutkowski wrote:

LibvirtStoragePoolDef has this type:

    public enum poolType {
        ISCSI("iscsi"), NETFS("netfs"), LOGICAL("logical"), DIR("dir"),
        RBD("rbd");

        String _poolType;

        poolType(String poolType) {
            _poolType = poolType;
        }

        @Override
        public String toString() {
            return _poolType;
        }
    }

It doesn't look like the iSCSI type is currently being used, but I'm
understanding more what you were getting at.

Can you tell me, for today (say, 4.2), when someone selects the
SharedMountPoint option and uses it with iSCSI, is that the "netfs" option
above or is that just for NFS?

Thanks!

On Fri, Sep 13, 2013 at 5:50 PM, Marcus Sorensen <shadowsor@gmail.com> wrote:

Take a look at this:

http://libvirt.org/storage.html#StorageBackendISCSI

"Volumes must be pre-allocated on the iSCSI server, and cannot be created
via the libvirt APIs.", which I believe your plugin will take care of.
Libvirt just does the work of logging in and hooking it up to the VM (I
believe the Xen api does that work in the Xen stuff).

What I'm not sure about is whether this provides a 1:1 mapping, or if it
just allows you to register one iSCSI device as a pool. You may need to
write some test code or read up a bit more about this. Let us know. If it
doesn't, you may just have to write your own storage adaptor rather than
changing LibvirtStorageAdaptor.java. We can cross that bridge when we get
there.

As far as interfacing with libvirt, see the java bindings doc:
http://libvirt.org/sources/java/javadoc/ Normally, you'll see a connection
object be made, then calls made to that 'conn' object. You can look at the
LibvirtStorageAdaptor to see how that is done for other pool types, and
maybe write some test java code to see if you can interface with libvirt
and register iSCSI storage pools before you get started.

On Fri, Sep 13, 2013 at 5:31 PM, Mike Tutkowski wrote:

So, Marcus, I need to investigate libvirt more, but you figure it supports
connecting to/disconnecting from iSCSI targets, right?
On Fri, Sep 13, 2013 at 5:29 PM, Mike Tutkowski wrote:

OK, thanks, Marcus

I am currently looking through some of the classes you pointed out last
week or so.

On Fri, Sep 13, 2013 at 5:26 PM, Marcus Sorensen wrote:

Yes, my guess is that you will need the iSCSI initiator utilities
installed. There should be standard packages for any distro. Then you'd
call an agent storage adaptor to do the initiator login. See the info I
sent previously about LibvirtStorageAdaptor.java and the libvirt iscsi
storage type to see if that fits your need.

On Sep 13, 2013 4:55 PM, "Mike Tutkowski" <mike.tutkowski@solidfire.com> wrote:

Hi,

As you may remember, during the 4.2 release I developed a SolidFire
(storage) plug-in for CloudStack.

This plug-in was invoked by the storage framework at the necessary times
so that I could dynamically create and delete volumes on the SolidFire SAN
(among other activities).
This is necessary so I can establish a 1:1 mapping between a CloudStack
volume and a SolidFire volume for QoS.

In the past, CloudStack always expected the admin to create large volumes
ahead of time and those volumes would likely house many root and data
disks (which is not QoS friendly).

To make this 1:1 mapping scheme work, I needed to modify logic in the
XenServer and VMware plug-ins so they could create/delete storage
repositories/datastores as needed.

For 4.3 I want to make this happen with KVM.

I'm coming up to speed with how this might work on KVM, but I'm still
pretty new to KVM.

Does anyone familiar with KVM know how I will need to interact with the
iSCSI target? For example, will I have to expect Open iSCSI will be
installed on the KVM host and use it for this to work?

Thanks for any suggestions,
Mike
--
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the cloud™