Return-Path: X-Original-To: apmail-cloudstack-dev-archive@www.apache.org Delivered-To: apmail-cloudstack-dev-archive@www.apache.org Received: from mail.apache.org (hermes.apache.org [140.211.11.3]) by minotaur.apache.org (Postfix) with SMTP id 85653105EE for ; Wed, 18 Sep 2013 13:58:26 +0000 (UTC) Received: (qmail 39030 invoked by uid 500); 18 Sep 2013 13:58:25 -0000 Delivered-To: apmail-cloudstack-dev-archive@cloudstack.apache.org Received: (qmail 38999 invoked by uid 500); 18 Sep 2013 13:58:25 -0000 Mailing-List: contact dev-help@cloudstack.apache.org; run by ezmlm Precedence: bulk List-Help: List-Unsubscribe: List-Post: List-Id: Reply-To: dev@cloudstack.apache.org Delivered-To: mailing list dev@cloudstack.apache.org Received: (qmail 38975 invoked by uid 99); 18 Sep 2013 13:58:21 -0000 Received: from athena.apache.org (HELO athena.apache.org) (140.211.11.136) by apache.org (qpsmtpd/0.29) with ESMTP; Wed, 18 Sep 2013 13:58:21 +0000 X-ASF-Spam-Status: No, hits=1.5 required=5.0 tests=HTML_MESSAGE,RCVD_IN_DNSWL_LOW,SPF_PASS X-Spam-Check-By: apache.org Received-SPF: pass (athena.apache.org: domain of shadowsor@gmail.com designates 209.85.220.179 as permitted sender) Received: from [209.85.220.179] (HELO mail-vc0-f179.google.com) (209.85.220.179) by apache.org (qpsmtpd/0.29) with ESMTP; Wed, 18 Sep 2013 13:58:13 +0000 Received: by mail-vc0-f179.google.com with SMTP id ht10so5381316vcb.10 for ; Wed, 18 Sep 2013 06:57:53 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :content-type; bh=7zEpijKN0rtgQsbsLfBtMuY2cAiDOuIBo9SHlXnGxoQ=; b=Ry4bsCJlOyP86+68E39/Vljj4D3D0RvvuUnZXs1pB67j9wwmD7fKgmGLHmNJEEvqhn 4VjXMg2GQ9CEHxieqQn5KoGo+2D9px6D6XcYm19zp8Sv5EEx2hBUAq32uAogHptcC8i7 2wW9/OOD6ETl+SDG4etkIp99qlj1YNM2xfXHg4zUL9DXDnwBh+qLXkkqH25LucIBhQXG C8NdfrTzWrgPdupJoiloSc2nGhg6p84W1BM+qQTu0QsygzfkfFSypwcFnnbxjJMfmb+H nTA7bG7vRZRdFFthnAOpkD3MMQG4JAfsm0gvr2ziO5c261z8zQfjpSsgC21q6xxdJ3Bj ip9A== MIME-Version: 1.0 X-Received: by 10.220.46.72 with SMTP id i8mr38706977vcf.10.1379512672838; Wed, 18 Sep 2013 06:57:52 -0700 (PDT) Received: by 10.52.165.6 with HTTP; Wed, 18 Sep 2013 06:57:52 -0700 (PDT) Received: by 10.52.165.6 with HTTP; Wed, 18 Sep 2013 06:57:52 -0700 (PDT) In-Reply-To: References: Date: Wed, 18 Sep 2013 07:57:52 -0600 Message-ID: Subject: Re: Managed storage with KVM From: Marcus Sorensen To: dev@cloudstack.apache.org Content-Type: multipart/alternative; boundary=001a11c2c674d3eb5c04e6a8d2df X-Virus-Checked: Checked by ClamAV on apache.org --001a11c2c674d3eb5c04e6a8d2df Content-Type: text/plain; charset=windows-1252 Content-Transfer-Encoding: quoted-printable Yeah, that's why I thought it was specific to your implementation. Perhaps that's true, then? On Sep 18, 2013 12:04 AM, "Mike Tutkowski" wrote: > I totally get where you're coming from with the tiered-pool approach, > though. > > Prior to SolidFire, I worked at HP and the product I worked on allowed a > single, clustered SAN to host multiple pools of storage. One pool might b= e > made up of all-SSD storage nodes while another pool might be made up of > slower HDDs. > > That kind of tiering is not what SolidFire QoS is about, though, as that > kind of tiering does not guarantee QoS. > > In the SolidFire SAN, QoS was designed in from the beginning and is > extremely granular. Each volume has its own performance and capacity. You > do not have to worry about Noisy Neighbors. > > The idea is to encourage businesses to trust the cloud with their most > critical business applications at a price point on par with traditional > SANs. > > > On Tue, Sep 17, 2013 at 11:52 PM, Mike Tutkowski < > mike.tutkowski@solidfire.com> wrote: > > > Ah, I think I see the miscommunication. > > > > I should have gone into a bit more detail about the SolidFire SAN. > > > > It is built from the ground up to support QoS on a LUN-by-LUN basis. > Every > > LUN is assigned a Min, Max, and Burst number of IOPS. > > > > The Min IOPS are a guaranteed number (as long as the SAN itself is not > > over provisioned). Capacity and IOPS are provisioned independently. > > Multiple volumes and multiple tenants using the same SAN do not suffer > from > > the Noisy Neighbor effect. > > > > When you create a Disk Offering in CS that is storage tagged to use > > SolidFire primary storage, you specify a Min, Max, and Burst number of > IOPS > > to provision from the SAN for volumes created from that Disk Offering. > > > > There is no notion of RAID groups that you see in more traditional SANs= . > > The SAN is built from clusters of storage nodes and data is replicated > > amongst all SSDs in all storage nodes (this is an SSD-only SAN) in the > > cluster to avoid hot spots and protect the data should a drives and/or > > nodes fail. You then scale the SAN by adding new storage nodes. > > > > Data is compressed and de-duplicated inline across the cluster and all > > volumes are thinly provisioned. > > > > > > On Tue, Sep 17, 2013 at 11:27 PM, Marcus Sorensen >wrote: > > > >> I'm surprised there's no mention of pool on the SAN in your descriptio= n > of > >> the framework. I had assumed this was specific to your implementation, > >> because normally SANs host multiple disk pools, maybe multiple RAID 50= s > >> and > >> 10s, or however the SAN admin wants to split it up. Maybe a pool > intended > >> for root disks and a separate one for data disks. Or one pool for > >> cloudstack and one dedicated to some other internal db application. Bu= t > it > >> sounds as though there's no place to specify which disks or pool on th= e > >> SAN > >> to use. > >> > >> We implemented our own internal storage SAN plugin based on 4.1. We us= ed > >> the 'path' attribute of the primary storage pool object to specify whi= ch > >> pool name on the back end SAN to use, so we could create all-ssd pools > and > >> slower spindle pools, then differentiate between them based on storage > >> tags. Normally the path attribute would be the mount point for NFS, bu= t > >> its > >> just a string. So when registering ours we enter San dns host name, th= e > >> san's rest api port, and the pool name. Then luns created from that > >> primary > >> storage come from the matching disk pool on the SAN. We can create and > >> register multiple pools of different types and purposes on the same SA= N. > >> We > >> haven't yet gotten to porting it to the 4.2 frame work, so it will be > >> interesting to see what we can come up with to make it work similarly. > >> On Sep 17, 2013 10:43 PM, "Mike Tutkowski" < > mike.tutkowski@solidfire.com > >> > > >> wrote: > >> > >> > What you're saying here is definitely something we should talk about= . > >> > > >> > Hopefully my previous e-mail has clarified how this works a bit. > >> > > >> > It mainly comes down to this: > >> > > >> > For the first time in CS history, primary storage is no longer > required > >> to > >> > be preallocated by the admin and then handed to CS. CS volumes don't > >> have > >> > to share a preallocated volume anymore. > >> > > >> > As of 4.2, primary storage can be based on a SAN (or some other > storage > >> > device). You can tell CS how many bytes and IOPS to use from this > >> storage > >> > device and CS invokes the appropriate plug-in to carve out LUNs > >> > dynamically. > >> > > >> > Each LUN is home to one and only one data disk. Data disks - in this > >> model > >> > - never share a LUN. > >> > > >> > The main use case for this is so a CS volume can deliver guaranteed > >> IOPS if > >> > the storage device (ex. SolidFire SAN) delivers guaranteed IOPS on a > >> > LUN-by-LUN basis. > >> > > >> > > >> > On Tue, Sep 17, 2013 at 10:16 PM, Marcus Sorensen < > shadowsor@gmail.com > >> > >wrote: > >> > > >> > > I guess whether or not a solidfire device is capable of hosting > >> > > multiple disk pools is irrelevant, we'd hope that we could get the > >> > > stats (maybe 30TB availabie, and 15TB allocated in LUNs). But if > these > >> > > stats aren't collected, I can't as an admin define multiple pools > and > >> > > expect cloudstack to allocate evenly from them or fill one up and > move > >> > > to the next, because it doesn't know how big it is. > >> > > > >> > > Ultimately this discussion has nothing to do with the KVM stuff > >> > > itself, just a tangent, but something to think about. > >> > > > >> > > On Tue, Sep 17, 2013 at 10:13 PM, Marcus Sorensen < > >> shadowsor@gmail.com> > >> > > wrote: > >> > > > Ok, on most storage pools it shows how many GB free/used when > >> listing > >> > > > the pool both via API and in the UI. I'm guessing those are empt= y > >> then > >> > > > for the solid fire storage, but it seems like the user should ha= ve > >> to > >> > > > define some sort of pool that the luns get carved out of, and yo= u > >> > > > should be able to get the stats for that, right? Or is a solid > fire > >> > > > appliance only one pool per appliance? This isn't about billing, > but > >> > > > just so cloudstack itself knows whether or not there is space le= ft > >> on > >> > > > the storage device, so cloudstack can go on allocating from a > >> > > > different primary storage as this one fills up. There are also > >> > > > notifications and things. It seems like there should be a call y= ou > >> can > >> > > > handle for this, maybe Edison knows. > >> > > > > >> > > > On Tue, Sep 17, 2013 at 8:46 PM, Marcus Sorensen < > >> shadowsor@gmail.com> > >> > > wrote: > >> > > >> You respond to more than attach and detach, right? Don't you > create > >> > > luns as > >> > > >> well? Or are you just referring to the hypervisor stuff? > >> > > >> > >> > > >> On Sep 17, 2013 7:51 PM, "Mike Tutkowski" < > >> > mike.tutkowski@solidfire.com > >> > > > > >> > > >> wrote: > >> > > >>> > >> > > >>> Hi Marcus, > >> > > >>> > >> > > >>> I never need to respond to a CreateStoragePool call for either > >> > > XenServer > >> > > >>> or > >> > > >>> VMware. > >> > > >>> > >> > > >>> What happens is I respond only to the Attach- and Detach-volum= e > >> > > commands. > >> > > >>> > >> > > >>> Let's say an attach comes in: > >> > > >>> > >> > > >>> In this case, I check to see if the storage is "managed." > Talking > >> > > >>> XenServer > >> > > >>> here, if it is, I log in to the LUN that is the disk we want t= o > >> > attach. > >> > > >>> After, if this is the first time attaching this disk, I create > an > >> SR > >> > > and a > >> > > >>> VDI within the SR. If it is not the first time attaching this > >> disk, > >> > the > >> > > >>> LUN > >> > > >>> already has the SR and VDI on it. > >> > > >>> > >> > > >>> Once this is done, I let the normal "attach" logic run because > >> this > >> > > logic > >> > > >>> expected an SR and a VDI and now it has it. > >> > > >>> > >> > > >>> It's the same thing for VMware: Just substitute datastore for = SR > >> and > >> > > VMDK > >> > > >>> for VDI. > >> > > >>> > >> > > >>> Does that make sense? > >> > > >>> > >> > > >>> Thanks! > >> > > >>> > >> > > >>> > >> > > >>> On Tue, Sep 17, 2013 at 7:34 PM, Marcus Sorensen > >> > > >>> wrote: > >> > > >>> > >> > > >>> > What do you do with Xen? I imagine the user enter the SAN > >> details > >> > > when > >> > > >>> > registering the pool? A the pool details are basically just > >> > > instructions > >> > > >>> > on > >> > > >>> > how to log into a target, correct? > >> > > >>> > > >> > > >>> > You can choose to log in a KVM host to the target during > >> > > >>> > createStoragePool > >> > > >>> > and save the pool in a map, or just save the pool info in a > map > >> for > >> > > >>> > future > >> > > >>> > reference by uuid, for when you do need to log in. The > >> > > createStoragePool > >> > > >>> > then just becomes a way to save the pool info to the agent. > >> > > Personally, > >> > > >>> > I'd > >> > > >>> > log in on the pool create and look/scan for specific luns wh= en > >> > > they're > >> > > >>> > needed, but I haven't thought it through thoroughly. I just > say > >> > that > >> > > >>> > mainly > >> > > >>> > because login only happens once, the first time the pool is > >> used, > >> > and > >> > > >>> > every > >> > > >>> > other storage command is about discovering new luns or maybe > >> > > >>> > deleting/disconnecting luns no longer needed. On the other > hand, > >> > you > >> > > >>> > could > >> > > >>> > do all of the above: log in on pool create, then also check = if > >> > you're > >> > > >>> > logged in on other commands and log in if you've lost > >> connection. > >> > > >>> > > >> > > >>> > With Xen, what does your registered pool show in the UI fo= r > >> > > avail/used > >> > > >>> > capacity, and how does it get that info? I assume there is > some > >> > sort > >> > > of > >> > > >>> > disk pool that the luns are carved from, and that your plugi= n > is > >> > > called > >> > > >>> > to > >> > > >>> > talk to the SAN and expose to the user how much of that pool > has > >> > been > >> > > >>> > allocated. Knowing how you already solves these problems wit= h > >> Xen > >> > > will > >> > > >>> > help > >> > > >>> > figure out what to do with KVM. > >> > > >>> > > >> > > >>> > If this is the case, I think the plugin can continue to hand= le > >> it > >> > > rather > >> > > >>> > than getting details from the agent. I'm not sure if that > means > >> > nulls > >> > > >>> > are > >> > > >>> > OK for these on the agent side or what, I need to look at th= e > >> > storage > >> > > >>> > plugin arch more closely. > >> > > >>> > On Sep 17, 2013 7:08 PM, "Mike Tutkowski" < > >> > > mike.tutkowski@solidfire.com> > >> > > >>> > wrote: > >> > > >>> > > >> > > >>> > > Hey Marcus, > >> > > >>> > > > >> > > >>> > > I'm reviewing your e-mails as I implement the necessary > >> methods > >> > in > >> > > new > >> > > >>> > > classes. > >> > > >>> > > > >> > > >>> > > "So, referencing StorageAdaptor.java, createStoragePool > >> accepts > >> > > all of > >> > > >>> > > the pool data (host, port, name, path) which would be used > to > >> log > >> > > the > >> > > >>> > > host into the initiator." > >> > > >>> > > > >> > > >>> > > Can you tell me, in my case, since a storage pool (primary > >> > > storage) is > >> > > >>> > > actually the SAN, I wouldn't really be logging into anythi= ng > >> at > >> > > this > >> > > >>> > point, > >> > > >>> > > correct? > >> > > >>> > > > >> > > >>> > > Also, what kind of capacity, available, and used bytes mak= e > >> sense > >> > > to > >> > > >>> > report > >> > > >>> > > for KVMStoragePool (since KVMStoragePool represents the SA= N > >> in my > >> > > case > >> > > >>> > and > >> > > >>> > > not an individual LUN)? > >> > > >>> > > > >> > > >>> > > Thanks! > >> > > >>> > > > >> > > >>> > > > >> > > >>> > > On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen < > >> > > shadowsor@gmail.com > >> > > >>> > > >wrote: > >> > > >>> > > > >> > > >>> > > > Ok, KVM will be close to that, of course, because only t= he > >> > > >>> > > > hypervisor > >> > > >>> > > > classes differ, the rest is all mgmt server. Creating a > >> volume > >> > is > >> > > >>> > > > just > >> > > >>> > > > a db entry until it's deployed for the first time. > >> > > >>> > > > AttachVolumeCommand > >> > > >>> > > > on the agent side (LibvirtStorageAdaptor.java is analogo= us > >> to > >> > > >>> > > > CitrixResourceBase.java) will do the iscsiadm commands > (via > >> a > >> > KVM > >> > > >>> > > > StorageAdaptor) to log in the host to the target and the= n > >> you > >> > > have a > >> > > >>> > > > block device. Maybe libvirt will do that for you, but m= y > >> quick > >> > > read > >> > > >>> > > > made it sound like the iscsi libvirt pool type is > actually a > >> > > pool, > >> > > >>> > > > not > >> > > >>> > > > a lun or volume, so you'll need to figure out if that > works > >> or > >> > if > >> > > >>> > > > you'll have to use iscsiadm commands. > >> > > >>> > > > > >> > > >>> > > > If you're NOT going to use LibvirtStorageAdaptor (becaus= e > >> > Libvirt > >> > > >>> > > > doesn't really manage your pool the way you want), you'r= e > >> going > >> > > to > >> > > >>> > > > have to create a version of KVMStoragePool class and a > >> > > >>> > > > StorageAdaptor > >> > > >>> > > > class (see LibvirtStoragePool.java and > >> > > LibvirtStorageAdaptor.java), > >> > > >>> > > > implementing all of the methods, then in > >> KVMStorageManager.java > >> > > >>> > > > there's a "_storageMapper" map. This is used to select t= he > >> > > correct > >> > > >>> > > > adaptor, you can see in this file that every call first > >> pulls > >> > the > >> > > >>> > > > correct adaptor out of this map via getStorageAdaptor. S= o > >> you > >> > can > >> > > >>> > > > see > >> > > >>> > > > a comment in this file that says "add other storage > adaptors > >> > > here", > >> > > >>> > > > where it puts to this map, this is where you'd register > your > >> > > >>> > > > adaptor. > >> > > >>> > > > > >> > > >>> > > > So, referencing StorageAdaptor.java, createStoragePool > >> accepts > >> > > all > >> > > >>> > > > of > >> > > >>> > > > the pool data (host, port, name, path) which would be us= ed > >> to > >> > log > >> > > >>> > > > the > >> > > >>> > > > host into the initiator. I *believe* the method > >> getPhysicalDisk > >> > > will > >> > > >>> > > > need to do the work of attaching the lun. > >> AttachVolumeCommand > >> > > calls > >> > > >>> > > > this and then creates the XML diskdef and attaches it to > the > >> > VM. > >> > > >>> > > > Now, > >> > > >>> > > > one thing you need to know is that createStoragePool is > >> called > >> > > >>> > > > often, > >> > > >>> > > > sometimes just to make sure the pool is there. You may > want > >> to > >> > > >>> > > > create > >> > > >>> > > > a map in your adaptor class and keep track of pools that > >> have > >> > > been > >> > > >>> > > > created, LibvirtStorageAdaptor doesn't have to do this > >> because > >> > it > >> > > >>> > > > asks > >> > > >>> > > > libvirt about which storage pools exist. There are also > >> calls > >> > to > >> > > >>> > > > refresh the pool stats, and all of the other calls can b= e > >> seen > >> > in > >> > > >>> > > > the > >> > > >>> > > > StorageAdaptor as well. There's a createPhysical disk, > >> clone, > >> > > etc, > >> > > >>> > > > but > >> > > >>> > > > it's probably a hold-over from 4.1, as I have the vague > idea > >> > that > >> > > >>> > > > volumes are created on the mgmt server via the plugin no= w, > >> so > >> > > >>> > > > whatever > >> > > >>> > > > doesn't apply can just be stubbed out (or optionally > >> > > >>> > > > extended/reimplemented here, if you don't mind the hosts > >> > talking > >> > > to > >> > > >>> > > > the san api). > >> > > >>> > > > > >> > > >>> > > > There is a difference between attaching new volumes and > >> > > launching a > >> > > >>> > > > VM > >> > > >>> > > > with existing volumes. In the latter case, the VM > >> definition > >> > > that > >> > > >>> > > > was > >> > > >>> > > > passed to the KVM agent includes the disks, > (StartCommand). > >> > > >>> > > > > >> > > >>> > > > I'd be interested in how your pool is defined for Xen, I > >> > imagine > >> > > it > >> > > >>> > > > would need to be kept the same. Is it just a definition = to > >> the > >> > > SAN > >> > > >>> > > > (ip address or some such, port number) and perhaps a > volume > >> > pool > >> > > >>> > > > name? > >> > > >>> > > > > >> > > >>> > > > > If there is a way for me to update the ACL list on the > >> SAN to > >> > > have > >> > > >>> > > only a > >> > > >>> > > > > single KVM host have access to the volume, that would = be > >> > ideal. > >> > > >>> > > > > >> > > >>> > > > That depends on your SAN API. I was under the impressio= n > >> that > >> > > the > >> > > >>> > > > storage plugin framework allowed for acls, or for you to > do > >> > > whatever > >> > > >>> > > > you want for create/attach/delete/snapshot, etc. You'd > just > >> > call > >> > > >>> > > > your > >> > > >>> > > > SAN API with the host info for the ACLs prior to when th= e > >> disk > >> > is > >> > > >>> > > > attached (or the VM is started). I'd have to look more = at > >> the > >> > > >>> > > > framework to know the details, in 4.1 I would do this in > >> > > >>> > > > getPhysicalDisk just prior to connecting up the LUN. > >> > > >>> > > > > >> > > >>> > > > > >> > > >>> > > > On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski > >> > > >>> > > > wrote: > >> > > >>> > > > > OK, yeah, the ACL part will be interesting. That is a > bit > >> > > >>> > > > > different > >> > > >>> > > from > >> > > >>> > > > how > >> > > >>> > > > > it works with XenServer and VMware. > >> > > >>> > > > > > >> > > >>> > > > > Just to give you an idea how it works in 4.2 with > >> XenServer: > >> > > >>> > > > > > >> > > >>> > > > > * The user creates a CS volume (this is just recorded = in > >> the > >> > > >>> > > > cloud.volumes > >> > > >>> > > > > table). > >> > > >>> > > > > > >> > > >>> > > > > * The user attaches the volume as a disk to a VM for t= he > >> > first > >> > > >>> > > > > time > >> > > >>> > (if > >> > > >>> > > > the > >> > > >>> > > > > storage allocator picks the SolidFire plug-in, the > storage > >> > > >>> > > > > framework > >> > > >>> > > > invokes > >> > > >>> > > > > a method on the plug-in that creates a volume on the > >> > SAN...info > >> > > >>> > > > > like > >> > > >>> > > the > >> > > >>> > > > IQN > >> > > >>> > > > > of the SAN volume is recorded in the DB). > >> > > >>> > > > > > >> > > >>> > > > > * CitrixResourceBase's execute(AttachVolumeCommand) is > >> > > executed. > >> > > >>> > > > > It > >> > > >>> > > > > determines based on a flag passed in that the storage = in > >> > > question > >> > > >>> > > > > is > >> > > >>> > > > > "CloudStack-managed" storage (as opposed to > "traditional" > >> > > >>> > preallocated > >> > > >>> > > > > storage). This tells it to discover the iSCSI target. > Once > >> > > >>> > > > > discovered > >> > > >>> > > it > >> > > >>> > > > > determines if the iSCSI target already contains a > storage > >> > > >>> > > > > repository > >> > > >>> > > (it > >> > > >>> > > > > would if this were a re-attach situation). If it does > >> contain > >> > > an > >> > > >>> > > > > SR > >> > > >>> > > > already, > >> > > >>> > > > > then there should already be one VDI, as well. If ther= e > >> is no > >> > > SR, > >> > > >>> > > > > an > >> > > >>> > SR > >> > > >>> > > > is > >> > > >>> > > > > created and a single VDI is created within it (that > takes > >> up > >> > > about > >> > > >>> > > > > as > >> > > >>> > > > much > >> > > >>> > > > > space as was requested for the CloudStack volume). > >> > > >>> > > > > > >> > > >>> > > > > * The normal attach-volume logic continues (it depends > on > >> the > >> > > >>> > existence > >> > > >>> > > > of > >> > > >>> > > > > an SR and a VDI). > >> > > >>> > > > > > >> > > >>> > > > > The VMware case is essentially the same (mainly just > >> > substitute > >> > > >>> > > datastore > >> > > >>> > > > > for SR and VMDK for VDI). > >> > > >>> > > > > > >> > > >>> > > > > In both cases, all hosts in the cluster have discovere= d > >> the > >> > > iSCSI > >> > > >>> > > target, > >> > > >>> > > > > but only the host that is currently running the VM tha= t > is > >> > > using > >> > > >>> > > > > the > >> > > >>> > > VDI > >> > > >>> > > > (or > >> > > >>> > > > > VMKD) is actually using the disk. > >> > > >>> > > > > > >> > > >>> > > > > Live Migration should be OK because the hypervisors > >> > communicate > >> > > >>> > > > > with > >> > > >>> > > > > whatever metadata they have on the SR (or datastore). > >> > > >>> > > > > > >> > > >>> > > > > I see what you're saying with KVM, though. > >> > > >>> > > > > > >> > > >>> > > > > In that case, the hosts are clustered only in > CloudStack's > >> > > eyes. > >> > > >>> > > > > CS > >> > > >>> > > > controls > >> > > >>> > > > > Live Migration. You don't really need a clustered > >> filesystem > >> > on > >> > > >>> > > > > the > >> > > >>> > > LUN. > >> > > >>> > > > The > >> > > >>> > > > > LUN could be handed over raw to the VM using it. > >> > > >>> > > > > > >> > > >>> > > > > If there is a way for me to update the ACL list on the > >> SAN to > >> > > have > >> > > >>> > > only a > >> > > >>> > > > > single KVM host have access to the volume, that would = be > >> > ideal. > >> > > >>> > > > > > >> > > >>> > > > > Also, I agree I'll need to use iscsiadm to discover an= d > >> log > >> > in > >> > > to > >> > > >>> > > > > the > >> > > >>> > > > iSCSI > >> > > >>> > > > > target. I'll also need to take the resultant new devic= e > >> and > >> > > pass > >> > > >>> > > > > it > >> > > >>> > > into > >> > > >>> > > > the > >> > > >>> > > > > VM. > >> > > >>> > > > > > >> > > >>> > > > > Does this sound reasonable? Please call me out on > >> anything I > >> > > seem > >> > > >>> > > > incorrect > >> > > >>> > > > > about. :) > >> > > >>> > > > > > >> > > >>> > > > > Thanks for all the thought on this, Marcus! > >> > > >>> > > > > > >> > > >>> > > > > > >> > > >>> > > > > On Fri, Sep 13, 2013 at 8:25 PM, Marcus Sorensen < > >> > > >>> > shadowsor@gmail.com> > >> > > >>> > > > > wrote: > >> > > >>> > > > >> > >> > > >>> > > > >> Perfect. You'll have a domain def ( the VM), a disk > def, > >> and > >> > > the > >> > > >>> > > attach > >> > > >>> > > > >> the disk def to the vm. You may need to do your own > >> > > >>> > > > >> StorageAdaptor > >> > > >>> > and > >> > > >>> > > > run > >> > > >>> > > > >> iscsiadm commands to accomplish that, depending on ho= w > >> the > >> > > >>> > > > >> libvirt > >> > > >>> > > iscsi > >> > > >>> > > > >> works. My impression is that a 1:1:1 pool/lun/volume > >> isn't > >> > > how it > >> > > >>> > > works > >> > > >>> > > > on > >> > > >>> > > > >> xen at the momen., nor is it ideal. > >> > > >>> > > > >> > >> > > >>> > > > >> Your plugin will handle acls as far as which host can > see > >> > > which > >> > > >>> > > > >> luns > >> > > >>> > > as > >> > > >>> > > > >> well, I remember discussing that months ago, so that = a > >> disk > >> > > won't > >> > > >>> > > > >> be > >> > > >>> > > > >> connected until the hypervisor has exclusive access, = so > >> it > >> > > will > >> > > >>> > > > >> be > >> > > >>> > > safe > >> > > >>> > > > and > >> > > >>> > > > >> fence the disk from rogue nodes that cloudstack loses > >> > > >>> > > > >> connectivity > >> > > >>> > > > with. It > >> > > >>> > > > >> should revoke access to everything but the target > host... > >> > > Except > >> > > >>> > > > >> for > >> > > >>> > > > during > >> > > >>> > > > >> migration but we can discuss that later, there's a > >> migration > >> > > prep > >> > > >>> > > > process > >> > > >>> > > > >> where the new host can be added to the acls, and the > old > >> > host > >> > > can > >> > > >>> > > > >> be > >> > > >>> > > > removed > >> > > >>> > > > >> post migration. > >> > > >>> > > > >> > >> > > >>> > > > >> On Sep 13, 2013 8:16 PM, "Mike Tutkowski" < > >> > > >>> > > mike.tutkowski@solidfire.com > >> > > >>> > > > > > >> > > >>> > > > >> wrote: > >> > > >>> > > > >>> > >> > > >>> > > > >>> Yeah, that would be ideal. > >> > > >>> > > > >>> > >> > > >>> > > > >>> So, I would still need to discover the iSCSI target, > >> log in > >> > > to > >> > > >>> > > > >>> it, > >> > > >>> > > then > >> > > >>> > > > >>> figure out what /dev/sdX was created as a result (an= d > >> leave > >> > > it > >> > > >>> > > > >>> as > >> > > >>> > is > >> > > >>> > > - > >> > > >>> > > > do > >> > > >>> > > > >>> not format it with any file system...clustered or > not). > >> I > >> > > would > >> > > >>> > pass > >> > > >>> > > > that > >> > > >>> > > > >>> device into the VM. > >> > > >>> > > > >>> > >> > > >>> > > > >>> Kind of accurate? > >> > > >>> > > > >>> > >> > > >>> > > > >>> > >> > > >>> > > > >>> On Fri, Sep 13, 2013 at 8:07 PM, Marcus Sorensen < > >> > > >>> > > shadowsor@gmail.com> > >> > > >>> > > > >>> wrote: > >> > > >>> > > > >>>> > >> > > >>> > > > >>>> Look in LibvirtVMDef.java (I think) for the disk > >> > > definitions. > >> > > >>> > There > >> > > >>> > > > are > >> > > >>> > > > >>>> ones that work for block devices rather than files. > You > >> > can > >> > > >>> > > > >>>> piggy > >> > > >>> > > > back off > >> > > >>> > > > >>>> of the existing disk definitions and attach it to t= he > >> vm > >> > as > >> > > a > >> > > >>> > block > >> > > >>> > > > device. > >> > > >>> > > > >>>> The definition is an XML string per libvirt XML > format. > >> > You > >> > > may > >> > > >>> > want > >> > > >>> > > > to use > >> > > >>> > > > >>>> an alternate path to the disk rather than just > /dev/sdx > >> > > like I > >> > > >>> > > > mentioned, > >> > > >>> > > > >>>> there are by-id paths to the block devices, as well > as > >> > other > >> > > >>> > > > >>>> ones > >> > > >>> > > > that will > >> > > >>> > > > >>>> be consistent and easier for management, not sure h= ow > >> > > familiar > >> > > >>> > > > >>>> you > >> > > >>> > > > are with > >> > > >>> > > > >>>> device naming on Linux. > >> > > >>> > > > >>>> > >> > > >>> > > > >>>> On Sep 13, 2013 8:00 PM, "Marcus Sorensen" > >> > > >>> > > > >>>> > >> > > >>> > > > wrote: > >> > > >>> > > > >>>>> > >> > > >>> > > > >>>>> No, as that would rely on virtualized network/iscs= i > >> > > initiator > >> > > >>> > > inside > >> > > >>> > > > >>>>> the vm, which also sucks. I mean attach /dev/sdx > (your > >> > lun > >> > > on > >> > > >>> > > > hypervisor) as > >> > > >>> > > > >>>>> a disk to the VM, rather than attaching some image > >> file > >> > > that > >> > > >>> > > resides > >> > > >>> > > > on a > >> > > >>> > > > >>>>> filesystem, mounted on the host, living on a targe= t. > >> > > >>> > > > >>>>> > >> > > >>> > > > >>>>> Actually, if you plan on the storage supporting li= ve > >> > > migration > >> > > >>> > > > >>>>> I > >> > > >>> > > > think > >> > > >>> > > > >>>>> this is the only way. You can't put a filesystem o= n > it > >> > and > >> > > >>> > > > >>>>> mount > >> > > >>> > it > >> > > >>> > > > in two > >> > > >>> > > > >>>>> places to facilitate migration unless its a > clustered > >> > > >>> > > > >>>>> filesystem, > >> > > >>> > > in > >> > > >>> > > > which > >> > > >>> > > > >>>>> case you're back to shared mount point. > >> > > >>> > > > >>>>> > >> > > >>> > > > >>>>> As far as I'm aware, the xenserver SR style is > >> basically > >> > > LVM > >> > > >>> > with a > >> > > >>> > > > xen > >> > > >>> > > > >>>>> specific cluster management, a custom CLVM. They > don't > >> > use > >> > > a > >> > > >>> > > > filesystem > >> > > >>> > > > >>>>> either. > >> > > >>> > > > >>>>> > >> > > >>> > > > >>>>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski" > >> > > >>> > > > >>>>> wrote: > >> > > >>> > > > >>>>>> > >> > > >>> > > > >>>>>> When you say, "wire up the lun directly to the vm= ," > >> do > >> > you > >> > > >>> > > > >>>>>> mean > >> > > >>> > > > >>>>>> circumventing the hypervisor? I didn't think we > >> could do > >> > > that > >> > > >>> > > > >>>>>> in > >> > > >>> > > CS. > >> > > >>> > > > >>>>>> OpenStack, on the other hand, always circumvents > the > >> > > >>> > > > >>>>>> hypervisor, > >> > > >>> > > as > >> > > >>> > > > far as I > >> > > >>> > > > >>>>>> know. > >> > > >>> > > > >>>>>> > >> > > >>> > > > >>>>>> > >> > > >>> > > > >>>>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen = < > >> > > >>> > > > shadowsor@gmail.com> > >> > > >>> > > > >>>>>> wrote: > >> > > >>> > > > >>>>>>> > >> > > >>> > > > >>>>>>> Better to wire up the lun directly to the vm > unless > >> > > there is > >> > > >>> > > > >>>>>>> a > >> > > >>> > > good > >> > > >>> > > > >>>>>>> reason not to. > >> > > >>> > > > >>>>>>> > >> > > >>> > > > >>>>>>> On Sep 13, 2013 7:40 PM, "Marcus Sorensen" < > >> > > >>> > shadowsor@gmail.com> > >> > > >>> > > > >>>>>>> wrote: > >> > > >>> > > > >>>>>>>> > >> > > >>> > > > >>>>>>>> You could do that, but as mentioned I think its= a > >> > > mistake > >> > > >>> > > > >>>>>>>> to > >> > > >>> > go > >> > > >>> > > to > >> > > >>> > > > >>>>>>>> the trouble of creating a 1:1 mapping of CS > >> volumes to > >> > > luns > >> > > >>> > and > >> > > >>> > > > then putting > >> > > >>> > > > >>>>>>>> a filesystem on it, mounting it, and then > putting a > >> > > QCOW2 > >> > > >>> > > > >>>>>>>> or > >> > > >>> > > even > >> > > >>> > > > RAW disk > >> > > >>> > > > >>>>>>>> image on that filesystem. You'll lose a lot of > iops > >> > > along > >> > > >>> > > > >>>>>>>> the > >> > > >>> > > > way, and have > >> > > >>> > > > >>>>>>>> more overhead with the filesystem and its > >> journaling, > >> > > etc. > >> > > >>> > > > >>>>>>>> > >> > > >>> > > > >>>>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski" > >> > > >>> > > > >>>>>>>> wrote: > >> > > >>> > > > >>>>>>>>> > >> > > >>> > > > >>>>>>>>> Ah, OK, I didn't know that was such new ground > in > >> KVM > >> > > with > >> > > >>> > CS. > >> > > >>> > > > >>>>>>>>> > >> > > >>> > > > >>>>>>>>> So, the way people use our SAN with KVM and CS > >> today > >> > > is by > >> > > >>> > > > >>>>>>>>> selecting SharedMountPoint and specifying the > >> > location > >> > > of > >> > > >>> > > > >>>>>>>>> the > >> > > >>> > > > share. > >> > > >>> > > > >>>>>>>>> > >> > > >>> > > > >>>>>>>>> They can set up their share using Open iSCSI b= y > >> > > >>> > > > >>>>>>>>> discovering > >> > > >>> > > their > >> > > >>> > > > >>>>>>>>> iSCSI target, logging in to it, then mounting = it > >> > > somewhere > >> > > >>> > > > >>>>>>>>> on > >> > > >>> > > > their file > >> > > >>> > > > >>>>>>>>> system. > >> > > >>> > > > >>>>>>>>> > >> > > >>> > > > >>>>>>>>> Would it make sense for me to just do that > >> discovery, > >> > > >>> > > > >>>>>>>>> logging > >> > > >>> > > in, > >> > > >>> > > > >>>>>>>>> and mounting behind the scenes for them and > >> letting > >> > the > >> > > >>> > current > >> > > >>> > > > code manage > >> > > >>> > > > >>>>>>>>> the rest as it currently does? > >> > > >>> > > > >>>>>>>>> > >> > > >>> > > > >>>>>>>>> > >> > > >>> > > > >>>>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorens= en > >> > > >>> > > > >>>>>>>>> wrote: > >> > > >>> > > > >>>>>>>>>> > >> > > >>> > > > >>>>>>>>>> Oh, hypervisor snapshots are a bit different.= I > >> need > >> > > to > >> > > >>> > catch > >> > > >>> > > up > >> > > >>> > > > >>>>>>>>>> on the work done in KVM, but this is basicall= y > >> just > >> > > disk > >> > > >>> > > > snapshots + memory > >> > > >>> > > > >>>>>>>>>> dump. I still think disk snapshots would > >> preferably > >> > be > >> > > >>> > handled > >> > > >>> > > > by the SAN, > >> > > >>> > > > >>>>>>>>>> and then memory dumps can go to secondary > >> storage or > >> > > >>> > something > >> > > >>> > > > else. This is > >> > > >>> > > > >>>>>>>>>> relatively new ground with CS and KVM, so we > will > >> > > want to > >> > > >>> > see > >> > > >>> > > > how others are > >> > > >>> > > > >>>>>>>>>> planning theirs. > >> > > >>> > > > >>>>>>>>>> > >> > > >>> > > > >>>>>>>>>> On Sep 13, 2013 7:20 PM, "Marcus Sorensen" < > >> > > >>> > > shadowsor@gmail.com > >> > > >>> > > > > > >> > > >>> > > > >>>>>>>>>> wrote: > >> > > >>> > > > >>>>>>>>>>> > >> > > >>> > > > >>>>>>>>>>> Let me back up and say I don't think you'd > use a > >> > vdi > >> > > >>> > > > >>>>>>>>>>> style > >> > > >>> > on > >> > > >>> > > > an > >> > > >>> > > > >>>>>>>>>>> iscsi lun. I think you'd want to treat it as= a > >> RAW > >> > > >>> > > > >>>>>>>>>>> format. > >> > > >>> > > > Otherwise you're > >> > > >>> > > > >>>>>>>>>>> putting a filesystem on your lun, mounting i= t, > >> > > creating > >> > > >>> > > > >>>>>>>>>>> a > >> > > >>> > > > QCOW2 disk image, > >> > > >>> > > > >>>>>>>>>>> and that seems unnecessary and a performance > >> > killer. > >> > > >>> > > > >>>>>>>>>>> > >> > > >>> > > > >>>>>>>>>>> So probably attaching the raw iscsi lun as a > >> disk > >> > to > >> > > the > >> > > >>> > VM, > >> > > >>> > > > and > >> > > >>> > > > >>>>>>>>>>> handling snapshots on the San side via the > >> storage > >> > > >>> > > > >>>>>>>>>>> plugin > >> > > >>> > is > >> > > >>> > > > best. My > >> > > >>> > > > >>>>>>>>>>> impression from the storage plugin refactor > was > >> > that > >> > > >>> > > > >>>>>>>>>>> there > >> > > >>> > > was > >> > > >>> > > > a snapshot > >> > > >>> > > > >>>>>>>>>>> service that would allow the San to handle > >> > snapshots. > >> > > >>> > > > >>>>>>>>>>> > >> > > >>> > > > >>>>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen" < > >> > > >>> > > > shadowsor@gmail.com> > >> > > >>> > > > >>>>>>>>>>> wrote: > >> > > >>> > > > >>>>>>>>>>>> > >> > > >>> > > > >>>>>>>>>>>> Ideally volume snapshots can be handled by > the > >> SAN > >> > > back > >> > > >>> > end, > >> > > >>> > > > if > >> > > >>> > > > >>>>>>>>>>>> the SAN supports it. The cloudstack mgmt > server > >> > > could > >> > > >>> > > > >>>>>>>>>>>> call > >> > > >>> > > > your plugin for > >> > > >>> > > > >>>>>>>>>>>> volume snapshot and it would be hypervisor > >> > > agnostic. As > >> > > >>> > far > >> > > >>> > > > as space, that > >> > > >>> > > > >>>>>>>>>>>> would depend on how your SAN handles it. Wi= th > >> > ours, > >> > > we > >> > > >>> > carve > >> > > >>> > > > out luns from a > >> > > >>> > > > >>>>>>>>>>>> pool, and the snapshot spave comes from the > >> pool > >> > > and is > >> > > >>> > > > independent of the > >> > > >>> > > > >>>>>>>>>>>> LUN size the host sees. > >> > > >>> > > > >>>>>>>>>>>> > >> > > >>> > > > >>>>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski" > >> > > >>> > > > >>>>>>>>>>>> wrote: > >> > > >>> > > > >>>>>>>>>>>>> > >> > > >>> > > > >>>>>>>>>>>>> Hey Marcus, > >> > > >>> > > > >>>>>>>>>>>>> > >> > > >>> > > > >>>>>>>>>>>>> I wonder if the iSCSI storage pool type fo= r > >> > libvirt > >> > > >>> > > > >>>>>>>>>>>>> won't > >> > > >>> > > > work > >> > > >>> > > > >>>>>>>>>>>>> when you take into consideration hyperviso= r > >> > > snapshots? > >> > > >>> > > > >>>>>>>>>>>>> > >> > > >>> > > > >>>>>>>>>>>>> On XenServer, when you take a hypervisor > >> > snapshot, > >> > > the > >> > > >>> > VDI > >> > > >>> > > > for > >> > > >>> > > > >>>>>>>>>>>>> the snapshot is placed on the same storage > >> > > repository > >> > > >>> > > > >>>>>>>>>>>>> as > >> > > >>> > > the > >> > > >>> > > > volume is on. > >> > > >>> > > > >>>>>>>>>>>>> > >> > > >>> > > > >>>>>>>>>>>>> Same idea for VMware, I believe. > >> > > >>> > > > >>>>>>>>>>>>> > >> > > >>> > > > >>>>>>>>>>>>> So, what would happen in my case (let's sa= y > >> for > >> > > >>> > > > >>>>>>>>>>>>> XenServer > >> > > >>> > > and > >> > > >>> > > > >>>>>>>>>>>>> VMware for 4.3 because I don't support > >> hypervisor > >> > > >>> > snapshots > >> > > >>> > > > in 4.2) is I'd > >> > > >>> > > > >>>>>>>>>>>>> make an iSCSI target that is larger than > what > >> the > >> > > user > >> > > >>> > > > requested for the > >> > > >>> > > > >>>>>>>>>>>>> CloudStack volume (which is fine because o= ur > >> SAN > >> > > >>> > > > >>>>>>>>>>>>> thinly > >> > > >>> > > > provisions volumes, > >> > > >>> > > > >>>>>>>>>>>>> so the space is not actually used unless i= t > >> needs > >> > > to > >> > > >>> > > > >>>>>>>>>>>>> be). > >> > > >>> > > > The CloudStack > >> > > >>> > > > >>>>>>>>>>>>> volume would be the only "object" on the S= AN > >> > volume > >> > > >>> > until a > >> > > >>> > > > hypervisor > >> > > >>> > > > >>>>>>>>>>>>> snapshot is taken. This snapshot would als= o > >> > reside > >> > > on > >> > > >>> > > > >>>>>>>>>>>>> the > >> > > >>> > > > SAN volume. > >> > > >>> > > > >>>>>>>>>>>>> > >> > > >>> > > > >>>>>>>>>>>>> If this is also how KVM behaves and there = is > >> no > >> > > >>> > > > >>>>>>>>>>>>> creation > >> > > >>> > of > >> > > >>> > > > >>>>>>>>>>>>> LUNs within an iSCSI target from libvirt > >> (which, > >> > > even > >> > > >>> > > > >>>>>>>>>>>>> if > >> > > >>> > > > there were support > >> > > >>> > > > >>>>>>>>>>>>> for this, our SAN currently only allows on= e > >> LUN > >> > per > >> > > >>> > > > >>>>>>>>>>>>> iSCSI > >> > > >>> > > > target), then I > >> > > >>> > > > >>>>>>>>>>>>> don't see how using this model will work. > >> > > >>> > > > >>>>>>>>>>>>> > >> > > >>> > > > >>>>>>>>>>>>> Perhaps I will have to go enhance the > current > >> way > >> > > this > >> > > >>> > > works > >> > > >>> > > > >>>>>>>>>>>>> with DIR? > >> > > >>> > > > >>>>>>>>>>>>> > >> > > >>> > > > >>>>>>>>>>>>> What do you think? > >> > > >>> > > > >>>>>>>>>>>>> > >> > > >>> > > > >>>>>>>>>>>>> Thanks > >> > > >>> > > > >>>>>>>>>>>>> > >> > > >>> > > > >>>>>>>>>>>>> > >> > > >>> > > > >>>>>>>>>>>>> > >> > > >>> > > > >>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike > >> Tutkowski > >> > > >>> > > > >>>>>>>>>>>>> wrote: > >> > > >>> > > > >>>>>>>>>>>>>> > >> > > >>> > > > >>>>>>>>>>>>>> That appears to be the way it's used for > >> iSCSI > >> > > access > >> > > >>> > > today. > >> > > >>> > > > >>>>>>>>>>>>>> > >> > > >>> > > > >>>>>>>>>>>>>> I suppose I could go that route, too, but= I > >> > might > >> > > as > >> > > >>> > well > >> > > >>> > > > >>>>>>>>>>>>>> leverage what libvirt has for iSCSI > instead. > >> > > >>> > > > >>>>>>>>>>>>>> > >> > > >>> > > > >>>>>>>>>>>>>> > >> > > >>> > > > >>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus > >> Sorensen > >> > > >>> > > > >>>>>>>>>>>>>> wrote: > >> > > >>> > > > >>>>>>>>>>>>>>> > >> > > >>> > > > >>>>>>>>>>>>>>> To your question about SharedMountPoint,= I > >> > > believe > >> > > >>> > > > >>>>>>>>>>>>>>> it > >> > > >>> > > just > >> > > >>> > > > >>>>>>>>>>>>>>> acts like a > >> > > >>> > > > >>>>>>>>>>>>>>> 'DIR' storage type or something similar = to > >> > that. > >> > > The > >> > > >>> > > > end-user > >> > > >>> > > > >>>>>>>>>>>>>>> is > >> > > >>> > > > >>>>>>>>>>>>>>> responsible for mounting a file system > that > >> all > >> > > KVM > >> > > >>> > hosts > >> > > >>> > > > can > >> > > >>> > > > >>>>>>>>>>>>>>> access, > >> > > >>> > > > >>>>>>>>>>>>>>> and CloudStack is oblivious to what is > >> > providing > >> > > the > >> > > >>> > > > storage. > >> > > >>> > > > >>>>>>>>>>>>>>> It could > >> > > >>> > > > >>>>>>>>>>>>>>> be NFS, or OCFS2, or some other clustere= d > >> > > >>> > > > >>>>>>>>>>>>>>> filesystem, > >> > > >>> > > > >>>>>>>>>>>>>>> cloudstack just > >> > > >>> > > > >>>>>>>>>>>>>>> knows that the provided directory path h= as > >> VM > >> > > >>> > > > >>>>>>>>>>>>>>> images. > >> > > >>> > > > >>>>>>>>>>>>>>> > >> > > >>> > > > >>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus > >> > Sorensen > >> > > >>> > > > >>>>>>>>>>>>>>> wrote: > >> > > >>> > > > >>>>>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and iSCS= I > >> all > >> > at > >> > > the > >> > > >>> > same > >> > > >>> > > > >>>>>>>>>>>>>>> > time. > >> > > >>> > > > >>>>>>>>>>>>>>> > Multiples, in fact. > >> > > >>> > > > >>>>>>>>>>>>>>> > > >> > > >>> > > > >>>>>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike > >> > Tutkowski > >> > > >>> > > > >>>>>>>>>>>>>>> > wrote: > >> > > >>> > > > >>>>>>>>>>>>>>> >> Looks like you can have multiple > storage > >> > > pools: > >> > > >>> > > > >>>>>>>>>>>>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list > >> > > >>> > > > >>>>>>>>>>>>>>> >> Name State > Autostart > >> > > >>> > > > >>>>>>>>>>>>>>> >> > ----------------------------------------- > >> > > >>> > > > >>>>>>>>>>>>>>> >> default active yes > >> > > >>> > > > >>>>>>>>>>>>>>> >> iSCSI active no > >> > > >>> > > > >>>>>>>>>>>>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike > >> > > Tutkowski > >> > > >>> > > > >>>>>>>>>>>>>>> >> wrote: > >> > > >>> > > > >>>>>>>>>>>>>>> >>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> Reading through the docs you pointed > >> out. > >> > > >>> > > > >>>>>>>>>>>>>>> >>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> I see what you're saying now. > >> > > >>> > > > >>>>>>>>>>>>>>> >>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> You can create an iSCSI (libvirt) > >> storage > >> > > pool > >> > > >>> > based > >> > > >>> > > on > >> > > >>> > > > >>>>>>>>>>>>>>> >>> an iSCSI target. > >> > > >>> > > > >>>>>>>>>>>>>>> >>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> In my case, the iSCSI target would > only > >> > have > >> > > one > >> > > >>> > LUN, > >> > > >>> > > > so > >> > > >>> > > > >>>>>>>>>>>>>>> >>> there would only > >> > > >>> > > > >>>>>>>>>>>>>>> >>> be one iSCSI (libvirt) storage volum= e > in > >> > the > >> > > >>> > > (libvirt) > >> > > >>> > > > >>>>>>>>>>>>>>> >>> storage pool. > >> > > >>> > > > >>>>>>>>>>>>>>> >>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> As you say, my plug-in creates and > >> destroys > >> > > >>> > > > >>>>>>>>>>>>>>> >>> iSCSI > >> > > >>> > > > >>>>>>>>>>>>>>> >>> targets/LUNs on the > >> > > >>> > > > >>>>>>>>>>>>>>> >>> SolidFire SAN, so it is not a proble= m > >> that > >> > > >>> > > > >>>>>>>>>>>>>>> >>> libvirt > >> > > >>> > > does > >> > > >>> > > > >>>>>>>>>>>>>>> >>> not support > >> > > >>> > > > >>>>>>>>>>>>>>> >>> creating/deleting iSCSI targets/LUNs= . > >> > > >>> > > > >>>>>>>>>>>>>>> >>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> It looks like I need to test this a > bit > >> to > >> > > see > >> > > >>> > > > >>>>>>>>>>>>>>> >>> if > >> > > >>> > > > libvirt > >> > > >>> > > > >>>>>>>>>>>>>>> >>> supports > >> > > >>> > > > >>>>>>>>>>>>>>> >>> multiple iSCSI storage pools (as you > >> > > mentioned, > >> > > >>> > since > >> > > >>> > > > >>>>>>>>>>>>>>> >>> each one of its > >> > > >>> > > > >>>>>>>>>>>>>>> >>> storage pools would map to one of my > >> iSCSI > >> > > >>> > > > targets/LUNs). > >> > > >>> > > > >>>>>>>>>>>>>>> >>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM, Mik= e > >> > > Tutkowski > >> > > >>> > > > >>>>>>>>>>>>>>> >>> wrote= : > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> LibvirtStoragePoolDef has this type= : > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> public enum poolType { > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> ISCSI("iscsi"), > NETFS("netfs"), > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> LOGICAL("logical"), DIR("dir"), > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> RBD("rbd"); > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> String _poolType; > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> poolType(String poolType) { > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> _poolType =3D poolType; > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> } > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> @Override > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> public String toString() { > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> return _poolType; > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> } > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> } > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> It doesn't look like the iSCSI type > is > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> currently > >> > > >>> > > being > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> used, but I'm > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> understanding more what you were > >> getting > >> > at. > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> Can you tell me for today (say, 4.2= ), > >> when > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> someone > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> selects the > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> SharedMountPoint option and uses it > >> with > >> > > iSCSI, > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> is > >> > > >>> > > > that > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> the "netfs" option > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> above or is that just for NFS? > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> Thanks! > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM, > Marcus > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> Sorensen > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> wrote: > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> Take a look at this: > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > >> > > >>> > > http://libvirt.org/storage.html#StorageBackendISCSI > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> "Volumes must be pre-allocated on > the > >> > iSCSI > >> > > >>> > server, > >> > > >>> > > > and > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> cannot be > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> created via the libvirt APIs.", > which > >> I > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> believe > >> > > >>> > > your > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> plugin will take > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> care of. Libvirt just does the wor= k > of > >> > > logging > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> in > >> > > >>> > > and > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> hooking it up to > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> the VM (I believe the Xen api does > >> that > >> > > work > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> in > >> > > >>> > the > >> > > >>> > > > Xen > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> stuff). > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> What I'm not sure about is whether > >> this > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> provides > >> > > >>> > a > >> > > >>> > > > 1:1 > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> mapping, or if > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> it just allows you to register 1 > iscsi > >> > > device > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> as > >> > > >>> > a > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> pool. You may need > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> to write some test code or read up= a > >> bit > >> > > more > >> > > >>> > about > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> this. Let us know. > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> If it doesn't, you may just have t= o > >> write > >> > > your > >> > > >>> > own > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> storage adaptor > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> rather than changing > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> LibvirtStorageAdaptor.java. > >> > > >>> > > We > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> can cross that > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> bridge when we get there. > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> As far as interfacing with libvirt= , > >> see > >> > the > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> java > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> bindings doc. > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > >> > > http://libvirt.org/sources/java/javadoc/Normally, > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> you'll see a > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> connection object be made, then > calls > >> > made > >> > > to > >> > > >>> > that > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> 'conn' object. You > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> can look at the > LibvirtStorageAdaptor > >> to > >> > > see > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> how > >> > > >>> > > that > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> is done for > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> other pool types, and maybe write > some > >> > test > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> java > >> > > >>> > > code > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> to see if you > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> can interface with libvirt and > >> register > >> > > iscsi > >> > > >>> > > storage > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> pools before you > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> get started. > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM, > Mike > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> Tutkowski > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > wrote: > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > So, Marcus, I need to investigat= e > >> > libvirt > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > more, > >> > > >>> > > but > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > you figure it > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > supports > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > connecting to/disconnecting from > >> iSCSI > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > targets, > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > right? > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM, > >> Mike > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > Tutkowski > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > > >> wrote: > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> OK, thanks, Marcus > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> I am currently looking through > >> some of > >> > > the > >> > > >>> > > classes > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> you pointed out > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> last > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> week or so. > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26 PM= , > >> > Marcus > >> > > >>> > Sorensen > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> wrote: > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> Yes, my guess is that you will > >> need > >> > the > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> iscsi > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> initiator utilities > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> installed. There should be > >> standard > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> packages > >> > > >>> > > for > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> any distro. Then > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> you'd call > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> an agent storage adaptor to do > the > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> initiator > >> > > >>> > > > login. > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> See the info I > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> sent > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> previously about > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> LibvirtStorageAdaptor.java > >> > > >>> > and > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> libvirt iscsi > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> storage type > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> to see if that fits your need. > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike > >> > > Tutkowski" > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> wrote: > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Hi, > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> As you may remember, during t= he > >> 4.2 > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> release > >> > > >>> > I > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> developed a SolidFire > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> (storage) plug-in for > CloudStack. > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> This plug-in was invoked by t= he > >> > > storage > >> > > >>> > > > framework > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> at the necessary > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> times > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> so that I could dynamically > >> create > >> > and > >> > > >>> > delete > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> volumes on the > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> SolidFire SAN > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> (among other activities). > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> This is necessary so I can > >> > establish a > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> 1:1 > >> > > >>> > > > mapping > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> between a > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> CloudStack > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> volume and a SolidFire volume > for > >> > QoS. > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> In the past, CloudStack alway= s > >> > > expected > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> the > >> > > >>> > > > admin > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> to create large > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> volumes ahead of time and tho= se > >> > > volumes > >> > > >>> > would > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> likely house many > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> root and > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> data disks (which is not QoS > >> > > friendly). > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> To make this 1:1 mapping sche= me > >> > work, > >> > > I > >> > > >>> > needed > >> > > >>> > > > to > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> modify logic in > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> the > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> XenServer and VMware plug-ins > so > >> > they > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> could > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> create/delete storage > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> repositories/datastores as > >> needed. > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> For 4.3 I want to make this > >> happen > >> > > with > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> KVM. > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> I'm coming up to speed with h= ow > >> this > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> might > >> > > >>> > > work > >> > > >>> > > > on > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> KVM, but I'm > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> still > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> pretty new to KVM. > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Does anyone familiar with KVM > >> know > >> > > how I > >> > > >>> > will > >> > > >>> > > > need > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> to interact with > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> the > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> iSCSI target? For example, > will I > >> > > have to > >> > > >>> > > expect > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Open iSCSI will be > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> installed on the KVM host and > >> use it > >> > > for > >> > > >>> > this > >> > > >>> > > to > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> work? > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Thanks for any suggestions, > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Mike > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> -- > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Mike Tutkowski > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Senior CloudStack Developer, > >> > SolidFire > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Inc. > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> e: > mike.tutkowski@solidfire.com > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> o: 303.746.7302 > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Advancing the way the world > uses > >> the > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> cloud=99 > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> -- > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> Mike Tutkowski > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> Senior CloudStack Developer, > >> SolidFire > >> > > Inc. > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> e: mike.tutkowski@solidfire.com > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> o: 303.746.7302 > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> Advancing the way the world use= s > >> the > >> > > cloud=99 > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > -- > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > Mike Tutkowski > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > Senior CloudStack Developer, > >> SolidFire > >> > > Inc. > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > e: mike.tutkowski@solidfire.com > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > o: 303.746.7302 > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > Advancing the way the world uses > the > >> > > cloud=99 > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> -- > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> Mike Tutkowski > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> Senior CloudStack Developer, > SolidFire > >> > Inc. > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> e: mike.tutkowski@solidfire.com > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> o: 303.746.7302 > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> Advancing the way the world uses th= e > >> > cloud=99 > >> > > >>> > > > >>>>>>>>>>>>>>> >>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> -- > >> > > >>> > > > >>>>>>>>>>>>>>> >>> Mike Tutkowski > >> > > >>> > > > >>>>>>>>>>>>>>> >>> Senior CloudStack Developer, SolidFi= re > >> Inc. > >> > > >>> > > > >>>>>>>>>>>>>>> >>> e: mike.tutkowski@solidfire.com > >> > > >>> > > > >>>>>>>>>>>>>>> >>> o: 303.746.7302 > >> > > >>> > > > >>>>>>>>>>>>>>> >>> Advancing the way the world uses the > >> cloud=99 > >> > > >>> > > > >>>>>>>>>>>>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >> -- > >> > > >>> > > > >>>>>>>>>>>>>>> >> Mike Tutkowski > >> > > >>> > > > >>>>>>>>>>>>>>> >> Senior CloudStack Developer, SolidFir= e > >> Inc. > >> > > >>> > > > >>>>>>>>>>>>>>> >> e: mike.tutkowski@solidfire.com > >> > > >>> > > > >>>>>>>>>>>>>>> >> o: 303.746.7302 > >> > > >>> > > > >>>>>>>>>>>>>>> >> Advancing the way the world uses the > >> cloud=99 > >> > > >>> > > > >>>>>>>>>>>>>> > >> > > >>> > > > >>>>>>>>>>>>>> > >> > > >>> > > > >>>>>>>>>>>>>> > >> > > >>> > > > >>>>>>>>>>>>>> > >> > > >>> > > > >>>>>>>>>>>>>> -- > >> > > >>> > > > >>>>>>>>>>>>>> Mike Tutkowski > >> > > >>> > > > >>>>>>>>>>>>>> Senior CloudStack Developer, SolidFire In= c. > >> > > >>> > > > >>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com > >> > > >>> > > > >>>>>>>>>>>>>> o: 303.746.7302 > >> > > >>> > > > >>>>>>>>>>>>>> Advancing the way the world uses the clou= d=99 > >> > > >>> > > > >>>>>>>>>>>>> > >> > > >>> > > > >>>>>>>>>>>>> > >> > > >>> > > > >>>>>>>>>>>>> > >> > > >>> > > > >>>>>>>>>>>>> > >> > > >>> > > > >>>>>>>>>>>>> -- > >> > > >>> > > > >>>>>>>>>>>>> Mike Tutkowski > >> > > >>> > > > >>>>>>>>>>>>> Senior CloudStack Developer, SolidFire Inc= . > >> > > >>> > > > >>>>>>>>>>>>> e: mike.tutkowski@solidfire.com > >> > > >>> > > > >>>>>>>>>>>>> o: 303.746.7302 > >> > > >>> > > > >>>>>>>>>>>>> Advancing the way the world uses the cloud= =99 > >> > > >>> > > > >>>>>>>>> > >> > > >>> > > > >>>>>>>>> > >> > > >>> > > > >>>>>>>>> > >> > > >>> > > > >>>>>>>>> > >> > > >>> > > > >>>>>>>>> -- > >> > > >>> > > > >>>>>>>>> Mike Tutkowski > >> > > >>> > > > >>>>>>>>> Senior CloudStack Developer, SolidFire Inc. > >> > > >>> > > > >>>>>>>>> e: mike.tutkowski@solidfire.com > >> > > >>> > > > >>>>>>>>> o: 303.746.7302 > >> > > >>> > > > >>>>>>>>> Advancing the way the world uses the cloud=99 > >> > > >>> > > > >>>>>> > >> > > >>> > > > >>>>>> > >> > > >>> > > > >>>>>> > >> > > >>> > > > >>>>>> > >> > > >>> > > > >>>>>> -- > >> > > >>> > > > >>>>>> Mike Tutkowski > >> > > >>> > > > >>>>>> Senior CloudStack Developer, SolidFire Inc. > >> > > >>> > > > >>>>>> e: mike.tutkowski@solidfire.com > >> > > >>> > > > >>>>>> o: 303.746.7302 > >> > > >>> > > > >>>>>> Advancing the way the world uses the cloud=99 > >> > > >>> > > > >>> > >> > > >>> > > > >>> > >> > > >>> > > > >>> > >> > > >>> > > > >>> > >> > > >>> > > > >>> -- > >> > > >>> > > > >>> Mike Tutkowski > >> > > >>> > > > >>> Senior CloudStack Developer, SolidFire Inc. > >> > > >>> > > > >>> e: mike.tutkowski@solidfire.com > >> > > >>> > > > >>> o: 303.746.7302 > >> > > >>> > > > >>> Advancing the way the world uses the cloud=99 > >> > > >>> > > > > > >> > > >>> > > > > > >> > > >>> > > > > > >> > > >>> > > > > > >> > > >>> > > > > -- > >> > > >>> > > > > Mike Tutkowski > >> > > >>> > > > > Senior CloudStack Developer, SolidFire Inc. > >> > > >>> > > > > e: mike.tutkowski@solidfire.com > >> > > >>> > > > > o: 303.746.7302 > >> > > >>> > > > > Advancing the way the world uses the cloud=99 > >> > > >>> > > > > >> > > >>> > > > >> > > >>> > > > >> > > >>> > > > >> > > >>> > > -- > >> > > >>> > > *Mike Tutkowski* > >> > > >>> > > *Senior CloudStack Developer, SolidFire Inc.* > >> > > >>> > > e: mike.tutkowski@solidfire.com > >> > > >>> > > o: 303.746.7302 > >> > > >>> > > Advancing the way the world uses the > >> > > >>> > > cloud > >> > > >>> > > *=99* > >> > > >>> > > > >> > > >>> > > >> > > >>> > >> > > >>> > >> > > >>> > >> > > >>> -- > >> > > >>> *Mike Tutkowski* > >> > > >>> *Senior CloudStack Developer, SolidFire Inc.* > >> > > >>> e: mike.tutkowski@solidfire.com > >> > > >>> o: 303.746.7302 > >> > > >>> Advancing the way the world uses the > >> > > >>> cloud > >> > > >>> *=99* > >> > > > >> > > >> > > >> > > >> > -- > >> > *Mike Tutkowski* > >> > *Senior CloudStack Developer, SolidFire Inc.* > >> > e: mike.tutkowski@solidfire.com > >> > o: 303.746.7302 > >> > Advancing the way the world uses the > >> > cloud > >> > *=99* > >> > > >> > > > > > > > > -- > > *Mike Tutkowski* > > *Senior CloudStack Developer, SolidFire Inc.* > > e: mike.tutkowski@solidfire.com > > o: 303.746.7302 > > Advancing the way the world uses the cloud< > http://solidfire.com/solution/overview/?video=3Dplay> > > *=99* > > > > > > -- > *Mike Tutkowski* > *Senior CloudStack Developer, SolidFire Inc.* > e: mike.tutkowski@solidfire.com > o: 303.746.7302 > Advancing the way the world uses the > cloud > *=99* > --001a11c2c674d3eb5c04e6a8d2df--