From: Marcus Sorensen
To: cloudstack-dev@incubator.apache.org
Date: Fri, 8 Mar 2013 23:04:12 -0700
Subject: Re: Making use of a 4.2 storage plug-in from the GUI or API

CloudStack wouldn't (and shouldn't) even get to the point of letting you do
that without zone-wide storage. It wouldn't consider the LUN
available/reachable in that cluster; it would be like trying to start a VM
in a cluster it's not associated with. If the volume isn't on a pool that is
"in" the cluster, CloudStack won't even try to use it there.

I'm not sure whether zone-wide and cluster-wide are mutually exclusive, but
I don't think so. You might have to choose at the time of adding the primary
storage, but hopefully it's implemented such that storage capable of being
zone-wide can be either. The functional spec probably has the answers.

On Fri, Mar 8, 2013 at 6:45 PM, Mike Tutkowski wrote:
> There is a method to implement called grantAccess.
>
> Edison was telling me it is here where I enforce an ACL. If a data disk
> were being migrated from one cluster to another, wouldn't this grantAccess
> method be called when the hypervisor in the other cluster was ready to
> access the volume? At that point, I could add the IQN of the host in
> question to the ACL, and the volume could be accessed by the host in the
> new cluster.
>
> Maybe I'm missing something here?
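[Editor's note: Mike's description of the grantAccess hook, adding the requesting host's IQN to the volume's ACL, can be sketched roughly as below. This is a conceptual sketch, not the real plug-in interface: only the method name grantAccess comes from the thread, while the class name, the in-memory ACL set, and the revoke/check helpers are invented for illustration.]

```java
import java.util.HashSet;
import java.util.Set;

// Conceptual sketch of the ACL bookkeeping Mike describes. Only the method
// name grantAccess comes from the thread; the class name, in-memory set,
// and revoke/check helpers are invented for illustration. The framework
// would call grantAccess when a hypervisor host (identified here by its
// iSCSI initiator IQN) needs to attach the volume, e.g. after a data disk
// migrates to a host in another cluster.
public class IscsiVolumeAcl {

    // IQNs of hosts currently allowed to attach this iSCSI target/LUN.
    private final Set<String> allowedInitiators = new HashSet<>();

    // Grant the host access; returns true if the IQN was newly added.
    public boolean grantAccess(String hostIqn) {
        return allowedInitiators.add(hostIqn);
    }

    // Revoke the host's access, e.g. after detach or migration away.
    public boolean revokeAccess(String hostIqn) {
        return allowedInitiators.remove(hostIqn);
    }

    // Check whether a host may attach the volume.
    public boolean isAllowed(String hostIqn) {
        return allowedInitiators.contains(hostIqn);
    }
}
```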
>
> On Fri, Mar 8, 2013 at 6:40 PM, Marcus Sorensen wrote:
>> On that topic, I hope there's a method in the volume service that allows
>> plugin writers to handle volume copy directly.
>>
>> On Mar 8, 2013 6:32 PM, "Marcus Sorensen" wrote:
>>> It just depends. A VM will generally be tied to a cluster. There's
>>> technically no reason why someone couldn't make a giant cluster if your
>>> storage supports it, so on that side cluster-based seems fine. But if
>>> you end up wanting to move a data disk from one VM to another, and they
>>> happen to be in different clusters, that's expensive if you don't have
>>> zone-wide storage. It usually involves dumping and re-importing, and if
>>> the same SAN is hosting multiple clusters, it may seem silly to dump and
>>> copy back to the same SAN just so that the disk is associated with
>>> another cluster.
>>>
>>> On Mar 8, 2013 6:22 PM, "Mike Tutkowski" wrote:
>>>> Thanks for that explanation, Marcus.
>>>>
>>>> I believe the primary use case for me is to allow a cluster of hosts
>>>> (XenServer, VMware, or KVM in particular) to share access to my iSCSI
>>>> target (we would have a mapping of one VM per iSCSI target, or one
>>>> data disk per iSCSI target).
>>>>
>>>> I can't really see why hosts outside of the cluster would need access
>>>> to it unless you actually are migrating the VM that's running on that
>>>> volume to another cluster.
>>>>
>>>> On Fri, Mar 8, 2013 at 6:16 PM, Marcus Sorensen wrote:
>>>>> Cluster-wide is good for storage that requires some sort of
>>>>> organization at the host level: for example, mounted file systems
>>>>> that rely on cluster locking, like OCFS, GFS, or clustered LVM, where
>>>>> hosts that aren't in a cluster can't make use of the storage.
>>>>> Xen's SRs are sort of like this as well; in fact, they're almost
>>>>> identical to clustered LVM in that they carve volumes out of a pool
>>>>> or LUN, leveraging locking mechanisms in the Xen cluster.
>>>>> Cluster-wide is also good for topologies that are simply laid out in
>>>>> a way that makes sense for it, for example if you had a 10G switch
>>>>> dedicated to a particular cluster, with NFS services over it.
>>>>>
>>>>> It boils down to whether every host in the zone can access and make
>>>>> use of the storage, or whether only certain hosts can.
>>>>>
>>>>> On Mar 8, 2013 6:04 PM, "Mike Tutkowski" <mike.tutkowski@solidfire.com> wrote:
>>>>>> Hey Edison,
>>>>>>
>>>>>> It is entirely possible that zone-wide for my plug-in would make
>>>>>> sense. I'm trying to understand what restrictions, if any, are in
>>>>>> place if it is zone-wide versus cluster-wide.
>>>>>>
>>>>>> In my case, the plug-in I'm developing will be creating an iSCSI
>>>>>> target (volume/LUN), nothing NFS related, and if it is best to make
>>>>>> that available at a zone level, that is totally fine with me.
>>>>>>
>>>>>> What would you suggest for my situation?
>>>>>>
>>>>>> Thanks!
>>>>>>
>>>>>> On Fri, Mar 8, 2013 at 5:35 PM, Edison Su wrote:
>>>>>>> That API will be easy to add, and yes, I'll add it next week.
>>>>>>>
>>>>>>> In the last email, I just gave zone-wide primary storage as an
>>>>>>> example, and I thought your storage box would be zone-wide?
>>>>>>> As you can see, the createstoragepoolcmd API is quite flexible; it
>>>>>>> can be used for zone-wide or cluster-wide storage, and so can the
>>>>>>> storage plug-in.
>>>>>>>
>>>>>>> From: Mike Tutkowski [mailto:mike.tutkowski@solidfire.com]
>>>>>>> Sent: Friday, March 08, 2013 4:09 PM
>>>>>>> To: Edison Su
>>>>>>> Cc: cloudstack-dev@incubator.apache.org
>>>>>>> Subject: Re: Making use of a 4.2 storage plug-in from the GUI or API
>>>>>>>
>>>>>>> OK, cool. Thanks for the info, Edison.
>>>>>>>
>>>>>>> When you say, "One API is missing," does that mean you're still
>>>>>>> working on implementing that functionality?
>>>>>>>
>>>>>>> Also, it sounds like these plug-ins are associated with zone-wide
>>>>>>> primary storage. I thought zone-wide primary storage wasn't
>>>>>>> available for all hypervisors?
>>>>>>>
>>>>>>> This is from a different e-mail you sent out:
>>>>>>>
>>>>>>> "XenServer and VMware don't support zone-wide primary storage;
>>>>>>> currently, this feature is only for NFS/Ceph on KVM. And I think it
>>>>>>> should be useful for your storage box? I am thinking per data
>>>>>>> volume per LUN for XenServer."
>>>>>>>
>>>>>>> I'm not sure how my plug-in would work with XenServer, VMware, etc.
>>>>>>> if it has to be zone-wide.
>>>>>>>
>>>>>>> Can you clarify this for me?
>>>>>>>
>>>>>>> Thanks!
>>>>>>>
>>>>>>> On Fri, Mar 8, 2013 at 4:33 PM, Edison Su wrote:
>>>>>>>
>>>>>>> One API is missing: liststorageproviderscmd, which will list all
>>>>>>> the storage providers registered in the management server.
>>>>>>>
>>>>>>> When adding a zone-wide storage pool in the UI, the UI will show a
>>>>>>> drop-down list of all the primary storage providers. The user will
>>>>>>> choose one of them and select the other parameters for the storage
>>>>>>> they want to add. At the end, the UI will call createstoragepoolcmd
>>>>>>> with provider=<the storage provider UUID returned from
>>>>>>> liststorageproviderscmd>, scope=zone, and the other input
>>>>>>> parameters. The management server will then call the corresponding
>>>>>>> storage provider, based on the provider UUID, to register the
>>>>>>> storage into CloudStack.
>>>>>>>
>>>>>>> From: Mike Tutkowski [mailto:mike.tutkowski@solidfire.com]
>>>>>>> Sent: Friday, March 08, 2013 2:46 PM
>>>>>>> To: cloudstack-dev@incubator.apache.org
>>>>>>> Cc: Edison Su
>>>>>>> Subject: Making use of a 4.2 storage plug-in from the GUI or API
>>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> As you may remember, I'm leveraging Edison's new (4.2) storage
>>>>>>> plug-in framework to build what is probably the first such plug-in
>>>>>>> for CloudStack.
>>>>>>>
>>>>>>> I was wondering, does anyone know how to make the system aware of
>>>>>>> the plug-in?
>>>>>>> I believe once the plug-in is ready (i.e. usable), the intent is to
>>>>>>> be able to select it when creating primary storage (instead of
>>>>>>> having to select a pre-existing iSCSI target).
>>>>>>>
>>>>>>> I'm curious how to get this working (i.e. select my plug-in) in the
>>>>>>> GUI and via the API.
>>>>>>>
>>>>>>> Thanks!
>>>>>>>
>>>>>>> --
>>>>>>> Mike Tutkowski
>>>>>>> Senior CloudStack Developer, SolidFire Inc.
>>>>>>> e: mike.tutkowski@solidfire.com
>>>>>>> o: 303.746.7302
>>>>>>> Advancing the way the world uses the cloud
>>>>>>> <http://solidfire.com/solution/overview/?video=play>
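
[Editor's note: Edison's add-storage flow above (the UI calls liststorageproviderscmd, then createstoragepoolcmd with a provider UUID and a scope) can be sketched as a simple request builder. This is a hypothetical sketch: CloudStack API calls are plain HTTP requests, but the command string, parameter names, endpoint, and values shown here are assumptions for illustration, not confirmed 4.2 API details.]

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of the request the UI would issue at the end of the
// flow Edison describes. The command and parameter names are assumptions.
public class CreateStoragePoolRequest {

    // Builds the query URL for a zone-wide (or cluster-wide) pool creation.
    public static String build(String endpoint, String providerUuid,
                               String scope, String zoneId) {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("command", "createStoragePool"); // assumed command name
        params.put("provider", providerUuid);       // from liststorageproviderscmd
        params.put("scope", scope);                 // "zone" or "cluster"
        params.put("zoneid", zoneId);

        StringBuilder url = new StringBuilder(endpoint).append('?');
        boolean first = true;
        for (Map.Entry<String, String> e : params.entrySet()) {
            if (!first) {
                url.append('&');
            }
            url.append(e.getKey()).append('=').append(encode(e.getValue()));
            first = false;
        }
        return url.toString();
    }

    private static String encode(String value) {
        try {
            return URLEncoder.encode(value, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new IllegalStateException(e); // UTF-8 is always available
        }
    }
}
```

A real call would also carry the API key and signature that CloudStack requires; those are omitted here to keep the sketch focused on the provider/scope parameters discussed in the thread.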