From: Alex Huang <Alex.Huang@citrix.com>
To: cloudstack-dev@incubator.apache.org
Date: Tue, 15 Jan 2013 20:39:10 -0800
Subject: RE: new storage framework update

I'm interested in this. Please add me to whatever medium you guys are using.

--Alex

> -----Original Message-----
> From: David Nalley [mailto:david@gnsa.us]
> Sent: Tuesday, January 15, 2013 7:09 PM
> To: cloudstack-dev@incubator.apache.org
> Subject: Re: new storage framework update
>
> On Tue, Jan 15, 2013 at 8:35 PM, Edison Su wrote:
> > After a lengthy discussion (more than two hours) with John on Skype, I think
> > we figured out the difference between us. The API proposed by John is more at
> > the execution level, which is where the input/output streams come from; it
> > assumes that both the source and the destination object will be operated on
> > in the same place (either inside the SSVM or on the hypervisor host). The API
> > I proposed is more about how to hook a vendor's own storage into CloudStack's
> > mgt server, so the vendor can replace the process of how and where the
> > storage is operated on.
> >
> > Let's talk about the execution model first, since it has a huge impact on the
> > design. The execution model is about where the operations issued by the mgt
> > server are executed.
> > Currently there is no universal execution model; it's quite different for
> > each hypervisor.
> >
> > For KVM, the mgt server sends commands to the KVM host; a Java agent running
> > on the KVM host executes the commands sent by the mgt server.
> >
> > For XenServer, most commands are executed on the mgt server, which calls XAPI
> > and talks to the XenServer host. But we do put some Python code on the
> > XenServer host for operations that XAPI does not support.
> >
> > For VMware, most commands are executed on the mgt server, which talks to the
> > vCenter API, while some of them are executed inside the SSVM.
> >
> > Because of the different execution models, we run into the problem of how and
> > where to access a storage device. For example, say there is a storage box
> > that has its own management API. If I want to create a volume on that box,
> > where should I call the box's create-volume API? If we follow the execution
> > models above, we need to call the API in different places and, even worse,
> > write the API call in different languages: for KVM you may need to write Java
> > code in the KVM agent, for XenServer a XAPI Python plugin, for VMware Java
> > code inside the SSVM, and so on.
> >
> > But if the storage box already has a management API, why not just call it
> > inside the CloudStack mgt server, so the device vendor writes the Java code
> > once for all the different hypervisors? If we don't enforce the execution
> > model, then the storage framework should have a hook in the management
> > server, and the device vendor can decide where to execute the commands sent
> > by the mgt server.
> >
> > That's what my datastoredriver layer is for. Take the take-snapshot diagram
> > as an example:
> > https://cwiki.apache.org/confluence/download/attachments/30741569/take+snapshot+sequence.png?version=1&modificationDate=1358189965000
> >
> > The datastoredriver runs inside the mgt server, but the driver itself decides
> > where to execute the "take snapshot" API: it can send a command to the
> > hypervisor host, call the storage box's API directly, call the hypervisor's
> > own API directly, or call another service running outside of the CloudStack
> > mgt server. It's all up to the implementation of the driver.
> >
> > Does it make sense? If so, the device driver should not take an input/output
> > stream as a parameter, since that enforces the execution model, which I don't
> > think is necessary.
> >
> > BTW, John and I will discuss the matter tomorrow on Skype; if you want to
> > join, please let me know.
>
> Is this text conversation on Skype? If so, why not do this on IRC in
> -meeting or -dev? We can log it all and have more people join.
>
> --David
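
To make the two proposals concrete, here is a minimal sketch in Java (the language of the mgt server and the KVM agent). All interface and type names below are hypothetical and are not CloudStack's actual storage SPI; the sketch only illustrates the two API shapes being argued about. The execution-level shape takes raw data streams, so it has to run wherever both streams are reachable; the driver-level shape takes object descriptors, runs in the mgt server, and leaves the execution location up to the driver.

import java.io.InputStream;
import java.io.OutputStream;

// Execution-level shape (as John's proposal is described above): the call
// operates on raw streams, so it must run where both endpoints are reachable,
// e.g. inside the SSVM or on the hypervisor host.
interface ExecutionLevelStorageApi {
    void copyObject(InputStream source, OutputStream destination);
}

// Driver-level shape (as Edison's proposal is described above): the call
// operates on object descriptors and runs in the mgt server; the driver
// decides where the real work happens.
interface DataStoreDriverStyleApi {
    void takeSnapshot(VolumeDescriptor volume, SnapshotDescriptor snapshot,
                      CompletionCallback callback);
}

// Placeholder types so the sketch is self-contained; real descriptors would
// carry IDs, paths, and data-store references.
class VolumeDescriptor {}
class SnapshotDescriptor {}
interface CompletionCallback {
    void onCompletion(boolean success, String detail);
}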
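
Building on the hypothetical interfaces above, a vendor driver could then choose its execution path per call, which is the point of the take-snapshot example: the same driver, running in the mgt server, can either hit the storage box's management API directly or fall back to sending a command to the per-host agent. Again, VendorBoxClient, HypervisorAgentChannel, and TakeSnapshotCommand are made-up names for this sketch, not real CloudStack classes.

// A rough illustration of a vendor driver that picks where the snapshot is
// actually taken. Hypothetical types only.
class ExampleVendorDriver implements DataStoreDriverStyleApi {
    private final VendorBoxClient boxClient;      // vendor's own management API client
    private final HypervisorAgentChannel agent;   // channel to a per-host agent (e.g. the KVM agent)

    ExampleVendorDriver(VendorBoxClient boxClient, HypervisorAgentChannel agent) {
        this.boxClient = boxClient;
        this.agent = agent;
    }

    @Override
    public void takeSnapshot(VolumeDescriptor volume, SnapshotDescriptor snapshot,
                             CompletionCallback callback) {
        if (boxClient.supportsNativeSnapshots()) {
            // Option 1: call the storage box's API directly from the mgt server;
            // no hypervisor-side code and no SSVM involvement.
            boxClient.createSnapshot(volume, snapshot);
            callback.onCompletion(true, "snapshot created via storage box API");
        } else {
            // Option 2: delegate to the hypervisor host's agent and let it do
            // the work where the volume is attached.
            boolean ok = agent.send(new TakeSnapshotCommand(volume, snapshot));
            callback.onCompletion(ok, "snapshot delegated to hypervisor agent");
        }
    }
}

// Supporting placeholders for the sketch.
interface VendorBoxClient {
    boolean supportsNativeSnapshots();
    void createSnapshot(VolumeDescriptor volume, SnapshotDescriptor snapshot);
}
interface HypervisorAgentChannel {
    boolean send(TakeSnapshotCommand cmd);
}
class TakeSnapshotCommand {
    TakeSnapshotCommand(VolumeDescriptor volume, SnapshotDescriptor snapshot) {}
}

Either branch completes the same callback, which is why a driver-level API of this shape does not need to commit to any particular execution model.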