From: Mike Tutkowski <mike.tutkowski@solidfire.com>
To: dev@cloudstack.apache.org
Date: Mon, 3 Jun 2013 15:10:49 -0600
Subject: Re: [MERGE] disk_io_throttling to MASTER

As far as I know, the Iscsi type is used only by XenServer when you want to
set up Primary Storage that is directly based on an iSCSI target. This
allows you to skip the step of going to the hypervisor and creating a
storage repository based on that iSCSI target, as CloudStack does that part
for you. I think this is only supported for XenServer; for all other
hypervisors, you must first go to the hypervisor and perform this step
manually.

I don't really know what RBD is.
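For concreteness, here is a minimal sketch of the distinction described
above. Everything in it is a hypothetical illustration (the class, method,
and tiny stand-in enum are invented for this example and are not actual
CloudStack code):

    // Hypothetical illustration only -- not actual CloudStack code.
    public class PrimaryStorageSetupExample {

        // A tiny stand-in for CloudStack's Storage.StoragePoolType enum,
        // which is quoted in full further down this thread.
        enum PoolType { NetworkFilesystem, Iscsi, PreSetup }

        // For Iscsi-type primary storage on XenServer, CloudStack creates
        // the storage repository (SR) from the iSCSI target itself; in the
        // other cases discussed here, the admin pre-creates the
        // hypervisor-side structure manually.
        static boolean cloudStackCreatesHypervisorStructure(PoolType type,
                                                            String hypervisor) {
            return type == PoolType.Iscsi && "XenServer".equals(hypervisor);
        }

        public static void main(String[] args) {
            System.out.println(cloudStackCreatesHypervisorStructure(
                    PoolType.Iscsi, "XenServer")); // true
            System.out.println(cloudStackCreatesHypervisorStructure(
                    PoolType.Iscsi, "KVM"));       // false
        }
    }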
On Mon, Jun 3, 2013 at 2:13 PM, John Burwell <jburwell@basho.com> wrote:

> Mike,
>
> Reading through the code, what is the difference between the Iscsi and
> Dynamic types? Why isn't RBD considered Dynamic?
>
> Thanks,
> -John
>
> On Jun 3, 2013, at 3:46 PM, Mike Tutkowski
> <mike.tutkowski@solidfire.com> wrote:
>
>> This new type of storage is defined in the Storage.StoragePoolType class
>> (called Dynamic):
>>
>>     public static enum StoragePoolType {
>>         Filesystem(false),        // local directory
>>         NetworkFilesystem(true),  // NFS or CIFS
>>         IscsiLUN(true),           // shared LUN, with a cluster-fs overlay
>>         Iscsi(true),              // e.g. ZFS Comstar
>>         ISO(false),               // for ISO images
>>         LVM(false),               // XenServer local LVM SR
>>         CLVM(true),
>>         RBD(true),
>>         SharedMountPoint(true),
>>         VMFS(true),               // VMware VMFS storage
>>         PreSetup(true),           // for XenServer; the storage pool is set up by customers
>>         EXT(false),               // XenServer local EXT SR
>>         OCFS2(true),
>>         Dynamic(true);            // dynamic, zone-wide storage (e.g. SolidFire)
>>
>>         boolean shared;
>>
>>         StoragePoolType(boolean shared) {
>>             this.shared = shared;
>>         }
>>
>>         public boolean isShared() {
>>             return shared;
>>         }
>>     }
>>
>> On Mon, Jun 3, 2013 at 1:41 PM, Mike Tutkowski
>> <mike.tutkowski@solidfire.com> wrote:
>>
>>> For example, let's say another storage company wants to implement a
>>> plug-in to leverage its Quality of Service feature. It would be
>>> dynamic, zone-wide storage as well. They would only need to implement
>>> a storage plug-in, as I've made the necessary changes to the
>>> hypervisor-attach logic to support their plug-in.
>>>
>>> On Mon, Jun 3, 2013 at 1:39 PM, Mike Tutkowski
>>> <mike.tutkowski@solidfire.com> wrote:
>>>
>>>> Oh, sorry to imply the XenServer code is SolidFire specific. It is
>>>> not.
>>>>
>>>> The XenServer attach logic is now aware of dynamic, zone-wide storage
>>>> (and SolidFire is an implementation of this kind of storage). This
>>>> kind of storage is new to 4.2 with Edison's storage framework
>>>> changes.
>>>>
>>>> Edison created a new framework that supported the creation and
>>>> deletion of volumes dynamically. However, when I visited with him in
>>>> Portland back in April, we realized that it was not complete: there
>>>> was nothing CloudStack could do with these volumes unless the attach
>>>> logic was changed to recognize this new type of storage and create
>>>> the appropriate hypervisor data structure.
>>>>
>>>> On Mon, Jun 3, 2013 at 1:28 PM, John Burwell <jburwell@basho.com> wrote:
>>>>
>>>>> Mike,
>>>>>
>>>>> It is generally odd to me that any operation in the Storage layer
>>>>> would understand or care about hypervisor details. I expect to see
>>>>> the Storage services expose a set of operations that can be
>>>>> composed/driven by the Hypervisor implementations to allocate
>>>>> space/create structures per their needs. If we don't invert this
>>>>> dependency, we are going to end up with a massive n-to-n problem
>>>>> that will make the system increasingly difficult to maintain and
>>>>> enhance. Am I understanding correctly that the Xen-specific
>>>>> SolidFire code is located in the CitrixResourceBase class?
>>>>>
>>>>> Thanks,
>>>>> -John
>>>>>
>>>>> On Mon, Jun 3, 2013 at 3:12 PM, Mike Tutkowski
>>>>> <mike.tutkowski@solidfire.com> wrote:
>>>>>
>>>>>> To delve into this in a bit more detail:
>>>>>>
>>>>>> Prior to 4.2, and aside from one setup method for XenServer, the
>>>>>> admin had to first create a volume on the storage system, then go
>>>>>> into the hypervisor to set up a data structure to make use of the
>>>>>> volume (e.g. a storage repository on XenServer or a datastore on
>>>>>> ESX(i)). VMs and data disks then shared this storage system's
>>>>>> volume.
>>>>>>
>>>>>> With Edison's new storage framework, storage need no longer be so
>>>>>> static, and you can easily create a 1:1 relationship between a
>>>>>> storage system's volume and a VM's data disk (necessary for storage
>>>>>> Quality of Service).
>>>>>>
>>>>>> You can now write a plug-in that is called to dynamically create
>>>>>> and delete volumes as needed.
>>>>>>
>>>>>> The problem that the storage framework did not address is creating
>>>>>> and deleting the hypervisor-specific data structure when performing
>>>>>> an attach/detach.
>>>>>>
>>>>>> That being the case, I've been enhancing it to do so. I've got
>>>>>> XenServer worked out and submitted. I've got ESX(i) in my sandbox
>>>>>> and can submit this if we extend the 4.2 freeze date.
>>>>>>
>>>>>> Does that help a bit? :)
>>>>>>
>>>>>> On Mon, Jun 3, 2013 at 1:03 PM, Mike Tutkowski
>>>>>> <mike.tutkowski@solidfire.com> wrote:
>>>>>>
>>>>>>> Hi John,
>>>>>>>
>>>>>>> The storage plug-in - by itself - is hypervisor agnostic.
>>>>>>>
>>>>>>> The issue is with the volume-attach logic (in the agent code). The
>>>>>>> storage framework calls into the plug-in to have it create a
>>>>>>> volume as needed, but when the time comes to attach the volume to
>>>>>>> a hypervisor, the attach logic has to be smart enough to recognize
>>>>>>> that it's being invoked on zone-wide storage (where the volume has
>>>>>>> just been created) and create, say, a storage repository (for
>>>>>>> XenServer) or a datastore (for VMware) to make use of the volume
>>>>>>> that was just created.
>>>>>>>
>>>>>>> I've been spending most of my time recently making the attach
>>>>>>> logic work in the agent code.
>>>>>>>
>>>>>>> Does that clear it up?
>>>>>>>
>>>>>>> Thanks!
>>>>>>>
>>>>>>> On Mon, Jun 3, 2013 at 12:48 PM, John Burwell <jburwell@basho.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Mike,
>>>>>>>>
>>>>>>>> Can you explain why the storage driver is hypervisor specific?
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> -John
>>>>>>>>
>>>>>>>> On Jun 3, 2013, at 1:21 PM, Mike Tutkowski
>>>>>>>> <mike.tutkowski@solidfire.com> wrote:
>>>>>>>>
>>>>>>>>> Yes, ultimately I would like to support all hypervisors that
>>>>>>>>> CloudStack supports. I think I'm just out of time for 4.2 to get
>>>>>>>>> KVM in.
>>>>>>>>>
>>>>>>>>> Right now this plug-in supports XenServer. Depending on what we
>>>>>>>>> do with regard to the 4.2 feature freeze, I have it working for
>>>>>>>>> VMware in my sandbox as well.
>>>>>>>>>
>>>>>>>>> Also, just to be clear, this is all in regard to Disk Offerings.
>>>>>>>>> I plan to support Compute Offerings post-4.2.
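To make the attach-logic change discussed above concrete, here is a
minimal sketch of the branch Mike describes, assuming the agent code can
see the volume's pool type at attach time. All names are hypothetical
illustrations; the actual change lives in CloudStack's hypervisor resource
code (e.g. CitrixResourceBase for XenServer), not in this form:

    // Hypothetical sketch of an attach path that is aware of dynamic,
    // zone-wide storage. Not actual CloudStack code.
    public class AttachVolumeExample {

        enum StoragePoolType { NetworkFilesystem, PreSetup, Dynamic }

        static class VolumeInfo {
            StoragePoolType poolType;
            String targetIqn; // iSCSI qualified name of the newly created volume
        }

        void attachVolume(VolumeInfo vol) {
            if (vol.poolType == StoragePoolType.Dynamic) {
                // The storage plug-in has just created this volume on the
                // array, so no hypervisor-side structure exists yet: create
                // a storage repository (XenServer) or datastore (ESX(i))
                // for it first.
                createHypervisorStructure(vol.targetIqn);
            }
            // With the structure in place, the normal attach path proceeds.
            attachToVm(vol);
        }

        void createHypervisorStructure(String iqn) { /* hypervisor-specific */ }

        void attachToVm(VolumeInfo vol) { /* common attach path */ }
    }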
>>>>>>>>>
>>>>>>>>> On Mon, Jun 3, 2013 at 11:14 AM, Kelcey Jamison Damage
>>>>>>>>> <kelcey@bbits.ca> wrote:
>>>>>>>>>
>>>>>>>>>> Is there any plan on supporting KVM in the patch cycle
>>>>>>>>>> post-4.2?
>>>>>>>>>>
>>>>>>>>>> ----- Original Message -----
>>>>>>>>>> From: "Mike Tutkowski" <mike.tutkowski@solidfire.com>
>>>>>>>>>> To: dev@cloudstack.apache.org
>>>>>>>>>> Sent: Monday, June 3, 2013 10:12:32 AM
>>>>>>>>>> Subject: Re: [MERGE] disk_io_throttling to MASTER
>>>>>>>>>>
>>>>>>>>>> I agree on merging Wei's feature first, then mine.
>>>>>>>>>>
>>>>>>>>>> If his feature is for KVM only, then it is a non-issue, as I
>>>>>>>>>> don't support KVM in 4.2.
>>>>>>>>>>
>>>>>>>>>> On Mon, Jun 3, 2013 at 8:55 AM, Wei ZHOU wrote:
>>>>>>>>>>
>>>>>>>>>>> John,
>>>>>>>>>>>
>>>>>>>>>>> For the billing: as no one works on billing now, users need to
>>>>>>>>>>> calculate the billing by themselves. They can get the
>>>>>>>>>>> service_offering and disk_offering of VMs and volumes for the
>>>>>>>>>>> calculation. Of course, it would be better to tell the user
>>>>>>>>>>> the exact limitation value of an individual volume, and the
>>>>>>>>>>> network rate limitation for NICs as well. I can work on it
>>>>>>>>>>> later. Do you think it is a part of I/O throttling?
>>>>>>>>>>>
>>>>>>>>>>> Sorry, I misunderstood the second question.
>>>>>>>>>>>
>>>>>>>>>>> I agree with what you said about the two features.
>>>>>>>>>>>
>>>>>>>>>>> -Wei
>>>>>>>>>>>
>>>>>>>>>>> 2013/6/3 John Burwell <jburwell@basho.com>
>>>>>>>>>>>
>>>>>>>>>>>> Wei,
>>>>>>>>>>>>
>>>>>>>>>>>> On Jun 3, 2013, at 2:13 AM, Wei ZHOU wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Hi John, Mike,
>>>>>>>>>>>>>
>>>>>>>>>>>>> I hope Mike's answer helps you. I am trying to add more.
>>>>>>>>>>>>>
>>>>>>>>>>>>> (1) I think billing should depend on I/O statistics rather
>>>>>>>>>>>>> than the IOPS limitation. Please review disk_io_stat if you
>>>>>>>>>>>>> have time. disk_io_stat can get the I/O statistics,
>>>>>>>>>>>>> including bytes/IOPS read/write, for an individual virtual
>>>>>>>>>>>>> machine.
>>>>>>>>>>>>
>>>>>>>>>>>> Going by the AWS model, customers are billed more for volumes
>>>>>>>>>>>> with provisioned IOPS, as well as for those operations
>>>>>>>>>>>> (http://aws.amazon.com/ebs/). I would imagine our users would
>>>>>>>>>>>> like the option to employ similar cost models. Could an
>>>>>>>>>>>> operator implement such a billing model with the current
>>>>>>>>>>>> patch?
>>>>>>>>>>>>
>>>>>>>>>>>>> (2) Do you mean an IOPS change at runtime? KVM supports
>>>>>>>>>>>>> setting IOPS/BPS limitations for a running virtual machine
>>>>>>>>>>>>> through the command line. However, CloudStack does not
>>>>>>>>>>>>> support changing the parameters of a created offering
>>>>>>>>>>>>> (compute offering or disk offering).
>>>>>>>>>>>>
>>>>>>>>>>>> I meant at the Java interface level. I apologize for being
>>>>>>>>>>>> unclear. Can we generalize the allocation algorithms with a
>>>>>>>>>>>> set of interfaces that describe the service guarantees
>>>>>>>>>>>> provided by a resource?
>>>>>>>>>>>>
>>>>>>>>>>>>> (3) It is a good question.
>>>>>>>>>>>>> Maybe it is better to commit Mike's patch after
>>>>>>>>>>>>> disk_io_throttling, as Mike needs to consider the limitation
>>>>>>>>>>>>> per hypervisor type, I think.
>>>>>>>>>>>>
>>>>>>>>>>>> I will expand on my thoughts in a later response to Mike
>>>>>>>>>>>> regarding the touch points between these two features. I
>>>>>>>>>>>> think that disk_io_throttling will need to be merged before
>>>>>>>>>>>> SolidFire, but I think we need closer coordination between
>>>>>>>>>>>> the branches (possibly have solidfire track
>>>>>>>>>>>> disk_io_throttling) to coordinate on this issue.
>>>>>>>>>>>>
>>>>>>>>>>>>> - Wei
>>>>>>>>>>>>>
>>>>>>>>>>>>> 2013/6/3 John Burwell <jburwell@basho.com>
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Mike,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> The things I want to understand are the following:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> 1) Is there value in capturing IOPS policies in a common
>>>>>>>>>>>>>> data model (e.g. for billing/usage purposes, or for
>>>>>>>>>>>>>> expressing offerings)?
>>>>>>>>>>>>>> 2) Should there be a common interface model for reasoning
>>>>>>>>>>>>>> about IOPS provisioning at runtime?
>>>>>>>>>>>>>> 3) How are conflicting provisioned-IOPS configurations
>>>>>>>>>>>>>> between a hypervisor and a storage device reconciled? In
>>>>>>>>>>>>>> particular, a scenario where a user is led to believe (and
>>>>>>>>>>>>>> billed for) more IOPS configured for a VM than the storage
>>>>>>>>>>>>>> device has been configured to deliver. Another scenario
>>>>>>>>>>>>>> could be a consistent configuration between a VM and a
>>>>>>>>>>>>>> storage device at creation time, where a later modification
>>>>>>>>>>>>>> to the storage device introduces a logical inconsistency.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>> -John
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Jun 2, 2013, at 8:38 PM, Mike Tutkowski
>>>>>>>>>>>>>> <mike.tutkowski@solidfire.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Hi John,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I believe Wei's feature deals with controlling the maximum
>>>>>>>>>>>>>> number of IOPS from the hypervisor side.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> My feature is focused on controlling IOPS from the storage
>>>>>>>>>>>>>> system side.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I hope that helps. :)
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Sun, Jun 2, 2013 at 6:35 PM, John Burwell
>>>>>>>>>>>>>> <jburwell@basho.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Wei,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> My opinion is that no features should be merged until all
>>>>>>>>>>>>>> functional issues have been resolved and the work is ready
>>>>>>>>>>>>>> to turn over to test. Until the total-ops vs. discrete
>>>>>>>>>>>>>> read/write-ops issue is addressed and re-reviewed by Wido,
>>>>>>>>>>>>>> I don't think this criterion has been satisfied.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Also, how does this work intersect/complement the SolidFire
>>>>>>>>>>>>>> patch (https://reviews.apache.org/r/11479/)? As I
>>>>>>>>>>>>>> understand it, that work also involves provisioned IOPS.
>>>>>>>>>>>>>> I would like to ensure we don't have a scenario where
>>>>>>>>>>>>>> provisioned IOPS in KVM and SolidFire are unnecessarily
>>>>>>>>>>>>>> incompatible.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>> -John
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Jun 1, 2013, at 6:47 AM, Wei ZHOU wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Wido,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Sure. I will change it next week.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> -Wei
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> 2013/6/1 Wido den Hollander <wido@widodh.nl>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Hi Wei,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On 06/01/2013 08:24 AM, Wei ZHOU wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Wido,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Exactly. I have pushed the features into master. If anyone
>>>>>>>>>>>>>> objects to them for technical reasons before Monday, I will
>>>>>>>>>>>>>> revert them.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> For the sake of clarity I just want to mention again that
>>>>>>>>>>>>>> we should change the total IOps to R/W IOps asap, so that
>>>>>>>>>>>>>> we never release a version with only total IOps.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> You laid the groundwork for the I/O throttling and that's
>>>>>>>>>>>>>> great! We should, however, prevent creating legacy from
>>>>>>>>>>>>>> day #1.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Wido
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> -Wei
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> 2013/5/31 Wido den Hollander <wido@widodh.nl>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On 05/31/2013 03:59 PM, John Burwell wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Wido,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> +1 -- this enhancement must discretely support read and
>>>>>>>>>>>>>> write IOPS. I don't see how it could be fixed later,
>>>>>>>>>>>>>> because I don't see how we could correctly split total
>>>>>>>>>>>>>> IOPS into read and write. Therefore, we would be stuck
>>>>>>>>>>>>>> with a total unless/until we decided to break backwards
>>>>>>>>>>>>>> compatibility.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> What Wei meant was merging it into master now so that it
>>>>>>>>>>>>>> will go into the 4.2 branch, and adding Read/Write IOps
>>>>>>>>>>>>>> before the 4.2 release so that 4.2 will be released with
>>>>>>>>>>>>>> Read and Write instead of Total IOps.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> This is to make the May 31st feature freeze date. But if
>>>>>>>>>>>>>> the window moves (see other threads) then it won't be
>>>>>>>>>>>>>> necessary to do that.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Wido
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I also completely agree that there is no association
>>>>>>>>>>>>>> between network and disk I/O.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>> -John
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On May 31, 2013, at 9:51 AM, Wido den Hollander
>>>>>>>>>>>>>> <wido@widodh.nl> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Hi Wei,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On 05/31/2013 03:13 PM, Wei ZHOU wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Hi Wido,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Thanks. Good question.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I thought about it at the beginning. In the end I decided
>>>>>>>>>>>>>> to ignore the difference between read and write, mainly
>>>>>>>>>>>>>> because the network throttling does not care about the
>>>>>>>>>>>>>> difference between sent and received bytes either.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> That reasoning seems odd. Networking and disk I/O are
>>>>>>>>>>>>>> completely different. Disk I/O is much more expensive in
>>>>>>>>>>>>>> most situations than network bandwidth.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Implementing it will be some copy-paste work. It could be
>>>>>>>>>>>>>> implemented in a few days. Given the deadline of feature
>>>>>>>>>>>>>> freeze, I will implement it after that, if needed.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I think it's a feature we can't miss. But if it goes into
>>>>>>>>>>>>>> the 4.2 window we have to make sure we don't release with
>>>>>>>>>>>>>> only total IOps and fix it in 4.3; that would confuse
>>>>>>>>>>>>>> users.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Wido
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> -Wei
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> 2013/5/31 Wido den Hollander <wido@widodh.nl>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Hi Wei,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On 05/30/2013 06:03 PM, Wei ZHOU wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I would like to merge the disk_io_throttling branch into
>>>>>>>>>>>>>> master. If nobody objects, I will merge into master in 48
>>>>>>>>>>>>>> hours.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> The purpose is:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Virtual machines are running on the same storage device
>>>>>>>>>>>>>> (local storage or
Because of the rate limitation of device > >>>> (such as > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> iops), if > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> one VM has large disk operation, it may affect the disk > >>>>>>> performance > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> of > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> other VMs running on the same storage device. > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> It is neccesary to set the maximum rate and limit the disk > >>>> I/O > >>>>> of > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> VMs. > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> Looking at the code I see you make no difference between > >>>> Read > >>>>> and > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> Write > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> IOps. > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> Qemu and libvirt support setting both a different rate for > >>>> Read > >>>>>>> and > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> Write > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> IOps which could benefit a lot of users. > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> It's also strange, in the polling side you collect both th= e > >>>> Read > >>>>>>>>> and > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> Write > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> IOps, but on the throttling side you only go for a global > >>>> value. > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> Write IOps are usually much more expensive then Read IOps, > >>>> so it > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> seems > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> like a valid use-case where that an admin would set a lowe= r > >>>>> value > >>>>>>>>> for > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> write > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> IOps vs Read IOps. > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> Since this only supports KVM at this point I think it woul= d > >>>> be > >>>>> of > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> great > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> value to at least have the mechanism in place to support > >>>> both, > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> implementing > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> this later would be a lot of work. > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> If a hypervisor doesn't support setting different values f= or > >>>>> read > >>>>>>>>> and > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> write you can always sum both up and set that as the total > >>>>> limit. > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> Can you explain why you implemented it this way? 
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> The feature includes:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> (1) set the maximum rate of VMs (in disk_offering, and in
>>>>>>>>>>>>>> the global configuration)
>>>>>>>>>>>>>> (2) change the maximum rate of VMs
>>>>>>>>>>>>>> (3) limit the disk rate (total bps and iops)
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> JIRA ticket:
>>>>>>>>>>>>>> https://issues.apache.org/jira/browse/CLOUDSTACK-1192
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> FS (I will update later):
>>>>>>>>>>>>>> https://cwiki.apache.org/confluence/display/CLOUDSTACK/VM+Disk+IO+Throttling
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Merge checklist:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> * Did you check the branch's RAT execution success?
>>>>>>>>>>>>>> Yes
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> * Are there new dependencies introduced?
>>>>>>>>>>>>>> No
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> * What automated testing (unit and integration) is
>>>>>>>>>>>>>> included in the new feature?
>>>>>>>>>>>>>> Unit tests are added.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> * What testing has been done to check for potential
>>>>>>>>>>>>>> regressions?
>>>>>>>>>>>>>> (1) set the bytes rate and IOPS rate in the CloudStack UI.
>>>>>>>>>>>>>> (2) VM operations, including
>>>>>>>>>>>>>> deploy, stop, start, reboot, destroy, expunge,
>>>>>>>>>>>>>> migrate, restore
>>>>>>>>>>>>>> (3) Volume operations, including
>>>>>>>>>>>>>> attach, detach
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> To review the code, you can try:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> git diff c30057635d04a2396f84c588127d7ebe42e503a7
>>>>>>>>>>>>>> f2e5591b710d04cc86815044f5823e73a4a58944
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Best regards,
>>>>>>>>>>>>>> Wei
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> [1] https://cwiki.apache.org/confluence/display/CLOUDSTACK/VM+Disk+IO+Throttling
>>>>>>>>>>>>>> [2] refs/heads/disk_io_throttling
>>>>>>>>>>>>>> [3] https://issues.apache.org/jira/browse/CLOUDSTACK-1301
>>>>>>>>>>>>>> (CLOUDSTACK-1301 - VM Disk I/O Throttling) and
>>>>>>>>>>>>>> https://issues.apache.org/jira/browse/CLOUDSTACK-2071
--
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the cloud
<http://solidfire.com/solution/overview/?video=play>
*™*