Subject: Re: [MERGE] disk_io_throttling to MASTER
From: John Burwell <jburwell@basho.com>
Date: Tue, 4 Jun 2013 14:42:52 -0400
To: dev@cloudstack.apache.org

Mike,

It feels like we are combining two distinct concepts -- storage device management and storage protocols. In both cases, we are communicating with iSCSI, but one allows the system to create/delete volumes on the device (Dynamic) while the other requires the volume to be managed outside of the CloudStack context.
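
To make the distinction concrete, here is a minimal sketch of how protocol support and management capability could be kept separate. The names below are hypothetical illustrations only, not existing CloudStack types:

    // Hypothetical sketch: the protocol describes how bits are moved; the
    // management capability describes whether CloudStack may create/delete
    // allocations on the device.
    enum StorageProtocol { ISCSI, NFS, RBD, FIBRE_CHANNEL }

    interface StorageDevice {
        StorageProtocol protocol();       // how we talk to the device
        boolean managedByCloudStack();    // an operator may leave a capable device unmanaged
    }

    interface ManagedDeviceDriver {
        // Only meaningful when managedByCloudStack() is true: the driver can allocate
        // and release the physical allocation (e.g. a LUN) that backs a CloudStack volume.
        String createAllocation(long sizeInBytes);
        void deleteAllocation(String allocationId);
    }
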
To ensure that we are in sync on terminology: volume, in these definitions, refers to the physical allocation on the device, correct? Minimally, we must be able to communicate with a storage device to move bits from one place to another, read bits, delete bits, etc. Optionally, a storage device may be able to be managed by CloudStack. Therefore, we can have an unmanaged iSCSI device onto which we store a Xen SR, and we can have a managed SolidFire iSCSI device on which CloudStack is capable of allocating LUNs and storing volumes. Finally, while CloudStack may be able to manage a device, an operator may choose to leave it unmanaged by CloudStack (e.g. the device is shared by many services, and the operator has chosen to dedicate only a portion of it to CloudStack). Does my reasoning make sense?

Assuming my thoughts above are reasonable, it seems appropriate to strip the management concerns from StoragePoolType, add the notion of a storage device with an attached driver that indicates whether or not it is managed by CloudStack, and establish a separate abstraction representing a physical allocation on a device that is associated with a volume. With these notions in place, hypervisor drivers can declare which protocols they support and, when they encounter a device managed by CloudStack, utilize the management operations exposed by the driver to automate allocation. If these thoughts/concepts make sense, then we can sit down and drill down to a more detailed design.

Thanks,
-John

On Jun 3, 2013, at 5:25 PM, Mike Tutkowski <mike.tutkowski@solidfire.com> wrote:

> Here is the difference between the current iSCSI type and the Dynamic type:
>
> iSCSI type: The admin has to go in and create a Primary Storage based on the iSCSI type. At this point in time, the iSCSI volume must exist on the storage system (it is pre-allocated). Future CloudStack volumes are created as VDIs on the SR that was created behind the scenes.
>
> Dynamic type: The admin has to go in and create Primary Storage based on a plug-in that will create and delete volumes on its storage system dynamically (as is enabled via the storage framework). When a user wants to attach a CloudStack volume that was created, the framework tells the plug-in to create a new volume. After this is done, the attach logic for the hypervisor in question is called. No hypervisor data structure exists at this point because the volume was just created. The hypervisor data structure must be created.
>
> On Mon, Jun 3, 2013 at 3:21 PM, Mike Tutkowski <mike.tutkowski@solidfire.com> wrote:
>
>> These are new terms, so I should probably have defined them up front for you. :)
>>
>> Static storage: Storage that is pre-allocated (ex. an admin creates a volume on a SAN), then a hypervisor data structure is created to consume the storage (ex. a XenServer SR), then that hypervisor data structure is consumed by CloudStack. Disks (VDIs) are later placed on this hypervisor data structure as needed. In these cases, the attach logic assumes the hypervisor data structure is already in place and simply attaches the VDI on the hypervisor data structure to the VM in question.
>>
>> Dynamic storage: Storage that is not pre-allocated. Instead of pre-existent storage, this could be a SAN (not a volume on a SAN, but the SAN itself).
>> The hypervisor data structure must be created when an attach volume is performed because these types of volumes have not been pre-hooked up to such a hypervisor data structure by an admin. Once the attach logic creates, say, an SR on XenServer for this volume, it attaches the one and only VDI within the SR to the VM in question.
>>
>> On Mon, Jun 3, 2013 at 3:13 PM, John Burwell <jburwell@basho.com> wrote:
>>
>>> Mike,
>>>
>>> The current implementation of the Dynamic type attach behavior works in terms of Xen iSCSI, which is why I ask about the difference. Another way to ask the question -- what is the definition of a Dynamic storage pool type?
>>>
>>> Thanks,
>>> -John
>>>
>>> On Jun 3, 2013, at 5:10 PM, Mike Tutkowski <mike.tutkowski@solidfire.com> wrote:
>>>
>>>> As far as I know, the iSCSI type is uniquely used by XenServer when you want to set up Primary Storage that is directly based on an iSCSI target. This allows you to skip the step of going to the hypervisor and creating a storage repository based on that iSCSI target, as CloudStack does that part for you. I think this is only supported for XenServer. For all other hypervisors, you must first go to the hypervisor and perform this step manually.
>>>>
>>>> I don't really know what RBD is.
>>>>
>>>> On Mon, Jun 3, 2013 at 2:13 PM, John Burwell <jburwell@basho.com> wrote:
>>>>
>>>>> Mike,
>>>>>
>>>>> Reading through the code, what is the difference between the ISCSI and Dynamic types? Why isn't RBD considered Dynamic?
>>>>>
>>>>> Thanks,
>>>>> -John
>>>>>
>>>>> On Jun 3, 2013, at 3:46 PM, Mike Tutkowski <mike.tutkowski@solidfire.com> wrote:
>>>>>
>>>>>> This new type of storage is defined in the Storage.StoragePoolType class (called Dynamic):
>>>>>>
>>>>>>     public static enum StoragePoolType {
>>>>>>         Filesystem(false),        // local directory
>>>>>>         NetworkFilesystem(true),  // NFS or CIFS
>>>>>>         IscsiLUN(true),           // shared LUN, with a clusterfs overlay
>>>>>>         Iscsi(true),              // e.g., ZFS Comstar
>>>>>>         ISO(false),               // for iso image
>>>>>>         LVM(false),               // XenServer local LVM SR
>>>>>>         CLVM(true),
>>>>>>         RBD(true),
>>>>>>         SharedMountPoint(true),
>>>>>>         VMFS(true),               // VMware VMFS storage
>>>>>>         PreSetup(true),           // for XenServer, Storage Pool is set up by customers
>>>>>>         EXT(false),               // XenServer local EXT SR
>>>>>>         OCFS2(true),
>>>>>>         Dynamic(true);            // dynamic, zone-wide storage (ex. SolidFire)
>>>>>>
>>>>>>         boolean shared;
>>>>>>
>>>>>>         StoragePoolType(boolean shared) {
>>>>>>             this.shared = shared;
>>>>>>         }
>>>>>>
>>>>>>         public boolean isShared() {
>>>>>>             return shared;
>>>>>>         }
>>>>>>     }
>>>>>>
>>>>>> On Mon, Jun 3, 2013 at 1:41 PM, Mike Tutkowski <mike.tutkowski@solidfire.com> wrote:
>>>>>>
>>>>>>> For example, let's say another storage company wants to implement a plug-in to leverage its Quality of Service feature. It would be dynamic, zone-wide storage, as well. They would need only implement a storage plug-in, as I've made the necessary changes to the hypervisor-attach logic to support their plug-in.
>>>>>>>
>>>>>>> On Mon, Jun 3, 2013 at 1:39 PM, Mike Tutkowski <mike.tutkowski@solidfire.com> wrote:
>>>>>>>
>>>>>>>> Oh, sorry to imply the XenServer code is SolidFire specific. It is not.
>>>>>>>>
>>>>>>>> The XenServer attach logic is now aware of dynamic, zone-wide storage (and SolidFire is an implementation of this kind of storage). This kind of storage is new to 4.2 with Edison's storage framework changes.
>>>>>>>>
>>>>>>>> Edison created a new framework that supported the creation and deletion of volumes dynamically. However, when I visited with him in Portland back in April, we realized that it was not complete. We realized there was nothing CloudStack could do with these volumes unless the attach logic was changed to recognize this new type of storage and create the appropriate hypervisor data structure.
>>>>>>>>
>>>>>>>> On Mon, Jun 3, 2013 at 1:28 PM, John Burwell <jburwell@basho.com> wrote:
>>>>>>>>
>>>>>>>>> Mike,
>>>>>>>>>
>>>>>>>>> It is generally odd to me that any operation in the Storage layer would understand or care about hypervisor details. I expect to see the Storage services expose a set of operations that can be composed/driven by the Hypervisor implementations to allocate space/create structures per their needs. If we don't invert this dependency, we are going to end up with a massive n-to-n problem that will make the system increasingly difficult to maintain and enhance. Am I understanding that the Xen-specific SolidFire code is located in the CitrixResourceBase class?
>>>>>>>>>
>>>>>>>>> Thanks,
>>>>>>>>> -John
>>>>>>>>>
>>>>>>>>> On Mon, Jun 3, 2013 at 3:12 PM, Mike Tutkowski <mike.tutkowski@solidfire.com> wrote:
>>>>>>>>>
>>>>>>>>>> To delve into this in a bit more detail:
>>>>>>>>>>
>>>>>>>>>> Prior to 4.2, and aside from one setup method for XenServer, the admin had to first create a volume on the storage system, then go into the hypervisor to set up a data structure to make use of the volume (ex. a storage repository on XenServer or a datastore on ESX(i)). VMs and data disks then shared this storage system's volume.
>>>>>>>>>>
>>>>>>>>>> With Edison's new storage framework, storage need no longer be so static and you can easily create a 1:1 relationship between a storage system's volume and the VM's data disk (necessary for storage Quality of Service).
>>>>>>>>>>
>>>>>>>>>> You can now write a plug-in that is called to dynamically create and delete volumes as needed.
>>>>>>>>>>
>>>>>>>>>> The problem that the storage framework did not address is in creating and deleting the hypervisor-specific data structure when performing an attach/detach.
>>>>>>>>>>
>>>>>>>>>> That being the case, I've been enhancing it to do so. I've got XenServer worked out and submitted. I've got ESX(i) in my sandbox and can submit this if we extend the 4.2 freeze date.
>>>>>>>>>>
>>>>>>>>>> Does that help a bit? :)
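
A rough sketch of the attach-time decision described above, using the StoragePoolType enum quoted earlier; the helper methods are hypothetical placeholders, not the actual CitrixResourceBase code:

    // Illustrative control flow only. For "static" storage the SR/datastore already
    // exists; for dynamic, zone-wide storage it must be created as part of the attach.
    void attachVolume(StoragePoolType poolType, String volumeUuid, String vmUuid) {
        if (poolType == StoragePoolType.Dynamic) {
            // The storage plug-in has just created the backing volume (e.g. an iSCSI LUN),
            // so no hypervisor data structure exists for it yet.
            String srUuid = createSrForVolume(volumeUuid);   // hypothetical helper
            String vdiUuid = findOnlyVdiInSr(srUuid);        // 1:1 volume-to-VDI mapping
            attachVdiToVm(vdiUuid, vmUuid);                  // hypothetical helper
        } else {
            // Pre-existing SR/datastore: the VDI just needs to be attached.
            attachVdiToVm(findVdiForVolume(volumeUuid), vmUuid);
        }
    }
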
>>>>>>>>>>
>>>>>>>>>> On Mon, Jun 3, 2013 at 1:03 PM, Mike Tutkowski <mike.tutkowski@solidfire.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> Hi John,
>>>>>>>>>>>
>>>>>>>>>>> The storage plug-in - by itself - is hypervisor agnostic.
>>>>>>>>>>>
>>>>>>>>>>> The issue is with the volume-attach logic (in the agent code). The storage framework calls into the plug-in to have it create a volume as needed, but when the time comes to attach the volume to a hypervisor, the attach logic has to be smart enough to recognize it's being invoked on zone-wide storage (where the volume has just been created) and create, say, a storage repository (for XenServer) or a datastore (for VMware) to make use of the volume that was just created.
>>>>>>>>>>>
>>>>>>>>>>> I've been spending most of my time recently making the attach logic work in the agent code.
>>>>>>>>>>>
>>>>>>>>>>> Does that clear it up?
>>>>>>>>>>>
>>>>>>>>>>> Thanks!
>>>>>>>>>>>
>>>>>>>>>>> On Mon, Jun 3, 2013 at 12:48 PM, John Burwell <jburwell@basho.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Mike,
>>>>>>>>>>>>
>>>>>>>>>>>> Can you explain why the storage driver is hypervisor specific?
>>>>>>>>>>>>
>>>>>>>>>>>> Thanks,
>>>>>>>>>>>> -John
>>>>>>>>>>>>
>>>>>>>>>>>> On Jun 3, 2013, at 1:21 PM, Mike Tutkowski <mike.tutkowski@solidfire.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Yes, ultimately I would like to support all hypervisors that CloudStack supports. I think I'm just out of time for 4.2 to get KVM in.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Right now this plug-in supports XenServer. Depending on what we do with regards to 4.2 feature freeze, I have it working for VMware in my sandbox, as well.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Also, just to be clear, this is all in regards to Disk Offerings. I plan to support Compute Offerings post 4.2.
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Mon, Jun 3, 2013 at 11:14 AM, Kelcey Jamison Damage <kelcey@bbits.ca> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Is there any plan on supporting KVM in the patch cycle post 4.2?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> ----- Original Message -----
>>>>>>>>>>>>>> From: "Mike Tutkowski"
>>>>>>>>>>>>>> To: dev@cloudstack.apache.org
>>>>>>>>>>>>>> Sent: Monday, June 3, 2013 10:12:32 AM
>>>>>>>>>>>>>> Subject: Re: [MERGE] disk_io_throttling to MASTER
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I agree on merging Wei's feature first, then mine.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> If his feature is for KVM only, then it is a non-issue, as I don't support KVM in 4.2.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Mon, Jun 3, 2013 at 8:55 AM, Wei ZHOU <ustcweizhou@gmail.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> John,
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> For the billing, as no one works on billing now, users need to calculate the billing by themselves. They can get the service_offering and disk_offering of VMs and volumes for the calculation.
>>>>>>>>>>>>>>> Of course it is better to tell the user the exact limitation value of an individual volume, and the network rate limitation for NICs as well. I can work on it later. Do you think it is a part of I/O throttling?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Sorry, I misunderstood the second question.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Agree with what you said about the two features.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> -Wei
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> 2013/6/3 John Burwell <jburwell@basho.com>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Wei,
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Jun 3, 2013, at 2:13 AM, Wei ZHOU <ustcweizhou@gmail.com> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Hi John, Mike
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> I hope Mike's answer helps you. I am trying to add more.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> (1) I think billing should depend on I/O statistics rather than the IOPS limitation. Please review disk_io_stat if you have time. disk_io_stat can get the I/O statistics, including bytes/iops read/write, for an individual virtual machine.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Going by the AWS model, customers are billed more for volumes with provisioned IOPS, as well as for those operations (http://aws.amazon.com/ebs/). I would imagine our users would like the option to employ similar cost models. Could an operator implement such a billing model with the current patch?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> (2) Do you mean an IOPS change at runtime? KVM supports setting IOPS/BPS limitations for a running virtual machine through the command line. However, CloudStack does not support changing the parameters of a created offering (compute offering or disk offering).
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I meant at the Java interface level. I apologize for being unclear. Can we generalize the allocation algorithms with a set of interfaces that describe the service guarantees provided by a resource?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> (3) It is a good question. Maybe it is better to commit Mike's patch after disk_io_throttling, as Mike needs to consider the limitation in hypervisor type, I think.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I will expand on my thoughts in a later response to Mike regarding the touch points between these two features. I think that disk_io_throttling will need to be merged before SolidFire, but I think we need closer coordination between the branches (possibly have solidfire track disk_io_throttling) to coordinate on this issue.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> - Wei
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 2013/6/3 John Burwell <jburwell@basho.com>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Mike,
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> The things I want to understand are the following:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> 1) Is there value in capturing IOPS policies in a common data model (e.g. for billing/usage purposes, expressing offerings)?
>>>>>>>>>>>>>>>>>> 2) Should there be a common interface model for reasoning about IOPS provisioning at runtime?
>>>>>>>>>>>>>>>>>> 3) How are conflicting provisioned-IOPS configurations between a hypervisor and a storage device reconciled? In particular, a scenario where a user is led to believe (and is billed) that more IOPS are configured for a VM than the storage device has been configured to deliver. Another scenario could be a consistent configuration between a VM and a storage device at creation time, but a later modification to the storage device introduces a logical inconsistency.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>>>>>> -John
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Jun 2, 2013, at 8:38 PM, Mike Tutkowski <mike.tutkowski@solidfire.com> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Hi John,
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> I believe Wei's feature deals with controlling the max number of IOPS from the hypervisor side.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> My feature is focused on controlling IOPS from the storage system side.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> I hope that helps. :)
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Sun, Jun 2, 2013 at 6:35 PM, John Burwell <jburwell@basho.com> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Wei,
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> My opinion is that no features should be merged until all functional issues have been resolved and the work is ready to turn over to test. Until the total ops vs. discrete read/write ops issue is addressed and re-reviewed by Wido, I don't think this criterion has been satisfied.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Also, how does this work intersect/complement the SolidFire patch (https://reviews.apache.org/r/11479/)? As I understand it, that work also involves provisioned IOPS. I would like to ensure we don't have a scenario where provisioned IOPS in KVM and SolidFire are unnecessarily incompatible.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>>>>>>> -John
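
John's third question boils down to a consistency check between the hypervisor-side cap and the storage-side provisioning. A minimal, hypothetical illustration (not taken from either patch):

    // Illustrative only: if the hypervisor-side cap promises more IOPS than the
    // storage side is provisioned to deliver, the extra IOPS can never be realized,
    // and billing based on the hypervisor-side figure would be misleading.
    static void warnOnInconsistentIopsCaps(long hypervisorMaxIops, long storageProvisionedIops) {
        if (hypervisorMaxIops > storageProvisionedIops) {
            System.out.printf(
                "Warning: VM is capped at %d IOPS by the hypervisor, but its volume is "
                + "provisioned for only %d IOPS on the storage device.%n",
                hypervisorMaxIops, storageProvisionedIops);
        }
    }
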
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Jun 1, 2013, at 6:47 AM, Wei ZHOU <ustcweizhou@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Wido,
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Sure. I will change it next week.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> -Wei
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> 2013/6/1 Wido den Hollander <wido@widodh.nl>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Hi Wei,
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On 06/01/2013 08:24 AM, Wei ZHOU wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Wido,
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Exactly. I have pushed the features into master. If anyone objects to them for technical reasons before Monday, I will revert them.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> For the sake of clarity I just want to mention again that we should change the total IOps to R/W IOps asap so that we never release a version with only total IOps.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> You laid the groundwork for the I/O throttling and that's great! We should however prevent creating legacy from day #1.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Wido
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> -Wei
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> 2013/5/31 Wido den Hollander <wido@widodh.nl>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On 05/31/2013 03:59 PM, John Burwell wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Wido,
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> +1 -- this enhancement must discretely support read and write IOPS. I don't see how it could be fixed later, because I don't see how we could correctly split total IOPS into read and write. Therefore, we would be stuck with a total unless/until we decided to break backwards compatibility.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> What Wei meant was merging it into master now so that it will go into the 4.2 branch, and adding Read/Write IOps before the 4.2 release, so that 4.2 will be released with Read and Write instead of Total IOps.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> This is to make the May 31st feature freeze date.
>>>>>>>>>>>>>>>>>>> But if the window moves (see other threads), then it won't be necessary to do that.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Wido
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> I also completely agree that there is no association between network and disk I/O.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>>>>>>> -John
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On May 31, 2013, at 9:51 AM, Wido den Hollander <wido@widodh.nl> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Hi Wei,
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On 05/31/2013 03:13 PM, Wei ZHOU wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Hi Wido,
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Thanks. Good question.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> I thought about it at the beginning. In the end I decided to ignore the difference between read and write, mainly because the network throttling does not distinguish between sent and received bytes either.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> That reasoning seems odd. Networking and disk I/O are completely different. Disk I/O is much more expensive in most situations than network bandwidth.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Implementing it will be some copy-paste work. It could be implemented in a few days. Because of the feature freeze deadline, I will implement it after that, if needed.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> I think it's a feature we can't miss. But if it goes into the 4.2 window we have to make sure we don't release with only total IOps and fix it in 4.3; that will confuse users.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Wido
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> -Wei
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> 2013/5/31 Wido den Hollander <wido@widodh.nl>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Hi Wei,
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On 05/30/2013 06:03 PM, Wei ZHOU wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> I would like to merge the disk_io_throttling branch into master. If nobody objects, I will merge it into master in 48 hours.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> The purpose is: virtual machines run on the same storage device (local storage or shared storage). Because of the rate limitation of the device (such as IOPS), if one VM performs heavy disk operations, it may affect the disk performance of other VMs running on the same storage device. It is necessary to set a maximum rate and limit the disk I/O of VMs.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Looking at the code I see you make no difference between Read and Write IOps. Qemu and libvirt support setting a different rate for Read and Write IOps, which could benefit a lot of users.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> It's also strange that on the polling side you collect both the Read and Write IOps, but on the throttling side you only go for a global value.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Write IOps are usually much more expensive than Read IOps, so it seems like a valid use-case that an admin would set a lower value for Write IOps than for Read IOps.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Since this only supports KVM at this point, I think it would be of great value to at least have the mechanism in place to support both; implementing this later would be a lot of work.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> If a hypervisor doesn't support setting different values for read and write, you can always sum both up and set that as the total limit.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Can you explain why you implemented it this way?
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Wido
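
For context, the read/write split Wido is asking for maps onto what libvirt exposes per disk through the <iotune> element (read_iops_sec, write_iops_sec, and their bytes_sec counterparts), and libvirt does not allow a total_* value to be combined with the corresponding read_*/write_* values. A minimal sketch of emitting such a fragment, using plain string building for illustration rather than CloudStack's actual domain XML code:

    // Illustrative sketch: emit a libvirt <iotune> fragment for a disk definition.
    // Either the split read/write caps or a single combined total is emitted,
    // since libvirt rejects mixing total_iops_sec with read_/write_iops_sec.
    static String buildIoTuneXml(Long readIopsSec, Long writeIopsSec, Long totalIopsSec) {
        StringBuilder xml = new StringBuilder("<iotune>\n");
        if (readIopsSec != null || writeIopsSec != null) {
            if (readIopsSec != null)
                xml.append("  <read_iops_sec>").append(readIopsSec).append("</read_iops_sec>\n");
            if (writeIopsSec != null)
                xml.append("  <write_iops_sec>").append(writeIopsSec).append("</write_iops_sec>\n");
        } else if (totalIopsSec != null) {
            // Fallback when only a combined cap is available (e.g. summing read and write).
            xml.append("  <total_iops_sec>").append(totalIopsSec).append("</total_iops_sec>\n");
        }
        return xml.append("</iotune>").toString();
    }
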
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> The feature includes:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> (1) set the maximum rate of VMs (in disk_offering, and global configuration)
>>>>>>>>>>>>>>>>>>> (2) change the maximum rate of VMs
>>>>>>>>>>>>>>>>>>> (3) limit the disk rate (total bps and iops)
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> JIRA ticket: https://issues.apache.org/jira/browse/CLOUDSTACK-1192
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> FS (I will update later): https://cwiki.apache.org/confluence/display/CLOUDSTACK/VM+Disk+IO+Throttling
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Merge check list:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> * Did you check the branch's RAT execution success?
>>>>>>>>>>>>>>>>>>> Yes
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> * Are there new dependencies introduced?
>>>>>>>>>>>>>>>>>>> No
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> * What automated testing (unit and integration) is included in the new feature?
>>>>>>>>>>>>>>>>>>> Unit tests are added.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> * What testing has been done to check for potential regressions?
>>>>>>>>>>>>>>>>>>> (1) set the bytes rate and IOPS rate on the CloudStack UI.
>>>>>>>>>>>>>>>>>>> (2) VM operations, including deploy, stop, start, reboot, destroy, expunge, migrate, restore
>>>>>>>>>>>>>>>>>>> (3) Volume operations, including attach, detach
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> To review the code, you can try
>>>>>>>>>>>>>>>>>>> git diff c30057635d04a2396f84c588127d7ebe42e503a7 f2e5591b710d04cc86815044f5823e73a4a58944
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Best regards,
>>>>>>>>>>>>>>>>>>> Wei
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> [1] https://cwiki.apache.org/confluence/display/CLOUDSTACK/VM+Disk+IO+Throttling
>>>>>>>>>>>>>>>>>>> [2] refs/heads/disk_io_throttling
>>>>>>>>>>>>>>>>>>> [3] https://issues.apache.org/jira/browse/CLOUDSTACK-1301 (CLOUDSTACK-1301 - VM Disk I/O Throttling)
>>>>>>>>>>>>>>>>>>>     https://issues.apache.org/jira/browse/CLOUDSTACK-2071
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>> *Mike Tutkowski*
>>>>>>>>>>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>>>>>>>> o: 303.746.7302
>>>>>>>>>>>>>>>>>> Advancing the way the world uses the cloud <http://solidfire.com/solution/overview/?video=play>
>>>>>>>>>>>>>>>>>> *™*
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the cloud <http://solidfire.com/solution/overview/?video=play>
> *™*