From: Mike Tutkowski <mike.tutkowski@solidfire.com>
To: "dev@cloudstack.apache.org" <dev@cloudstack.apache.org>
Date: Tue, 4 Jun 2013 12:54:58 -0600
Subject: Re: [MERGE] disk_io_throttling to MASTER

"To ensure that we are in sync on terminology, volume, in these
definitions, refers to the physical allocation on the device, correct?"

Yes...when I say 'volume', I try to mean 'SAN volume'.

To refer to the 'volume' the end user can make in CloudStack, I try to
use 'CloudStack volume'.

On Tue, Jun 4, 2013 at 12:50 PM, Mike Tutkowski <
mike.tutkowski@solidfire.com> wrote:

> Hi John,
>
> What you say here may very well make sense, but I'm having a hard time
> envisioning it.
>
> Perhaps we should draw Edison in on this conversation, as he was the
> initial person to suggest the approach I took.
>
> What do you think?
>
> Thanks!
>
>
> On Tue, Jun 4, 2013 at 12:42 PM, John Burwell <jburwell@basho.com> wrote:
>
>> Mike,
>>
>> It feels like we are combining two distinct concepts -- storage device
>> management and storage protocols. In both cases, we are communicating
>> with iSCSI, but one allows the system to create/delete volumes on the
>> device (Dynamic) while the other requires the volume to be managed
>> outside of the CloudStack context. To ensure that we are in sync on
>> terminology, volume, in these definitions, refers to the physical
>> allocation on the device, correct? Minimally, we must be able to
>> communicate with a storage device to move bits from one place to
>> another, read bits, delete bits, etc. Optionally, a storage device may
>> be able to be managed by CloudStack. Therefore, we can have an
>> unmanaged iSCSI device onto which we store a Xen SR, and we can have a
>> managed SolidFire iSCSI device on which CloudStack is capable of
>> allocating LUNs and storing volumes. Finally, while CloudStack may be
>> able to manage a device, an operator may choose to leave it unmanaged
>> by CloudStack (e.g. the device is shared by many services, and the
>> operator has chosen to dedicate only a portion of it to CloudStack).
>> Does my reasoning make sense?
>>
>> Assuming my thoughts above are reasonable, it seems appropriate to
>> strip the management concerns from StoragePoolType, add the notion of
>> a storage device with an attached driver that indicates whether or not
>> it is managed by CloudStack, and establish an abstraction representing
>> a physical allocation on a device, separate from but associated with a
>> volume. With these notions in place, hypervisor drivers can declare
>> which protocols they support and, when they encounter a device managed
>> by CloudStack, utilize the management operations exposed by the driver
>> to automate allocation. If these thoughts/concepts make sense, then we
>> can sit down and drill down to a more detailed design.
>>
>> Thanks,
>> -John
>>
>> On Jun 3, 2013, at 5:25 PM, Mike Tutkowski
>> <mike.tutkowski@solidfire.com> wrote:
>>
>> > Here is the difference between the current iSCSI type and the Dynamic
>> > type:
>> >
>> > iSCSI type: The admin has to go in and create a Primary Storage based
>> > on the iSCSI type. At this point in time, the iSCSI volume must exist
>> > on the storage system (it is pre-allocated). Future CloudStack
>> > volumes are created as VDIs on the SR that was created behind the
>> > scenes.
>> >
>> > Dynamic type: The admin has to go in and create Primary Storage based
>> > on a plug-in that will create and delete volumes on its storage
>> > system dynamically (as is enabled via the storage framework). When a
>> > user wants to attach a CloudStack volume that was created, the
>> > framework tells the plug-in to create a new volume. After this is
>> > done, the attach logic for the hypervisor in question is called. No
>> > hypervisor data structure exists at this point because the volume was
>> > just created. The hypervisor data structure must be created.
>> >
>> >
>> > On Mon, Jun 3, 2013 at 3:21 PM, Mike Tutkowski <
>> > mike.tutkowski@solidfire.com> wrote:
>> >
>> >> These are new terms, so I should probably have defined them up front
>> >> for you. :)
>> >>
>> >> Static storage: Storage that is pre-allocated (ex. an admin creates
>> >> a volume on a SAN), then a hypervisor data structure is created to
>> >> consume the storage (ex.
>> >> a XenServer SR), then that hypervisor data structure is consumed by
>> >> CloudStack. Disks (VDIs) are later placed on this hypervisor data
>> >> structure as needed. In these cases, the attach logic assumes the
>> >> hypervisor data structure is already in place and simply attaches
>> >> the VDI on the hypervisor data structure to the VM in question.
>> >>
>> >> Dynamic storage: Storage that is not pre-allocated. Instead of
>> >> pre-existent storage, this could be a SAN (not a volume on a SAN,
>> >> but the SAN itself). The hypervisor data structure must be created
>> >> when an attach volume is performed because these types of volumes
>> >> have not been pre-hooked up to such a hypervisor data structure by
>> >> an admin. Once the attach logic creates, say, an SR on XenServer for
>> >> this volume, it attaches the one and only VDI within the SR to the
>> >> VM in question.
>> >>
>> >>
>> >> On Mon, Jun 3, 2013 at 3:13 PM, John Burwell <jburwell@basho.com>
>> >> wrote:
>> >>
>> >>> Mike,
>> >>>
>> >>> The current implementation of the Dynamic type attach behavior
>> >>> works in terms of Xen iSCSI, which is why I ask about the
>> >>> difference. Another way to ask the question -- what is the
>> >>> definition of a Dynamic storage pool type?
>> >>>
>> >>> Thanks,
>> >>> -John
>> >>>
>> >>> On Jun 3, 2013, at 5:10 PM, Mike Tutkowski <
>> >>> mike.tutkowski@solidfire.com> wrote:
>> >>>
>> >>>> As far as I know, the iSCSI type is uniquely used by XenServer
>> >>>> when you want to set up Primary Storage that is directly based on
>> >>>> an iSCSI target. This allows you to skip the step of going to the
>> >>>> hypervisor and creating a storage repository based on that iSCSI
>> >>>> target, as CloudStack does that part for you. I think this is only
>> >>>> supported for XenServer. For all other hypervisors, you must first
>> >>>> go to the hypervisor and perform this step manually.
>> >>>>
>> >>>> I don't really know what RBD is.
>> >>>>
>> >>>>
>> >>>> On Mon, Jun 3, 2013 at 2:13 PM, John Burwell <jburwell@basho.com>
>> >>>> wrote:
>> >>>>
>> >>>>> Mike,
>> >>>>>
>> >>>>> Reading through the code, what is the difference between the
>> >>>>> iSCSI and Dynamic types? Why isn't RBD considered Dynamic?
>> >>>>>
>> >>>>> Thanks,
>> >>>>> -John
>> >>>>>
>> >>>>> On Jun 3, 2013, at 3:46 PM, Mike Tutkowski <
>> >>>>> mike.tutkowski@solidfire.com> wrote:
>> >>>>>
>> >>>>>> This new type of storage is defined in the
>> >>>>>> Storage.StoragePoolType class (called Dynamic):
>> >>>>>>
>> >>>>>> public static enum StoragePoolType {
>> >>>>>>     Filesystem(false),       // local directory
>> >>>>>>     NetworkFilesystem(true), // NFS or CIFS
>> >>>>>>     IscsiLUN(true),          // shared LUN, with a clusterfs overlay
>> >>>>>>     Iscsi(true),             // for e.g. ZFS Comstar
>> >>>>>>     ISO(false),              // for ISO images
>> >>>>>>     LVM(false),              // XenServer local LVM SR
>> >>>>>>     CLVM(true),
>> >>>>>>     RBD(true),
>> >>>>>>     SharedMountPoint(true),
>> >>>>>>     VMFS(true),              // VMware VMFS storage
>> >>>>>>     PreSetup(true),          // for XenServer, Storage Pool is
>> >>>>>>                              // set up by customers
>> >>>>>>     EXT(false),              // XenServer local EXT SR
>> >>>>>>     OCFS2(true),
>> >>>>>>     Dynamic(true);           // dynamic, zone-wide storage
>> >>>>>>                              // (ex. SolidFire)
>> >>>>>>
>> >>>>>>     boolean shared;
>> >>>>>>
>> >>>>>>     StoragePoolType(boolean shared) {
>> >>>>>>         this.shared = shared;
>> >>>>>>     }
>> >>>>>>
>> >>>>>>     public boolean isShared() {
>> >>>>>>         return shared;
>> >>>>>>     }
>> >>>>>> }
>> >>>>>>
>> >>>>>>
>> >>>>>> On Mon, Jun 3, 2013 at 1:41 PM, Mike Tutkowski <
>> >>>>>> mike.tutkowski@solidfire.com> wrote:
>> >>>>>>
>> >>>>>>> For example, let's say another storage company wants to
>> >>>>>>> implement a plug-in to leverage its Quality of Service feature.
>> >>>>>>> It would be dynamic, zone-wide storage, as well. They would
>> >>>>>>> need only implement a storage plug-in, as I've made the
>> >>>>>>> necessary changes to the hypervisor-attach logic to support
>> >>>>>>> their plug-in.
>> >>>>>>>
>> >>>>>>>
>> >>>>>>> On Mon, Jun 3, 2013 at 1:39 PM, Mike Tutkowski <
>> >>>>>>> mike.tutkowski@solidfire.com> wrote:
>> >>>>>>>
>> >>>>>>>> Oh, sorry to imply the XenServer code is SolidFire specific.
>> >>>>>>>> It is not.
>> >>>>>>>>
>> >>>>>>>> The XenServer attach logic is now aware of dynamic, zone-wide
>> >>>>>>>> storage (and SolidFire is an implementation of this kind of
>> >>>>>>>> storage). This kind of storage is new to 4.2 with Edison's
>> >>>>>>>> storage framework changes.
>> >>>>>>>>
>> >>>>>>>> Edison created a new framework that supported the creation
>> >>>>>>>> and deletion of volumes dynamically. However, when I visited
>> >>>>>>>> with him in Portland back in April, we realized that it was
>> >>>>>>>> not complete. We realized there was nothing CloudStack could
>> >>>>>>>> do with these volumes unless the attach logic was changed to
>> >>>>>>>> recognize this new type of storage and create the appropriate
>> >>>>>>>> hypervisor data structure.
>> >>>>>>>>
>> >>>>>>>>
>> >>>>>>>> On Mon, Jun 3, 2013 at 1:28 PM, John Burwell
>> >>>>>>>> <jburwell@basho.com> wrote:
>> >>>>>>>>
>> >>>>>>>>> Mike,
>> >>>>>>>>>
>> >>>>>>>>> It is generally odd to me that any operation in the Storage
>> >>>>>>>>> layer would understand or care about hypervisor details. I
>> >>>>>>>>> expect to see the Storage services expose a set of operations
>> >>>>>>>>> that can be composed/driven by the Hypervisor implementations
>> >>>>>>>>> to allocate space/create structures per their needs. If we
>> >>>>>>>>> don't invert this dependency, we are going to end up with a
>> >>>>>>>>> massive n-to-n problem that will make the system increasingly
>> >>>>>>>>> difficult to maintain and enhance. Am I understanding that
>> >>>>>>>>> the Xen-specific SolidFire code is located in the
>> >>>>>>>>> CitrixResourceBase class?
>> >>>>>>>>>
>> >>>>>>>>> Thanks,
>> >>>>>>>>> -John
>> >>>>>>>>>
>> >>>>>>>>>
>> >>>>>>>>> On Mon, Jun 3, 2013 at 3:12 PM, Mike Tutkowski <
>> >>>>>>>>> mike.tutkowski@solidfire.com> wrote:
>> >>>>>>>>>
>> >>>>>>>>>> To delve into this in a bit more detail:
>> >>>>>>>>>>
>> >>>>>>>>>> Prior to 4.2, and aside from one setup method for XenServer,
>> >>>>>>>>>> the admin had to first create a volume on the storage
>> >>>>>>>>>> system, then go into the hypervisor to set up a data
>> >>>>>>>>>> structure to make use of the volume (ex. a storage
>> >>>>>>>>>> repository on XenServer or a datastore on ESX(i)). VMs and
>> >>>>>>>>>> data disks then shared this storage system's volume.
>> >>>>>>>>>>
>> >>>>>>>>>> With Edison's new storage framework, storage need no longer
>> >>>>>>>>>> be so static, and you can easily create a 1:1 relationship
>> >>>>>>>>>> between a storage system's volume and the VM's data disk
>> >>>>>>>>>> (necessary for storage Quality of Service).
>> >>>>>>>>>>
>> >>>>>>>>>> You can now write a plug-in that is called to dynamically
>> >>>>>>>>>> create and delete volumes as needed.
>> >>>>>>>>>>
>> >>>>>>>>>> The problem that the storage framework did not address is in
>> >>>>>>>>>> creating and deleting the hypervisor-specific data structure
>> >>>>>>>>>> when performing an attach/detach.
>> >>>>>>>>>>
>> >>>>>>>>>> That being the case, I've been enhancing it to do so. I've
>> >>>>>>>>>> got XenServer worked out and submitted. I've got ESX(i) in
>> >>>>>>>>>> my sandbox and can submit this if we extend the 4.2 freeze
>> >>>>>>>>>> date.
>> >>>>>>>>>>
>> >>>>>>>>>> Does that help a bit? :)
>> >>>>>>>>>>
>> >>>>>>>>>>
>> >>>>>>>>>> On Mon, Jun 3, 2013 at 1:03 PM, Mike Tutkowski <
>> >>>>>>>>>> mike.tutkowski@solidfire.com> wrote:
>> >>>>>>>>>>
>> >>>>>>>>>>> Hi John,
>> >>>>>>>>>>>
>> >>>>>>>>>>> The storage plug-in - by itself - is hypervisor agnostic.
>> >>>>>>>>>>>
>> >>>>>>>>>>> The issue is with the volume-attach logic (in the agent
>> >>>>>>>>>>> code). The storage framework calls into the plug-in to have
>> >>>>>>>>>>> it create a volume as needed, but when the time comes to
>> >>>>>>>>>>> attach the volume to a hypervisor, the attach logic has to
>> >>>>>>>>>>> be smart enough to recognize it's being invoked on
>> >>>>>>>>>>> zone-wide storage (where the volume has just been created)
>> >>>>>>>>>>> and create, say, a storage repository (for XenServer) or a
>> >>>>>>>>>>> datastore (for VMware) to make use of the volume that was
>> >>>>>>>>>>> just created.
>> >>>>>>>>>>>
>> >>>>>>>>>>> I've been spending most of my time recently making the
>> >>>>>>>>>>> attach logic work in the agent code.
>> >>>>>>>>>>>
>> >>>>>>>>>>> Does that clear it up?
>> >>>>>>>>>>>
>> >>>>>>>>>>> Thanks!
>> >>>>>>>>>>>
>> >>>>>>>>>>>
>> >>>>>>>>>>> On Mon, Jun 3, 2013 at 12:48 PM, John Burwell <
>> >>>>>>>>>>> jburwell@basho.com> wrote:
>> >>>>>>>>>>>
>> >>>>>>>>>>>> Mike,
>> >>>>>>>>>>>>
>> >>>>>>>>>>>> Can you explain why the storage driver is hypervisor
>> >>>>>>>>>>>> specific?
>> >>>>>>>>>>>>
>> >>>>>>>>>>>> Thanks,
>> >>>>>>>>>>>> -John
>> >>>>>>>>>>>>
>> >>>>>>>>>>>> On Jun 3, 2013, at 1:21 PM, Mike Tutkowski <
>> >>>>>>>>>>>> mike.tutkowski@solidfire.com> wrote:
>> >>>>>>>>>>>>
>> >>>>>>>>>>>>> Yes, ultimately I would like to support all hypervisors
>> >>>>>>>>>>>>> that CloudStack supports. I think I'm just out of time
>> >>>>>>>>>>>>> for 4.2 to get KVM in.
>> >>>>>>>>>>>>>
>> >>>>>>>>>>>>> Right now this plug-in supports XenServer. Depending on
>> >>>>>>>>>>>>> what we do with regards to the 4.2 feature freeze, I have
>> >>>>>>>>>>>>> it working for VMware in my sandbox, as well.
>> >>>>>>>>>>>>>
>> >>>>>>>>>>>>> Also, just to be clear, this is all in regards to Disk
>> >>>>>>>>>>>>> Offerings. I plan to support Compute Offerings post 4.2.
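A minimal sketch of the attach-time branch described above, in Java
since that is what the agent code is written in. The helper names
(createIscsiSr, findOnlyVdi, findVdiByPath, attachVdiToVm) are
hypothetical placeholders, not CloudStack's actual methods:

    // For dynamic, zone-wide storage the hypervisor-side container
    // (e.g. a XenServer SR) does not exist yet at attach time and must
    // be built around the just-created SAN volume; for static storage
    // the admin created it beforehand.
    if (pool.getType() == StoragePoolType.Dynamic) {
        // The storage plug-in has already created the SAN volume; wrap
        // an SR around its iSCSI target, then attach its one and only
        // VDI to the VM.
        SR sr = createIscsiSr(conn, pool, volume);       // hypothetical
        VDI vdi = findOnlyVdi(conn, sr);                 // hypothetical
        attachVdiToVm(conn, vdi, vm);                    // hypothetical
    } else {
        // Static storage: the SR already exists, so just look up the
        // VDI on it and attach it.
        VDI vdi = findVdiByPath(conn, volume.getPath()); // hypothetical
        attachVdiToVm(conn, vdi, vm);
    }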
>> >>>>>>>>>>>>>
>> >>>>>>>>>>>>>
>> >>>>>>>>>>>>> On Mon, Jun 3, 2013 at 11:14 AM, Kelcey Jamison Damage <
>> >>>>>>>>>>>>> kelcey@bbits.ca> wrote:
>> >>>>>>>>>>>>>
>> >>>>>>>>>>>>>> Is there any plan on supporting KVM in the patch cycle
>> >>>>>>>>>>>>>> post 4.2?
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>> ----- Original Message -----
>> >>>>>>>>>>>>>> From: "Mike Tutkowski" <mike.tutkowski@solidfire.com>
>> >>>>>>>>>>>>>> To: dev@cloudstack.apache.org
>> >>>>>>>>>>>>>> Sent: Monday, June 3, 2013 10:12:32 AM
>> >>>>>>>>>>>>>> Subject: Re: [MERGE] disk_io_throttling to MASTER
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>> I agree on merging Wei's feature first, then mine.
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>> If his feature is for KVM only, then it is a non-issue,
>> >>>>>>>>>>>>>> as I don't support KVM in 4.2.
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>> On Mon, Jun 3, 2013 at 8:55 AM, Wei ZHOU <
>> >>>>>>>>>>>>>> ustcweizhou@gmail.com> wrote:
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>> John,
>> >>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>> For the billing, as no one works on billing now, users
>> >>>>>>>>>>>>>>> need to calculate the billing by themselves. They can
>> >>>>>>>>>>>>>>> get the service_offering and disk_offering of VMs and
>> >>>>>>>>>>>>>>> volumes for the calculation. Of course it is better to
>> >>>>>>>>>>>>>>> tell the user the exact limitation value of an
>> >>>>>>>>>>>>>>> individual volume, and the network rate limitation for
>> >>>>>>>>>>>>>>> NICs as well. I can work on it later. Do you think it
>> >>>>>>>>>>>>>>> is a part of I/O throttling?
>> >>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>> Sorry, I misunderstood the second question.
>> >>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>> Agree with what you said about the two features.
>> >>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>> -Wei
>> >>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>> 2013/6/3 John Burwell <jburwell@basho.com>
>> >>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>> Wei,
>> >>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>> On Jun 3, 2013, at 2:13 AM, Wei ZHOU <
>> >>>>>>>>>>>>>>>> ustcweizhou@gmail.com> wrote:
>> >>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>> Hi John, Mike
>> >>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>> I hope Mike's answer helps you. I am trying to add
>> >>>>>>>>>>>>>>>>> more.
>> >>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>> (1) I think billing should depend on I/O statistics
>> >>>>>>>>>>>>>>>>> rather than the IOPS limitation. Please review
>> >>>>>>>>>>>>>>>>> disk_io_stat if you have time. disk_io_stat can get
>> >>>>>>>>>>>>>>>>> the I/O statistics, including bytes/iops read/write,
>> >>>>>>>>>>>>>>>>> for an individual virtual machine.
>> >>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>> Going by the AWS model, customers are billed more for
>> >>>>>>>>>>>>>>>> volumes with provisioned IOPS, as well as for those
>> >>>>>>>>>>>>>>>> operations (http://aws.amazon.com/ebs/). I would
>> >>>>>>>>>>>>>>>> imagine our users would like the option to employ
>> >>>>>>>>>>>>>>>> similar cost models. Could an operator implement such
>> >>>>>>>>>>>>>>>> a billing model in the current patch?
>> >>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>> (2) Do you mean an IOPS runtime change? KVM supports
>> >>>>>>>>>>>>>>>>> setting the IOPS/BPS limitation for a running virtual
>> >>>>>>>>>>>>>>>>> machine through the command line.
>> >>>>>>>>>>>>>>>>> However, CloudStack does not support changing the
>> >>>>>>>>>>>>>>>>> parameters of a created offering (compute offering or
>> >>>>>>>>>>>>>>>>> disk offering).
>> >>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>> I meant at the Java interface level. I apologize for
>> >>>>>>>>>>>>>>>> being unclear. Can we further generalize the
>> >>>>>>>>>>>>>>>> allocation algorithms with a set of interfaces that
>> >>>>>>>>>>>>>>>> describe the service guarantees provided by a
>> >>>>>>>>>>>>>>>> resource?
>> >>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>> (3) It is a good question. Maybe it is better to
>> >>>>>>>>>>>>>>>>> commit Mike's patch after disk_io_throttling, as Mike
>> >>>>>>>>>>>>>>>>> needs to consider the limitation in hypervisor type,
>> >>>>>>>>>>>>>>>>> I think.
>> >>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>> I will expand on my thoughts in a later response to
>> >>>>>>>>>>>>>>>> Mike regarding the touch points between these two
>> >>>>>>>>>>>>>>>> features. I think that disk_io_throttling will need to
>> >>>>>>>>>>>>>>>> be merged before SolidFire, but I think we need closer
>> >>>>>>>>>>>>>>>> coordination between the branches (possibly have
>> >>>>>>>>>>>>>>>> solidfire track disk_io_throttling) to coordinate on
>> >>>>>>>>>>>>>>>> this issue.
>> >>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>> - Wei
>> >>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>> 2013/6/3 John Burwell <jburwell@basho.com>
>> >>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>> Mike,
>> >>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>> The things I want to understand are the following:
>> >>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>> 1) Is there value in capturing IOPS policies in a
>> >>>>>>>>>>>>>>>>>> common data model (e.g. for billing/usage purposes,
>> >>>>>>>>>>>>>>>>>> expressing offerings)?
>> >>>>>>>>>>>>>>>>>> 2) Should there be a common interface model for
>> >>>>>>>>>>>>>>>>>> reasoning about IOPS provisioning at runtime?
>> >>>>>>>>>>>>>>>>>> 3) How are conflicting provisioned IOPS
>> >>>>>>>>>>>>>>>>>> configurations between a hypervisor and a storage
>> >>>>>>>>>>>>>>>>>> device reconciled? In particular, a scenario where a
>> >>>>>>>>>>>>>>>>>> user is led to believe in (and billed for) more IOPS
>> >>>>>>>>>>>>>>>>>> configured for a VM than a storage device has been
>> >>>>>>>>>>>>>>>>>> configured to deliver. Another scenario could be a
>> >>>>>>>>>>>>>>>>>> consistent configuration between a VM and a storage
>> >>>>>>>>>>>>>>>>>> device at creation time, where a later modification
>> >>>>>>>>>>>>>>>>>> to the storage device introduces logical
>> >>>>>>>>>>>>>>>>>> inconsistency.
>> >>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>> Thanks,
>> >>>>>>>>>>>>>>>>>> -John
>> >>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>> On Jun 2, 2013, at 8:38 PM, Mike Tutkowski <
>> >>>>>>>>>>>>>>>>>> mike.tutkowski@solidfire.com> wrote:
>> >>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>> Hi John,
>> >>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>> I believe Wei's feature deals with controlling the
>> >>>>>>>>>>>>>>>>>> max number of IOPS from the hypervisor side.
>> >>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>> My feature is focused on controlling IOPS from the
>> >>>>>>>>>>>>>>>>>> storage system side.
>> >>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>> I hope that helps. :)
>> >>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>> On Sun, Jun 2, 2013 at 6:35 PM, John Burwell <
>> >>>>>>>>>>>>>>>>>> jburwell@basho.com> wrote:
>> >>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> Wei,
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> My opinion is that no features should be merged
>> >>>>>>>>>>>>>>>>>>> until all functional issues have been resolved and
>> >>>>>>>>>>>>>>>>>>> it is ready to turn over to test. Until the total
>> >>>>>>>>>>>>>>>>>>> ops vs discrete read/write ops issue is addressed
>> >>>>>>>>>>>>>>>>>>> and re-reviewed by Wido, I don't think this
>> >>>>>>>>>>>>>>>>>>> criterion has been satisfied.
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> Also, how does this work intersect/complement the
>> >>>>>>>>>>>>>>>>>>> SolidFire patch (https://reviews.apache.org/r/11479/)?
>> >>>>>>>>>>>>>>>>>>> As I understand it, that work also involves
>> >>>>>>>>>>>>>>>>>>> provisioned IOPS. I would like to ensure we don't
>> >>>>>>>>>>>>>>>>>>> have a scenario where provisioned IOPS in KVM and
>> >>>>>>>>>>>>>>>>>>> SolidFire are unnecessarily incompatible.
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> Thanks,
>> >>>>>>>>>>>>>>>>>>> -John
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> On Jun 1, 2013, at 6:47 AM, Wei ZHOU <
>> >>>>>>>>>>>>>>>>>>> ustcweizhou@gmail.com> wrote:
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> Wido,
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> Sure. I will change it next week.
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> -Wei
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> 2013/6/1 Wido den Hollander <wido@widodh.nl>
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> Hi Wei,
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> On 06/01/2013 08:24 AM, Wei ZHOU wrote:
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> Wido,
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> Exactly. I have pushed the features into master.
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> If anyone objects to them for technical reasons
>> >>>>>>>>>>>>>>>>>>> before Monday, I will revert them.
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> For the sake of clarity I just want to mention
>> >>>>>>>>>>>>>>>>>>> again that we should change the total IOps to R/W
>> >>>>>>>>>>>>>>>>>>> IOps asap so that we never release a version with
>> >>>>>>>>>>>>>>>>>>> only total IOps.
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> You laid the groundwork for the I/O throttling and
>> >>>>>>>>>>>>>>>>>>> that's great! We should however prevent creating
>> >>>>>>>>>>>>>>>>>>> legacy from day #1.
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> Wido
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> -Wei
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> 2013/5/31 Wido den Hollander <wido@widodh.nl>
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> On 05/31/2013 03:59 PM, John Burwell wrote:
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> Wido,
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> +1 -- this enhancement must discretely support read
>> >>>>>>>>>>>>>>>>>>> and write IOPS. I don't see how it could be fixed
>> >>>>>>>>>>>>>>>>>>> later because I don't see how we could correctly
>> >>>>>>>>>>>>>>>>>>> split total IOPS into read and write. Therefore, we
>> >>>>>>>>>>>>>>>>>>> would be stuck with a total unless/until we decided
>> >>>>>>>>>>>>>>>>>>> to break backwards compatibility.
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> What Wei meant was merging it into master now so
>> >>>>>>>>>>>>>>>>>>> that it will go in the 4.2 branch, and adding Read
>> >>>>>>>>>>>>>>>>>>> / Write IOps before the 4.2 release, so that 4.2
>> >>>>>>>>>>>>>>>>>>> will be released with Read and Write instead of
>> >>>>>>>>>>>>>>>>>>> Total IOps.
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> This is to make the May 31st feature freeze date.
>> >>>>>>>>>>>>>>>>>>> But if the window moves (see other threads) then it
>> >>>>>>>>>>>>>>>>>>> won't be necessary to do that.
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> Wido
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> I also completely agree that there is no
>> >>>>>>>>>>>>>>>>>>> association between network and disk I/O.
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> Thanks,
>> >>>>>>>>>>>>>>>>>>> -John
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> On May 31, 2013, at 9:51 AM, Wido den Hollander <
>> >>>>>>>>>>>>>>>>>>> wido@widodh.nl> wrote:
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> Hi Wei,
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> On 05/31/2013 03:13 PM, Wei ZHOU wrote:
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> Hi Wido,
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> Thanks. Good question.
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> I thought about it at the beginning.
>> >>>>>>>>>>>>>>>>>>> Finally, I decided to ignore the difference between
>> >>>>>>>>>>>>>>>>>>> read and write, mainly because the network
>> >>>>>>>>>>>>>>>>>>> throttling did not distinguish between sent and
>> >>>>>>>>>>>>>>>>>>> received bytes either.
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> That reasoning seems odd. Networking and disk I/O
>> >>>>>>>>>>>>>>>>>>> are completely different. Disk I/O is much more
>> >>>>>>>>>>>>>>>>>>> expensive in most situations than network
>> >>>>>>>>>>>>>>>>>>> bandwidth.
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> Implementing it will be some copy-paste work. It
>> >>>>>>>>>>>>>>>>>>> could be implemented in a few days. Because of the
>> >>>>>>>>>>>>>>>>>>> feature freeze deadline, I will implement it after
>> >>>>>>>>>>>>>>>>>>> that, if needed.
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> I think it's a feature we can't miss. But if it
>> >>>>>>>>>>>>>>>>>>> goes into the 4.2 window we have to make sure we
>> >>>>>>>>>>>>>>>>>>> don't release with only total IOps and fix it in
>> >>>>>>>>>>>>>>>>>>> 4.3; that would confuse users.
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> Wido
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> -Wei
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> 2013/5/31 Wido den Hollander <wido@widodh.nl>
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> Hi Wei,
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> On 05/30/2013 06:03 PM, Wei ZHOU wrote:
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> Hi,
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> I would like to merge the disk_io_throttling branch
>> >>>>>>>>>>>>>>>>>>> into master.
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> If nobody objects, I will merge into master in 48
>> >>>>>>>>>>>>>>>>>>> hours.
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> The purpose is:
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> Virtual machines are running on the same storage
>> >>>>>>>>>>>>>>>>>>> device (local storage or shared storage). Because
>> >>>>>>>>>>>>>>>>>>> of the rate limitation of the device (such as
>> >>>>>>>>>>>>>>>>>>> IOPS), if one VM has heavy disk activity, it may
>> >>>>>>>>>>>>>>>>>>> affect the disk performance of other VMs running on
>> >>>>>>>>>>>>>>>>>>> the same storage device.
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> It is necessary to set a maximum rate and limit the
>> >>>>>>>>>>>>>>>>>>> disk I/O of VMs.
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> Looking at the code, I see you make no distinction
>> >>>>>>>>>>>>>>>>>>> between Read and Write IOps.
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> Qemu and libvirt support setting a different rate
>> >>>>>>>>>>>>>>>>>>> for Read and Write IOps, which could benefit a lot
>> >>>>>>>>>>>>>>>>>>> of users.
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> It's also strange that on the polling side you
>> >>>>>>>>>>>>>>>>>>> collect both the Read and Write IOps, but on the
>> >>>>>>>>>>>>>>>>>>> throttling side you only go for a global value.
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> Write IOps are usually much more expensive than
>> >>>>>>>>>>>>>>>>>>> Read IOps, so it seems like a valid use case where
>> >>>>>>>>>>>>>>>>>>> an admin would set a lower value for Write IOps
>> >>>>>>>>>>>>>>>>>>> than for Read IOps.
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> Since this only supports KVM at this point, I think
>> >>>>>>>>>>>>>>>>>>> it would be of great value to at least have the
>> >>>>>>>>>>>>>>>>>>> mechanism in place to support both; implementing
>> >>>>>>>>>>>>>>>>>>> this later would be a lot of work.
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> If a hypervisor doesn't support setting different
>> >>>>>>>>>>>>>>>>>>> values for read and write, you can always sum both
>> >>>>>>>>>>>>>>>>>>> up and set that as the total limit.
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> Can you explain why you implemented it this way?
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> Wido
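A minimal illustration of the distinction being discussed, assuming
KVM/libvirt as the hypervisor: libvirt's per-disk <iotune> element in
the domain XML accepts either discrete read/write limits or a single
total limit, and the two forms are mutually exclusive, which is why
releasing with only a total limit would be hard to split into
read/write later. The Java below just assembles the two XML shapes;
the variable names are hypothetical.

    // Hypothetical per-disk caps for illustration only.
    long readIopsLimit = 500;
    long writeIopsLimit = 250;

    // Discrete form: separate read and write IOPS caps.
    String discrete =
        "<iotune>"
        + "<read_iops_sec>" + readIopsLimit + "</read_iops_sec>"
        + "<write_iops_sec>" + writeIopsLimit + "</write_iops_sec>"
        + "</iotune>";

    // Total-only form: what summing both caps would look like on a
    // hypervisor that cannot distinguish read from write.
    String totalOnly =
        "<iotune>"
        + "<total_iops_sec>" + (readIopsLimit + writeIopsLimit)
        + "</total_iops_sec>"
        + "</iotune>";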
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> The feature includes:
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> (1) set the maximum rate of VMs (in disk_offering,
>> >>>>>>>>>>>>>>>>>>> and global configuration)
>> >>>>>>>>>>>>>>>>>>> (2) change the maximum rate of VMs
>> >>>>>>>>>>>>>>>>>>> (3) limit the disk rate (total bps and iops)
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> JIRA ticket:
>> >>>>>>>>>>>>>>>>>>> https://issues.apache.org/jira/browse/CLOUDSTACK-1192
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> FS (I will update later):
>> >>>>>>>>>>>>>>>>>>> https://cwiki.apache.org/confluence/display/CLOUDSTACK/VM+Disk+IO+Throttling
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> Merge check list :-
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> * Did you check the branch's RAT execution success?
>> >>>>>>>>>>>>>>>>>>> Yes
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> * Are there new dependencies introduced?
>> >>>>>>>>>>>>>>>>>>> No
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> * What automated testing (unit and integration) is
>> >>>>>>>>>>>>>>>>>>> included in the new feature?
>> >>>>>>>>>>>>>>>>>>> Unit tests are added.
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> * What testing has been done to check for potential
>> >>>>>>>>>>>>>>>>>>> regressions?
>> >>>>>>>>>>>>>>>>>>> (1) set the bytes rate and IOPS rate on the
>> >>>>>>>>>>>>>>>>>>> CloudStack UI.
>> >>>>>>>>>>>>>>>>>>> (2) VM operations, including
>> >>>>>>>>>>>>>>>>>>> deploy, stop, start, reboot, destroy, expunge,
>> >>>>>>>>>>>>>>>>>>> migrate, restore
>> >>>>>>>>>>>>>>>>>>> (3) Volume operations, including
>> >>>>>>>>>>>>>>>>>>> attach, detach
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> To review the code, you can try
>> >>>>>>>>>>>>>>>>>>> git diff c30057635d04a2396f84c588127d7ebe42e503a7
>> >>>>>>>>>>>>>>>>>>> f2e5591b710d04cc86815044f5823e73a4a58944
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> Best regards,
>> >>>>>>>>>>>>>>>>>>> Wei
>> >>>>>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>>>>> [1]
>> >>>>>>>>>>>>>>>>>>> https://cwiki.apache.org/confluence/display/CLOUDSTACK/VM+Disk+IO+Throttling
>> >>>>>>>>>>>>>>>>>>> [2] refs/heads/disk_io_throttling
>> >>>>>>>>>>>>>>>>>>> [3] https://issues.apache.org/jira/browse/CLOUDSTACK-1301
>> >>>>>>>>>>>>>>>>>>>     https://issues.apache.org/jira/browse/CLOUDSTACK-2071
>> >>>>>>>>>>>>>>>>>>>     (CLOUDSTACK-1301 - VM Disk I/O Throttling)
-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the cloud
<http://solidfire.com/solution/overview/?video=play>
*™*