Subject: Re: [MERGE] disk_io_throttling to MASTER
From: John Burwell <jburwell@basho.com>
Date: Wed, 12 Jun 2013 16:54:53 -0400
To: dev@cloudstack.apache.org

Mike,

Looking through the code, I am trying to understand how CreateDiskOfferingCmd would have the context to identify the conflict.
Naively, it seems to me that this rule would need to be enforced when a virtual machine is being deployed. Looking through the code, it seems like we should add a private validateStorageQoS method to com.cloud.vm.UserVmManagerImpl to check this condition and throw a ResourceAllocationException when the QoS definitions are inconsistent. We would then add calls to it from each of the VM creation methods in the service. Does this type of approach sound reasonable?

Thanks,
-John

On Jun 12, 2013, at 4:30 PM, Mike Tutkowski wrote:

> Hi John,
>
> So, here's what I was planning to do. Of course, feel free to correct me on this approach.
>
> I think it's OK if Wei merges his code into master, and then I can draw from the main repo and merge master into mine locally.
>
> 1) Once I get Wei's code and merge, I plan to add a little GUI code to make it user friendly (toggle between these features on the Add Disk Offering window).
>
> 2) I plan to write validation logic for the create-disk-offering API command which throws an exception if the rules are not followed (this should never be triggered from the GUI, since the GUI will have controls in place to toggle between the one feature and the other).
>
> I'm not sure about documentation. I haven't had much experience with it on CloudStack projects yet.
>
> Thanks!
>
> On Wed, Jun 12, 2013 at 2:21 PM, John Burwell wrote:
>
>> Mike,
>>
>> Yes, these server-side rails need to be defined and implemented before either patch can be merged. From my perspective, I would like to see the rule implemented in the hypervisor as part of the validation of the virtual machine definition. We also need to make sure that this mutual exclusion is documented. Do we usually include this type of documentation with patches of this nature?
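A minimal sketch of the validateStorageQoS idea described above. The QoS field names (bytesReadRate, iopsWriteRate, minIops, maxIops) and the stand-in exception are illustrative assumptions, not the actual CloudStack DiskOffering API; in UserVmManagerImpl the failure case would throw ResourceAllocationException instead:

```java
// Rough sketch, NOT the actual CloudStack implementation: the field names
// below are assumptions standing in for a disk offering's QoS settings.
public class QosValidator {

    /** Minimal stand-in for the QoS-related fields of a disk offering. */
    public static class DiskOfferingQos {
        public Long bytesReadRate;  // hypervisor-side throttling (disk_io_throttling)
        public Long iopsWriteRate;  // hypervisor-side throttling
        public Long minIops;        // storage-side QoS (SolidFire)
        public Long maxIops;        // storage-side QoS
    }

    /** True when the offering defines both hypervisor and storage QoS. */
    public static boolean isInconsistent(DiskOfferingQos qos) {
        boolean hypervisorQos = qos.bytesReadRate != null || qos.iopsWriteRate != null;
        boolean storageQos = qos.minIops != null || qos.maxIops != null;
        return hypervisorQos && storageQos;  // the two features are mutually exclusive
    }

    /** Would be called from each VM creation method in the service. */
    public static void validateStorageQoS(DiskOfferingQos qos) {
        if (isInconsistent(qos)) {
            // Stand-in for the CloudStack-specific ResourceAllocationException.
            throw new IllegalStateException(
                "Hypervisor I/O throttling and storage QoS cannot both be set on one disk offering");
        }
    }
}
```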
>>
>> Thanks,
>> -John
>>
>> On Jun 12, 2013, at 2:18 PM, Mike Tutkowski wrote:
>>
>>> Currently they are not yet implemented.
>>>
>>> We have to make sure they are implemented in the GUI from a usability standpoint, but the API must check for consistency and throw an exception if necessary.
>>>
>>> On Wed, Jun 12, 2013 at 11:03 AM, John Burwell wrote:
>>>
>>>> Mike,
>>>>
>>>> Are the checks only implemented in the UI?
>>>>
>>>> Thanks,
>>>> -John
>>>>
>>>> On Jun 12, 2013, at 1:02 PM, Mike Tutkowski wrote:
>>>>
>>>>> Hi John,
>>>>>
>>>>> Wei and I have discussed making the two features mutually exclusive. We agree with you that only one should be active at a time. We plan to implement in the GUI a mechanism (maybe radio buttons) to turn his feature on and mine off, and vice versa.
>>>>>
>>>>> I was thinking that if I wait until he checks in his code, then I update and merge, I will be the person resolving merge conflicts in the JavaScript code (there shouldn't be a problem in the Java code), as opposed to putting that work on someone else.
>>>>>
>>>>> Let me know what you think.
>>>>>
>>>>> Oh, I was going to ask you what "FS" stands for here.
>>>>>
>>>>> Thanks!
>>>>>
>>>>> On Wed, Jun 12, 2013 at 10:56 AM, John Burwell wrote:
>>>>>
>>>>>> Mike,
>>>>>>
>>>>>> How have Wei and you resolved the issue of conflicting QoS mechanisms between the Hypervisor and Storage layers? Have the affected FSs been updated with that decision?
>>>>>>
>>>>>> In terms of merge timing, can you describe the dependencies between the patches?
>>>>>>
>>>>>> Thanks,
>>>>>> -John
>>>>>>
>>>>>> On Jun 12, 2013, at 12:43 PM, Mike Tutkowski wrote:
>>>>>>
>>>>>>> No problem, John.
>>>>>>>
>>>>>>> I still want to re-review it by myself before coming up with a new patch file.
>>>>>>>
>>>>>>> Also, maybe I should first wait for Wei's changes to be checked in and merge those into mine before generating a new patch file?
>>>>>>>
>>>>>>> On Wed, Jun 12, 2013 at 10:40 AM, John Burwell wrote:
>>>>>>>
>>>>>>>> Mike,
>>>>>>>>
>>>>>>>> I just realized that I forgot to publish my review. I am offline ATM, but I will publish it in the next couple of hours.
>>>>>>>>
>>>>>>>> Do you plan to update your patch in Review Board?
>>>>>>>>
>>>>>>>> Sorry for the oversight,
>>>>>>>> -John
>>>>>>>>
>>>>>>>> On Jun 12, 2013, at 2:26 AM, Mike Tutkowski wrote:
>>>>>>>>
>>>>>>>>> Hi Edison, John, and Wei (and whoever else is reading this :) ),
>>>>>>>>>
>>>>>>>>> Just an FYI that I believe I have implemented all the areas we wanted addressed.
>>>>>>>>>
>>>>>>>>> I plan to review the code again tomorrow morning or afternoon, then send in another patch.
>>>>>>>>>
>>>>>>>>> Thanks for all the work on this, everyone!
>>>>>>>>>
>>>>>>>>> On Tue, Jun 11, 2013 at 12:29 PM, Mike Tutkowski <mike.tutkowski@solidfire.com> wrote:
>>>>>>>>>
>>>>>>>>>> Sure, that sounds good.
>>>>>>>>>>
>>>>>>>>>> On Tue, Jun 11, 2013 at 12:11 PM, Wei ZHOU wrote:
>>>>>>>>>>
>>>>>>>>>>> Hi Mike,
>>>>>>>>>>>
>>>>>>>>>>> It looks like the two features do not have many conflicts in the Java code, except for the CloudStack UI.
>>>>>>>>>>> If you do not mind, I will merge the disk_io_throttling branch into master this week, so that you can develop based on it.
>>>>>>>>>>>
>>>>>>>>>>> -Wei
>>>>>>>>>>>
>>>>>>>>>>> 2013/6/11 Mike Tutkowski
>>>>>>>>>>>
>>>>>>>>>>>> Hey John,
>>>>>>>>>>>>
>>>>>>>>>>>> The SolidFire patch does not depend on the object_store branch, but - as Edison mentioned - it might be easier if we merge the SolidFire branch into the object_store branch before object_store goes into master.
>>>>>>>>>>>>
>>>>>>>>>>>> I'm not sure how disk_io_throttling fits into this merge strategy. Perhaps Wei can chime in on that.
>>>>>>>>>>>>
>>>>>>>>>>>> On Tue, Jun 11, 2013 at 11:07 AM, John Burwell <jburwell@basho.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Mike,
>>>>>>>>>>>>>
>>>>>>>>>>>>> We have a delicate merge dance to perform. The disk_io_throttling, solidfire, and object_store branches appear to have a number of overlapping elements. I understand the dependencies between the patches to be as follows:
>>>>>>>>>>>>>
>>>>>>>>>>>>> object_store <- solidfire -> disk_io_throttling
>>>>>>>>>>>>>
>>>>>>>>>>>>> Am I correct that the device management aspects of SolidFire are additive to the object_store branch, or are there circular dependencies between the branches? Once we understand the dependency graph, we can determine the best approach to land the changes in master.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>> -John
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Jun 10, 2013, at 11:10 PM, Mike Tutkowski <mike.tutkowski@solidfire.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Also, if we are good with Edison merging my code into his branch before going into master, I am good with that.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> We can remove the StoragePoolType.Dynamic code after his merge, and we can deal with Burst IOPS then, as well.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Mon, Jun 10, 2013 at 9:08 PM, Mike Tutkowski <mike.tutkowski@solidfire.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Let me make sure I follow where we're going here:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> 1) There should be NO references to hypervisor code in the storage plug-ins code (this includes the default storage plug-in, which currently sends several commands to the hypervisor in use (although it does not know which hypervisor (XenServer, ESX(i), etc.) is actually in use)).
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> 2) managed=true or managed=false can be placed in the url field (if not present, we default to false). This info is stored in the storage_pool_details table.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> 3) When the "attach" command is sent to the hypervisor in question, we pass the managed property along (this takes the place of the StoragePoolType.Dynamic check).
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> 4) execute(AttachVolumeCommand) in the hypervisor checks for the managed property. If true for an attach, the necessary hypervisor data structure is created and the rest of the attach command executes to attach the volume.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> 5) When execute(AttachVolumeCommand) is invoked to detach a volume, the same check is made. If managed, the hypervisor data structure is removed.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> 6) I do not see a clear way to support Burst IOPS in 4.2 unless it is stored in the volumes and disk_offerings tables. If we have some idea, that'd be cool.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Thanks!
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Mon, Jun 10, 2013 at 8:58 PM, Mike Tutkowski <mike.tutkowski@solidfire.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> "+1 -- Burst IOPS can be implemented while avoiding implementation attributes. I always wondered about the details field. I think we should beef up the description in the documentation regarding the expected format of the field. In 4.1, I noticed that the details are not returned on the createStoragePool, updateStoragePool, or listStoragePool response. Why don't we return it? It seems like it would be useful for clients to be able to inspect the contents of the details field."
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I'm not sure how this would work for storing Burst IOPS here.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Burst IOPS need to be variable on a Disk Offering-by-Disk Offering basis. For each Disk Offering created, you have to be able to associate unique Burst IOPS. There is a disk_offering_details table. Maybe it could go there?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I'm also not sure how you would accept the Burst IOPS in the GUI if it's not stored like the Min and Max fields are in the DB.
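The managed-property flow in Mike's numbered steps above could be sketched roughly as follows. The key name ("managed"), method names, and return strings are illustrative assumptions, not the actual CloudStack hypervisor resource code:

```java
import java.util.Map;

// Illustrative sketch of steps 2-5 above: a "managed" flag read from the
// pool's storage_pool_details entries, defaulting to false when absent,
// and consulted by the attach/detach path. Names here are assumptions.
public class ManagedStorageSketch {

    /** Step 2: "managed" comes from the pool's detail map; absent means false. */
    public static boolean isManaged(Map<String, String> poolDetails) {
        return Boolean.parseBoolean(poolDetails.getOrDefault("managed", "false"));
    }

    /** Steps 4-5: only managed pools create/remove a hypervisor-side structure. */
    public static String handleVolume(Map<String, String> poolDetails, boolean attach) {
        if (isManaged(poolDetails)) {
            // e.g. create or tear down an SR (XenServer) / datastore (ESX(i))
            return attach ? "create-structure,attach" : "detach,remove-structure";
        }
        return attach ? "attach" : "detach";
    }
}
```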
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>> *Mike Tutkowski*
>>>>>>>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>>>>> o: 303.746.7302
>>>>>>>>>>>>>>> Advancing the way the world uses the cloud <http://solidfire.com/solution/overview/?video=play>
>>>>>>>>>>>>>>> *™*