cloudstack-dev mailing list archives

From John Burwell <jburwell@basho.com>
Subject Re: [MERGE] disk_io_throttling to MASTER
Date Mon, 03 Jun 2013 20:13:30 GMT
Mike,

Reading through the code, what is the difference between the Iscsi and Dynamic types?  Why isn't RBD considered Dynamic?

Thanks,
-John

On Jun 3, 2013, at 3:46 PM, Mike Tutkowski <mike.tutkowski@solidfire.com> wrote:

> This new type of storage is defined in the Storage.StoragePoolType class
> (called Dynamic):
> 
>     public static enum StoragePoolType {
>         Filesystem(false),       // local directory
>         NetworkFilesystem(true), // NFS or CIFS
>         IscsiLUN(true),          // shared LUN, with a clusterfs overlay
>         Iscsi(true),             // for e.g. ZFS Comstar
>         ISO(false),              // for iso image
>         LVM(false),              // XenServer local LVM SR
>         CLVM(true),
>         RBD(true),
>         SharedMountPoint(true),
>         VMFS(true),              // VMware VMFS storage
>         PreSetup(true),          // for XenServer, Storage Pool is set up by customers
>         EXT(false),              // XenServer local EXT SR
>         OCFS2(true),
>         Dynamic(true);           // dynamic, zone-wide storage (ex. SolidFire)
> 
>         boolean shared;
> 
>         StoragePoolType(boolean shared) {
>             this.shared = shared;
>         }
> 
>         public boolean isShared() {
>             return shared;
>         }
>     }
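> 
> As a rough sketch (the helper names here are hypothetical, not the actual
> agent code), the attach path can branch on this type:
> 
>     // Illustrative only: for Dynamic pools, the hypervisor-side
>     // structure (e.g. an SR on XenServer) must be created on demand
>     // before the disk can be attached.
>     void attachVolume(StoragePool pool, Volume volume) {
>         if (pool.getType() == StoragePoolType.Dynamic) {
>             createHypervisorStorage(pool, volume); // hypothetical helper
>         }
>         plugDiskIntoVm(volume); // hypothetical helper
>     }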
> 
> 
> On Mon, Jun 3, 2013 at 1:41 PM, Mike Tutkowski <mike.tutkowski@solidfire.com> wrote:
> 
>> For example, let's say another storage company wants to implement a
>> plug-in to leverage its Quality of Service feature. It would be dynamic,
>> zone-wide storage as well. They would only need to implement a storage
>> plug-in, as I've made the necessary changes to the hypervisor-attach logic
>> to support their plug-in.
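>> 
>> In other words, such a vendor would implement something along these lines
>> (a simplified sketch, not the actual storage-framework interface):
>> 
>>     // Hypothetical, simplified view of what a storage plug-in supplies:
>>     public interface DynamicStorageDriver {
>>         // create a volume on the storage system (e.g. with QoS settings)
>>         String createVolume(long sizeInBytes, long minIops, long maxIops);
>> 
>>         // delete it when the CloudStack volume is expunged
>>         void deleteVolume(String volumeId);
>>     }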
>> 
>> 
>> On Mon, Jun 3, 2013 at 1:39 PM, Mike Tutkowski <mike.tutkowski@solidfire.com> wrote:
>> 
>>> Oh, sorry to imply the XenServer code is SolidFire specific. It is not.
>>> 
>>> The XenServer attach logic is now aware of dynamic, zone-wide storage
>>> (and SolidFire is an implementation of this kind of storage). This kind of
>>> storage is new to 4.2 with Edison's storage framework changes.
>>> 
>>> Edison created a new framework that supports the creation and deletion
>>> of volumes dynamically. However, when I visited with him in Portland back
>>> in April, we realized that it was not complete: there was nothing
>>> CloudStack could do with these volumes unless the attach logic was
>>> changed to recognize this new type of storage and create the appropriate
>>> hypervisor data structure.
>>> 
>>> 
>>> On Mon, Jun 3, 2013 at 1:28 PM, John Burwell <jburwell@basho.com> wrote:
>>> 
>>>> Mike,
>>>> 
>>>> It is generally odd to me that any operation in the Storage layer would
>>>> understand or care about hypervisor details.  I expect to see the
>>>> Storage services expose a set of operations that can be composed/driven
>>>> by the Hypervisor implementations to allocate space/create structures
>>>> per their needs.  If we don't invert this dependency, we are going to
>>>> end up with a massive n-to-n problem that will make the system
>>>> increasingly difficult to maintain and enhance.  Am I understanding
>>>> correctly that the Xen-specific SolidFire code is located in the
>>>> CitrixResourceBase class?
>>>> 
>>>> Thanks,
>>>> -John
>>>> 
>>>> 
>>>> On Mon, Jun 3, 2013 at 3:12 PM, Mike Tutkowski <mike.tutkowski@solidfire.com> wrote:
>>>> 
>>>>> To delve into this in a bit more detail:
>>>>> 
>>>>> Prior to 4.2, and aside from one setup method for XenServer, the admin
>>>>> had to first create a volume on the storage system, then go into the
>>>>> hypervisor to set up a data structure to make use of the volume (ex. a
>>>>> storage repository on XenServer or a datastore on ESX(i)). VMs and data
>>>>> disks then shared this storage system's volume.
>>>>> 
>>>>> With Edison's new storage framework, storage need no longer be so
>>>>> static, and you can easily create a 1:1 relationship between a storage
>>>>> system's volume and the VM's data disk (necessary for storage Quality
>>>>> of Service).
>>>>> 
>>>>> You can now write a plug-in that is called to dynamically create and
>>>>> delete volumes as needed.
>>>>> 
>>>>> The problem that the storage framework did not address is creating and
>>>>> deleting the hypervisor-specific data structure when performing an
>>>>> attach/detach.
>>>>> 
>>>>> That being the case, I've been enhancing it to do so. I've got
>>>>> XenServer worked out and submitted. I've got ESX(i) in my sandbox and
>>>>> can submit this if we extend the 4.2 freeze date.
>>>>> 
>>>>> Does that help a bit? :)
>>>>> 
>>>>> 
>>>>> On Mon, Jun 3, 2013 at 1:03 PM, Mike Tutkowski <mike.tutkowski@solidfire.com> wrote:
>>>>> 
>>>>>> Hi John,
>>>>>> 
>>>>>> The storage plug-in - by itself - is hypervisor agnostic.
>>>>>> 
>>>>>> The issue is with the volume-attach logic (in the agent code). The
>>>>>> storage framework calls into the plug-in to have it create a volume as
>>>>>> needed, but when the time comes to attach the volume to a hypervisor,
>>>>>> the attach logic has to be smart enough to recognize that it's being
>>>>>> invoked on zone-wide storage (where the volume has just been created)
>>>>>> and create, say, a storage repository (for XenServer) or a datastore
>>>>>> (for VMware) to make use of the volume that was just created.
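>>>>>> 
>>>>>> For XenServer, that amounts to something like this at attach time (a
>>>>>> sketch only; the helper names are hypothetical):
>>>>>> 
>>>>>>     // Create an SR over the just-created iSCSI volume, find its VDI,
>>>>>>     // and plug the VDI into the VM (abbreviated, illustrative flow).
>>>>>>     SR sr = createIscsiSr(connection, targetIqn, targetAddress);
>>>>>>     VDI vdi = findOrCreateVdi(connection, sr, volumePath);
>>>>>>     attachVdiToVm(connection, vm, vdi);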
>>>>>> 
>>>>>> I've been spending most of my time recently making the attach logic
>>>>>> work in the agent code.
>>>>>> 
>>>>>> Does that clear it up?
>>>>>> 
>>>>>> Thanks!
>>>>>> 
>>>>>> 
>>>>>> On Mon, Jun 3, 2013 at 12:48 PM, John Burwell <jburwell@basho.com> wrote:
>>>>>> 
>>>>>>> Mike,
>>>>>>> 
>>>>>>> Can you explain why the storage driver is hypervisor-specific?
>>>>>>> 
>>>>>>> Thanks,
>>>>>>> -John
>>>>>>> 
>>>>>>> On Jun 3, 2013, at 1:21 PM, Mike Tutkowski <mike.tutkowski@solidfire.com> wrote:
>>>>>>> 
>>>>>>>> Yes, ultimately I would like to support all hypervisors that
>>>>>>>> CloudStack supports. I think I'm just out of time for 4.2 to get KVM
>>>>>>>> in.
>>>>>>>> 
>>>>>>>> Right now this plug-in supports XenServer. Depending on what we do
>>>>>>>> with regards to the 4.2 feature freeze, I have it working for VMware
>>>>>>>> in my sandbox as well.
>>>>>>>> 
>>>>>>>> Also, just to be clear, this is all in regards to Disk Offerings. I
>>>>>>>> plan to support Compute Offerings post-4.2.
>>>>>>>> 
>>>>>>>> 
>>>>>>>> On Mon, Jun 3, 2013 at 11:14 AM, Kelcey Jamison Damage <kelcey@bbits.ca> wrote:
>>>>>>>> 
>>>>>>>>> Is there any plan on supporting KVM in the patch cycle post 4.2?
>>>>>>>>> 
>>>>>>>>> ----- Original Message -----
>>>>>>>>> From: "Mike Tutkowski" <mike.tutkowski@solidfire.com>
>>>>>>>>> To: dev@cloudstack.apache.org
>>>>>>>>> Sent: Monday, June 3, 2013 10:12:32 AM
>>>>>>>>> Subject: Re: [MERGE] disk_io_throttling to MASTER
>>>>>>>>> 
>>>>>>>>> I agree on merging Wei's feature first, then mine.
>>>>>>>>> 
>>>>>>>>> If his feature is for KVM only, then it is a non-issue, as I don't
>>>>>>>>> support KVM in 4.2.
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> On Mon, Jun 3, 2013 at 8:55 AM, Wei ZHOU <ustcweizhou@gmail.com> wrote:
>>>>>>>>> 
>>>>>>>>>> John,
>>>>>>>>>> 
>>>>>>>>>> For the billing: as no one works on billing now, users need to
>>>>>>>>>> calculate the billing by themselves. They can get the
>>>>>>>>>> service_offering and disk_offering of VMs and volumes for the
>>>>>>>>>> calculation. Of course, it would be better to tell the user the
>>>>>>>>>> exact limitation value of an individual volume, and the network
>>>>>>>>>> rate limitation for NICs as well. I can work on it later. Do you
>>>>>>>>>> think it is a part of I/O throttling?
>>>>>>>>>> 
>>>>>>>>>> Sorry, I misunderstood the second question.
>>>>>>>>>> 
>>>>>>>>>> Agree with what you said about the two features.
>>>>>>>>>> 
>>>>>>>>>> -Wei
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> 2013/6/3 John Burwell <jburwell@basho.com>
>>>>>>>>>> 
>>>>>>>>>>> Wei,
>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>> On Jun 3, 2013, at 2:13 AM, Wei ZHOU <ustcweizhou@gmail.com> wrote:
>>>>>>>>>>> 
>>>>>>>>>>>> Hi John, Mike
>>>>>>>>>>>> 
>>>>>>>>>>>> I hope Mike's answer helps you. I am trying to add more.
>>>>>>>>>>>> 
>>>>>>>>>>>> (1) I think billing should depend on I/O statistics rather than
>>>>>>>>>>>> the IOPS limitation. Please review disk_io_stat if you have time.
>>>>>>>>>>>> disk_io_stat can get the I/O statistics, including bytes/IOPS
>>>>>>>>>>>> read/write, for an individual virtual machine.
>>>>>>>>>>> 
>>>>>>>>>>> Going by the AWS model, customers are billed more for volumes with
>>>>>>>>>>> provisioned IOPS, as well as for those operations
>>>>>>>>>>> (http://aws.amazon.com/ebs/).  I would imagine our users would like
>>>>>>>>>>> the option to employ similar cost models.  Could an operator
>>>>>>>>>>> implement such a billing model in the current patch?
>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>> (2) Do you mean an IOPS change at runtime? KVM supports setting an
>>>>>>>>>>>> IOPS/BPS limitation for a running virtual machine through the
>>>>>>>>>>>> command line. However, CloudStack does not support changing the
>>>>>>>>>>>> parameters of a created offering (compute offering or disk
>>>>>>>>>>>> offering).
>>>>>>>>>>> 
>>>>>>>>>>> I meant at the Java interface level.  I apologize for being
>>>>>>>>>>> unclear.  Can we further generalize the allocation algorithms with
>>>>>>>>>>> a set of interfaces that describe the service guarantees provided
>>>>>>>>>>> by a resource?
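>>>>>>>>>>> 
>>>>>>>>>>> For instance (purely illustrative, not an existing CloudStack
>>>>>>>>>>> interface):
>>>>>>>>>>> 
>>>>>>>>>>>     // A resource that promises an IOPS range could expose:
>>>>>>>>>>>     public interface ProvisionedIops {
>>>>>>>>>>>         long getMinIops(); // guaranteed floor
>>>>>>>>>>>         long getMaxIops(); // enforced ceiling
>>>>>>>>>>>     }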
>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>> (3) It is a good question. Maybe it is better to commit Mike's
>>>>>>>>>>>> patch after disk_io_throttling, as Mike needs to consider the
>>>>>>>>>>>> limitation per hypervisor type, I think.
>>>>>>>>>>> 
>>>>>>>>>>> I will expand on my thoughts in a later response to Mike regarding
>>>>>>>>>>> the touch points between these two features.  I think that
>>>>>>>>>>> disk_io_throttling will need to be merged before SolidFire, but I
>>>>>>>>>>> think we need closer coordination between the branches (possibly
>>>>>>>>>>> having solidfire track disk_io_throttling) on this issue.
>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>> - Wei
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>> 2013/6/3 John Burwell <jburwell@basho.com>
>>>>>>>>>>>> 
>>>>>>>>>>>>> Mike,
>>>>>>>>>>>>> 
>>>>>>>>>>>>> The things I want to understand are the following:
>>>>>>>>>>>>> 
>>>>>>>>>>>>> 1) Is there value in capturing IOPS policies in a common data
>>>>>>>>>>>>> model (e.g. for billing/usage purposes, expressing offerings)?
>>>>>>>>>>>>>  2) Should there be a common interface model for reasoning about
>>>>>>>>>>>>> IOPS provisioning at runtime?
>>>>>>>>>>>>>  3) How are conflicting provisioned-IOPS configurations between a
>>>>>>>>>>>>> hypervisor and a storage device reconciled?  In particular, a
>>>>>>>>>>>>> scenario where a user is led to believe in (and billed for) more
>>>>>>>>>>>>> IOPS configured for a VM than the storage device has been
>>>>>>>>>>>>> configured to deliver.  Another scenario could be a consistent
>>>>>>>>>>>>> configuration between a VM and a storage device at creation time,
>>>>>>>>>>>>> but a later modification to the storage device introduces a
>>>>>>>>>>>>> logical inconsistency.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>> -John
>>>>>>>>>>>>> 
>>>>>>>>>>>>> On Jun 2, 2013, at 8:38 PM, Mike Tutkowski <mike.tutkowski@solidfire.com> wrote:
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Hi John,
>>>>>>>>>>>>> 
>>>>>>>>>>>>> I believe Wei's feature deals with controlling the max number of
>>>>>>>>>>>>> IOPS from the hypervisor side.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> My feature is focused on controlling IOPS from the storage system
>>>>>>>>>>>>> side.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> I hope that helps. :)
>>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>>> On Sun, Jun 2, 2013 at 6:35 PM, John Burwell <jburwell@basho.com> wrote:
>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Wei,
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> My opinion is that no features should be merged until all
>>>>>>>>>>>>>> functional issues have been resolved and the work is ready to
>>>>>>>>>>>>>> turn over to test.  Until the total IOps vs. discrete read/write
>>>>>>>>>>>>>> IOps issue is addressed and re-reviewed by Wido, I don't think
>>>>>>>>>>>>>> this criterion has been satisfied.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Also, how does this work intersect/complement the SolidFire patch
>>>>>>>>>>>>>> (https://reviews.apache.org/r/11479/)?  As I understand it, that
>>>>>>>>>>>>>> work also involves provisioned IOPS.  I would like to ensure we
>>>>>>>>>>>>>> don't have a scenario where provisioned IOPS in KVM and SolidFire
>>>>>>>>>>>>>> are unnecessarily incompatible.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>> -John
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> On Jun 1, 2013, at 6:47 AM, Wei ZHOU <ustcweizhou@gmail.com> wrote:
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Wido,
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Sure. I will change it next week.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> -Wei
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 2013/6/1 Wido den Hollander <wido@widodh.nl>
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Hi Wei,
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> On 06/01/2013 08:24 AM, Wei ZHOU wrote:
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Wido,
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Exactly. I have pushed the features into master.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> If anyone objects to them for a technical reason by Monday, I
>>>>>>>>>>>>>> will revert them.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> For the sake of clarity I just want to mention again that we
>>>>>>>>>>>>>> should change the total IOps to R/W IOps asap, so that we never
>>>>>>>>>>>>>> release a version with only total IOps.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> You laid the groundwork for the I/O throttling and that's great!
>>>>>>>>>>>>>> We should however prevent creating legacy from day #1.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Wido
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> -Wei
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 2013/5/31 Wido den Hollander <wido@widodh.nl>
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> On 05/31/2013 03:59 PM, John Burwell wrote:
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Wido,
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> +1 -- this enhancement must discretely support read and write
>>>>>>>>>>>>>> IOPS.  I don't see how it could be fixed later because I don't
>>>>>>>>>>>>>> see how we could correctly split total IOPS into read and write.
>>>>>>>>>>>>>> Therefore, we would be stuck with a total unless/until we decided
>>>>>>>>>>>>>> to break backwards compatibility.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> What Wei meant was merging it into master now so that it will go
>>>>>>>>>>>>>> in the 4.2 branch, and adding Read/Write IOps before the 4.2
>>>>>>>>>>>>>> release so that 4.2 will be released with Read and Write instead
>>>>>>>>>>>>>> of Total IOps.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> This is to make the May 31st feature freeze date. But if the
>>>>>>>>>>>>>> window moves (see other threads) then it won't be necessary to do
>>>>>>>>>>>>>> that.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Wido
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> I also completely agree that there is no association between
>>>>>>>>>>>>>> network and disk I/O.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> -John
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> On May 31, 2013, at 9:51 AM, Wido den Hollander <wido@widodh.nl> wrote:
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Hi Wei,
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> On 05/31/2013 03:13 PM, Wei ZHOU wrote:
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Hi Wido,
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Thanks. Good question.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> I thought about it at the beginning. Finally I decided to ignore
>>>>>>>>>>>>>> the difference between read and write, mainly because the network
>>>>>>>>>>>>>> throttling does not distinguish between sent and received bytes
>>>>>>>>>>>>>> either.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> That reasoning seems odd. Networking and disk I/O are completely
>>>>>>>>>>>>>> different. Disk I/O is much more expensive in most situations
>>>>>>>>>>>>>> than network bandwidth.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Implementing it will be some copy-paste work. It could be
>>>>>>>>>>>>>> implemented in a few days. Given the deadline of feature freeze,
>>>>>>>>>>>>>> I will implement it after that, if needed.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> I think it's a feature we can't miss. But if it goes into the 4.2
>>>>>>>>>>>>>> window, we have to make sure we don't release with only total
>>>>>>>>>>>>>> IOps and then fix it in 4.3; that would confuse users.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Wido
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> -Wei
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 2013/5/31 Wido den Hollander <wido@widodh.nl>
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Hi Wei,
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> On 05/30/2013 06:03 PM, Wei ZHOU wrote:
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> I would like to merge the disk_io_throttling branch into master.
>>>>>>>>>>>>>> If nobody objects, I will merge it into master in 48 hours.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> The purpose is:
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Virtual machines run on the same storage device (local storage or
>>>>>>>>>>>>>> shared storage). Because of the rate limitation of the device
>>>>>>>>>>>>>> (such as IOPS), if one VM has heavy disk activity, it may affect
>>>>>>>>>>>>>> the disk performance of the other VMs running on the same storage
>>>>>>>>>>>>>> device. It is necessary to set a maximum rate and limit the disk
>>>>>>>>>>>>>> I/O of VMs.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Looking at the code I see you make no difference between Read and
>>>>>>>>>>>>>> Write IOps.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Qemu and libvirt support setting a different rate for Read and
>>>>>>>>>>>>>> Write IOps, which could benefit a lot of users.
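>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> For illustration, the discrete settings map onto libvirt's disk
>>>>>>>>>>>>>> <iotune> element, e.g. built as a string in Java (sketch only;
>>>>>>>>>>>>>> the two limits are caller-supplied):
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>     // Discrete read/write IOPS limits in a libvirt disk definition:
>>>>>>>>>>>>>>     static String iotuneXml(long readIopsLimit, long writeIopsLimit) {
>>>>>>>>>>>>>>         return "<iotune>"
>>>>>>>>>>>>>>             + "<read_iops_sec>"  + readIopsLimit  + "</read_iops_sec>"
>>>>>>>>>>>>>>             + "<write_iops_sec>" + writeIopsLimit + "</write_iops_sec>"
>>>>>>>>>>>>>>             + "</iotune>";
>>>>>>>>>>>>>>     }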
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> It's also strange that on the polling side you collect both the
>>>>>>>>>>>>>> Read and Write IOps, but on the throttling side you only go for a
>>>>>>>>>>>>>> global value.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Write IOps are usually much more expensive than Read IOps, so it
>>>>>>>>>>>>>> seems like a valid use-case that an admin would set a lower value
>>>>>>>>>>>>>> for Write IOps vs. Read IOps.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Since this only supports KVM at this point, I think it would be of
>>>>>>>>>>>>>> great value to at least have the mechanism in place to support
>>>>>>>>>>>>>> both; implementing this later would be a lot of work.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> If a hypervisor doesn't support setting different values for read
>>>>>>>>>>>>>> and write, you can always sum both up and set that as the total
>>>>>>>>>>>>>> limit (a sketch of that fallback follows).
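>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> A minimal sketch of that fallback in Java (illustrative, not the
>>>>>>>>>>>>>> actual agent code):
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>     // Collapse discrete read/write limits into one total when the
>>>>>>>>>>>>>>     // hypervisor only accepts a single value (sketch only).
>>>>>>>>>>>>>>     static long totalIopsLimit(long readIopsLimit, long writeIopsLimit) {
>>>>>>>>>>>>>>         return readIopsLimit + writeIopsLimit;
>>>>>>>>>>>>>>     }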
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Can you explain why you implemented it this way?
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Wido
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> The feature includes:
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> (1) set the maximum rate of VMs (in disk_offering, and global
>>>>>>>>>>>>>> configuration)
>>>>>>>>>>>>>> (2) change the maximum rate of VMs
>>>>>>>>>>>>>> (3) limit the disk rate (total bps and iops)
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> JIRA ticket: https://issues.apache.org/jira/browse/CLOUDSTACK-1192
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> FS (I will update later):
>>>>>>>>>>>>>> https://cwiki.apache.org/confluence/display/CLOUDSTACK/VM+Disk+IO+Throttling
>>>>>>>>>>>>>> Merge check list:
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> * Did you check the branch's RAT execution success?
>>>>>>>>>>>>>> Yes
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> * Are there new dependencies introduced?
>>>>>>>>>>>>>> No
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> * What automated testing (unit and integration) is included in
>>>>>>>>>>>>>> the new feature?
>>>>>>>>>>>>>> Unit tests are added.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> * What testing has been done to check for potential regressions?
>>>>>>>>>>>>>> (1) set the bytes rate and IOPS rate in the CloudStack UI
>>>>>>>>>>>>>> (2) VM operations, including deploy, stop, start, reboot,
>>>>>>>>>>>>>> destroy, expunge, migrate, restore
>>>>>>>>>>>>>> (3) Volume operations, including attach, detach
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> To review the code, you can try:
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>     git diff c30057635d04a2396f84c588127d7ebe42e503a7 f2e5591b710d04cc86815044f5823e73a4a58944
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Best regards,
>>>>>>>>>>>>>> Wei
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> [1]
>>>>>>>>>>>>>> https://cwiki.apache.org/confluence/display/CLOUDSTACK/VM+Disk+IO+Throttling
>>>>>>>>>>>>>> [2] refs/heads/disk_io_throttling
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> [3] https://issues.apache.org/jira/browse/CLOUDSTACK-1301
>>>>>>>>>>>>>>     https://issues.apache.org/jira/browse/CLOUDSTACK-2071
>>>>>>>>>>>>>> (CLOUDSTACK-1301 - VM Disk I/O Throttling)
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>>> --
>>>>>>>>>>>>> *Mike Tutkowski*
>>>>>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>>> o: 303.746.7302
>>>>>>>>>>>>> Advancing the way the world uses the
>>>>>>>>>>>>> cloud<http://solidfire.com/solution/overview/?video=play>
>>>>>>>>>>>>> *™*
>>>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> --
>>>>>>>>> *Mike Tutkowski*
>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>>>> e: mike.tutkowski@solidfire.com
>>>>>>>>> o: 303.746.7302
>>>>>>>>> Advancing the way the world uses the
>>>>>>>>> cloud<http://solidfire.com/solution/overview/?video=play>
>>>>>>>>> *™*
>>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>>> --
>>>>>>>> *Mike Tutkowski*
>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>>> e: mike.tutkowski@solidfire.com
>>>>>>>> o: 303.746.7302
>>>>>>>> Advancing the way the world uses the
>>>>>>>> cloud<http://solidfire.com/solution/overview/?video=play>
>>>>>>>> *™*
>>>>>>> 
>>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> --
>>>>>> *Mike Tutkowski*
>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>> e: mike.tutkowski@solidfire.com
>>>>>> o: 303.746.7302
>>>>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>>>>> *™*
>>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> --
>>>>> *Mike Tutkowski*
>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>> e: mike.tutkowski@solidfire.com
>>>>> o: 303.746.7302
>>>>> Advancing the way the world uses the
>>>>> cloud<http://solidfire.com/solution/overview/?video=play>
>>>>> *™*
>>>>> 
>>>> 
>>> 
>>> 
>>> 
>>> --
>>> *Mike Tutkowski*
>>> *Senior CloudStack Developer, SolidFire Inc.*
>>> e: mike.tutkowski@solidfire.com
>>> o: 303.746.7302
>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>> *™*
>>> 
>> 
>> 
>> 
>> --
>> *Mike Tutkowski*
>> *Senior CloudStack Developer, SolidFire Inc.*
>> e: mike.tutkowski@solidfire.com
>> o: 303.746.7302
>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>> *™*
>> 
> 
> 
> 
> -- 
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the
> cloud<http://solidfire.com/solution/overview/?video=play>
> *™*

