cloudstack-dev mailing list archives

From John Burwell <jburw...@basho.com>
Subject Re: [MERGE] disk_io_throttling to MASTER
Date Fri, 14 Jun 2013 21:42:46 GMT
Mike,

Please see my comments in-line below.

Thanks,
-John

On Jun 14, 2013, at 5:37 PM, Mike Tutkowski <mike.tutkowski@solidfire.com> wrote:

> Comments below in red (but I believe we are reaching a consensus here). :)
> 
> 
> On Fri, Jun 14, 2013 at 2:42 PM, John Burwell <jburwell@basho.com> wrote:
> 
>> Mike,
>> 
>> Please see my comments in-line below.
>> 
>> Thanks,
>> -John
>> 
>> On Jun 14, 2013, at 4:31 PM, Mike Tutkowski <mike.tutkowski@solidfire.com>
>> wrote:
>> 
>>> I am OK with that approach, John.
>>> 
>>> So, let me review to make sure I follow you correctly:
>>> 
>>> We introduce two new parameters to the plug-in: Number of Total IOPS for
>>> the SAN and an overcommit ratio. (On a side note, if we are just
>>> multiplying the two numbers, why don't we have the user just input their
>>> product?)
>> 
>> It is a straw man suggestion to allow some fine-tuning of the allocation
>> algorithm. I defer to you to determine whether or not such a tunable would
>> be valuable in this use case.  My thinking was that a tunable supports
>> operators as they more deeply understand/observe the CS workload while
>> allowing them to think about IOPS in absolute terms.  For example, if I
>> start out at, say, 50% overcommit and realize that the load is actually
>> heavier than expected, I can reduce it, or vice versa.
>> 
> 
> Using your example of support people changing the value after seeing how
> the system is performing:
> 
> If we had them just pass in a new Total IOPS value, that should cover it.

That is a fine answer to me.  I expect that we will be refining this capability over the next few releases.  Therefore, I am cool with starting as simple as possible, as it will ease/simplify future evolution.
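
To make the simple version concrete: since the SAN total and the overcommit ratio only ever appear as a product, they collapse into a single number, which is why a lone Total IOPS value covers the tuning case.  A minimal sketch (illustrative only, not actual CloudStack code):

    // Sketch: the SAN total and the overcommit ratio collapse into one
    // effective capacity, so updating a single Total IOPS value is enough.
    long effectiveCapacityIops(long sanTotalIops, double overcommitRatio) {
        return (long) (sanTotalIops * overcommitRatio); // e.g. 200,000 * 1.5 = 300,000
    }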

> 
> 
>> 
>>> 
>>> Are both of these new parameters to the create storage pool API command
>> or
>>> are they passed into the create storage pool API command through its url
>>> parameter?
>>> 
>>> If they are new parameters, we should make two new columns in the
>>> storage_pool table.
>>> 
>>> If they are passed in via the url parameter, they should go in the
>>> storage_pool_details table.
>> 
>> I think these allocation parameters are implementation agnostic, and should be
>> columns on the storage_pool table and first class properties on the
>> StoragePool class.
>> 
> 
> Per my most recent e-mail, I agree.
> 
> 
>> 
>>> 
>>> For 4.2, if someone wants to change these values, they must update the DB
>>> manually.
>> 
>> Is it not possible to support updates through the updateStoragePool API
>> call?
>> 
> 
> Edison - if you're reading these - do you know if the plug-ins are informed
> when the updateStoragePool API command is issued?

The way I am thinking about these thresholds is that they are part of the storage engine itself.  Why would the plugin need to know that the value changed?  When the plugin needs the value, it will be retrieved from the database ...
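
In other words, something along these lines (a sketch assuming plain JDBC and the proposed capacity_iops column; not the actual DAO API):

    // Sketch: the engine reads the threshold at the moment it needs it,
    // so an updateStoragePool call requires no callback into the plugin.
    long readCapacityIops(java.sql.Connection conn, long poolId) throws java.sql.SQLException {
        try (java.sql.PreparedStatement ps = conn.prepareStatement(
                "SELECT capacity_iops FROM storage_pool WHERE id = ?")) {
            ps.setLong(1, poolId);
            try (java.sql.ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getLong(1) : 0L;
            }
        }
    }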

> 
> 
>> 
>>> 
>>> Every time a volume is created or deleted for the SolidFire plug-in, the
>>> Current IOPS value (sum of all volumes' Min IOPS that are associated with
>>> the plug-in) is updated.
>> 
>> I don't think we need another column for this computed value.  It will be
>> simpler to ask the database to sum the values as needed.
>> 
> 
> I'm thinking now that we should follow the pattern established by the
> "size" field. There are two related size fields for a storage pool:
> capacity_bytes and available_bytes.
> 
> We should just follow this pattern and create two new fields: capacity_iops
> and available_iops. The plug-in can increment or decrement available_iops
> each time a volume is created or deleted, respectively.

I would rather see us do away with available_bytes.  My experience with storing computed fields is that they often get out of sync and represent a premature optimization.  I missed that part of the throttled I/O patch ...
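
Concretely, I would rather see the committed figure derived on demand, along these lines (a sketch; the volumes/min_iops/pool_id names follow the discussion above and are not final):

    // Sketch: sum the committed min IOPS when asked, instead of maintaining
    // an available_iops counter that can drift out of sync.
    long committedIops(java.sql.Connection conn, long poolId) throws java.sql.SQLException {
        String sql = "SELECT COALESCE(SUM(min_iops), 0) FROM volumes WHERE pool_id = ?";
        try (java.sql.PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, poolId);
            try (java.sql.ResultSet rs = ps.executeQuery()) {
                rs.next();
                return rs.getLong(1);
            }
        }
    }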

> 
> 
>> 
>>> 
>>> The allocator can use these fields to determine if it can fit in a new
>>> volume.
>> 
>> Exactly.
>> 
>>> 
>>> Does it look like my understanding is OK?
>> 
>> Yes, looks good to me.
>> 
>>> 
>>> 
>>> On Fri, Jun 14, 2013 at 2:14 PM, John Burwell <jburwell@basho.com>
>> wrote:
>>> 
>>>> Mike,
>>>> 
>>>> I apologize for not being clear -- this conversation has been admittedly
>>>> disjoint.  I think we should allow the maximum IOPS and overcommitment
>>>> values to be updated, though I would recommend restricting updates to be
>>>> increasing values for 4.2 (e.g. users can increase the number of total
>>>> IOPS from 200,000 to 250,000, but not decrease from 250,000 to 200,000).
>>>> While not ideal, given the amount of time we have left for 4.2, it will
>>>> cover most cases, and we can address the implications of reducing
>> resource
>>>> capacity in 4.3.  This approach addresses both of your concerns.
>> First, it
>>>> allows the administrator/operator to determine what portion of the
>> device
>>>> they wish to dedicate.  For example, if the device has a total capacity
>> of
>>>> 200,000 IOPS, and they only want CS to use 25% of the device then they
>> set
>>>> the maximum total IOPS to 50,000.  Second, as they grow capacity, they
>> can
>>>> update the DataStore to increase the number of IOPS they want to
>> dedicate
>>>> to CS' use.  I would imagine expansion of capacity happens infrequently
>>>> enough that increasing the maximum IOPS value would not be a significant
>>>> burden.
>>>> 
>>>> Thanks,
>>>> -John
>>>> 
>>>> On Jun 14, 2013, at 4:06 PM, Mike Tutkowski <
>> mike.tutkowski@solidfire.com>
>>>> wrote:
>>>> 
>>>>> "the administrator/operator simply needs to tell us the total number of
>>>>> IOPS that can be committed to it and an overcommitment factor."
>>>>> 
>>>>> Are you thinking when we create a plug-in as primary storage that we
>> say
>>>> -
>>>>> up front - how many IOPS the SAN can handle?
>>>>> 
>>>>> That is not a good move, in my opinion. Our SAN is designed to start
>>>> small
>>>>> and grow to PBs. As the need arises for more storage, the admin
>> purchases
>>>>> additional storage nodes that join the cluster and the performance and
>>>>> capacity go up.
>>>>> 
>>>>> We need to know how many IOPS total the SAN can handle and what it is
>>>>> committed to currently (the sum of all volumes' min IOPS).
>>>>> 
>>>>> We also cannot assume the SAN is dedicated to CS.
>>>>> 
>>>>> 
>>>>> On Fri, Jun 14, 2013 at 1:59 PM, John Burwell <jburwell@basho.com>
>>>> wrote:
>>>>> 
>>>>>> Simon,
>>>>>> 
>>>>>> Yes, it is CloudStack's job to protect, as best it can, against
>>>>>> oversubscribing resources.  I would argue that resource management is one
>>>>>> of, if not the most, important functions of the system.  It is no
>> different
>>>>>> than the allocation/planning performed for hosts relative to cores and
>>>>>> memory.  We can still oversubscribe resources, but we have rails +
>> knobs
>>>>>> and dials to avoid it.  Without these controls in place, we could
>> easily
>>>>>> allow users to deploy workloads that overrun resources harming all
>>>> tenants.
>>>>>> 
>>>>>> I also think that we are overthinking this issue for provisioned
>> IOPS.
>>>>>> When the DataStore is configured, the administrator/operator simply
>>>> needs
>>>>>> to tell us the total number of IOPS that can be committed to it and an
>>>>>> overcommitment factor.  As we allocate volumes to that DataStore, we
>>>> sum up
>>>>>> the committed IOPS of the existing Volumes attached to the DataStore,
>>>> apply
>>>>>> the overcommitment factor, and determine whether or not the requested
>>>>>> minimum IOPS for the new volume can be fulfilled.  We can provide both
>>>>>> general and vendor specific documentation for determining these values
>>>> --
>>>>>> be they to consume the entire device or a portion of it.
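>>>>>> 
>>>>>> For illustration, the whole check reduces to something like this (a
>>>>>> sketch with made-up names, not the actual allocator code):
>>>>>> 
>>>>>>     // Sketch: admit the volume only if its guaranteed minimum fits
>>>>>>     // within the configured total scaled by the overcommit factor.
>>>>>>     boolean canAllocate(long requestedMinIops, long committedIops,
>>>>>>             long configuredTotalIops, double overcommitFactor) {
>>>>>>         long effective = (long) (configuredTotalIops * overcommitFactor);
>>>>>>         return committedIops + requestedMinIops <= effective;
>>>>>>     }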
>>>>>> 
>>>>>> Querying the device is unnecessary and deceptive.  CloudStack resource
>>>>>> management is not interested in the current state of the device which
>>>> could
>>>>>> be anywhere from extremely heavy to extremely light at any given time.
>>>> We
>>>>>> are interested in the worst case load that is anticipated for a
>>>>>> resource.  In
>>>>>> my view, it is up to administrators/operators to instrument their
>>>>>> environment to understand usage patterns and capacity.  We should
>>>> provide
>>>>>> information that will help determine what should be
>>>> instrumented/monitored,
>>>>>> but that function should be performed outside of CloudStack.
>>>>>> 
>>>>>> Thanks,
>>>>>> -John
>>>>>> 
>>>>>> On Jun 14, 2013, at 2:20 PM, Simon Weller <sweller@ena.com> wrote:
>>>>>> 
>>>>>>> I'd like to comment on this briefly.
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> I think an assumption is being made that the SAN is being dedicated
>> to
>>>> a
>>>>>> CS instance.
>>>>>>> 
>>>>>>> My personal opinion is that this whole IOPS calculation is getting rather
>>>>>> complicated, and could probably be much simpler than this. Over
>>>>>> subscription is a fact of life on virtually all storage, and is really
>>>> no
>>>>>> different in concept than multiple virt instances on a single piece of
>>>>>> hardware. All decent SANs offer many management options for the
>> storage
>>>>>> engineers to keep track of IOPS utilization, and plan for spindle
>>>>>> augmentation as required.
>>>>>>> Is it really the job of CS to become yet another management layer on
>>>> top
>>>>>> of this?
>>>>>>> 
>>>>>>> ----- Original Message -----
>>>>>>> 
>>>>>>> From: "Mike Tutkowski" <mike.tutkowski@solidfire.com>
>>>>>>> To: dev@cloudstack.apache.org
>>>>>>> Cc: "John Burwell" <jburwell@basho.com>, "Wei Zhou" <
>>>>>> ustcweizhou@gmail.com>
>>>>>>> Sent: Friday, June 14, 2013 1:00:26 PM
>>>>>>> Subject: Re: [MERGE] disk_io_throttling to MASTER
>>>>>>> 
>>>>>>> 1) We want number of IOPS currently supported by the SAN.
>>>>>>> 
>>>>>>> 2) We want the number of IOPS that are committed (sum of min IOPS for
>>>>>> each
>>>>>>> volume).
>>>>>>> 
>>>>>>> We could do the following to keep track of IOPS:
>>>>>>> 
>>>>>>> The plug-in could have a timer thread that goes off every, say, 1
>>>> minute.
>>>>>>> 
>>>>>>> It could query the SAN for the number of nodes that make up the SAN
>> and
>>>>>>> multiply this by 50,000. This is essentially the number of supported
>>>> IOPS
>>>>>>> of the SAN.
>>>>>>> 
>>>>>>> The next API call could be to get all of the volumes on the SAN.
>>>> Iterate
>>>>>>> through them all and add up their min IOPS values. This is the number
>>>> of
>>>>>>> IOPS the SAN is committed to.
>>>>>>> 
>>>>>>> These two numbers can then be updated in the storage_pool table (a
>>>> column
>>>>>>> for each value).
>>>>>>> 
>>>>>>> The allocators can get these values as needed (and they would be as
>>>>>>> accurate as the last time the thread asked the SAN for this info).
>>>>>>> 
>>>>>>> These two fields, the min IOPS of the volume to create, and the
>>>>>> overcommit
>>>>>>> ratio of the plug-in would tell the allocator if it can select the
>>>> given
>>>>>>> storage pool.
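>>>>>>> 
>>>>>>> As a sketch (assuming java.util.concurrent; the SAN calls and DAO
>>>>>>> method are stand-ins, not the real SolidFire or CloudStack names):
>>>>>>> 
>>>>>>>     // Sketch of the 1-minute refresh thread described above.
>>>>>>>     ScheduledExecutorService exec = Executors.newSingleThreadScheduledExecutor();
>>>>>>>     exec.scheduleAtFixedRate(new Runnable() {
>>>>>>>         public void run() {
>>>>>>>             long supportedIops = san.getNodeCount() * 50000L; // per-node rating
>>>>>>>             long committedIops = 0;
>>>>>>>             for (SanVolume v : san.listVolumes()) {
>>>>>>>                 committedIops += v.getMinIops();              // sum of min IOPS
>>>>>>>             }
>>>>>>>             storagePoolDao.updateIopsColumns(poolId, supportedIops, committedIops);
>>>>>>>         }
>>>>>>>     }, 0, 1, TimeUnit.MINUTES);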
>>>>>>> 
>>>>>>> What do you think?
>>>>>>> 
>>>>>>> 
>>>>>>> On Fri, Jun 14, 2013 at 11:45 AM, Mike Tutkowski <
>>>>>>> mike.tutkowski@solidfire.com> wrote:
>>>>>>> 
>>>>>>>> "As I mentioned previously, I am very reluctant for any feature to
>>>> come
>>>>>>>> into master that can exhaust resources."
>>>>>>>> 
>>>>>>>> Just wanted to mention that, worst case, the SAN would fail creation
>>>> of
>>>>>>>> the volume before allowing a new volume to break the system.
>>>>>>>> 
>>>>>>>> 
>>>>>>>> On Fri, Jun 14, 2013 at 11:35 AM, Mike Tutkowski <
>>>>>>>> mike.tutkowski@solidfire.com> wrote:
>>>>>>>> 
>>>>>>>>> Hi John,
>>>>>>>>> 
>>>>>>>>> Are you thinking we add a column on to the storage pool table,
>>>>>>>>> IOPS_Count, where we add and subtract committed IOPS?
>>>>>>>>> 
>>>>>>>>> That is easy enough.
>>>>>>>>> 
>>>>>>>>> How do you want to determine what the SAN is capable of supporting
>>>> IOPS
>>>>>>>>> wise? Remember we're dealing with a dynamic SAN here...as you add
>>>>>> storage
>>>>>>>>> nodes to the cluster, the number of IOPS increases. Do we have a
>>>>>> thread we
>>>>>>>>> can use to query external devices like this SAN to update the
>>>> supported
>>>>>>>>> number of IOPS?
>>>>>>>>> 
>>>>>>>>> Also, how do you want to enforce the IOPS limit? Do we pass in an
>>>>>>>>> overcommit ratio to the plug-in when it's created? We would need
>> to
>>>>>> store
>>>>>>>>> this in the storage_pool table, as well, I believe.
>>>>>>>>> 
>>>>>>>>> We should also get Wei involved in this as his feature will need
>>>>>> similar
>>>>>>>>> functionality.
>>>>>>>>> 
>>>>>>>>> Also, we should do this FAST as we have only two weeks left and
>> many
>>>> of
>>>>>>>>> us will be out for several days for the CS Collab Conference.
>>>>>>>>> 
>>>>>>>>> Thanks
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> On Fri, Jun 14, 2013 at 10:46 AM, John Burwell <jburwell@basho.com
>>>>>>> wrote:
>>>>>>>>> 
>>>>>>>>>> Mike,
>>>>>>>>>> 
>>>>>>>>>> Querying the SAN only indicates the number of IOPS currently in
>> use.
>>>>>>>>>> The allocator needs to check the number of IOPS committed which is
>>>>>> tracked
>>>>>>>>>> by CloudStack. For 4.2, we should be able to query the number of
>>>> IOPS
>>>>>>>>>> committed to a DataStore, and determine whether or not the number
>>>>>> requested
>>>>>>>>>> can be fulfilled by that device. It seems to me that a
>>>>>>>>>> DataStore#getCommittedIOPS() : Long method would be sufficient.
>>>>>>>>>> DataStores that don't support provisioned IOPS would return null.
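>>>>>>>>>> 
>>>>>>>>>> Sketched out (only the getCommittedIOPS() signature is the actual
>>>>>>>>>> proposal; the surrounding interface is abbreviated):
>>>>>>>>>> 
>>>>>>>>>>     public interface DataStore {
>>>>>>>>>>         /** Sum of the min IOPS committed to volumes on this store,
>>>>>>>>>>          *  or null if provisioned IOPS is not supported. */
>>>>>>>>>>         Long getCommittedIOPS();
>>>>>>>>>>     }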
>>>>>>>>>> 
>>>>>>>>>> As I mentioned previously, I am very reluctant for any feature to
>>>> come
>>>>>>>>>> into master that can exhaust resources.
>>>>>>>>>> 
>>>>>>>>>> Thanks,
>>>>>>>>>> -John
>>>>>>>>>> 
>>>>>>>>>> On Jun 13, 2013, at 9:27 PM, Mike Tutkowski <
>>>>>>>>>> mike.tutkowski@solidfire.com> wrote:
>>>>>>>>>> 
>>>>>>>>>>> Yeah, I'm not sure I could come up with anything near an accurate
>>>>>>>>>>> assessment of how many IOPS are currently available on the SAN
>> (or
>>>>>>>>>> even a
>>>>>>>>>>> total number that are available for volumes). Not sure if there's
>>>> yet
>>>>>>>>>> an
>>>>>>>>>>> API call for that.
>>>>>>>>>>> 
>>>>>>>>>>> If I did know this number (total number of IOPS supported by the
>>>>>> SAN),
>>>>>>>>>> we'd
>>>>>>>>>>> still have to keep track of the total number of volumes we've
>>>> created
>>>>>>>>>> from
>>>>>>>>>>> CS on the SAN in terms of their IOPS. Also, if an admin issues an
>>>> API
>>>>>>>>>> call
>>>>>>>>>>> directly to the SAN to tweak the number of IOPS on a given volume
>>>> or
>>>>>>>>>> set of
>>>>>>>>>>> volumes (not supported from CS, but supported via the SolidFire
>>>> API),
>>>>>>>>>> our
>>>>>>>>>>> numbers in CS would be off.
>>>>>>>>>>> 
>>>>>>>>>>> I'm thinking verifying a sufficient number of IOPS is a really good
>>>>>> idea
>>>>>>>>>> for
>>>>>>>>>>> a future release.
>>>>>>>>>>> 
>>>>>>>>>>> For 4.2 I think we should stick to having the allocator detect if
>>>>>>>>>> storage
>>>>>>>>>>> QoS is desired and if the storage pool in question supports that
>>>>>>>>>> feature.
>>>>>>>>>>> 
>>>>>>>>>>> If you really are over provisioned on your SAN in terms of IOPS
>> or
>>>>>>>>>>> capacity, the SAN can let the admin know in several different
>> ways
>>>>>>>>>> (e-mail,
>>>>>>>>>>> SNMP, GUI).
>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>> On Thu, Jun 13, 2013 at 7:02 PM, John Burwell <
>> jburwell@basho.com>
>>>>>>>>>> wrote:
>>>>>>>>>>> 
>>>>>>>>>>>> Mike,
>>>>>>>>>>>> 
>>>>>>>>>>>> Please see my comments in-line below.
>>>>>>>>>>>> 
>>>>>>>>>>>> Thanks,
>>>>>>>>>>>> -John
>>>>>>>>>>>> 
>>>>>>>>>>>> On Jun 13, 2013, at 6:09 PM, Mike Tutkowski <
>>>>>>>>>> mike.tutkowski@solidfire.com>
>>>>>>>>>>>> wrote:
>>>>>>>>>>>> 
>>>>>>>>>>>>> Comments below in red.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Thanks
>>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>>> On Thu, Jun 13, 2013 at 3:58 PM, John Burwell <
>>>> jburwell@basho.com>
>>>>>>>>>>>> wrote:
>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Mike,
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Overall, I agree with the steps below for 4.2. However, we may
>>>> may
>>>>>>>>>> want
>>>>>>>>>>>> to
>>>>>>>>>>>>>> throw an exception if we can not fulfill a requested QoS. If
>> the
>>>>>>>>>> user
>>>>>>>>>>>> is
>>>>>>>>>>>>>> expecting that the hypervisor will provide a particular QoS,
>> and
>>>>>>>>>> that is
>>>>>>>>>>>>>> not possible, it seems like we should inform them rather than
>>>>>>>>>> silently
>>>>>>>>>>>>>> ignoring their request.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Sure, that sounds reasonable.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> We'd have to come up with some way for the allocators to know
>>>> about
>>>>>>>>>> the
>>>>>>>>>>>>> requested storage QoS and then query the candidate drivers.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Any thoughts on how we might do that?
>>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> To collect my thoughts from previous parts of the thread, I am
>>>>>>>>>>>>>> uncomfortable with the idea that the management server can
>>>>>>>>>> overcommit a
>>>>>>>>>>>>>> resource. You had mentioned querying the device for available
>>>>>> IOPS.
>>>>>>>>>>>> While
>>>>>>>>>>>>>> that would be nice, it seems like we could fall back to a max
>>>> IOPS
>>>>>>>>>> and
>>>>>>>>>>>>>> overcommit factor manually calculated and entered by the
>>>>>>>>>>>>>> administrator/operator. I think such threshold and allocation
>>>>>> rails
>>>>>>>>>>>> should
>>>>>>>>>>>>>> be added for both provisioned IOPS and throttled I/O -- it is
>> a
>>>>>>>>>> basic
>>>>>>>>>>>>>> feature of any cloud orchestration platform.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Are you thinking this ability would make it into 4.2? Just
>>>> curious
>>>>>>>>>> what
>>>>>>>>>>>>> release we're talking about here. For the SolidFire SAN, you
>>>> might
>>>>>>>>>> have,
>>>>>>>>>>>>> say, four separate storage nodes to start (200,000 IOPS) and
>> then
>>>>>>>>>> later
>>>>>>>>>>>> add
>>>>>>>>>>>>> a new node (now you're at 250,000 IOPS). CS would have to have
>> a
>>>>>> way
>>>>>>>>>> to
>>>>>>>>>>>>> know that the number of supported IOPS has increased.
>>>>>>>>>>>> 
>>>>>>>>>>>> Yes, I think we need some *basic*/conservative rails in 4.2. For
>>>>>>>>>> example,
>>>>>>>>>>>> we may only support expanding capacity in 4.2, and not handle
>> any
>>>>>>>>>>>> re-balance scenarios -- node failure, addition, etc.
>> Extrapolating
>>>>>>>>>> a
>>>>>>>>>>>> bit, the throttled I/O enhancement seems like it needs a similar
>>>> set
>>>>>>>>>> of
>>>>>>>>>>>> rails defined per host.
>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> For 4.3, I don't like the idea that a QoS would be expressed
>> in
>>>> a
>>>>>>>>>>>>>> implementation specific manner. I think we need to arrive at a
>>>>>>>>>> general
>>>>>>>>>>>>>> model that can be exploited by the allocators and planners. We
>>>>>>>>>> should
>>>>>>>>>>>>>> restrict implementation specific key-value pairs (call them
>>>>>> details,
>>>>>>>>>>>>>> extended data, whatever) to information that is unique to the
>>>>>>>>>> driver and
>>>>>>>>>>>>>> would provide no useful information to the management server's
>>>>>>>>>>>>>> orchestration functions. A resource QoS does not seem to fit
>>>> those
>>>>>>>>>>>>>> criteria.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>>> I wonder if this would be a good discussion topic for Sunday's
>> CS
>>>>>>>>>> Collab
>>>>>>>>>>>>> Conf hack day that Joe just sent out an e-mail about?
>>>>>>>>>>>> 
>>>>>>>>>>>> Yes, it would -- I will put something in the wiki topic. It will
>>>>>>>>>> also be
>>>>>>>>>>>> part of my talk on Monday -- How to Run from a Zombie -- which
>>>>>>>>>>>> includes some of my opinions on the topic.
>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>> -John
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> On Jun 13, 2013, at 5:44 PM, Mike Tutkowski <
>>>>>>>>>>>> mike.tutkowski@solidfire.com>
>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> So, here's my suggestion for 4.2:
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> Accept the values as they are currently required (four new
>>>> fields
>>>>>>>>>> for
>>>>>>>>>>>>>> Wei's
>>>>>>>>>>>>>>> feature or two new fields for mine).
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> The Add Disk Offering dialog needs three new radio buttons:
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> 1) No QoS
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> 2) Hypervisor QoS
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> 3) Storage QoS
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> The admin needs to specify storage tags that only map to
>>>> storage
>>>>>>>>>> that
>>>>>>>>>>>>>>> supports Storage QoS.
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> The admin needs to be aware that, for Hypervisor QoS, unless
>> all
>>>>>>>>>>>>>> hypervisors
>>>>>>>>>>>>>>> in use support the new fields, they may not be enforced.
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> Post 4.3:
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> Come up with a way to more generally enter these parameters
>>>>>>>>>> (probably
>>>>>>>>>>>>>> just
>>>>>>>>>>>>>>> key/value pairs sent to the drivers).
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> Have the drivers expose their feature set so the allocators
>> can
>>>>>>>>>>>> consider
>>>>>>>>>>>>>>> them more fully and throw an exception if there is not a
>>>>>> sufficient
>>>>>>>>>>>>>> match.
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> On Thu, Jun 13, 2013 at 3:31 PM, Mike Tutkowski <
>>>>>>>>>>>>>>> mike.tutkowski@solidfire.com> wrote:
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> My thinking is, for 4.2, while not ideal, we will need to
>> put
>>>>>> some
>>>>>>>>>>>>>> burden
>>>>>>>>>>>>>>>> on the admin to configure a Disk Offering in a way that
>> makes
>>>>>>>>>> sense.
>>>>>>>>>>>> For
>>>>>>>>>>>>>>>> example, if he wants to use storage QoS with supported Min
>> and
>>>>>> Max
>>>>>>>>>>>>>> values,
>>>>>>>>>>>>>>>> he'll have to put in a storage tag that references the
>>>> SolidFire
>>>>>>>>>>>> primary
>>>>>>>>>>>>>>>> storage (plug-in). If he puts in a storage tag that doesn't,
>>>>>> then
>>>>>>>>>> he's
>>>>>>>>>>>>>> not
>>>>>>>>>>>>>>>> going to get the Min and Max feature. We could add help text
>>>> to
>>>>>>>>>> the
>>>>>>>>>>>>>> pop-up
>>>>>>>>>>>>>>>> dialog that's displayed when you click in the Min and Max
>> text
>>>>>>>>>> fields.
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> Same idea for Wei's feature.
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> Not ideal, true...perhaps we can brainstorm on a more
>>>>>> comprehensive
>>>>>>>>>>>>>>>> approach for post 4.2.
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> Maybe in the future we could have the drivers advertise
>> their
>>>>>>>>>>>>>> capabilities
>>>>>>>>>>>>>>>> and if the allocator feels a request is not being satisfied
>>>> (say
>>>>>>>>>> Min
>>>>>>>>>>>> was
>>>>>>>>>>>>>>>> entered, but it not's supported by any storage plug-in) it
>> can
>>>>>>>>>> throw
>>>>>>>>>>>> an
>>>>>>>>>>>>>>>> exception.
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> On Thu, Jun 13, 2013 at 3:19 PM, Mike Tutkowski <
>>>>>>>>>>>>>>>> mike.tutkowski@solidfire.com> wrote:
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> Comments below in red.
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> Thanks
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> On Thu, Jun 13, 2013 at 2:54 PM, John Burwell <
>>>>>>>>>> jburwell@basho.com>
>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>> Mike,
>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>> Please see my comment in-line below.
>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>>>>>> -John
>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>> On Jun 13, 2013, at 1:22 AM, Mike Tutkowski <
>>>>>>>>>>>>>>>>>> mike.tutkowski@solidfire.com> wrote:
>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>> Hi John,
>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>> I've put comments below in red.
>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>> Thanks!
>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>> On Wed, Jun 12, 2013 at 10:51 PM, John Burwell <
>>>>>>>>>> jburwell@basho.com
>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>> Mike,
>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>> First and foremost, we must ensure that these two
>> features
>>>>>> are
>>>>>>>>>>>>>>>>>> mutually
>>>>>>>>>>>>>>>>>>>> exclusive in 4.2. We don't want to find a configuration
>>>> that
>>>>>>>>>>>>>>>>>> contains both
>>>>>>>>>>>>>>>>>>>> hypervisor and storage IOPS guarantees that leads to
>>>>>>>>>>>>>> non-deterministic
>>>>>>>>>>>>>>>>>>>> operations.
>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>> Agreed
>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>> Restricting QoS expression to be either hypervisor or
>>>>>> storage
>>>>>>>>>>>>>> oriented
>>>>>>>>>>>>>>>>>>>> solves the problem in the short term. As I understand
>> storage
>>>>>>>>>> tags,
>>>>>>>>>>>> we
>>>>>>>>>>>>>>>>>> have no
>>>>>>>>>>>>>>>>>>>> means of expressing this type of mutual exclusion. I
>>>> wasn't
>>>>>>>>>>>>>>>>>> necessarily
>>>>>>>>>>>>>>>>>>>> intending that we implement this allocation model in
>> 4.3,
>>>>>> but
>>>>>>>>>>>>>> instead,
>>>>>>>>>>>>>>>>>>>> determine if this type model would be one we would want
>> to
>>>>>>>>>> support
>>>>>>>>>>>>>> in
>>>>>>>>>>>>>>>>>> the
>>>>>>>>>>>>>>>>>>>> future. If so, I would encourage us to ensure that the
>>>> data
>>>>>>>>>> model
>>>>>>>>>>>>>> and
>>>>>>>>>>>>>>>>>>>> current implementation would not preclude evolution in
>>>> that
>>>>>>>>>>>>>>>>>> direction. My
>>>>>>>>>>>>>>>>>>>> view is that this type of allocation model is what
>> user's
>>>>>>>>>> expect
>>>>>>>>>>>> of
>>>>>>>>>>>>>>>>>> "cloud"
>>>>>>>>>>>>>>>>>>>> systems -- selecting the best available resource set to
>>>>>>>>>> fulfill a
>>>>>>>>>>>>>>>>>> set of
>>>>>>>>>>>>>>>>>>>> system requirements.
>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>> I believe we have met your requirement here in that what
>>>>>> we've
>>>>>>>>>>>>>>>>>> implemented
>>>>>>>>>>>>>>>>>>> should not make refinement difficult in the future. If we
>>>>>> don't
>>>>>>>>>>>>>> modify
>>>>>>>>>>>>>>>>>>> allocators for 4.2, but we do for 4.3, we've made
>>>> relatively
>>>>>>>>>> simple
>>>>>>>>>>>>>>>>>> changes
>>>>>>>>>>>>>>>>>>> to enhance the current functioning of the system.
>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>> Looking through both patches, I have to say that the
>>>>>> aggregated
>>>>>>>>>>>> result
>>>>>>>>>>>>>>>>>> seems a bit confusing. There are six new attributes for
>>>>>>>>>> throttled
>>>>>>>>>>>>>> I/O and
>>>>>>>>>>>>>>>>>> two for provisioned IOPS with no obvious grouping. My
>>>> concern
>>>>>>>>>> is
>>>>>>>>>>>> not
>>>>>>>>>>>>>>>>>> technical, but rather, about maintainability. At minimum,
>>>>>>>>>> Javadoc
>>>>>>>>>>>>>> should
>>>>>>>>>>>>>>>>>> be added explaining the two sets of attributes and their
>>>>>> mutual
>>>>>>>>>>>>>> exclusion.
>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> I agree: We need JavaDoc to explain them and their mutual
>>>>>>>>>> exclusion.
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>> The other part that is interesting is that throttled I/O
>>>>>>>>>> provides
>>>>>>>>>>>> both
>>>>>>>>>>>>>>>>>> an IOPS and byte measured QoS as a rate where provisioned
>>>> IOPS
>>>>>>>>>> uses
>>>>>>>>>>>> a
>>>>>>>>>>>>>>>>>> range. In order to select the best available resource to
>>>>>>>>>> fulfill a
>>>>>>>>>>>>>> QoS, we
>>>>>>>>>>>>>>>>>> would need to have the QoS expression normalized in terms
>> of
>>>>>>>>>> units
>>>>>>>>>>>>>> (IOPS or
>>>>>>>>>>>>>>>>>> bytes) and their expression (rate vs. range). If we want
>> to
>>>>>>>>>>>> achieve a
>>>>>>>>>>>>>>>>>> model like I described, I think we would need to resolve
>>>> this
>>>>>>>>>> issue
>>>>>>>>>>>>>> in 4.2
>>>>>>>>>>>>>>>>>> to ensure a viable migration path.
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> I think we're not likely to be able to normalize the input
>>>> for
>>>>>>>>>> 4.2.
>>>>>>>>>>>>>> Plus
>>>>>>>>>>>>>>>>> people probably want to input the data in terms they're
>>>>>> familiar
>>>>>>>>>> with
>>>>>>>>>>>>>> for
>>>>>>>>>>>>>>>>> the products in question.
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> Ideally we would fix the way we do storage tagging and let
>>>> the
>>>>>>>>>> user
>>>>>>>>>>>>>> send
>>>>>>>>>>>>>>>>> key/value pairs to each vendor that could be selected due
>> to
>>>> a
>>>>>>>>>> given
>>>>>>>>>>>>>>>>> storage tag. I'm still not sure that would solve it because
>>>>>> what
>>>>>>>>>>>>>> happens if
>>>>>>>>>>>>>>>>> you change the storage tag of a given Primary Storage after
>>>>>>>>>> having
>>>>>>>>>>>>>> created
>>>>>>>>>>>>>>>>> a Disk Offering?
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> Basically storage tagging is kind of a mess and we should
>>>>>>>>>> re-think
>>>>>>>>>>>> it.
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> Also, we need to have a way for the drivers to expose their
>>>>>>>>>> supported
>>>>>>>>>>>>>>>>> feature sets so the allocators can make good choices.
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>> As I think through the implications of these
>> requirements
>>>>>> and
>>>>>>>>>>>>>> reflect
>>>>>>>>>>>>>>>>>> on
>>>>>>>>>>>>>>>>>>>> the reviews, I don't understand why they haven't already
>>>>>>>>>> impacted
>>>>>>>>>>>>>> the
>>>>>>>>>>>>>>>>>>>> allocators and planners. As it stands, the current
>>>>>>>>>> provisioned
>>>>>>>>>>>> IOPS
>>>>>>>>>>>>>>>>>> has no
>>>>>>>>>>>>>>>>>>>> checks to ensure that the volumes are allocated to
>> devices
>>>>>>>>>> that
>>>>>>>>>>>> have
>>>>>>>>>>>>>>>>>>>> capacity to fulfill the requested QoS. Therefore, as I
>>>>>>>>>> understand
>>>>>>>>>>>>>> the
>>>>>>>>>>>>>>>>>>>> current patch, we can overcommit storage resources --
>>>>>>>>>> potentially
>>>>>>>>>>>>>>>>>> causing
>>>>>>>>>>>>>>>>>>>> none of the QoS obligations from being fulfilled. It
>> seems
>>>>>>>>>> to me
>>>>>>>>>>>>>>>>>> that a
>>>>>>>>>>>>>>>>>>>> DataStore supporting provisioned IOPS should express the
>>>>>>>>>> maximum
>>>>>>>>>>>>>> IOPS
>>>>>>>>>>>>>>>>>> which
>>>>>>>>>>>>>>>>>>>> it can support and some type of overcommitment factor.
>>>> This
>>>>>>>>>>>>>>>>>> information
>>>>>>>>>>>>>>>>>>>> should be used by the storage allocators to determine
>> the
>>>>>>>>>> device
>>>>>>>>>>>>>> best
>>>>>>>>>>>>>>>>>> able
>>>>>>>>>>>>>>>>>>>> to support the resources needs of a volume. It seems
>> that
>>>> a
>>>>>>>>>>>> similar
>>>>>>>>>>>>>>>>>> set of
>>>>>>>>>>>>>>>>>>>> considerations would need to be added to the Hypervisor layer
>>>>>>>>>>>>>>>>>>>> when allocating a VM to a host to prevent oversubscription.
>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>> Yeah, for this first release, we just followed the path
>>>> that
>>>>>>>>>> was
>>>>>>>>>>>>>>>>>> previously
>>>>>>>>>>>>>>>>>>> established for other properties you see on dialogs in
>> CS:
>>>>>> Just
>>>>>>>>>>>>>> because
>>>>>>>>>>>>>>>>>>> they're there doesn't mean the vendor your VM is deployed
>>>> to
>>>>>>>>>>>> supports
>>>>>>>>>>>>>>>>>> them.
>>>>>>>>>>>>>>>>>>> It is then up to the admin to make sure he inputs, say, a
>>>>>>>>>> storage
>>>>>>>>>>>> tag
>>>>>>>>>>>>>>>>>> that
>>>>>>>>>>>>>>>>>>> confines the deployment only to storage that supports the
>>>>>>>>>> selected
>>>>>>>>>>>>>>>>>>> features. This is not ideal, but it's kind of the way
>>>>>>>>>> CloudStack
>>>>>>>>>>>>>> works
>>>>>>>>>>>>>>>>>>> today.
>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>> I understand the tag functionality, and the need for the
>>>>>>>>>>>> administrator
>>>>>>>>>>>>>>>>>> to very carefully construct offerings. My concern is that we
>>>>>>>>>>>>>>>>>> could over-guarantee a resource's available IOPS. For the
>>>>>>>>>>>>>>>>>> purposes of illustration, let's say we have a SolidFire, and
>>>>>>>>>>>>>>>>>> the max IOPS for that device is 100,000.
>>>>>>>>>>>>>>>>>> We also know that we can safely oversubscribe by 50%.
>>>>>>>>>> Therefore,
>>>>>>>>>>>> we
>>>>>>>>>>>>>>>>>> need to ensure that we don't allocate more than 150,000
>>>>>>>>>> guaranteed
>>>>>>>>>>>>>> IOPS
>>>>>>>>>>>>>>>>>> from that device. Intuitively, it seems like the DataStore
>>>>>>>>>>>>>> configuration
>>>>>>>>>>>>>>>>>> should have a max assignable IOPS and overcommitment
>> factor.
>>>>>>>>>> As we
>>>>>>>>>>>>>>>>>> allocate volumes and attach VMs, we need to ensure that we do
>>>>>>>>>>>>>>>>>> not guarantee more IOPS than the configured maximum for a
>>>>>>>>>>>>>>>>>> DataStore. Does that make sense?
>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> I think that's a good idea for a future enhancement. I'm
>> not
>>>>>> even
>>>>>>>>>>>> sure
>>>>>>>>>>>>>> I
>>>>>>>>>>>>>>>>> can query the SAN to find out how many IOPS safely remain.
>>>> I'd
>>>>>>>>>> have
>>>>>>>>>>>> to
>>>>>>>>>>>>>> get
>>>>>>>>>>>>>>>>> all of the min values for all of the volumes on the SAN and
>>>>>> total
>>>>>>>>>>>> them
>>>>>>>>>>>>>> up,
>>>>>>>>>>>>>>>>> I suppose, and subtract it from the total (user facing)
>>>>>> supported
>>>>>>>>>>>> IOPS
>>>>>>>>>>>>>> of
>>>>>>>>>>>>>>>>> the system.
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>> Another question occurs to me -- should we allow non-QoS
>>>>>>>>>> resources
>>>>>>>>>>>>>> to
>>>>>>>>>>>>>>>>>> be
>>>>>>>>>>>>>>>>>>>> assigned to hosts/storage devices that ensure QoS? For
>>>>>>>>>>>> provisioned
>>>>>>>>>>>>>>>>>> IOPS, I
>>>>>>>>>>>>>>>>>>>> think a side effect of the current implementation is that
>>>>>>>>>>>>>>>>>>>> SolidFire volumes
>>>>>>>>>>>>>>>>>>>> always have a QoS. However, for hypervisor throttled
>> I/O,
>>>> it
>>>>>>>>>>>> seems
>>>>>>>>>>>>>>>>>>>> entirely possible for non-QoS VMs to be allocated
>>>> side-by-side
>>>>>>>>>> with
>>>>>>>>>>>> QoS
>>>>>>>>>>>>>>>>>> VMs.
>>>>>>>>>>>>>>>>>>>> In this scenario, a greedy, unbounded VM could
>> potentially
>>>>>>>>>> starve
>>>>>>>>>>>>>> out
>>>>>>>>>>>>>>>>>> the
>>>>>>>>>>>>>>>>>>>> other VMs on the host -- preventing the QoSes defined for
>>>>>>>>>>>>>>>>>>>> the collocated VMs from being fulfilled.
>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>> You can make SolidFire volumes (inside and outside of CS)
>>>> and
>>>>>>>>>> not
>>>>>>>>>>>>>>>>>> specify
>>>>>>>>>>>>>>>>>>> IOPS. You'll still get guaranteed IOPS, but it will be at
>>>> the
>>>>>>>>>>>>>> defaults
>>>>>>>>>>>>>>>>>> we
>>>>>>>>>>>>>>>>>>> choose. Unless you over-provision IOPS on a SolidFire
>> SAN,
>>>>>> you
>>>>>>>>>> will
>>>>>>>>>>>>>>>>>> have
>>>>>>>>>>>>>>>>>>> your Mins met.
>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>> It sounds like you're perhaps looking for a storage tags
>>>>>>>>>> exclusions
>>>>>>>>>>>>>>>>>> list,
>>>>>>>>>>>>>>>>>>> which might be nice to have at some point (i.e. don't
>>>> deploy
>>>>>> my
>>>>>>>>>>>>>> volume
>>>>>>>>>>>>>>>>>> to
>>>>>>>>>>>>>>>>>>> storage with these following tags).
>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>> I don't like the idea of a storage tags exclusion list as
>> it
>>>>>>>>>> would
>>>>>>>>>>>>>>>>>> complicate component assembly. It would require a storage
>>>>>>>>>> plugin to
>>>>>>>>>>>>>>>>>> anticipate all of the possible component assemblies and
>>>>>>>>>> determine
>>>>>>>>>>>> the
>>>>>>>>>>>>>>>>>> invalid relationships. I prefer that drivers express their
>>>>>>>>>>>>>> capabilities
>>>>>>>>>>>>>>>>>> which can be matched to a set of requested requirements.
>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> I'm not sure why a storage plug-in would care about
>> inclusion
>>>>>> or
>>>>>>>>>>>>>>>>> exclusion lists. It just needs to advertise its
>> functionality
>>>>>> in
>>>>>>>>>> a
>>>>>>>>>>>> way
>>>>>>>>>>>>>> the
>>>>>>>>>>>>>>>>> allocator understands.
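>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> Something as small as this would do (illustrative only):
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>     // Sketch: a driver advertises what it supports; the
>>>>>>>>>>>>>>>>>     // allocator matches that against the disk offering.
>>>>>>>>>>>>>>>>>     public interface StorageDriverCapabilities {
>>>>>>>>>>>>>>>>>         boolean supportsProvisionedIops(); // guaranteed Min/Max
>>>>>>>>>>>>>>>>>         boolean supportsThrottledIo();     // hypervisor rate limits
>>>>>>>>>>>>>>>>>     }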
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>> I agree with your assessment of Hypervisor QoS. Since it
>>>> only
>>>>>>>>>>>> limits
>>>>>>>>>>>>>>>>>> IOPS,
>>>>>>>>>>>>>>>>>>> it does not solve the Noisy Neighbor problem. Only a
>> system
>>>>>>>>>> with
>>>>>>>>>>>>>>>>>> guaranteed
>>>>>>>>>>>>>>>>>>> minimum IOPS does this.
>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>> As I said, for SolidFire, it sounds like this problem does
>>>> not
>>>>>>>>>>>> exist.
>>>>>>>>>>>>>>>>>> However, I am concerned with the more general case as we
>>>>>>>>>> supported
>>>>>>>>>>>>>> more
>>>>>>>>>>>>>>>>>> devices with provisioned IOPS.
>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> Post 4.2 we need to investigate a way to pass
>> vendor-specific
>>>>>>>>>> values
>>>>>>>>>>>> to
>>>>>>>>>>>>>>>>> drivers. Min and Max are pretty industry standard for
>>>>>> provisioned
>>>>>>>>>>>>>> IOPS, but
>>>>>>>>>>>>>>>>> what if you break them out by read and write or do
>> something
>>>>>>>>>> else? We
>>>>>>>>>>>>>> need
>>>>>>>>>>>>>>>>> a more general mechanism.
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>> In my opinion, we need to ensure that hypervisor
>> throttled
>>>>>>>>>> I/O
>>>>>>>>>>>> and
>>>>>>>>>>>>>>>>>>>> storage provisioned IOPS are mutually exclusive per
>>>> volume.
>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>> Agreed
>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>> We also need to understand the implications of these QoS
>>>>>>>>>>>> guarantees
>>>>>>>>>>>>>> on
>>>>>>>>>>>>>>>>>>>> operation of the system to ensure that the underlying
>>>>>> hardware
>>>>>>>>>>>>>>>>>> resources
>>>>>>>>>>>>>>>>>>>> can fulfill them. Given the time frame, we will likely
>> be
>>>>>>>>>> forced
>>>>>>>>>>>> to
>>>>>>>>>>>>>>>>>> make
>>>>>>>>>>>>>>>>>>>> compromises to achieve these goals, and refine the
>>>>>>>>>> implementation
>>>>>>>>>>>> in
>>>>>>>>>>>>>>>>>> 4.3.
>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>> I agree, John. I also think you've come up with some
>> great
>>>>>>>>>> ideas
>>>>>>>>>>>> for
>>>>>>>>>>>>>>>>>> 4.3. :)
>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>>>>>>>> -John
>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>> On Jun 12, 2013, at 11:35 PM, Mike Tutkowski <
>>>>>>>>>>>>>>>>>> mike.tutkowski@solidfire.com>
>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>> Yeah, Alex, I think that's the way we were planning
>> (with
>>>>>>>>>> storage
>>>>>>>>>>>>>>>>>> tags).
>>>>>>>>>>>>>>>>>>>> I
>>>>>>>>>>>>>>>>>>>>> believe John was just throwing out an idea that - in
>>>>>>>>>> addition to
>>>>>>>>>>>>>>>>>> storage
>>>>>>>>>>>>>>>>>>>>> tags - we could look into these allocators (storage QoS
>>>>>> being
>>>>>>>>>>>>>>>>>> preferred,
>>>>>>>>>>>>>>>>>>>>> then hypervisor QoS if storage QoS is not available,
>> but
>>>>>>>>>>>> hypervisor
>>>>>>>>>>>>>>>>>> QoS
>>>>>>>>>>>>>>>>>>>> is).
>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>> I think John's concern is that you can enter in values
>>>> for
>>>>>>>>>> Wei's
>>>>>>>>>>>>>> and
>>>>>>>>>>>>>>>>>> my
>>>>>>>>>>>>>>>>>>>>> feature that are not honored by other vendors (at least
>>>>>>>>>> yet), so
>>>>>>>>>>>> he
>>>>>>>>>>>>>>>>>> was
>>>>>>>>>>>>>>>>>>>>> hoping - in addition to storage tags - for the
>> allocators
>>>>>> to
>>>>>>>>>>>> prefer
>>>>>>>>>>>>>>>>>> these
>>>>>>>>>>>>>>>>>>>>> vendors when these fields are filled in. As it stands
>>>> today
>>>>>>>>>> in
>>>>>>>>>>>>>>>>>>>> CloudStack,
>>>>>>>>>>>>>>>>>>>>> we already have this kind of an issue with other
>> features
>>>>>>>>>> (fields
>>>>>>>>>>>>>> in
>>>>>>>>>>>>>>>>>>>>> dialogs for features that not all vendors support).
>>>> Perhaps
>>>>>>>>>> post
>>>>>>>>>>>>>> 4.2
>>>>>>>>>>>>>>>>>> we
>>>>>>>>>>>>>>>>>>>>> could look into generic name/value pairs (this is how
>>>>>>>>>> OpenStack
>>>>>>>>>>>>>>>>>> addresses
>>>>>>>>>>>>>>>>>>>>> the issue).
>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>> Honestly, I think we're too late in the game (two weeks
>>>>>> until
>>>>>>>>>>>> code
>>>>>>>>>>>>>>>>>>>> freeze)
>>>>>>>>>>>>>>>>>>>>> to go too deeply down that path in 4.2.
>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>> It's probably better if we - at least for 4.2 - keep
>>>> Wei's
>>>>>>>>>> fields
>>>>>>>>>>>>>>>>>> and my
>>>>>>>>>>>>>>>>>>>>> fields as is, make sure only one or the other feature
>> has
>>>>>>>>>> data
>>>>>>>>>>>>>>>>>> entered
>>>>>>>>>>>>>>>>>>>> for
>>>>>>>>>>>>>>>>>>>>> it (or neither), and call it good for 4.2.
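>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>> The check itself is tiny -- something like this (sketch
>>>>>>>>>>>>>>>>>>>>> only; field names are illustrative):
>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>     boolean storageQos = offering.getMinIops() != null
>>>>>>>>>>>>>>>>>>>>>             || offering.getMaxIops() != null;
>>>>>>>>>>>>>>>>>>>>>     boolean hypervisorQos = offering.getBytesReadRate() != null
>>>>>>>>>>>>>>>>>>>>>             || offering.getIopsReadRate() != null; // etc.
>>>>>>>>>>>>>>>>>>>>>     if (storageQos && hypervisorQos) {
>>>>>>>>>>>>>>>>>>>>>         throw new InvalidParameterValueException(
>>>>>>>>>>>>>>>>>>>>>                 "Specify storage QoS or hypervisor QoS, not both");
>>>>>>>>>>>>>>>>>>>>>     }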
>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>> Then let's step back and look into a more
>> general-purpose
>>>>>>>>>> design
>>>>>>>>>>>>>>>>>> that can
>>>>>>>>>>>>>>>>>>>>> be applied throughout CloudStack where we have these
>>>>>>>>>> situations.
>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>> What do you think?
>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>> On Wed, Jun 12, 2013 at 5:21 PM, John Burwell <
>>>>>>>>>>>> jburwell@basho.com>
>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>> Mike,
>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>> I just published my review @
>>>>>>>>>>>> https://reviews.apache.org/r/11479/.
>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>> I apologize for the delay,
>>>>>>>>>>>>>>>>>>>>>> -John
>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>> On Jun 12, 2013, at 12:43 PM, Mike Tutkowski <
>>>>>>>>>>>>>>>>>>>> mike.tutkowski@solidfire.com>
>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>> No problem, John.
>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>> I still want to re-review it by myself before coming
>> up
>>>>>>>>>> with a
>>>>>>>>>>>>>> new
>>>>>>>>>>>>>>>>>>>> patch
>>>>>>>>>>>>>>>>>>>>>>> file.
>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>> Also, maybe I should first wait for Wei's changes to
>> be
>>>>>>>>>> checked
>>>>>>>>>>>>>> in
>>>>>>>>>>>>>>>>>> and
>>>>>>>>>>>>>>>>>>>>>>> merge those into mine before generating a new patch
>>>> file?
>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>> On Wed, Jun 12, 2013 at 10:40 AM, John Burwell <
>>>>>>>>>>>>>> jburwell@basho.com
>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>> Mike,
>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>> I just realized that I forgot to publish my review.
>> I
>>>> am
>>>>>>>>>>>>>> offline
>>>>>>>>>>>>>>>>>> ATM,
>>>>>>>>>>>>>>>>>>>>>>>> but I will publish it in the next couple of hours.
>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>> Do you plan to update your patch in Review Board?
>> Board?
>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>> Sorry for the oversight,
>>>>>>>>>>>>>>>>>>>>>>>> -John
>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>> On Jun 12, 2013, at 2:26 AM, Mike Tutkowski
>>>>>>>>>>>>>>>>>>>>>>>> <mike.tutkowski@solidfire.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>> Hi Edison, John, and Wei (and whoever else is
>> reading
>>>>>>>>>> this :)
>>>>>>>>>>>>>> ),
>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>> Just an FYI that I believe I have implemented all
>> the
>>>>>>>>>> areas
>>>>>>>>>>>> we
>>>>>>>>>>>>>>>>>> wanted
>>>>>>>>>>>>>>>>>>>>>>>>> addressed.
>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>> I plan to review the code again tomorrow morning or
>>>>>>>>>>>> afternoon,
>>>>>>>>>>>>>>>>>> then
>>>>>>>>>>>>>>>>>>>>>> send
>>>>>>>>>>>>>>>>>>>>>>>> in
>>>>>>>>>>>>>>>>>>>>>>>>> another patch.
>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>> Thanks for all the work on this everyone!
>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Jun 11, 2013 at 12:29 PM, Mike Tutkowski <
>>>>>>>>>>>>>>>>>>>>>>>>> mike.tutkowski@solidfire.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>> Sure, that sounds good.
>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Jun 11, 2013 at 12:11 PM, Wei ZHOU <
>>>>>>>>>>>>>>>>>> ustcweizhou@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>> Hi Mike,
>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>> It looks like the two features do not have many
>> conflicts
>>>>>> in
>>>>>>>>>> Java
>>>>>>>>>>>>>>>>>> code,
>>>>>>>>>>>>>>>>>>>>>>>> except
>>>>>>>>>>>>>>>>>>>>>>>>>>> the cloudstack UI.
>>>>>>>>>>>>>>>>>>>>>>>>>>> If you do not mind, I will merge
>> disk_io_throttling
>>>>>>>>>> branch
>>>>>>>>>>>>>> into
>>>>>>>>>>>>>>>>>>>>>> master
>>>>>>>>>>>>>>>>>>>>>>>>>>> this
>>>>>>>>>>>>>>>>>>>>>>>>>>> week, so that you can develop based on it.
>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>> -Wei
>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>> 2013/6/11 Mike Tutkowski <
>>>>>> mike.tutkowski@solidfire.com
>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hey John,
>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>>> The SolidFire patch does not depend on the
>>>>>>>>>> object_store
>>>>>>>>>>>>>>>>>> branch,
>>>>>>>>>>>>>>>>>>>> but
>>>>>>>>>>>>>>>>>>>>>> -
>>>>>>>>>>>>>>>>>>>>>>>> as
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Edison mentioned - it might be easier if we
>> merge
>>>>>> the
>>>>>>>>>>>>>>>>>> SolidFire
>>>>>>>>>>>>>>>>>>>>>> branch
>>>>>>>>>>>>>>>>>>>>>>>>>>> into
>>>>>>>>>>>>>>>>>>>>>>>>>>>> the object_store branch before object_store goes
>>>>>> into
>>>>>>>>>>>>>> master.
>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>>> I'm not sure how the disk_io_throttling fits
>> into
>>>>>> this
>>>>>>>>>>>> merge
>>>>>>>>>>>>>>>>>>>>>> strategy.
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Perhaps Wei can chime in on that.
>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Jun 11, 2013 at 11:07 AM, John Burwell <
>>>>>>>>>>>>>>>>>>>> jburwell@basho.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Mike,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> We have a delicate merge dance to perform. The
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> disk_io_throttling, solidfire, and object_store appear
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> to have a number of overlapping elements. I understand
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the dependencies between the patches to be as follows:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> object_store <- solidfire -> disk_io_throttling
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Am I correct that the device management aspects of
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> SolidFire are additive to the object_store branch, or
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> are there circular dependencies between the branches?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Once we understand the dependency graph, we can
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> determine the best approach to land the changes in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> master.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -John
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Jun 10, 2013, at 11:10 PM, Mike Tutkowski <
>>>>>>>>>>>>>>>>>>>>>>>>>>>> mike.tutkowski@solidfire.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Also, if we are good with Edison merging my
>> code
>>>>>>>>>> into
>>>>>>>>>>>> his
>>>>>>>>>>>>>>>>>> branch
>>>>>>>>>>>>>>>>>>>>>>>>>>> before
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> going into master, I am good with that.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> We can remove the StoragePoolType.Dynamic code
>>>>>>>>>> after his
>>>>>>>>>>>>>>>>>> merge
>>>>>>>>>>>>>>>>>>>> and
>>>>>>>>>>>>>>>>>>>>>>>>>>> we
>>>>>>>>>>>>>>>>>>>>>>>>>>>> can
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> deal with Burst IOPS then, as well.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Jun 10, 2013 at 9:08 PM, Mike
>> Tutkowski
>>>> <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mike.tutkowski@solidfire.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Let me make sure I follow where we're going
>>>> here:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 1) There should be NO references to
>> hypervisor
>>>>>>>>>> code in
>>>>>>>>>>>>>> the
>>>>>>>>>>>>>>>>>>>>>> storage
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> plug-ins code (this includes the default
>>>> storage
>>>>>>>>>>>> plug-in,
>>>>>>>>>>>>>>>>>> which
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> currently
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> sends several commands to the hypervisor in
>> use
>>>>>>>>>>>> (although
>>>>>>>>>>>>>>>>>> it
>>>>>>>>>>>>>>>>>>>> does
>>>>>>>>>>>>>>>>>>>>>>>>>>> not
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> know
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> which hypervisor (XenServer, ESX(i), etc.) is
>>>>>>>>>> actually
>>>>>>>>>>>> in
>>>>>>>>>>>>>>>>>> use))
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 2) managed=true or managed=false can be
>> placed
>>>> in
>>>>>>>>>> the
>>>>>>>>>>>> url
>>>>>>>>>>>>>>>>>> field
>>>>>>>>>>>>>>>>>>>>>> (if
>>>>>>>>>>>>>>>>>>>>>>>>>>>> not
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> present, we default to false). This info is
>>>>>> stored
>>>>>>>>>> in
>>>>>>>>>>>> the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> storage_pool_details table.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 3) When the "attach" command is sent to the
>>>>>>>>>> hypervisor
>>>>>>>>>>>> in
>>>>>>>>>>>>>>>>>>>>>>>>>>> question, we
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> pass the managed property along (this takes
>> the
>>>>>>>>>> place
>>>>>>>>>>>> of
>>>>>>>>>>>>>>>>>> the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> StoragePoolType.Dynamic check).
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 4) execute(AttachVolumeCommand) in the
>>>> hypervisor
>>>>>>>>>>>> checks
>>>>>>>>>>>>>>>>>> for
>>>>>>>>>>>>>>>>>>>> the
>>>>>>>>>>>>>>>>>>>>>>>>>>>> managed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> property. If true for an attach, the
>> necessary
>>>>>>>>>>>> hypervisor
>>>>>>>>>>>>>>>>>> data
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> structure is
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> created and the rest of the attach command
>>>>>>>>>> executes to
>>>>>>>>>>>>>>>>>> attach
>>>>>>>>>>>>>>>>>>>> the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> volume.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 5) When execute(AttachVolumeCommand) is
>> invoked
>>>>>> to
>>>>>>>>>>>>>> detach a
>>>>>>>>>>>>>>>>>>>>>> volume,
>>>>>>>>>>>>>>>>>>>>>>>>>>>> the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> same check is made. If managed, the
>> hypervisor
>>>>>> data
>>>>>>>>>>>>>>>>>> structure
>>>>>>>>>>>>>>>>>>>> is
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> removed.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 6) I do not see a clear way to support Burst
>>>>>> IOPS
>>>>>>>>>> in
>>>>>>>>>>>> 4.2
>>>>>>>>>>>>>>>>>>>> unless
>>>>>>>>>>>>>>>>>>>>>>>>>>> it is
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> stored in the volumes and disk_offerings
>> table.
>>>>>> If
>>>>>>>>>> we
>>>>>>>>>>>>>> have
>>>>>>>>>>>>>>>>>> some
>>>>>>>>>>>>>>>>>>>>>>>>>>> idea,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that'd be cool.
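>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Sketch of 2) and 4) above (helper names made up):
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>     // 2) parse the managed flag out of the url field
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>     String m = urlParams.get("managed");
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>     boolean managed = m != null && Boolean.parseBoolean(m);
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>     // 4) in execute(AttachVolumeCommand), honor the flag
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>     if (cmd.isManaged()) {
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>         createHypervisorStorage(cmd); // SR / datastore etc.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>     }
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>     attachVolume(cmd);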
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Thanks!
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Jun 10, 2013 at 8:58 PM, Mike
>>>> Tutkowski <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mike.tutkowski@solidfire.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> "+1 -- Burst IOPS can be implemented while
>>>>>>>>>> avoiding
>>>>>>>>>>>>>>>>>>>>>> implementation
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> attributes. I always wondered about the
>>>> details
>>>>>>>>>>>> field.
>>>>>>>>>>>>>> I
>>>>>>>>>>>>>>>>>>>> think
>>>>>>>>>>>>>>>>>>>>>>>>>>> we
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> should
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> beef up the description in the documentation
>>>>>>>>>> regarding
>>>>>>>>>>>>>> the
>>>>>>>>>>>>>>>>>>>>>>>>>>> expected
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> of the field. In 4.1, I noticed that the
>>>> details
>>>>>>>>>> are
>>>>>>>>>>>>>> not
>>>>>>>>>>>>>>>>>>>>>>>>>>> returned on
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the
>>>>>>>>>>>>>>>>>>>>>>>>>> createStoragePool, updateStoragePool, or
>>>>>>>>>>>> listStoragePool
>>>>>>>>>>>>>>>>>>>>>> response.
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Why
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> don't we return it? It seems like it would
>> be
>>>>>>>>>> useful
>>>>>>>>>>>>>> for
>>>>>>>>>>>>>>>>>>>>>> clients
>>>>>>>>>>>>>>>>>>>>>>>>>>> to
>>>>>>>>>>>>>>>>>>>>>>>>>>>> be
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> able to inspect the contents of the details
>>>>>>>>>> field."
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Not sure how this would work storing Burst
>>>> IOPS
>>>>>>>>>> here.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Burst IOPS need to be variable on a Disk
>>>>>>>>>>>>>> Offering-by-Disk
>>>>>>>>>>>>>>>>>>>>>> Offering
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> basis. For each Disk Offering created, you
>>>> have
>>>>>>>>>> to be
>>>>>>>>>>>>>>>>>> able to
>>>>>>>>>>>>>>>>>>>>>>>>>>>> associate
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> unique Burst IOPS. There is a
>>>>>>>>>> disk_offering_details
>>>>>>>>>>>>>> table.
>>>>>>>>>>>>>>>>>>>> Maybe
>>>>>>>>>>>>>>>>>>>>>>>>>>> it
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> could
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> go there?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I'm also not sure how you would accept the
>>>> Burst
>>>>>>>>>> IOPS
>>>>>>>>>>>> in
>>>>>>>>>>>>>>>>>> the
>>>>>>>>>>>>>>>>>>>> GUI
>>>>>>>>>>>>>>>>>>>>>>>>>>> if
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> it's
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> not stored like the Min and Max fields are
>> in
>>>>>> the
>>>>>>>>>> DB.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Mike Tutkowski*
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> o: 303.746.7302
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Advancing the way the world uses the cloud<
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>> http://solidfire.com/solution/overview/?video=play
>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *™*
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Mike Tutkowski*
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> o: 303.746.7302
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Advancing the way the world uses the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> cloud<
>>>>>>>>>>>> http://solidfire.com/solution/overview/?video=play>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *™*
>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Mike Tutkowski*
>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>>>>>>>>>>>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>>>>>>>>>>>>>>>>>> o: 303.746.7302
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Advancing the way the world uses the
>>>>>>>>>>>>>>>>>>>>>>>>>>>> cloud<
>>>>>>>>>> http://solidfire.com/solution/overview/?video=play>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> *™*
>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>>>>> *Mike Tutkowski*
>>>>>>>>>>>>>>>>>>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>>>>>>>>>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>>>>>>>>>>>>>>>> o: 303.746.7302
>>>>>>>>>>>>>>>>>>>>>>>>>> Advancing the way the world uses the cloud<
>>>>>>>>>>>>>>>>>>>>>>>> http://solidfire.com/solution/overview/?video=play>
>>>>>>>>>>>>>>>>>>>>>>>>>> *™*
>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>>>> *Mike Tutkowski*
>>>>>>>>>>>>>>>>>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>>>>>>>>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>>>>>>>>>>>>>>> o: 303.746.7302
>>>>>>>>>>>>>>>>>>>>>>>>> Advancing the way the world uses the
>>>>>>>>>>>>>>>>>>>>>>>>> cloud<
>>>>>> http://solidfire.com/solution/overview/?video=play
>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>>>> *™*
>>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>> *Mike Tutkowski*
>>>>>>>>>>>>>>>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>>>>>>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>>>>>>>>>>>>> o: 303.746.7302
>>>>>>>>>>>>>>>>>>>>>>> Advancing the way the world uses the
>>>>>>>>>>>>>>>>>>>>>>> cloud<
>>>> http://solidfire.com/solution/overview/?video=play
>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>> *™*
>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>> *Mike Tutkowski*
>>>>>>>>>>>>>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>>>>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>>>>>>>>>>> o: 303.746.7302
>>>>>>>>>>>>>>>>>>>>> Advancing the way the world uses the
>>>>>>>>>>>>>>>>>>>>> cloud<
>> http://solidfire.com/solution/overview/?video=play
>>>>> 
>>>>>>>>>>>>>>>>>>>>> *™*
>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>> *Mike Tutkowski*
>>>>>>>>>>>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>>>>>>>>> o: 303.746.7302
>>>>>>>>>>>>>>>>>>> Advancing the way the world uses the
>>>>>>>>>>>>>>>>>>> cloud<http://solidfire.com/solution/overview/?video=play
>>> 
>>>>>>>>>>>>>>>>>>> *™*
>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>> *Mike Tutkowski*
>>>>>>>>>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>>>>>>> o: 303.746.7302
>>>>>>>>>>>>>>>>> Advancing the way the world uses the cloud<
>>>>>>>>>>>>>> http://solidfire.com/solution/overview/?video=play>
>>>>>>>>>>>>>>>>> *™*
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>> *Mike Tutkowski*
>>>>>>>>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>>>>>> o: 303.746.7302
>>>>>>>>>>>>>>>> Advancing the way the world uses the cloud<
>>>>>>>>>>>>>> http://solidfire.com/solution/overview/?video=play>
>>>>>>>>>>>>>>>> *™*
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>> *Mike Tutkowski*
>>>>>>>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>>>>> o: 303.746.7302
>>>>>>>>>>>>>>> Advancing the way the world uses the
>>>>>>>>>>>>>>> cloud<http://solidfire.com/solution/overview/?video=play>
>>>>>>>>>>>>>>> *™*
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>>> --
>>>>>>>>>>>>> *Mike Tutkowski*
>>>>>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>>> o: 303.746.7302
>>>>>>>>>>>>> Advancing the way the world uses the
>>>>>>>>>>>>> cloud<http://solidfire.com/solution/overview/?video=play>
>>>>>>>>>>>>> *™*
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>> --
>>>>>>>>>>> *Mike Tutkowski*
>>>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>>>>>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>> o: 303.746.7302
>>>>>>>>>>> Advancing the way the world uses the
>>>>>>>>>>> cloud<http://solidfire.com/solution/overview/?video=play>
>>>>>>>>>>> *™*
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> --
>>>>>>>>> *Mike Tutkowski*
>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>>>> e: mike.tutkowski@solidfire.com
>>>>>>>>> o: 303.746.7302
>>>>>>>>> Advancing the way the world uses the cloud<
>>>>>> http://solidfire.com/solution/overview/?video=play>
>>>>>>>>> *™*
>>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>>> --
>>>>>>>> *Mike Tutkowski*
>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>>> e: mike.tutkowski@solidfire.com
>>>>>>>> o: 303.746.7302
>>>>>>>> Advancing the way the world uses the cloud<
>>>>>> http://solidfire.com/solution/overview/?video=play>
>>>>>>>> *™*
>>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> --
>>>>>>> *Mike Tutkowski*
>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>> e: mike.tutkowski@solidfire.com
>>>>>>> o: 303.746.7302
>>>>>>> Advancing the way the world uses the
>>>>>>> cloud<http://solidfire.com/solution/overview/?video=play>
>>>>>>> *™*
>>>>>>> 
>>>>>> 
>>>>>> 
>>>>> 
>>>>> 
>>>>> --
>>>>> *Mike Tutkowski*
>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>> e: mike.tutkowski@solidfire.com
>>>>> o: 303.746.7302
>>>>> Advancing the way the world uses the
>>>>> cloud<http://solidfire.com/solution/overview/?video=play>
>>>>> *™*
>>>> 
>>>> 
>>> 
>>> 
>>> --
>>> *Mike Tutkowski*
>>> *Senior CloudStack Developer, SolidFire Inc.*
>>> e: mike.tutkowski@solidfire.com
>>> o: 303.746.7302
>>> Advancing the way the world uses the
>>> cloud<http://solidfire.com/solution/overview/?video=play>
>>> *™*
>> 
>> 
> 
> 
> -- 
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the
> cloud<http://solidfire.com/solution/overview/?video=play>
> *™*

