cloudstack-dev mailing list archives

From John Burwell <jburw...@basho.com>
Subject Re: [MERGE] disk_io_throttling to MASTER
Date Thu, 13 Jun 2013 21:58:23 GMT
Mike,

Overall, I agree with the steps below for 4.2.  However, we may want to throw an exception
if we cannot fulfill a requested QoS.  If the user is expecting that the hypervisor will
provide a particular QoS, and that is not possible, it seems like we should inform them rather
than silently ignoring their request.

To collect my thoughts from previous parts of the thread, I am uncomfortable with the idea
that the management server can overcommit a resource.  You had mentioned querying the device
for available IOPS.  While that would be nice, it seems like we could fall back to a max IOPS
and an overcommit factor manually calculated and entered by the administrator/operator.  I
think such thresholds and allocation rails should be added for both provisioned IOPS and
throttled I/O -- they are a basic feature of any cloud orchestration platform.
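To make the rail concrete, here is a rough sketch (the class and method names below are
purely illustrative, not the current allocator API) of the kind of check I have in mind: the
allocator refuses to place a volume once the guaranteed IOPS already committed to a
DataStore, plus the new request, exceed the administrator-entered ceiling.

    import java.util.List;

    /** Hypothetical sketch of an IOPS "allocation rail" for a primary storage pool. */
    public class IopsCapacityCheck {

        /** Admin-entered values on the DataStore (illustrative names). */
        static class DataStoreIopsConfig {
            final long maxIops;              // e.g. 100,000 for the device
            final double overcommitFactor;   // e.g. 1.5 to allow 50% oversubscription
            DataStoreIopsConfig(long maxIops, double overcommitFactor) {
                this.maxIops = maxIops;
                this.overcommitFactor = overcommitFactor;
            }
            long effectiveCeiling() {
                return (long) (maxIops * overcommitFactor);
            }
        }

        /**
         * Throws instead of silently overcommitting: the min IOPS already guaranteed to
         * existing volumes plus the new request must stay under the ceiling.
         */
        static void checkCanGuarantee(DataStoreIopsConfig config,
                                      List<Long> committedMinIopsPerVolume,
                                      long requestedMinIops) {
            long alreadyCommitted = committedMinIopsPerVolume.stream().mapToLong(Long::longValue).sum();
            if (alreadyCommitted + requestedMinIops > config.effectiveCeiling()) {
                throw new IllegalStateException("Requested QoS cannot be fulfilled: "
                        + (alreadyCommitted + requestedMinIops) + " guaranteed IOPS would exceed the "
                        + config.effectiveCeiling() + " IOPS ceiling for this DataStore");
            }
        }

        public static void main(String[] args) {
            DataStoreIopsConfig config = new DataStoreIopsConfig(100_000, 1.5);
            checkCanGuarantee(config, List.of(60_000L, 70_000L), 15_000L); // ok: 145,000 <= 150,000
            checkCanGuarantee(config, List.of(60_000L, 70_000L), 25_000L); // throws: 155,000 > 150,000
        }
    }

With a 100,000 IOPS device and a 50% overcommit allowance, the ceiling works out to 150,000
guaranteed IOPS.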

For 4.3, I don't like the idea that a QoS would be expressed in an implementation-specific
manner.  I think we need to arrive at a general model that can be exploited by the allocators
and planners.  We should restrict implementation-specific key-value pairs (call them details,
extended data, whatever) to information that is unique to the driver and would provide no
useful information to the management server's orchestration functions.  A resource QoS does
not seem to fit those criteria.
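To make the distinction concrete, here is a minimal sketch (illustrative types only, not
existing CloudStack classes) of a normalized QoS the allocators and planners could reason
about, with truly driver-specific knobs kept in an opaque details map:

    import java.util.Map;

    /** Illustrative sketch of a hypervisor/storage-agnostic QoS expression. */
    public class QosModelSketch {

        enum Unit { IOPS, BYTES_PER_SEC }

        /** A normalized QoS: a [min, max] range in a common unit; a pure rate limit has min = 0. */
        static final class ResourceQos {
            final Unit unit;
            final long min;   // guaranteed floor (0 when only throttling is requested)
            final long max;   // throttle ceiling
            ResourceQos(Unit unit, long min, long max) {
                this.unit = unit;
                this.min = min;
                this.max = max;
            }
            @Override public String toString() {
                return unit + "[" + min + ".." + max + "]";
            }
        }

        public static void main(String[] args) {
            // Provisioned IOPS (SolidFire-style): a true range.
            ResourceQos provisioned = new ResourceQos(Unit.IOPS, 5_000, 15_000);

            // Hypervisor throttling (Wei's feature): only a ceiling, expressed in the same model.
            ResourceQos throttled = new ResourceQos(Unit.IOPS, 0, 1_000);

            // Driver-specific knobs stay out of the common model, as opaque details.
            Map<String, String> driverDetails = Map.of("burstIops", "20000");

            System.out.println(provisioned + " " + throttled + " details=" + driverDetails);
        }
    }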

Thanks,
-John

On Jun 13, 2013, at 5:44 PM, Mike Tutkowski <mike.tutkowski@solidfire.com> wrote:

> So, here's my suggestion for 4.2:
> 
> Accept the values as they are currently required (four new fields for Wei's
> feature or two new fields for mine).
> 
> The Add Disk Offering dialog needs three new radio buttons:
> 
> 1) No QoS
> 
> 2) Hypervisor QoS
> 
> 3) Storage QoS
> 
> The admin needs to specify storage tags that only map to storage that
> supports Storage QoS.
> 
> For Hypervisor QoS, the admin needs to be aware that unless all hypervisors
> in use support the new fields, they may not be enforced.
> 
> Post 4.3:
> 
> Come up with a way to more generally enter these parameters (probably just
> key/value pairs sent to the drivers).
> 
> Have the drivers expose their feature set so the allocators can consider
> them more fully and throw an exception if there is not a sufficient match.
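Agreed -- as a rough sketch (the interface and names below are hypothetical, not an existing
CloudStack API), the capability match could look something like this, failing loudly instead
of silently dropping the request:

    import java.util.EnumSet;
    import java.util.Set;

    /** Illustrative sketch of capability matching between a requested QoS and storage drivers. */
    public class CapabilityMatchSketch {

        enum Capability { MIN_IOPS, MAX_IOPS, THROTTLED_IOPS, THROTTLED_BYTES }

        /** What a driver says it can do (hypothetical method on a storage driver). */
        interface StorageDriverCapabilities {
            Set<Capability> capabilities();
        }

        /** Allocator-side check: every requested capability must be advertised, otherwise fail loudly. */
        static void ensureSatisfiable(Set<Capability> requested, StorageDriverCapabilities driver) {
            if (!driver.capabilities().containsAll(requested)) {
                Set<Capability> missing = EnumSet.copyOf(requested);
                missing.removeAll(driver.capabilities());
                throw new UnsupportedOperationException("No suitable storage for requested QoS; missing " + missing);
            }
        }

        public static void main(String[] args) {
            StorageDriverCapabilities solidFireLike = () -> EnumSet.of(Capability.MIN_IOPS, Capability.MAX_IOPS);
            ensureSatisfiable(EnumSet.of(Capability.MIN_IOPS, Capability.MAX_IOPS), solidFireLike); // ok
            ensureSatisfiable(EnumSet.of(Capability.THROTTLED_BYTES), solidFireLike);               // throws
        }
    }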
> 
> 
> On Thu, Jun 13, 2013 at 3:31 PM, Mike Tutkowski <
> mike.tutkowski@solidfire.com> wrote:
> 
>> My thinking is, for 4.2, while not ideal, we will need to put some burden
>> on the admin to configure a Disk Offering in a way that makes sense. For
>> example, if he wants to use storage QoS with supported Min and Max values,
>> he'll have to put in a storage tag that references the SolidFire primary
>> storage (plug-in). If he puts in a storage tag that doesn't, then he's not
>> going to get the Min and Max feature. We could add help text to the pop-up
>> dialog that's displayed when you click in the Min and Max text fields.
>> 
>> Same idea for Wei's feature.
>> 
>> Not ideal, true...perhaps we can brainstorm on a more comprehensive
>> approach for post 4.2.
>> 
>> Maybe in the future we could have the drivers advertise their capabilities
>> and if the allocator feels a request is not being satisfied (say Min was
>> entered, but it's not supported by any storage plug-in) it can throw an
>> exception.
>> 
>> 
>> On Thu, Jun 13, 2013 at 3:19 PM, Mike Tutkowski <
>> mike.tutkowski@solidfire.com> wrote:
>> 
>>> Comments below in red.
>>> 
>>> Thanks
>>> 
>>> 
>>> On Thu, Jun 13, 2013 at 2:54 PM, John Burwell <jburwell@basho.com> wrote:
>>> 
>>>> Mike,
>>>> 
>>>> Please see my comment in-line below.
>>>> 
>>>> Thanks,
>>>> -John
>>>> 
>>>> On Jun 13, 2013, at 1:22 AM, Mike Tutkowski <
>>>> mike.tutkowski@solidfire.com> wrote:
>>>> 
>>>>> Hi John,
>>>>> 
>>>>> I've put comments below in red.
>>>>> 
>>>>> Thanks!
>>>>> 
>>>>> 
>>>>> On Wed, Jun 12, 2013 at 10:51 PM, John Burwell <jburwell@basho.com> wrote:
>>>>> 
>>>>>> Mike,
>>>>>> 
>>>>>> First and foremost, we must ensure that these two features are
>>>> mutually
>>>>>> exclusive in 4.2.  We don't want to find a configuration that
>>>> contains both
>>>>>> hypervisor and storage IOPS guarantees that leads to non-deterministic
>>>>>> operations.
>>>>> 
>>>>> 
>>>>> Agreed
>>>>> 
>>>>> 
>>>>>> Restricting QoS expression to be either hypervisor or storage oriented
>>>>>> solves the problem in the short term.  As I understand storage tags, we
>>>>>> have no means of expressing this type of mutual exclusion.  I wasn't
>>>>>> necessarily intending that we implement this allocation model in 4.3, but
>>>>>> instead, determine if this type of model would be one we would want to
>>>>>> support in the future.  If so, I would encourage us to ensure that the
>>>>>> data model and current implementation would not preclude evolution in that
>>>>>> direction.  My view is that this type of allocation model is what users
>>>>>> expect of "cloud" systems -- selecting the best available resource set to
>>>>>> fulfill a set of system requirements.
>>>>>> 
>>>>> 
>>>>> I believe we have met your requirement here in that what we've implemented
>>>>> should not make refinement difficult in the future. If we don't modify
>>>>> allocators for 4.2, but we do for 4.3, we've made relatively simple changes
>>>>> to enhance the current functioning of the system.
>>>>> 
>>>> 
>>>> Looking through both patches, I have to say that the aggregated result
>>>> seems a bit confusing.  There are six new attributes for throttled I/O and
>>>> two for provisioned IOPS with no obvious grouping.  My concern is not
>>>> technical, but rather, about maintainability.   At minimum, Javadoc should
>>>> be added explaining the two sets of attributes and their mutual exclusion.
>>>> 
>>> 
>>> I agree: We need JavaDoc to explain them and their mutual exclusion.
>>> 
>>> 
>>>> 
>>>> The other part that is interesting is that throttled I/O provides both
>>>> an IOPS and a byte-measured QoS as a rate, whereas provisioned IOPS uses a
>>>> range.  In order to select the best available resource to fulfill a QoS, we
>>>> would need to have the QoS expression normalized in terms of units (IOPS or
>>>> bytes) and their expression (rate vs. range).  If we want to achieve a
>>>> model like I described, I think we would need to resolve this issue in 4.2
>>>> to ensure a viable migration path.
>>> 
>>> 
>>> I think we're not likely to be able to normalize the input for 4.2. Plus
>>> people probably want to input the data in terms they're familiar with for
>>> the products in question.
>>> 
>>> Ideally we would fix the way we do storage tagging and let the user send
>>> key/value pairs to each vendor that could be selected due to a given
>>> storage tag. I'm still not sure that would solve it because what happens if
>>> you change the storage tag of a given Primary Storage after having created
>>> a Disk Offering?
>>> 
>>> Basically storage tagging is kind of a mess and we should re-think it.
>>> 
>>> Also, we need to have a way for the drivers to expose their supported
>>> feature sets so the allocators can make good choices.
>>> 
>>> 
>>>> 
>>>>> 
>>>>>> 
>>>>>> As I think through the implications of these requirements and reflect on
>>>>>> the reviews, I don't understand why they haven't already impacted the
>>>>>> allocators and planners.  As it stands, the current provisioned IOPS has no
>>>>>> checks to ensure that the volumes are allocated to devices that have
>>>>>> capacity to fulfill the requested QoS.  Therefore, as I understand the
>>>>>> current patch, we can overcommit storage resources -- potentially causing
>>>>>> none of the QoS obligations to be fulfilled.  It seems to me that a
>>>>>> DataStore supporting provisioned IOPS should express the maximum IOPS which
>>>>>> it can support and some type of overcommitment factor.  This information
>>>>>> should be used by the storage allocators to determine the device best able
>>>>>> to support the resource needs of a volume.  It seems that a similar set of
>>>>>> considerations would need to be added to the hypervisor layer when
>>>>>> allocating a VM to a host to prevent oversubscription.
>>>>>> 
>>>>> 
>>>>> Yeah, for this first release, we just followed the path that was previously
>>>>> established for other properties you see on dialogs in CS: Just because
>>>>> they're there doesn't mean the vendor your VM is deployed to supports them.
>>>>> It is then up to the admin to make sure he inputs, say, a storage tag that
>>>>> confines the deployment only to storage that supports the selected
>>>>> features. This is not ideal, but it's kind of the way CloudStack works
>>>>> today.
>>>> 
>>>> I understand the tag functionality, and the need for the administrator
>>>> to very carefully construct offerings.  My concern is that we over-guarantee
>>>> a resource's available IOPS.  For the purposes of illustration, let's say we
>>>> have a SolidFire device, and the max IOPS for that device is 100,000.  We
>>>> also know that we can safely oversubscribe by 50%.  Therefore, we need to
>>>> ensure that we don't allocate more than 150,000 guaranteed IOPS from that
>>>> device.  Intuitively, it seems like the DataStore configuration should have
>>>> a max assignable IOPS and an overcommitment factor.  As we allocate volumes
>>>> and attach VMs, we need to ensure that the guaranteed IOPS never exceed the
>>>> configured maximum for a DataStore.  Does that make sense?
>>>> 
>>> 
>>> I think that's a good idea for a future enhancement. I'm not even sure I
>>> can query the SAN to find out how many IOPS safely remain. I'd have to get
>>> all of the min values for all of the volumes on the SAN and total them up,
>>> I suppose, and subtract it from the total (user facing) supported IOPS of
>>> the system.
>>> 
>>> 
>>>> 
>>>>> 
>>>>> 
>>>>>> 
>>>>>> Another question occurs to me -- should we allow non-QoS resources to be
>>>>>> assigned to hosts/storage devices that ensure QoS?  For provisioned IOPS, I
>>>>>> think a side effect of the current implementation is that SolidFire volumes
>>>>>> always have a QoS.  However, for hypervisor throttled I/O, it seems
>>>>>> entirely possible for non-QoS VMs to be allocated side-by-side with QoS
>>>>>> VMs.  In this scenario, a greedy, unbounded VM could potentially starve out
>>>>>> the other VMs on the host -- preventing the QoSes defined for the
>>>>>> collocated VMs from being fulfilled.
>>>>>> 
>>>>> 
>>>>> You can make SolidFire volumes (inside and outside of CS) and not specify
>>>>> IOPS. You'll still get guaranteed IOPS, but it will be at the defaults we
>>>>> choose. Unless you over-provision IOPS on a SolidFire SAN, you will have
>>>>> your Mins met.
>>>>>
>>>>> It sounds like you're perhaps looking for a storage tags exclusions list,
>>>>> which might be nice to have at some point (i.e. don't deploy my volume to
>>>>> storage with these following tags).
>>>> 
>>>> I don't like the idea of a storage tags exclusion list as it would
>>>> complicate component assembly.  It would require a storage plugin to
>>>> anticipate all of the possible component assemblies and determine the
>>>> invalid relationships.  I prefer that drivers express their capabilities
>>>> which can be matched to a set of requested requirements.
>>>> 
>>> 
>>> I'm not sure why a storage plug-in would care about inclusion or
>>> exclusion lists. It just needs to advertise its functionality in a way the
>>> allocator understands.
>>> 
>>> 
>>>> 
>>>>> 
>>>>> I agree with your assessment of Hypervisor QoS. Since it only limits IOPS,
>>>>> it does not solve the Noisy Neighbor problem. Only a system with guaranteed
>>>>> minimum IOPS does this.
>>>> 
>>>> As I said, for SolidFire, it sounds like this problem does not exist.
>>>> However, I am concerned with the more general case as we support more
>>>> devices with provisioned IOPS.
>>>> 
>>> 
>>> Post 4.2 we need to investigate a way to pass vendor-specific values to
>>> drivers. Min and Max are pretty industry standard for provisioned IOPS, but
>>> what if you break them out by read and write or do something else? We need
>>> a more general mechanism.
>>> 
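As a rough sketch of such a general mechanism (all names below are made up for illustration),
the offering would just carry opaque key/value pairs that only the selected driver interprets,
so vendor-specific splits like read/write IOPS need no new schema columns:

    import java.util.Map;
    import java.util.Optional;

    /** Illustrative sketch: vendor-specific QoS knobs carried as opaque key/value pairs. */
    public class DriverQosDetailsSketch {

        /** Hypothetical hook a storage driver could implement to consume offering details. */
        interface QosDetailConsumer {
            void applyQos(Map<String, String> details);
        }

        public static void main(String[] args) {
            // What an admin might attach to a disk offering; keys are driver-defined, not schema columns.
            Map<String, String> details = Map.of(
                    "minReadIops", "2500",
                    "minWriteIops", "2500",
                    "maxReadIops", "7500",
                    "maxWriteIops", "7500");

            QosDetailConsumer someVendorDriver = d -> {
                long minRead = Long.parseLong(Optional.ofNullable(d.get("minReadIops")).orElse("0"));
                long minWrite = Long.parseLong(Optional.ofNullable(d.get("minWriteIops")).orElse("0"));
                System.out.println("Driver applying read/write floors: " + minRead + "/" + minWrite);
            };

            someVendorDriver.applyQos(details);
        }
    }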
>>>> 
>>>>> 
>>>>> 
>>>>>> 
>>>>>> In my opinion,  we need to ensure that hypervisor throttled I/O and
>>>>>> storage provisioned IOPS are mutually exclusive per volume.
>>>>> 
>>>>> 
>>>>> Agreed
>>>>> 
>>>>> 
>>>>>> We also need to understand the implications of these QoS guarantees on
>>>>>> operation of the system to ensure that the underlying hardware resources
>>>>>> can fulfill them.  Given the time frame, we will likely be forced to make
>>>>>> compromises to achieve these goals, and refine the implementation in 4.3.
>>>>>> 
>>>>> 
>>>>> I agree, John. I also think you've come up with some great ideas for 4.3. :)
>>>>> 
>>>>> 
>>>>>> 
>>>>>> Thanks,
>>>>>> -John
>>>>>> 
>>>>>> On Jun 12, 2013, at 11:35 PM, Mike Tutkowski <mike.tutkowski@solidfire.com> wrote:
>>>>>> 
>>>>>>> Hi,
>>>>>>> 
>>>>>>> Yeah, Alex, I think that's the way we were planning (with storage tags). I
>>>>>>> believe John was just throwing out an idea that - in addition to storage
>>>>>>> tags - we could look into these allocators (storage QoS being preferred,
>>>>>>> then hypervisor QoS if storage QoS is not available, but hypervisor QoS
>>>>>>> is).
>>>>>>> 
>>>>>>> I think John's concern is that you can enter in values for Wei's and my
>>>>>>> feature that are not honored by other vendors (at least yet), so he was
>>>>>>> hoping - in addition to storage tags - for the allocators to prefer these
>>>>>>> vendors when these fields are filled in. As it stands today in CloudStack,
>>>>>>> we already have this kind of an issue with other features (fields in
>>>>>>> dialogs for features that not all vendors support). Perhaps post 4.2 we
>>>>>>> could look into generic name/value pairs (this is how OpenStack addresses
>>>>>>> the issue).
>>>>>>> 
>>>>>>> Honestly, I think we're too late in the game (two weeks until code freeze)
>>>>>>> to go too deeply down that path in 4.2.
>>>>>>> 
>>>>>>> It's probably better if we - at least for 4.2 - keep Wei's fields and my
>>>>>>> fields as is, make sure only one or the other feature has data entered for
>>>>>>> it (or neither), and call it good for 4.2.
>>>>>>> 
>>>>>>> Then let's step back and look into a more general-purpose design that can
>>>>>>> be applied throughout CloudStack where we have these situations.
>>>>>>> 
>>>>>>> What do you think?
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> On Wed, Jun 12, 2013 at 5:21 PM, John Burwell <jburwell@basho.com> wrote:
>>>>>>> 
>>>>>>>> Mike,
>>>>>>>> 
>>>>>>>> I just published my review @ https://reviews.apache.org/r/11479/.
>>>>>>>> 
>>>>>>>> I apologize for the delay,
>>>>>>>> -John
>>>>>>>> 
>>>>>>>> On Jun 12, 2013, at 12:43 PM, Mike Tutkowski <mike.tutkowski@solidfire.com> wrote:
>>>>>>>> 
>>>>>>>>> No problem, John.
>>>>>>>>> 
>>>>>>>>> I still want to re-review it by myself before coming up with a new
>>>>>>>>> patch file.
>>>>>>>>> 
>>>>>>>>> Also, maybe I should first wait for Wei's changes to be checked in and
>>>>>>>>> merge those into mine before generating a new patch file?
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> On Wed, Jun 12, 2013 at 10:40 AM, John Burwell <jburwell@basho.com> wrote:
>>>>>>>>> 
>>>>>>>>>> Mike,
>>>>>>>>>> 
>>>>>>>>>> I just realized that I forgot to publish my review.
 I am offline
>>>> ATM,
>>>>>>>>>> but I will publish it in the next couple of hours.
>>>>>>>>>> 
>>>>>>>>>> Do you plan to update the patch in Review Board?
>>>>>>>>>> 
>>>>>>>>>> Sorry for the oversight,
>>>>>>>>>> -John
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> On Jun 12, 2013, at 2:26 AM, Mike Tutkowski
>>>>>>>>>> <mike.tutkowski@solidfire.com> wrote:
>>>>>>>>>> 
>>>>>>>>>>> Hi Edison, John, and Wei (and whoever else is reading this :) ),
>>>>>>>>>>> 
>>>>>>>>>>> Just an FYI that I believe I have implemented all the areas we wanted
>>>>>>>>>>> addressed.
>>>>>>>>>>> 
>>>>>>>>>>> I plan to review the code again tomorrow morning or afternoon, then
>>>>>>>>>>> send in another patch.
>>>>>>>>>>> 
>>>>>>>>>>> Thanks for all the work on this everyone!
>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>> On Tue, Jun 11, 2013 at 12:29 PM, Mike Tutkowski <mike.tutkowski@solidfire.com> wrote:
>>>>>>>>>>> 
>>>>>>>>>>>> Sure, that sounds good.
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>> On Tue, Jun 11, 2013 at 12:11 PM, Wei ZHOU <ustcweizhou@gmail.com> wrote:
>>>>>>>>>>>> 
>>>>>>>>>>>>> Hi Mike,
>>>>>>>>>>>>> 
>>>>>>>>>>>>> It looks like the two features do not have many conflicts in Java
>>>>>>>>>>>>> code, except the CloudStack UI.
>>>>>>>>>>>>> If you do not mind, I will merge the disk_io_throttling branch into
>>>>>>>>>>>>> master this week, so that you can develop based on it.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> -Wei
>>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>>> 2013/6/11 Mike Tutkowski <mike.tutkowski@solidfire.com>
>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Hey John,
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> The SolidFire patch does not depend on the object_store branch, but -
>>>>>>>>>>>>>> as Edison mentioned - it might be easier if we merge the SolidFire
>>>>>>>>>>>>>> branch into the object_store branch before object_store goes into
>>>>>>>>>>>>>> master.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I'm not sure how the disk_io_throttling branch fits into this merge
>>>>>>>>>>>>>> strategy. Perhaps Wei can chime in on that.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> On Tue, Jun 11, 2013 at 11:07 AM, John Burwell <jburwell@basho.com> wrote:
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> Mike,
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> We have a delicate merge dance to perform.  The disk_io_throttling,
>>>>>>>>>>>>>>> solidfire, and object_store branches appear to have a number of
>>>>>>>>>>>>>>> overlapping elements.  I understand the dependencies between the
>>>>>>>>>>>>>>> patches to be as follows:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>     object_store <- solidfire -> disk_io_throttling
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> Am I correct that the device management aspects of SolidFire are
>>>>>>>>>>>>>>> additive to the object_store branch, or are there circular
>>>>>>>>>>>>>>> dependencies between the branches?  Once we understand the
>>>>>>>>>>>>>>> dependency graph, we can determine the best approach to land the
>>>>>>>>>>>>>>> changes in master.
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>>> -John
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> On Jun 10, 2013, at 11:10 PM, Mike Tutkowski <mike.tutkowski@solidfire.com> wrote:
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> Also, if we are good with Edison merging my code into his branch
>>>>>>>>>>>>>>>> before going into master, I am good with that.
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> We can remove the StoragePoolType.Dynamic code after his merge and
>>>>>>>>>>>>>>>> we can deal with Burst IOPS then, as well.
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> On Mon, Jun 10, 2013 at 9:08 PM, Mike Tutkowski <mike.tutkowski@solidfire.com> wrote:
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> Let me make sure I follow where we're going here:
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> 1) There should be NO references to hypervisor code in the storage
>>>>>>>>>>>>>>>>> plug-ins code (this includes the default storage plug-in, which
>>>>>>>>>>>>>>>>> currently sends several commands to the hypervisor in use (although
>>>>>>>>>>>>>>>>> it does not know which hypervisor (XenServer, ESX(i), etc.) is
>>>>>>>>>>>>>>>>> actually in use))
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> 2) managed=true or managed=false can be placed in the url field (if
>>>>>>>>>>>>>>>>> not present, we default to false). This info is stored in the
>>>>>>>>>>>>>>>>> storage_pool_details table.
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> 3) When the "attach" command is sent to the hypervisor in question,
>>>>>>>>>>>>>>>>> we pass the managed property along (this takes the place of the
>>>>>>>>>>>>>>>>> StoragePoolType.Dynamic check).
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> 4) execute(AttachVolumeCommand) in the hypervisor checks for the
>>>>>>>>>>>>>>>>> managed property. If true for an attach, the necessary hypervisor
>>>>>>>>>>>>>>>>> data structure is created and the rest of the attach command
>>>>>>>>>>>>>>>>> executes to attach the volume.
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> 5) When execute(AttachVolumeCommand) is invoked to detach a volume,
>>>>>>>>>>>>>>>>> the same check is made. If managed, the hypervisor data structure is
>>>>>>>>>>>>>>>>> removed.
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> 6) I do not see a clear way to support Burst IOPS in 4.2 unless it
>>>>>>>>>>>>>>>>> is stored in the volumes and disk_offerings tables. If we have some
>>>>>>>>>>>>>>>>> idea, that'd be cool.
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> Thanks!
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> On Mon, Jun 10, 2013 at 8:58 PM, Mike Tutkowski <mike.tutkowski@solidfire.com> wrote:
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>> "+1 -- Burst IOPS
can be implemented while avoiding
>>>>>>>> implementation
>>>>>>>>>>>>>>>>>> attributes.  I always
wondered about the details field.  I
>>>>>> think
>>>>>>>>>>>>> we
>>>>>>>>>>>>>>> should
>>>>>>>>>>>>>>>>>> beef up the description
in the documentation regarding the
>>>>>>>>>>>>> expected
>>>>>>>>>>>>>>> format
>>>>>>>>>>>>>>>>>> of the field.  In
4.1, I noticed that the details are not
>>>>>>>>>>>>> returned on
>>>>>>>>>>>>>>> the
>>>>>>>>>>>>>>>>>> createStoratePool
updateStoragePool, or listStoragePool
>>>>>>>> response.
>>>>>>>>>>>>>> Why
>>>>>>>>>>>>>>>>>> don't we return it?
 It seems like it would be useful for
>>>>>>>> clients
>>>>>>>>>>>>> to
>>>>>>>>>>>>>> be
>>>>>>>>>>>>>>>>>> able to inspect the
contents of the details field."
>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>> Not sure how this would work storing Burst IOPS here.
>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>> Burst IOPS need to be variable on a Disk Offering-by-Disk Offering
>>>>>>>>>>>>>>>>>> basis. For each Disk Offering created, you have to be able to
>>>>>>>>>>>>>>>>>> associate unique Burst IOPS. There is a disk_offering_details
>>>>>>>>>>>>>>>>>> table. Maybe it could go there?
>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>> I'm also not sure how you would accept the Burst IOPS in the GUI
>>>>>>>>>>>>>>>>>> if it's not stored like the Min and Max fields are in the DB.
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>> 
>>>>>> 
>>>>> 
>>>>> 
>>>> 
>>>> 
>>> 
>>> 
>>> --
>>> *Mike Tutkowski*
>>> *Senior CloudStack Developer, SolidFire Inc.*
>>> e: mike.tutkowski@solidfire.com
>>> o: 303.746.7302
>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>> *™*
>>> 
>> 
>> 
>> 
>> --
>> *Mike Tutkowski*
>> *Senior CloudStack Developer, SolidFire Inc.*
>> e: mike.tutkowski@solidfire.com
>> o: 303.746.7302
>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>> *™*
>> 
> 
> 
> 
> -- 
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the
> cloud<http://solidfire.com/solution/overview/?video=play>
> *™*

