cloudstack-dev mailing list archives

Site index · List index
Message view « Date » · « Thread »
Top « Date » · « Thread »
From Mike Tutkowski <mike.tutkow...@solidfire.com>
Subject Re: [MERGE] disk_io_throttling to MASTER
Date Fri, 07 Jun 2013 23:55:50 GMT
I don't understand this line:

"If the DataStore is not managed and the underlying storage is not managed,
an exception is thrown."

>From what I know, a DataStore is just an object that describes a row in the
storage_pool table. Is that incorrect?

I could see the underlying storage being usable in both a managed or
unmanaged mode, but the DataStore itself (a row in the storage_pool table)
would always be in only one of these states, right?


On Fri, Jun 7, 2013 at 5:20 PM, John Burwell <jburwell@basho.com> wrote:

> Edison,
>
> Please see my comments in-line below.
>
> Thanks,
> -John
>
> On Jun 7, 2013, at 6:49 PM, Edison Su <Edison.su@citrix.com> wrote:
>
> >
> >
> >> -----Original Message-----
> >> From: John Burwell [mailto:jburwell@basho.com]
> >> Sent: Friday, June 07, 2013 3:26 PM
> >> To: dev@cloudstack.apache.org
> >> Subject: Re: [MERGE] disk_io_throttling to MASTER
> >>
> >>
> >> On Jun 7, 2013, at 6:13 PM, Edison Su <Edison.su@citrix.com> wrote:
> >>
> >>>
> >>>
> >>>> -----Original Message-----
> >>>> From: Mike Tutkowski [mailto:mike.tutkowski@solidfire.com]
> >>>> Sent: Friday, June 07, 2013 2:37 PM
> >>>> To: dev@cloudstack.apache.org
> >>>> Subject: Re: [MERGE] disk_io_throttling to MASTER
> >>>>
> >>>> As we only have three weeks until feature freeze, we should come to a
> >>>> consensus on this design point as soon as possible.
> >>>>
> >>>> Right now, if the storage framework asks my driver it is is managed,
> it
> >>>> will say 'yes.' This means the framework will tell the driver to
> perform
> >>>> its management activities. This then means the driver will call into
> the
> >>>> host (it doesn't know which hypervisor, by the way) to perform the
> >> activity
> >>>> of, say, creating an SR on XenServer or a datastore on ESX.
> >>>>
> >>>> The driver doesn't know which hypervisor it's talking to, it just
> sends a
> >>>> message to the host to perform the necessary pre-attach work.
> >>>
> >>> Could we just expose a method like "attachVolume/dettachVolume" on
> >> the PrimaryDataStoreDriver driver?
> >>> For most of cases, the implementation of each driver would just send
> >> attachvolumecommand/dettachvolumecmd to hypervisor(we can put the
> >> implementation in a base class, so that can be shared by all of this
> drivers),
> >> and in each hypervisor resource code, which may just call hypervisor's
> api to
> >> attach the volume to VM, while for certain storage, like SolidFire, may
> need
> >> to create a SR first, and create volume on it, then call hypervisor's
> API to
> >> attach volume to VM.
> >>> While for some other storage vendor, may want to bypass hypervisor
> >> during attaching volume, so inside driver's attachvolume implementation,
> >> the driver can do some magic, such as, directly talk to an agent inside
> VM
> >> instance, then create a disk inside VM.
> >>>
> >>> How do you guys think?
> >>
> >> This behavior is the opposite of my expectations.  I would expect the
> >> VirtualMachineManager to coordinate this process.  Why would Storage
> >> need to know or care about a Hypervisor?  My understanding is that Xen
> >> needs to create an SR in a particular manner on all ISCSI devices upon
> >> allocation.  On attach, the Xen plugin would query the datastore to
> >> determine the presence of an SR, and create it as necessary.  It would
> >> proceed with the creating the volume, etc.  In the "real world", SANs
> have no
> >> knowledge about the concept of a hypervisor let alone specific
> >> implementations.  I think we would be wise to replicate that model in
> >> CloudStack.
> >
> > I think there are volumes managed/created by hypervisor(the normal
> case), and there are volumes managed by storage vendor(solidfire case). How
> to expose the volume to user/VM may be quite different in both cases.
> > If the volume is managed by storage vendor, then vendor wants the volume
> to be used by hypervisor, won't it be the storage vendor's responsibility
> to do whatever the hypervisor wants?
> > It's the same, if volume is managed by storage vendor, and vendor wants
> the volume to used by VM directly, won't it be the storage vendor's
> responsibility to do whatever magic they want?
> > If the responsibility is on the storage vendor, then why we can't let
> storage vendor to do whatever necessary steps to get the things done?
>
> It is not the responsibility of the storage vendor to understand the needs
> of the hypervisor.  It is the responsibility of the storage vendor to
> provide interfaces to perform certain low level functions (e.g. create a
> volume, delete a volume, etc).  Users of the DataStore compose these
> operations to achieve certain goals (e.g. attach a volume to a hypervisor,
> create structures, create volumes, etc).
>
> You are describing the managed vs. unmanaged device concept that the
> SolidFire patch introduces.  For DataStoreDrivers that can be managed, the
> associated DataStores can have management enabled.  When a device is
> manageable, then the Storage layer recognizes that it can do more things
> with it.  For example, when executing attachVolume, the orchestration
> pieces can asked managed DataStores to ensure that the underlying storage
> has been allocated to support the volume.  From a CloudStack perspective,
> we know when management functions need to be invoked -- we only need the
> driver to provide the discrete operations.  For the volume attach scenario
> we are discussing, I am thinking the basic would be as follows -- starting
> in a VirtualMachineManager:
>
>         1. VirtualMachineManager initiates the allocation process --
> creating records, networks, etc (hand wavy a bit) until we get to storage
>         2. For each Disk Offering invoke VolumeManager
>                 A. Allocate through VolumeManager
>                         i. VolumeManager acquires the DataStore, and if
> managed, allocates the underlying storage.  If the DataStore is not managed
> and the underlying storage is not managed, an exception is thrown.
>                 B. The resulting volume is attached to the new VM through
> the hypervisor plugin (callback point for Xen to create an SR structure ->
> query the new volume to determine whether or not the SR is present)
>         3. The rest of the VM allocation process completes
>
> Based on this process, the DataStoreDriver provides the discrete
> operations necessary to be composed by the Storage and Hypervisor
> orchestration components to complete various goals.
>
> Does this process make sense?
>
> >
> >>
> >>>
> >>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>> On Fri, Jun 7, 2013 at 3:14 PM, Edison Su <Edison.su@citrix.com>
> wrote:
> >>>>
> >>>>>
> >>>>>
> >>>>>> -----Original Message-----
> >>>>>> From: Mike Tutkowski [mailto:mike.tutkowski@solidfire.com]
> >>>>>> Sent: Friday, June 07, 2013 1:14 PM
> >>>>>> To: dev@cloudstack.apache.org
> >>>>>> Subject: Re: [MERGE] disk_io_throttling to MASTER
> >>>>>>
> >>>>>> Hi John,
> >>>>>>
> >>>>>> How's about this:
> >>>>>>
> >>>>>> The driver can implement an isManaged() method. The
> >>>> VolumeManagerImpl
> >>>>>> can
> >>>>>> call into the driver to see if its managed. If it is, the
> >>>>> VolumeManagerImpl
> >>>>>> (which is responsible for calling into the hypervisor to attach the
> disk)
> >>>>>> can call into the hypervisor to create the necessary hypervisor data
> >>>>>> structure (ex. for XenServer, a storage repository).
> >>>>>
> >>>>> The problem here is that storage vendor may work differently with
> >>>>> hypervisor, for example, SolidFire wants a SR per LUN, while maybe
> >> other
> >>>>> vendor wants to totally bypass hypervisor, and assign the LUN
> directly to
> >>>>> VM instance, see the discuss(
> >>>>> http://mail-archives.apache.org/mod_mbox/cloudstack-
> >>>>
> >> dev/201303.mbox/%3C06f219312189b019a8763a5777ecc430@mail.gmail.com
> >>>> %3E
> >>>>> ).
> >>>>> So I would let storage provider to implement attach disk to VM,
> instead
> >> of
> >>>>> implemented by cloudstack itself.
> >>>>>
> >>>>>
> >>>>>>
> >>>>>> If that's what you're going for, that works for me. By the way,
> Edison's
> >>>>>> default storage plug-in (which handles the default storage behavior
> in
> >>>>>> CloudStack (ex. how pre 4.2 works)) does include code that talks to
> >>>>>> hypervisors. You might want to contact him and inform him of your
> >>>>> concerns
> >>>>>> or that logic (as is) will make it to production.
> >>>>>>
> >>>>>> Please let me know if what I wrote in above (for my solution) is OK
> with
> >>>>>> you. :)
> >>>>>>
> >>>>>> Thanks!
> >>>>>>
> >>>>>>
> >>>>>> On Fri, Jun 7, 2013 at 1:49 PM, John Burwell <jburwell@basho.com>
> >>>> wrote:
> >>>>>>
> >>>>>>> Mike,
> >>>>>>>
> >>>>>>> Please see my responses in-line below.
> >>>>>>>
> >>>>>>> Thanks,
> >>>>>>> -John
> >>>>>>>
> >>>>>>> On Jun 7, 2013, at 1:50 AM, Mike Tutkowski <
> >>>>> mike.tutkowski@solidfire.com>
> >>>>>>> wrote:
> >>>>>>>
> >>>>>>>> Hey John,
> >>>>>>>>
> >>>>>>>> I still have a bit more testing I'd like to do before I build up a
> >>>>> patch
> >>>>>>>> file, but this is the gist of what I've done:
> >>>>>>>>
> >>>>>>>> * During a volume-attach operation, after VolumeManagerImpl tells
> >>>>>>>> VolumeServiceImpl to have the driver create a volume, I have
> >>>>>>>> VolumeManagerImpl tell VolumeServiceImpl to ask the driver if it
> >>>>>> managed.
> >>>>>>>> If it is managed, VolumeServiceImpl has the driver perform
> >> whatever
> >>>>>>>> activity is required. In my case, this includes sending a message
> to
> >>>>> the
> >>>>>>>> host where the VM is running to have, say XenServer, add a storage
> >>>>>>>> repository (based on the IP address of the SAN, the IQN of the SAN
> >>>>>>> volume,
> >>>>>>>> etc.) and a single VDI (the VDI consumes all of the space it can
> on
> >>>>> the
> >>>>>>>> storage repository). After this, the normal attach-volume message
> is
> >>>>> sent
> >>>>>>>> to the host by VolumeManagerImpl.
> >>>>>>>
> >>>>>>> There should be **no** code from a storage driver to a hypervisor.
>  I
> >>>>>>> apologize for the repetition, but we simply can not have hypervisor
> >>>>>>> specific code in the storage layer.  The circular dependencies
> between
> >>>>> the
> >>>>>>> two layers are not sustainable in the long term.  Either the
> >>>>> VirtualManager
> >>>>>>> or Xen hypervisor plugin needs to be refactored/modified to
> >>>> coordinate
> >>>>>>> volume creation and then populating the SR.  Ideally, we can
> >>>>> generalize the
> >>>>>>> process flow for attaching volumes such that the Xen hypervisor
> >> plugin
> >>>>>>> would only implement callbacks to perform the attach action and
> >> create
> >>>>>> the
> >>>>>>> structure and SR.  To my mind, the SolidFire driver should only be
> >>>>>>> allocating space and providing information about contents (e.g.
> space
> >>>>>>> available, space consumed, streams to a URI, file handle for a URI,
> >>>>> etc)
> >>>>>>> and capabilities.
> >>>>>>>
> >>>>>>>
> >>>>>>>>
> >>>>>>>> * The reverse is performed for a detach-volume command.
> >>>>>>>>
> >>>>>>>> * Right now I simply "return true;" for isManaged() in my driver.
> >>>>>>> Edison's
> >>>>>>>> default driver simply does a "return false;". We could add a new
> >>>>>>> parameter
> >>>>>>>> to the createStoragePool API command, if we want, to remove the
> >>>>>>> hard-coded
> >>>>>>>> return values in the drivers (although my driver will probably
> just
> >>>>>>> ignore
> >>>>>>>> this parameter and always return true since it wouldn't make sense
> >>>>> for it
> >>>>>>>> to ever return false). We'd need another column in the
> storage_pool
> >>>>>> table
> >>>>>>>> to store this value.
> >>>>>>>
> >>>>>>> Yes, I think we should have a parameter added to the
> >>>> createStoragePool
> >>>>>>> surfaced to the HTTP API that allows DataStores to be configured
> for
> >>>>>>> management when their underlying drivers support it.  To simplify
> >>>>> things,
> >>>>>>> this flag should only be mutable when the DataStore is created. It
> >>>>> would be
> >>>>>>> a bit crazy to take a DataStore from managed to unmanaged after
> >>>>> creation.
> >>>>>>>
> >>>>>>>>
> >>>>>>>> Sound like I'm in sync with what you were thinking?
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> On Thu, Jun 6, 2013 at 9:34 PM, Mike Tutkowski <
> >>>>>>> mike.tutkowski@solidfire.com
> >>>>>>>>> wrote:
> >>>>>>>>
> >>>>>>>>> I agree, John. Just wanted to point out that I have a working GUI
> >>>>> for
> >>>>>>> you
> >>>>>>>>> to review (in that document), if you'd like to check it out.
> >>>>>>>>>
> >>>>>>>>> Thanks!
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> On Thu, Jun 6, 2013 at 8:34 PM, John Burwell <jburwell@basho.com
> >
> >>>>>>> wrote:
> >>>>>>>>>
> >>>>>>>>>> Mike,
> >>>>>>>>>>
> >>>>>>>>>> I would like the UIs of two features reviewed together to ensure
> >>>>>>>>>> consistency across the concepts of hypervisor throttled IOPs and
> >>>>>>>>>> storage device provisioned IOPs.  I see the potential for
> >>>>> confusion,
> >>>>>>>>>> and I think a side-by-side Ui review of these features will help
> >>>>>>>>>> minimize any potential confusion.
> >>>>>>>>>>
> >>>>>>>>>> As I mentioned, the term reconciliation issue will work itself
> if
> >>>>> it
> >>>>>>>>>> is acceptable that a VM is only permitted utilize hypervisor
> >>>>> throttled
> >>>>>>>>>> IOPs or storage provisioned IOPs.
> >>>>>>>>>>
> >>>>>>>>>> Thanks,
> >>>>>>>>>> -John
> >>>>>>>>>>
> >>>>>>>>>> On Jun 6, 2013, at 10:05 PM, Mike Tutkowski
> >>>>>>>>>> <mike.tutkowski@solidfire.com> wrote:
> >>>>>>>>>>
> >>>>>>>>>>> Hi John,
> >>>>>>>>>>>
> >>>>>>>>>>> Yeah, when you get a chance, refer to the Google doc I sent to
> >>>> you
> >>>>>> the
> >>>>>>>>>>> other day to see how the GUI looks for provisioned storage
> IOPS.
> >>>>>>>>>>>
> >>>>>>>>>>> Several months ago, I put this topic out on the e-mail list
> and we
> >>>>>>>>>> decided
> >>>>>>>>>>> to place the Min, Max, and Burst IOPS in the Add Disk Offering
> >>>>> dialog.
> >>>>>>>>>>> Other storage vendors are coming out with QoS, so they should
> >>>> be
> >>>>>> able
> >>>>>>> to
> >>>>>>>>>>> leverage this GUI going forward (even if they, say, only use
> Max
> >>>>>>> IOPS).
> >>>>>>>>>>> These fields are optional and can be ignored for storage that
> >>>>> does not
> >>>>>>>>>>> support provisioned IOPS. Just like the Disk Size field, the
> >>>>> admin can
> >>>>>>>>>>> choose to allow the end user to fill in Min, Max, and Burst
> IOPS.
> >>>>>>>>>>>
> >>>>>>>>>>> I'm OK if we do an either/or model (either Wei's feature or
> mine,
> >>>>> as
> >>>>>>> is
> >>>>>>>>>>> decided by the admin).
> >>>>>>>>>>>
> >>>>>>>>>>> I'm not sure what we can do about these two features
> >> expressing
> >>>>> the
> >>>>>>>>>> speed
> >>>>>>>>>>> in different terms. I've never seen a SAN express the IOPS for
> >>>>> QoS in
> >>>>>>>>>> any
> >>>>>>>>>>> way other than total IOPS (i.e. not broken in into read/write
> >>>>> IOPS).
> >>>>>>>>>>>
> >>>>>>>>>>> Thanks
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>> On Thu, Jun 6, 2013 at 7:16 PM, John Burwell
> >>>> <jburwell@basho.com>
> >>>>>>>>>> wrote:
> >>>>>>>>>>>
> >>>>>>>>>>>> Wei,
> >>>>>>>>>>>>
> >>>>>>>>>>>> We have been down the rabbit hole a bit on the
> >>>> Storage/Hypervisor
> >>>>>>> layer
> >>>>>>>>>>>> separation, but we still need to reconcile the behavior of
> >>>>> hypervisor
> >>>>>>>>>>>> throttled I/O and storage provisioned IOPS.  I see the
> following
> >>>>>>> issues
> >>>>>>>>>>>> outstanding:
> >>>>>>>>>>>>
> >>>>>>>>>>>> 1. Hypervisor throttled IOPS are expressed as discrete
> >>>> read/write
> >>>>>>>>>> values
> >>>>>>>>>>>> where as storage provisioned IOPS are expressed as total IOPS.
> >>>>>>>>>>>> 2. How do we handle VMs with throttled IOPS attached to
> >>>> storage
> >>>>>>> volumes
> >>>>>>>>>>>> with provisioned IOPS?
> >>>>>>>>>>>> 3. How should usage data be captured for throttled and
> >>>>> provisioned
> >>>>>>> IOPS
> >>>>>>>>>>>> that will permit providers to discriminate these guaranteed
> >>>>>>> operations
> >>>>>>>>>> in
> >>>>>>>>>>>> the event they want to bill for it differently?
> >>>>>>>>>>>> 4. What is the user experience for throttled and provisioned
> >>>> IOPS
> >>>>>>> that
> >>>>>>>>>>>> minimizes confusion of these concepts?
> >>>>>>>>>>>>
> >>>>>>>>>>>> My thinking is that a VM can have either utilize hypervisor
> >>>>> throttled
> >>>>>>>>>> IOPS
> >>>>>>>>>>>> or storage provisioned IOPS.  This policy would neatly solve
> >>>>> items 1
> >>>>>>>>>> and 2.
> >>>>>>>>>>>> Since the two facilities would not be permitted to operate
> >>>>> together,
> >>>>>>>>>> they
> >>>>>>>>>>>> do not need to be semantically compatible.  I think item 3 can
> >> be
> >>>>>>>>>> resolved
> >>>>>>>>>>>> with an additional flag or two on the usage records.  As for
> >>>>> Item 4,
> >>>>>>> I
> >>>>>>>>>> am
> >>>>>>>>>>>> not familiar with how these two enhancements are depicted in
> >>>> the
> >>>>>> user
> >>>>>>>>>>>> interface.  I think we need to review the UI enhancements for
> >>>>> both
> >>>>>>>>>>>> enhancements and ensure they are consistent.
> >>>>>>>>>>>>
> >>>>>>>>>>>> Do these solutions make sense?
> >>>>>>>>>>>>
> >>>>>>>>>>>> Thanks,
> >>>>>>>>>>>> -John
> >>>>>>>>>>>>
> >>>>>>>>>>>> On Jun 6, 2013, at 5:22 PM, Wei ZHOU
> >> <ustcweizhou@gmail.com>
> >>>>>> wrote:
> >>>>>>>>>>>>
> >>>>>>>>>>>>> John and Mike,
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> I was busy working on other issues (CLOUDSTACK-2780/2729,
> >>>>>>>>>>>>> CLOUDSTACK-2856/2857/2865, CLOUDSTACK-2823 ,
> >>>> CLOUDSTACK-
> >>>>>> 2875 ) this
> >>>>>>>>>> week.
> >>>>>>>>>>>>> I will start to develop on iops/bps changes tomorrow, and ask
> >>>>> for
> >>>>>>>>>> second
> >>>>>>>>>>>>> merge request after testing.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> -Wei
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> 2013/6/6 Mike Tutkowski <mike.tutkowski@solidfire.com>
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>> I believe I understand where you're going with this, John,
> >> and
> >>>>>> have
> >>>>>>>>>> been
> >>>>>>>>>>>>>> re-working this section of code today.
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> I should be able to run it by you tomorrow.
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> Thanks for the comments,
> >>>>>>>>>>>>>> Mike
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> On Thu, Jun 6, 2013 at 3:12 PM, John Burwell
> >>>>>> <jburwell@basho.com>
> >>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> Mike,
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> See my responses in-line below.
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> Thanks,
> >>>>>>>>>>>>>>> -John
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> On Jun 6, 2013, at 11:09 AM, Mike Tutkowski <
> >>>>>>>>>>>>>> mike.tutkowski@solidfire.com>
> >>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> Hi John,
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> Thanks for the response.
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> "I am fine with the VolumeManager determining whether
> >> or
> >>>>>> not a
> >>>>>>>>>> Volume
> >>>>>>>>>>>> is
> >>>>>>>>>>>>>>> managed (i.e. not based on the StoragePoolType, but an
> >>>> actual
> >>>>>>>>>> isManaged
> >>>>>>>>>>>>>>> method), and asking the device driver to allocate resources
> >>>>> for
> >>>>>>> the
> >>>>>>>>>>>>>> volume
> >>>>>>>>>>>>>>> if it is managed."
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> Are you thinking you'd like to see an isManaged() method
> >>>>>> added to
> >>>>>>>>>> the
> >>>>>>>>>>>>>>> PrimaryDataStoreDriver interface? If it returns true, then
> >> the
> >>>>>>>>>> storage
> >>>>>>>>>>>>>>> framework could call the manage() (or whatever name)
> >>>> method
> >>>>>> (which
> >>>>>>>>>>>> would
> >>>>>>>>>>>>>> be
> >>>>>>>>>>>>>>> new to the PrimaryDataStoreDriver interface, as well) and
> >>>> this
> >>>>>>> would
> >>>>>>>>>>>> call
> >>>>>>>>>>>>>>> into a new method in the hypervisor code to create, say on
> >>>>>>>>>> XenServer,
> >>>>>>>>>>>> an
> >>>>>>>>>>>>>> SR?
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> I would like to see canBeManaged() : boolean on
> >>>>>> DataStoreDriver.
> >>>>>>>>>> Since
> >>>>>>>>>>>>>>> the notion of Volumes only pertains to primary storage, I
> >>>>> would
> >>>>>>> add
> >>>>>>>>>> a
> >>>>>>>>>>>>>>> allocateStorage and deallocateStorage (Storage is a straw
> >>>> man
> >>>>>> term
> >>>>>>>>>> --
> >>>>>>>>>>>>>>> something other than volume) methods to
> >>>>>>>>>>>> allocate/create/deallocate/delete
> >>>>>>>>>>>>>>> underlying storage.  To my mind, managed is a mutable
> >>>> property
> >>>>>> of
> >>>>>>>>>>>>>> DataStore
> >>>>>>>>>>>>>>> which can be enabled if/when the underlying
> >>>> DataStoreDriver
> >>>>>> can be
> >>>>>>>>>>>>>> managed.
> >>>>>>>>>>>>>>> This approach allows operators to override manageability of
> >>>>>>> devices.
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> In terms of orchestration/process flow for SR, the Xen
> >> plugin
> >>>>>>> would
> >>>>>>>>>> be
> >>>>>>>>>>>>>>> responsible for composing DataStore/Volume methods to
> >>>>>> create any
> >>>>>>>>>>>>>>> directories or files necessary for the SR.  There should be
> >> no
> >>>>>>>>>>>>>> dependencies
> >>>>>>>>>>>>>>> from the Storage to the Hypervisor layer.  As I said
> earlier,
> >>>>> such
> >>>>>>>>>>>>>> circular
> >>>>>>>>>>>>>>> dependencies will lead to a tangled, unmaintainable mess.
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> Just want to make sure I'm on the same page with you.
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> Thanks again, John
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> On Thu, Jun 6, 2013 at 7:44 AM, John Burwell
> >>>>>> <jburwell@basho.com>
> >>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> Mike,
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> Fundamentally, we can't end up with a Storage layer that
> >>>>>>> supports n
> >>>>>>>>>>>>>>>> devices types with each specific behaviors of m
> >> hypervisors.
> >>>>>>> Such
> >>>>>>>>>> a
> >>>>>>>>>>>>>>>> scenario will create an unmaintainable and untestable
> >>>> beast.
> >>>>>>>>>>>>>> Therefore, my
> >>>>>>>>>>>>>>>> thoughts and recommendations are driven to evolve the
> >>>>>> Storage
> >>>>>>> layer
> >>>>>>>>>>>>>> towards
> >>>>>>>>>>>>>>>> this separation.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> I am fine with the VolumeManager determining whether
> >>>> or
> >>>>>> not a
> >>>>>>>>>> Volume
> >>>>>>>>>>>> is
> >>>>>>>>>>>>>>>> managed (i.e. not based on the StoragePoolType, but an
> >>>> actual
> >>>>>>>>>>>> isManaged
> >>>>>>>>>>>>>>>> method), and asking the device driver to allocate
> >> resources
> >>>>> for
> >>>>>>> the
> >>>>>>>>>>>>>> volume
> >>>>>>>>>>>>>>>> if it is managed.  Furthermore, the device driver needs to
> >>>>>>> indicate
> >>>>>>>>>>>>>> whether
> >>>>>>>>>>>>>>>> or not it supports management operations.  Finally, I
> think
> >>>>> we
> >>>>>>>>>> need to
> >>>>>>>>>>>>>>>> provide the ability for an administrator to elect to have
> >>>>>>> something
> >>>>>>>>>>>>>> that is
> >>>>>>>>>>>>>>>> manageable be unmanaged (i.e. the driver is capable
> >>>> managing
> >>>>>> the
> >>>>>>>>>>>> device,
> >>>>>>>>>>>>>>>> but the administrator has elected to leave it unmanaged).
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> Creation of a structure on the volume should be done in
> >>>> the
> >>>>>> Xen
> >>>>>>>>>>>>>>>> hypervisor module using methods exposed by the
> >> Storage
> >>>>>> layer to
> >>>>>>>>>>>> perform
> >>>>>>>>>>>>>>>> low-level operations (e.g. make directories, create a
> file,
> >>>>> etc).
> >>>>>>>>>>>> This
> >>>>>>>>>>>>>>>> structure is specific to the operation of the Xen
> >>>>> hypervisor, as
> >>>>>>>>>> such,
> >>>>>>>>>>>>>>>> should be confined to its implementation.  From my
> >>>>>> perspective,
> >>>>>>>>>>>> nothing
> >>>>>>>>>>>>>> in
> >>>>>>>>>>>>>>>> the Storage layer should be concerned with content.
> >> From
> >>>> its
> >>>>>>>>>>>>>> perspective,
> >>>>>>>>>>>>>>>> structure and data are opaque.  It provides the means to
> >>>>> query
> >>>>>>> the
> >>>>>>>>>>>> data
> >>>>>>>>>>>>>> to
> >>>>>>>>>>>>>>>> support the interpretation of the content by higher-level
> >>>>>> layers
> >>>>>>>>>> (e.g.
> >>>>>>>>>>>>>>>> Hypervisors).  To my mind, attach should be a composition
> >>>> of
> >>>>>>>>>>>> operations
> >>>>>>>>>>>>>>>> from the Storage layer that varies based on the Volume
> >>>>>> storage
> >>>>>>>>>>>> protocol
> >>>>>>>>>>>>>>>> (iSCSI, local file system, NFS, RBD, etc).
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> Thanks,
> >>>>>>>>>>>>>>>> -John
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> On Jun 5, 2013, at 12:25 PM, Mike Tutkowski <
> >>>>>>>>>>>>>> mike.tutkowski@solidfire.com>
> >>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> Hi John,
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> Alternatively to the way the attach logic is implemented
> in
> >>>>> my
> >>>>>>>>>> patch,
> >>>>>>>>>>>> we
> >>>>>>>>>>>>>>>> could do the following:
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> Leave the attach logic in the agent code alone. In
> >>>>>>>>>> VolumeManagerImpl
> >>>>>>>>>>>> we
> >>>>>>>>>>>>>>>> create an AttachVolumeCommand and send it to the
> >>>>>> hypervisor.
> >>>>>>> Before
> >>>>>>>>>>>> this
> >>>>>>>>>>>>>>>> command is sent, we could check to see if we're dealing
> >>>> with
> >>>>>>>>>> Dynamic
> >>>>>>>>>>>> (or
> >>>>>>>>>>>>>>>> whatever we want to call it) storage and - if so - send a
> >>>>> "Create
> >>>>>>>>>> SR"
> >>>>>>>>>>>>>>>> command to the hypervisor. If this returns OK, we would
> >>>> then
> >>>>>>>>>> proceed
> >>>>>>>>>>>> to
> >>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>> AttachVolumeCommand, as usual.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> This way the attach logic remains the same and we just
> >> add
> >>>>>>> another
> >>>>>>>>>>>>>>>> command to the agent code that is called for this
> >> particular
> >>>>>> type
> >>>>>>>>>> of
> >>>>>>>>>>>>>>>> storage.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> What do you think?
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> On Tue, Jun 4, 2013 at 5:42 PM, Mike Tutkowski <
> >>>>>>>>>>>>>>>> mike.tutkowski@solidfire.com> wrote:
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> Hey John,
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> I created a document for a customer today that outlines
> >>>> how
> >>>>>> the
> >>>>>>>>>>>> plug-in
> >>>>>>>>>>>>>>>>> works from a user standpoint. This will probably be of
> >> use
> >>>>> to
> >>>>>>>>>> you, as
> >>>>>>>>>>>>>> well,
> >>>>>>>>>>>>>>>>> as you perform the code review.
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> I have shared this document with you (you should have
> >>>>>> received
> >>>>>>>>>> that
> >>>>>>>>>>>>>>>>> information in a separate e-mail).
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> Talk to you later!
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> On Tue, Jun 4, 2013 at 3:48 PM, Mike Tutkowski <
> >>>>>>>>>>>>>>>>> mike.tutkowski@solidfire.com> wrote:
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>> Oh, OK, that sounds really good, John.
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>> Thanks and talk to you tomorrow! :)
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>> On Tue, Jun 4, 2013 at 3:42 PM, John Burwell <
> >>>>>>> jburwell@basho.com
> >>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> Mike,
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> I am never at a loss for an opinion.  I some thoughts,
> >>>> but
> >>>>>>> want
> >>>>>>>>>> to
> >>>>>>>>>>>>>>>>>>> confirm assumptions and ideas against the solidfire,
> >>>>>>>>>>>>>> disk_io_throttle,
> >>>>>>>>>>>>>>>>>>> and
> >>>>>>>>>>>>>>>>>>> object_store branches.  I hope to collect them in a
> >>>>>> coherent
> >>>>>>>>>> form
> >>>>>>>>>>>>>>>>>>> tomorrow (5 June 2013).
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> Thanks,
> >>>>>>>>>>>>>>>>>>> -John
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> On Jun 4, 2013, at 5:29 PM, Mike Tutkowski <
> >>>>>>>>>>>>>>>>>>> mike.tutkowski@solidfire.com> wrote:
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> "So, in essence, the SolidFire plugin introduces the
> >>>>> notion
> >>>>>>> of
> >>>>>>>>>> a
> >>>>>>>>>>>>>>>>>>> managed
> >>>>>>>>>>>>>>>>>>>> iSCSI device and provisioned IOPS."
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> Technically, the SolidFire plug-in just introduces the
> >>>>>> notion
> >>>>>>>>>> of
> >>>>>>>>>>>>>>>>>>>> provisioned storage IOPS.
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> The storage framework that leverages the plug-in
> >> was
> >>>>>>>>>> incomplete,
> >>>>>>>>>>>> so
> >>>>>>>>>>>>>>>>>>> I had
> >>>>>>>>>>>>>>>>>>>> to try to add in the notion of a managed iSCSI device.
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> I appreciate all the time you've been spending on
> >> this.
> >>>>> :)
> >>>>>> Do
> >>>>>>>>>> you
> >>>>>>>>>>>>>>>>>>> have a
> >>>>>>>>>>>>>>>>>>>> recommendation as to how we should accomplish
> >>>> what
> >>>>>> you're
> >>>>>>>>>> looking
> >>>>>>>>>>>>>>>>>>> for?
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> Thanks!
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> On Tue, Jun 4, 2013 at 3:19 PM, John Burwell <
> >>>>>>>>>> jburwell@basho.com>
> >>>>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> Mike,
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> So, in essence, the SolidFire plugin introduces the
> >>>>> notion
> >>>>>>> of
> >>>>>>>>>> a
> >>>>>>>>>>>>>>>>>>> managed
> >>>>>>>>>>>>>>>>>>>>> iSCSI device and provisioned IOPS.
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> I want to see a separation of the management
> >>>>>> capabilities
> >>>>>>>>>> (i.e.
> >>>>>>>>>>>>>> can
> >>>>>>>>>>>>>>>>>>> a
> >>>>>>>>>>>>>>>>>>>>> device be managed/does an operator want it
> >>>> managed
> >>>>>> by
> >>>>>>>>>> CloudStack)
> >>>>>>>>>>>>>>>>>>> from the
> >>>>>>>>>>>>>>>>>>>>> storage protocol.  Ideally, we should end up with a
> >>>>>> semantic
> >>>>>>>>>> that
> >>>>>>>>>>>>>>>>>>> will
> >>>>>>>>>>>>>>>>>>>>> allow any type of storage device to be managed.  I
> >>>> also
> >>>>>> want
> >>>>>>>>>> to
> >>>>>>>>>>>>>> make
> >>>>>>>>>>>>>>>>>>>>> progress on decoupling the storage types from the
> >>>>>> hypervisor
> >>>>>>>>>>>>>>>>>>> definitions.
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> Thanks,
> >>>>>>>>>>>>>>>>>>>>> -John
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> On Jun 4, 2013, at 5:13 PM, Mike Tutkowski <
> >>>>>>>>>>>>>>>>>>> mike.tutkowski@solidfire.com>
> >>>>>>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>> Hi John,
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>> No problem. Answers are below in red.
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>> Thanks!
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>> On Tue, Jun 4, 2013 at 2:55 PM, John Burwell <
> >>>>>>>>>>>> jburwell@basho.com
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>> Mike,
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>> Could you please answer the following questions
> >>>> for
> >>>>>> me
> >>>>>>> with
> >>>>>>>>>>>>>>>>>>> regards to
> >>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>> operation of the SolidFire plugin:
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>> What is the cardinality between iSCSI LUNs and
> >>>> SAN
> >>>>>>> volumes?
> >>>>>>>>>>>>>>>>>>>>>> Each SAN volume is equivalent to a single LUN
> >> (LUN
> >>>> 0).
> >>>>>>>>>>>>>>>>>>>>>> 1 SAN volume : 1 LUN
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>> What is the cardinality between SAN Volumes
> >> and
> >>>>>> CloudStack
> >>>>>>>>>>>>>>>>>>> Volumes?
> >>>>>>>>>>>>>>>>>>>>>> 1 SAN volume : 1 CloudStack volume
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>> Are the LUN(s) created by the management
> >>>> server or
> >>>>>>>>>> externally
> >>>>>>>>>>>> by
> >>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>> operator?
> >>>>>>>>>>>>>>>>>>>>>> When used with the SolidFire plug-in, a SAN
> >>>> volume
> >>>>>> (same
> >>>>>>> as a
> >>>>>>>>>>>> SAN
> >>>>>>>>>>>>>>>>>>> LUN) is
> >>>>>>>>>>>>>>>>>>>>>> created by the management server (via the plug-
> >> in)
> >>>>>> the
> >>>>>>> first
> >>>>>>>>>>>> time
> >>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>> CloudStack volume is attached to a hypervisor.
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>> If you don't want to use the SolidFire plug-in, but
> >>>>> still
> >>>>>>>>>> want
> >>>>>>>>>>>> to
> >>>>>>>>>>>>>>>>>>> use a
> >>>>>>>>>>>>>>>>>>>>>> SolidFire volume (LUN), you can do this already
> >>>> today
> >>>>>>> (prior
> >>>>>>>>>> to
> >>>>>>>>>>>>>>>>>>> 4.2). The
> >>>>>>>>>>>>>>>>>>>>>> admin manually creates the SAN volume and - in
> >>>> this
> >>>>>> case -
> >>>>>>>>>>>>>>>>>>> multiple VMs
> >>>>>>>>>>>>>>>>>>>>> and
> >>>>>>>>>>>>>>>>>>>>>> data disks can share this SAN volume. While you
> >>>> can do
> >>>>>> this
> >>>>>>>>>>>>>> today,
> >>>>>>>>>>>>>>>>>>> it is
> >>>>>>>>>>>>>>>>>>>>>> not useful if you want to enforce storage QoS.
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>> Are the SAN volumes by the management server
> >>>> or
> >>>>>> externally
> >>>>>>>>>> by
> >>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>> operator?
> >>>>>>>>>>>>>>>>>>>>>> When the SolidFire plug-in is used, the SAN
> >>>> volumes
> >>>>>> are
> >>>>>>>>>>>>>> completely
> >>>>>>>>>>>>>>>>>>>>> managed
> >>>>>>>>>>>>>>>>>>>>>> by the management server (via the plug-in).
> >> There
> >>>> is
> >>>>>> no
> >>>>>>> admin
> >>>>>>>>>>>>>>>>>>>>> interaction.
> >>>>>>>>>>>>>>>>>>>>>> This allows for a 1:1 mapping between a SAN
> >>>> volume
> >>>>>> and a
> >>>>>>>>>>>>>> CloudStack
> >>>>>>>>>>>>>>>>>>>>> volume,
> >>>>>>>>>>>>>>>>>>>>>> which is necessary for any storage vendor that
> >>>>>> supports
> >>>>>>> true
> >>>>>>>>>>>> QoS.
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>> I would like to clarify how these pieces are
> >> related
> >>>>>> and
> >>>>>>>>>>>>>> expected
> >>>>>>>>>>>>>>>>>>> to
> >>>>>>>>>>>>>>>>>>>>>>> operate.
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>> Thanks,
> >>>>>>>>>>>>>>>>>>>>>>> -John
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>> On Jun 4, 2013, at 3:46 PM, Mike Tutkowski <
> >>>>>>>>>>>>>>>>>>>>> mike.tutkowski@solidfire.com>
> >>>>>>>>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> "In particular, how do we ensure that multiple
> >>>> VMs
> >>>>>> with
> >>>>>>>>>>>>>>>>>>> provisioned
> >>>>>>>>>>>>>>>>>>>>> IOPS
> >>>>>>>>>>>>>>>>>>>>>>>> won't be cut off by the underlying storage."
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> In the storage QoS world, we need to map a
> >>>> single
> >>>>>> SAN
> >>>>>>>>>> volume
> >>>>>>>>>>>>>>>>>>> (LUN) to a
> >>>>>>>>>>>>>>>>>>>>>>>> single CloudStack volume. We cannot have
> >>>> multiple
> >>>>>>>>>> CloudStack
> >>>>>>>>>>>>>>>>>>> volumes
> >>>>>>>>>>>>>>>>>>>>>>>> sharing a single SAN volume and still guarantee
> >>>> QoS.
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> If the user wants to have a single SAN volume
> >>>> house
> >>>>>> more
> >>>>>>>>>> than
> >>>>>>>>>>>>>> one
> >>>>>>>>>>>>>>>>>>>>>>>> CloudStack volume, then can do that today
> >>>> without
> >>>>>> any of
> >>>>>>> my
> >>>>>>>>>>>>>>>>>>> plug-in
> >>>>>>>>>>>>>>>>>>>>> code.
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> On Tue, Jun 4, 2013 at 1:43 PM, Mike Tutkowski
> >> <
> >>>>>>>>>>>>>>>>>>>>>>> mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>> "The administrator will allocate a SAN volume
> >>>> for
> >>>>>>>>>>>> CloudStack's
> >>>>>>>>>>>>>>>>>>> use
> >>>>>>>>>>>>>>>>>>>>> onto
> >>>>>>>>>>>>>>>>>>>>>>>>> which CloudStack volumes will be created."
> >>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>> I think we crossed e-mails. :)
> >>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>> Check out my recent e-mail on this.
> >>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Jun 4, 2013 at 1:41 PM, John Burwell <
> >>>>>>>>>>>>>>>>>>> jburwell@basho.com>
> >>>>>>>>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>> Mike,
> >>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>> You are coming to the part which concerns
> >> me
> >>>> --
> >>>>>>> concepts
> >>>>>>>>>>>> from
> >>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>> hypervisor are leaking into storage layer.
> >>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>> Thanks,
> >>>>>>>>>>>>>>>>>>>>>>>>>> -John
> >>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>> On Jun 4, 2013, at 3:35 PM, Mike Tutkowski <
> >>>>>>>>>>>>>>>>>>>>>>> mike.tutkowski@solidfire.com>
> >>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>> The weird part is that the iSCSI type is today
> >>>>> only
> >>>>>>> used
> >>>>>>>>>>>> (as
> >>>>>>>>>>>>>>>>>>> far as
> >>>>>>>>>>>>>>>>>>>>> I
> >>>>>>>>>>>>>>>>>>>>>>>>>> know)
> >>>>>>>>>>>>>>>>>>>>>>>>>>> in regards to XenServer (when you have
> >> not
> >>>>>> PreSetup an
> >>>>>>>>>> SR).
> >>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>> If you want to use your iSCSI volume from
> >>>>>> VMware, it
> >>>>>>>>>> uses
> >>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>> vmfs
> >>>>>>>>>>>>>>>>>>>>>>> type.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>> If you want to use your iSCSI volume from
> >>>> KVM,
> >>>>>> it uses
> >>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>> SharedMountPoint
> >>>>>>>>>>>>>>>>>>>>>>>>>>> type.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>> So, I suppose mine and Edison's thinking
> >>>> here
> >>>>>> was to
> >>>>>>>>>> make a
> >>>>>>>>>>>>>>>>>>> new type
> >>>>>>>>>>>>>>>>>>>>>>> of
> >>>>>>>>>>>>>>>>>>>>>>>>>>> storage to describe this dynamic ability
> >>>> Edison
> >>>>>> added
> >>>>>>>>>> into
> >>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>> storage
> >>>>>>>>>>>>>>>>>>>>>>>>>>> framework. Maybe it should be more
> >>>> specificy,
> >>>>>> though:
> >>>>>>>>>>>>>>>>>>> Dynamic_iSCSI
> >>>>>>>>>>>>>>>>>>>>>>>>>> versus,
> >>>>>>>>>>>>>>>>>>>>>>>>>>> say, Dynamic_FC.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Jun 4, 2013 at 1:27 PM, Mike
> >>>> Tutkowski
> >>>>>> <
> >>>>>>>>>>>>>>>>>>>>>>>>>> mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> "The storage device itself shouldn't know
> >>>> or
> >>>>>> care
> >>>>>>> that
> >>>>>>>>>> it
> >>>>>>>>>>>>>> is
> >>>>>>>>>>>>>>>>>>> being
> >>>>>>>>>>>>>>>>>>>>>>> used
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> for a Xen SR -- simply be able to answer
> >>>>>> questions
> >>>>>>>>>> about
> >>>>>>>>>>>> it
> >>>>>>>>>>>>>>>>>>> is
> >>>>>>>>>>>>>>>>>>>>>>>>>> storing."
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> I see...so your concern here is that the
> >>>>>> SolidFire
> >>>>>>>>>> plug-in
> >>>>>>>>>>>>>>>>>>> needs to
> >>>>>>>>>>>>>>>>>>>>>>>>>> call
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> itself "Dynamic" storage so that the
> >>>> hypervisor
> >>>>>> logic
> >>>>>>>>>>>> knows
> >>>>>>>>>>>>>>>>>>> to
> >>>>>>>>>>>>>>>>>>>>> treat
> >>>>>>>>>>>>>>>>>>>>>>>>>> it as
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> such.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> I'm totally open to removing that
> >> constraint
> >>>>>> and just
> >>>>>>>>>>>>>>>>>>> calling it
> >>>>>>>>>>>>>>>>>>>>>>> iSCSI
> >>>>>>>>>>>>>>>>>>>>>>>>>> or
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> whatever. We would just need a way for
> >>>> the
> >>>>>> hypervisor
> >>>>>>>>>>>>>> attach
> >>>>>>>>>>>>>>>>>>> logic
> >>>>>>>>>>>>>>>>>>>>> to
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> detect this new requirement and perform
> >>>> the
> >>>>>> necessary
> >>>>>>>>>>>>>>>>>>> activities.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Jun 4, 2013 at 1:24 PM, John
> >>>> Burwell <
> >>>>>>>>>>>>>>>>>>> jburwell@basho.com>
> >>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> Mike,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> See my responses in-line.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> Thanks,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> -John
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Jun 4, 2013, at 3:10 PM, Mike
> >>>> Tutkowski <
> >>>>>>>>>>>>>>>>>>>>>>>>>> mike.tutkowski@solidfire.com>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I'm trying to picture this:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> "Finally, while CloudStack may be able
> >> to
> >>>>>> manage a
> >>>>>>>>>>>>>> device,
> >>>>>>>>>>>>>>>>>>> an
> >>>>>>>>>>>>>>>>>>>>>>>>>> operator
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> may
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> chose to leave it unmanaged by
> >>>> CloudStack
> >>>>>> (e.g. the
> >>>>>>>>>>>>>> device
> >>>>>>>>>>>>>>>>>>> is
> >>>>>>>>>>>>>>>>>>>>>>> shared
> >>>>>>>>>>>>>>>>>>>>>>>>>> by
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> many services, and the operator has
> >>>> chosen
> >>>>>> to
> >>>>>>>>>> dedicate
> >>>>>>>>>>>>>>>>>>> only a
> >>>>>>>>>>>>>>>>>>>>>>> portion
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> of it
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> to CloudStack).  Does my reasoning
> >>>> make
> >>>>>> sense?"
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I guess I'm not sure how creating a SAN
> >>>>>> volume via
> >>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>> plug-in
> >>>>>>>>>>>>>>>>>>>>>>>>>> (before
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> an
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> attach request to the hypervisor) would
> >>>>>> work unless
> >>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>> hypervisor
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> consumes
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the SAN volume in the form of, say, an
> >>>> SR.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> My thinking is that, independent of
> >>>>>> CloudStack, an
> >>>>>>>>>>>>>> operator
> >>>>>>>>>>>>>>>>>>>>>>> allocates
> >>>>>>>>>>>>>>>>>>>>>>>>>> a
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> chunk of  a SAN to CloudStack, and
> >>>> exposes it
> >>>>>>> through
> >>>>>>>>>> a
> >>>>>>>>>>>>>>>>>>> LUN.  They
> >>>>>>>>>>>>>>>>>>>>>>>>>> simply
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> want to turn control of that LUN over to
> >>>>>> CloudStack,
> >>>>>>>>>> but
> >>>>>>>>>>>>>>>>>>> not allow
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> CloudStack to allocate anymore LUNs.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> As the attach logic stands prior to my
> >>>>>> changes, we
> >>>>>>>>>> would
> >>>>>>>>>>>>>> be
> >>>>>>>>>>>>>>>>>>>>> passing
> >>>>>>>>>>>>>>>>>>>>>>>>>> in a
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> SAN volume that does not have the
> >>>>>> necessary
> >>>>>>>>>> hypervisor
> >>>>>>>>>>>>>>>>>>> support
> >>>>>>>>>>>>>>>>>>>>>>> (like
> >>>>>>>>>>>>>>>>>>>>>>>>>> an
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> SR)
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and the logic will fail.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Are you thinking we should maybe
> >> have
> >>>> the
> >>>>>> storage
> >>>>>>>>>>>>>> framework
> >>>>>>>>>>>>>>>>>>>>> itself
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> detect
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that such a SAN volume needs support
> >>>> from
> >>>>>> the
> >>>>>>>>>> hypervisor
> >>>>>>>>>>>>>>>>>>> side and
> >>>>>>>>>>>>>>>>>>>>>>>>>> have
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> it
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> call into the agent code specifically to
> >>>> create
> >>>>>> the
> >>>>>>>>>> SR
> >>>>>>>>>>>>>>>>>>> before the
> >>>>>>>>>>>>>>>>>>>>>>>>>> attach
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> logic runs in the agent code?
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> I think the hypervisor management
> >> plugin
> >>>>>> should
> >>>>>>> have a
> >>>>>>>>>>>>>> rich
> >>>>>>>>>>>>>>>>>>> enough
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> interface to storage to determine
> >>>> available
> >>>>>> for
> >>>>>>> volume
> >>>>>>>>>>>>>>>>>>> storage.
> >>>>>>>>>>>>>>>>>>>>> For
> >>>>>>>>>>>>>>>>>>>>>>>>>> Xen,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> this interface would allow the
> >>>> interrogation of
> >>>>>> the
> >>>>>>>>>>>> device
> >>>>>>>>>>>>>>>>>>> to
> >>>>>>>>>>>>>>>>>>>>>>>>>> determine the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> SR is present.   The storage device itself
> >>>>>> shouldn't
> >>>>>>>>>> know
> >>>>>>>>>>>>>>>>>>> or care
> >>>>>>>>>>>>>>>>>>>>>>>>>> that it
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> is being used for a Xen SR -- simply be
> >> able
> >>>> to
> >>>>>>> answer
> >>>>>>>>>>>>>>>>>>> questions
> >>>>>>>>>>>>>>>>>>>>>>>>>> about it
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> is storing.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Jun 4, 2013 at 1:01 PM, Mike
> >>>>>> Tutkowski <
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> So, the flow is as follows:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> * The admin registers the SolidFire
> >>>> driver
> >>>>>> (which
> >>>>>>>>>> is a
> >>>>>>>>>>>>>>>>>>> type of
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> so-called
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Dynamic storage). Once this is done, a
> >>>> new
> >>>>>> Primary
> >>>>>>>>>>>>>>>>>>> Storage shows
> >>>>>>>>>>>>>>>>>>>>>>> up
> >>>>>>>>>>>>>>>>>>>>>>>>>> in
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> applicable zone.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> * The admin creates a Disk Offering
> >>>> that
> >>>>>>> references
> >>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>> storage
> >>>>>>>>>>>>>>>>>>>>>>> tag
> >>>>>>>>>>>>>>>>>>>>>>>>>> of
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> newly created Primary Storage.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> * The end user creates a CloudStack
> >>>>>> volume. This
> >>>>>>>>>> leads
> >>>>>>>>>>>>>> to
> >>>>>>>>>>>>>>>>>>> a new
> >>>>>>>>>>>>>>>>>>>>>>> row
> >>>>>>>>>>>>>>>>>>>>>>>>>> in
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> cloud.volumes table.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> * The end user attaches the
> >> CloudStack
> >>>>>> volume to a
> >>>>>>>>>> VM
> >>>>>>>>>>>>>>>>>>> (attach
> >>>>>>>>>>>>>>>>>>>>>>> disk).
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> This
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> leads to the storage framework calling
> >>>> the
> >>>>>> plug-in
> >>>>>>>>>> to
> >>>>>>>>>>>>>>>>>>> create a
> >>>>>>>>>>>>>>>>>>>>> new
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> volume
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> on its storage system (in my case, a
> >>>> SAN).
> >>>>>> The
> >>>>>>>>>> plug-in
> >>>>>>>>>>>>>>>>>>> also
> >>>>>>>>>>>>>>>>>>>>>>> updates
> >>>>>>>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> cloud.volumes row with applicable
> >> data
> >>>>>> (like the
> >>>>>>>>>> IQN of
> >>>>>>>>>>>>>>>>>>> the SAN
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> volume).
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This plug-in code is only invoked if the
> >>>>>>> CloudStack
> >>>>>>>>>>>>>>>>>>> volume is in
> >>>>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 'Allocated' state. After the attach, the
> >>>>>> volume
> >>>>>>>>>> will be
> >>>>>>>>>>>>>>>>>>> in the
> >>>>>>>>>>>>>>>>>>>>>>>>>> 'Ready'
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> state (even after a detach disk) and
> >> the
> >>>>>> plug-in
> >>>>>>>>>> code
> >>>>>>>>>>>>>>>>>>> will not
> >>>>>>>>>>>>>>>>>>>>> be
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> called
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> again to create this SAN volume.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> * The hypervisor-attach logic is run
> >> and
> >>>>>> detects
> >>>>>>> the
> >>>>>>>>>>>>>>>>>>> CloudStack
> >>>>>>>>>>>>>>>>>>>>>>>>>> volume
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> to
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> attach needs "assistance" in the form
> >>>> of a
> >>>>>>>>>> hypervisor
> >>>>>>>>>>>>>> data
> >>>>>>>>>>>>>>>>>>>>>>> structure
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> (ex.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> an SR on XenServer).
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Jun 4, 2013 at 12:54 PM, Mike
> >>>>>> Tutkowski <
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mike.tutkowski@solidfire.com> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> "To ensure that we are in sync on
> >>>>>> terminology,
> >>>>>>>>>> volume,
> >>>>>>>>>>>>>>>>>>> in these
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> definitions, refers to the physical
> >>>>>> allocation on
> >>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>> device,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> correct?"
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Yes...when I say 'volume', I try to
> >>>> mean
> >>>>>> 'SAN
> >>>>>>>>>> volume'.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> To refer to the 'volume' the end user
> >>>> can
> >>>>>> make in
> >>>>>>>>>>>>>>>>>>> CloudStack, I
> >>>>>>>>>>>>>>>>>>>>>>>>>> try to
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> use 'CloudStack volume'.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Jun 4, 2013 at 12:50 PM,
> >> Mike
> >>>>>> Tutkowski <
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mike.tutkowski@solidfire.com>
> >> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hi John,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What you say here may very well
> >>>> make
> >>>>>> sense, but
> >>>>>>>>>> I'm
> >>>>>>>>>>>>>>>>>>> having a
> >>>>>>>>>>>>>>>>>>>>>>> hard
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> time
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> envisioning it.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Perhaps we should draw Edison in
> >> on
> >>>>>> this
> >>>>>>>>>> conversation
> >>>>>>>>>>>>>>>>>>> as he
> >>>>>>>>>>>>>>>>>>>>> was
> >>>>>>>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> initial person to suggest the
> >>>> approach I
> >>>>>> took.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What do you think?
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Thanks!
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Jun 4, 2013 at 12:42 PM,
> >>>> John
> >>>>>> Burwell <
> >>>>>>>>>>>>>>>>>>>>>>> jburwell@basho.com
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Mike,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> It feels like we are combining two
> >>>>>> distinct
> >>>>>>>>>> concepts
> >>>>>>>>>>>>>> --
> >>>>>>>>>>>>>>>>>>>>> storage
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> device
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> management and storage
> >> protocols.
> >>>> In
> >>>>>> both
> >>>>>>>>>> cases, we
> >>>>>>>>>>>>>>>>>>> are
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> communicating with
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ISCSI, but one allows the system
> >> to
> >>>>>>> create/delete
> >>>>>>>>>>>>>>>>>>> volumes
> >>>>>>>>>>>>>>>>>>>>>>>>>> (Dynamic)
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> on the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> device while the other requires
> >> the
> >>>>>> volume to
> >>>>>>> be
> >>>>>>>>>>>>>>>>>>> volume to be
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> managed
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> outside of the CloudStack context.
> >>>> To
> >>>>>> ensure
> >>>>>>>>>> that
> >>>>>>>>>>>> we
> >>>>>>>>>>>>>>>>>>> are in
> >>>>>>>>>>>>>>>>>>>>>>>>>> sync on
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> terminology, volume, in these
> >>>>>> definitions,
> >>>>>>>>>> refers to
> >>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>> physical
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> allocation on the device, correct?
> >>>>>> Minimally,
> >>>>>>> we
> >>>>>>>>>>>>>> must
> >>>>>>>>>>>>>>>>>>> be
> >>>>>>>>>>>>>>>>>>>>> able
> >>>>>>>>>>>>>>>>>>>>>>> to
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> communicate with a storage
> >> device
> >>>> to
> >>>>>> move bits
> >>>>>>>>>> from
> >>>>>>>>>>>>>>>>>>> one place
> >>>>>>>>>>>>>>>>>>>>>>> to
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> another,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> read bits, delete bits, etc.
> >>>> Optionally, a
> >>>>>>>>>> storage
> >>>>>>>>>>>>>>>>>>> device
> >>>>>>>>>>>>>>>>>>>>> may
> >>>>>>>>>>>>>>>>>>>>>>> be
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> able to
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> managed by CloudStack.
> >> Therefore,
> >>>>>> we can have
> >>>>>>> a
> >>>>>>>>>>>>>>>>>>> unmanaged
> >>>>>>>>>>>>>>>>>>>>>>> iSCSI
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> device
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> onto which we store a Xen SR, and
> >>>> we
> >>>>>> can have a
> >>>>>>>>>>>>>> managed
> >>>>>>>>>>>>>>>>>>>>>>> SolidFire
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> iSCSI
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> device on which CloudStack is
> >>>> capable
> >>>>>> of
> >>>>>>>>>> allocating
> >>>>>>>>>>>>>>>>>>> LUNs and
> >>>>>>>>>>>>>>>>>>>>>>>>>> storing
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> volumes.  Finally, while CloudStack
> >>>> may
> >>>>>> be able
> >>>>>>>>>> to
> >>>>>>>>>>>>>>>>>>> manage a
> >>>>>>>>>>>>>>>>>>>>>>>>>> device,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> an
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> operator may chose to leave it
> >>>>>> unmanaged by
> >>>>>>>>>>>>>> CloudStack
> >>>>>>>>>>>>>>>>>>> (e.g.
> >>>>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> device is
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> shared by many services, and the
> >>>>>> operator has
> >>>>>>>>>> chosen
> >>>>>>>>>>>>>> to
> >>>>>>>>>>>>>>>>>>>>>>> dedicate
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> only a
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> portion of it to CloudStack).  Does
> >>>> my
> >>>>>>> reasoning
> >>>>>>>>>>>> make
> >>>>>>>>>>>>>>>>>>> sense?
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Assuming my thoughts above are
> >>>>>> reasonable, it
> >>>>>>>>>> seems
> >>>>>>>>>>>>>>>>>>>>> appropriate
> >>>>>>>>>>>>>>>>>>>>>>>>>> to
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> strip the management concerns
> >>>> from
> >>>>>>>>>> StoragePoolType,
> >>>>>>>>>>>>>>>>>>> add the
> >>>>>>>>>>>>>>>>>>>>>>>>>> notion
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> of a
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> storage device with an attached
> >>>> driver
> >>>>>> that
> >>>>>>>>>>>> indicates
> >>>>>>>>>>>>>>>>>>> whether
> >>>>>>>>>>>>>>>>>>>>>>> or
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> not is
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> managed by CloudStack, and
> >>>> establish
> >>>>>> a
> >>>>>>>>>> abstraction
> >>>>>>>>>>>>>>>>>>>>>>> representing a
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> physical
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> allocation on a device separate
> >> that
> >>>> is
> >>>>>>>>>> associated
> >>>>>>>>>>>>>>>>>>> with a
> >>>>>>>>>>>>>>>>>>>>>>> volume.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> With
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> these notions in place, hypervisor
> >>>>>> drivers can
> >>>>>>>>>>>>>> declare
> >>>>>>>>>>>>>>>>>>> which
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> protocols they
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> support and when they encounter
> >>>> a
> >>>>>> device
> >>>>>>> managed
> >>>>>>>>>> by
> >>>>>>>>>>>>>>>>>>>>> CloudStack,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> utilize the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> management operations exposed
> >>>> by
> >>>>>> the driver to
> >>>>>>>>>>>>>> automate
> >>>>>>>>>>>>>>>>>>>>>>>>>> allocation.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> If
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> these thoughts/concepts make
> >>>> sense,
> >>>>>> then we can
> >>>>>>>>>> sit
> >>>>>>>>>>>>>>>>>>> down and
> >>>>>>>>>>>>>>>>>>>>>>>>>> drill
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> down to
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> a more detailed design.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Thanks,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -John
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Jun 3, 2013, at 5:25 PM, Mike
> >>>>>> Tutkowski <
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mike.tutkowski@solidfire.com>
> >>>> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Here is the difference between
> >>>> the
> >>>>>> current
> >>>>>>> iSCSI
> >>>>>>>>>>>>>> type
> >>>>>>>>>>>>>>>>>>> and
> >>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> Dynamic
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> type:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> iSCSI type: The admin has to go in
> >>>> and
> >>>>>> create
> >>>>>>> a
> >>>>>>>>>>>>>>>>>>> Primary
> >>>>>>>>>>>>>>>>>>>>>>> Storage
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> based
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> on
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the iSCSI type. At this point in
> >> time,
> >>>>>> the
> >>>>>>> iSCSI
> >>>>>>>>>>>>>>>>>>> volume must
> >>>>>>>>>>>>>>>>>>>>>>>>>> exist
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> on
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> storage system (it is pre-
> >> allocated).
> >>>>>> Future
> >>>>>>>>>>>>>>>>>>> CloudStack
> >>>>>>>>>>>>>>>>>>>>>>> volumes
> >>>>>>>>>>>>>>>>>>>>>>>>>> are
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> created
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> as VDIs on the SR that was
> >> created
> >>>>>> behind the
> >>>>>>>>>>>>>> scenes.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Dynamic type: The admin has to
> >>>> go in
> >>>>>> and
> >>>>>>> create
> >>>>>>>>>>>>>>>>>>> Primary
> >>>>>>>>>>>>>>>>>>>>>>> Storage
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> based
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> on a
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> plug-in that will create and
> >> delete
> >>>>>> volumes on
> >>>>>>>>>> its
> >>>>>>>>>>>>>>>>>>> storage
> >>>>>>>>>>>>>>>>>>>>>>>>>> system
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> dynamically (as is enabled via the
> >>>>>> storage
> >>>>>>>>>>>>>>>>>>> framework). When
> >>>>>>>>>>>>>>>>>>>>> a
> >>>>>>>>>>>>>>>>>>>>>>>>>> user
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wants to
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> attach a CloudStack volume that
> >>>> was
> >>>>>> created,
> >>>>>>> the
> >>>>>>>>>>>>>>>>>>> framework
> >>>>>>>>>>>>>>>>>>>>>>> tells
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> plug-in to create a new volume.
> >>>> After
> >>>>>> this is
> >>>>>>>>>> done,
> >>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>> attach
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> logic
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> for
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the hypervisor in question is
> >> called.
> >>>>>> No
> >>>>>>>>>> hypervisor
> >>>>>>>>>>>>>>>>>>> data
> >>>>>>>>>>>>>>>>>>>>>>>>>> structure
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> exists
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> at this point because the volume
> >>>> was
> >>>>>> just
> >>>>>>>>>> created.
> >>>>>>>>>>>>>> The
> >>>>>>>>>>>>>>>>>>>>>>>>>> hypervisor
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> data
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> structure must be created.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Jun 3, 2013 at 3:21 PM,
> >>>> Mike
> >>>>>>> Tutkowski <
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> These are new terms, so I
> >> should
> >>>>>> probably
> >>>>>>> have
> >>>>>>>>>>>>>>>>>>> defined them
> >>>>>>>>>>>>>>>>>>>>>>> up
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> front
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> for
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> you. :)
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Static storage: Storage that is
> >>>> pre-
> >>>>>> allocated
> >>>>>>>>>> (ex.
> >>>>>>>>>>>>>>>>>>> an admin
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> creates a
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> volume on a SAN), then a
> >>>> hypervisor
> >>>>>> data
> >>>>>>>>>> structure
> >>>>>>>>>>>>>> is
> >>>>>>>>>>>>>>>>>>>>> created
> >>>>>>>>>>>>>>>>>>>>>>>>>> to
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> consume
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the storage (ex. XenServer SR),
> >>>> then
> >>>>>> that
> >>>>>>>>>>>>>> hypervisor
> >>>>>>>>>>>>>>>>>>> data
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> structure
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> is
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> consumed by CloudStack. Disks
> >>>> (VDI)
> >>>>>> are later
> >>>>>>>>>>>>>> placed
> >>>>>>>>>>>>>>>>>>> on
> >>>>>>>>>>>>>>>>>>>>> this
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> hypervisor
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> data structure as needed. In
> >>>> these
> >>>>>> cases, the
> >>>>>>>>>>>>>> attach
> >>>>>>>>>>>>>>>>>>> logic
> >>>>>>>>>>>>>>>>>>>>>>>>>> assumes
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> hypervisor data structure is
> >>>> already
> >>>>>> in place
> >>>>>>>>>> and
> >>>>>>>>>>>>>>>>>>> simply
> >>>>>>>>>>>>>>>>>>>>>>>>>> attaches
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the VDI
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> on the hypervisor data
> >> structure
> >>>> to
> >>>>>> the VM in
> >>>>>>>>>>>>>>>>>>> question.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Dynamic storage: Storage that is
> >>>> not
> >>>>>>>>>>>> pre-allocated.
> >>>>>>>>>>>>>>>>>>> Instead
> >>>>>>>>>>>>>>>>>>>>>>> of
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> pre-existent storage, this could
> >>>> be a
> >>>>>> SAN
> >>>>>>> (not
> >>>>>>>>>> a
> >>>>>>>>>>>>>>>>>>> volume on
> >>>>>>>>>>>>>>>>>>>>> a
> >>>>>>>>>>>>>>>>>>>>>>>>>> SAN,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> but the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> SAN itself). The hypervisor data
> >>>>>> structure
> >>>>>>>>>> must be
> >>>>>>>>>>>>>>>>>>> created
> >>>>>>>>>>>>>>>>>>>>>>>>>> when an
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> attach
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> volume is performed because
> >>>> these
> >>>>>> types of
> >>>>>>>>>> volumes
> >>>>>>>>>>>>>>>>>>> have not
> >>>>>>>>>>>>>>>>>>>>>>>>>> been
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> pre-hooked
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> up to such a hypervisor data
> >>>>>> structure by an
> >>>>>>>>>>>> admin.
> >>>>>>>>>>>>>>>>>>> Once
> >>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> attach
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> logic
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> creates, say, an SR on
> >> XenServer
> >>>> for
> >>>>>> this
> >>>>>>>>>> volume,
> >>>>>>>>>>>>>> it
> >>>>>>>>>>>>>>>>>>>>> attaches
> >>>>>>>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> one and
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> only VDI within the SR to the
> >> VM
> >>>> in
> >>>>>> question.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Jun 3, 2013 at 3:13 PM,
> >>>>>> John Burwell
> >>>>>>> <
> >>>>>>>>>>>>>>>>>>>>>>>>>> jburwell@basho.com>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Mike,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> The current implementation
> >> of
> >>>> the
> >>>>>> Dynamic
> >>>>>>> type
> >>>>>>>>>>>>>>>>>>> attach
> >>>>>>>>>>>>>>>>>>>>>>> behavior
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> works in
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> terms of Xen ISCSI which why I
> >>>> ask
> >>>>>> about the
> >>>>>>>>>>>>>>>>>>> difference.
> >>>>>>>>>>>>>>>>>>>>>>>>>> Another
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> way to
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ask the question -- what is the
> >>>>>> definition
> >>>>>>> of
> >>>>>>>>>> a
> >>>>>>>>>>>>>>>>>>> Dynamic
> >>>>>>>>>>>>>>>>>>>>>>>>>> storage
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> pool type?
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Thanks,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -John
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Jun 3, 2013, at 5:10 PM,
> >>>> Mike
> >>>>>> Tutkowski <
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mike.tutkowski@solidfire.com>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> As far as I know, the iSCSI
> >> type
> >>>> is
> >>>>>>> uniquely
> >>>>>>>>>>>> used
> >>>>>>>>>>>>>>>>>>> by
> >>>>>>>>>>>>>>>>>>>>>>>>>> XenServer
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> when you
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> want to set up Primary
> >>>> Storage
> >>>>>> that is
> >>>>>>>>>> directly
> >>>>>>>>>>>>>>>>>>> based on
> >>>>>>>>>>>>>>>>>>>>> an
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> iSCSI
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> target.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This allows you to skip the
> >>>> step of
> >>>>>> going
> >>>>>>> to
> >>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>> hypervisor
> >>>>>>>>>>>>>>>>>>>>>>>>>> and
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> creating a
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> storage repository based on
> >>>> that
> >>>>>> iSCSI
> >>>>>>>>>> target as
> >>>>>>>>>>>>>>>>>>>>> CloudStack
> >>>>>>>>>>>>>>>>>>>>>>>>>> does
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> part
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> for you. I think this is only
> >>>>>> supported for
> >>>>>>>>>>>>>>>>>>> XenServer.
> >>>>>>>>>>>>>>>>>>>>> For
> >>>>>>>>>>>>>>>>>>>>>>>>>> all
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> other
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> hypervisors, you must first
> >> go
> >>>> to
> >>>>>> the
> >>>>>>>>>> hypervisor
> >>>>>>>>>>>>>>>>>>> and
> >>>>>>>>>>>>>>>>>>>>>>> perform
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> this
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> step
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> manually.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I don't really know what RBD
> >> is.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Jun 3, 2013 at 2:13
> >>>> PM,
> >>>>>> John
> >>>>>>> Burwell
> >>>>>>>>>> <
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> jburwell@basho.com>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Mike,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Reading through the code,
> >>>> what
> >>>>>> is the
> >>>>>>>>>>>> difference
> >>>>>>>>>>>>>>>>>>> between
> >>>>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ISCSI and
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Dynamic types?  Why isn't
> >>>> RBD
> >>>>>> considered
> >>>>>>>>>>>>>> Dynamic?
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Thanks,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -John
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Jun 3, 2013, at 3:46 PM,
> >>>> Mike
> >>>>>>> Tutkowski <
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mike.tutkowski@solidfire.com>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This new type of storage is
> >>>>>> defined in
> >>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Storage.StoragePoolType
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> class
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> (called Dynamic):
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> public static enum
> >>>>>> StoragePoolType {
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Filesystem(false), // local
> >>>>>> directory
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> NetworkFilesystem(true),
> >>>> //
> >>>>>> NFS or CIFS
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> IscsiLUN(true), // shared
> >>>> LUN,
> >>>>>> with a
> >>>>>>>>>>>> clusterfs
> >>>>>>>>>>>>>>>>>>>>> overlay
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Iscsi(true), // for e.g., ZFS
> >>>>>> Comstar
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ISO(false), // for iso image
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> LVM(false), // XenServer
> >>>> local
> >>>>>> LVM SR
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> CLVM(true),
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> RBD(true),
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> SharedMountPoint(true),
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> VMFS(true), // VMware
> >>>> VMFS
> >>>>>> storage
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> PreSetup(true), // for
> >>>>>> XenServer, Storage
> >>>>>>>>>> Pool
> >>>>>>>>>>>>>>>>>>> is set
> >>>>>>>>>>>>>>>>>>>>>>> up
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> by
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> customers.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> EXT(false), // XenServer
> >>>> local
> >>>>>> EXT SR
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> OCFS2(true),
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Dynamic(true); // dynamic,
> >>>>>> zone-wide
> >>>>>>>>>> storage
> >>>>>>>>>>>>>> (ex.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> SolidFire)
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> boolean shared;
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> StoragePoolType(boolean
> >>>>>> shared) {
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> this.shared = shared;
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> }
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> public boolean isShared() {
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> return shared;
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> }
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> }
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Jun 3, 2013 at
> >> 1:41
> >>>> PM,
> >>>>>> Mike
> >>>>>>>>>> Tutkowski
> >>>>>>>>>>>>>> <
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>> mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> For example, let's say
> >>>> another
> >>>>>> storage
> >>>>>>>>>>>> company
> >>>>>>>>>>>>>>>>>>> wants
> >>>>>>>>>>>>>>>>>>>>> to
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> implement a
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> plug-in to leverage its
> >>>> Quality
> >>>>>> of
> >>>>>>> Service
> >>>>>>>>>>>>>>>>>>> feature. It
> >>>>>>>>>>>>>>>>>>>>>>>>>> would
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> be
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> dynamic,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> zone-wide storage, as
> >> well.
> >>>>>> They would
> >>>>>>>>>> need
> >>>>>>>>>>>>>> only
> >>>>>>>>>>>>>>>>>>>>>>>>>> implement a
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> storage
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> plug-in as I've made the
> >>>>>> necessary
> >>>>>>>>>> changes to
> >>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> hypervisor-attach
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> logic
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> to support their plug-in.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Jun 3, 2013 at
> >>>> 1:39
> >>>>>> PM, Mike
> >>>>>>>>>>>>>> Tutkowski <
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>> mike.tutkowski@solidfire.com> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Oh, sorry to imply the
> >>>>>> XenServer code
> >>>>>>> is
> >>>>>>>>>>>>>>>>>>> SolidFire
> >>>>>>>>>>>>>>>>>>>>>>>>>> specific.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> It is
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> not.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> The XenServer attach
> >>>> logic is
> >>>>>> now aware
> >>>>>>>>>> of
> >>>>>>>>>>>>>>>>>>> dynamic,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> zone-wide
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> storage
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> (and SolidFire is an
> >>>>>> implementation of
> >>>>>>>>>> this
> >>>>>>>>>>>>>>>>>>> kind of
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> storage).
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> kind of
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> storage is new to 4.2
> >>>> with
> >>>>>> Edison's
> >>>>>>>>>> storage
> >>>>>>>>>>>>>>>>>>> framework
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> changes.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Edison created a new
> >>>>>> framework that
> >>>>>>>>>>>> supported
> >>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>> creation
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> and
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> deletion
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> of volumes dynamically.
> >>>>>> However, when I
> >>>>>>>>>>>>>>>>>>> visited with
> >>>>>>>>>>>>>>>>>>>>>>> him
> >>>>>>>>>>>>>>>>>>>>>>>>>> in
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Portland
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> back
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> in April, we realized that
> >>>> it
> >>>>>> was not
> >>>>>>>>>>>>>>>>>>> complete. We
> >>>>>>>>>>>>>>>>>>>>>>>>>> realized
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> there
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> was
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> nothing CloudStack
> >> could
> >>>> do
> >>>>>> with these
> >>>>>>>>>>>>>> volumes
> >>>>>>>>>>>>>>>>>>> unless
> >>>>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> attach
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> logic
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> was
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> changed to recognize
> >>>> this
> >>>>>> new type of
> >>>>>>>>>>>> storage
> >>>>>>>>>>>>>>>>>>> and
> >>>>>>>>>>>>>>>>>>>>>>> create
> >>>>>>>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> appropriate
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> hypervisor data
> >> structure.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Jun 3, 2013 at
> >>>> 1:28
> >>>>>> PM, John
> >>>>>>>>>> Burwell
> >>>>>>>>>>>>>> <
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> jburwell@basho.com>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Mike,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> It is generally odd to
> >> me
> >>>>>> that any
> >>>>>>>>>>>> operation
> >>>>>>>>>>>>>>>>>>> in the
> >>>>>>>>>>>>>>>>>>>>>>>>>> Storage
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> layer
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> would
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> understand or care
> >>>> about
> >>>>>> details.  I
> >>>>>>>>>> expect
> >>>>>>>>>>>>>>>>>>> to see
> >>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> Storage
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> services
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> expose a set of
> >>>> operations
> >>>>>> that can be
> >>>>>>>>>>>>>>>>>>>>> composed/driven
> >>>>>>>>>>>>>>>>>>>>>>>>>> by
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hypervisor
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> implementations to
> >>>>>> allocate
> >>>>>>> space/create
> >>>>>>>>>>>>>>>>>>> structures
> >>>>>>>>>>>>>>>>>>>>>>> per
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> their
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> needs.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> we
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> don't invert this
> >>>>>> dependency, we are
> >>>>>>>>>> going
> >>>>>>>>>>>>>> to
> >>>>>>>>>>>>>>>>>>> end
> >>>>>>>>>>>>>>>>>>>>>>> with a
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> massive
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> n-to-n
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> problem that will
> >> make
> >>>> the
> >>>>>> system
> >>>>>>>>>>>>>> increasingly
> >>>>>>>>>>>>>>>>>>>>>>>>>> difficult to
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> maintain
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> enhance.  Am I
> >>>>>> understanding that the
> >>>>>>>>>> Xen
> >>>>>>>>>>>>>>>>>>> specific
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> SolidFire
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> code
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> is
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> located in the
> >>>>>> CitrixResourceBase
> >>>>>>> class?
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Thanks,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -John
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Jun 3, 2013 at
> >>>> 3:12
> >>>>>> PM, Mike
> >>>>>>>>>>>>>>>>>>> Tutkowski <
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>> mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> To delve into this in a
> >>>> bit
> >>>>>> more
> >>>>>>>>>> detail:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Prior to 4.2 and aside
> >>>> from
> >>>>>> one setup
> >>>>>>>>>>>>>> method
> >>>>>>>>>>>>>>>>>>> for
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> XenServer,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> admin
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> had
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> to first create a
> >>>> volume on
> >>>>>> the
> >>>>>>> storage
> >>>>>>>>>>>>>>>>>>> system,
> >>>>>>>>>>>>>>>>>>>>> then
> >>>>>>>>>>>>>>>>>>>>>>> go
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> into
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> hypervisor
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> to set up a data
> >>>> structure
> >>>>>> to make
> >>>>>>> use
> >>>>>>>>>> of
> >>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>> volume
> >>>>>>>>>>>>>>>>>>>>>>>>>> (ex.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> a
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> storage
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> repository on
> >>>> XenServer
> >>>>>> or a
> >>>>>>> datastore
> >>>>>>>>>> on
> >>>>>>>>>>>>>>>>>>> ESX(i)).
> >>>>>>>>>>>>>>>>>>>>>>> VMs
> >>>>>>>>>>>>>>>>>>>>>>>>>> and
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> data
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> disks
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> then
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> shared this storage
> >>>>>> system's volume.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> With Edison's new
> >>>> storage
> >>>>>> framework,
> >>>>>>>>>>>>>> storage
> >>>>>>>>>>>>>>>>>>> need
> >>>>>>>>>>>>>>>>>>>>> no
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> longer
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> be so
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> static
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and you can easily
> >>>> create
> >>>>>> a 1:1
> >>>>>>>>>>>>>> relationship
> >>>>>>>>>>>>>>>>>>>>> between
> >>>>>>>>>>>>>>>>>>>>>>> a
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> storage
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> system's
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> volume and the VM's
> >>>> data
> >>>>>> disk
> >>>>>>>>>> (necessary
> >>>>>>>>>>>>>> for
> >>>>>>>>>>>>>>>>>>>>> storage
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> Quality
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> of
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Service).
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> You can now write a
> >>>> plug-
> >>>>>> in that is
> >>>>>>>>>> called
> >>>>>>>>>>>>>> to
> >>>>>>>>>>>>>>>>>>>>>>>>>> dynamically
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> create
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> delete
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> volumes as needed.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> The problem that the
> >>>>>> storage
> >>>>>>> framework
> >>>>>>>>>> did
> >>>>>>>>>>>>>>>>>>> not
> >>>>>>>>>>>>>>>>>>>>>>> address
> >>>>>>>>>>>>>>>>>>>>>>>>>> is
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> in
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> creating
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> deleting the
> >>>> hypervisor-
> >>>>>> specific data
> >>>>>>>>>>>>>>>>>>> structure
> >>>>>>>>>>>>>>>>>>>>> when
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> performing an
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> attach/detach.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> That being the case,
> >>>> I've
> >>>>>> been
> >>>>>>>>>> enhancing
> >>>>>>>>>>>> it
> >>>>>>>>>>>>>>>>>>> to do
> >>>>>>>>>>>>>>>>>>>>> so.
> >>>>>>>>>>>>>>>>>>>>>>>>>> I've
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> got
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> XenServer
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> worked out and
> >>>>>> submitted. I've got
> >>>>>>>>>> ESX(i)
> >>>>>>>>>>>>>> in
> >>>>>>>>>>>>>>>>>>> my
> >>>>>>>>>>>>>>>>>>>>>>> sandbox
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> and
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> can
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> submit
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> this
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> if we extend the 4.2
> >>>>>> freeze date.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Does that help a
> >> bit? :)
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Jun 3, 2013
> >> at
> >>>>>> 1:03 PM, Mike
> >>>>>>>>>>>>>>>>>>> Tutkowski <
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>> mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hi John,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> The storage plug-in
> >> -
> >>>> by
> >>>>>> itself - is
> >>>>>>>>>>>>>>>>>>> hypervisor
> >>>>>>>>>>>>>>>>>>>>>>>>>> agnostic.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> The issue is with the
> >>>>>> volume-attach
> >>>>>>>>>> logic
> >>>>>>>>>>>>>>>>>>> (in the
> >>>>>>>>>>>>>>>>>>>>>>>>>> agent
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> code).
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> The
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> storage
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> framework calls into
> >>>> the
> >>>>>> plug-in to
> >>>>>>>>>> have
> >>>>>>>>>>>>>> it
> >>>>>>>>>>>>>>>>>>>>> create a
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> volume
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> as
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> needed,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> but
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> when the time
> >>>> comes to
> >>>>>> attach the
> >>>>>>>>>> volume
> >>>>>>>>>>>>>> to
> >>>>>>>>>>>>>>>>>>> a
> >>>>>>>>>>>>>>>>>>>>>>>>>> hypervisor,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> attach
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> logic
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> has to be smart
> >>>> enough
> >>>>>> to recognize
> >>>>>>>>>> it's
> >>>>>>>>>>>>>>>>>>> being
> >>>>>>>>>>>>>>>>>>>>>>>>>> invoked on
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> zone-wide
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> storage
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> (where the volume
> >>>> has
> >>>>>> just been
> >>>>>>>>>> created)
> >>>>>>>>>>>>>> and
> >>>>>>>>>>>>>>>>>>>>> create,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> say, a
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> storage
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> repository (for
> >>>>>> XenServer) or a
> >>>>>>>>>> datastore
> >>>>>>>>>>>>>>>>>>> (for
> >>>>>>>>>>>>>>>>>>>>>>>>>> VMware) to
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> make
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> use
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> of the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> volume that was
> >> just
> >>>>>> created.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I've been spending
> >>>> most
> >>>>>> of my time
> >>>>>>>>>>>>>> recently
> >>>>>>>>>>>>>>>>>>> making
> >>>>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> attach
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> logic
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> work
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> in the agent code.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Does that clear it up?
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Thanks!
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Jun 3, 2013
> >>>> at
> >>>>>> 12:48 PM,
> >>>>>>> John
> >>>>>>>>>>>>>>>>>>> Burwell <
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> jburwell@basho.com>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Mike,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you explain
> >>>> why
> >>>>>> the the storage
> >>>>>>>>>>>>>> driver
> >>>>>>>>>>>>>>>>>>> is
> >>>>>>>>>>>>>>>>>>>>>>>>>> hypervisor
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> specific?
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Thanks,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -John
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Jun 3, 2013, at
> >>>> 1:21
> >>>>>> PM, Mike
> >>>>>>>>>>>>>> Tutkowski
> >>>>>>>>>>>>>>>>>>> <
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>> mike.tutkowski@solidfire.com>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Yes, ultimately I
> >>>>>> would like to
> >>>>>>>>>> support
> >>>>>>>>>>>>>>>>>>> all
> >>>>>>>>>>>>>>>>>>>>>>>>>> hypervisors
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> CloudStack
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> supports. I think
> >>>> I'm
> >>>>>> just out of
> >>>>>>>>>> time
> >>>>>>>>>>>>>>>>>>> for 4.2
> >>>>>>>>>>>>>>>>>>>>> to
> >>>>>>>>>>>>>>>>>>>>>>>>>> get
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> KVM
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> in.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Right now this
> >>>> plug-in
> >>>>>> supports
> >>>>>>>>>>>>>> XenServer.
> >>>>>>>>>>>>>>>>>>>>>>>>>> Depending on
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> what
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> we do
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> with
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> regards to 4.2
> >>>> feature
> >>>>>> freeze, I
> >>>>>>>>>> have
> >>>>>>>>>>>> it
> >>>>>>>>>>>>>>>>>>> working
> >>>>>>>>>>>>>>>>>>>>>>> for
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> VMware in
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> my
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> sandbox,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> as well.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Also, just to be
> >>>> clear,
> >>>>>> this is
> >>>>>>> all
> >>>>>>>>>> in
> >>>>>>>>>>>>>>>>>>> regards
> >>>>>>>>>>>>>>>>>>>>> to
> >>>>>>>>>>>>>>>>>>>>>>>>>> Disk
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Offerings.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> plan to
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> support
> >> Compute
> >>>>>> Offerings post
> >>>>>>> 4.2.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Jun 3,
> >>>> 2013
> >>>>>> at 11:14 AM,
> >>>>>>>>>> Kelcey
> >>>>>>>>>>>>>>>>>>> Jamison
> >>>>>>>>>>>>>>>>>>>>>>>>>> Damage
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> <
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> kelcey@bbits.ca
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Is there any
> >> plan
> >>>> on
> >>>>>> supporting
> >>>>>>>>>> KVM in
> >>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>> patch
> >>>>>>>>>>>>>>>>>>>>>>>>>> cycle
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> post
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 4.2?
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ----- Original
> >>>>>> Message -----
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> From: "Mike
> >>>>>> Tutkowski" <
> >>>>>>>>>>>>>>>>>>>>>>>>>> mike.tutkowski@solidfire.com>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> To:
> >>>>>> dev@cloudstack.apache.org
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Sent: Monday,
> >>>> June
> >>>>>> 3, 2013
> >>>>>>>>>> 10:12:32 AM
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Subject: Re:
> >>>> [MERGE]
> >>>>>>>>>>>> disk_io_throttling
> >>>>>>>>>>>>>>>>>>> to
> >>>>>>>>>>>>>>>>>>>>> MASTER
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I agree on
> >>>> merging
> >>>>>> Wei's feature
> >>>>>>>>>>>> first,
> >>>>>>>>>>>>>>>>>>> then
> >>>>>>>>>>>>>>>>>>>>>>> mine.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If his feature is
> >>>> for
> >>>>>> KVM only,
> >>>>>>>>>> then
> >>>>>>>>>>>> it
> >>>>>>>>>>>>>>>>>>> is a
> >>>>>>>>>>>>>>>>>>>>> non
> >>>>>>>>>>>>>>>>>>>>>>>>>> issue
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> as I
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> don't
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> support
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> KVM in 4.2.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Jun 3,
> >>>> 2013
> >>>>>> at 8:55 AM,
> >>>>>>> Wei
> >>>>>>>>>>>>>> ZHOU
> >>>>>>>>>>>>>>>>>>> <
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ustcweizhou@gmail.com>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> John,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> For the billing,
> >>>> as
> >>>>>> no one works
> >>>>>>>>>> on
> >>>>>>>>>>>>>>>>>>> billing
> >>>>>>>>>>>>>>>>>>>>> now,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> users
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> need
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> to
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> calculate
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the billing by
> >>>>>> themselves. They
> >>>>>>>>>> can
> >>>>>>>>>>>>>> get
> >>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> service_offering
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> disk_offering
> >> of
> >>>> a
> >>>>>> VMs and
> >>>>>>> volumes
> >>>>>>>>>>>> for
> >>>>>>>>>>>>>>>>>>>>>>>>>> calculation.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> Of
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> course
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> it is
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> better
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> to tell user the
> >>>>>> exact
> >>>>>>> limitation
> >>>>>>>>>>>>>> value
> >>>>>>>>>>>>>>>>>>> of
> >>>>>>>>>>>>>>>>>>>>>>>>>> individual
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> volume,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> network
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> rate limitation
> >>>> for
> >>>>>> nics as
> >>>>>>> well.
> >>>>>>>>>> I
> >>>>>>>>>>>>>> can
> >>>>>>>>>>>>>>>>>>> work
> >>>>>>>>>>>>>>>>>>>>> on
> >>>>>>>>>>>>>>>>>>>>>>> it
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> later. Do
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> you
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> think it
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> is a part of I/O
> >>>>>> throttling?
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Sorry my
> >>>>>> misunstand the second
> >>>>>>> the
> >>>>>>>>>>>>>>>>>>> question.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Agree with
> >>>> what
> >>>>>> you said about
> >>>>>>> the
> >>>>>>>>>>>> two
> >>>>>>>>>>>>>>>>>>>>> features.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -Wei
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 2013/6/3 John
> >>>>>> Burwell <
> >>>>>>>>>>>>>>>>>>> jburwell@basho.com>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Wei,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Jun 3,
> >> 2013,
> >>>> at
> >>>>>> 2:13 AM, Wei
> >>>>>>>>>> ZHOU
> >>>>>>>>>>>>>> <
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ustcweizhou@gmail.com
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hi John,
> >> Mike
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I hope
> >> Mike's
> >>>>>> aswer helps you.
> >>>>>>>>>> I am
> >>>>>>>>>>>>>>>>>>> trying
> >>>>>>>>>>>>>>>>>>>>> to
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> adding
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> more.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> (1) I think
> >>>> billing
> >>>>>> should
> >>>>>>>>>> depend
> >>>>>>>>>>>> on
> >>>>>>>>>>>>>>>>>>> IO
> >>>>>>>>>>>>>>>>>>>>>>>>>> statistics
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> rather
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> than
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> IOPS
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> limitation.
> >>>> Please
> >>>>>> review
> >>>>>>>>>>>>>>>>>>> disk_io_stat if
> >>>>>>>>>>>>>>>>>>>>> you
> >>>>>>>>>>>>>>>>>>>>>>>>>> have
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> time.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> disk_io_stat
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> can
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> get the IO
> >>>>>> statistics
> >>>>>>> including
> >>>>>>>>>>>>>>>>>>> bytes/iops
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> read/write
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> for
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> an
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> individual
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> virtual
> >>>> machine.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Going by the
> >>>> AWS
> >>>>>> model,
> >>>>>>> customers
> >>>>>>>>>>>> are
> >>>>>>>>>>>>>>>>>>> billed
> >>>>>>>>>>>>>>>>>>>>>>> more
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> for
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> volumes
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> with
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> provisioned
> >>>> IOPS,
> >>>>>> as well as,
> >>>>>>> for
> >>>>>>>>>>>>>> those
> >>>>>>>>>>>>>>>>>>>>>>>>>> operations (
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>> http://aws.amazon.com/ebs/).
> >>>>>>> I
> >>>>>>>>>>>>>> would
> >>>>>>>>>>>>>>>>>>>>> imagine
> >>>>>>>>>>>>>>>>>>>>>>>>>> our
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> users
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> would
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> like
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> option to
> >>>> employ
> >>>>>> similar cost
> >>>>>>>>>>>> models.
> >>>>>>>>>>>>>>>>>>> Could
> >>>>>>>>>>>>>>>>>>>>> an
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> operator
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> implement
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> such a
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> billing model
> >>>> in
> >>>>>> the current
> >>>>>>>>>> patch?
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> (2) Do you
> >>>> mean
> >>>>>> IOPS runtime
> >>>>>>>>>>>> change?
> >>>>>>>>>>>>>>>>>>> KVM
> >>>>>>>>>>>>>>>>>>>>>>>>>> supports
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> setting
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> IOPS/BPS
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> limitation
> >> for
> >>>> a
> >>>>>> running
> >>>>>>> virtual
> >>>>>>>>>>>>>>>>>>> machine
> >>>>>>>>>>>>>>>>>>>>>>> through
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> command
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> line.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> However,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> CloudStack
> >>>> does
> >>>>>> not support
> >>>>>>>>>>>> changing
> >>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>> parameters
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> of a
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> created
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> offering
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> (computer
> >>>>>> offering or disk
> >>>>>>>>>>>>>> offering).
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I meant at
> >> the
> >>>>>> Java interface
> >>>>>>>>>> level.
> >>>>>>>>>>>>>> I
> >>>>>>>>>>>>>>>>>>>>>>> apologize
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> for
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> being
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> unclear.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> we more
> >>>>>> generalize allocation
> >>>>>>>>>>>>>>>>>>> algorithms
> >>>>>>>>>>>>>>>>>>>>> with a
> >>>>>>>>>>>>>>>>>>>>>>>>>> set
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> of
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> interfaces
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> describe the
> >>>>>> service guarantees
> >>>>>>>>>>>>>>>>>>> provided by a
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> resource?
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> (3) It is a
> >>>> good
> >>>>>> question.
> >>>>>>>>>> Maybe it
> >>>>>>>>>>>>>> is
> >>>>>>>>>>>>>>>>>>>>> better
> >>>>>>>>>>>>>>>>>>>>>>> to
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> commit
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Mike's
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> patch
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> after
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>> disk_io_throttling as Mike
> >>>>>>>>>> needs to
> >>>>>>>>>>>>>>>>>>> consider
> >>>>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> limitation in
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> hypervisor
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> type, I think.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I will expand
> >>>> on
> >>>>>> my thoughts
> >>>>>>> in a
> >>>>>>>>>>>>>> later
> >>>>>>>>>>>>>>>>>>>>>>> response
> >>>>>>>>>>>>>>>>>>>>>>>>>> to
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Mike
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> regarding
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> touch points
> >>>>>> between these two
> >>>>>>>>>>>>>>>>>>> features.  I
> >>>>>>>>>>>>>>>>>>>>>>> think
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> that
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>> disk_io_throttling
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> will need to
> >> be
> >>>>>> merged before
> >>>>>>>>>>>>>>>>>>> SolidFire, but
> >>>>>>>>>>>>>>>>>>>>> I
> >>>>>>>>>>>>>>>>>>>>>>>>>> think
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> we need
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> closer
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> coordination
> >>>>>> between the
> >>>>>>> branches
> >>>>>>>>>>>>>>>>>>> (possibly
> >>>>>>>>>>>>>>>>>>>>>>> have
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> have
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> solidfire
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> track
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>> disk_io_throttling)
> >>>>>> to
> >>>>>>>>>> coordinate on
> >>>>>>>>>>>>>>>>>>> this
> >>>>>>>>>>>>>>>>>>>>>>> issue.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> - Wei
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 2013/6/3
> >>>> John
> >>>>>> Burwell <
> >>>>>>>>>>>>>>>>>>> jburwell@basho.com>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Mike,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> The things
> >> I
> >>>>>> want to
> >>>>>>> understand
> >>>>>>>>>>>> are
> >>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>> following:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 1) Is there
> >>>>>> value in
> >>>>>>> capturing
> >>>>>>>>>>>> IOPS
> >>>>>>>>>>>>>>>>>>>>> policies
> >>>>>>>>>>>>>>>>>>>>>>> be
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> captured
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> in
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> a
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> common
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> data
> >> model
> >>>> (e.g.
> >>>>>> for
> >>>>>>>>>> billing/usage
> >>>>>>>>>>>>>>>>>>>>> purposes,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> expressing
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> offerings).
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 2) Should
> >>>> there
> >>>>>> be a common
> >>>>>>>>>>>>>>>>>>> interface model
> >>>>>>>>>>>>>>>>>>>>>>> for
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> reasoning
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> about
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> IOP
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >> provisioning
> >>>> at
> >>>>>> runtime?
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 3) How are
> >>>>>> conflicting
> >>>>>>>>>> provisioned
> >>>>>>>>>>>>>>>>>>> IOPS
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> configurations
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> between
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> a
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> hypervisor
> >>>> and
> >>>>>> storage device
> >>>>>>>>>>>>>>>>>>> reconciled?
> >>>>>>>>>>>>>>>>>>>>> In
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> particular,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> a
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> scenario
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> where
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> is lead to
> >>>>>> believe (and
> >>>>>>> billed)
> >>>>>>>>>>>> for
> >>>>>>>>>>>>>>>>>>> more
> >>>>>>>>>>>>>>>>>>>>> IOPS
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> configured
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> for
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> a VM
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> than a
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> storage
> >>>> device
> >>>>>> has been
> >>>>>>>>>> configured
> >>>>>>>>>>>>>> to
> >>>>>>>>>>>>>>>>>>>>>>> deliver.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Another
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> scenario
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> could a
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> consistent
> >>>>>> configuration
> >>>>>>>>>> between a
> >>>>>>>>>>>>>>>>>>> VM and a
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> storage
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> device at
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> creation
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> time, but a
> >>>>>> later
> >>>>>>> modification
> >>>>>>>>>> to
> >>>>>>>>>>>>>>>>>>> storage
> >>>>>>>>>>>>>>>>>>>>>>>>>> device
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> introduces
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> logical
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>> inconsistency.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Thanks,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -John
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Jun 2,
> >>>> 2013,
> >>>>>> at 8:38 PM,
> >>>>>>>>>> Mike
> >>>>>>>>>>>>>>>>>>> Tutkowski
> >>>>>>>>>>>>>>>>>>>>> <
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>> mike.tutkowski@solidfire.com>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hi John,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I believe
> >>>> Wei's
> >>>>>> feature deals
> >>>>>>>>>> with
> >>>>>>>>>>>>>>>>>>>>>>> controlling
> >>>>>>>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> max
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> number of
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> IOPS
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> from
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the
> >>>> hypervisor
> >>>>>> side.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> My
> >> feature
> >>>> is
> >>>>>> focused on
> >>>>>>>>>>>>>> controlling
> >>>>>>>>>>>>>>>>>>> IOPS
> >>>>>>>>>>>>>>>>>>>>>>> from
> >>>>>>>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> storage
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> system
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> side.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I hope that
> >>>>>> helps. :)
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Sun,
> >> Jun
> >>>> 2,
> >>>>>> 2013 at 6:35
> >>>>>>> PM,
> >>>>>>>>>>>>>> John
> >>>>>>>>>>>>>>>>>>>>> Burwell
> >>>>>>>>>>>>>>>>>>>>>>> <
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> jburwell@basho.com
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Wei,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> My
> >>>> opinion is
> >>>>>> that no
> >>>>>>> features
> >>>>>>>>>>>>>>>>>>> should be
> >>>>>>>>>>>>>>>>>>>>>>>>>> merged
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> until all
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> functional
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> issues
> >>>> have
> >>>>>> been resolved
> >>>>>>> and
> >>>>>>>>>> it
> >>>>>>>>>>>>>> is
> >>>>>>>>>>>>>>>>>>> ready
> >>>>>>>>>>>>>>>>>>>>> to
> >>>>>>>>>>>>>>>>>>>>>>>>>> turn
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> over to
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> test.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Until
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> total Ops
> >>>> vs
> >>>>>> discrete
> >>>>>>>>>> read/write
> >>>>>>>>>>>>>>>>>>> ops issue
> >>>>>>>>>>>>>>>>>>>>>>> is
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> addressed
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> re-reviewed
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> by
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Wido, I
> >>>> don't
> >>>>>> think this
> >>>>>>>>>> criteria
> >>>>>>>>>>>>>>>>>>> has been
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> satisfied.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Also,
> >> how
> >>>>>> does this work
> >>>>>>>>>>>>>>>>>>>>>>> intersect/compliment
> >>>>>>>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> SolidFire
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> patch
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> (
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>> https://reviews.apache.org/r/11479/
> >>>>>>>>>>>>>>>>>>> )?
> >>>>>>>>>>>>>>>>>>>>> As I
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> understand
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> it
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> work
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> is
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> also
> >>>> involves
> >>>>>> provisioned
> >>>>>>>>>> IOPS. I
> >>>>>>>>>>>>>>>>>>> would
> >>>>>>>>>>>>>>>>>>>>>>> like
> >>>>>>>>>>>>>>>>>>>>>>>>>> to
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ensure
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> we
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> don't
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> have a
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> scenario
> >>>>>> where provisioned
> >>>>>>>>>> IOPS
> >>>>>>>>>>>> in
> >>>>>>>>>>>>>>>>>>> KVM and
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> SolidFire are
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> unnecessarily
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>> incompatible.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Thanks,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -John
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Jun 1,
> >>>> 2013,
> >>>>>> at 6:47 AM,
> >>>>>>>>>> Wei
> >>>>>>>>>>>>>>>>>>> ZHOU <
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ustcweizhou@gmail.com
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Wido,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Sure. I
> >> will
> >>>>>> change it next
> >>>>>>>>>> week.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -Wei
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 2013/6/1
> >>>>>> Wido den Hollander
> >>>>>>> <
> >>>>>>>>>>>>>>>>>>>>> wido@widodh.nl
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hi Wei,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On
> >>>>>> 06/01/2013 08:24 AM, Wei
> >>>>>>>>>> ZHOU
> >>>>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Wido,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Exactly. I
> >>>> have
> >>>>>> pushed the
> >>>>>>>>>>>>>> features
> >>>>>>>>>>>>>>>>>>> into
> >>>>>>>>>>>>>>>>>>>>>>>>>> master.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If anyone
> >>>>>> object thems for
> >>>>>>>>>>>>>> technical
> >>>>>>>>>>>>>>>>>>>>> reason
> >>>>>>>>>>>>>>>>>>>>>>>>>> till
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Monday,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> will
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> revert
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> them.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> For the
> >>>> sake
> >>>>>> of clarity I
> >>>>>>> just
> >>>>>>>>>>>>>> want
> >>>>>>>>>>>>>>>>>>> to
> >>>>>>>>>>>>>>>>>>>>>>> mention
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> again
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that we
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> should
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> change
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the total
> >>>> IOps
> >>>>>> to R/W IOps
> >>>>>>>>>> asap
> >>>>>>>>>>>> so
> >>>>>>>>>>>>>>>>>>> that we
> >>>>>>>>>>>>>>>>>>>>>>>>>> never
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> release
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> a
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> version
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> with
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> only total
> >>>> IOps.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> You laid
> >>>> the
> >>>>>> groundwork for
> >>>>>>>>>> the
> >>>>>>>>>>>>>> I/O
> >>>>>>>>>>>>>>>>>>>>>>> throttling
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> and
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that's
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> great!
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> We
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> should
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> however
> >>>>>> prevent that we
> >>>>>>> create
> >>>>>>>>>>>>>>>>>>> legacy from
> >>>>>>>>>>>>>>>>>>>>>>> day
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> #1.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Wido
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -Wei
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >> 2013/5/31
> >>>>>> Wido den
> >>>>>>> Hollander <
> >>>>>>>>>>>>>>>>>>>>>>> wido@widodh.nl>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On
> >>>>>> 05/31/2013 03:59 PM, John
> >>>>>>>>>>>>>> Burwell
> >>>>>>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Wido,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> +1 -- this
> >>>>>> enhancement must
> >>>>>>> to
> >>>>>>>>>>>>>>>>>>> discretely
> >>>>>>>>>>>>>>>>>>>>>>>>>> support
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> read
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> write
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> IOPS.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> don't see
> >>>> how
> >>>>>> it could be
> >>>>>>>>>> fixed
> >>>>>>>>>>>>>>>>>>> later
> >>>>>>>>>>>>>>>>>>>>>>> because
> >>>>>>>>>>>>>>>>>>>>>>>>>> I
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> don't see
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> how we
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> correctly
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> split total
> >>>> IOPS
> >>>>>> into read
> >>>>>>> and
> >>>>>>>>>>>>>>>>>>> write.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> Therefore, we
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> would
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> be
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> stuck
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> with a
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> total
> >>>>>> unless/until we
> >>>>>>> decided
> >>>>>>>>>> to
> >>>>>>>>>>>>>>>>>>> break
> >>>>>>>>>>>>>>>>>>>>>>>>>> backwards
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> compatibility.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What
> >> Wei
> >>>>>> meant was merging
> >>>>>>> it
> >>>>>>>>>>>> into
> >>>>>>>>>>>>>>>>>>> master
> >>>>>>>>>>>>>>>>>>>>>>> now
> >>>>>>>>>>>>>>>>>>>>>>>>>> so
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that it
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> will go
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> in
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 4.2
> >> branch
> >>>>>> and add Read /
> >>>>>>>>>> Write
> >>>>>>>>>>>>>> IOps
> >>>>>>>>>>>>>>>>>>>>> before
> >>>>>>>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> 4.2
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> release
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> so
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 4.2
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> will be
> >>>>>> released with Read
> >>>>>>> and
> >>>>>>>>>>>>>> Write
> >>>>>>>>>>>>>>>>>>>>> instead
> >>>>>>>>>>>>>>>>>>>>>>>>>> of
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Total
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> IOps.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is to
> >>>>>> make the May 31st
> >>>>>>>>>>>>>> feature
> >>>>>>>>>>>>>>>>>>>>> freeze
> >>>>>>>>>>>>>>>>>>>>>>>>>> date.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> But if
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> window
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> moves
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> (see
> >> other
> >>>>>> threads) then it
> >>>>>>>>>> won't
> >>>>>>>>>>>>>> be
> >>>>>>>>>>>>>>>>>>>>>>> necessary
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> to do
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Wido
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I also
> >>>>>> completely agree that
> >>>>>>>>>>>> there
> >>>>>>>>>>>>>>>>>>> is no
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> association
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> between
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> network
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> disk I/O.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Thanks,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -John
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On May
> >> 31,
> >>>>>> 2013, at 9:51 AM,
> >>>>>>>>>> Wido
> >>>>>>>>>>>>>>>>>>> den
> >>>>>>>>>>>>>>>>>>>>>>>>>> Hollander <
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wido@widodh.nl
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hi Wei,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On
> >>>>>> 05/31/2013 03:13 PM, Wei
> >>>>>>>>>> ZHOU
> >>>>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hi Wido,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Thanks.
> >>>> Good
> >>>>>> question.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I  thought
> >>>>>> about at the
> >>>>>>>>>>>> beginning.
> >>>>>>>>>>>>>>>>>>>>> Finally I
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> decided to
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ignore
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >> difference
> >>>> of
> >>>>>> read and write
> >>>>>>>>>>>>>> mainly
> >>>>>>>>>>>>>>>>>>>>> because
> >>>>>>>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> network
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> throttling
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> did
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> not
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> care the
> >>>>>> difference of sent
> >>>>>>>>>> and
> >>>>>>>>>>>>>>>>>>> received
> >>>>>>>>>>>>>>>>>>>>>>>>>> bytes as
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> well.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> That
> >>>>>> reasoning seems odd.
> >>>>>>>>>>>>>>>>>>> Networking and
> >>>>>>>>>>>>>>>>>>>>>>> disk
> >>>>>>>>>>>>>>>>>>>>>>>>>> I/O
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> completely
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> different.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Disk I/O
> >> is
> >>>>>> much more
> >>>>>>>>>> expensive
> >>>>>>>>>>>> in
> >>>>>>>>>>>>>>>>>>> most
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> situations
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> then
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> network
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> bandwith.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>> Implementing
> >>>>>> it will be some
> >>>>>>>>>>>>>>>>>>> copy-paste
> >>>>>>>>>>>>>>>>>>>>>>> work.
> >>>>>>>>>>>>>>>>>>>>>>>>>> It
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> could be
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>> implemented
> >>>>>> in
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> few days.
> >>>> For
> >>>>>> the deadline
> >>>>>>> of
> >>>>>>>>>>>>>>>>>>> feature
> >>>>>>>>>>>>>>>>>>>>>>> freeze,
> >>>>>>>>>>>>>>>>>>>>>>>>>> I
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> will
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> implement
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> it
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> after
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that , if
> >>>>>> needed.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> It think
> >> it's
> >>>> a
> >>>>>> feature we
> >>>>>>>>>> can't
> >>>>>>>>>>>>>>>>>>> miss. But
> >>>>>>>>>>>>>>>>>>>>>>> if
> >>>>>>>>>>>>>>>>>>>>>>>>>> it
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> goes
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> into
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 4.2
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> window
> >>>> we
> >>>>>> have to make sure
> >>>>>>> we
> >>>>>>>>>>>>>> don't
> >>>>>>>>>>>>>>>>>>>>> release
> >>>>>>>>>>>>>>>>>>>>>>>>>> with
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> only
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> total
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> IOps
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> fix
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> it in 4.3,
> >>>> that
> >>>>>> will confuse
> >>>>>>>>>>>>>> users.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Wido
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -Wei
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >> 2013/5/31
> >>>>>> Wido den
> >>>>>>> Hollander <
> >>>>>>>>>>>>>>>>>>>>>>> wido@widodh.nl>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hi Wei,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On
> >>>>>> 05/30/2013 06:03 PM, Wei
> >>>>>>>>>> ZHOU
> >>>>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hi,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I would
> >>>> like to
> >>>>>> merge
> >>>>>>>>>>>>>>>>>>> disk_io_throttling
> >>>>>>>>>>>>>>>>>>>>>>>>>> branch
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> into
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> master.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If nobody
> >>>>>> object, I will
> >>>>>>> merge
> >>>>>>>>>>>>>> into
> >>>>>>>>>>>>>>>>>>> master
> >>>>>>>>>>>>>>>>>>>>>>> in
> >>>>>>>>>>>>>>>>>>>>>>>>>> 48
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> hours.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> The
> >>>> purpose
> >>>>>> is :
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Virtual
> >>>>>> machines are running
> >>>>>>>>>> on
> >>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>> same
> >>>>>>>>>>>>>>>>>>>>>>>>>> storage
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> device
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> (local
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> storage
> >> or
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> share
> >>>> strage).
> >>>>>> Because of
> >>>>>>> the
> >>>>>>>>>>>> rate
> >>>>>>>>>>>>>>>>>>>>>>> limitation
> >>>>>>>>>>>>>>>>>>>>>>>>>> of
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> device
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> (such as
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> iops), if
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> one VM
> >>>> has
> >>>>>> large disk
> >>>>>>>>>> operation,
> >>>>>>>>>>>>>> it
> >>>>>>>>>>>>>>>>>>> may
> >>>>>>>>>>>>>>>>>>>>>>> affect
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> disk
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> performance
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> of
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> other
> >> VMs
> >>>>>> running on the
> >>>>>>> same
> >>>>>>>>>>>>>>>>>>> storage
> >>>>>>>>>>>>>>>>>>>>>>> device.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> It is
> >>>> neccesary
> >>>>>> to set the
> >>>>>>>>>>>> maximum
> >>>>>>>>>>>>>>>>>>> rate
> >>>>>>>>>>>>>>>>>>>>> and
> >>>>>>>>>>>>>>>>>>>>>>>>>> limit
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> disk
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I/O
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> of
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> VMs.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Looking
> >> at
> >>>> the
> >>>>>> code I see
> >>>>>>> you
> >>>>>>>>>>>> make
> >>>>>>>>>>>>>>>>>>> no
> >>>>>>>>>>>>>>>>>>>>>>>>>> difference
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> between
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Read
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Write
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> IOps.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Qemu
> >> and
> >>>>>> libvirt support
> >>>>>>>>>> setting
> >>>>>>>>>>>>>>>>>>> both a
> >>>>>>>>>>>>>>>>>>>>>>>>>> different
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> rate
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> for
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Read
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Write
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> IOps
> >>>> which
> >>>>>> could benefit a
> >>>>>>>>>> lot of
> >>>>>>>>>>>>>>>>>>> users.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> It's also
> >>>>>> strange, in the
> >>>>>>>>>> polling
> >>>>>>>>>>>>>>>>>>> side you
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> collect
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> both
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Read
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Write
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> IOps, but
> >>>> on
> >>>>>> the throttling
> >>>>>>>>>> side
> >>>>>>>>>>>>>>>>>>> you only
> >>>>>>>>>>>>>>>>>>>>> go
> >>>>>>>>>>>>>>>>>>>>>>>>>> for
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> a
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> global
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> value.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Write
> >> IOps
> >>>> are
> >>>>>> usually much
> >>>>>>>>>> more
> >>>>>>>>>>>>>>>>>>> expensive
> >>>>>>>>>>>>>>>>>>>>>>>>>> then
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> Read
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> IOps,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> so it
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> seems
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> like a
> >> valid
> >>>>>> use-case where
> >>>>>>>>>> that
> >>>>>>>>>>>>>> an
> >>>>>>>>>>>>>>>>>>> admin
> >>>>>>>>>>>>>>>>>>>>>>>>>> would
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> set
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> a
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> lower
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> value
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> for
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> write
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> IOps vs
> >>>> Read
> >>>>>> IOps.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Since this
> >>>> only
> >>>>>> supports KVM
> >>>>>>>>>> at
> >>>>>>>>>>>>>> this
> >>>>>>>>>>>>>>>>>>>>> point I
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> think
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> it
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> would
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> be
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> of
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> great
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> value to
> >> at
> >>>>>> least have the
> >>>>>>>>>>>>>>>>>>> mechanism in
> >>>>>>>>>>>>>>>>>>>>>>> place
> >>>>>>>>>>>>>>>>>>>>>>>>>> to
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> support
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> both,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>> implementing
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> this later
> >>>>>> would be a lot of
> >>>>>>>>>>>> work.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If a
> >>>> hypervisor
> >>>>>> doesn't
> >>>>>>>>>> support
> >>>>>>>>>>>>>>>>>>> setting
> >>>>>>>>>>>>>>>>>>>>>>>>>> different
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> values
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> for
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> read
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> write you
> >>>> can
> >>>>>> always sum
> >>>>>>> both
> >>>>>>>>>> up
> >>>>>>>>>>>>>>>>>>> and set
> >>>>>>>>>>>>>>>>>>>>>>> that
> >>>>>>>>>>>>>>>>>>>>>>>>>> as
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> total
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> limit.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you
> >>>>>> explain why you
> >>>>>>>>>>>>>> implemented
> >>>>>>>>>>>>>>>>>>> it
> >>>>>>>>>>>>>>>>>>>>> this
> >>>>>>>>>>>>>>>>>>>>>>>>>> way?
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Wido
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> The
> >>>> feature
> >>>>>> includes:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> (1) set
> >> the
> >>>>>> maximum rate of
> >>>>>>>>>> VMs
> >>>>>>>>>>>>>> (in
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> disk_offering,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> global
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>> configuration)
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> (2)
> >> change
> >>>> the
> >>>>>> maximum rate
> >>>>>>> of
> >>>>>>>>>>>> VMs
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> (3) limit
> >>>> the
> >>>>>> disk rate
> >>>>>>> (total
> >>>>>>>>>>>> bps
> >>>>>>>>>>>>>>>>>>> and
> >>>>>>>>>>>>>>>>>>>>> iops)
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> JIRA
> >> ticket:
> >>>>>>>>>>>>>>>>>>>>> https://issues.apache.org/****
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> jira/browse/CLOUDSTACK-1192<ht**tps://
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>> issues.apache.org/****
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>> jira/browse/CLOUDSTACK-1192<
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>
> >>>> https://issues.apache.org/**jira/browse/CLOUDSTACK-
> >>>>>> 1192>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>> <ht**tps://
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> issues.apache.org/**jira/**browse/CLOUDSTACK-
> >>>> 1192<
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>> http://issues.apache.org/jira/**browse/CLOUDSTACK-1192
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> <**
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>
> >>>> https://issues.apache.org/**jira/browse/CLOUDSTACK-
> >>>>>> 1192<
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>> https://issues.apache.org/jira/browse/CLOUDSTACK-1192>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> FS (I will
> >>>>>> update later) :
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>
> >>>>>>>
> >>>>>>
> >>>>
> >> https://cwiki.apache.org/******confluence/display/CLOUDSTACK/******
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> <
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>
> >>>>>> https://cwiki.apache.org/****confluence/display/CLOUDSTACK/****
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> <
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>
> >>>>>> https://cwiki.apache.org/****confluence/display/**CLOUDSTACK/**
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> <
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>> https://cwiki.apache.org/**confluence/display/CLOUDSTACK/**
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>> VM+Disk+IO+Throttling<https://
> >>>>>>>>>>>>>> ****
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>> cwiki.apache.org/confluence/****
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> <
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> http://cwiki.apache.org/confluence/**>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>> display/CLOUDSTACK/VM+Disk+IO+****Throttling<
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> https://cwiki.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> **
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>
> >>>>>>
> >>>>
> >> apache.org/confluence/display/**CLOUDSTACK/VM+Disk+IO+**Throttling
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> <
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>
> >>>>>>
> >>>>
> >> https://cwiki.apache.org/confluence/display/CLOUDSTACK/VM+Disk+IO+Th
> >>>>>> rottling
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Merge
> >>>> check
> >>>>>> list :-
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> * Did you
> >>>>>> check the branch's
> >>>>>>>>>> RAT
> >>>>>>>>>>>>>>>>>>> execution
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> success?
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Yes
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> * Are
> >>>> there
> >>>>>> new dependencies
> >>>>>>>>>>>>>>>>>>> introduced?
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> No
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> * What
> >>>>>> automated testing
> >>>>>>> (unit
> >>>>>>>>>>>> and
> >>>>>>>>>>>>>>>>>>>>>>>>>> integration)
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> is
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> included
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> in
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> new
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> feature?
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Unit tests
> >>>> are
> >>>>>> added.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> * What
> >>>>>> testing has been done
> >>>>>>>>>> to
> >>>>>>>>>>>>>>>>>>> check for
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> potential
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> regressions?
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> (1) set
> >> the
> >>>>>> bytes rate and
> >>>>>>>>>> IOPS
> >>>>>>>>>>>>>>>>>>> rate on
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> CloudStack
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> UI.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> (2) VM
> >>>>>> operations, including
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> deploy,
> >>>> stop,
> >>>>>> start, reboot,
> >>>>>>>>>>>>>>>>>>> destroy,
> >>>>>>>>>>>>>>>>>>>>>>> expunge.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> migrate,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> restore
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> (3)
> >>>> Volume
> >>>>>> operations,
> >>>>>>>>>> including
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Attach,
> >>>>>> Detach
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> To
> >> review
> >>>> the
> >>>>>> code, you can
> >>>>>>>>>> try
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> git diff
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>> c30057635d04a2396f84c588127d7e******be42e503a7
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>> f2e5591b710d04cc86815044f5823e******73a4a58944
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Best
> >>>> regards,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Wei
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [1]
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>
> >>>>>>>
> >>>>>>
> >>>>
> >> https://cwiki.apache.org/******confluence/display/CLOUDSTACK/******
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> <
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>
> >>>>>> https://cwiki.apache.org/****confluence/display/CLOUDSTACK/****
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> <
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>
> >>>>>> https://cwiki.apache.org/****confluence/display/**CLOUDSTACK/**
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> <
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>> https://cwiki.apache.org/**confluence/display/CLOUDSTACK/**
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>> VM+Disk+IO+Throttling<https://
> >>>>>>>>>>>>>> ****
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>> cwiki.apache.org/confluence/****
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> <
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> http://cwiki.apache.org/confluence/**>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>> display/CLOUDSTACK/VM+Disk+IO+****Throttling<
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> https://cwiki.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> **
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>
> >>>>>>
> >>>>
> >> apache.org/confluence/display/**CLOUDSTACK/VM+Disk+IO+**Throttling
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> <
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>
> >>>>>>
> >>>>
> >> https://cwiki.apache.org/confluence/display/CLOUDSTACK/VM+Disk+IO+Th
> >>>>>> rottling
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [2]
> >>>>>>>>>> refs/heads/disk_io_throttling
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [3]
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>> https://issues.apache.org/******jira/browse/CLOUDSTACK-1301
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> <
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>
> >>>> https://issues.apache.org/****jira/browse/CLOUDSTACK-
> >>>>>> 1301
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>> <ht**tps://
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> issues.apache.org/****jira/browse/CLOUDSTACK-
> >>>> 1301<
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>
> >>>> https://issues.apache.org/**jira/browse/CLOUDSTACK-
> >>>>>> 1301>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>> <ht**tps://
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> issues.apache.org/**jira/**browse/CLOUDSTACK-
> >>>> 1301<
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>> http://issues.apache.org/jira/**browse/CLOUDSTACK-1301
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> <**
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>
> >>>> https://issues.apache.org/**jira/browse/CLOUDSTACK-
> >>>>>> 1301<
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>> https://issues.apache.org/jira/browse/CLOUDSTACK-1301>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>> <ht**tps://
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> issues.apache.org/****jira/**browse/CLOUDSTACK-
> >>>> 2071
> >>>>>>>>>>>>>>>>>>>>> <
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>
> >>>> http://issues.apache.org/**jira/**browse/CLOUDSTACK-
> >>>>>> 2071
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> **<
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>
> >>>> http://issues.apache.org/**jira/**browse/CLOUDSTACK-
> >>>>>> 2071
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> <
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>> http://issues.apache.org/jira/**browse/CLOUDSTACK-2071
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> <**
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>
> >>>> https://issues.apache.org/****jira/browse/CLOUDSTACK-
> >>>>>> 2071
> >>>>>>>>>>>>>>>>>>>>> <
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>
> >>>> https://issues.apache.org/**jira/browse/CLOUDSTACK-
> >>>>>> 2071>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>> <h**ttps://
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>> issues.apache.org/jira/**browse/CLOUDSTACK-2071<
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>> https://issues.apache.org/jira/browse/CLOUDSTACK-2071>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>> (**CLOUDSTACK-1301
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -     VM
> >>>> Disk
> >>>>>> I/O
> >>>>>>> Throttling)
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Mike
> >>>>>> Tutkowski*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Senior
> >>>>>> CloudStack Developer,
> >>>>>>>>>>>>>>>>>>> SolidFire
> >>>>>>>>>>>>>>>>>>>>> Inc.*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> e:
> >>>>>>>>>> mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> o:
> >>>> 303.746.7302
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Advancing
> >>>> the
> >>>>>> way the world
> >>>>>>>>>> uses
> >>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> cloud<
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>> http://solidfire.com/solution/overview/?video=play>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *(tm)*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Mike
> >>>> Tutkowski*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Senior
> >>>> CloudStack
> >>>>>> Developer,
> >>>>>>>>>>>> SolidFire
> >>>>>>>>>>>>>>>>>>> Inc.*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> e:
> >>>>>> mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> o: 303.746.7302
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Advancing the
> >>>> way
> >>>>>> the world uses
> >>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> cloud<
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>> http://solidfire.com/solution/overview/?video=play
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *(tm)*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Mike
> >> Tutkowski*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Senior
> >>>> CloudStack
> >>>>>> Developer,
> >>>>>>>>>> SolidFire
> >>>>>>>>>>>>>>>>>>> Inc.*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> e:
> >>>>>> mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> o: 303.746.7302
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Advancing the
> >>>> way
> >>>>>> the world uses
> >>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> cloud<
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>> http://solidfire.com/solution/overview/?video=play>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *(tm)*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Mike Tutkowski*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Senior CloudStack
> >>>>>> Developer,
> >>>>>>>>>> SolidFire
> >>>>>>>>>>>>>>>>>>> Inc.*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> e:
> >>>>>> mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> o: 303.746.7302
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Advancing the way
> >>>> the
> >>>>>> world uses the
> >>>>>>>>>>>>>> cloud<
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> http://solidfire.com/solution/overview/?video=play
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *(tm)*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Mike Tutkowski*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Senior CloudStack
> >>>>>> Developer,
> >>>>>>> SolidFire
> >>>>>>>>>>>>>> Inc.*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> e:
> >>>>>> mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> o: 303.746.7302
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Advancing the way
> >>>> the
> >>>>>> world uses the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> cloud<
> >>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>> http://solidfire.com/solution/overview/?video=play>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *(tm)*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Mike Tutkowski*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Senior CloudStack
> >>>>>> Developer, SolidFire
> >>>>>>>>>>>> Inc.*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> e:
> >>>>>> mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> o: 303.746.7302
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Advancing the way the
> >>>>>> world uses the
> >>>>>>>>>> cloud<
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> http://solidfire.com/solution/overview/?video=play>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *(tm)*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Mike Tutkowski*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Senior CloudStack
> >>>> Developer,
> >>>>>> SolidFire
> >>>>>>>>>> Inc.*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> e:
> >>>>>> mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> o: 303.746.7302
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Advancing the way the
> >>>> world
> >>>>>> uses the
> >>>>>>>>>> cloud<
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> http://solidfire.com/solution/overview/?video=play>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *(tm)*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Mike Tutkowski*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Senior CloudStack
> >>>> Developer,
> >>>>>> SolidFire
> >>>>>>>>>> Inc.*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> e:
> >>>>>> mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> o: 303.746.7302
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Advancing the way the
> >>>> world
> >>>>>> uses the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> cloud<
> >>>>>>>>>>>>>>>>>>>>>
> >> http://solidfire.com/solution/overview/?video=play
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *(tm)*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Mike Tutkowski*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Senior CloudStack
> >> Developer,
> >>>>>> SolidFire
> >>>>>>> Inc.*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> e:
> >>>> mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> o: 303.746.7302
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Advancing the way the world
> >>>> uses
> >>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> cloud<
> >>>>>>>>>>>>>>>>>>> http://solidfire.com/solution/overview/?video=play
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *(tm)*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Mike Tutkowski*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Senior CloudStack Developer,
> >>>>>> SolidFire Inc.*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> e:
> >> mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> o: 303.746.7302
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Advancing the way the world
> >>>> uses
> >>>>>> the cloud<
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>> http://solidfire.com/solution/overview/?video=play>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *(tm)*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Mike Tutkowski*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Senior CloudStack Developer,
> >>>>>> SolidFire Inc.*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> o: 303.746.7302
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Advancing the way the world
> >> uses
> >>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> cloud<
> >>>>>>>>>>>>>>>>>>> http://solidfire.com/solution/overview/?video=play>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *(tm)*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Mike Tutkowski*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Senior CloudStack Developer,
> >>>> SolidFire
> >>>>>> Inc.*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> o: 303.746.7302
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Advancing the way the world uses
> >>>> the
> >>>>>> cloud<
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>> http://solidfire.com/solution/overview/?video=play>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *(tm)*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Mike Tutkowski*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Senior CloudStack Developer,
> >>>> SolidFire
> >>>>>> Inc.*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> o: 303.746.7302
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Advancing the way the world uses
> >> the
> >>>>>> cloud<
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>> http://solidfire.com/solution/overview/?video=play>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *(tm)*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Mike Tutkowski*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Senior CloudStack Developer,
> >>>> SolidFire
> >>>>>> Inc.*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> o: 303.746.7302
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Advancing the way the world uses the
> >>>>>> cloud<
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>> http://solidfire.com/solution/overview/?video=play>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *(tm)*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Mike Tutkowski*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Senior CloudStack Developer, SolidFire
> >>>>>> Inc.*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> o: 303.746.7302
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Advancing the way the world uses the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> cloud<
> >>>>>>>>>>>> http://solidfire.com/solution/overview/?video=play
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *(tm)*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> --
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> *Mike Tutkowski*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> *Senior CloudStack Developer, SolidFire
> >>>> Inc.*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> o: 303.746.7302
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> Advancing the way the world uses the
> >>>> cloud<
> >>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>> http://solidfire.com/solution/overview/?video=play>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> *(tm)*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>> --
> >>>>>>>>>>>>>>>>>>>>>>>>>>> *Mike Tutkowski*
> >>>>>>>>>>>>>>>>>>>>>>>>>>> *Senior CloudStack Developer, SolidFire
> >>>> Inc.*
> >>>>>>>>>>>>>>>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>>>>>>>>>>>>>> o: 303.746.7302
> >>>>>>>>>>>>>>>>>>>>>>>>>>> Advancing the way the world uses the
> >>>>>>>>>>>>>>>>>>>>>>>>>>> cloud<
> >>>>>>>>>> http://solidfire.com/solution/overview/?video=play>
> >>>>>>>>>>>>>>>>>>>>>>>>>>> *(tm)*
> >>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>> --
> >>>>>>>>>>>>>>>>>>>>>>>>> *Mike Tutkowski*
> >>>>>>>>>>>>>>>>>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
> >>>>>>>>>>>>>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>>>>>>>>>>>> o: 303.746.7302
> >>>>>>>>>>>>>>>>>>>>>>>>> Advancing the way the world uses the cloud<
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>> http://solidfire.com/solution/overview/?video=play>
> >>>>>>>>>>>>>>>>>>>>>>>>> *(tm)*
> >>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> --
> >>>>>>>>>>>>>>>>>>>>>>>> *Mike Tutkowski*
> >>>>>>>>>>>>>>>>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
> >>>>>>>>>>>>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>>>>>>>>>>> o: 303.746.7302
> >>>>>>>>>>>>>>>>>>>>>>>> Advancing the way the world uses the
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>> cloud<http://solidfire.com/solution/overview/?video=play
> >>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> *(tm)*
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>> --
> >>>>>>>>>>>>>>>>>>>>>> *Mike Tutkowski*
> >>>>>>>>>>>>>>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
> >>>>>>>>>>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>>>>>>>>> o: 303.746.7302
> >>>>>>>>>>>>>>>>>>>>>> Advancing the way the world uses the
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>> cloud<http://solidfire.com/solution/overview/?video=play>
> >>>>>>>>>>>>>>>>>>>>>> *(tm)*
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> --
> >>>>>>>>>>>>>>>>>>>> *Mike Tutkowski*
> >>>>>>>>>>>>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
> >>>>>>>>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>>>>>>> o: 303.746.7302
> >>>>>>>>>>>>>>>>>>>> Advancing the way the world uses the
> >>>>>>>>>>>>>>>>>>>>
> >>>>>> cloud<http://solidfire.com/solution/overview/?video=play>
> >>>>>>>>>>>>>>>>>>>> *(tm)*
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>> --
> >>>>>>>>>>>>>>>>>> *Mike Tutkowski*
> >>>>>>>>>>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
> >>>>>>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>>>>> o: 303.746.7302
> >>>>>>>>>>>>>>>>>> Advancing the way the world uses the cloud<
> >>>>>>>>>>>>>> http://solidfire.com/solution/overview/?video=play>
> >>>>>>>>>>>>>>>>>> *(tm)*
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> --
> >>>>>>>>>>>>>>>>> *Mike Tutkowski*
> >>>>>>>>>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
> >>>>>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>>>> o: 303.746.7302
> >>>>>>>>>>>>>>>>> Advancing the way the world uses the cloud<
> >>>>>>>>>>>>>> http://solidfire.com/solution/overview/?video=play>
> >>>>>>>>>>>>>>>>> *(tm)*
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> --
> >>>>>>>>>>>>>>>> *Mike Tutkowski*
> >>>>>>>>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
> >>>>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>>> o: 303.746.7302
> >>>>>>>>>>>>>>>> Advancing the way the world uses the cloud<
> >>>>>>>>>>>>>> http://solidfire.com/solution/overview/?video=play>
> >>>>>>>>>>>>>>>> *(tm)*
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> --
> >>>>>>>>>>>>>>> *Mike Tutkowski*
> >>>>>>>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
> >>>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>> o: 303.746.7302
> >>>>>>>>>>>>>>> Advancing the way the world uses the cloud<
> >>>>>>>>>>>>>> http://solidfire.com/solution/overview/?video=play>
> >>>>>>>>>>>>>>> *(tm)*
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> --
> >>>>>>>>>>>>>> *Mike Tutkowski*
> >>>>>>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
> >>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>> o: 303.746.7302
> >>>>>>>>>>>>>> Advancing the way the world uses the
> >>>>>>>>>>>>>> cloud<http://solidfire.com/solution/overview/?video=play>
> >>>>>>>>>>>>>> *(tm)*
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>
> >>>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>> --
> >>>>>>>>>>> *Mike Tutkowski*
> >>>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
> >>>>>>>>>>> e: mike.tutkowski@solidfire.com
> >>>>>>>>>>> o: 303.746.7302
> >>>>>>>>>>> Advancing the way the world uses the
> >>>>>>>>>>> cloud<http://solidfire.com/solution/overview/?video=play>
> >>>>>>>>>>> *(tm)*
> >>>>>>>>>>
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> --
> >>>>>>>>> *Mike Tutkowski*
> >>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
> >>>>>>>>> e: mike.tutkowski@solidfire.com
> >>>>>>>>> o: 303.746.7302
> >>>>>>>>> Advancing the way the world uses the cloud<
> >>>>>>> http://solidfire.com/solution/overview/?video=play>
> >>>>>>>>> *(tm)*
> >>>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> --
> >>>>>>>> *Mike Tutkowski*
> >>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
> >>>>>>>> e: mike.tutkowski@solidfire.com
> >>>>>>>> o: 303.746.7302
> >>>>>>>> Advancing the way the world uses the
> >>>>>>>> cloud<http://solidfire.com/solution/overview/?video=play>
> >>>>>>>> *(tm)*
> >>>>>>>
> >>>>>>>
> >>>>>>
> >>>>>>
> >>>>>> --
> >>>>>> *Mike Tutkowski*
> >>>>>> *Senior CloudStack Developer, SolidFire Inc.*
> >>>>>> e: mike.tutkowski@solidfire.com
> >>>>>> o: 303.746.7302
> >>>>>> Advancing the way the world uses the
> >>>>>> cloud<http://solidfire.com/solution/overview/?video=play>
> >>>>>> *(tm)*
> >>>>>
> >>>>
> >>>>
> >>>>
> >>>> --
> >>>> *Mike Tutkowski*
> >>>> *Senior CloudStack Developer, SolidFire Inc.*
> >>>> e: mike.tutkowski@solidfire.com
> >>>> o: 303.746.7302
> >>>> Advancing the way the world uses the
> >>>> cloud<http://solidfire.com/solution/overview/?video=play>
> >>>> *(tm)*
> >
>
>


-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Mime
  • Unnamed multipart/alternative (inline, None, 0 bytes)
View raw message