cloudstack-dev mailing list archives

From: Mike Tutkowski <mike.tutkow...@solidfire.com>
Subject: Re: [MERGE] disk_io_throttling to MASTER
Date: Mon, 10 Jun 2013 20:43:35 GMT
OK, thanks for that info, John.


On Mon, Jun 10, 2013 at 2:25 PM, John Burwell <jburwell@basho.com> wrote:

> Mike,
>
> Yes, a vendor may have a non-managed plugin.  For example, on the
> secondary storage side, S3 and Swift don't support the concept of being
> managed -- an operator is required to create a bucket and point CloudStack
> at it.  Another example would be a generic iSCSI device (e.g. OpenFiler,
> FreeNAS) which CloudStack can not directly manage, but can consume a LUN
> that an operator has created for it.
>
> From an operational perspective, an administrator may elect to say, "Hey,
> I know you can manage that device for me, but it has a wider use than
> CloudStack can manage.  Let me handle it, and just provide the information
> CloudStack needs."  In this scenario, the operator is accepting a greater
> management burden in exchange for control.  I don't think many will elect
> to manage these devices themselves, but this ability will be extremely
> important for the edge cases that require it.  Speaking as someone who has
> run cloud operations, I can testify to this need -- nothing aggravates you
> more than a system insisting on managing a device or service without the
> opportunity to opt-out.
>
> Thanks,
> -John
>
> On Jun 10, 2013, at 3:19 PM, Mike Tutkowski <mike.tutkowski@solidfire.com>
> wrote:
>
> > So, would a storage vendor write a non-CloudStack-managed plug-in and
> maybe
> > just implement the snapshot feature or something?
> >
> > In this case, they'd have to set the hypervisor up ahead of time with a
> > large portion of their storage, but they'd still get the vendor-snapshot
> > ability.
> >
> > Just trying to understand this more.
> >
> > Thanks
> >
> >
> > On Mon, Jun 10, 2013 at 1:04 PM, John Burwell <jburwell@basho.com>
> wrote:
> >
> >> Edison,
> >>
> >> TL;DR Always give operators/system administrators the ability to manage
> >> things themselves, if they like.
> >>
> >> As more manageable drivers emerge, I think it will be important to allow
> >> operators to opt-out of CloudStack device management.  Therefore, a
> driver
> >> declares whether or not it can support device management (many, in fact
> >> most, won't).  For those drivers that do support management, allow
> >> operators to opt-in/out of CloudStack management on a per device basis.
>  To
> >> support this notion, we need a flag on the storage_pool table and
> >> associated classes to indicate whether or not the operator wants
> CloudStack
> >> to manage the device.
> >>
> >> Does this explanation make more sense?
> >>
> >> Thanks,
> >> -John
> >>
> >> On Jun 10, 2013, at 2:58 PM, Edison Su <Edison.su@citrix.com> wrote:
> >>
> >>> I don't understand why we need to add a "managed" flag in the UI and in
> >>> the "storage_pool" table?
> >>> AFAIK, what you guys are trying to do is to enable attachVolume/detachVolume
> >>> when the volume is created on SolidFire, right?
> >>> If the volume has already been created on SolidFire by createAsync, then why
> >>> can't we just send all the information about the volume (its VolumeTO, which
> >>> is returned by each storage driver) to the hypervisor? At the hypervisor
> >>> side, check the VolumeTO; if it is managed, do whatever operations are
> >>> necessary (create the SR, create the VDI, etc.), then attach the VDI to the
> >>> VM.
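
A minimal sketch of the hypervisor-side check Edison is describing; every name below is an illustrative placeholder, not the actual CloudStack resource code:

    // Sketch only: a hypervisor resource deciding how to attach based on a
    // hypothetical "managed" flag carried with the volume information.
    abstract class ManagedAttachSketch {

        public void attachVolume(boolean managed, String storageHost, String iqn, String vmName) {
            if (managed) {
                // Managed storage: the SAN LUN exists, but the hypervisor-side
                // structure does not, so create/introduce the SR and its VDI first.
                String sr = createOrIntroduceSr(storageHost, iqn);
                String vdi = createOrFindVdi(sr);
                attachVdiToVm(vdi, vmName);
            } else {
                // Non-managed storage: the SR/datastore already exists; attach as before.
                attachExistingVolume(iqn, vmName);
            }
        }

        protected abstract String createOrIntroduceSr(String storageHost, String iqn);
        protected abstract String createOrFindVdi(String sr);
        protected abstract void attachVdiToVm(String vdi, String vmName);
        protected abstract void attachExistingVolume(String iqn, String vmName);
    }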
> >>>
> >>>
> >>>> -----Original Message-----
> >>>> From: Mike Tutkowski [mailto:mike.tutkowski@solidfire.com]
> >>>> Sent: Monday, June 10, 2013 11:10 AM
> >>>> To: John Burwell
> >>>> Cc: Edison Su; dev@cloudstack.apache.org
> >>>> Subject: Re: [MERGE] disk_io_throttling to MASTER
> >>>>
> >>>> I believe we're on the same page, John.
> >>>>
> >>>> I would add a new parameter to the createStoragePool API command
> (let's
> >>>> call it "managed" and it can be true or false). If you pass in "true",
> >> but
> >>>> the driver does not support being managed by CloudStack, an exception
> is
> >>>> thrown by the driver. If you pass in "false", but the driver does not
> >>>> support being managed outside of CloudStack, an exception is thrown by
> >> the
> >>>> driver. It is possible one could write a driver that can be managed
> both
> >>>> inside and outside of CloudStack (the SolidFire driver requires being
> >>>> managed only by CloudStack).
> >>>>
> >>>> This managed value would end up in the storage_pool table in a new
> >> column
> >>>> ("managed").
> >>>>
> >>>> An example usage: After the storage framework calls on the driver's
> >>>> createAsync method, it will check to see if the driver is managed. If
> >> the
> >>>> driver is managed, the storage framework will invoke the proper (new)
> >>>> command of the hypervisor in use to set up, say, the SR for the SAN
> >> volume.
> >>>>
> >>>> There may be other cases when the isManaged() method is invoked on the
> >>>> driver and - if true is returned - the storage framework sends a
> >> particular
> >>>> command to the hypervisor in use.
> >>>>
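
A rough sketch of the validation Mike describes for the new parameter; the names are placeholders, since the actual createStoragePool code path and driver interface may differ:

    // Sketch only: validating a "managed" parameter on createStoragePool.
    class CreateStoragePoolManagedCheckSketch {

        interface DriverCapabilities {
            boolean supportsCloudStackManagement();   // e.g. true for the SolidFire driver
            boolean supportsExternalManagement();     // can be managed outside of CloudStack
        }

        // Returns the value to persist in the new storage_pool.managed column.
        boolean validateManagedFlag(boolean requestedManaged, DriverCapabilities driver) {
            if (requestedManaged && !driver.supportsCloudStackManagement()) {
                throw new IllegalArgumentException("Driver cannot be managed by CloudStack");
            }
            if (!requestedManaged && !driver.supportsExternalManagement()) {
                throw new IllegalArgumentException("Driver requires CloudStack management");
            }
            return requestedManaged;
        }
    }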
> >>>> ********************
> >>>>
> >>>> As far as reworking the default storage plug-in (which sends commands
> to
> >>>> hypervisors) for 4.2, I would check with Edison on his schedule.
> >>>>
> >>>> Thanks!
> >>>>
> >>>>
> >>>> On Mon, Jun 10, 2013 at 11:53 AM, John Burwell <jburwell@basho.com>
> >>>> wrote:
> >>>>
> >>>>> Mike,
> >>>>>
> >>>>> I want to make sure we are on the same page regarding isManaged.  I
> see
> >>>>> the addition of the following operations:
> >>>>>
> >>>>> * DataStoreDriver#supportsManagement() : boolean -- indicates whether
> >>>> or
> >>>>> not the driver support/implements the management functions
> >>>>> * DataStore#isManaged() : boolean -- indicates whether or not a
> device
> >> is
> >>>>> managed by CloudStack
> >>>>>
> >>>>> Only DataStores associated with DataStoreDrivers that support
> >>>> management
> >>>>> would be able to have the management flag enabled.  This behavior
> >> allows
> >>>>> operators to declare their intention to allow CloudStack to manage a
> >>>>> device.  This behavior would need to be exposed as flag on the HTTP
> >> API.
> >>>>> Additionally, this flag would only be settable when a device is
> >>>>> defined/created in the system (i.e. once a device is managed by
> >> CloudStack,
> >>>>> you can't unmanage it without dissociating it from CloudStack).  In
> >> the
> >>>>> future, we may consider relaxing that rule, but, in the short term,
> it
> >>>>> greatly decreases the alternative flows.
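
A sketch of the two operations John lists above; the signatures are illustrative only and may not match the real CloudStack interfaces:

    // Illustrative signatures only, not the actual CloudStack API.
    interface DataStoreDriver {
        // Does this driver implement the management functions at all?
        boolean supportsManagement();
    }

    interface DataStore {
        // Has the operator elected to let CloudStack manage this device?
        // Settable only when the device is defined/created, and only if the
        // underlying driver supportsManagement().
        boolean isManaged();
    }

Keeping the flag on the DataStore and the capability on the driver is what lets an operator decline management even when a driver supports it.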
> >>>>>
> >>>>> In terms of removing the dependencies from Storage to Hypervisor, I
> >>>> would
> >>>>> really like to see this accomplished in 4.2.  It will be much more
> >>>>> difficult to address this issue post 4.2 because we will have more
> >> plugins
> >>>>> to address.  I also see this issue as a critical architectural issue
> >>>>> for the management server.  I recommend we identify the dependencies
> >>>> that
> >>>>> need to be inverted, and see if we can divide the work across a few
> >> more
> >>>>> resources.
> >>>>>
> >>>>> Thanks,
> >>>>> -John
> >>>>>
> >>>>> On Jun 10, 2013, at 1:36 PM, Mike Tutkowski
> >>>> <mike.tutkowski@solidfire.com>
> >>>>> wrote:
> >>>>>
> >>>>> Sounds good, Edison :)
> >>>>>
> >>>>> I can implement John's isManaged() logic and resubmit my code for
> >> review.
> >>>>>
> >>>>> Perhaps in 4.3 I could address the direct-attach scenario. If it
> >> requires
> >>>>> some refactoring of the storage framework, then I could do that.
> >>>>>
> >>>>> Also, the default storage plug-in does have hypervisor logic in it in
> >> 4.2.
> >>>>> I recommend we leave this as is for 4.2 and it can be refactored in
> 4.3
> >>>>> with the above work I mentioned.
> >>>>>
> >>>>> Are you OK with that, guys?
> >>>>>
> >>>>>
> >>>>> On Mon, Jun 10, 2013 at 11:30 AM, Edison Su <Edison.su@citrix.com>
> >> wrote:
> >>>>>
> >>>>>> OK, I am OK with the approach you are describing for 4.2.
> >>>>>>
> >>>>>> From: Mike Tutkowski [mailto:mike.tutkowski@solidfire.com]
> >>>>>> Sent: Sunday, June 09, 2013 8:59 PM
> >>>>>> To: dev@cloudstack.apache.org
> >>>>>> Cc: Edison Su; John Burwell
> >>>>>> Subject: Re: [MERGE] disk_io_throttling to MASTER
> >>>>>>
> >>>>>> Another thing we might want to consider for 4.2 is that the way the code
> >>>>>> is currently implemented in master, I don't think it's even possible to
> >>>>>> write a plug-in that connects storage directly to a VM because the
> >>>>>> CloudStack storage framework will call the "attach" method of the
> >>>>>> hypervisor in use and this method will fail because it assumes there is
> >>>>>> (talking Xen here) an SR existent and in this situation there isn't one.
> >>>>>>
> >>>>>> My thinking is that we should ignore the direct-attach-of-volume-to-VM
> >>>>>> use case for 4.2. I think it will require too much of a rewrite at this
> >>>>>> point in development (fewer than three weeks left until code freeze).
> >>>>>>
> >>>>>> If the plug-in implements an isManaged() method (as John suggested), the
> >>>>>> storage framework can invoke the necessary commands on the hypervisor in
> >>>>>> use.
> >>>>>>
> >>>>>> For example, the createAsync method is called. The storage plug-in
> >>>>>> creates a SAN volume and updates the CS DB. The storage framework then asks
> >>>>>> the driver if it's managed. If it is, the storage framework executes a
> >>>>>> command against the hypervisor in use to have it create, say, an SR (with a
> >>>>>> single VDI that takes up the entire SR). Now, when the storage framework
> >>>>>> executes the "attach" command against the hypervisor, this command will
> >>>>>> work, as well (because that logic assumes the prior existence of an SR and
> >>>>>> it will, in fact, be present).
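
A sketch of that framework-side orchestration; the command and helper names here are hypothetical, not existing CloudStack classes:

    // Sketch only: the storage framework preparing the hypervisor before attach.
    abstract class ManagedVolumeOrchestrationSketch {

        void attachManagedVolume(long hostId, String iqn, String storageHost, String vmName) {
            if (isManagedStore()) {
                // Ask the hypervisor in use to prepare its data structure first,
                // e.g. an SR (XenServer) or a datastore (ESX(i)) backed by the new LUN.
                sendCreateStoragePoolOnHost(hostId, storageHost, iqn);
            }
            // The existing attach logic then finds the SR/datastore it expects.
            sendAttachVolume(hostId, iqn, vmName);
        }

        abstract boolean isManagedStore();
        abstract void sendCreateStoragePoolOnHost(long hostId, String storageHost, String iqn);
        abstract void sendAttachVolume(long hostId, String iqn, String vmName);
    }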
> >>>>>>
> >>>>>> On Sun, Jun 9, 2013 at 12:37 AM, Mike Tutkowski <
> >>>>>> mike.tutkowski@solidfire.com> wrote:
> >>>>>>
> >>>>>> Hi Edison and John,
> >>>>>>
> >>>>>> I wanted to point out something I thought was relevant to this
> >>>>>> conversation.
> >>>>>>
> >>>>>> My plug-in supports both XenServer and ESX(i).
> >>>>>>
> >>>>>> These two hypervisors handle their respective hypervisor data structures
> >>>>>> (XenServer SR or ESX(i) datastore) quite differently.
> >>>>>>
> >>>>>> In XenServer, the SR that resides on the SAN LUN is simply "forgotten"
> >>>>>> when you issue a remove-SR command to XenServer.
> >>>>>>
> >>>>>> When you later want to access the contents of that data, it is still
> >>>>>> safely stored on the LUN and can be accessed from XenServer (from the same
> >>>>>> or different cluster (what XenServer calls a resource pool)) by creating a
> >>>>>> new SR that is based on the same IQN and LUN.
> >>>>>>
> >>>>>> XenServer SRs exist at the cluster level.
> >>>>>>
> >>>>>> In ESX(i), the datastore that resides on the SAN LUN is actually
> >>>>>> destroyed when you issue a remove-datastore command to ESX(i). In other
> >>>>>> words, the contents of the SAN LUN are destroyed.
> >>>>>>
> >>>>>> ESX(i) datastores exist at the datacenter level.
> >>>>>>
> >>>>>> The XenServer behavior (not destroying the contents of the LUN when the
> >>>>>> SR is removed) is desirable; the ESX(i) behavior (destroying the contents
> >>>>>> of the LUN when the datastore is removed) is not desirable.
> >>>>>>
> >>>>>> I can get around the ESX(i) behavior by removing the referenced IQN from
> >>>>>> each host in the cluster when the CS storage framework notifies my plug-in
> >>>>>> of a detach-volume event. When the CS volume is later attached again
> >>>>>> (either to the same or a different ESX(i) cluster), I can have each host in
> >>>>>> the relevant ESX(i) cluster reference the IQN and the datastore will be
> >>>>>> back again.
> >>>>>>
> >>>>>> This sequence of events is critical for my plug-in to support both
> >>>>>> environments:
> >>>>>>
> >>>>>> XenServer flow (events for Driver class):
> >>>>>>
> >>>>>> createAsync (called right before first attach) = creates SAN volume
> >>>>>>
> >>>>>> preAttachVolume = creates SR and creates VDI or just introduces SR
> >>>>>>
> >>>>>> (CS volume gets attached and later detached)
> >>>>>>
> >>>>>> postDetachVolume = removes SR (this does not delete data on SAN volume)
> >>>>>>
> >>>>>> deleteAsync (can be called when volume is not attached) = deletes the SAN
> >>>>>> volume
> >>>>>>
> >>>>>> VMware flow (events for Driver class):
> >>>>>>
> >>>>>> createAsync (called right before first attach) = creates SAN volume,
> >>>>>> creates datastore, and creates VMDK file
> >>>>>>
> >>>>>> preAttachVolume = adds iSCSI target to each host in cluster
> >>>>>>
> >>>>>> (CS volume gets attached and later detached)
> >>>>>>
> >>>>>> postDetachVolume = removes iSCSI target from each host in cluster
> >>>>>>
> >>>>>> deleteAsync (can be called when volume is not attached) = deletes
> >>>>>> datastore and deletes the SAN volume
> >>>>>>
> >>>>>> ************************
> >>>>>>
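
For reference, the two flows above could map onto driver-side hooks roughly like this; the hook names and signatures are hypothetical, not the actual plug-in code:

    // Sketch of driver-side hooks behind the flows listed above.
    interface ManagedStorageDriverHooks {

        // Both hypervisors: create the SAN volume. For ESX(i), the datastore and
        // the VMDK file are also created here.
        void createAsync(String volumeName, long sizeInBytes);

        // XenServer: create (or re-introduce) the SR with its single VDI.
        // ESX(i): add the iSCSI target to each host in the cluster.
        void preAttachVolume(String iqn, String clusterId);

        // XenServer: remove/forget the SR (data on the SAN volume is preserved).
        // ESX(i): remove the iSCSI target from each host in the cluster.
        void postDetachVolume(String iqn, String clusterId);

        // Can be called while the volume is detached.
        // ESX(i): delete the datastore; both: delete the SAN volume.
        void deleteAsync(String volumeName);
    }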
> >>>>>> So, the way I have it written up here, the driver is talking to the
> >>>>>> hypervisor. As John has pointed out, this may not be ideal.
> >>>>>>
> >>>>>> Whatever we decide on, it needs to be able to support this kind of flow.
> >>>>>> :)
> >>>>>>
> >>>>>> We could go John's route, where the driver says if it's managed or not.
> >>>>>>
> >>>>>> If it is, then the storage framework could send the necessary messages to
> >>>>>> the hypervisor in question at the right times (just migrate the hypervisor
> >>>>>> logic above from the driver to the right parts of the storage framework).
> >>>>>>
> >>>>>> This is fine, except that it does not resolve Edison's use case of
> >>>>>> desiring to have the storage LUN potentially connected directly to a VM.
> >>>>>>
> >>>>>> I'm not sure we should design around that use case, though. Is it common
> >>>>>> enough? Has anyone asked for it? Is it even desirable? How are usage
> >>>>>> statistics gathered for such volumes? It's nice to offer the flexibility,
> >>>>>> but is it a good idea in this case?
> >>>>>>
> >>>>>> Thanks!
> >>>>>>
> >>>>>> On Fri, Jun 7, 2013 at 6:19 PM, Edison Su <Edison.su@citrix.com> wrote:
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>>> -----Original Message-----
> >>>>>>> From: Mike Tutkowski [mailto:mike.tutkowski@solidfire.com]
> >>>>>>> Sent: Friday, June 07, 2013 4:08 PM
> >>>>>>> To: dev@cloudstack.apache.org
> >>>>>>> Subject: Re: [MERGE] disk_io_throttling to MASTER
> >>>>>>>
> >>>>>>> Hi John,
> >>>>>>>
> >>>>>>> Yes, you are correct that Xen must layer an SR on top of the iSCSI
> >>>>>> volume
> >>>>>>> to use it. Same story for VMware: ESX(i) must layer a datastore on
> >> top
> >>>>>> of
> >>>>>>> the iSCSI volume to use it. I look at it like they are layering a
> >>>>>> clustered
> >>>>>>> file system on the SAN volume so the hypervisors can share access
> to
> >>>> the
> >>>>>>> contents of the volume.
> >>>>>>>
> >>>>>>> In 4.1, by the time the hypervisor-attach-volume logic is called,
> the
> >>>>>> SR or
> >>>>>>> datastore has already been created (usually manually by an admin).
> >>>>>>>
> >>>>>>> This pre-setup of the, say, SR is, of course, not acceptable in an
> >>>>>>> environment where each CloudStack volume that a user creates is
> >>>> mapped
> >>>>>>> to a
> >>>>>>> single SAN volume (via the SR).
> >>>>>>>
> >>>>>>> The question comes down to who should allocate the SR dynamically.
> >>>>>>>
> >>>>>>> We could have the storage framework ask the storage plug-in if it
> is
> >>>>>>> managed. If it is, then the storage framework could send a message
> to
> >>>>>> the
> >>>>>>> hypervisor in question to create (let's talk Xen here) the SR ahead
> >> of
> >>>>>>> time. Then, when the storage framework next sends the attach-volume
> >>>>>>> command
> >>>>>>> to the hypervisor, it should work without changes to that
> >> attach-volume
> >>>>>>> logic (because - from the point of view of the attach logic - the
> SR
> >> is
> >>>>>>> already existent, as expected).
> >>>>>>>
> >>>>>>> Now, as Edison has pointed out, this limits the power of the
> storage
> >>>>>>> plug-in. A storage plug-in in this model cannot directly attach
> >> storage
> >>>>>> to
> >>>>>>> a VM (it must go through the hypervisor). Perhaps that is OK. We
> need
> >>>> to
> >>>>>>> make a call on that.
> >>>>>>
> >>>>>>
> >>>>>> Yes, I think we need to address this use case and not limit the
> >>>>>> power of the storage plug-in.
> >>>>>> John, what's your idea for fitting this use case into your abstraction?
> >>>>>>
> >>>>>>
> >>>>>>>
> >>>>>>> If we want to give more power to the storage vendor, we would have
> to
> >>>>>>> have
> >>>>>>> the storage framework call into the storage plug-in to send the
> >>>>>> appropriate
> >>>>>>> attach commands to the hypervisor.
> >>>>>>>
> >>>>>>> Let's discuss and come to a consensus (or at least agree on a path)
> >> as
> >>>>>> soon
> >>>>>>> as we can. :)
> >>>>>>>
> >>>>>>> Thanks!
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>> On Fri, Jun 7, 2013 at 4:28 PM, John Burwell <jburwell@basho.com>
> >>>>>> wrote:
> >>>>>>>
> >>>>>>>> Mike,
> >>>>>>>>
> >>>>>>>> My understanding is that Xen expects an SR structure on all iSCSI
> >>>>>> devices
> >>>>>>>> -- at least that is how I read the code in your patch.  Is my
> >>>>>> understanding
> >>>>>>>> correct?  If so, the Xen plugin should be able to query the
> storage
> >>>>>> device
> >>>>>>>> to determine the presence of the SR structure and create it if it
> >>>>>> does not
> >>>>>>>> exist.  Am I missing something in the implementation that makes
> that
> >>>>>> type
> >>>>>>>> of implementation impossible?
> >>>>>>>>
> >>>>>>>> Thanks,
> >>>>>>>> -John
> >>>>>>>>
> >>>>>>>> On Jun 7, 2013, at 6:18 PM, Mike Tutkowski <
> >>>>>> mike.tutkowski@solidfire.com>
> >>>>>>>> wrote:
> >>>>>>>>
> >>>>>>>>> Yeah, if a storage vendor wanted to attach a volume directly to a
> >>>>>> VM in
> >>>>>>>> 4.2
> >>>>>>>>> today, it would probably fail because the attach-volume logic
> >>>>>> assumes
> >>>>>>> the
> >>>>>>>>> existence of the necessary hypervisor data structure (ex. SR on
> >>>>>> Xen).
> >>>>>>>>>
> >>>>>>>>> If we wanted to enable such an attach, we could do it the way
> >>>> Edison
> >>>>>>>>> suggests.
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> On Fri, Jun 7, 2013 at 4:13 PM, Edison Su <Edison.su@citrix.com>
> >>>>>> wrote:
> >>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>>> -----Original Message-----
> >>>>>>>>>>> From: Mike Tutkowski [mailto:mike.tutkowski@solidfire.com]
> >>>>>>>>>>> Sent: Friday, June 07, 2013 2:37 PM
> >>>>>>>>>>> To: dev@cloudstack.apache.org
> >>>>>>>>>>> Subject: Re: [MERGE] disk_io_throttling to MASTER
> >>>>>>>>>>>
> >>>>>>>>>>> As we only have three weeks until feature freeze, we should
> >>>> come
> >>>>>> to
> >>>>>>> a
> >>>>>>>>>>> consensus on this design point as soon as possible.
> >>>>>>>>>>>
> >>>>>>>>>>> Right now, if the storage framework asks my driver if it is
> >>>>>> managed, it
> >>>>>>>>>>> will say 'yes.' This means the framework will tell the driver
> to
> >>>>>>>> perform
> >>>>>>>>>>> its management activities. This then means the driver will call
> >>>>>> into
> >>>>>>>> the
> >>>>>>>>>>> host (it doesn't know which hypervisor, by the way) to perform
> >>>> the
> >>>>>>>>>> activity
> >>>>>>>>>>> of, say, creating an SR on XenServer or a datastore on ESX.
> >>>>>>>>>>>
> >>>>>>>>>>> The driver doesn't know which hypervisor it's talking to, it
> just
> >>>>>>>> sends a
> >>>>>>>>>>> message to the host to perform the necessary pre-attach work.
> >>>>>>>>>>
> >>>>>>>>>> Could we just expose methods like attachVolume/detachVolume on the
> >>>>>>>>>> PrimaryDataStoreDriver interface?
> >>>>>>>>>> For most cases, the implementation in each driver would just send an
> >>>>>>>>>> AttachVolumeCommand/DetachVolumeCommand to the hypervisor (we can put
> >>>>>>>>>> the implementation in a base class so it can be shared by all of these
> >>>>>>>>>> drivers), and the hypervisor resource code would just call the
> >>>>>>>>>> hypervisor's API to attach the volume to the VM. Certain storage, like
> >>>>>>>>>> SolidFire, may need to create an SR first, create the volume on it, and
> >>>>>>>>>> then call the hypervisor's API to attach the volume to the VM.
> >>>>>>>>>> Some other storage vendors may want to bypass the hypervisor when
> >>>>>>>>>> attaching a volume, so inside the driver's attachVolume implementation
> >>>>>>>>>> the driver can do some magic, such as talking directly to an agent
> >>>>>>>>>> inside the VM instance and creating a disk inside the VM.
> >>>>>>>>>>
> >>>>>>>>>> What do you guys think?
> >>>>>>>>>>
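
A rough sketch of the shape Edison proposes; the types and names are hypothetical, and the real PrimaryDataStoreDriver interface differs:

    // Sketch only: attach/detach exposed on the driver, with shared default behavior.
    interface VolumeAttachingDriver {
        void attachVolume(String volumeUuid, long vmId);
        void detachVolume(String volumeUuid, long vmId);
    }

    // Shared default behavior: forward the work to the hypervisor resource,
    // which calls the hypervisor API to attach/detach the disk.
    abstract class BaseVolumeAttachingDriver implements VolumeAttachingDriver {
        @Override
        public void attachVolume(String volumeUuid, long vmId) {
            sendAttachCommandToHypervisor(volumeUuid, vmId);
        }

        @Override
        public void detachVolume(String volumeUuid, long vmId) {
            sendDetachCommandToHypervisor(volumeUuid, vmId);
        }

        // A SolidFire-style driver would override attachVolume() to create the SR
        // and the volume on it first, then call super.attachVolume(); a vendor that
        // bypasses the hypervisor could instead talk to an agent inside the VM here.
        protected abstract void sendAttachCommandToHypervisor(String volumeUuid, long vmId);
        protected abstract void sendDetachCommandToHypervisor(String volumeUuid, long vmId);
    }

The design choice here is that the vendor-specific "magic" lives in the driver override, while the common hypervisor-mediated path stays in one shared base class.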
> >>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>> On Fri, Jun 7, 2013 at 3:14 PM, Edison Su <
> Edison.su@citrix.com>
> >>>>>>>> wrote:
> >>>>>>>>>>>
> >>>>>>>>>>>>
> >>>>>>>>>>>>
> >>>>>>>>>>>>> -----Original Message-----
> >>>>>>>>>>>>> From: Mike Tutkowski [mailto:mike.tutkowski@solidfire.com]
> >>>>>>>>>>>>> Sent: Friday, June 07, 2013 1:14 PM
> >>>>>>>>>>>>> To: dev@cloudstack.apache.org
> >>>>>>>>>>>>> Subject: Re: [MERGE] disk_io_throttling to MASTER
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Hi John,
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> How's about this:
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> The driver can implement an isManaged() method. The
> >>>>>>>>>>> VolumeManagerImpl
> >>>>>>>>>>>>> can
> >>>>>>>>>>>>> call into the driver to see if its managed. If it is, the
> >>>>>>>>>>>> VolumeManagerImpl
> >>>>>>>>>>>>> (which is responsible for calling into the hypervisor to
> attach
> >>>>>> the
> >>>>>>>>>> disk)
> >>>>>>>>>>>>> can call into the hypervisor to create the necessary
> hypervisor
> >>>>>> data
> >>>>>>>>>>>>> structure (ex. for XenServer, a storage repository).
> >>>>>>>>>>>>
> >>>>>>>>>>>> The problem here is that storage vendors may work differently with
> >>>>>>>>>>>> the hypervisor. For example, SolidFire wants an SR per LUN, while
> >>>>>>>>>>>> other vendors may want to totally bypass the hypervisor and assign
> >>>>>>>>>>>> the LUN directly to the VM instance; see the discussion at
> >>>>>>>>>>>> http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201303.mbox/%3C06f219312189b019a8763a5777ecc430@mail.gmail.com%3E
> >>>>>>>>>>>> So I would let the storage provider implement attaching the disk to
> >>>>>>>>>>>> the VM, instead of having it implemented by CloudStack itself.
> >>>>>>>>>>>>
> >>>>>>>>>>>>
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> If that's what you're going for, that works for me. By the
> way,
> >>>>>>>>>> Edison's
> >>>>>>>>>>>>> default storage plug-in (which handles the default storage
> >>>>>> behavior
> >>>>>>>>>> in
> >>>>>>>>>>>>> CloudStack (ex. how pre 4.2 works)) does include code that
> >>>>>> talks to
> >>>>>>>>>>>>> hypervisors. You might want to contact him and inform him of
> >>>>>> your
> >>>>>>>>>>>> concerns
> >>>>>>>>>>>>> or that logic (as is) will make it to production.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Please let me know if what I wrote in above (for my solution)
> >>>>>> is OK
> >>>>>>>>>> with
> >>>>>>>>>>>>> you. :)
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Thanks!
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> On Fri, Jun 7, 2013 at 1:49 PM, John Burwell <
> >>>>>> jburwell@basho.com>
> >>>>>>>>>>> wrote:
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>> Mike,
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> Please see my responses in-line below.
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> Thanks,
> >>>>>>>>>>>>>> -John
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> On Jun 7, 2013, at 1:50 AM, Mike Tutkowski <
> >>>>>>>>>>>> mike.tutkowski@solidfire.com>
> >>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> Hey John,
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> I still have a bit more testing I'd like to do before I
> build
> >>>>>> up
> >>>>>>>>>> a
> >>>>>>>>>>>> patch
> >>>>>>>>>>>>>>> file, but this is the gist of what I've done:
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> * During a volume-attach operation, after
> >>>> VolumeManagerImpl
> >>>>>>> tells
> >>>>>>>>>>>>>>> VolumeServiceImpl to have the driver create a volume, I
> >>>> have
> >>>>>>>>>>>>>>> VolumeManagerImpl tell VolumeServiceImpl to ask the driver if it
> >>>>>>>>>>>>>>> is managed.
> >>>>>>>>>>>>>>> If it is managed, VolumeServiceImpl has the driver perform
> >>>>>>>>>> whatever
> >>>>>>>>>>>>>>> activity is required. In my case, this includes sending a
> >>>>>>>>>> message to
> >>>>>>>>>>>> the
> >>>>>>>>>>>>>>> host where the VM is running to have, say XenServer, add a
> >>>>>>>>>> storage
> >>>>>>>>>>>>>>> repository (based on the IP address of the SAN, the IQN of
> >>>> the
> >>>>>>>>>> SAN
> >>>>>>>>>>>>>> volume,
> >>>>>>>>>>>>>>> etc.) and a single VDI (the VDI consumes all of the space
> it
> >>>>>> can
> >>>>>>>>>> on
> >>>>>>>>>>>> the
> >>>>>>>>>>>>>>> storage repository). After this, the normal attach-volume
> >>>>>>>>>> message is
> >>>>>>>>>>>> sent
> >>>>>>>>>>>>>>> to the host by VolumeManagerImpl.
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> There should be **no** code from a storage driver to a
> >>>>>> hypervisor.
> >>>>>>>>>> I
> >>>>>>>>>>>>>> apologize for the repetition, but we simply can not have
> >>>>>> hypervisor
> >>>>>>>>>>>>>> specific code in the storage layer.  The circular
> dependencies
> >>>>>>>>>> between
> >>>>>>>>>>>> the
> >>>>>>>>>>>>>> two layers are not sustainable in the long term.  Either the
> >>>>>>>>>>>> VirtualManager
> >>>>>>>>>>>>>> or Xen hypervisor plugin needs to be refactored/modified to
> >>>>>>>>>>> coordinate
> >>>>>>>>>>>>>> volume creation and then populating the SR.  Ideally, we can
> >>>>>>>>>>>> generalize the
> >>>>>>>>>>>>>> process flow for attaching volumes such that the Xen
> >>>> hypervisor
> >>>>>>>>>> plugin
> >>>>>>>>>>>>>> would only implement callbacks to perform the attach action
> >>>> and
> >>>>>>>>>> create
> >>>>>>>>>>>>> the
> >>>>>>>>>>>>>> structure and SR.  To my mind, the SolidFire driver should
> >>>>>> only be
> >>>>>>>>>>>>>> allocating space and providing information about contents
> >>>> (e.g.
> >>>>>>>>>> space
> >>>>>>>>>>>>>> available, space consumed, streams to a URI, file handle
> for a
> >>>>>> URI,
> >>>>>>>>>>>> etc)
> >>>>>>>>>>>>>> and capabilities.
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> * The reverse is performed for a detach-volume command.
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> * Right now I simply "return true;" for isManaged() in my
> >>>>>> driver.
> >>>>>>>>>>>>>> Edison's
> >>>>>>>>>>>>>>> default driver simply does a "return false;". We could add
> a
> >>>>>> new
> >>>>>>>>>>>>>> parameter
> >>>>>>>>>>>>>>> to the createStoragePool API command, if we want, to
> >>>> remove
> >>>>>>> the
> >>>>>>>>>>>>>> hard-coded
> >>>>>>>>>>>>>>> return values in the drivers (although my driver will
> >> probably
> >>>>>>>>>> just
> >>>>>>>>>>>>>> ignore
> >>>>>>>>>>>>>>> this parameter and always return true since it wouldn't
> make
> >>>>>>>>>> sense
> >>>>>>>>>>>> for it
> >>>>>>>>>>>>>>> to ever return false). We'd need another column in the
> >>>>>>>>>> storage_pool
> >>>>>>>>>>>>> table
> >>>>>>>>>>>>>>> to store this value.
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> Yes, I think we should have a parameter added to the
> >>>>>>>>>>> createStoragePool
> >>>>>>>>>>>>>> surfaced to the HTTP API that allows DataStores to be
> >>>>>> configured
> >>>>>>>>>> for
> >>>>>>>>>>>>>> management when their underlying drivers support it.  To
> >>>>>> simplify
> >>>>>>>>>>>> things,
> >>>>>>>>>>>>>> this flag should only be mutable when the DataStore is
> >>>>>> created. It
> >>>>>>>>>>>> would be
> >>>>>>>>>>>>>> a bit crazy to take a DataStore from managed to unmanaged
> >>>> after
> >>>>>>>>>>>> creation.
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> Sound like I'm in sync with what you were thinking?
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> On Thu, Jun 6, 2013 at 9:34 PM, Mike Tutkowski <
> >>>>>>>>>>>>>> mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> I agree, John. Just wanted to point out that I have a
> >>>> working
> >>>>>>>>>> GUI
> >>>>>>>>>>>> for
> >>>>>>>>>>>>>> you
> >>>>>>>>>>>>>>>> to review (in that document), if you'd like to check it
> out.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> Thanks!
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> On Thu, Jun 6, 2013 at 8:34 PM, John Burwell <
> >>>>>>>>>> jburwell@basho.com>
> >>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> Mike,
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> I would like the UIs of two features reviewed together to
> >>>>>>>>>> ensure
> >>>>>>>>>>>>>>>>> consistency across the concepts of hypervisor throttled
> >>>> IOPs
> >>>>>>>>>> and
> >>>>>>>>>>>>>>>>> storage device provisioned IOPs.  I see the potential for
> >>>>>>>>>>>> confusion,
> >>>>>>>>>>>>>>>>> and I think a side-by-side Ui review of these features
> will
> >>>>>>>>>> help
> >>>>>>>>>>>>>>>>> minimize any potential confusion.
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> As I mentioned, the term reconciliation issue will work
> >>>>>> itself
> >>>>>>>>>> if
> >>>>>>>>>>>> it
> >>>>>>>>>>>>>>>>> is acceptable that a VM is only permitted utilize
> >> hypervisor
> >>>>>>>>>>>> throttled
> >>>>>>>>>>>>>>>>> IOPs or storage provisioned IOPs.
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> Thanks,
> >>>>>>>>>>>>>>>>> -John
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> On Jun 6, 2013, at 10:05 PM, Mike Tutkowski
> >>>>>>>>>>>>>>>>> <mike.tutkowski@solidfire.com> wrote:
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>> Hi John,
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>> Yeah, when you get a chance, refer to the Google doc I
> >>>> sent
> >>>>>>> to
> >>>>>>>>>>> you
> >>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>> other day to see how the GUI looks for provisioned
> >>>> storage
> >>>>>>>>>> IOPS.
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>> Several months ago, I put this topic out on the e-mail
> >> list
> >>>>>>>>>> and we
> >>>>>>>>>>>>>>>>> decided
> >>>>>>>>>>>>>>>>>> to place the Min, Max, and Burst IOPS in the Add Disk
> >>>>>>> Offering
> >>>>>>>>>>>> dialog.
> >>>>>>>>>>>>>>>>>> Other storage vendors are coming out with QoS, so they
> >>>>>>> should
> >>>>>>>>>>> be
> >>>>>>>>>>>>> able
> >>>>>>>>>>>>>> to
> >>>>>>>>>>>>>>>>>> leverage this GUI going forward (even if they, say, only
> >>>>>> use
> >>>>>>>>>> Max
> >>>>>>>>>>>>>> IOPS).
> >>>>>>>>>>>>>>>>>> These fields are optional and can be ignored for storage
> >>>>>> that
> >>>>>>>>>>>> does not
> >>>>>>>>>>>>>>>>>> support provisioned IOPS. Just like the Disk Size field,
> >>>>>> the
> >>>>>>>>>>>> admin can
> >>>>>>>>>>>>>>>>>> choose to allow the end user to fill in Min, Max, and
> >>>> Burst
> >>>>>>>>>> IOPS.
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>> I'm OK if we do an either/or model (either Wei's feature
> >>>> or
> >>>>>>>>>> mine,
> >>>>>>>>>>>> as
> >>>>>>>>>>>>>> is
> >>>>>>>>>>>>>>>>>> decided by the admin).
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>> I'm not sure what we can do about these two features
> >>>>>>>>>> expressing
> >>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>> speed
> >>>>>>>>>>>>>>>>>> in different terms. I've never seen a SAN express the
> >>>> IOPS
> >>>>>> for
> >>>>>>>>>>>> QoS in
> >>>>>>>>>>>>>>>>> any
> >>>>>>>>>>>>>>>>>> way other than total IOPS (i.e. not broken in into
> >>>>>> read/write
> >>>>>>>>>>>> IOPS).
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>> Thanks
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>> On Thu, Jun 6, 2013 at 7:16 PM, John Burwell
> >>>>>>>>>>> <jburwell@basho.com>
> >>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> Wei,
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> We have been down the rabbit hole a bit on the
> >>>>>>>>>>> Storage/Hypervisor
> >>>>>>>>>>>>>> layer
> >>>>>>>>>>>>>>>>>>> separation, but we still need to reconcile the behavior
> >>>> of
> >>>>>>>>>>>> hypervisor
> >>>>>>>>>>>>>>>>>>> throttled I/O and storage provisioned IOPS.  I see the
> >>>>>>>>>> following
> >>>>>>>>>>>>>> issues
> >>>>>>>>>>>>>>>>>>> outstanding:
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> 1. Hypervisor throttled IOPS are expressed as discrete
> >>>>>>>>>>> read/write
> >>>>>>>>>>>>>>>>> values
> >>>>>>>>>>>>>>>>>>> where as storage provisioned IOPS are expressed as
> >>>> total
> >>>>>>>>>> IOPS.
> >>>>>>>>>>>>>>>>>>> 2. How do we handle VMs with throttled IOPS attached
> >>>> to
> >>>>>>>>>>> storage
> >>>>>>>>>>>>>> volumes
> >>>>>>>>>>>>>>>>>>> with provisioned IOPS?
> >>>>>>>>>>>>>>>>>>> 3. How should usage data be captured for throttled and
> >>>>>>>>>>>> provisioned
> >>>>>>>>>>>>>> IOPS
> >>>>>>>>>>>>>>>>>>> that will permit providers to discriminate these
> >>>>>> guaranteed
> >>>>>>>>>>>>>> operations
> >>>>>>>>>>>>>>>>> in
> >>>>>>>>>>>>>>>>>>> the event they want to bill for it differently?
> >>>>>>>>>>>>>>>>>>> 4. What is the user experience for throttled and
> >>>>>> provisioned
> >>>>>>>>>>> IOPS
> >>>>>>>>>>>>>> that
> >>>>>>>>>>>>>>>>>>> minimizes confusion of these concepts?
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> My thinking is that a VM can have either utilize
> >>>>>> hypervisor
> >>>>>>>>>>>> throttled
> >>>>>>>>>>>>>>>>> IOPS
> >>>>>>>>>>>>>>>>>>> or storage provisioned IOPS.  This policy would neatly
> >>>>>> solve
> >>>>>>>>>>>> items 1
> >>>>>>>>>>>>>>>>> and 2.
> >>>>>>>>>>>>>>>>>>> Since the two facilities would not be permitted to
> >>>> operate
> >>>>>>>>>>>> together,
> >>>>>>>>>>>>>>>>> they
> >>>>>>>>>>>>>>>>>>> do not need to be semantically compatible.  I think
> item
> >>>> 3
> >>>>>>>>>> can be
> >>>>>>>>>>>>>>>>> resolved
> >>>>>>>>>>>>>>>>>>> with an additional flag or two on the usage records.
>  As
> >>>>>> for
> >>>>>>>>>>>> Item 4,
> >>>>>>>>>>>>>> I
> >>>>>>>>>>>>>>>>> am
> >>>>>>>>>>>>>>>>>>> not familiar with how these two enhancements are
> >>>>>>> depicted in
> >>>>>>>>>>> the
> >>>>>>>>>>>>> user
> >>>>>>>>>>>>>>>>>>> interface.  I think we need to review the UI
> >>>> enhancements
> >>>>>>> for
> >>>>>>>>>>>> both
> >>>>>>>>>>>>>>>>>>> enhancements and ensure they are consistent.
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> Do these solutions make sense?
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> Thanks,
> >>>>>>>>>>>>>>>>>>> -John
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> On Jun 6, 2013, at 5:22 PM, Wei ZHOU
> >>>>>>> <ustcweizhou@gmail.com>
> >>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> John and Mike,
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> I was busy working on other issues (CLOUDSTACK-
> >>>>>>> 2780/2729,
> >>>>>>>>>>>>>>>>>>>> CLOUDSTACK-2856/2857/2865, CLOUDSTACK-2823 ,
> >>>>>>>>>>> CLOUDSTACK-
> >>>>>>>>>>>>> 2875 ) this
> >>>>>>>>>>>>>>>>> week.
> >>>>>>>>>>>>>>>>>>>> I will start to develop on iops/bps changes tomorrow,
> >>>> and
> >>>>>>>>>> ask
> >>>>>>>>>>>> for
> >>>>>>>>>>>>>>>>> second
> >>>>>>>>>>>>>>>>>>>> merge request after testing.
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> -Wei
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> 2013/6/6 Mike Tutkowski
> >>>> <mike.tutkowski@solidfire.com>
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> I believe I understand where you're going with this,
> >>>>>> John,
> >>>>>>>>>> and
> >>>>>>>>>>>>> have
> >>>>>>>>>>>>>>>>> been
> >>>>>>>>>>>>>>>>>>>>> re-working this section of code today.
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> I should be able to run it by you tomorrow.
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> Thanks for the comments,
> >>>>>>>>>>>>>>>>>>>>> Mike
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> On Thu, Jun 6, 2013 at 3:12 PM, John Burwell
> >>>>>>>>>>>>> <jburwell@basho.com>
> >>>>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>> Mike,
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>> See my responses in-line below.
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>> Thanks,
> >>>>>>>>>>>>>>>>>>>>>> -John
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>> On Jun 6, 2013, at 11:09 AM, Mike Tutkowski <
> >>>>>>>>>>>>>>>>>>>>> mike.tutkowski@solidfire.com>
> >>>>>>>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>> Hi John,
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>> Thanks for the response.
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>> "I am fine with the VolumeManager determining
> >>>>>>> whether or
> >>>>>>>>>>>>> not a
> >>>>>>>>>>>>>>>>> Volume
> >>>>>>>>>>>>>>>>>>> is
> >>>>>>>>>>>>>>>>>>>>>> managed (i.e. not based on the StoragePoolType,
> >>>> but an
> >>>>>>>>>>> actual
> >>>>>>>>>>>>>>>>> isManaged
> >>>>>>>>>>>>>>>>>>>>>> method), and asking the device driver to allocate
> >>>>>>>>>> resources
> >>>>>>>>>>>> for
> >>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>> volume
> >>>>>>>>>>>>>>>>>>>>>> if it is managed."
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>> Are you thinking you'd like to see an isManaged()
> >>>>>>> method
> >>>>>>>>>>>>> added to
> >>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>> PrimaryDataStoreDriver interface? If it returns
> true,
> >>>>>>>>>> then the
> >>>>>>>>>>>>>>>>> storage
> >>>>>>>>>>>>>>>>>>>>>> framework could call the manage() (or whatever
> >>>> name)
> >>>>>>>>>>> method
> >>>>>>>>>>>>> (which
> >>>>>>>>>>>>>>>>>>> would
> >>>>>>>>>>>>>>>>>>>>> be
> >>>>>>>>>>>>>>>>>>>>>> new to the PrimaryDataStoreDriver interface, as
> >>>> well)
> >>>>>>> and
> >>>>>>>>>>> this
> >>>>>>>>>>>>>> would
> >>>>>>>>>>>>>>>>>>> call
> >>>>>>>>>>>>>>>>>>>>>> into a new method in the hypervisor code to create,
> >>>> say
> >>>>>>> on
> >>>>>>>>>>>>>>>>> XenServer,
> >>>>>>>>>>>>>>>>>>> an
> >>>>>>>>>>>>>>>>>>>>> SR?
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>> I would like to see canBeManaged() : boolean on
> >>>>>>>>>>>>> DataStoreDriver.
> >>>>>>>>>>>>>>>>> Since
> >>>>>>>>>>>>>>>>>>>>>> the notion of Volumes only pertains to primary
> >>>> storage,
> >>>>>>> I
> >>>>>>>>>>>> would
> >>>>>>>>>>>>>> add
> >>>>>>>>>>>>>>>>> a
> >>>>>>>>>>>>>>>>>>>>>> allocateStorage and deallocateStorage (Storage is a
> >>>>>>> straw
> >>>>>>>>>>> man
> >>>>>>>>>>>>> term
> >>>>>>>>>>>>>>>>> --
> >>>>>>>>>>>>>>>>>>>>>> something other than volume) methods to
> >>>>>>>>>>>>>>>>>>> allocate/create/deallocate/delete
> >>>>>>>>>>>>>>>>>>>>>> underlying storage.  To my mind, managed is a
> >>>> mutable
> >>>>>>>>>>> property
> >>>>>>>>>>>>> of
> >>>>>>>>>>>>>>>>>>>>> DataStore
> >>>>>>>>>>>>>>>>>>>>>> which can be enabled if/when the underlying
> >>>>>>>>>>> DataStoreDriver
> >>>>>>>>>>>>> can be
> >>>>>>>>>>>>>>>>>>>>> managed.
> >>>>>>>>>>>>>>>>>>>>>> This approach allows operators to override
> >>>>>>> manageability
> >>>>>>>>>> of
> >>>>>>>>>>>>>> devices.
> >>>>>>>>>>>>>>>>>>>>>>
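
A straw-man sketch of the operations John describes here; these are not the actual CloudStack interfaces:

    // Straw-man sketch only.
    interface ManageableDataStoreDriver {
        // Can this driver manage the underlying device at all?
        boolean canBeManaged();

        // Allocate/deallocate raw backing storage on the device ("storage" being
        // a straw-man term for something other than a CloudStack volume).
        void allocateStorage(String name, long sizeInBytes);
        void deallocateStorage(String name);
    }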
> >>>>>>>>>>>>>>>>>>>>>> In terms of orchestration/process flow for SR, the
> >>>> Xen
> >>>>>>>>>> plugin
> >>>>>>>>>>>>>> would
> >>>>>>>>>>>>>>>>> be
> >>>>>>>>>>>>>>>>>>>>>> responsible for composing DataStore/Volume
> >>>> methods
> >>>>>>> to
> >>>>>>>>>>>>> create any
> >>>>>>>>>>>>>>>>>>>>>> directories or files necessary for the SR.  There
> >>>>>> should
> >>>>>>>>>> be no
> >>>>>>>>>>>>>>>>>>>>> dependencies
> >>>>>>>>>>>>>>>>>>>>>> from the Storage to the Hypervisor layer.  As I said
> >>>>>>>>>> earlier,
> >>>>>>>>>>>> such
> >>>>>>>>>>>>>>>>>>>>> circular
> >>>>>>>>>>>>>>>>>>>>>> dependencies will lead to a tangled,
> >>>> unmaintainable
> >>>>>>> mess.
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>> Just want to make sure I'm on the same page with
> >>>> you.
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>> Thanks again, John
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>> On Thu, Jun 6, 2013 at 7:44 AM, John Burwell
> >>>>>>>>>>>>> <jburwell@basho.com>
> >>>>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>> Mike,
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>> Fundamentally, we can't end up with a Storage
> >>>> layer
> >>>>>>> that
> >>>>>>>>>>>>>> supports n
> >>>>>>>>>>>>>>>>>>>>>>> devices types with each specific behaviors of m
> >>>>>>>>>> hypervisors.
> >>>>>>>>>>>>>> Such
> >>>>>>>>>>>>>>>>> a
> >>>>>>>>>>>>>>>>>>>>>>> scenario will create an unmaintainable and
> >>>> untestable
> >>>>>>>>>>> beast.
> >>>>>>>>>>>>>>>>>>>>> Therefore, my
> >>>>>>>>>>>>>>>>>>>>>>> thoughts and recommendations are driven to
> >>>> evolve
> >>>>>>> the
> >>>>>>>>>>>>> Storage
> >>>>>>>>>>>>>> layer
> >>>>>>>>>>>>>>>>>>>>> towards
> >>>>>>>>>>>>>>>>>>>>>>> this separation.
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>> I am fine with the VolumeManager determining
> >>>>>>> whether
> >>>>>>>>>>> or
> >>>>>>>>>>>>> not a
> >>>>>>>>>>>>>>>>> Volume
> >>>>>>>>>>>>>>>>>>> is
> >>>>>>>>>>>>>>>>>>>>>>> managed (i.e. not based on the StoragePoolType,
> >>>> but
> >>>>>>> an
> >>>>>>>>>>> actual
> >>>>>>>>>>>>>>>>>>> isManaged
> >>>>>>>>>>>>>>>>>>>>>>> method), and asking the device driver to allocate
> >>>>>>>>>> resources
> >>>>>>>>>>>> for
> >>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>> volume
> >>>>>>>>>>>>>>>>>>>>>>> if it is managed.  Furthermore, the device driver
> >>>>>> needs
> >>>>>>>>>> to
> >>>>>>>>>>>>>> indicate
> >>>>>>>>>>>>>>>>>>>>> whether
> >>>>>>>>>>>>>>>>>>>>>>> or not it supports management operations.
> >>>> Finally, I
> >>>>>>>>>> think
> >>>>>>>>>>>> we
> >>>>>>>>>>>>>>>>> need to
> >>>>>>>>>>>>>>>>>>>>>>> provide the ability for an administrator to elect
> to
> >>>>>> have
> >>>>>>>>>>>>>> something
> >>>>>>>>>>>>>>>>>>>>> that is
> >>>>>>>>>>>>>>>>>>>>>>> manageable be unmanaged (i.e. the driver is
> >>>> capable
> >>>>>>>>>>> managing
> >>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>> device,
> >>>>>>>>>>>>>>>>>>>>>>> but the administrator has elected to leave it
> >>>>>>> unmanaged).
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>> Creation of a structure on the volume should be
> >>>> done
> >>>>>>> in
> >>>>>>>>>>> the
> >>>>>>>>>>>>> Xen
> >>>>>>>>>>>>>>>>>>>>>>> hypervisor module using methods exposed by the
> >>>>>>> Storage
> >>>>>>>>>>>>> layer to
> >>>>>>>>>>>>>>>>>>> perform
> >>>>>>>>>>>>>>>>>>>>>>> low-level operations (e.g. make directories,
> >>>> create a
> >>>>>>>>>> file,
> >>>>>>>>>>>> etc).
> >>>>>>>>>>>>>>>>>>> This
> >>>>>>>>>>>>>>>>>>>>>>> structure is specific to the operation of the Xen
> >>>>>>>>>>>> hypervisor, as
> >>>>>>>>>>>>>>>>> such,
> >>>>>>>>>>>>>>>>>>>>>>> should be confined to its implementation.  From
> >>>> my
> >>>>>>>>>>>>> perspective,
> >>>>>>>>>>>>>>>>>>> nothing
> >>>>>>>>>>>>>>>>>>>>> in
> >>>>>>>>>>>>>>>>>>>>>>> the Storage layer should be concerned with
> >>>> content.
> >>>>>>> From
> >>>>>>>>>>> its
> >>>>>>>>>>>>>>>>>>>>> perspective,
> >>>>>>>>>>>>>>>>>>>>>>> structure and data are opaque.  It provides the
> >>>> means
> >>>>>>> to
> >>>>>>>>>>>> query
> >>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>> data
> >>>>>>>>>>>>>>>>>>>>> to
> >>>>>>>>>>>>>>>>>>>>>>> support the interpretation of the content by
> >>>> higher-
> >>>>>>> level
> >>>>>>>>>>>>> layers
> >>>>>>>>>>>>>>>>> (e.g.
> >>>>>>>>>>>>>>>>>>>>>>> Hypervisors).  To my mind, attach should be a
> >>>>>>> composition
> >>>>>>>>>>> of
> >>>>>>>>>>>>>>>>>>> operations
> >>>>>>>>>>>>>>>>>>>>>>> from the Storage layer that varies based on the
> >>>>>>> Volume
> >>>>>>>>>>>>> storage
> >>>>>>>>>>>>>>>>>>> protocol
> >>>>>>>>>>>>>>>>>>>>>>> (iSCSI, local file system, NFS, RBD, etc).
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>> Thanks,
> >>>>>>>>>>>>>>>>>>>>>>> -John
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>> On Jun 5, 2013, at 12:25 PM, Mike Tutkowski <
> >>>>>>>>>>>>>>>>>>>>> mike.tutkowski@solidfire.com>
> >>>>>>>>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>> Hi John,
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>> Alternatively to the way the attach logic is
> >>>>>>> implemented
> >>>>>>>>>> in
> >>>>>>>>>>>> my
> >>>>>>>>>>>>>>>>> patch,
> >>>>>>>>>>>>>>>>>>> we
> >>>>>>>>>>>>>>>>>>>>>>> could do the following:
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>> Leave the attach logic in the agent code alone. In
> >>>>>>>>>>>>>>>>> VolumeManagerImpl
> >>>>>>>>>>>>>>>>>>> we
> >>>>>>>>>>>>>>>>>>>>>>> create an AttachVolumeCommand and send it to
> >>>> the
> >>>>>>>>>>>>> hypervisor.
> >>>>>>>>>>>>>> Before
> >>>>>>>>>>>>>>>>>>> this
> >>>>>>>>>>>>>>>>>>>>>>> command is sent, we could check to see if we're
> >>>>>>> dealing
> >>>>>>>>>>> with
> >>>>>>>>>>>>>>>>> Dynamic
> >>>>>>>>>>>>>>>>>>> (or
> >>>>>>>>>>>>>>>>>>>>>>> whatever we want to call it) storage and - if so -
> >>>>>> send a
> >>>>>>>>>>>> "Create
> >>>>>>>>>>>>>>>>> SR"
> >>>>>>>>>>>>>>>>>>>>>>> command to the hypervisor. If this returns OK, we
> >>>>>>> would
> >>>>>>>>>>> then
> >>>>>>>>>>>>>>>>> proceed
> >>>>>>>>>>>>>>>>>>> to
> >>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>> AttachVolumeCommand, as usual.
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>> This way the attach logic remains the same and
> >>>> we just
> >>>>>>>>>> add
> >>>>>>>>>>>>>> another
> >>>>>>>>>>>>>>>>>>>>>>> command to the agent code that is called for this
> >>>>>>>>>> particular
> >>>>>>>>>>>>> type
> >>>>>>>>>>>>>>>>> of
> >>>>>>>>>>>>>>>>>>>>>>> storage.
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>> What do you think?
> >>>>>>>>>>>>>>>>>>>>>>>
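
A sketch of that alternative; the "dynamic" check and the create-SR command are hypothetical, and the existing AttachVolumeCommand path is assumed to stay unchanged:

    // Sketch only: VolumeManagerImpl-side pre-step before the usual attach.
    abstract class PreAttachSrSketch {

        void attachVolume(long hostId, String iqn, String storageHost) {
            if (isDynamicStorage()) {
                // New step: have the hypervisor create the SR for this LUN first.
                boolean ok = sendCreateSrCommand(hostId, storageHost, iqn);
                if (!ok) {
                    throw new RuntimeException("Failed to create SR for " + iqn);
                }
            }
            // Then proceed with the existing AttachVolumeCommand as usual.
            sendExistingAttachVolumeCommand(hostId);
        }

        abstract boolean isDynamicStorage();
        abstract boolean sendCreateSrCommand(long hostId, String storageHost, String iqn);
        abstract void sendExistingAttachVolumeCommand(long hostId);
    }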
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>> On Tue, Jun 4, 2013 at 5:42 PM, Mike Tutkowski <
> >>>>>>>>>>>>>>>>>>>>>>> mike.tutkowski@solidfire.com> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> Hey John,
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> I created a document for a customer today that
> >>>>>>> outlines
> >>>>>>>>>>> how
> >>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>> plug-in
> >>>>>>>>>>>>>>>>>>>>>>>> works from a user standpoint. This will probably
> >>>> be
> >>>>>> of
> >>>>>>>>>> use
> >>>>>>>>>>>> to
> >>>>>>>>>>>>>>>>> you, as
> >>>>>>>>>>>>>>>>>>>>> well,
> >>>>>>>>>>>>>>>>>>>>>>>> as you perform the code review.
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> I have shared this document with you (you
> >>>> should
> >>>>>>> have
> >>>>>>>>>>>>> received
> >>>>>>>>>>>>>>>>> that
> >>>>>>>>>>>>>>>>>>>>>>>> information in a separate e-mail).
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> Talk to you later!
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> On Tue, Jun 4, 2013 at 3:48 PM, Mike Tutkowski <
> >>>>>>>>>>>>>>>>>>>>>>>> mike.tutkowski@solidfire.com> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>> Oh, OK, that sounds really good, John.
> >>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>> Thanks and talk to you tomorrow! :)
> >>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Jun 4, 2013 at 3:42 PM, John Burwell <
> >>>>>>>>>>>>>> jburwell@basho.com
> >>>>>>>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>> Mike,
> >>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>> I am never at a loss for an opinion.  I some
> >>>>>>> thoughts,
> >>>>>>>>>>> but
> >>>>>>>>>>>>>> want
> >>>>>>>>>>>>>>>>> to
> >>>>>>>>>>>>>>>>>>>>>>>>>> confirm assumptions and ideas against the
> >>>>>> solidfire,
> >>>>>>>>>>>>>>>>>>>>> disk_io_throttle,
> >>>>>>>>>>>>>>>>>>>>>>>>>> and
> >>>>>>>>>>>>>>>>>>>>>>>>>> object_store branches.  I hope to collect them
> >>>> in a
> >>>>>>>>>>>>> coherent
> >>>>>>>>>>>>>>>>> form
> >>>>>>>>>>>>>>>>>>>>>>>>>> tomorrow (5 June 2013).
> >>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>> Thanks,
> >>>>>>>>>>>>>>>>>>>>>>>>>> -John
> >>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>> On Jun 4, 2013, at 5:29 PM, Mike Tutkowski <
> >>>>>>>>>>>>>>>>>>>>>>>>>> mike.tutkowski@solidfire.com> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>> "So, in essence, the SolidFire plugin
> >>>> introduces
> >>>>>>> the
> >>>>>>>>>>>> notion
> >>>>>>>>>>>>>> of
> >>>>>>>>>>>>>>>>> a
> >>>>>>>>>>>>>>>>>>>>>>>>>> managed
> >>>>>>>>>>>>>>>>>>>>>>>>>>> iSCSI device and provisioned IOPS."
> >>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>> Technically, the SolidFire plug-in just
> >>>> introduces
> >>>>>>>>>> the
> >>>>>>>>>>>>> notion
> >>>>>>>>>>>>>>>>> of
> >>>>>>>>>>>>>>>>>>>>>>>>>>> provisioned storage IOPS.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>> The storage framework that leverages the
> >>>> plug-in
> >>>>>>> was
> >>>>>>>>>>>>>>>>> incomplete,
> >>>>>>>>>>>>>>>>>>> so
> >>>>>>>>>>>>>>>>>>>>>>>>>> I had
> >>>>>>>>>>>>>>>>>>>>>>>>>>> to try to add in the notion of a managed iSCSI
> >>>>>>>>>> device.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>> I appreciate all the time you've been
> >>>> spending on
> >>>>>>>>>> this.
> >>>>>>>>>>>> :)
> >>>>>>>>>>>>> Do
> >>>>>>>>>>>>>>>>> you
> >>>>>>>>>>>>>>>>>>>>>>>>>> have a
> >>>>>>>>>>>>>>>>>>>>>>>>>>> recommendation as to how we should
> >>>> accomplish
> >>>>>>>>>>> what
> >>>>>>>>>>>>> you're
> >>>>>>>>>>>>>>>>> looking
> >>>>>>>>>>>>>>>>>>>>>>>>>> for?
> >>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>> Thanks!
> >>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Jun 4, 2013 at 3:19 PM, John Burwell
> >>>> <
> >>>>>>>>>>>>>>>>> jburwell@basho.com>
> >>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> Mike,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> So, in essence, the SolidFire plugin
> >>>> introduces
> >>>>>>> the
> >>>>>>>>>>>> notion
> >>>>>>>>>>>>>> of
> >>>>>>>>>>>>>>>>> a
> >>>>>>>>>>>>>>>>>>>>>>>>>> managed
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> iSCSI device and provisioned IOPS.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> I want to see a separation of the
> >>>> management
> >>>>>>>>>>>>> capabilities
> >>>>>>>>>>>>>>>>> (i.e.
> >>>>>>>>>>>>>>>>>>>>> can
> >>>>>>>>>>>>>>>>>>>>>>>>>> a
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> device be managed/does an operator want
> >>>> it
> >>>>>>>>>>> managed
> >>>>>>>>>>>>> by
> >>>>>>>>>>>>>>>>> CloudStack)
> >>>>>>>>>>>>>>>>>>>>>>>>>> from the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> storage protocol.  Ideally, we should end up
> >>>> with
> >>>>>>> a
> >>>>>>>>>>>>> semantic
> >>>>>>>>>>>>>>>>> that
> >>>>>>>>>>>>>>>>>>>>>>>>>> will
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> allow any type of storage device to be
> >>>> managed.
> >>>>>>> I
> >>>>>>>>>>> also
> >>>>>>>>>>>>> want
> >>>>>>>>>>>>>>>>> to
> >>>>>>>>>>>>>>>>>>>>> make
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> progress on decoupling the storage types
> >>>> from
> >>>>>>> the
> >>>>>>>>>>>>> hypervisor
> >>>>>>>>>>>>>>>>>>>>>>>>>> definitions.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> Thanks,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> -John
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> On Jun 4, 2013, at 5:13 PM, Mike Tutkowski
> >>>> <
> >>>>>>>>>>>>>>>>>>>>>>>>>> mike.tutkowski@solidfire.com>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hi John,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> No problem. Answers are below in red.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> Thanks!
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Jun 4, 2013 at 2:55 PM, John
> >>>> Burwell <
> >>>>>>>>>>>>>>>>>>> jburwell@basho.com
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Mike,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Could you please answer the following
> >>>>>>> questions
> >>>>>>>>>>> for
> >>>>>>>>>>>>> me
> >>>>>>>>>>>>>> with
> >>>>>>>>>>>>>>>>>>>>>>>>>> regards to
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> operation of the SolidFire plugin:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What is the cardinality between iSCSI
> >>>> LUNs
> >>>>>>> and
> >>>>>>>>>>> SAN
> >>>>>>>>>>>>>> volumes?
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> Each SAN volume is equivalent to a single
> >>>> LUN
> >>>>>>> (LUN
> >>>>>>>>>>> 0).
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> 1 SAN volume : 1 LUN
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What is the cardinality between SAN
> >>>> Volumes
> >>>>>>> and
> >>>>>>>>>>>>> CloudStack
> >>>>>>>>>>>>>>>>>>>>>>>>>> Volumes?
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> 1 SAN volume : 1 CloudStack volume
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Are the LUN(s) created by the
> >>>> management
> >>>>>>>>>>> server or
> >>>>>>>>>>>>>>>>> externally
> >>>>>>>>>>>>>>>>>>> by
> >>>>>>>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> operator?
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> When used with the SolidFire plug-in, a
> >>>> SAN
> >>>>>>>>>>> volume
> >>>>>>>>>>>>> (same
> >>>>>>>>>>>>>> as a
> >>>>>>>>>>>>>>>>>>> SAN
> >>>>>>>>>>>>>>>>>>>>>>>>>> LUN) is
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> created by the management server (via
> >>>> the
> >>>>>>> plug-in)
> >>>>>>>>>>>>> the
> >>>>>>>>>>>>>> first
> >>>>>>>>>>>>>>>>>>> time
> >>>>>>>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> CloudStack volume is attached to a
> >>>> hypervisor.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> If you don't want to use the SolidFire plug-
> >>>> in,
> >>>>>>> but
> >>>>>>>>>>>> still
> >>>>>>>>>>>>>>>>> want
> >>>>>>>>>>>>>>>>>>> to
> >>>>>>>>>>>>>>>>>>>>>>>>>> use a
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> SolidFire volume (LUN), you can do this
> >>>> already
> >>>>>>>>>>> today
> >>>>>>>>>>>>>> (prior
> >>>>>>>>>>>>>>>>> to
> >>>>>>>>>>>>>>>>>>>>>>>>>> 4.2). The
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> admin manually creates the SAN volume
> >>>> and -
> >>>>>>> in
> >>>>>>>>>>> this
> >>>>>>>>>>>>> case -
> >>>>>>>>>>>>>>>>>>>>>>>>>> multiple VMs
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> and
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> data disks can share this SAN volume.
> >>>> While
> >>>>>>> you
> >>>>>>>>>>> can do
> >>>>>>>>>>>>> this
> >>>>>>>>>>>>>>>>>>>>> today,
> >>>>>>>>>>>>>>>>>>>>>>>>>> it is
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> not useful if you want to enforce storage
> >>>> QoS.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Are the SAN volumes by the
> >>>> management
> >>>>>>> server
> >>>>>>>>>>> or
> >>>>>>>>>>>>> externally
> >>>>>>>>>>>>>>>>> by
> >>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> operator?
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> When the SolidFire plug-in is used, the
> >>>> SAN
> >>>>>>>>>>> volumes
> >>>>>>>>>>>>> are
> >>>>>>>>>>>>>>>>>>>>> completely
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> managed
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> by the management server (via the plug-
> >>>> in).
> >>>>>>> There
> >>>>>>>>>>> is
> >>>>>>>>>>>>> no
> >>>>>>>>>>>>>> admin
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> interaction.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> This allows for a 1:1 mapping between a
> >>>> SAN
> >>>>>>>>>>> volume
> >>>>>>>>>>>>> and a
> >>>>>>>>>>>>>>>>>>>>> CloudStack
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> volume,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> which is necessary for any storage vendor
> >>>> that
> >>>>>>>>>>>>> supports
> >>>>>>>>>>>>>> true
> >>>>>>>>>>>>>>>>>>> QoS.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >
> > I would like to clarify how these pieces are related and expected to
> > operate.
> >
> > Thanks,
> > -John
> >
> > On Jun 4, 2013, at 3:46 PM, Mike Tutkowski <mike.tutkowski@solidfire.com>
> > wrote:
>
> "In particular, how do we ensure that multiple VMs with provisioned IOPS
> won't be cut off by the underlying storage."
>
> In the storage QoS world, we need to map a single SAN volume (LUN) to a
> single CloudStack volume. We cannot have multiple CloudStack volumes
> sharing a single SAN volume and still guarantee QoS.
>
> If the user wants to have a single SAN volume house more than one
> CloudStack volume, they can do that today without any of my plug-in code.
>
>
> On Tue, Jun 4, 2013 at 1:43 PM, Mike Tutkowski <
> mike.tutkowski@solidfire.com> wrote:
>
> "The administrator will allocate a SAN volume for CloudStack's use onto
> which CloudStack volumes will be created."
>
> I think we crossed e-mails. :)
>
> Check out my recent e-mail on this.
>
>
> On Tue, Jun 4, 2013 at 1:41 PM, John Burwell <jburwell@basho.com> wrote:
>
> Mike,
>
> You are coming to the part which concerns me -- concepts from the
> hypervisor are leaking into the storage layer.
>
> Thanks,
> -John
>
> On Jun 4, 2013, at 3:35 PM, Mike Tutkowski <mike.tutkowski@solidfire.com>
> wrote:
>
> The weird part is that the iSCSI type is today only used (as far as I
> know) in regards to XenServer (when you have not PreSetup an SR).
>
> If you want to use your iSCSI volume from VMware, it uses the vmfs type.
>
> If you want to use your iSCSI volume from KVM, it uses the
> SharedMountPoint type.
>
> So, I suppose mine and Edison's thinking here was to make a new type of
> storage to describe this dynamic ability Edison added into the storage
> framework. Maybe it should be more specific, though: Dynamic_iSCSI
> versus, say, Dynamic_FC.
>
>
> On Tue, Jun 4, 2013 at 1:27 PM, Mike Tutkowski <
> mike.tutkowski@solidfire.com> wrote:
>
> "The storage device itself shouldn't know or care that it is being used
> for a Xen SR -- simply be able to answer questions about what it is
> storing."
>
> I see...so your concern here is that the SolidFire plug-in needs to call
> itself "Dynamic" storage so that the hypervisor logic knows to treat it
> as such.
>
> I'm totally open to removing that constraint and just calling it iSCSI or
> whatever. We would just need a way for the hypervisor attach logic to
> detect this new requirement and perform the necessary activities.
>
>
> On Tue, Jun 4, 2013 at 1:24 PM, John Burwell <jburwell@basho.com> wrote:
>
> Mike,
>
> See my responses in-line.
>
> Thanks,
> -John
>
> On Jun 4, 2013, at 3:10 PM, Mike Tutkowski <mike.tutkowski@solidfire.com>
> wrote:
>
> > I'm trying to picture this:
> >
> > "Finally, while CloudStack may be able to manage a device, an operator
> > may choose to leave it unmanaged by CloudStack (e.g. the device is
> > shared by many services, and the operator has chosen to dedicate only a
> > portion of it to CloudStack).  Does my reasoning make sense?"
> >
> > I guess I'm not sure how creating a SAN volume via the plug-in (before
> > an attach request to the hypervisor) would work unless the hypervisor
> > consumes the SAN volume in the form of, say, an SR.
>
> My thinking is that, independent of CloudStack, an operator allocates a
> chunk of a SAN to CloudStack, and exposes it through a LUN.  They simply
> want to turn control of that LUN over to CloudStack, but not allow
> CloudStack to allocate any more LUNs.
>
> > As the attach logic stands prior to my changes, we would be passing in
> > a SAN volume that does not have the necessary hypervisor support (like
> > an SR) and the logic will fail.
> >
> > Are you thinking we should maybe have the storage framework itself
> > detect that such a SAN volume needs support from the hypervisor side
> > and have it call into the agent code specifically to create the SR
> > before the attach logic runs in the agent code?
>
> I think the hypervisor management plugin should have a rich enough
> interface to storage to determine what is available for volume storage.
> For Xen, this interface would allow the interrogation of the device to
> determine the SR is present.  The storage device itself shouldn't know or
> care that it is being used for a Xen SR -- simply be able to answer
> questions about what it is storing.
>
> >
> > On Tue, Jun 4, 2013 at 1:01 PM, Mike Tutkowski <
> > mike.tutkowski@solidfire.com> wrote:
>
> So, the flow is as follows:
>
> * The admin registers the SolidFire driver (which is a type of so-called
> Dynamic storage). Once this is done, a new Primary Storage shows up in
> the applicable zone.
>
> * The admin creates a Disk Offering that references the storage tag of
> the newly created Primary Storage.
>
> * The end user creates a CloudStack volume. This leads to a new row in
> the cloud.volumes table.
>
> * The end user attaches the CloudStack volume to a VM (attach disk). This
> leads to the storage framework calling the plug-in to create a new volume
> on its storage system (in my case, a SAN). The plug-in also updates the
> cloud.volumes row with applicable data (like the IQN of the SAN volume).
> This plug-in code is only invoked if the CloudStack volume is in the
> 'Allocated' state. After the attach, the volume will be in the 'Ready'
> state (even after a detach disk) and the plug-in code will not be called
> again to create this SAN volume.
>
> * The hypervisor-attach logic is run and detects the CloudStack volume to
> attach needs "assistance" in the form of a hypervisor data structure (ex.
> an SR on XenServer).
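>
> Roughly, as a sketch (hypothetical, simplified class and method names --
> not the actual storage framework or plug-in API -- just to illustrate the
> sequence above):
>
> // Illustration only: hypothetical types standing in for the framework.
> class AttachFlowSketch {
>
>     enum VolumeState { ALLOCATED, READY }
>
>     static class CloudStackVolume {          // one row in cloud.volumes
>         VolumeState state = VolumeState.ALLOCATED;
>         long sizeBytes;
>         String iqn;                          // filled in by the plug-in
>     }
>
>     interface StoragePlugin {                // e.g. the SolidFire plug-in
>         String createSanVolume(long sizeBytes);  // returns the LUN's IQN
>     }
>
>     interface HypervisorResource {           // e.g. the XenServer side
>         void prepareAndAttach(CloudStackVolume vol);  // creates SR if needed
>     }
>
>     // Called when the end user performs "attach disk".
>     static void attachVolume(CloudStackVolume vol, StoragePlugin plugin,
>                              HypervisorResource hv) {
>         if (vol.state == VolumeState.ALLOCATED) {
>             // First attach only: the plug-in creates the SAN volume and
>             // its IQN is recorded in the cloud.volumes row.
>             vol.iqn = plugin.createSanVolume(vol.sizeBytes);
>             vol.state = VolumeState.READY;   // stays 'Ready' even after detach
>         }
>         // The hypervisor-attach logic then builds any hypervisor data
>         // structure (ex. an SR on XenServer) and attaches the disk.
>         hv.prepareAndAttach(vol);
>     }
> }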
>
>
> On Tue, Jun 4, 2013 at 12:54 PM, Mike Tutkowski <
> mike.tutkowski@solidfire.com> wrote:
>
> "To ensure that we are in sync on terminology, volume, in these
> definitions, refers to the physical allocation on the device, correct?"
>
> Yes...when I say 'volume', I try to mean 'SAN volume'.
>
> To refer to the 'volume' the end user can make in CloudStack, I try to
> use 'CloudStack volume'.
>
>
> On Tue, Jun 4, 2013 at 12:50 PM, Mike Tutkowski <
> mike.tutkowski@solidfire.com> wrote:
>
> Hi John,
>
> What you say here may very well make sense, but I'm having a hard time
> envisioning it.
>
> Perhaps we should draw Edison in on this conversation as he was the
> initial person to suggest the approach I took.
>
> What do you think?
>
> Thanks!
>
>
> On Tue, Jun 4, 2013 at 12:42 PM, John Burwell <jburwell@basho.com> wrote:
>
> Mike,
>
> It feels like we are combining two distinct concepts -- storage device
> management and storage protocols.  In both cases, we are communicating
> with iSCSI, but one allows the system to create/delete volumes (Dynamic)
> on the device while the other requires the volume to be managed outside
> of the CloudStack context.  To ensure that we are in sync on terminology,
> volume, in these definitions, refers to the physical allocation on the
> device, correct?  Minimally, we must be able to communicate with a
> storage device to move bits from one place to another, read bits, delete
> bits, etc.  Optionally, a storage device may be able to be managed by
> CloudStack.  Therefore, we can have an unmanaged iSCSI device onto which
> we store a Xen SR, and we can have a managed SolidFire iSCSI device on
> which CloudStack is capable of allocating LUNs and storing volumes.
> Finally, while CloudStack may be able to manage a device, an operator may
> choose to leave it unmanaged by CloudStack (e.g. the device is shared by
> many services, and the operator has chosen to dedicate only a portion of
> it to CloudStack).  Does my reasoning make sense?
>
> Assuming my thoughts above are reasonable, it seems appropriate to strip
> the management concerns from StoragePoolType, add the notion of a storage
> device with an attached driver that indicates whether or not it is
> managed by CloudStack, and establish a separate abstraction representing
> a physical allocation on a device that is associated with a volume.  With
> these notions in place, hypervisor drivers can declare which protocols
> they support and, when they encounter a device managed by CloudStack,
> utilize the management operations exposed by the driver to automate
> allocation.  If these thoughts/concepts make sense, then we can sit down
> and drill down to a more detailed design.
>
> Thanks,
> -John
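>
> To make that concrete, one possible shape for it is sketched below
> (purely illustrative interface names; none of these exist in CloudStack
> today):
>
> // Sketch of the proposal: protocol and management are separate concerns,
> // and a "physical allocation" is its own abstraction tied to a volume.
> enum StorageProtocol { ISCSI, NFS, FC }
>
> interface PhysicalAllocation {       // e.g. a LUN on a SAN
>     String path();                   // e.g. an IQN for iSCSI
>     long sizeBytes();
> }
>
> interface StorageDevice {
>     StorageProtocol protocol();
>     boolean isManaged();             // may CloudStack create/delete allocations?
> }
>
> interface ManagedStorageDriver {     // only present when isManaged() is true
>     PhysicalAllocation allocate(long sizeBytes);
>     void deallocate(PhysicalAllocation allocation);
> }
>
> A hypervisor driver could then declare which protocols it supports and,
> when it sees a managed device, call allocate() itself rather than assume
> the allocation already exists.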
>
> On Jun 3, 2013, at 5:25 PM, Mike Tutkowski <mike.tutkowski@solidfire.com>
> wrote:
>
> Here is the difference between the current iSCSI type and the Dynamic
> type:
>
> iSCSI type: The admin has to go in and create a Primary Storage based on
> the iSCSI type. At this point in time, the iSCSI volume must exist on the
> storage system (it is pre-allocated). Future CloudStack volumes are
> created as VDIs on the SR that was created behind the scenes.
>
> Dynamic type: The admin has to go in and create Primary Storage based on
> a plug-in that will create and delete volumes on its storage system
> dynamically (as is enabled via the storage framework). When a user wants
> to attach a CloudStack volume that was created, the framework tells the
> plug-in to create a new volume. After this is done, the attach logic for
> the hypervisor in question is called. No hypervisor data structure exists
> at this point because the volume was just created. The hypervisor data
> structure must be created.
>
>
> On Mon, Jun 3, 2013 at 3:21 PM, Mike Tutkowski <
> mike.tutkowski@solidfire.com> wrote:
>
> These are new terms, so I should probably have defined them up front for
> you. :)
>
> Static storage: Storage that is pre-allocated (ex. an admin creates a
> volume on a SAN), then a hypervisor data structure is created to consume
> the storage (ex. XenServer SR), then that hypervisor data structure is
> consumed by CloudStack. Disks (VDI) are later placed on this hypervisor
> data structure as needed. In these cases, the attach logic assumes the
> hypervisor data structure is already in place and simply attaches the VDI
> on the hypervisor data structure to the VM in question.
>
> Dynamic storage: Storage that is not pre-allocated. Instead of
> pre-existent storage, this could be a SAN (not a volume on a SAN, but the
> SAN itself). The hypervisor data structure must be created when an attach
> volume is performed because these types of volumes have not been
> pre-hooked up to such a hypervisor data structure by an admin. Once the
> attach logic creates, say, an SR on XenServer for this volume, it
> attaches the one and only VDI within the SR to the VM in question.
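>
> In rough code (illustrative only -- these interfaces stand in for the
> XenServer objects and for helpers the real agent code would provide):
>
> class DynamicAttachSketch {
>
>     interface Sr {}                      // stand-in for a XenServer SR
>     interface Vdi {}                     // stand-in for a XenServer VDI
>
>     interface XenHelper {
>         Sr findSrByIqn(String iqn);      // null if no SR exists yet
>         Sr createIscsiSr(String iqn);    // introduce the SR for this LUN
>         Vdi getSingleVdi(Sr sr);         // the one and only VDI in the SR
>         void attachVdiToVm(Vdi vdi, String vmName);
>     }
>
>     static void attachDynamicVolume(XenHelper xen, String iqn, String vmName) {
>         // Dynamic storage has no pre-existing SR, so create it on demand.
>         Sr sr = xen.findSrByIqn(iqn);
>         if (sr == null) {
>             sr = xen.createIscsiSr(iqn);
>         }
>         // 1 SAN volume : 1 CloudStack volume, so the SR holds exactly one VDI.
>         Vdi vdi = xen.getSingleVdi(sr);
>         // Attach that VDI to the VM, just as the static path already does.
>         xen.attachVdiToVm(vdi, vmName);
>     }
> }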
>
>
> On Mon, Jun 3, 2013 at 3:13 PM, John Burwell <jburwell@basho.com> wrote:
>
> Mike,
>
> The current implementation of the Dynamic type attach behavior works in
> terms of Xen iSCSI, which is why I ask about the difference.  Another way
> to ask the question -- what is the definition of a Dynamic storage pool
> type?
>
> Thanks,
> -John
>
> On Jun 3, 2013, at 5:10 PM, Mike Tutkowski <mike.tutkowski@solidfire.com>
> wrote:
>
> As far as I know, the iSCSI type is uniquely used by XenServer when you
> want to set up Primary Storage that is directly based on an iSCSI target.
> This allows you to skip the step of going to the hypervisor and creating
> a storage repository based on that iSCSI target as CloudStack does that
> part for you. I think this is only supported for XenServer. For all other
> hypervisors, you must first go to the hypervisor and perform this step
> manually.
>
> I don't really know what RBD is.
>
>
> On Mon, Jun 3, 2013 at 2:13 PM, John Burwell <jburwell@basho.com> wrote:
>
> Mike,
>
> Reading through the code, what is the difference between the iSCSI and
> Dynamic types?  Why isn't RBD considered Dynamic?
>
> Thanks,
> -John
>
> On Jun 3, 2013, at 3:46 PM, Mike Tutkowski <mike.tutkowski@solidfire.com>
> wrote:
>
> This new type of storage is defined in the Storage.StoragePoolType class
> (called Dynamic):
>
> public static enum StoragePoolType {
>     Filesystem(false),        // local directory
>     NetworkFilesystem(true),  // NFS or CIFS
>     IscsiLUN(true),           // shared LUN, with a clusterfs overlay
>     Iscsi(true),              // for e.g., ZFS Comstar
>     ISO(false),               // for iso image
>     LVM(false),               // XenServer local LVM SR
>     CLVM(true),
>     RBD(true),
>     SharedMountPoint(true),
>     VMFS(true),               // VMware VMFS storage
>     PreSetup(true),           // for XenServer, Storage Pool is set up by customers
>     EXT(false),               // XenServer local EXT SR
>     OCFS2(true),
>     Dynamic(true);            // dynamic, zone-wide storage (ex. SolidFire)
>
>     boolean shared;
>
>     StoragePoolType(boolean shared) {
>         this.shared = shared;
>     }
>
>     public boolean isShared() {
>         return shared;
>     }
> }
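>
> (So, for example, StoragePoolType.Dynamic.isShared() and
> StoragePoolType.RBD.isShared() both return true, while
> StoragePoolType.LVM.isShared() returns false.)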
>
>
> On Mon, Jun 3, 2013 at 1:41 PM, Mike Tutkowski <
> mike.tutkowski@solidfire.com> wrote:
>
> For example, let's say another storage company wants to implement a
> plug-in to leverage its Quality of Service feature. It would be dynamic,
> zone-wide storage, as well. They would need only implement a storage
> plug-in as I've made the necessary changes to the hypervisor-attach logic
> to support their plug-in.
>
>
> On Mon, Jun 3, 2013 at 1:39 PM, Mike Tutkowski <
> mike.tutkowski@solidfire.com> wrote:
>
> Oh, sorry to imply the XenServer code is SolidFire specific. It is not.
>
> The XenServer attach logic is now aware of dynamic, zone-wide storage
> (and SolidFire is an implementation of this kind of storage). This kind
> of storage is new to 4.2 with Edison's storage framework changes.
>
> Edison created a new framework that supported the creation and deletion
> of volumes dynamically. However, when I visited with him in Portland back
> in April, we realized that it was not complete. We realized there was
> nothing CloudStack could do with these volumes unless the attach logic
> was changed to recognize this new type of storage and create the
> appropriate hypervisor data structure.
>
>
> On Mon, Jun 3, 2013 at 1:28 PM, John Burwell <jburwell@basho.com> wrote:
>
> Mike,
>
> It is generally odd to me that any operation in the Storage layer would
> understand or care about hypervisor details.  I expect to see the Storage
> services expose a set of operations that can be composed/driven by the
> Hypervisor implementations to allocate space/create structures per their
> needs.  If we don't invert this dependency, we are going to end up with a
> massive n-to-n problem that will make the system increasingly difficult
> to maintain and enhance.  Am I understanding that the Xen-specific
> SolidFire code is located in the CitrixResourceBase class?
>
> Thanks,
> -John
>
>
> On Mon, Jun 3, 2013 at 3:12 PM, Mike Tutkowski <
> mike.tutkowski@solidfire.com> wrote:
>
> To delve into this in a bit more detail:
>
> Prior to 4.2 and aside from one setup method for XenServer, the admin had
> to first create a volume on the storage system, then go into the
> hypervisor to set up a data structure to make use of the volume (ex. a
> storage repository on XenServer or a datastore on ESX(i)). VMs and data
> disks then shared this storage system's volume.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> With Edison's
> >>>> new
> >>>>>>>>>>> storage
> >>>>>>>>>>>>> framework,
> >>>>>>>>>>>>>>>>>>>>> storage
> >>>>>>>>>>>>>>>>>>>>>>>>>> need
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> no
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> longer
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> be so
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> static
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and you can
> >>>> easily
> >>>>>>>>>>> create
> >>>>>>>>>>>>> a 1:1
> >>>>>>>>>>>>>>>>>>>>> relationship
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> between
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> a
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> storage
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> system's
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> volume and
> >>>> the
> >>>>>>> VM's
> >>>>>>>>>>> data
> >>>>>>>>>>>>> disk
> >>>>>>>>>>>>>>>>> (necessary
> >>>>>>>>>>>>>>>>>>>>> for
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> storage
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Quality
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> of
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Service).
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> You can now
> >>>> write
> >>>>>>> a
> >>>>>>>>>>> plug-
> >>>>>>>>>>>>> in that is
> >>>>>>>>>>>>>>>>> called
> >>>>>>>>>>>>>>>>>>>>> to
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> dynamically
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> create
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> delete
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> volumes as
> >>>>>>> needed.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> The problem
> >>>> that
> >>>>>>> the
> >>>>>>>>>>>>> storage
> >>>>>>>>>>>>>> framework
> >>>>>>>>>>>>>>>>> did
> >>>>>>>>>>>>>>>>>>>>>>>>>> not
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> address
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> is
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> in
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> creating
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> deleting the
> >>>>>>>>>>> hypervisor-
> >>>>>>>>>>>>> specific data
> >>>>>>>>>>>>>>>>>>>>>>>>>> structure
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> when
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> performing an
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>> attach/detach.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> That being
> >>>> the
> >>>>>>> case,
> >>>>>>>>>>> I've
> >>>>>>>>>>>>> been
> >>>>>>>>>>>>>>>>> enhancing
> >>>>>>>>>>>>>>>>>>> it
> >>>>>>>>>>>>>>>>>>>>>>>>>> to do
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> so.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I've
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> got
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> XenServer
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> worked out
> >>>> and
> >>>>>>>>>>>>> submitted. I've got
> >>>>>>>>>>>>>>>>> ESX(i)
> >>>>>>>>>>>>>>>>>>>>> in
> >>>>>>>>>>>>>>>>>>>>>>>>>> my
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> sandbox
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> can
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> submit
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> this
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> if we extend
> >>>> the
> >>>>>>> 4.2
> >>>>>>>>>>>>> freeze date.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Does that
> >>>> help a
> >>>>>>> bit? :)
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
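For concreteness, a minimal sketch of the kind of plug-in contract described above: the storage framework asks the plug-in to create and delete volumes on demand, while the hypervisor-side data structure (SR/datastore) is handled separately at attach time. All names below are illustrative assumptions, not CloudStack's actual driver API.

    // Illustrative only: hypothetical names, not the framework's real signatures.
    public interface ManagedStorageDriver {

        // Called by the framework when CloudStack needs a new volume; the driver
        // talks to the storage system's management API and returns an identifier
        // the hypervisor can reach (for example an iSCSI target/IQN).
        String createVolume(String volumeName, long sizeInBytes, long minIops, long maxIops);

        // Called when the volume is destroyed in CloudStack.
        void deleteVolume(String volumeId);
    }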
> >>>>> On Mon, Jun 3, 2013 at 1:03 PM, Mike Tutkowski <mike.tutkowski@solidfire.com> wrote:
> >>>>>
> >>>>>> Hi John,
> >>>>>>
> >>>>>> The storage plug-in - by itself - is hypervisor agnostic.
> >>>>>>
> >>>>>> The issue is with the volume-attach logic (in the agent code). The storage
> >>>>>> framework calls into the plug-in to have it create a volume as needed, but
> >>>>>> when the time comes to attach the volume to a hypervisor, the attach logic
> >>>>>> has to be smart enough to recognize it's being invoked on zone-wide
> >>>>>> storage (where the volume has just been created) and create, say, a
> >>>>>> storage repository (for XenServer) or a datastore (for VMware) to make use
> >>>>>> of the volume that was just created.
> >>>>>>
> >>>>>> I've been spending most of my time recently making the attach logic work
> >>>>>> in the agent code.
> >>>>>>
> >>>>>> Does that clear it up?
> >>>>>>
> >>>>>> Thanks!
> >>>>>>
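To illustrate the attach-time branch described above, a minimal, self-contained sketch; every class and method name in it is a hypothetical stand-in, not the actual agent code.

    // Illustrative only: hypothetical names, not the real agent classes.
    public class AttachVolumeSketch {

        static class VolumeInfo {
            boolean managed;     // true for zone-wide managed storage (e.g. a SolidFire volume)
            String iscsiTarget;  // where the freshly created volume lives
            String poolPath;     // an already-prepared pool, for the traditional case
        }

        void attach(VolumeInfo vol) {
            if (vol.managed) {
                // Managed storage: the hypervisor-side container (an SR on XenServer,
                // a datastore on ESX(i)) does not exist yet and must be created first.
                String container = createHypervisorContainer(vol.iscsiTarget);
                attachDiskFromContainer(container);
            } else {
                // Traditional storage: the admin set the pool up ahead of time.
                attachDiskFromExistingPool(vol.poolPath);
            }
        }

        String createHypervisorContainer(String target) { return "container-for-" + target; }
        void attachDiskFromContainer(String container) { /* hypervisor API call would go here */ }
        void attachDiskFromExistingPool(String path) { /* hypervisor API call would go here */ }
    }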
> >>>>>> On Mon, Jun 3, 2013 at 12:48 PM, John Burwell <jburwell@basho.com> wrote:
> >>>>>>
> >>>>>>> Mike,
> >>>>>>>
> >>>>>>> Can you explain why the storage driver is hypervisor specific?
> >>>>>>>
> >>>>>>> Thanks,
> >>>>>>> -John
> >>>>>>>
> >>>>>>> On Jun 3, 2013, at 1:21 PM, Mike Tutkowski <mike.tutkowski@solidfire.com> wrote:
> >>>>>>>
> >>>>>>>> Yes, ultimately I would like to support all hypervisors that CloudStack
> >>>>>>>> supports.  I think I'm just out of time for 4.2 to get KVM in.
> >>>>>>>>
> >>>>>>>> Right now this plug-in supports XenServer.  Depending on what we do with
> >>>>>>>> regards to 4.2 feature freeze, I have it working for VMware in my
> >>>>>>>> sandbox, as well.
> >>>>>>>>
> >>>>>>>> Also, just to be clear, this is all in regards to Disk Offerings.  I plan
> >>>>>>>> to support Compute Offerings post 4.2.
> >>>>>>>>
> >>>>>>>> On Mon, Jun 3, 2013 at 11:14 AM, Kelcey Jamison Damage <kelcey@bbits.ca> wrote:
> >>>>>>>>
> >>>>>>>>> Is there any plan on supporting KVM in the patch cycle post 4.2?
> >>>>>>>>>
> >>>>>>>>> ----- Original Message -----
> >>>>>>>>> From: "Mike Tutkowski" <mike.tutkowski@solidfire.com>
> >>>>>>>>> To: dev@cloudstack.apache.org
> >>>>>>>>> Sent: Monday, June 3, 2013 10:12:32 AM
> >>>>>>>>> Subject: Re: [MERGE] disk_io_throttling to MASTER
> >>>>>>>>>
> >>>>>>>>>> I agree on merging Wei's feature first, then mine.
> >>>>>>>>>>
> >>>>>>>>>> If his feature is for KVM only, then it is a non issue as I don't
> >>>>>>>>>> support KVM in 4.2.
> >>>>>>>>>>
> >>>>>>>>>> On Mon, Jun 3, 2013 at 8:55 AM, Wei ZHOU <ustcweizhou@gmail.com> wrote:
> >>>>>>>>>>
> >>>>>>>>>>> John,
> >>>>>>>>>>>
> >>>>>>>>>>> For the billing, as no one works on billing now, users need to calculate
> >>>>>>>>>>> the billing by themselves. They can get the service_offering and
> >>>>>>>>>>> disk_offering of VMs and volumes for the calculation. Of course it is
> >>>>>>>>>>> better to tell the user the exact limitation value of an individual
> >>>>>>>>>>> volume, and the network rate limitation for nics as well. I can work on
> >>>>>>>>>>> it later. Do you think it is a part of I/O throttling?
> >>>>>>>>>>>
> >>>>>>>>>>> Sorry, I misunderstood the second question.
> >>>>>>>>>>>
> >>>>>>>>>>> I agree with what you said about the two features.
> >>>>>>>>>>>
> >>>>>>>>>>> -Wei
> >>>>>>>>>>>
> >>>>>>>>>>> 2013/6/3 John Burwell <jburwell@basho.com>
> >>>>>>>>>>>
> >>>>>>>>>>>> Wei,
> >>>>>>>>>>>>
> >>>>>>>>>>>> On Jun 3, 2013, at 2:13 AM, Wei ZHOU <ustcweizhou@gmail.com> wrote:
> >>>>>>>>>>>>
> >>>>>>>>>>>>> Hi John, Mike
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> I hope Mike's answer helps you. I am trying to add more.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> (1) I think billing should depend on I/O statistics rather than the
> >>>>>>>>>>>>> IOPS limitation. Please review disk_io_stat if you have time.
> >>>>>>>>>>>>> disk_io_stat can get the I/O statistics, including bytes/iops
> >>>>>>>>>>>>> read/write, for an individual virtual machine.
> >>>>>>>>>>>>>
> >>>>>>>>>>>> Going by the AWS model, customers are billed more for volumes with
> >>>>>>>>>>>> provisioned IOPS, as well as for those operations
> >>>>>>>>>>>> (http://aws.amazon.com/ebs/).  I would imagine our users would like the
> >>>>>>>>>>>> option to employ similar cost models.  Could an operator implement such
> >>>>>>>>>>>> a billing model in the current patch?
> >>>>>>>>>>>>
> >>>>>>>>>>>>> (2) Do you mean an IOPS change at runtime? KVM supports setting an
> >>>>>>>>>>>>> IOPS/BPS limitation for a running virtual machine through the command
> >>>>>>>>>>>>> line. However, CloudStack does not support changing the parameters of a
> >>>>>>>>>>>>> created offering (compute offering or disk offering).
> >>>>>>>>>>>>>
> >>>>>>>>>>>> I meant at the Java interface level.  I apologize for being unclear.
> >>>>>>>>>>>> Can we further generalize the allocation algorithms with a set of
> >>>>>>>>>>>> interfaces that describe the service guarantees provided by a resource?
> >>>>>>>>>>>>
> >>>>>>>>>>>>> (3) It is a good question. Maybe it is better to commit Mike's patch
> >>>>>>>>>>>>> after disk_io_throttling, as Mike needs to consider the limitation per
> >>>>>>>>>>>>> hypervisor type, I think.
> >>>>>>>>>>>>>
> >>>>>>>>>>>> I will expand on my thoughts in a later response to Mike regarding the
> >>>>>>>>>>>> touch points between these two features.  I think that
> >>>>>>>>>>>> disk_io_throttling will need to be merged before SolidFire, but I think
> >>>>>>>>>>>> we need closer coordination between the branches (possibly have
> >>>>>>>>>>>> solidfire track disk_io_throttling) to coordinate on this issue.
> >>>>>>>>>>>>
> >>>>>>>>>>>>> -Wei
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> 2013/6/3 John Burwell <jburwell@basho.com>
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>> Mike,
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> The things I want to understand are the following:
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> 1) Is there value in capturing IOPS policies in a common data model
> >>>>>>>>>>>>>> (e.g. for billing/usage purposes, expressing offerings)?
> >>>>>>>>>>>>>> 2) Should there be a common interface model for reasoning about IOPS
> >>>>>>>>>>>>>> provisioning at runtime?
> >>>>>>>>>>>>>> 3) How are conflicting provisioned IOPS configurations between a
> >>>>>>>>>>>>>> hypervisor and a storage device reconciled?  In particular, a scenario
> >>>>>>>>>>>>>> where a user is led to believe they have (and are billed for) more IOPS
> >>>>>>>>>>>>>> configured for a VM than the storage device has been configured to
> >>>>>>>>>>>>>> deliver.  Another scenario could be a consistent configuration between
> >>>>>>>>>>>>>> a VM and a storage device at creation time, where a later modification
> >>>>>>>>>>>>>> to the storage device introduces a logical inconsistency.
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> Thanks,
> >>>>>>>>>>>>>> -John
> >>>>>>>>>>>>>>
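One way to picture the "common interface model" being asked about in (1) and (2): a resource exposes the I/O guarantees it was provisioned with, wherever they happen to be enforced. A minimal sketch, with hypothetical names rather than any existing CloudStack interface:

    // Illustrative only: hypothetical names, not an existing CloudStack interface.
    public interface ProvisionedIops {
        long getMinIops();              // guaranteed floor (0 if none)
        long getMaxIops();              // enforced ceiling (0 if unlimited)
        boolean isStorageEnforced();    // true: storage-side QoS; false: hypervisor-side throttling
    }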
> >>>>>>>>>>>>>> On Jun 2, 2013, at 8:38 PM, Mike Tutkowski <mike.tutkowski@solidfire.com> wrote:
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> Hi John,
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> I believe Wei's feature deals with controlling the max number of IOPS
> >>>>>>>>>>>>>>> from the hypervisor side.
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> My feature is focused on controlling IOPS from the storage system side.
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> I hope that helps. :)
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> On Sun, Jun 2, 2013 at 6:35 PM, John Burwell <jburwell@basho.com> wrote:
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> Wei,
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> My opinion is that no features should be merged until all functional
> >>>>>>>>>>>>>>>> issues have been resolved and it is ready to turn over to test.  Until
> >>>>>>>>>>>>>>>> the total Ops vs discrete read/write ops issue is addressed and
> >>>>>>>>>>>>>>>> re-reviewed by Wido, I don't think this criterion has been satisfied.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> Also, how does this work intersect/complement the SolidFire patch
> >>>>>>>>>>>>>>>> (https://reviews.apache.org/r/11479/)?  As I understand it, that work
> >>>>>>>>>>>>>>>> also involves provisioned IOPS.  I would like to ensure we don't have a
> >>>>>>>>>>>>>>>> scenario where provisioned IOPS in KVM and SolidFire are unnecessarily
> >>>>>>>>>>>>>>>> incompatible.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> Thanks,
> >>>>>>>>>>>>>>>> -John
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> On Jun 1, 2013, at 6:47 AM, Wei ZHOU <ustcweizhou@gmail.com> wrote:
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> Wido,
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> Sure. I will change it next week.
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> -Wei
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> 2013/6/1 Wido den Hollander <wido@widodh.nl>
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>> Hi Wei,
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>> On 06/01/2013 08:24 AM, Wei ZHOU wrote:
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> Wido,
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> Exactly. I have pushed the features into master.
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> If anyone objects to them for a technical reason till Monday, I will
> >>>>>>>>>>>>>>>>>>> revert them.
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>> For the sake of clarity I just want to mention again that we should
> >>>>>>>>>>>>>>>>>> change the total IOps to R/W IOps asap so that we never release a
> >>>>>>>>>>>>>>>>>> version with only total IOps.
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>> You laid the groundwork for the I/O throttling and that's great! We
> >>>>>>>>>>>>>>>>>> should, however, prevent creating legacy from day #1.
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>> Wido
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> -Wei
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> 2013/5/31 Wido den Hollander <wido@widodh.nl>
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> On 05/31/2013 03:59 PM, John Burwell wrote:
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> Wido,
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> +1 -- this enhancement must discretely support read and write IOPS.
> >>>>>>>>>>>>>>>>>>>>> I don't see how it could be fixed later because I don't see how we
> >>>>>>>>>>>>>>>>>>>>> could correctly split total IOPS into read and write. Therefore, we
> >>>>>>>>>>>>>>>>>>>>> would be stuck with a total unless/until we decided to break
> >>>>>>>>>>>>>>>>>>>>> backwards compatibility.
> >>>>>>>>>>>>>>>>>>>>>
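A tiny sketch of the data-model point being made here; the class is purely illustrative (hypothetical names, not CloudStack's actual schema): discrete read/write limits can always be summed into a total, but a stored total cannot be decomposed back into read and write.

    // Illustrative only: hypothetical value object, not CloudStack's actual schema.
    public final class IopsLimit {
        private final Long readIopsPerSec;   // null = not set
        private final Long writeIopsPerSec;  // null = not set

        public IopsLimit(Long readIopsPerSec, Long writeIopsPerSec) {
            this.readIopsPerSec = readIopsPerSec;
            this.writeIopsPerSec = writeIopsPerSec;
        }

        // Deriving a combined figure from the discrete values is trivial...
        public long totalIopsPerSec() {
            return (readIopsPerSec == null ? 0 : readIopsPerSec)
                 + (writeIopsPerSec == null ? 0 : writeIopsPerSec);
        }
        // ...but the reverse mapping (total -> read/write) has no single correct
        // answer, which is the backwards-compatibility trap described above.
    }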
> >>>>>>>>>>>>>>>>>>>> What Wei meant was merging it into master now so that it will go in
> >>>>>>>>>>>>>>>>>>>> the 4.2 branch, and then adding Read / Write IOps before the 4.2
> >>>>>>>>>>>>>>>>>>>> release so that 4.2 will be released with Read and Write instead of
> >>>>>>>>>>>>>>>>>>>> Total IOps.
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> This is to make the May 31st feature freeze date. But if the window
> >>>>>>>>>>>>>>>>>>>> moves (see other threads) then it won't be necessary to do that.
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> Wido
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> I also completely agree that there is no association between network
> >>>>>>>>>>>>>>>>>>>>> and disk I/O.
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> Thanks,
> >>>>>>>>>>>>>>>>>>>>> -John
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> On May 31, 2013, at 9:51 AM, Wido den Hollander <wido@widodh.nl> wrote:
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>> Hi Wei,
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>> On 05/31/2013 03:13 PM, Wei ZHOU wrote:
> >>>>>>>>>>>>>>>>>>>>>>
> >>>> Hi
> >>>>>>> Wido,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>> Thanks.
> >>>>>>>>>>> Good
> >>>>>>>>>>>>> question.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I
> >>>>>>> thought
> >>>>>>>>>>>>> about at the
> >>>>>>>>>>>>>>>>>>> beginning.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> Finally I
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> decided to
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ignore
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>> difference
> >>>>>>>>>>> of
> >>>>>>>>>>>>> read and write
> >>>>>>>>>>>>>>>>>>>>> mainly
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> because
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> network
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> throttling
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> did
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>> not
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>> care
> >>>>>>> the
> >>>>>>>>>>>>> difference of sent
> >>>>>>>>>>>>>>>>> and
> >>>>>>>>>>>>>>>>>>>>>>>>>> received
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> bytes as
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> well.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>> That reasoning seems odd. Networking and disk I/O are completely
> >>>>>>>>>>>>>>>>>>>>>> different.
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>> Disk I/O is much more expensive in most situations than network
> >>>>>>>>>>>>>>>>>>>>>> bandwidth.
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>> Implementing
> >>>>>>>>>>>>> it will be some
> >>>>>>>>>>>>>>>>>>>>>>>>>> copy-paste
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> work.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> It
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> could be
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>> implemented
> >>>>>>>>>>>>> in
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>> few
> >>>>>>> days.
> >>>>>>>>>>> For
> >>>>>>>>>>>>> the deadline
> >>>>>>>>>>>>>> of
> >>>>>>>>>>>>>>>>>>>>>>>>>> feature
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> freeze,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> will
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> implement
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> it
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>> after
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>> that ,
> >>>>>>> if
> >>>>>>>>>>>>> needed.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> It
> >>>>>>> think it's
> >>>>>>>>>>> a
> >>>>>>>>>>>>> feature we
> >>>>>>>>>>>>>>>>> can't
> >>>>>>>>>>>>>>>>>>>>>>>>>> miss. But
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> if
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> it
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> goes
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> into
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 4.2
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>> window
> >>>>>>>>>>> we
> >>>>>>>>>>>>> have to make sure
> >>>>>>>>>>>>>> we
> >>>>>>>>>>>>>>>>>>>>> don't
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> release
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> with
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> only
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> total
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> IOps
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>> fix
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> it
> >>>> in
> >>>>>>> 4.3,
> >>>>>>>>>>> that
> >>>>>>>>>>>>> will confuse
> >>>>>>>>>>>>>>>>>>>>> users.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>> Wido
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -
> >>>> Wei
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>> 2013/5/31
> >>>>>>>>>>>>> Wido den
> >>>>>>>>>>>>>> Hollander <
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wido@widodh.nl>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>> Hi Wei,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>> On
> >>>>>>>>>>>>> 05/30/2013 06:03 PM, Wei
> >>>>>>>>>>>>>>>>> ZHOU
> >>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>> Hi,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I
> >>>>>>> would
> >>>>>>>>>>> like to
> >>>>>>>>>>>>> merge
> >>>>>>>>>>>>>>>>>>>>>>>>>> disk_io_throttling
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> branch
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> into
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> master.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If
> >>>>>>> nobody
> >>>>>>>>>>>>> object, I will
> >>>>>>>>>>>>>> merge
> >>>>>>>>>>>>>>>>>>>>> into
> >>>>>>>>>>>>>>>>>>>>>>>>>> master
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> in
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 48
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> hours.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>> The
> >>>>>>>>>>> purpose
> >>>>>>>>>>>>> is :
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>> Virtual
> >>>>>>>>>>>>> machines are running
> >>>>>>>>>>>>>>>>> on
> >>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>> same
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> storage
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> device
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> (local
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>> storage or
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>> share
> >>>>>>>>>>> strage).
> >>>>>>>>>>>>> Because of
> >>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>> rate
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> limitation
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> of
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> device
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> (such as
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>> iops),
> >>>>>>> if
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>> one
> >>>>>>> VM
> >>>>>>>>>>> has
> >>>>>>>>>>>>> large disk
> >>>>>>>>>>>>>>>>> operation,
> >>>>>>>>>>>>>>>>>>>>> it
> >>>>>>>>>>>>>>>>>>>>>>>>>> may
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> affect
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> disk
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>> performance
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>> of
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>> other
> >>>>>>> VMs
> >>>>>>>>>>>>> running on the
> >>>>>>>>>>>>>> same
> >>>>>>>>>>>>>>>>>>>>>>>>>> storage
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> device.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> It
> >>>> is
> >>>>>>>>>>> neccesary
> >>>>>>>>>>>>> to set the
> >>>>>>>>>>>>>>>>>>> maximum
> >>>>>>>>>>>>>>>>>>>>>>>>>> rate
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> and
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> limit
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> disk
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I/O
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> of
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>> VMs.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>> Looking at
> >>>>>>>>>>> the
> >>>>>>>>>>>>> code I see
> >>>>>>>>>>>>>> you
> >>>>>>>>>>>>>>>>>>> make
> >>>>>>>>>>>>>>>>>>>>>>>>>> no
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> difference
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> between
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Read
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>> Write
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>> IOps.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>> Qemu
> >>>>>>> and
> >>>>>>>>>>>>> libvirt support
> >>>>>>>>>>>>>>>>> setting
> >>>>>>>>>>>>>>>>>>>>>>>>>> both a
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> different
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> rate
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> for
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Read
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>> Write
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>> IOps
> >>>>>>>>>>> which
> >>>>>>>>>>>>> could benefit a
> >>>>>>>>>>>>>>>>> lot of
> >>>>>>>>>>>>>>>>>>>>>>>>>> users.
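For reference, the per-disk limits Wido mentions are exposed in libvirt's
domain XML under a disk's <iotune> element. A minimal sketch (the element
names are libvirt's; the file path and numbers are made-up placeholders):

    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/vm01-root.qcow2'/>
      <target dev='vda' bus='virtio'/>
      <iotune>
        <!-- separate limits for reads and writes -->
        <read_bytes_sec>20971520</read_bytes_sec>
        <write_bytes_sec>10485760</write_bytes_sec>
        <read_iops_sec>1000</read_iops_sec>
        <write_iops_sec>500</write_iops_sec>
      </iotune>
    </disk>

A single combined cap can be expressed with <total_bytes_sec> and
<total_iops_sec> instead; as far as I know libvirt rejects a total_* element
combined with the matching read_*/write_* pair, so one form per metric has to
be chosen.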
> >>>>>>
> >>>>>> It's also strange: on the polling side you collect both the Read and
> >>>>>> Write IOps, but on the throttling side you only go for a global
> >>>>>> value.
> >>>>>>
> >>>>>> Write IOps are usually much more expensive than Read IOps, so it
> >>>>>> seems like a valid use-case where an admin would set a lower value
> >>>>>> for Write IOps than for Read IOps.
> >>>>>>
> >>>>>> Since this only supports KVM at this point, I think it would be of
> >>>>>> great value to at least have the mechanism in place to support both;
> >>>>>> implementing this later would be a lot of work.
> >>>>>>
> >>>>>> If a hypervisor doesn't support setting different values for read
> >>>>>> and write, you can always sum both up and set that as the total
> >>>>>> limit.
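That summing fallback is easy to express. A minimal sketch in Java, with
hypothetical class and field names (this is not CloudStack code, just an
illustration of the rule Wido describes):

    /** Hypothetical holder for a volume's IOPS limits. */
    final class IopsLimit {
        final long readIopsPerSec;
        final long writeIopsPerSec;

        IopsLimit(long readIopsPerSec, long writeIopsPerSec) {
            this.readIopsPerSec = readIopsPerSec;
            this.writeIopsPerSec = writeIopsPerSec;
        }

        /**
         * Limit to hand to a hypervisor that only accepts one total IOPS
         * cap: summing read and write keeps the combined load within what
         * the admin allowed for the two classes together.
         */
        long asTotalLimit() {
            return readIopsPerSec + writeIopsPerSec;
        }
    }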
> >>>>>>
> >>>>>> Can you explain why you implemented it this way?
> >>>>>>
> >>>>>> Wido
> >>>>>>
> >>>>>>> The feature includes:
> >>>>>>>
> >>>>>>> (1) set the maximum rate of VMs (in disk_offering, and global
> >>>>>>>     configuration)
> >>>>>>> (2) change the maximum rate of VMs
> >>>>>>> (3) limit the disk rate (total bps and iops)
> >>>>>>>
> >>>>>>> JIRA ticket:
> >>>>>>> https://issues.apache.org/jira/browse/CLOUDSTACK-1192
> >>>>>>>
> >>>>>>> FS (I will update later):
> >>>>>>> https://cwiki.apache.org/confluence/display/CLOUDSTACK/VM+Disk+IO+Throttling
> >>>>>>>
> >>>>>>> Merge check list :-
> >>>>>>>
> >>>>>>> * Did you check the branch's RAT execution success?
> >>>>>>> Yes
> >>>>>>>
> >>>>>>> * Are there new dependencies introduced?
> >>>>>>> No
> >>>>>>>
> >>>>>>> * What automated testing (unit and integration) is included in the
> >>>>>>> new feature?
> >>>>>>> Unit tests are added.
> >>>>>>>
> >>>>>>> * What testing has been done to check for potential regressions?
> >>>>>>> (1) set the bytes rate and IOPS rate on the CloudStack UI.
> >>>>>>> (2) VM operations, including deploy, stop, start, reboot, destroy,
> >>>>>>>     expunge, migrate, restore
> >>>>>>> (3) Volume operations, including Attach, Detach
> >>>>>>>
> >>>>>>> To review the code, you can try
> >>>>>>> git diff c30057635d04a2396f84c588127d7ebe42e503a7 f2e5591b710d04cc86815044f5823e73a4a58944
> >>>>>>>
> >>>>>>> Best regards,
> >>>>>>> Wei
> >>>>>>>
> >>>>>>> [1]
> >>>>>>> https://cwiki.apache.org/confluence/display/CLOUDSTACK/VM+Disk+IO+Throttling
> >>>>>>> [2] refs/heads/disk_io_throttling
> >>>>>>> [3] https://issues.apache.org/jira/browse/CLOUDSTACK-1301
> >>>>>>>     https://issues.apache.org/jira/browse/CLOUDSTACK-2071
> >>>>>>>     (CLOUDSTACK-1301 - VM Disk I/O Throttling)
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the
> cloud<http://solidfire.com/solution/overview/?video=play>
> *(tm)*
> >>>>>>>>>>>>>>>>>>>>>> Advancing the way the world uses the cloud<
> >>>>>>>>>>>>>>>>>>>>> http://solidfire.com/solution/overview/?video=play>
> >>>>>>>>>>>>>>>>>>>>>> *(tm)*
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> --
> >>>>>>>>>>>>>>>>>>>>> *Mike Tutkowski*
> >>>>>>>>>>>>>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
> >>>>>>>>>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>>>>>>>> o: 303.746.7302
> >>>>>>>>>>>>>>>>>>>>> Advancing the way the world uses the
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>> cloud<http://solidfire.com/solution/overview/?video=play>
> >>>>>>>>>>>>>>>>>>>>> *(tm)*
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>> --
> >>>>>>>>>>>>>>>>>> *Mike Tutkowski*
> >>>>>>>>>>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
> >>>>>>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>>>>> o: 303.746.7302
> >>>>>>>>>>>>>>>>>> Advancing the way the world uses the
> >>>>>>>>>>>>>>>>>>
> >>>> cloud<http://solidfire.com/solution/overview/?video=play>
> >>>>>>>>>>>>>>>>>> *(tm)*
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> --
> >>>>>>>>>>>>>>>> *Mike Tutkowski*
> >>>>>>>>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
> >>>>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>>> o: 303.746.7302
> >>>>>>>>>>>>>>>> Advancing the way the world uses the cloud<
> >>>>>>>>>>>>>> http://solidfire.com/solution/overview/?video=play>
> >>>>>>>>>>>>>>>> *(tm)*
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> --
> >>>>>>>>>>>>>>> *Mike Tutkowski*
> >>>>>>>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
> >>>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>> o: 303.746.7302
> >>>>>>>>>>>>>>> Advancing the way the world uses the
> >>>>>>>>>>>>>>> cloud<http://solidfire.com/solution/overview/?video=play>
> >>>>>>>>>>>>>>> *(tm)*
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> --
> >>>>>>>>>>>>> *Mike Tutkowski*
> >>>>>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
> >>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
> >>>>>>>>>>>>> o: 303.746.7302
> >>>>>>>>>>>>> Advancing the way the world uses the
> >>>>>>>>>>>>> cloud<http://solidfire.com/solution/overview/?video=play>
> >>>>>>>>>>>>> *(tm)*
> >>>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>> --
> >>>>>>>>>>> *Mike Tutkowski*
> >>>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
> >>>>>>>>>>> e: mike.tutkowski@solidfire.com
> >>>>>>>>>>> o: 303.746.7302
> >>>>>>>>>>> Advancing the way the world uses the
> >>>>>>>>>>> cloud<http://solidfire.com/solution/overview/?video=play>
> >>>>>>>>>>> *(tm)*
> >>>>>>>>>>
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> --
> >>>>>>>>> *Mike Tutkowski*
> >>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
> >>>>>>>>> e: mike.tutkowski@solidfire.com
> >>>>>>>>> o: 303.746.7302
> >>>>>>>>> Advancing the way the world uses the
> >>>>>>>>> cloud<http://solidfire.com/solution/overview/?video=play>
> >>>>>>>>> *(tm)*
> >>>>>>>>
> >>>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>> --
> >>>>>>> *Mike Tutkowski*
> >>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
> >>>>>>> e: mike.tutkowski@solidfire.com
> >>>>>>> o: 303.746.7302
> >>>>>>> Advancing the way the world uses the
> >>>>>>> cloud<http://solidfire.com/solution/overview/?video=play>
> >>>>>>> *(tm)*****
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>> ****
> >>>>>>
> >>>>>> ** **
> >>>>>>
> >>>>>> --
> >>>>>> *Mike Tutkowski*****
> >>>>>>
> >>>>>> *Senior CloudStack Developer, SolidFire Inc.*****
> >>>>>>
> >>>>>> e: mike.tutkowski@solidfire.com****
> >>>>>>
> >>>>>> o: 303.746.7302****
> >>>>>>
> >>>>>> Advancing the way the world uses the
> >>>> cloud<http://solidfire.com/solution/overview/?video=play>
> >>>>>> *(tm)*****
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>> ****
> >>>>>>
> >>>>>> ** **
> >>>>>>
> >>>>>> --
> >>>>>> *Mike Tutkowski*****
> >>>>>>
> >>>>>> *Senior CloudStack Developer, SolidFire Inc.*****
> >>>>>>
> >>>>>> e: mike.tutkowski@solidfire.com****
> >>>>>>
> >>>>>> o: 303.746.7302****
> >>>>>>
> >>>>>> Advancing the way the world uses the
> >>>> cloud<http://solidfire.com/solution/overview/?video=play>
> >>>>>> *(tm)*****
> >>>>>>
> >>>>>
> >>>>>
> >>>>>
> >>>>> --
> >>>>> *Mike Tutkowski*
> >>>>> *Senior CloudStack Developer, SolidFire Inc.*
> >>>>> e: mike.tutkowski@solidfire.com
> >>>>> o: 303.746.7302
> >>>>> Advancing the way the world uses the
> >>>> cloud<http://solidfire.com/solution/overview/?video=play>
> >>>>> *(tm)*
> >>>>>
> >>>>>
> >>>>>
> >>>>
> >>>>
> >>>> --
> >>>> *Mike Tutkowski*
> >>>> *Senior CloudStack Developer, SolidFire Inc.*
> >>>> e: mike.tutkowski@solidfire.com
> >>>> o: 303.746.7302
> >>>> Advancing the way the world uses the
> >>>> cloud<http://solidfire.com/solution/overview/?video=play>
> >>>> *(tm)*
> >>
> >>
> >
> >
> > --
> > *Mike Tutkowski*
> > *Senior CloudStack Developer, SolidFire Inc.*
> > e: mike.tutkowski@solidfire.com
> > o: 303.746.7302
> > Advancing the way the world uses the
> > cloud<http://solidfire.com/solution/overview/?video=play>
> > *™*
>
>


-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*
