cloudstack-dev mailing list archives

From Todd Pigram <t...@toddpigram.com>
Subject Re: [DISCUSS] Increasing VM IOPS by separating golden image in high IOPS partition in Xen Server ?
Date Fri, 06 Jun 2014 12:14:29 GMT
Sorry, I thought you were, based off the link you provided in this reply.

"In our case, we are using CloudStack integrated in a VDI solution to provide
the pooled VM type [1]. So maybe my approach can bring a better UX for users
with lower boot time ...

The design changes, in short, are the following:
- A VM will be deployed with golden primary storage if the primary storage is
marked golden and the VM's template is also marked golden.
- Choose the best deploy destination for both the golden primary storage and
the normal root-volume primary storage. The chosen host must be able to
access both storage pools.
- A new Xen Server plug-in for modifying the VHD parent id.

Is there some place for me to submit my design and code? Can I write a new
proposal in the CS wiki?

[1]:
http://support.citrix.com/proddocs/topic/xendesktop-rho/cds-choose-scheme-type-rho.html
 "
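The third design bullet (a plug-in that rewrites a VHD's parent id) could be sketched roughly as below. This is a hypothetical illustration, not the actual patch under discussion: the function name and argument keys are made up, and only the `vhd-util modify` invocation reflects how VHD re-parenting is commonly done on XenServer.

```python
# Hypothetical sketch of the proposed XenServer plug-in: re-point a child
# VHD's parent locator at a golden-image copy living on a faster SR.
# Function/argument names are illustrative, not from the real proposal.
import subprocess

def build_reparent_cmd(child_vhd, golden_parent):
    # "vhd-util modify -n <child> -p <parent>" rewrites the parent
    # locator stored in the child VHD's header.
    return ["vhd-util", "modify", "-n", child_vhd, "-p", golden_parent]

def reparent_to_golden(session, args):
    # XenAPIPlugin-style entry point: args arrive as a dict of strings,
    # and the return value must be a string.
    cmd = build_reparent_cmd(args["childPath"], args["goldenPath"])
    return "true" if subprocess.call(cmd) == 0 else "false"
```

A real plug-in would also need to pause/refresh the VDI so XAPI picks up the new chain, which is omitted here.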


On Thu, Jun 5, 2014 at 11:55 PM, Hieu LE <hieulq19@gmail.com> wrote:

> Hi Todd,
>
>
> On Fri, Jun 6, 2014 at 9:17 AM, Todd Pigram <todd@toddpigram.com> wrote:
>
> > Hieu,
> >
> > I assume you are using MCS for your golden image? What version of XD? Given
> > you are using pooled desktops, have you thought about using a PVS BDM ISO
> > and mounting it within your 1000 VMs? This way you can stagger reboots via
> > the PVS console or Studio. This would require a change to your delivery
> > group.
> >
> >
> Sorry, but we do not use MCS or XenDesktop at my company :-)
>
>
> >
> > On Thu, Jun 5, 2014 at 9:28 PM, Mike Tutkowski <
> > mike.tutkowski@solidfire.com> wrote:
> >
> > > 6) The copy_vhd_from_secondarystorage XenServer plug-in is not used when
> > > you're using XenServer + XS62ESP1 + XS62ESP1004. In that case, please
> > > refer to the copyTemplateToPrimaryStorage(CopyCommand) method in the
> > > Xenserver625StorageProcessor class.
> > >
> >
>
> Thanks Mike, I will take note of that.
>
>
> >
> > > On Thu, Jun 5, 2014 at 1:56 PM, Mike Tutkowski <
> > > mike.tutkowski@solidfire.com> wrote:
> > >
> > > > Other than going through a "for" loop and deploying VM after VM, I
> > > > don't think CloudStack currently supports a bulk-VM-deploy operation.
> > > >
> > > > It would be nice if CS did so at some point in the future; however,
> > > > that is probably a separate proposal from Hieu's.
> > > >
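The "for" loop approach mentioned above amounts to issuing one deployVirtualMachine API call per VM. The sketch below only builds the parameter dicts; request signing and transport (e.g., via CloudMonkey or a signed HTTP GET) are left out, and the IDs and naming scheme are placeholders.

```python
# Sketch of bulk deployment as repeated deployVirtualMachine API calls.
# "deployVirtualMachine" and the parameter names are from the public
# CloudStack API; the IDs and the "vdi-NNNN" naming are illustrative.
def build_deploy_calls(template_id, offering_id, zone_id, count):
    return [
        {
            "command": "deployVirtualMachine",
            "templateid": template_id,
            "serviceofferingid": offering_id,
            "zoneid": zone_id,
            "name": "vdi-%04d" % i,  # illustrative naming scheme
        }
        for i in range(count)
    ]

calls = build_deploy_calls("tmpl-win7", "gold-offering", "zone-1", 1000)
```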
> > > >
> > > > On Thu, Jun 5, 2014 at 12:13 AM, Amit Das <amit.das@cloudbyte.com>
> > > > wrote:
> > > >
> > > >> Hi Hieu,
> > > >>
> > > >> Will it be good to include a bulk operation for this feature? In
> > > >> addition, does Xen support parallel execution of these operations?
> > > >>
> > > >> Regards,
> > > >> Amit
> > > >> *CloudByte Inc.* <http://www.cloudbyte.com/>
> > > >>
> > > >>
> > > >> On Thu, Jun 5, 2014 at 8:59 AM, Hieu LE <hieulq19@gmail.com> wrote:
> > > >>
> > > >> > Mike, Punith,
> > > >> >
> > > >> > Please review "Golden Primary Storage" proposal. [1]
> > > >> >
> > > >> > Thank you.
> > > >> >
> > > >> > [1]:
> > > >> > https://cwiki.apache.org/confluence/display/CLOUDSTACK/Golden+Primary+Storage
> > > >> >
> > > >> >
> > > >> > On Wed, Jun 4, 2014 at 10:32 PM, Mike Tutkowski <
> > > >> > mike.tutkowski@solidfire.com> wrote:
> > > >> >
> > > >> >> Daan helped out with this. You should be good to go now.
> > > >> >>
> > > >> >>
> > > >> >> On Tue, Jun 3, 2014 at 8:50 PM, Hieu LE <hieulq19@gmail.com> wrote:
> > > >> >>
> > > >> >> > Hi Mike,
> > > >> >> >
> > > >> >> > Could you please give edit/create permission on ASF Jira/Wiki
> > > >> >> > Confluence? I can not add a new Wiki page.
> > > >> >> >
> > > >> >> > My Jira ID: hieulq
> > > >> >> > Wiki: hieulq89
> > > >> >> > Review Board: hieulq
> > > >> >> >
> > > >> >> > Thanks !
> > > >> >> >
> > > >> >> >
> > > >> >> > On Wed, Jun 4, 2014 at 9:17 AM, Mike Tutkowski <
> > > >> >> > mike.tutkowski@solidfire.com> wrote:
> > > >> >> >
> > > >> >> > > Hi,
> > > >> >> > >
> > > >> >> > > Yes, please feel free to add a new Wiki page for your design.
> > > >> >> > >
> > > >> >> > > Here is a link to applicable design info:
> > > >> >> > >
> > > >> >> > > https://cwiki.apache.org/confluence/display/CLOUDSTACK/Design
> > > >> >> > >
> > > >> >> > > Also, feel free to ask more questions and have me review your
> > > >> >> > > design.
> > > >> >> > >
> > > >> >> > > Thanks!
> > > >> >> > > Mike
> > > >> >> > >
> > > >> >> > >
> > > >> >> > > On Tue, Jun 3, 2014 at 7:29 PM, Hieu LE <hieulq19@gmail.com> wrote:
> > > >> >> > >
> > > >> >> > > > Hi Mike,
> > > >> >> > > >
> > > >> >> > > > You are right, performance will decrease over time because
> > > >> >> > > > write IOPS will always end up on the slower storage pool.
> > > >> >> > > >
> > > >> >> > > > In our case, we are using CloudStack integrated in a VDI
> > > >> >> > > > solution to provide the pooled VM type [1]. So maybe my
> > > >> >> > > > approach can bring a better UX for users with lower boot
> > > >> >> > > > time ...
> > > >> >> > > >
> > > >> >> > > > The design changes, in short, are the following:
> > > >> >> > > > - A VM will be deployed with golden primary storage if the
> > > >> >> > > > primary storage is marked golden and the VM's template is
> > > >> >> > > > also marked golden.
> > > >> >> > > > - Choose the best deploy destination for both the golden
> > > >> >> > > > primary storage and the normal root-volume primary storage.
> > > >> >> > > > The chosen host must be able to access both storage pools.
> > > >> >> > > > - A new Xen Server plug-in for modifying the VHD parent id.
> > > >> >> > > >
> > > >> >> > > > Is there some place for me to submit my design and code? Can
> > > >> >> > > > I write a new proposal in the CS wiki?
> > > >> >> > > >
> > > >> >> > > > [1]:
> > > >> >> > > > http://support.citrix.com/proddocs/topic/xendesktop-rho/cds-choose-scheme-type-rho.html
> > > >> >> > > >
> > > >> >> > > >
> > > >> >> > > > On Mon, Jun 2, 2014 at 9:04 PM, Mike Tutkowski <
> > > >> >> > > > mike.tutkowski@solidfire.com> wrote:
> > > >> >> > > >
> > > >> >> > > > > It is an interesting idea. If the constraints you face at
> > > >> >> > > > > your company can be corrected somewhat by implementing
> > > >> >> > > > > this, then you should go for it.
> > > >> >> > > > >
> > > >> >> > > > > It sounds like writes will be placed on the slower storage
> > > >> >> > > > > pool. This means as you update OS components, those updates
> > > >> >> > > > > will be placed on the slower storage pool. As such, your
> > > >> >> > > > > performance is likely to somewhat decrease over time (as
> > > >> >> > > > > more and more writes end up on the slower storage pool).
> > > >> >> > > > >
> > > >> >> > > > > That may be OK for your use case(s), though.
> > > >> >> > > > >
> > > >> >> > > > > You'll have to update the storage-pool orchestration logic
> > > >> >> > > > > to take this new scheme into account.
> > > >> >> > > > >
> > > >> >> > > > > Also, we'll have to figure out how this ties into storage
> > > >> >> > > > > tagging (if at all).
> > > >> >> > > > >
> > > >> >> > > > > I'd be happy to review your design and code.
> > > >> >> > > > >
> > > >> >> > > > >
> > > >> >> > > > > On Mon, Jun 2, 2014 at 1:54 AM, Hieu LE <hieulq19@gmail.com> wrote:
> > > >> >> > > > >
> > > >> >> > > > > > Thanks Mike and Punith for the quick reply.
> > > >> >> > > > > >
> > > >> >> > > > > > Both solutions you gave here are absolutely correct. But
> > > >> >> > > > > > as I mentioned in the first email, I want another, better
> > > >> >> > > > > > solution for the current infrastructure at my company.
> > > >> >> > > > > >
> > > >> >> > > > > > Creating a high-IOPS primary storage using storage tags
> > > >> >> > > > > > is good, but it will be very wasteful of disk capacity.
> > > >> >> > > > > > For example, if I only have 1TB of SSD and deploy 100 VMs
> > > >> >> > > > > > from a 100GB template.
> > > >> >> > > > > >
> > > >> >> > > > > > So I am thinking about a solution where a high-IOPS
> > > >> >> > > > > > primary storage stores only the golden image (master
> > > >> >> > > > > > image), and the child image of each VM is stored in
> > > >> >> > > > > > another normal (NFS, iSCSI...) storage. In this case,
> > > >> >> > > > > > with a 1TB SSD primary storage I can store as many golden
> > > >> >> > > > > > images as I need.
> > > >> >> > > > > >
> > > >> >> > > > > > I have also tested this with a 256 GB SSD mounted on Xen
> > > >> >> > > > > > Server 6.2.0, with 2TB of 10000RPM local storage and 6TB
> > > >> >> > > > > > of NFS shared storage on a 1Gb network. The IOPS of VMs
> > > >> >> > > > > > which have the golden image (master image) on SSD and the
> > > >> >> > > > > > child image on NFS increased more than 30-40% compared
> > > >> >> > > > > > with VMs which have both the golden image and the child
> > > >> >> > > > > > image on NFS. The boot time of each VM also decreased
> > > >> >> > > > > > ('cause the golden image on SSD only reduces READ IOPS).
> > > >> >> > > > > >
> > > >> >> > > > > > Do you think this approach is OK?
> > > >> >> > > > > >
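The capacity argument above (1TB of SSD vs. 100 VMs from a 100GB template) can be checked with quick arithmetic:

```python
# Back-of-envelope check of the capacity argument from the thread: full
# clones on the SSD tier vs. one shared golden copy, with the per-VM
# child images living on normal (NFS/iSCSI) storage instead.
template_gb = 100
vm_count = 100
ssd_capacity_gb = 1024  # ~1TB SSD tier

full_clones_gb = template_gb * vm_count  # every VM carries its own copy
golden_only_gb = template_gb             # one shared parent on the SSD

assert full_clones_gb > ssd_capacity_gb  # 10000 GB: does not fit
assert golden_only_gb < ssd_capacity_gb  # 100 GB: plenty of room left
```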
> > > >> >> > > > > >
> > > >> >> > > > > > On Mon, Jun 2, 2014 at 12:50 PM, Mike Tutkowski <
> > > >> >> > > > > > mike.tutkowski@solidfire.com> wrote:
> > > >> >> > > > > >
> > > >> >> > > > > > > Thanks, Punith - this is similar to what I was going
> > > >> >> > > > > > > to say.
> > > >> >> > > > > > >
> > > >> >> > > > > > > Any time a set of CloudStack volumes share IOPS from a
> > > >> >> > > > > > > common pool, you cannot guarantee IOPS to a given
> > > >> >> > > > > > > CloudStack volume at a given time.
> > > >> >> > > > > > >
> > > >> >> > > > > > > Your choices at present are:
> > > >> >> > > > > > >
> > > >> >> > > > > > > 1) Use managed storage (where you can create a 1:1
> > > >> >> > > > > > > mapping between a CloudStack volume and a volume on a
> > > >> >> > > > > > > storage system that has QoS). As Punith mentioned, this
> > > >> >> > > > > > > requires that you purchase storage from a vendor who
> > > >> >> > > > > > > provides guaranteed QoS on a volume-by-volume basis AND
> > > >> >> > > > > > > has this integrated into CloudStack.
> > > >> >> > > > > > >
> > > >> >> > > > > > > 2) Create primary storage in CloudStack that is not
> > > >> >> > > > > > > managed, but has a high number of IOPS (e.g., using
> > > >> >> > > > > > > SSDs). You can then storage-tag this primary storage
> > > >> >> > > > > > > and create Compute and Disk Offerings that use this
> > > >> >> > > > > > > storage tag to make sure their volumes end up on this
> > > >> >> > > > > > > storage pool (primary storage). This will still not
> > > >> >> > > > > > > guarantee IOPS on a CloudStack volume-by-volume basis,
> > > >> >> > > > > > > but it will at least place the CloudStack volumes that
> > > >> >> > > > > > > need a better chance of getting higher IOPS on a
> > > >> >> > > > > > > storage pool that could provide the necessary IOPS. A
> > > >> >> > > > > > > big downside here is that you want to watch how many
> > > >> >> > > > > > > CloudStack volumes get deployed on this primary storage
> > > >> >> > > > > > > because you'll need to essentially over-provision IOPS
> > > >> >> > > > > > > in this primary storage to increase the probability
> > > >> >> > > > > > > that each and every CloudStack volume that uses this
> > > >> >> > > > > > > primary storage gets the necessary IOPS (and isn't as
> > > >> >> > > > > > > likely to suffer from the Noisy Neighbor Effect). You
> > > >> >> > > > > > > should be able to tell CloudStack to only use, say, 80%
> > > >> >> > > > > > > (or whatever) of the storage you're providing to it (so
> > > >> >> > > > > > > as to increase your effective IOPS-per-GB ratio). This
> > > >> >> > > > > > > over-provisioning of IOPS to control Noisy Neighbors is
> > > >> >> > > > > > > avoided in option 1. In that situation, you only
> > > >> >> > > > > > > provision the IOPS and capacity you actually need. It
> > > >> >> > > > > > > is a much more sophisticated approach.
> > > >> >> > > > > > >
> > > >> >> > > > > > > Thanks,
> > > >> >> > > > > > > Mike
> > > >> >> > > > > > >
> > > >> >> > > > > > >
> > > >> >> > > > > > > On Sun, Jun 1, 2014 at 11:36 PM, Punith S <punith.s@cloudbyte.com> wrote:
> > > >> >> > > > > > >
> > > >> >> > > > > > > > hi hieu,
> > > >> >> > > > > > > >
> > > >> >> > > > > > > > your problem is the bottleneck we see as storage
> > > >> >> > > > > > > > vendors in the cloud: the vms in the cloud have no
> > > >> >> > > > > > > > guaranteed iops from the primary storage. in your
> > > >> >> > > > > > > > case, i'm assuming you are running 1000 vms on a xen
> > > >> >> > > > > > > > cluster whose vm disks all lie on the same primary
> > > >> >> > > > > > > > nfs storage mounted to the cluster, hence you won't
> > > >> >> > > > > > > > get dedicated iops for each vm since every vm is
> > > >> >> > > > > > > > sharing the same storage. to solve this issue in
> > > >> >> > > > > > > > cloudstack, we the third-party vendors have
> > > >> >> > > > > > > > implemented plugins (namely cloudbyte, solidfire etc.)
> > > >> >> > > > > > > > to support managed storage (dedicated volumes with
> > > >> >> > > > > > > > guaranteed qos for each vm), where we map each root
> > > >> >> > > > > > > > disk (vdi) or data disk of a vm to one nfs or iscsi
> > > >> >> > > > > > > > share coming out of a pool. we are also proposing a
> > > >> >> > > > > > > > new feature to change volume iops on the fly in 4.5,
> > > >> >> > > > > > > > where you can increase or decrease your root disk
> > > >> >> > > > > > > > iops while booting or at peak times. but to use this
> > > >> >> > > > > > > > plugin you have to buy our storage solution.
> > > >> >> > > > > > > >
> > > >> >> > > > > > > > if not, you can try creating an nfs share out of an
> > > >> >> > > > > > > > ssd pool and create a primary storage in cloudstack
> > > >> >> > > > > > > > out of it, named as golden primary storage, with a
> > > >> >> > > > > > > > specific tag like gold, and create a compute offering
> > > >> >> > > > > > > > for your template with the storage tag gold. hence
> > > >> >> > > > > > > > all the vms you create will sit on this gold primary
> > > >> >> > > > > > > > storage with high iops, and other data disks on other
> > > >> >> > > > > > > > primary storage. but still, here you cannot guarantee
> > > >> >> > > > > > > > qos at the vm level.
> > > >> >> > > > > > > >
> > > >> >> > > > > > > > thanks
> > > >> >> > > > > > > >
> > > >> >> > > > > > > >
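The tag-based pairing Punith describes comes down to two CloudStack API calls whose `tags` values must match. The sketch below shows only the parameter dicts; the names, URL, and sizes are placeholders, not a tested configuration.

```python
# The two API requests behind the storage-tag approach: a tagged primary
# storage pool and a compute offering carrying the same storage tag.
# "createStoragePool"/"createServiceOffering" and the parameter names
# are from the public CloudStack API; all values here are placeholders.
create_pool = {
    "command": "createStoragePool",
    "name": "golden-ssd",
    "url": "nfs://ssd-filer/export/golden",  # NFS share on the SSD pool
    "tags": "gold",
}
create_offering = {
    "command": "createServiceOffering",
    "name": "gold-desktop",
    "displaytext": "Desktop offering pinned to the SSD pool",
    "cpunumber": 2,
    "memory": 2048,
    "tags": "gold",  # storage tag; must match the pool's tag
}
assert create_pool["tags"] == create_offering["tags"]
```

With matching tags, the allocator only places root volumes from this offering on pools carrying the "gold" tag, which is exactly why capacity on that pool becomes the limiting factor.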
> > > >> >> > > > > > > > On Mon, Jun 2, 2014 at 10:12 AM, Hieu LE <hieulq19@gmail.com> wrote:
> > > >> >> > > > > > > >
> > > >> >> > > > > > > >> Hi all,
> > > >> >> > > > > > > >>
> > > >> >> > > > > > > >> There are some problems while deploying a large
> > > >> >> > > > > > > >> amount of VMs at my company with CloudStack. All VMs
> > > >> >> > > > > > > >> are deployed from the same template (e.g., Windows
> > > >> >> > > > > > > >> 7) and the quantity is approximately ~1000 VMs. The
> > > >> >> > > > > > > >> problems here are low IOPS and low VM performance
> > > >> >> > > > > > > >> (about ~10-11 IOPS; boot time is very high). My
> > > >> >> > > > > > > >> company's storage is SAN/NAS with NFS and Xen Server
> > > >> >> > > > > > > >> 6.2.0. All Xen Server nodes have a standard server
> > > >> >> > > > > > > >> HDD disk raid.
> > > >> >> > > > > > > >>
> > > >> >> > > > > > > >> I have found some solutions for this, such as:
> > > >> >> > > > > > > >>
> > > >> >> > > > > > > >>    - Enable Xen Server IntelliCache plus some tweaks
> > > >> >> > > > > > > >>    in the CloudStack code to deploy and start VMs in
> > > >> >> > > > > > > >>    IntelliCache mode. But this solution will
> > > >> >> > > > > > > >>    transfer all IOPS from shared storage to local
> > > >> >> > > > > > > >>    storage, hence affecting and limiting some
> > > >> >> > > > > > > >>    CloudStack features.
> > > >> >> > > > > > > >>    - Buying some expensive storage solutions and
> > > >> >> > > > > > > >>    network to increase IOPS. Nah...
> > > >> >> > > > > > > >>
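For reference, the IntelliCache option in the first bullet maps to a handful of xe CLI steps on the XenServer host. The sketch below is an outline under stated assumptions, not a tested procedure: the UUIDs are placeholders, and the local SR must be an EXT SR for caching to work.

```python
# Sketch of the IntelliCache tweak from the first bullet, expressed as
# the xe CLI steps it maps to on a XenServer host. UUIDs are
# placeholders; treat this as an outline, not a tested procedure.
import subprocess

def intellicache_cmds(host_uuid, local_ext_sr_uuid, vdi_uuid):
    return [
        # The host must be disabled while local-storage caching is enabled.
        ["xe", "host-disable", "uuid=%s" % host_uuid],
        ["xe", "host-enable-local-storage-caching",
         "sr-uuid=%s" % local_ext_sr_uuid],
        ["xe", "host-enable", "uuid=%s" % host_uuid],
        # Then opt the VM's disk into caching.
        ["xe", "vdi-param-set", "uuid=%s" % vdi_uuid, "allow-caching=true"],
    ]

def run_all(cmds):
    for cmd in cmds:
        subprocess.check_call(cmd)
```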
> > > >> >> > > > > > > >> So, I am thinking about a new feature that (maybe)
> > > >> >> > > > > > > >> increases the IOPS and performance of VMs:
> > > >> >> > > > > > > >>
> > > >> >> > > > > > > >>    1. Separate the golden image onto a high-IOPS
> > > >> >> > > > > > > >>    partition: buy a new SSD, plug it into Xen
> > > >> >> > > > > > > >>    Server, and deploy a new VM in NFS storage WITH
> > > >> >> > > > > > > >>    the golden image in this new SSD partition. This
> > > >> >> > > > > > > >>    can reduce READ IOPS on the shared storage and
> > > >> >> > > > > > > >>    decrease the boot time of the VM. (Currently, a
> > > >> >> > > > > > > >>    VM deployed in Xen Server always has its master
> > > >> >> > > > > > > >>    image (golden image, in VMware terms) in the same
> > > >> >> > > > > > > >>    storage repository as its differencing image
> > > >> >> > > > > > > >>    (child image).) We can do this trick by tweaking
> > > >> >> > > > > > > >>    the VHD header file with a new Xen Server
> > > >> >> > > > > > > >>    plug-in.
> > > >> >> > > > > > > >>    2. Create a golden primary storage and a VM
> > > >> >> > > > > > > >>    template that enable this feature.
> > > >> >> > > > > > > >>    3. So, all VMs deployed from a template that has
> > > >> >> > > > > > > >>    this feature enabled will have a golden image
> > > >> >> > > > > > > >>    stored in the golden primary storage (SSD or some
> > > >> >> > > > > > > >>    high-IOPS partition), and a differencing image
> > > >> >> > > > > > > >>    (child image) stored in other normal primary
> > > >> >> > > > > > > >>    storage.
> > > >> >> > > > > > > >>
> > > >> >> > > > > > > >> This new feature will not transfer all IOPS from
> > > >> >> > > > > > > >> shared storage to local storage (because the
> > > >> >> > > > > > > >> high-IOPS partition can be another high-IOPS shared
> > > >> >> > > > > > > >> storage) and requires less money than buying a new
> > > >> >> > > > > > > >> storage solution.
> > > >> >> > > > > > > >>
> > > >> >> > > > > > > >> What do you think? If possible, may I write a
> > > >> >> > > > > > > >> proposal in the CloudStack wiki?
> > > >> >> > > > > > > >>
> > > >> >> > > > > > > >> BRs.
> > > >> >> > > > > > > >>
> > > >> >> > > > > > > >> Hieu Lee
> > > >> >> > > > > > > >>
> > > >> >> > > > > > > >> --
> > > >> >> > > > > > > >> -----BEGIN GEEK CODE BLOCK-----
> > > >> >> > > > > > > >> Version: 3.1
> > > >> >> > > > > > > >> GCS/CM/IT/M/MU d-@? s+(++):+(++) !a C++++(++++)$ ULC++++(++)$ P L++(+++)$ E
> > > >> >> > > > > > > >> !W N* o+ K w O- M V- PS+ PE++ Y+ PGP+ t 5 X R tv+ b+(++)>+++ DI- D+ G
> > > >> >> > > > > > > >> e++(+++) h-- r(++)>+++ y-
> > > >> >> > > > > > > >> ------END GEEK CODE BLOCK------
> > > >> >> > > > > > > >>
> > > >> >> > > > > > > >
> > > >> >> > > > > > > >
> > > >> >> > > > > > > >
> > > >> >> > > > > > > > --
> > > >> >> > > > > > > > regards,
> > > >> >> > > > > > > >
> > > >> >> > > > > > > > punith s
> > > >> >> > > > > > > > cloudbyte.com
> > > >> >> > > > > > > >
> > > >> >> > > > > > >
> > > >> >> > > > > > >
> > > >> >> > > > > > >
> > > >> >> > > > > > > --
> > > >> >> > > > > > > *Mike Tutkowski*
> > > >> >> > > > > > > *Senior CloudStack Developer, SolidFire Inc.*
> > > >> >> > > > > > > e: mike.tutkowski@solidfire.com
> > > >> >> > > > > > > o: 303.746.7302
> > > >> >> > > > > > > Advancing the way the world uses the cloud
> > > >> >> > > > > > > <http://solidfire.com/solution/overview/?video=play>*™*
> > > >> >> > > > > > >
> > > >> >> > > > > >
> > > >> >> > > > > >
> > > >> >> > > > > >
> > > >> >> > > > > >
> > > >> >> > > > >
> > > >> >> > > > >
> > > >> >> > > > >
> > > >> >> > > > >
> > > >> >> > > >
> > > >> >> > > >
> > > >> >> > > >
> > > >> >> > > >
> > > >> >> > >
> > > >> >> > >
> > > >> >> > >
> > > >> >> > >
> > > >> >> >
> > > >> >> >
> > > >> >> >
> > > >> >> >
> > > >> >>
> > > >> >>
> > > >> >>
> > > >> >>
> > > >> >
> > > >> >
> > > >> >
> > > >> >
> > > >>
> > > >
> > > >
> > > >
> > > >
> > >
> > >
> > >
> > >
> >
> >
> >
> >
>
>
>
> --
> -----BEGIN GEEK CODE BLOCK-----
> Version: 3.1
> GCS/CM/IT/M/MU d-@? s+(++):+(++) !a C++++(++++)$ ULC++++(++)$ P L++(+++)$
> E
> !W N* o+ K w O- M V- PS+ PE++ Y+ PGP+ t 5 X R tv+ b+(++)>+++ DI- D+ G
> e++(+++) h-- r(++)>+++ y-
> ------END GEEK CODE BLOCK------
>



-- 


Todd Pigram
http://about.me/ToddPigram
www.linkedin.com/in/toddpigram/
@pigram86 on twitter
https://plus.google.com/+ToddPigram86
Mobile - 216-224-5769
