cloudstack-dev mailing list archives

From Punith S <>
Subject Re: [DISCUSS] Increasing VM IOPS by separating golden image in high IOPS partition in Xen Server ?
Date Mon, 02 Jun 2014 05:36:11 GMT
Hi Hieu,

Your problem is the bottleneck we as storage vendors see in the cloud:
VMs are not guaranteed IOPS from primary storage. In your case I'm
assuming you are running 1000 VMs on a Xen cluster whose VM disks all
live on the same primary NFS storage mounted to the cluster, so no VM
gets dedicated IOPS, since every VM shares the same storage. To solve
this in CloudStack, we third-party vendors (namely CloudByte, SolidFire,
etc.) have implemented plugins that support managed storage (dedicated
volumes with guaranteed QoS for each VM), where each root disk (VDI) or
data disk of a VM is mapped to one NFS or iSCSI share carved out of a
pool. We are also proposing a new feature in 4.5 to change volume IOPS
on the fly, so you can increase or decrease your root disk IOPS while
booting or at peak times. To use these plugins, though, you have to buy
our storage solutions.
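
To make the disk-offering side of this concrete, here is a minimal sketch of
building a signed CloudStack API call to createDiskOffering with guaranteed
IOPS (its miniops/maxiops parameters). The signing follows the documented
CloudStack scheme (sorted query string, lowercased for signing, HMAC-SHA1,
base64); the API/secret keys and offering values below are placeholders:

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign_request(params, api_key, secret_key):
    """Build a signed CloudStack API query string (HMAC-SHA1 over the
    sorted, lowercased query, per the CloudStack API docs)."""
    params = dict(params, apiKey=api_key, response="json")
    # Sort parameters case-insensitively and URL-encode the values.
    query = "&".join(
        f"{k}={urllib.parse.quote(str(v), safe='*')}"
        for k, v in sorted(params.items(), key=lambda kv: kv[0].lower())
    )
    # Sign the lowercased query string, then append the signature.
    digest = hmac.new(
        secret_key.encode(), query.lower().encode(), hashlib.sha1
    ).digest()
    signature = urllib.parse.quote(base64.b64encode(digest).decode())
    return f"{query}&signature={signature}"

# Placeholder keys; miniops/maxiops set the guaranteed IOPS band.
qs = sign_request(
    {
        "command": "createDiskOffering",
        "name": "guaranteed-500-iops",
        "displaytext": "500-1000 IOPS volume",
        "customizediops": "false",
        "miniops": 500,
        "maxiops": 1000,
    },
    api_key="APIKEY",
    secret_key="SECRET",
)
```

The resulting query string can be appended to the management server's
`/client/api` endpoint; whether min/max IOPS are enforced depends on the
storage plugin backing the pool.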

If not, you can try creating an NFS share out of an SSD storage pool,
add it to CloudStack as a primary storage (say, "golden primary
storage") with a specific storage tag such as "gold", and create a
compute offering for your template with the storage tag "gold". All the
VMs you create will then sit on this gold primary storage with high
IOPS, with their data disks on other primary storage. But even here you
cannot guarantee QoS at the VM level.
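
To illustrate why the tag routing above works, here is a toy model (pool
names and tags invented) of CloudStack's storage-tag matching: a pool is
eligible only when it carries every tag the offering requests, so volumes
from a "gold" offering can only land on the gold pool:

```python
def pool_matches(offering_tags, pool_tags):
    """Toy model of CloudStack storage-tag matching: a pool qualifies only
    if it carries every tag the disk/compute offering asks for."""
    return set(offering_tags) <= set(pool_tags)

# Hypothetical pools: an SSD-backed NFS share tagged "gold", and an
# untagged HDD-backed share.
pools = {
    "golden-nfs-ssd": {"gold"},
    "bulk-nfs-hdd": set(),
}

gold_offering_tags = {"gold"}
eligible = [name for name, tags in pools.items()
            if pool_matches(gold_offering_tags, tags)]
# eligible == ["golden-nfs-ssd"]: root disks from the gold offering can
# only be placed on the SSD-backed pool.
```

Note the asymmetry: an untagged offering matches every pool, which is why
untagged data disks can still land on ordinary primary storage.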


On Mon, Jun 2, 2014 at 10:12 AM, Hieu LE <> wrote:

> Hi all,
> There are some problems when deploying a large number of VMs in my company
> with CloudStack. All VMs are deployed from the same template (e.g. Windows
> 7), and the quantity is approximately ~1000 VMs. The problem is low IOPS
> and low VM performance (about ~10-11 IOPS; boot time is very high). My
> company's storage is SAN/NAS with NFS, and the hypervisor is XenServer
> 6.2.0. All XenServer nodes have a standard server HDD RAID.
> I have found some solutions for this, such as:
>    - Enable XenServer IntelliCache, with some tweaks to the CloudStack
>      code to deploy and start VMs in IntelliCache mode. But this solution
>      transfers all IOPS from shared storage to local storage, and hence
>      affects and limits some CloudStack features.
>    - Buy some expensive storage solution and network to increase IOPS.
>      Nah...
> So, I am thinking about a new feature that may increase the IOPS and
> performance of VMs:
>    1. Separate the golden image onto a high-IOPS partition: buy a new SSD,
>       plug it into XenServer, and deploy new VMs in NFS storage WITH the
>       golden image on this new SSD partition. This can reduce READ IOPS on
>       shared storage and decrease VM boot time. (Currently, a VM deployed
>       on XenServer always has its master image (the "golden image", in
>       VMware terms) in the same storage repository as its differencing
>       image (child image).) We can do this trick by tweaking the VHD
>       header with a new XenServer plug-in.
>    2. Create a golden primary storage and a VM template that enables this
>       feature.
>    3. All VMs deployed from a template with this feature enabled will then
>       have their golden image stored on the golden primary storage (SSD or
>       some other high-IOPS partition), and their differencing image (child
>       image) stored on other, normal primary storage.
> This new feature does not transfer all IOPS from shared storage to local
> storage (the high-IOPS partition can be another high-IOPS shared storage)
> and requires less money than buying a new storage solution.
> What do you think? If possible, may I write a proposal on the CloudStack
> wiki?
> BRs.
> Hieu Lee
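
The VHD-header trick in point 1 of the quoted proposal could be scripted
roughly as follows. This is only a sketch: it assumes XenServer's vhd-util
`modify` subcommand can rewrite a child VHD's parent locator via `-p`, and
all paths are hypothetical:

```python
import subprocess

def repoint_vhd_parent(child_vhd, golden_vhd, dry_run=True):
    """Sketch of the proposed golden-image trick: make a child
    (differencing) VHD on the normal NFS SR point at a golden image on the
    SSD SR. Assumes vhd-util's 'modify' subcommand accepts -p to rewrite
    the parent locator; paths are hypothetical."""
    cmd = ["vhd-util", "modify", "-n", child_vhd, "-p", golden_vhd]
    if not dry_run:  # only shell out on a real XenServer host
        subprocess.check_call(cmd)
    return cmd

# Dry run: just show the command a XenServer plug-in would execute.
cmd = repoint_vhd_parent(
    "/var/run/sr-mount/nfs-sr/child.vhd",
    "/var/run/sr-mount/ssd-sr/golden.vhd",
)
print(" ".join(cmd))
```

Since XenServer's SR machinery does not normally expect a parent VHD on a
different SR, this would have to live inside a XenServer plug-in, as the
proposal suggests, rather than be run as a one-off script.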


punith s
