Subject: Re: [DISCUSS] Increasing VM IOPS by separating golden image in high IOPS partition in Xen Server?
From: Tim Mackey <tmackey@gmail.com>
To: dev@cloudstack.apache.org
Date: Mon, 9 Jun 2014 16:53:37 -0400

Hieu,

I made a couple of minor edits to your design to ensure everything is
"XenServer" based. If you haven't done so already, please also fetch the
most recent master and base your work off of that. I refactored the old Xen
plugin into a XenServer-specific one, since Xen Project isn't currently
supported, and files have moved. Please also avoid using the term "Xen" in
your code and docs, to prevent confusion when the Xen Project work starts to
materialize.

Looking forward to seeing your work!

-tim

On Mon, Jun 9, 2014 at 4:31 PM, Mike Tutkowski wrote:

Thanks, Hieu!

I have reviewed your design (making only minor changes to your wiki).

Please feel free to have me review your code when you are ready.

Also, do you have a plan for integration testing?
It would be great if you could update your wiki page to include what your
plans are in this regard.

Thanks!
Mike

On Mon, Jun 9, 2014 at 4:24 AM, Hieu LE wrote:

Hi guys,

I have updated this proposal wiki [1], including diagrams for VM migration,
volume migration and snapshots.

Please review and give feedback.

[1]:
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Golden+Primary+Storage

On Fri, Jun 6, 2014 at 7:14 PM, Todd Pigram wrote:

Sorry, I thought you were, based on the link you provided in this reply:

"In our case, we are using CloudStack integrated in a VDI solution to
provide the pooled VM type [1]. So maybe my approach can bring a better UX
for users, with lower boot time.

A short summary of the design changes:
- A VM will be deployed with golden primary storage if the primary storage
  is marked golden and the VM's template is also marked golden.
- Choose the best deploy destination for both the golden primary storage
  and the normal root-volume primary storage; the chosen host must be able
  to access both storage pools.
- A new XenServer plug-in for modifying the VHD parent ID.

Is there some place for me to submit my design and code? Can I write a new
proposal in the CloudStack wiki?

[1]:
http://support.citrix.com/proddocs/topic/xendesktop-rho/cds-choose-scheme-type-rho.html
"

On Thu, Jun 5, 2014 at 11:55 PM, Hieu LE wrote:

Hi Todd,

> On Fri, Jun 6, 2014 at 9:17 AM, Todd Pigram wrote:
>
> Hieu,
>
> I assume you are using MCS for your golden image? What version of XD?
> Given you are using pooled desktops, have you thought about using a PVS
> BDM ISO and mounting it within your 1000 VMs? That way you can stagger
> reboots via the PVS console or Studio. This would require a change to
> your delivery group.

Sorry, but I do not use MCS or XenDesktop in my company. :-)

> On Thu, Jun 5, 2014 at 9:28 PM, Mike Tutkowski wrote:
>
> 6) The copy_vhd_from_secondarystorage XenServer plug-in is not used when
> you're using XenServer + XS62ESP1 + XS62ESP1004. In that case, please
> refer to the copyTemplateToPrimaryStorage(CopyCommand) method in the
> Xenserver625StorageProcessor class.

Thanks, Mike, I will take note of that.

On Thu, Jun 5, 2014 at 1:56 PM, Mike Tutkowski wrote:

Other than going through a "for" loop and deploying VM after VM, I don't
think CloudStack currently supports a bulk-VM-deploy operation.

It would be nice if CS did so at some point in the future; however, that is
probably a separate proposal from Hieu's.

On Thu, Jun 5, 2014 at 12:13 AM, Amit Das wrote:

Hi Hieu,

Would it be good to include a bulk operation in this feature? In addition,
does XenServer support parallel execution of these operations?

Regards,
Amit
CloudByte Inc.
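For reference, the "for loop" approach Mike mentions can be scripted against
the deployVirtualMachine API today. The sketch below is only illustrative:
the endpoint, keys, and UUIDs are placeholders, and the third-party "cs"
Python client is used purely as a convenient way to sign and issue the calls.

    # A minimal sketch of a scripted bulk deploy via repeated
    # deployVirtualMachine calls; all identifiers below are placeholders.
    from cs import CloudStack

    api = CloudStack(endpoint="http://mgmt-server:8080/client/api",
                     key="YOUR_API_KEY", secret="YOUR_SECRET_KEY")

    ZONE_ID = "zone-uuid"
    TEMPLATE_ID = "golden-template-uuid"
    OFFERING_ID = "compute-offering-uuid"

    jobs = []
    for i in range(1000):
        # deployVirtualMachine is asynchronous; each call returns a job id
        # that can be polled later with queryAsyncJobResult.
        result = api.deployVirtualMachine(zoneid=ZONE_ID,
                                          templateid=TEMPLATE_ID,
                                          serviceofferingid=OFFERING_ID,
                                          name="vdi-vm-%04d" % i)
        jobs.append(result.get("jobid"))

    print("submitted %d deploy jobs" % len(jobs))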
On Thu, Jun 5, 2014 at 8:59 AM, Hieu LE wrote:

Mike, Punith,

Please review the "Golden Primary Storage" proposal. [1]

Thank you.

[1]:
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Golden+Primary+Storage

On Wed, Jun 4, 2014 at 10:32 PM, Mike Tutkowski wrote:

Daan helped out with this. You should be good to go now.

On Tue, Jun 3, 2014 at 8:50 PM, Hieu LE wrote:

Hi Mike,

Could you please give me edit/create permission on the ASF Jira/Confluence
wiki? I cannot add a new wiki page.

My Jira ID: hieulq
Wiki: hieulq89
Review Board: hieulq

Thanks!

On Wed, Jun 4, 2014 at 9:17 AM, Mike Tutkowski wrote:

Hi,

Yes, please feel free to add a new wiki page for your design.

Here is a link to the applicable design info:
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Design

Also, feel free to ask more questions and have me review your design.

Thanks!
Mike

On Tue, Jun 3, 2014 at 7:29 PM, Hieu LE wrote:

Hi Mike,

You are right: performance will decrease over time, because write IOPS will
always end up on the slower storage pool.

In our case, we are using CloudStack integrated in a VDI solution to
provide the pooled VM type [1]. So maybe my approach can bring a better UX
for users, with lower boot time.

A short summary of the design changes:
- A VM will be deployed with golden primary storage if the primary storage
  is marked golden and the VM's template is also marked golden.
- Choose the best deploy destination for both the golden primary storage
  and the normal root-volume primary storage; the chosen host must be able
  to access both storage pools.
- A new XenServer plug-in for modifying the VHD parent ID (a rough sketch
  of such a plug-in follows below).

Is there some place for me to submit my design and code? Can I write a new
proposal in the CloudStack wiki?

[1]:
http://support.citrix.com/proddocs/topic/xendesktop-rho/cds-choose-scheme-type-rho.html
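To make the last bullet concrete, here is a rough, hypothetical sketch of
what such a XAPI host plug-in could look like, assuming both SRs are
file-based (NFS/EXT) so the VHDs are plain files, and that vhd-util is
available in dom0. The plug-in name, function name, and argument keys are
illustrative only and are not part of any existing CloudStack or XenServer
API.

    #!/usr/bin/env python
    # Hypothetical XAPI host plug-in (e.g. /etc/xapi.d/plugins/goldenimage)
    # that re-points a child VHD at a golden parent VHD on a different SR.
    import subprocess

    import XenAPIPlugin


    def _run(cmd):
        # Small helper; Popen keeps this friendly to older dom0 Python.
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
        out, err = proc.communicate()
        if proc.returncode != 0:
            raise Exception("%s failed: %s" % (" ".join(cmd), err.strip()))
        return out.strip()


    def reparent_vhd(session, args):
        child = args["childVhdPath"]    # e.g. .../<nfs-sr>/<uuid>.vhd
        parent = args["goldenVhdPath"]  # e.g. .../<ssd-sr>/<uuid>.vhd
        # vhd-util rewrites the parent locator in the child VHD's header.
        _run(["vhd-util", "modify", "-n", child, "-p", parent])
        # Report the parent the child now points at, as a basic sanity check.
        return _run(["vhd-util", "query", "-n", child, "-p"])


    if __name__ == "__main__":
        XenAPIPlugin.dispatch({"reparentVHD": reparent_vhd})

On the CloudStack side, the hypervisor resource would presumably invoke
something like this through the usual host plug-in call mechanism.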
On Mon, Jun 2, 2014 at 9:04 PM, Mike Tutkowski wrote:

It is an interesting idea. If the constraints you face at your company can
be corrected somewhat by implementing this, then you should go for it.

It sounds like writes will be placed on the slower storage pool. This means
that as you update OS components, those updates will land on the slower
storage pool, so your performance is likely to degrade somewhat over time
(as more and more writes end up on the slower storage pool).

That may be OK for your use case(s), though.

You'll have to update the storage-pool orchestration logic to take this new
scheme into account.

Also, we'll have to figure out how this ties into storage tagging (if at
all).

I'd be happy to review your design and code.
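As a purely hypothetical illustration of that orchestration change (the
chosen host must be able to see both the golden pool and the normal
root-volume pool), the pseudocode below sketches the constraint. CloudStack's
real planners and allocators are Java; none of these names exist in the
codebase, and Python is used here only for brevity.

    # Hypothetical planning sketch: pick a golden pool for the master VHD,
    # a normal pool for the writable child VHD, and a host that can reach
    # both. Illustration only, not CloudStack's actual allocator code.
    def plan_deployment(vm, template, cluster):
        golden_pools = [p for p in cluster.storage_pools if p.is_golden]
        normal_pools = [p for p in cluster.storage_pools if not p.is_golden]

        for host in cluster.hosts:
            visible = set(host.visible_pool_ids)
            golden = next((p for p in golden_pools
                           if p.id in visible and p.has_template(template)),
                          None)
            root = next((p for p in normal_pools
                         if p.id in visible
                         and p.free_space >= vm.root_disk_size), None)
            if golden and root:
                # Golden parent stays on the fast pool; the child VHD is
                # created on the normal pool reachable from the same host.
                return host, golden, root

        raise RuntimeError("no host can access both a golden and a normal pool")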
On Mon, Jun 2, 2014 at 1:54 AM, Hieu LE wrote:

Thanks Mike and Punith for the quick reply.

Both solutions you gave here are absolutely correct. But, as I mentioned in
the first email, I want a better solution for the current infrastructure at
my company.

Creating a high-IOPS primary storage using storage tags is good, but it
would be very wasteful of disk capacity: for example, if I only have a 1 TB
SSD and deploy 100 VMs from a 100 GB template.

So I am thinking about a solution where a high-IOPS primary storage stores
only the golden image (master image), and the child image of each VM is
stored on another normal (NFS, iSCSI, ...) storage. In that case, with a
1 TB SSD primary storage I can store as many golden images as I need.

I have also tested this with a 256 GB SSD mounted on XenServer 6.2.0, with
2 TB of 10,000 RPM local storage and 6 TB of NFS shared storage over a 1 Gb
network. The IOPS of VMs that have the golden image (master image) on the
SSD and the child image on NFS increased by more than 30-40% compared with
VMs that have both the golden image and the child image on NFS. The boot
time of each VM also decreased (because the golden image on the SSD only
reduces read IOPS).

Do you think this approach is OK?

On Mon, Jun 2, 2014 at 12:50 PM, Mike Tutkowski wrote:

Thanks, Punith - this is similar to what I was going to say.

Any time a set of CloudStack volumes shares IOPS from a common pool, you
cannot guarantee IOPS to a given CloudStack volume at a given time.

Your choices at present are:

1) Use managed storage (where you can create a 1:1 mapping between a
CloudStack volume and a volume on a storage system that has QoS).
As Punith mentioned, this requires that you purchase storage from a vendor
who provides guaranteed QoS on a volume-by-volume basis AND has this
integrated into CloudStack.

2) Create primary storage in CloudStack that is not managed, but has a high
number of IOPS (e.g. using SSDs). You can then storage-tag this primary
storage and create Compute and Disk Offerings that use this storage tag to
make sure their volumes end up on this storage pool (primary storage). This
still will not guarantee IOPS on a volume-by-volume basis, but it will at
least place the CloudStack volumes that need a better chance of getting
higher IOPS on a storage pool that can provide them. A big downside here is
that you need to watch how many CloudStack volumes get deployed on this
primary storage, because you'll essentially have to over-provision IOPS in
this primary storage to increase the probability that each and every
CloudStack volume that uses it gets the necessary IOPS (and isn't as likely
to suffer from the Noisy Neighbor Effect). You should be able to tell
CloudStack to only use, say, 80% (or whatever) of the storage you're
providing to it (so as to increase your effective IOPS-per-GB ratio). This
over-provisioning of IOPS to control noisy neighbors is avoided in option 1;
in that situation, you only provision the IOPS and capacity you actually
need. It is a much more sophisticated approach.

Thanks,
Mike
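A sketch of option 2 driven through the API: register the SSD-backed pool
with a storage tag and create offerings that carry the same tag. The
zone/pod/cluster IDs, NFS URL, tag name, and sizes below are placeholders,
and the "cs" client is again just one convenient way to issue the calls.

    from cs import CloudStack

    api = CloudStack(endpoint="http://mgmt-server:8080/client/api",
                     key="YOUR_API_KEY", secret="YOUR_SECRET_KEY")

    # Register the SSD-backed NFS export as a tagged primary storage pool.
    api.createStoragePool(zoneid="zone-uuid", podid="pod-uuid",
                          clusterid="cluster-uuid",
                          name="ssd-golden-pool",
                          url="nfs://nfs-server/exports/ssd-pool",
                          tags="gold")

    # Compute offering whose root disks should land on "gold"-tagged storage.
    api.createServiceOffering(name="vdi-gold",
                              displaytext="VDI root disk on SSD pool",
                              cpunumber=2, cpuspeed=2000, memory=2048,
                              tags="gold")

    # Data-disk offering left on ordinary (untagged) primary storage.
    api.createDiskOffering(name="data-standard",
                           displaytext="Data disk on HDD/NFS",
                           disksize=50)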
On Sun, Jun 1, 2014 at 11:36 PM, Punith S wrote:

Hi Hieu,

Your problem is the bottleneck we see as storage vendors in the cloud: the
VMs in the cloud are not guaranteed IOPS from the primary storage. In your
case, I'm assuming you are running 1000 VMs on a XenServer cluster whose VM
disks all lie on the same primary NFS storage mounted to the cluster, so
you won't get dedicated IOPS for each VM, since every VM shares the same
storage. To solve this issue in CloudStack, we third-party vendors (namely
CloudByte, SolidFire, etc.) have implemented plug-ins to support managed
storage (dedicated volumes with guaranteed QoS for each VM), where we map
each root disk (VDI) or data disk of a VM to one NFS or iSCSI share coming
out of a pool. We are also proposing a new feature for 4.5 to change volume
IOPS on the fly, so you can increase or decrease your root disk IOPS while
booting or at peak times. But to use this plug-in you have to buy our
storage solution.

If not, you can try creating an NFS share out of an SSD pool and creating a
primary storage in CloudStack from it, named e.g. "golden primary storage",
with a specific storage tag like "gold", and then create a compute offering
for your template with the storage tag "gold". All the VMs you create will
then sit on this gold primary storage with high IOPS, with other data disks
on other primary storage. But even then you cannot guarantee QoS at the VM
level.

Thanks
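To illustrate the per-volume QoS model Punith describes, a disk offering can
request minimum and maximum IOPS. The values below are placeholders, and the
miniops/maxiops settings only take effect when the offering is backed by a
managed-storage plug-in that enforces per-volume QoS.

    from cs import CloudStack

    api = CloudStack(endpoint="http://mgmt-server:8080/client/api",
                     key="YOUR_API_KEY", secret="YOUR_SECRET_KEY")

    # Disk offering asking for guaranteed per-volume IOPS on managed storage.
    api.createDiskOffering(name="guaranteed-500-iops",
                           displaytext="50 GB data disk, 300-500 IOPS",
                           disksize=50,
                           miniops=300,
                           maxiops=500)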
On Mon, Jun 2, 2014 at 10:12 AM, Hieu LE wrote:

Hi all,

There are some problems when deploying a large number of VMs in my company
with CloudStack. All VMs are deployed from the same template (e.g. Windows
7) and the quantity is approximately ~1000 VMs. The problems are low IOPS
and low VM performance (about ~10-11 IOPS; boot time is very high). My
company's storage is SAN/NAS with NFS and XenServer 6.2.0. All XenServer
nodes have a standard server HDD RAID.

I have found some solutions for this, such as:

- Enable XenServer IntelliCache, with some tweaks to the CloudStack code to
  deploy and start VMs in IntelliCache mode.
  But this solution transfers all IOPS from shared storage to local
  storage, which affects and limits some CloudStack features.
- Buy some expensive storage solutions and networking to increase IOPS.
  Nah...

So, I am thinking about a new feature that (maybe) increases the IOPS and
performance of VMs:

1. Separate the golden image onto a high-IOPS partition: buy a new SSD,
   plug it into the XenServer, and deploy a new VM on NFS storage WITH the
   golden image on this new SSD partition. This reduces read IOPS on the
   shared storage and decreases the VM's boot time. (Currently, a VM
   deployed on XenServer always has its master image (the "golden image" in
   VMware terms) in the same storage repository as its differencing image
   (child image).) We can do this trick by tweaking the VHD header with a
   new XenServer plug-in.
2. Create a golden primary storage and a VM template that enable this
   feature.
3. All VMs deployed from a template with this feature enabled will then
   have their golden image stored on the golden primary storage (an SSD or
   some other high-IOPS partition), and their differencing image (child
   image) stored on another, normal primary storage.
This new feature does not transfer all IOPS from shared storage to local
storage (because the high-IOPS partition can be another high-IOPS shared
storage) and requires less money than buying a new storage solution.

What do you think? If possible, may I write a proposal in the CloudStack
wiki?

BRs.

Hieu Le
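As a closing illustration: once a child VHD has been re-pointed at a golden
parent on another SR (as in the plug-in sketch earlier in the thread), the
resulting chain can be checked from dom0 with vhd-util. The SR path below is
a placeholder, and the exact "no parent" output of vhd-util may differ
between XenServer releases.

    # Small dom0 helper that walks and prints a VHD parent chain.
    import subprocess
    import sys


    def parent_of(vhd_path):
        proc = subprocess.Popen(["vhd-util", "query", "-n", vhd_path, "-p"],
                                stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT)
        out, _ = proc.communicate()
        out = out.strip()
        # Treat a non-zero exit, a "no parent" message, or anything that does
        # not look like a VHD path as the end of the chain.
        if proc.returncode != 0 or "no parent" in out or not out.endswith(".vhd"):
            return None
        return out


    def print_chain(vhd_path):
        # Walk child -> parent -> grandparent, printing each hop.
        depth = 0
        while vhd_path:
            print("%s%s" % ("  " * depth, vhd_path))
            vhd_path = parent_of(vhd_path)
            depth += 1


    if __name__ == "__main__":
        print_chain(sys.argv[1] if len(sys.argv) > 1 else
                    "/var/run/sr-mount/<nfs-sr-uuid>/<child-uuid>.vhd")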