From: Chiradeep Vittal <Chiradeep.Vittal@citrix.com>
To: CloudStack Users <cloudstack-users@incubator.apache.org>
Date: Mon, 24 Sep 2012 10:27:59 -0700
Subject: Re: NFS speed traffic control

CloudStack supports tagging physical resources [1]. A service offering or
disk offering can include these tags, and during deployment of the VM and
its volumes the tags will be taken as hints. You could have two types of
primary storage per cluster: one tagged "high priority" and one tagged
"low priority". Of course, there are still variables such as shared network
bandwidth and contention for dom0 resources.

[1] http://wiki.cloudstack.org/display/COMM/Host+tags+and+Storage+tags
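As a rough illustration (a sketch, not taken from the wiki page above), the
pools and offerings could be tagged through the CloudStack API along these
lines. It assumes the management server's unauthenticated integration API
port (integration.api.port, commonly 8096) is enabled; a production setup
would sign requests with an API key/secret instead, and the hostnames,
UUIDs and tag names below are placeholders:

    # Sketch: tag two primary storage pools and create a disk offering whose
    # storage tag steers its volumes onto the matching pool. All IDs, URLs and
    # tag names are placeholders.
    import requests

    API_URL = "http://mgmt-server:8096/client/api"  # placeholder management server

    def call(command, **params):
        """Send one CloudStack API command and return the parsed JSON response."""
        params.update({"command": command, "response": "json"})
        resp = requests.get(API_URL, params=params)
        resp.raise_for_status()
        return resp.json()

    # One primary storage pool per class of service in the cluster.
    # (Existing pools can be re-tagged with updateStoragePool instead.)
    call("createStoragePool",
         zoneid="ZONE-UUID", podid="POD-UUID", clusterid="CLUSTER-UUID",
         name="nfs-high", url="nfs://filer-a/export/high", tags="high-priority")
    call("createStoragePool",
         zoneid="ZONE-UUID", podid="POD-UUID", clusterid="CLUSTER-UUID",
         name="nfs-low", url="nfs://filer-b/export/low", tags="low-priority")

    # A disk offering carrying the "high-priority" storage tag.
    call("createDiskOffering",
         name="important-data",
         displaytext="20 GB on high-priority NFS",
         disksize=20, tags="high-priority")

Volumes created from the tagged offering should then land on the
high-priority pool, while everything else stays on the low-priority one.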
On 9/24/12 12:15 AM, "Shanker Balan" wrote:

>On 24-Sep-2012, at 6:02 AM, Ivan Rodriguez wrote:
>
>> Dear CloudStack users,
>>
>> We have several servers provisioning VMs through CloudStack; our primary
>> storage comes from an NFS mount point. We have some demanding, important
>> VMs and some that are less important but do lots of writes to disk. Since
>> all the VMs are competing for access to the NFS disk, one VM can affect
>> the speed of the whole setup. I know CloudStack can manage network speed
>> shaping, but I haven't found a way to set a network storage speed quota
>> or something like that. Do you guys know if this is something we can do
>> inside CloudStack?
>
>AWS provides a "Provisioned IOPS" feature for their storage offerings.
>This allows for predictable IO performance to the instances.
>
>http://aws.amazon.com/about-aws/whats-new/2012/07/31/announcing-provisioned-iops-for-amazon-ebs/
>
>NFS by itself cannot do much to guarantee predictable performance. A
>simple network rate limit can control overall throughput, but it cannot
>prevent a tenant from performing a massive number of small IO operations
>on their virtual disk, like what a database would usually do. At the end
>of the day, NFS does not do multi-tenancy very well.
>
>I am guessing that an AWS-style "Provisioned IOPS" feature could be
>implemented by having a storage layer that understands multi-tenancy and
>natively provides an API to limit/guarantee raw read/write IO operations
>on a per file/object/directory basis. Basically, a storage layer that
>does QoS on a wide range of factors. CloudStack could then use these
>native APIs to set the desired PIOPS while provisioning instances, using
>a plugin connector.
>
>XenServer seems to allow for I/O priority QoS on the virtual disk. I am
>not sure if this is possible for NFS; the doc seems to suggest it is for
>multiple hosts accessing the same LUN.
>
>http://support.citrix.com/servlet/KbServlet/download/28751-102-673823/XenServer-6.0.0-reference.pdf
>
>A workaround would be to have multiple pods with different QoS
>characteristics: low IO, medium IO, high IO and very high IO pods. Each
>pod could use a different primary storage with the desired IO
>characteristics, or maybe a shared primary storage with QoS features
>enabled.
>
>I am hoping others can share their experiences handling performance
>expectations in a multi-tenant cloud using shared storage. It is a very
>interesting problem to solve, as clouds in general are notorious for
>their IO performance.
>
>Hth.
>
>--
>Shanker Balan
>@shankerbalan
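On the XenServer point above: the per-virtual-disk QoS is set on the VBD
through its qos_algorithm fields. A minimal sketch of driving that from
dom0, assuming the ionice-style parameters described in the cited XenServer
reference (the exact parameter names and values should be checked against
that document, and the VBD UUID is a placeholder):

    # Sketch only: give one virtual disk (VBD) a low ionice priority from dom0.
    # Field names follow the qos_algorithm_type / qos_algorithm_params settings
    # described in the XenServer reference linked above; verify before use.
    import subprocess

    def set_vbd_io_priority(vbd_uuid, sched="best-effort", io_class=7):
        """Enable ionice-based QoS on a VBD and set its scheduling class/priority."""
        subprocess.check_call(
            ["xe", "vbd-param-set", "uuid=%s" % vbd_uuid,
             "qos_algorithm_type=ionice"])
        subprocess.check_call(
            ["xe", "vbd-param-set", "uuid=%s" % vbd_uuid,
             "qos_algorithm_params:sched=%s" % sched,
             "qos_algorithm_params:class=%d" % io_class])

    # Placeholder UUID; in ionice terms, class 7 under best-effort is the
    # lowest priority, suitable for the noisy, less important guests.
    set_vbd_io_priority("VBD-UUID", sched="best-effort", io_class=7)

As noted in the quoted mail, this throttles the virtual disk on the host
side only; it does not give NFS itself any notion of per-tenant IOPS.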