Subject: Re: NFS speed traffic control
From: Shanker Balan
Date: Mon, 24 Sep 2012 12:45:10 +0530
To: cloudstack-users@incubator.apache.org

On 24-Sep-2012, at 6:02 AM, Ivan Rodriguez wrote:

> Dear CloudStack users,
>
> We have several servers provisioning VMs through CloudStack, and our
> primary storage comes from an NFS mount point. We have some demanding,
> important VMs and some which are not that important but do lots of
> writes to disk. Since all the VMs are competing for access to the NFS
> disk, one VM can affect the speed of the whole setup. I know CloudStack
> can manage network speed shaping, but I haven't found a way to set a
> network storage speed quota or anything like that. Do you know if this
> is something we can do inside CloudStack?

AWS provides a "Provisioned IOPS" feature for their storage offerings, which allows for predictable IO performance to the instances:

http://aws.amazon.com/about-aws/whats-new/2012/07/31/announcing-provisioned-iops-for-amazon-ebs/

NFS by itself cannot do much to guarantee predictable performance. A simple network rate limit can control overall throughput, but it cannot prevent a tenant from performing a massive number of small IO operations on their virtual disk, the way a database typically would. At the end of the day, NFS does not do multi-tenancy very well.
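As a concrete illustration of such a rate limit (and of why it falls short), here is a minimal sketch using Linux tc on a hypervisor's storage NIC. The interface name (eth1) and the 200 Mbit/s figure are assumptions for the example, not CloudStack settings, and this caps bytes per second only, so the small-IO problem remains:

    # Cap everything leaving the storage NIC at 200 Mbit/s with a simple
    # token bucket filter; interface name and numbers are illustrative.
    tc qdisc add dev eth1 root tbf rate 200mbit burst 100k latency 50ms

    # Remove the limit again.
    tc qdisc del dev eth1 root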
I am guessing that an AWS-style "Provisioned IOPS" feature could be implemented by having a storage layer that understands multi-tenancy and natively provides an API to limit or guarantee raw read/write IO operations on a per-file, per-object or per-directory basis. Basically, a storage layer that does QoS on a wide range of factors. CloudStack could then use these native APIs, via a plugin connector, to set the desired PIOPS while provisioning instances.

XenServer seems to allow for I/O priority QoS on the virtual disk. I am not sure if this is possible for NFS; the doc seems to suggest it is for multiple hosts accessing the same LUN:

http://support.citrix.com/servlet/KbServlet/download/28751-102-673823/XenServer-6.0.0-reference.pdf

A workaround would be to have multiple pods with different QoS characteristics: low IO, medium IO, high IO and very high IO pods. Each pod could use a different primary storage with the desired IO characteristics, or perhaps a shared primary storage with QoS features enabled.

I am hoping others can share their experiences handling performance expectations in a multi-tenant cloud using shared storage. It is a very interesting problem to solve, as clouds in general are notorious for their IO performance.

Hth.

-- 
Shanker Balan
@shankerbalan
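PS: If anyone wants to experiment with the XenServer disk QoS mentioned above, the reference guide describes per-VBD settings roughly along these lines. This is a sketch from my reading of the doc rather than a tested recipe, so verify the exact parameter names against the linked PDF; the UUIDs are placeholders:

    # Find the VBD (virtual block device) linking the VM to its disk.
    xe vbd-list vm-uuid=<vm-uuid>

    # Switch the VBD to the ionice-based QoS algorithm and assign a
    # priority class (0-7, lower means higher priority in the scheduler).
    xe vbd-param-set uuid=<vbd-uuid> qos_algorithm_type=ionice
    xe vbd-param-set uuid=<vbd-uuid> qos_algorithm_params:sched=rt
    xe vbd-param-set uuid=<vbd-uuid> qos_algorithm_params:class=2

My understanding is that this prioritises disks relative to each other on the same host or LUN; it does not give hard per-tenant IOPS guarantees.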