cloudstack-users mailing list archives

From Shanker Balan <m...@shankerbalan.net>
Subject Re: NFS speed traffic control
Date Mon, 24 Sep 2012 07:15:10 GMT
On 24-Sep-2012, at 6:02 AM, Ivan Rodriguez <ivanoch@gmail.com> wrote:

> Dear Cloudstack users,
> 
> We have several servers provisioning VMs through CloudStack, and our
> primary storage comes from an NFS mount point. We have some demanding,
> important VMs, and some less important ones that do lots of writes to
> disk. Since all the VMs are competing for access to the NFS disk, one
> VM can affect the speed of the whole setup. I know CloudStack can
> manage network speed shaping, but I haven't found a way to do a
> network storage speed quota or something like that. Do you guys know
> if this is something we can do inside CloudStack?

AWS provides a "Provisioned IOPS" feature for their storage offerings. This allows for predictable
IO performance for instances.

http://aws.amazon.com/about-aws/whats-new/2012/07/31/announcing-provisioned-iops-for-amazon-ebs/

NFS by itself cannot do much to guarantee predictable performance. A simple network rate limit
can control overall throughput, but it cannot prevent a tenant from performing a massive number
of small IO operations on their virtual disk, as a database typically would. At the end of
the day, NFS does not do multi-tenancy very well.
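
To put rough numbers on that (a quick back-of-the-envelope Python sketch, nothing
CloudStack-specific; the 10 MB/s cap is made up):

    # A pure byte-rate cap says nothing about how many IO operations
    # it admits: the smaller the IO size, the more IOPS fit under it.
    BYTE_CAP = 10 * 1024 * 1024  # hypothetical 10 MB/s network limit

    for io_size in (4 * 1024, 64 * 1024, 1024 * 1024):
        print("%7d-byte ops -> up to %4d IOPS under the cap"
              % (io_size, BYTE_CAP // io_size))

    # 4 KB -> 2560 IOPS, 64 KB -> 160 IOPS, 1 MB -> 10 IOPS. A database
    # doing 4K random writes hurts the backend long before the cap bites.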

I am guessing that an AWS-style "Provisioned IOPS" feature could be implemented by having
a storage layer that understands multi-tenancy and natively provides an API to limit/guarantee
raw read/write IO operations on a per-file/object/directory basis; basically, a storage layer
that does QoS on a wide range of factors. CloudStack could then use these native APIs, via a
plugin connector, to set the desired PIOPS while provisioning instances.
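
Something like the sketch below is what I have in mind. It is purely hypothetical
Python; the StorageQosDriver class and the set_volume_qos() call do not exist
anywhere, they just illustrate the contract such a connector could expose:

    # Hypothetical only -- no such CloudStack or storage API exists today.
    class StorageQosDriver(object):
        """Connector for a storage backend with native per-volume QoS."""

        def set_piops(self, volume_id, read_iops, write_iops):
            """Limit/guarantee IOPS for one volume on the backend."""
            raise NotImplementedError

    class ExampleBackendDriver(StorageQosDriver):
        def __init__(self, api_client):
            self.api = api_client  # the vendor's native management API

        def set_piops(self, volume_id, read_iops, write_iops):
            # Translate the offering's PIOPS into a backend QoS policy;
            # set_volume_qos() is invented for illustration.
            self.api.set_volume_qos(volume_id,
                                    max_read_iops=read_iops,
                                    max_write_iops=write_iops)

    # CloudStack would call set_piops() while provisioning an instance
    # whose offering carries a provisioned IOPS value.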

XenServer seems to allow I/O priority QoS on the virtual disk. I am not sure whether this is
possible with NFS; the doc seems to suggest it is for multiple hosts accessing the same
LUN.

http://support.citrix.com/servlet/KbServlet/download/28751-102-673823/XenServer-6.0.0-reference.pdf
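
For reference, the knobs live on the VBD. Going from memory of that guide (please
verify against the PDF), the xe fields are qos_algorithm_type and
qos_algorithm_params; wrapped here in Python for illustration:

    import subprocess

    def set_vbd_ionice(vbd_uuid, sched="rt", prio=5):
        # ionice-style disk QoS on a XenServer VBD; prio runs from
        # 0 (highest) to 7 (lowest) within the scheduling class.
        subprocess.check_call(["xe", "vbd-param-set",
                               "uuid=%s" % vbd_uuid,
                               "qos_algorithm_type=ionice"])
        subprocess.check_call(["xe", "vbd-param-set",
                               "uuid=%s" % vbd_uuid,
                               "qos_algorithm_params:sched=%s,prio=%d"
                               % (sched, prio)])

    # The vbd_uuid would come from "xe vbd-list vm-uuid=<vm_uuid>".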

A workaround would be to have multiple pods with different QoS characteristics: low-IO,
medium-IO, high-IO and very-high-IO pods. Each pod could use a different primary storage with
the desired IO characteristics, or perhaps a shared primary storage with QoS features enabled.
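
In CloudStack terms the steering could be done with storage tags: tag each primary
storage pool with its tier and create disk offerings carrying the matching tag. A
rough Python sketch against the API (endpoint, keys, sizes and tag names are all
placeholders; the signing is the usual CloudStack HMAC-SHA1 scheme):

    import base64, hashlib, hmac, urllib, urllib2

    API = "http://mgmt-server:8080/client/api"   # placeholder endpoint
    KEY, SECRET = "<apiKey>", "<secretKey>"      # placeholder credentials

    def call(params):
        params.update(apikey=KEY, response="json")
        # Sort params, lowercase the query string, HMAC-SHA1 it.
        qs = "&".join("%s=%s" % (k, urllib.quote(str(v), safe=""))
                      for k, v in sorted(params.items()))
        sig = base64.b64encode(
            hmac.new(SECRET, qs.lower(), hashlib.sha1).digest())
        url = "%s?%s&signature=%s" % (API, qs, urllib.quote(sig, safe=""))
        return urllib2.urlopen(url).read()

    for tier in ("low-io", "medium-io", "high-io", "very-high-io"):
        call({"command": "createDiskOffering",
              "name": "20GB-%s" % tier,
              "displaytext": "20 GB on %s primary storage" % tier,
              "disksize": 20,
              "tags": tier})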

I am hoping others can share their experiences handling performance expectations in a
multi-tenant cloud using shared storage. It's a very interesting problem to solve, as clouds
in general are notorious for their IO performance.

Hth.

--  
Shanker Balan
@shankerbalan