From: Bharat Kumar <bharat.kumar@citrix.com>
To: cloudstack-dev@incubator.apache.org, cloudstack-users@incubator.apache.org
Date: Wed, 26 Dec 2012 18:09:05 +0530
Subject: Re: [Discuss] Cpu and Ram overcommit.
Nitin, thanks for your suggestions. My comments are inline.

On Dec 26, 2012, at 3:22 PM, Nitin Mehta wrote:

> Thanks Bharat for bringing this up.
> I have a few questions and suggestions for you.
>
> 1. Why do we need it on a per-cluster basis, and when and where do you configure this? I hope that changing it for a cluster would not require an MS reboot and would be picked up dynamically - is that the case?

Depending on the applications running in a given cluster, the admin needs to adjust the overcommit factor. For example, if the applications running in a cluster are RAM-intensive, he may want to decrease the RAM overcommit ratio for that cluster without affecting the other clusters. This can be done only if the ratios can be specified on a per-cluster basis.
Also, changing these ratios will not require an MS restart.

> If we make it cluster-based, the allocators will have to check this config for each cluster while allocating, which can potentially make the allocators expensive. The same logic applies to the dashboard calculation as well.
> What granularity and fine tuning do we require - do you have any use cases?

The intent of having cluster-based overprovisioning ratios is to deploy VMs selectively, depending on the type of application the VM will run. By selectively I mean the admin will want to specify in which clusters to run the VM. This will narrow down the number of clusters we need to check while deploying.
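To make the idea concrete, the cluster short-listing described above could look roughly like this. This is a hypothetical sketch, not actual CloudStack code; the `Cluster` class, its field names, and the `fits`/`candidate_clusters` helpers are all assumptions made for illustration.

```python
# Hypothetical sketch of an allocator applying per-cluster RAM overcommit
# ratios when short-listing clusters for a new VM. Not CloudStack source.

from dataclasses import dataclass

@dataclass
class Cluster:
    name: str
    total_ram_mb: int          # physical RAM across the cluster's hosts
    allocated_ram_mb: int      # RAM already promised to running VMs
    ram_overcommit: float      # per-cluster ratio; 1.0 means no overcommit

    def effective_ram_mb(self) -> int:
        # Overcommit inflates the capacity the allocator plans against.
        return int(self.total_ram_mb * self.ram_overcommit)

    def fits(self, requested_ram_mb: int) -> bool:
        return self.allocated_ram_mb + requested_ram_mb <= self.effective_ram_mb()

def candidate_clusters(clusters, requested_ram_mb):
    # Only clusters whose overcommit-adjusted capacity can absorb the
    # request remain candidates for deployment.
    return [c for c in clusters if c.fits(requested_ram_mb)]

clusters = [
    Cluster("ram-heavy-apps", 65536, 60000, 1.0),  # ratio lowered by the admin
    Cluster("general",        65536, 60000, 1.5),  # overcommitted cluster
]
print([c.name for c in candidate_clusters(clusters, 8192)])  # ['general']
```

Note how the same physical headroom yields different answers per cluster: the RAM-heavy cluster rejects the VM because its ratio is 1.0, while the general cluster accepts it against its inflated effective capacity.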
I still don't know the exact way in which we should control the VM deployment. This definitely needs further discussion, and will become clear once we narrow down all the possible use cases.

> 2. What would happen in case of contention ?

In case of contention, the hypervisor-specific methods for handling contention will come into effect. This feature assumes that the admin has thought through the possible scenarios and has chosen the overcommit ratios accordingly.

> 3. Please remember to take care of alerts and dashboard related functionality. Along with this, the list Zone/Pod.../host/pool APIs also use this factor. Please make sure that you take care of that as well.

Thanks for the suggestions.

>
> -Nitin
>
> On 26-Dec-2012, at 11:32 AM, Bharat Kumar wrote:
>
>> Hi all,
>>
>> Presently in CloudStack there is a provision for CPU overcommit but no provision for RAM overcommit. There is also no way to configure the overcommit ratios on a per-cluster basis.
>>
>> So we propose to add a new feature to allow RAM overcommit and to specify the overcommit ratios (CPU/RAM) on a per-cluster basis.
>>
>> Motivation for adding the feature:
>> Most operating systems and applications do not use 100% of their allocated resources. This makes it possible to allocate more resources than are actually available. Overcommitting resources allows underutilized VMs to run on fewer hosts, which saves money and power. Currently the CPU overcommit ratio is a global parameter, which means there is no way to fine-tune or have granular control over the overcommit ratios.
>>
>> This feature will enable:
>> 1.) Configuring the overcommit ratios on a per-cluster basis.
>> 2.) The RAM overcommit feature in Xen and KVM. (It is already there for VMware.)
>> 3.) Updating the overcommit ratios of a cluster.
>>
>> Regards,
>> Bharat Kumar.
>
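Updating a cluster's ratios without an MS restart, as discussed in the thread, could work along these lines: keep the ratios in a settings store that allocators read on every request, so an update takes effect immediately. This is a minimal sketch under that assumption; the `OvercommitSettings` class and its method names are invented for illustration and are not CloudStack's API.

```python
# Hypothetical sketch: per-cluster overcommit ratios held in a settings
# store that is consulted on every allocation, so updates apply at once
# and no management-server restart is needed. Not CloudStack source.

class OvercommitSettings:
    def __init__(self, default_cpu=1.0, default_ram=1.0):
        self._defaults = {"cpu": default_cpu, "ram": default_ram}
        self._per_cluster = {}   # cluster_id -> {"cpu": ratio, "ram": ratio}

    def update_cluster_ratio(self, cluster_id, resource, ratio):
        # An overcommit ratio below 1.0 would undercommit; reject it.
        if ratio < 1.0:
            raise ValueError("overcommit ratio must be >= 1.0")
        self._per_cluster.setdefault(cluster_id, {})[resource] = ratio

    def ratio(self, cluster_id, resource):
        # Fall back to the global default when no per-cluster override exists,
        # matching the current behaviour of the global CPU overcommit setting.
        return self._per_cluster.get(cluster_id, {}).get(
            resource, self._defaults[resource])

settings = OvercommitSettings()
settings.update_cluster_ratio("cluster-1", "ram", 2.0)
print(settings.ratio("cluster-1", "ram"))  # 2.0
print(settings.ratio("cluster-2", "ram"))  # 1.0 (global default)
```

The per-cluster override falling back to a global default also preserves backward compatibility: clusters the admin never touches behave exactly as they do with today's single global parameter.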