Subject: Re: [DISCUSS] Scaling up CPU and RAM for running VMs
From: Marcus Sorensen <shadowsor@gmail.com>
To: cloudstack-dev@incubator.apache.org
Date: Thu, 20 Dec 2012 08:44:49 -0700

On Thu, Dec 20, 2012 at 4:52 AM, Koushik Das wrote:
> See inline
>
> Thanks,
> Koushik
>
> > -----Original Message-----
> > From: Chip Childers [mailto:chip.childers@sungard.com]
> > Sent: Wednesday, December 19, 2012 7:55 PM
> > To: cloudstack-dev@incubator.apache.org
> > Subject: Re: [DISCUSS] Scaling up CPU and RAM for running VMs
> >
> > On Wed, Dec 19, 2012 at 3:34 AM, Koushik Das wrote:
> > > See inline
> > >
> > >> -----Original Message-----
> > >> From: Marcus Sorensen [mailto:shadowsor@gmail.com]
> > >> Sent: Tuesday, December 18, 2012 10:35 PM
> > >> To: cloudstack-dev@incubator.apache.org
> > >> Subject: Re: [DISCUSS] Scaling up CPU and RAM for running VMs
> > >>
> > >> The FS looks good and addresses the things I'd want it to (scaling
> > >> should be limited to within a cluster, use offerings).
> > >>
> > >> As you mention, there's a real problem around the lack of support
> > >> for scaling down CPU, and at the moment it seems to be just as much
> > >> a problem with the guests as with the hypervisors. This makes it
> > >> hard to simply mark a VM as dynamic, since at some point you'll
> > >> likely trigger it to scale up and then have to reboot to get back
> > >> down. If this goes through, my suggestion is that instead of marking
> > >> a VM for auto scale, we either attach multiple compute offerings
> > >> (with a priority or "level") to a VM, along with triggers (we can't
> > >> really trigger on memory, but perhaps on CPU utilization over a
> > >> specific time, e.g. if CPU is at 80% for x time, fall back to the
> > >> next offering), or we create a single compute offering that lets you
> > >> specify min and max memory and CPU, and a trigger at which it scales
> > >> (the latter is my preference).
> > >>
> > >> The whole thing is problematic, though, because people can
> > >> inadvertently trigger their VM to scale up when they're installing
> > >> updates or compiling something, and then have to reboot to come back
> > >> down. If we can't take away resources without manual intervention,
> > >> we shouldn't add them. For this reason I'd like to see the focus (at
> > >> least initially) on simply being able to change to a larger compute
> > >> offering while the VM is up. With that in place, anyone who really
> > >> wants to autoscale can use the API, combining the VM stats calls
> > >> with the existing changeServiceForVirtualMachine. Or we can build
> > >> autoscaling in, but I think any implementation will be a poor
> > >> experience unless it can go both ways.
> > >>
> > >
> > > This is a good suggestion, but as you have mentioned, the first
> > > priority is to get the basic stuff working (increasing CPU/RAM for
> > > running VMs). Another point is that some HVs (at least VMware)
> > > require a VM to be configured appropriately while it is stopped in
> > > order to support increasing CPU/RAM while it is running. We can
> > > either do this for all VMs, irrespective of whether CPU/RAM is
> > > actually going to be increased, OR do it only for selected VMs
> > > (maybe based on compute offering). If this is going to be common
> > > across all HVs, the latter can be done.

I think it could be done either way. The straightforward way is via an
offering that allows max/current CPU and max/current RAM to be entered
(basically exposing how the hypervisor settings themselves work). But you
could also have a global setting of some sort that says "set everything to
a max of X CPU and Y RAM", so that every service offering can be upgraded
live. As you mention, applying it will require at least a restart of the
VMs, so perhaps users could just switch service offerings anyway. It could
be handy, though, to let people upgrade a service offering when the need
wasn't planned for.

> > >
> > >> I don't know, maybe I'm off in left field here; I'd be interested
> > >> in hearing the thoughts of others.
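
To make the min/max-plus-trigger offering idea above concrete, here's a
rough sketch of what such an offering might carry. This is hypothetical;
none of these names or fields exist in the codebase today:

    // Hypothetical sketch only -- illustrates the proposed "min/max plus
    // trigger" compute offering; not actual CloudStack code.
    public class ElasticComputeOffering {
        private int minCpuMhz;           // floor the VM can fall back to
        private int maxCpuMhz;           // ceiling it may scale up to
        private long minRamMb;
        private long maxRamMb;
        private int cpuThresholdPercent; // e.g. 80
        private int sustainedSeconds;    // e.g. 300; memory isn't a
                                         // reliable trigger, per above

        // Scale up only when CPU has stayed above the threshold for the
        // configured amount of time, to avoid reacting to short spikes.
        public boolean shouldScaleUp(int cpuPercent, int secondsAbove) {
            return cpuPercent >= cpuThresholdPercent
                    && secondsAbove >= sustainedSeconds;
        }
    }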
> > >>
> > >> You mention 'upgradeVirtualMachine'; it should be noted that the
> > >> customer-facing API is called 'changeServiceForVirtualMachine',
> > >> just to reduce confusion.
> > >>
> > >
> > > upgradeVirtualMachine is an existing command (see UpgradeVMCmd.java),
> > > and I was planning to reuse it. But yes, if the name sounds confusing
> > > we can deprecate it and create a new command with the name you have
> > > suggested.
> > >
> >
> > Please don't break backward compatibility without the whole list
> > discussing the implications on a dedicated thread. We had previously
> > agreed to maintain API compatibility between 4.0.0-incubating and our
> > next feature release. If we break it, we have to release as
> > 5.0.0-incubating instead of 4.1.0-incubating.
>
> In that case I will add a new async API, changeServiceForVirtualMachine
> (or a better name if anyone comes up with one), which will work for both
> running and stopped VMs. upgradeVirtualMachine would continue to exist
> until 5.0.0 happens.

Would this break backward compatibility? If an API call goes from upgrading
VMs only while they're off to still upgrading VMs only while they're off,
but additionally upgrading VMs that run with a newer, specific service
offering type, does that break backward compatibility? Or say we simply
removed the check that the VM is off and instead checked whether the VM was
started with the newer compatible settings: would that break backward
compatibility? The call still does what it did before when used as before
(it changes the service offering while the VM is off).

Regarding upgradeVirtualMachine, I saw no mention of it in the API docs,
and found that in the code changeServiceForVirtualMachine is mapped to
UpgradeVMCmd.java, which is why I mentioned the confusion.
'upgradeVirtualMachine' only exists as an internal method of the
userVmService. See the file "client/tomcatconf/commands.properties.in":

    changeServiceForVirtualMachine=com.cloud.api.commands.UpgradeVMCmd

> > >
> > >>
> > >> On Tue, Dec 18, 2012 at 9:18 AM, Koushik Das wrote:
> > >>
> > >> > Created first draft of the FS:
> > >> > https://cwiki.apache.org/confluence/display/CLOUDSTACK/Dynamic+scaling+of+CPU+and+RAM
> > >> > Also created a jira issue:
> > >> > https://issues.apache.org/jira/browse/CLOUDSTACK-658
> > >> >
> > >> > Comments? There is an 'open issues' section where I have mentioned
> > >> > some issues that need to be closed.
> > >> >
> > >> > Thanks,
> > >> > Koushik
> > >> >
> > >> > > -----Original Message-----
> > >> > > From: Koushik Das [mailto:koushik.das@citrix.com]
> > >> > > Sent: Saturday, December 15, 2012 11:14 PM
> > >> > > To: cloudstack-dev@incubator.apache.org
> > >> > > Subject: [DISCUSS] Scaling up CPU and RAM for running VMs
> > >> > >
> > >> > > Currently CS supports changing CPU and RAM for a stopped VM.
> > >> > > This is achieved by changing the compute offering of the VM
> > >> > > (with new CPU and RAM values) and then starting it. I am
> > >> > > planning to extend the same to running VMs as well. Initially
> > >> > > planning to do it for VMware, where CPU and RAM can be
> > >> > > dynamically increased. Support for other HVs can also be added
> > >> > > if they support increasing CPU/RAM.
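
(For reference, I believe this is the VMware-side prerequisite mentioned
earlier in the thread: the hot-add flags live in the VM's configuration
and can only be changed while the VM is powered off. A minimal sketch
against the vim25 bindings; the actual reconfigure call and error handling
are omitted, and this is my reading rather than the plan of record:)

    import com.vmware.vim25.VirtualMachineConfigSpec;

    // Sketch: build a config spec that enables CPU/RAM hot-add. It would
    // be applied to the powered-off VM via reconfigVM_Task.
    public class HotAddPrep {
        static VirtualMachineConfigSpec hotAddSpec() {
            VirtualMachineConfigSpec spec = new VirtualMachineConfigSpec();
            spec.setCpuHotAddEnabled(true);    // allow adding vCPUs at runtime
            spec.setMemoryHotAddEnabled(true); // allow adding RAM at runtime
            return spec;
        }
    }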
> > >> > >
> > >> > > Assuming that in the updated compute offering only CPU and RAM
> > >> > > have changed, the deployment planner can either select the same
> > >> > > host, in which case the values are dynamically scaled up, OR a
> > >> > > different one, in which case the operation fails. In future, if
> > >> > > there is support for live migration (provided the HV supports
> > >> > > it), another option in the latter case could be to migrate the
> > >> > > VM first and then scale it up.
> > >> > >
> > >> > > I will start working on the FS and share it out sometime next
> > >> > > week.
> > >> > >
> > >> > > Comments/suggestions?
> > >> > >
> > >> > > Thanks,
> > >> > > Koushik
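
To spell out the planner behavior described above, a hypothetical sketch
(the names are invented for illustration and don't correspond to actual
CloudStack internals; the migrate-then-scale branch is the possible future
enhancement, not current behavior):

    // Hypothetical sketch of the scale-up decision for a running VM.
    public class ScaleUpDecision {
        enum Outcome { SCALE_IN_PLACE, MIGRATE_THEN_SCALE, FAIL }

        Outcome decide(long currentHostId, long plannedHostId,
                       boolean liveMigrationSupported) {
            if (currentHostId == plannedHostId) {
                // Same host selected: hot-adjust CPU/RAM in place.
                return Outcome.SCALE_IN_PLACE;
            }
            // Different host: the operation fails today; with live
            // migration support it could migrate first, then scale.
            return liveMigrationSupported ? Outcome.MIGRATE_THEN_SCALE
                                          : Outcome.FAIL;
        }
    }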