cloudstack-dev mailing list archives

From Alena Prokharchyk <Alena.Prokharc...@citrix.com>
Subject Re: Re: System VMs restarted on a disabled cluster
Date Thu, 12 Jul 2012 16:00:20 GMT
Mice,

Yes, change the two global configs you've mentioned, and also set
system.vm.auto.reserve.capacity to false.
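
If it helps, here is a minimal JDBC sketch that flips all three settings at
once. It assumes the stock "cloud" database with its configuration
name/value table; the URL and credentials are placeholders, the MySQL
connector must be on the classpath, and the management server needs a
restart before the changes take effect:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class DisableSystemVmRecreation {
        public static void main(String[] args) throws Exception {
            // Placeholder JDBC URL and credentials -- adjust for your setup.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/cloud", "cloud", "password")) {
                // Global settings live in the configuration table, keyed by name.
                try (PreparedStatement ps = conn.prepareStatement(
                        "UPDATE configuration SET value = ? WHERE name = ?")) {
                    String[][] settings = {
                            {"consoleproxy.restart", "false"},
                            {"secondary.storage.vm", "false"},
                            {"system.vm.auto.reserve.capacity", "false"},
                    };
                    for (String[] s : settings) {
                        ps.setString(1, s[1]); // new value
                        ps.setString(2, s[0]); // setting name
                        ps.executeUpdate();
                    }
                }
            }
        }
    }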

-Alena.

On 7/11/12 8:20 PM, "Mice Xia" <mice_xia@tcloudcomputing.com> wrote:

>Hi, Alena,
>
>I'm trying to follow your steps:
>
>* disable cluster
>Succeeded.
>
>* enable maintenance for the primary storage in the cluster
>Maintenance on the VMware cluster failed for the first two tries, with an
>error message like:
>Unable to create a deployment for VM[ConsoleProxy|v-38-VM]
>
>WARN  [cloud.consoleproxy.ConsoleProxyManagerImpl] (consoleproxy-1:)
>Exception while trying to start console proxy
>com.cloud.exception.InsufficientServerCapacityException: Unable to create
>a deployment for VM[ConsoleProxy|v-47-VM]Scope=interface
>com.cloud.dc.DataCenter; id=1
>
>It seems a new system VM was created each time, but still on the VMware
>cluster, which led to the failure.
>The maintenance succeeded on the third try.
>
>* put hosts in cluster into maintenance mode
>Succeeded.
>
>* destroy system vms
>Destroying them does not stop them from being re-created.
>
>* delete hosts and primary storage
>Failed to delete the primary storage, with the message: there are still
>volumes associated with this pool
>
>* delete the cluster
>
>
>Putting hosts/storage into maintenance mode does not stop system VMs from
>being re-created.
>From the code I can see that the management server gets the supported
>hypervisor types and always fetches the first one, and the first one in my
>environment happens to be VMware (toy sketch below).
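>
>A toy, self-contained Java reproduction of that behavior (the class, enum,
>and method names are illustrative, not CloudStack's actual identifiers):
>
>    import java.util.Arrays;
>    import java.util.List;
>
>    public class HypervisorPick {
>        enum HypervisorType { VMware, XenServer }
>
>        // Mirrors "always fetch the first one": nothing filters out a type
>        // whose only cluster is disabled or in maintenance.
>        static HypervisorType pick(List<HypervisorType> availableInZone) {
>            return availableInZone.get(0);
>        }
>
>        public static void main(String[] args) {
>            List<HypervisorType> inZone =
>                    Arrays.asList(HypervisorType.VMware, HypervisorType.XenServer);
>            System.out.println(pick(inZone)); // prints VMware
>        }
>    }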
>
>I have changed expunge.interval and expunge.delay to 120.
>Should I set consoleproxy.restart = false and update the db to set
>secondary.storage.vm = false?
>
>Regards
>Mice
>
>-----Original Message-----
>From: Alena Prokharchyk [mailto:Alena.Prokharchyk@citrix.com]
>Sent: July 12, 2012 10:03
>To: cloudstack-dev@incubator.apache.org
>Subject: Re: System VMs restarted on a disabled cluster
>
>On 7/11/12 6:29 PM, "Mice Xia" <mice_xia@tcloudcomputing.com> wrote:
>
>>Hi, All
>>
>> 
>>
>>I've set up an environment with two clusters (in the same pod), one
>>XenServer and the other VMware, based on the 3.0.x ASF branch.
>>
>>Now I'm trying to remove the VMware cluster, beginning with disabling it
>>and destroying the system VMs running on it, but the system VMs restarted
>>immediately on the VMware cluster, which blocks cluster removal.
>>
>> 
>>
>>I wonder if this is the expected result by design, or whether it would be
>>better for the system VMs to get allocated on an enabled cluster?
>>
>> 
>>
>> 
>>
>>Regards
>>
>>Mice 
>>
>>
>
>
>
>It's by design. A disabled cluster just can't be used for creating new or
>starting existing user VMs/routers, but it can still be used by system
>resources (SSVM and console proxy).
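>
>A minimal sketch of that rule as described (the enum and method names are
>illustrative only, not CloudStack's actual identifiers):
>
>    public class AllocationRule {
>        enum AllocationState { Enabled, Disabled }
>        enum VmKind { USER, ROUTER, SYSTEM } // SYSTEM = SSVM / console proxy
>
>        // Disabled clusters are skipped for user VMs and routers, but they
>        // remain valid targets for system resources.
>        static boolean canDeploy(AllocationState cluster, VmKind vm) {
>            return cluster == AllocationState.Enabled || vm == VmKind.SYSTEM;
>        }
>
>        public static void main(String[] args) {
>            System.out.println(canDeploy(AllocationState.Disabled, VmKind.USER));   // false
>            System.out.println(canDeploy(AllocationState.Disabled, VmKind.SYSTEM)); // true
>        }
>    }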
>
>To delete the cluster, you need to (see the API sketch after this list):
>
>* disable cluster
>* enable maintenance for the primary storage in the cluster
>* put hosts in cluster into maintenance mode
>
>* destroy system vms
>* delete hosts and primary storage
>* delete the cluster
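>
>For reference, a sketch of the same sequence driven through the CloudStack
>API. The command names (updateCluster, enableStorageMaintenance,
>prepareHostForMaintenance, destroySystemVm, deleteHost, deleteStoragePool,
>deleteCluster) are real API commands; the unsigned integration-port call
>style and the placeholder ids are assumptions for illustration:
>
>    import java.net.HttpURLConnection;
>    import java.net.URL;
>
>    public class ClusterTeardown {
>        // Assumes integration.api.port (default 8096) is enabled on the
>        // management server, which skips API key signing.
>        static final String API = "http://localhost:8096/client/api?command=";
>
>        static void call(String commandAndParams) throws Exception {
>            URL url = new URL(API + commandAndParams);
>            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
>            System.out.println(commandAndParams + " -> HTTP " + conn.getResponseCode());
>            conn.disconnect();
>        }
>
>        public static void main(String[] args) throws Exception {
>            // Async commands return a job id; a real script would poll
>            // queryAsyncJobResult between steps instead of firing blindly.
>            call("updateCluster&id=CLUSTER_ID&allocationstate=Disabled");
>            call("enableStorageMaintenance&id=POOL_ID");
>            call("prepareHostForMaintenance&id=HOST_ID");
>            call("destroySystemVm&id=SYSTEM_VM_ID");
>            call("deleteHost&id=HOST_ID");
>            call("deleteStoragePool&id=POOL_ID");
>            call("deleteCluster&id=CLUSTER_ID");
>        }
>    }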
>
>-Alena.
>
>

