cloudstack-users mailing list archives

From Alena Prokharchyk <Alena.Prokharc...@citrix.com>
Subject Re: Re: System VMs restarted on a disabled cluster
Date Fri, 20 Jul 2012 19:58:01 GMT
All volumes allocated in the pool have to be destroyed first (the "Cannot
delete pool LS_PRIMARY1 as there are associated vols for this pool" error
indicates this). Please destroy all VMs using this pool.
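One way to find the stragglers is to call listVolumes and filter on the "storage" field, which names each volume's primary storage pool in the API response. Below is a minimal sketch of that client-side filter; the sample response is fabricated for illustration, only the field names follow the JSON shape shown later in this thread.

```python
# Sketch: given a parsed listVolumes response, find the volumes (and the
# VMs holding them) that still reference a given primary storage pool.
# The sample data below is fabricated; only the field names ("storage",
# "vmname", "state") follow the CloudStack JSON response format.

def volumes_on_pool(list_volumes_response, pool_name):
    """Return (volume name, vm name, state) for volumes on pool_name."""
    volumes = list_volumes_response.get("listvolumesresponse", {}).get("volume", [])
    return [
        (v.get("name"), v.get("vmname"), v.get("state"))
        for v in volumes
        if v.get("storage") == pool_name
    ]

# Fabricated example shaped like the API output:
sample = {"listvolumesresponse": {"count": 2, "volume": [
    {"name": "ROOT-38", "vmname": "v-38-VM", "state": "Ready",
     "storage": "LS_PRIMARY1"},
    {"name": "ROOT-12", "vmname": "i-2-12-VM", "state": "Ready",
     "storage": "OTHER_POOL"},
]}}
print(volumes_on_pool(sample, "LS_PRIMARY1"))
```

Any VM that shows up here still holds a volume on the pool and must be destroyed (and expunged) before deleteStoragePool can succeed.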

-Alena.

On 7/20/12 12:52 PM, "Evan Miller" <Evan.Miller@citrix.com> wrote:

>Hi Alena:
>
>I got thwarted on one of the cluster deletion steps.
>
>>* disable cluster
>>* enable maintenance for the primary storage in the cluster
>>* put hosts in cluster into maintenance mode
>>
>>* destroy system vms
>>* delete hosts and primary storage
>
>From the CSMS GUI ...
>I can delete the hosts.
>However, I couldn't delete primary storage.
>The error said "Failed to delete storage pool".
>
>I can list the particular storage pool:
>
>FINAL URL AFTER SPECIAL SUBSTITUTION(S):
>  
>http://10.217.5.192:8080/client/api?apikey=bb0HqLkZWZl87olMVaQ1MCWgt_3NPPf
>oWLorilzI-vDpwSgN1KF2KfSoUl00yHNxa8x2aYrMfG2d_s-FXu_Tfg&command=listStorag
>ePools&clusterid=c03d4dee-d8cd-475b-962b-14149ba3be45&response=json&signat
>ure=7q%2BIr4lZMbsjctbnUidIej9gtgk%3D
>
>HEADERS:
>Date: Fri, 20 Jul 2012 19:43:40 GMT
>Server: Apache-Coyote/1.1
>Content-Length: 562
>Content-Type: text/javascript;charset=UTF-8
>Client-Date: Fri, 20 Jul 2012 19:43:39 GMT
>Client-Peer: 10.217.5.192:8080
>Client-Response-Num: 1
>CONTENT:
>HTTP/1.1 200 OK
>Date: Fri, 20 Jul 2012 19:43:40 GMT
>Server: Apache-Coyote/1.1
>Content-Length: 562
>Content-Type: text/javascript;charset=UTF-8
>Client-Date: Fri, 20 Jul 2012 19:43:39 GMT
>Client-Peer: 10.217.5.192:8080
>Client-Response-Num: 1
>
>{ "liststoragepoolsresponse" : { "count":1 ,"storagepool" : [
>{"id":"c9c0319f-33f0-3494-9ada-4d7a2f1dafd4","zoneid":"5127f0df-0d5e-4a22-
>9c88-fba8ff592612","zonename":"LS_ZONE1","podid":"c89cb02e-78f9-413f-8783-
>19d1baaddb03","podname":"LS_POD1","name":"LS_PRIMARY1","ipaddress":"10.217
>.5.192","path":"/home/export/primary","created":"2012-07-20T12:20:01-0700"
>,"type":"NetworkFilesystem","clusterid":"c03d4dee-d8cd-475b-962b-14149ba3b
>e45","clustername":"LS_R12345","disksizetotal":104586543104,"disksizealloc
>ated":2712723968,"tags":"","state":"Maintenance"} ] } }
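The signature parameter in these URLs is an HMAC-SHA1 over the sorted, lowercased query string, keyed by the account's secret key, then base64- and URL-encoded. A minimal sketch of that signing scheme (the credentials below are placeholders, not the ones from this thread):

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign_request(params, secret_key):
    # CloudStack signs the query string of sorted key=value pairs,
    # lowercased, with HMAC-SHA1 under the account's secret key, then
    # base64- and URL-encodes the digest.
    query = "&".join(
        f"{key}={urllib.parse.quote(str(value), safe='')}"
        for key, value in sorted(params.items())
    ).lower()
    digest = hmac.new(secret_key.encode(), query.encode(), hashlib.sha1).digest()
    return urllib.parse.quote(base64.b64encode(digest).decode())

# Placeholder credentials -- not the ones from this thread.
params = {"command": "deleteStoragePool",
          "id": "c9c0319f-33f0-3494-9ada-4d7a2f1dafd4",
          "response": "json",
          "apikey": "MY_API_KEY"}
print(sign_request(params, "MY_SECRET_KEY"))
```

Note that the signature depends on every parameter, which is why the listStoragePools and deleteStoragePool URLs above carry different signature values.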
>
>NOTE: Under Storage tab from the GUI, there is no data.
>
>But I can't delete that storage pool:
>
>FINAL URL AFTER SPECIAL SUBSTITUTION(S):
>  
>http://10.217.5.192:8080/client/api?apikey=bb0HqLkZWZl87olMVaQ1MCWgt_3NPPf
>oWLorilzI-vDpwSgN1KF2KfSoUl00yHNxa8x2aYrMfG2d_s-FXu_Tfg&command=deleteStor
>agePool&id=c9c0319f-33f0-3494-9ada-4d7a2f1dafd4&response=json&signature=8z
>4Rbi2t%2BzKHvCkJ2USIRC%2Bx8oQ%3D
>
>Error My Final URL:
>http://10.217.5.192:8080/client/api?apikey=bb0HqLkZWZl87olMVaQ1MCWgt_3NPPf
>oWLorilzI-vDpwSgN1KF2KfSoUl00yHNxa8x2aYrMfG2d_s-FXu_Tfg&command=deleteStor
>agePool&id=c9c0319f-33f0-3494-9ada-4d7a2f1dafd4&response=json&signature=8z
>4Rbi2t%2BzKHvCkJ2USIRC%2Bx8oQ%3D
><html>
><head><title>An Error Occurred</title></head>
><body>
><h1>An Error Occurred</h1>
><p>530 Unknown code</p>
></body>
></html>
>moonshine#
>
>The api log says:
>
>2012-07-20 12:46:05,499 INFO  [cloud.api.ApiServer]
>(catalina-exec-10:null) (userId=2 accountId=2
>sessionId=DC150E34937E29953352893CADABEA63) 10.216.134.53 -- GET
>command=deleteStoragePool&id=c9c0319f-33f0-3494-9ada-4d7a2f1dafd4&response
>=json&sessionkey=UsR2i5%2FbTT7zW8RfStD8aH6EqVA%3D&_=1342813564939 530
>Failed to delete storage pool
>
>The management log says this:
>
>2012-07-20 12:46:05,497 WARN  [cloud.storage.StorageManagerImpl]
>(catalina-exec-10:null) Cannot delete pool LS_PRIMARY1 as there are
>associated vols for this pool
>
>I need to be able to cleanly (and often) delete clusters, since each
>labscaler reservation
>will require a cluster.
>
>Is there something in the database that needs to be cleaned out?
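The check the management server is effectively making can be reproduced against the database: volumes are soft-deleted (the "removed" column is set when they are expunged), and any row for the pool with removed still NULL blocks deleteStoragePool. The sketch below uses sqlite3 only to stay self-contained; in a real deployment the equivalent query would run against the MySQL "cloud" database, and the column names (pool_id, removed) are assumptions to verify against your schema version.

```python
# Sketch of the "associated vols" check. sqlite3 stands in for the real
# MySQL "cloud" database; column names (pool_id, removed) are assumed
# from the CloudStack schema and should be verified for your version.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE volumes (id INTEGER, name TEXT, state TEXT, "
             "pool_id INTEGER, removed TEXT)")
# One live volume and one already-expunged volume on pool 200:
conn.executemany("INSERT INTO volumes VALUES (?, ?, ?, ?, ?)", [
    (1, "ROOT-38", "Ready", 200, None),
    (2, "ROOT-12", "Expunged", 200, "2012-07-20 12:00:00"),
])

# Rows with removed IS NULL are what block deleteStoragePool; expunged
# volumes keep their pool_id but are soft-deleted via "removed".
blocking = conn.execute(
    "SELECT id, name, state FROM volumes "
    "WHERE pool_id = ? AND removed IS NULL", (200,)
).fetchall()
print(blocking)
```

If such rows exist, the cleaner fix is to destroy and expunge the owning VMs through the API rather than editing the table directly.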
>
>>* delete the cluster
>
>Regards,
>Evan
>
>
>-----Original Message-----
>From: Alena Prokharchyk
>Sent: Friday, July 13, 2012 4:26 PM
>To: Evan Miller
>Subject: FW: Re: System VMs restarted on a disabled cluster
>
>On 7/11/12 8:20 PM, "Mice Xia" <mice_xia@tcloudcomputing.com> wrote:
>
>>Hi, Alena,
>>
>>I'm trying to follow your steps:
>>
>>* disable cluster
>>Succeed.
>>
>>* enable maintenance for the primary storage in the cluster
>>Maintenance on the VMware cluster failed for the first two tries, with
>>an error message like:
>>Unable to create a deployment for VM[ConsoleProxy|v-38-VM]
>>
>>WARN  [cloud.consoleproxy.ConsoleProxyManagerImpl] (consoleproxy-1:)
>>Exception while trying to start console proxy
>>com.cloud.exception.InsufficientServerCapacityException: Unable to
>>create a deployment for VM[ConsoleProxy|v-47-VM]Scope=interface
>>com.cloud.dc.DataCenter; id=1
>>
>>It seems that each time a new system VM was created, it was still placed
>>on the VMware cluster, which led to the failure. The maintenance
>>succeeded on the third try.
>>
>>* put hosts in cluster into maintenance mode
>>Succeeded.
>>
>>* destroy system vms
>>Destroying them does not stop them from being re-created.
>>
>>* delete hosts and primary storage
>>Failed to delete primary storage, with the message: there are still
>>volumes associated with this pool
>>
>>* delete the cluster
>>
>>
>>Putting hosts/storage into maintenance mode does not stop system VMs
>>from being re-created. From the code I can see the management server
>>gets the supported hypervisorTypes and always fetches the first one,
>>and the first one in my environment happens to be VMware.
>>
>>I have changed expunge.interval = expunge.delay = 120. Should I set
>>consoleproxy.restart = false and update the db to set
>>secondary.storage.vm = false?
>>
>>Regards
>>Mice
>>
>>-----Original Message-----
>>From: Alena Prokharchyk [mailto:Alena.Prokharchyk@citrix.com]
>>Sent: July 12, 2012 10:03
>>To: cloudstack-dev@incubator.apache.org
>>Subject: Re: System VMs restarted on a disabled cluster
>>
>>On 7/11/12 6:29 PM, "Mice Xia" <mice_xia@tcloudcomputing.com> wrote:
>>
>>>Hi, All
>>>
>>> 
>>>
>>>I've set up an environment with two clusters (in the same pod), one
>>>XenServer and the other VMware, based on the 3.0.x ASF branch.
>>>
>>>Now I'm trying to remove the VMware cluster, beginning with disabling
>>>it and destroying the system VMs running on it, but the system VMs
>>>restarted immediately on the VMware cluster, which blocks cluster removal.
>>>
>>> 
>>>
>>>I wonder if this is the expected result by design, or would it be
>>>better for the system VMs to be allocated on an enabled cluster?
>>>
>>> 
>>>
>>> 
>>>
>>>Regards
>>>
>>>Mice
>>>
>>>
>>
>>
>>
>>It's by design. A disabled cluster just can't be used for creating new /
>>starting existing user VMs / routers; but it can still be used by
>>system resources (SSVM and console proxy).
>>
>>To delete the cluster, you need to:
>>
>>* disable cluster
>>* enable maintenance for the primary storage in the cluster
>>* put hosts in cluster into maintenance mode
>>
>>* destroy system vms
>>* delete hosts and primary storage
>>* delete the cluster
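The steps above can be sketched as an ordered plan of API commands, each pair ready to be signed and sent as a request. The command names (updateCluster, enableStorageMaintenance, prepareHostForMaintenance, destroySystemVm, deleteHost, deleteStoragePool, deleteCluster) are the standard CloudStack API commands; whether a particular 3.0.x build accepts all of them exactly this way should be verified against its API reference.

```python
# Sketch: the cluster-teardown sequence as an ordered list of
# (command, params) pairs. The ids below are placeholders; each pair
# would be signed and issued as a separate API request.

def teardown_plan(cluster_id, pool_id, host_ids, system_vm_ids):
    plan = [("updateCluster",
             {"id": cluster_id, "allocationstate": "Disabled"}),
            ("enableStorageMaintenance", {"id": pool_id})]
    plan += [("prepareHostForMaintenance", {"id": h}) for h in host_ids]
    plan += [("destroySystemVm", {"id": v}) for v in system_vm_ids]
    plan += [("deleteHost", {"id": h}) for h in host_ids]
    plan += [("deleteStoragePool", {"id": pool_id}),
             ("deleteCluster", {"id": cluster_id})]
    return plan

plan = teardown_plan("cl-1", "pool-1", ["h-1", "h-2"], ["v-38", "s-39"])
for command, params in plan:
    print(command, params)
```

The ordering matters: system VMs must be destroyed (and their volumes expunged) before deleteStoragePool, or it fails with the "associated vols" error seen in this thread.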
>>
>>-Alena.
>>
>>
>
>
>

