Date: Wed, 8 Jul 2015 10:05:05 +0000 (UTC)
From: "Daan Hoogland (JIRA)"
To: cloudstack-issues@incubator.apache.org
Reply-To: dev@cloudstack.apache.org
Subject: [jira] [Updated] (CLOUDSTACK-8442) [VMWARE] VM Cannot be powered on after restoreVirtualMachine

     [ https://issues.apache.org/jira/browse/CLOUDSTACK-8442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Daan Hoogland updated CLOUDSTACK-8442:
--------------------------------------
    Fix Version/s:     (was: 4.5.2)
                   4.6.0

> [VMWARE] VM Cannot be powered on after restoreVirtualMachine
> -------------------------------------------------------------
>
>                 Key: CLOUDSTACK-8442
>                 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8442
>             Project: CloudStack
>          Issue Type: Bug
>      Security Level: Public (Anyone can view this level - this is the default.)
>          Components: Management Server
>    Affects Versions: 4.5.1
>         Environment: ACS 4.5.1, CentOS 6.6
>                      vSphere 5.5 with NFS for Primary Storage
>            Reporter: ilya musayev
>              Labels: vmware
>             Fix For: 4.6.0
>
>
> While the restoreVirtualMachine call itself succeeds, vSphere fails to find and use the proper ROOT volume when the VM is subsequently powered on.
> To reproduce this issue: create a project, deploy a VM from template X in that project, then use the restoreVirtualMachine API call to alter the ROOT disk, and attempt to power the VM on.
> The same errors are seen in vCenter.
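For reference, the reproduction steps above can also be driven through the API, for example with the CloudMonkey CLI. This is only an illustrative sketch: all UUIDs/IDs are placeholders rather than values from this environment, and the preliminary stop is an assumption about how the VM ended up powered off before the restore.

    # create a project and deploy a VM from template X into it
    create project name=repro displaytext=repro
    deploy virtualmachine serviceofferingid=<offering-uuid> templateid=<template-X-uuid> zoneid=<zone-uuid> projectid=<project-uuid>

    # re-create the ROOT disk of the (stopped) VM from the same template
    stop virtualmachine id=<vm-uuid>
    restore virtualmachine virtualmachineid=<vm-uuid> templateid=<template-X-uuid>

    # attempt to power the VM back on; on vSphere the start fails with
    # "File []/vmfs/volumes/<datastore>/<old-folder>/ROOT-<id>.vmdk was not found"
    start virtualmachine id=<vm-uuid>

The management-server log from such a failed start follows.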
> 2015-05-05 06:38:43,962 INFO [o.a.c.f.j.i.AsyncJobMonitor] (API-Job-Exec= utor-54:ctx-a7142d34 job-8077) Add job-8077 into job monitoring > 2015-05-05 06:38:43,969 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (catalina= -exec-7:ctx-6e032e40 ctx-8bb374e0) submit async job-8077, details: AsyncJob= VO {id:8077, userId: 2, accountId: 2, instanceType: VirtualMachine, instanc= eId: 1350, cmd: org.apache.cloudstack.api.command.admin.vm.StartVMCmdByAdmi= n, cmdInfo: {"id":"bb958b5f-a374-4f0a-a7e2-b1ed877ac0e2","response":"json",= "sessionkey":"EfTBAqeGH5ivA9E7W1q7gcYXWgI\u003d","ctxDetails":"{\"com.cloud= .vm.VirtualMachine\":\"bb958b5f-a374-4f0a-a7e2-b1ed877ac0e2\"}","cmdEventTy= pe":"VM.START","ctxUserId":"2","httpmethod":"GET","projectid":"98b2e16f-1e4= f-4b19-866b-154ef5aad53d","_":"1430807923839","uuid":"bb958b5f-a374-4f0a-a7= e2-b1ed877ac0e2","ctxAccountId":"2","ctxStartEventId":"17421"}, cmdVersion:= 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, ini= tMsid: 345049223690, completeMsid: null, lastUpdated: null, lastPolled: nul= l, created: null} > 2015-05-05 06:38:43,978 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-= Executor-54:ctx-a7142d34 job-8077) Executing AsyncJobVO {id:8077, userId: 2= , accountId: 2, instanceType: VirtualMachine, instanceId: 1350, cmd: org.ap= ache.cloudstack.api.command.admin.vm.StartVMCmdByAdmin, cmdInfo: {"id":"bb9= 58b5f-a374-4f0a-a7e2-b1ed877ac0e2","response":"json","sessionkey":"EfTBAqeG= H5ivA9E7W1q7gcYXWgI\u003d","ctxDetails":"{\"com.cloud.vm.VirtualMachine\":\= "bb958b5f-a374-4f0a-a7e2-b1ed877ac0e2\"}","cmdEventType":"VM.START","ctxUse= rId":"2","httpmethod":"GET","projectid":"98b2e16f-1e4f-4b19-866b-154ef5aad5= 3d","_":"1430807923839","uuid":"bb958b5f-a374-4f0a-a7e2-b1ed877ac0e2","ctxA= ccountId":"2","ctxStartEventId":"17421"}, cmdVersion: 0, status: IN_PROGRES= S, processStatus: 0, resultCode: 0, result: null, initMsid: 345049223690, c= ompleteMsid: null, lastUpdated: null, lastPolled: null, created: null} > 2015-05-05 06:38:43,990 WARN [c.c.a.d.ParamGenericValidationWorker] (API= -Job-Executor-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Received unknown param= eters for command startVirtualMachine. 
Unknown parameters : projectid > 2015-05-05 06:38:44,020 DEBUG [c.c.n.NetworkModelImpl] (API-Job-Executor-= 54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Service SecurityGroup is not support= ed in the network id=3D224 > 2015-05-05 06:38:44,025 DEBUG [c.c.n.NetworkModelImpl] (API-Job-Executor-= 54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Service SecurityGroup is not support= ed in the network id=3D224 > 2015-05-05 06:38:44,045 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (API-= Job-Executor-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Deploy avoids pods: [],= clusters: [], hosts: [] > 2015-05-05 06:38:44,046 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (API-= Job-Executor-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) DeploymentPlanner alloc= ation algorithm: com.cloud.deploy.FirstFitPlanner@49361de4 > 2015-05-05 06:38:44,046 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (API-= Job-Executor-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Trying to allocate a ho= st and storage pools from dc:1, pod:1,cluster:null, requested cpu: 100, req= uested ram: 2147483648 > 2015-05-05 06:38:44,047 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (API-= Job-Executor-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Is ROOT volume READY (p= ool already allocated)?: No > 2015-05-05 06:38:44,047 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (API-= Job-Executor-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) This VM has last host_i= d specified, trying to choose the same host: 5 > 2015-05-05 06:38:44,055 DEBUG [c.c.c.CapacityManagerImpl] (API-Job-Execut= or-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Checking if host: 5 has enough ca= pacity for requested CPU: 100 and requested RAM: 2147483648 , cpuOverprovis= ioningFactor: 1.0 > 2015-05-05 06:38:44,058 DEBUG [c.c.c.CapacityManagerImpl] (API-Job-Execut= or-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Hosts's actual total CPU: 70208 a= nd CPU after applying overprovisioning: 70208 > 2015-05-05 06:38:44,058 DEBUG [c.c.c.CapacityManagerImpl] (API-Job-Execut= or-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) We need to allocate to the last h= ost again, so checking if there is enough reserved capacity > 2015-05-05 06:38:44,058 DEBUG [c.c.c.CapacityManagerImpl] (API-Job-Execut= or-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Reserved CPU: 100 , Requested CPU= : 100 > 2015-05-05 06:38:44,059 DEBUG [c.c.c.CapacityManagerImpl] (API-Job-Execut= or-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Reserved RAM: 2147483648 , Reques= ted RAM: 2147483648 > 2015-05-05 06:38:44,059 DEBUG [c.c.c.CapacityManagerImpl] (API-Job-Execut= or-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Host has enough CPU and RAM avail= able > 2015-05-05 06:38:44,059 DEBUG [c.c.c.CapacityManagerImpl] (API-Job-Execut= or-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) STATS: Can alloc CPU from host: 5= , used: 6000, reserved: 100, actual total: 70208, total with overprovisioni= ng: 70208; requested cpu:100,alloc_from_last_host?:true ,considerReservedCa= pacity?: true > 2015-05-05 06:38:44,059 DEBUG [c.c.c.CapacityManagerImpl] (API-Job-Execut= or-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) STATS: Can alloc MEM from host: 5= , used: 177704271872, reserved: 2147483648, total: 1374207606784; requested= mem: 2147483648,alloc_from_last_host?:true ,considerReservedCapacity?: tru= e > 2015-05-05 06:38:44,061 DEBUG [c.c.c.CapacityManagerImpl] (API-Job-Execut= or-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Host: 5 has cpu capability (cpu:3= 2, speed:2194) to support requested CPU: 1 and requested speed: 100 > 2015-05-05 06:38:44,061 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (API-= Job-Executor-54:ctx-a7142d34 job-8077 
ctx-b6fc1bbf) The last host of this V= M is UP and has enough capacity > 2015-05-05 06:38:44,061 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (API-= Job-Executor-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Now checking for suitab= le pools under zone: 1, pod: 1, cluster: 2 > 2015-05-05 06:38:44,065 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (API-= Job-Executor-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Checking suitable pools= for volume (Id, Type): (1607,ROOT) > 2015-05-05 06:38:44,065 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (API-= Job-Executor-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) We need to allocate new= storagepool for this volume > 2015-05-05 06:38:44,068 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (API-= Job-Executor-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Calling StoragePoolAllo= cators to find suitable pools > 2015-05-05 06:38:44,070 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (API-= Job-Executor-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) LocalStoragePoolAllocat= or trying to find storage pool to fit the vm > 2015-05-05 06:38:44,070 DEBUG [o.a.c.s.a.ClusterScopeStoragePoolAllocator= ] (API-Job-Executor-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) ClusterScopeStor= agePoolAllocator looking for storage pool > 2015-05-05 06:38:44,070 DEBUG [o.a.c.s.a.ClusterScopeStoragePoolAllocator= ] (API-Job-Executor-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Looking for pool= s in dc: 1 pod:1 cluster:2 > 2015-05-05 06:38:44,072 DEBUG [o.a.c.s.a.ClusterScopeStoragePoolAllocator= ] (API-Job-Executor-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Found pools matc= hing tags: [Pool[5|NetworkFilesystem]] > 2015-05-05 06:38:44,075 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator] (A= PI-Job-Executor-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Checking if storage = pool is suitable, name: null ,poolId: 5 > 2015-05-05 06:38:44,077 INFO [c.c.s.StorageManagerImpl] (API-Job-Executo= r-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Storage pool null (5) does not sup= ply IOPS capacity, assuming enough capacity > 2015-05-05 06:38:44,079 DEBUG [c.c.s.StorageManagerImpl] (API-Job-Executo= r-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Checking pool 5 for storage, total= Size: 8796093022208, usedBytes: 5124216307712, usedPct: 0.5825559478253126,= disable threshold: 0.9 > 2015-05-05 06:38:44,102 DEBUG [c.c.s.StorageManagerImpl] (API-Job-Executo= r-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Found storage pool Madhouse-Dev-CI= C-NFS-primary of type NetworkFilesystem with overprovisioning factor 6 > 2015-05-05 06:38:44,102 DEBUG [c.c.s.StorageManagerImpl] (API-Job-Executo= r-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Total over provisioned capacity ca= lculated is 6 * 8796093022208 > 2015-05-05 06:38:44,102 DEBUG [c.c.s.StorageManagerImpl] (API-Job-Executo= r-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Total capacity of the pool Madhous= e-Dev-CIC-NFS-primary id: 5 is 52776558133248 > 2015-05-05 06:38:44,103 DEBUG [c.c.s.StorageManagerImpl] (API-Job-Executo= r-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Checking pool: 5 for volume alloca= tion [Vol[1607|vm=3D1350|ROOT]], maxSize : 52776558133248, totalAllocatedSi= ze : 9007464972808, askingSize : 21474836480, allocated disable threshold: = 0.9 > 2015-05-05 06:38:44,103 DEBUG [o.a.c.s.a.ClusterScopeStoragePoolAllocator= ] (API-Job-Executor-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) ClusterScopeStor= agePoolAllocator returning 1 suitable storage pools > 2015-05-05 06:38:44,103 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (API-= Job-Executor-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Trying to find a poteni= al host and associated storage pools 
from the suitable host/pool lists for = this VM > 2015-05-05 06:38:44,103 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (API-= Job-Executor-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Checking if host: 5 can= access any suitable storage pool for volume: ROOT > 2015-05-05 06:38:44,104 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (API-= Job-Executor-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Host: 5 can access pool= : 5 > 2015-05-05 06:38:44,105 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (API-= Job-Executor-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Found a potential host = id: 5 name: fed33-madhoused-grv06.zonex.cloudsand.com and associated storag= e pools for this VM > 2015-05-05 06:38:44,106 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (API-= Job-Executor-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Returning Deployment De= stination: Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Typ= e-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(2)-Host(5)-Storage(Volume(1607= |ROOT-->Pool(5))] > 2015-05-05 06:38:44,128 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-= Executor-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Sync job-8078 execution on = object VmWorkJobQueue.1350 > 2015-05-05 06:38:45,743 INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Exe= cutor-112:ctx-ff983380 job-8077/job-8078) Add job-8078 into job monitoring > 2015-05-05 06:38:45,752 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job= -Executor-112:ctx-ff983380 job-8077/job-8078) Executing AsyncJobVO {id:8078= , userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.c= loud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9c= MGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50= UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9= zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTm= V0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMA= AlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29t= LmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEw= AC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAFRnQAGVZpcnR1YWxNYW= NoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoAB= XZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAJzcQB-AAgAAAAA= AAAABXBwcHNxAH4ACAAAAAAAAAABcHNyABFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgA= KbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZH= QAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PRO= GRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 34504922369= 0, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Ma= y 05 06:38:44 GMT 2015} > 2015-05-05 06:38:45,753 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Execu= tor-112:ctx-ff983380 job-8077/job-8078) Run VM work job: com.cloud.vm.VmWor= kStart for VM 1350, job origin: 8077 > 2015-05-05 06:38:45,754 DEBUG [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Exe= cutor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Execute VM work job:= com.cloud.vm.VmWorkStart{"dcId":1,"podId":1,"clusterId":2,"hostId":5,"rawP= arams":{"VmPassword":"rO0ABXQADnNhdmVkX3Bhc3N3b3Jk"},"userId":2,"accountId"= :2,"vmId":1350,"handlerName":"VirtualMachineManagerImpl"} > 2015-05-05 06:38:45,760 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) VM state transitted fr= om :Stopped to Starting with event: StartRequestedvm's original host id: 5 = new host id: null host id before state 
transition: null > 2015-05-05 06:38:45,761 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job= -Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Successfully tra= nsitioned to start state for VM[User|i-149-1350-VM] reservation id =3D ae87= 0499-0ca4-4aa0-9c83-8314628bc69f > 2015-05-05 06:38:45,766 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job= -Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Trying to deploy= VM, vm has dcId: 1 and podId: 1 > 2015-05-05 06:38:45,766 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job= -Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) advanceStart: De= ploymentPlan is provided, using dcId:1, podId: 1, clusterId: 2, hostId: 5, = poolId: null > 2015-05-05 06:38:45,766 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job= -Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Deploy avoids po= ds: null, clusters: null, hosts: null > 2015-05-05 06:38:45,774 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work= -Job-Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Deploy avoid= s pods: [], clusters: [], hosts: [] > 2015-05-05 06:38:45,774 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work= -Job-Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) DeploymentPl= anner allocation algorithm: com.cloud.deploy.FirstFitPlanner@49361de4 > 2015-05-05 06:38:45,774 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work= -Job-Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Trying to al= locate a host and storage pools from dc:1, pod:1,cluster:2, requested cpu: = 100, requested ram: 2147483648 > 2015-05-05 06:38:45,775 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work= -Job-Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Is ROOT volu= me READY (pool already allocated)?: No > 2015-05-05 06:38:45,775 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work= -Job-Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) DeploymentPl= an has host_id specified, choosing this host and making no checks on this h= ost: 5 > 2015-05-05 06:38:45,776 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work= -Job-Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Looking for = suitable pools for this host under zone: 1, pod: 1, cluster: 2 > 2015-05-05 06:38:45,778 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work= -Job-Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Checking sui= table pools for volume (Id, Type): (1607,ROOT) > 2015-05-05 06:38:45,778 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work= -Job-Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) We need to a= llocate new storagepool for this volume > 2015-05-05 06:38:45,780 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work= -Job-Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Calling Stor= agePoolAllocators to find suitable pools > 2015-05-05 06:38:45,780 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work= -Job-Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) LocalStorage= PoolAllocator trying to find storage pool to fit the vm > 2015-05-05 06:38:45,780 DEBUG [o.a.c.s.a.ClusterScopeStoragePoolAllocator= ] (Work-Job-Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Clust= erScopeStoragePoolAllocator looking for storage pool > 2015-05-05 06:38:45,780 DEBUG [o.a.c.s.a.ClusterScopeStoragePoolAllocator= ] (Work-Job-Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Looki= ng for pools in dc: 1 pod:1 cluster:2 > 2015-05-05 06:38:45,781 DEBUG [o.a.c.s.a.ClusterScopeStoragePoolAllocator= ] (Work-Job-Executor-112:ctx-ff983380 
job-8077/job-8078 ctx-c097f8e0) Found= pools matching tags: [Pool[5|NetworkFilesystem]] > 2015-05-05 06:38:45,783 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator] (W= ork-Job-Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Checking = if storage pool is suitable, name: null ,poolId: 5 > 2015-05-05 06:38:45,784 INFO [c.c.s.StorageManagerImpl] (Work-Job-Execut= or-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Storage pool null (5) d= oes not supply IOPS capacity, assuming enough capacity > 2015-05-05 06:38:45,785 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Execut= or-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Checking pool 5 for sto= rage, totalSize: 8796093022208, usedBytes: 5124216307712, usedPct: 0.582555= 9478253126, disable threshold: 0.9 > 2015-05-05 06:38:45,796 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Execut= or-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Found storage pool Madh= ouse-Dev-CIC-NFS-primary of type NetworkFilesystem with overprovisioning fa= ctor 6 > 2015-05-05 06:38:45,796 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Execut= or-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Total over provisioned = capacity calculated is 6 * 8796093022208 > 2015-05-05 06:38:45,796 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Execut= or-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Total capacity of the p= ool Madhouse-Dev-CIC-NFS-primary id: 5 is 52776558133248 > 2015-05-05 06:38:45,797 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Execut= or-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Checking pool: 5 for vo= lume allocation [Vol[1607|vm=3D1350|ROOT]], maxSize : 52776558133248, total= AllocatedSize : 9007464972808, askingSize : 21474836480, allocated disable = threshold: 0.9 > 2015-05-05 06:38:45,797 DEBUG [o.a.c.s.a.ClusterScopeStoragePoolAllocator= ] (Work-Job-Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Clust= erScopeStoragePoolAllocator returning 1 suitable storage pools > 2015-05-05 06:38:45,797 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work= -Job-Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Trying to fi= nd a potenial host and associated storage pools from the suitable host/pool= lists for this VM > 2015-05-05 06:38:45,797 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work= -Job-Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Checking if = host: 5 can access any suitable storage pool for volume: ROOT > 2015-05-05 06:38:45,797 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work= -Job-Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Host: 5 can = access pool: 5 > 2015-05-05 06:38:45,798 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work= -Job-Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Found a pote= ntial host id: 5 name: fed33-madhoused-grv06.zonex.cloudsand.com and associ= ated storage pools for this VM > 2015-05-05 06:38:45,798 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work= -Job-Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Returning De= ployment Destination: Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Vo= lume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(2)-Host(5)-Storage(= Volume(1607|ROOT-->Pool(5))] > 2015-05-05 06:38:45,799 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job= -Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Deployment found= - P0=3DVM[User|i-149-1350-VM], P0=3DDest[Zone(Id)-Pod(Id)-Cluster(Id)-Hos= t(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(2)-= Host(5)-Storage(Volume(1607|ROOT-->Pool(5))] > 2015-05-05 
06:38:45,843 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) VM state transitted fr= om :Starting to Starting with event: OperationRetryvm's original host id: 5= new host id: 5 host id before state transition: null > 2015-05-05 06:38:45,843 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) VM starting again on t= he last host it was stopped on > 2015-05-05 06:38:45,851 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Hosts's actual total C= PU: 70208 and CPU after applying overprovisioning: 70208 > 2015-05-05 06:38:45,851 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) We are allocating VM, = increasing the used capacity of this host:5 > 2015-05-05 06:38:45,851 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Current Used CPU: 6000= , Free CPU:64108 ,Requested CPU: 100 > 2015-05-05 06:38:45,851 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Current Used RAM: 1777= 04271872 , Free RAM:1194355851264 ,Requested RAM: 2147483648 > 2015-05-05 06:38:45,851 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) We are allocating VM t= o the last host again, so adjusting the reserved capacity if it is not less= than required > 2015-05-05 06:38:45,851 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Reserved CPU: 100 , Re= quested CPU: 100 > 2015-05-05 06:38:45,851 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Reserved RAM: 21474836= 48 , Requested RAM: 2147483648 > 2015-05-05 06:38:45,851 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) CPU STATS after alloca= tion: for host: 5, old used: 6000, old reserved: 100, actual total: 70208, = total with overprovisioning: 70208; new used:6100, reserved:0; requested cp= u:100,alloc_from_last:true > 2015-05-05 06:38:45,851 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) RAM STATS after alloca= tion: for host: 5, old used: 177704271872, old reserved: 2147483648, total:= 1374207606784; new used: 179851755520, reserved: 0; requested mem: 2147483= 648,alloc_from_last:true > 2015-05-05 06:38:45,857 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job= -Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) VM is being crea= ted in podId: 1 > 2015-05-05 06:38:45,864 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-E= xecutor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Network id=3D224 i= s already implemented > 2015-05-05 06:38:45,874 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor= -112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Service SecurityGroup is = not supported in the network id=3D224 > 2015-05-05 06:38:45,878 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-E= xecutor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Changing active nu= mber of nics for network id=3D224 on 1 > 2015-05-05 06:38:45,885 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-E= xecutor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Asking VirtualRout= er to prepare for Nic[1357-1350-null-172.83.24.37] > 2015-05-05 06:38:45,895 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl= ] 
(Work-Job-Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Lock = is acquired for network id 224 as a part of router startup in Dest[Zone(Id)= -Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zo= ne(1)-Pod(1)-Cluster(2)-Host(5)-Storage(Volume(1607|ROOT-->Pool(5))] > 2015-05-05 06:38:45,898 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl= ] (Work-Job-Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Lock = is released for network id 224 as a part of router startup in Dest[Zone(Id)= -Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zo= ne(1)-Pod(1)-Cluster(2)-Host(5)-Storage(Volume(1607|ROOT-->Pool(5))] > 2015-05-05 06:38:45,905 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor= -112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Service SecurityGroup is = not supported in the network id=3D224 > 2015-05-05 06:38:45,916 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor= -112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Service SecurityGroup is = not supported in the network id=3D224 > 2015-05-05 06:38:45,922 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl= ] (Work-Job-Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Apply= ing dhcp entry in network Ntwk[224|Guest|7] > 2015-05-05 06:38:45,935 DEBUG [c.c.a.t.Request] (Work-Job-Executor-112:ct= x-ff983380 job-8077/job-8078 ctx-c097f8e0) Seq 4-3564599105063748816: Sendi= ng { Cmd , MgmtId: 345049223690, via: 4(fed33-madhoused-grv04.zonex.clouds= and.com), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.DhcpEntryC= ommand":{"vmMac":"06:34:d2:00:11:af","vmIpAddress":"172.83.24.37","vmName":= "test","defaultRouter":"172.83.24.1","defaultDns":"172.83.24.10","duid":"00= :03:00:01:06:34:d2:00:11:af","isDefault":true,"executeInSequence":false,"ac= cessDetails":{"router.guest.ip":"172.83.24.10","zone.network.type":"Advance= d","router.name":"r-1017-VM","router.ip":"10.178.84.38"},"wait":0}}] } > 2015-05-05 06:38:45,936 DEBUG [c.c.a.t.Request] (Work-Job-Executor-112:ct= x-ff983380 job-8077/job-8078 ctx-c097f8e0) Seq 4-3564599105063748816: Execu= ting: { Cmd , MgmtId: 345049223690, via: 4(fed33-madhoused-grv04.zonex.clo= udsand.com), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.DhcpEnt= ryCommand":{"vmMac":"06:34:d2:00:11:af","vmIpAddress":"172.83.24.37","vmNam= e":"test","defaultRouter":"172.83.24.1","defaultDns":"172.83.24.10","duid":= "00:03:00:01:06:34:d2:00:11:af","isDefault":true,"executeInSequence":false,= "accessDetails":{"router.guest.ip":"172.83.24.10","zone.network.type":"Adva= nced","router.name":"r-1017-VM","router.ip":"10.178.84.38"},"wait":0}}] } > 2015-05-05 06:38:45,936 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-415= :ctx-d5cb637f fed33-madhoused-grv04.zonex.cloudsand.com, job-8077/job-8078,= cmd: DhcpEntryCommand) Use router's private IP for SSH control. IP : 10.17= 8.84.38 > 2015-05-05 06:38:45,936 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-415= :ctx-d5cb637f fed33-madhoused-grv04.zonex.cloudsand.com, job-8077/job-8078,= cmd: DhcpEntryCommand) Run command on VR: 10.178.84.38, script: edithosts.= sh with args: -m 06:34:d2:00:11:af -4 172.83.24.37 -h test -d 172.83.24.1 = -n 172.83.24.10 > 2015-05-05 06:38:45,936 INFO [c.c.h.v.u.VmwareContextPool] (DirectAgent-= 415:ctx-d5cb637f fed33-madhoused-grv04.zonex.cloudsand.com, job-8077/job-80= 78, cmd: DhcpEntryCommand) Return a VmwareContext from the idle pool: root@= fed33-madhousedev-lvc01.zonex.cloudsand.com. 
current pool size: 6, outstand= ing count: 8 > 2015-05-05 06:38:46,461 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-415= :ctx-d5cb637f fed33-madhoused-grv04.zonex.cloudsand.com, job-8077/job-8078,= cmd: DhcpEntryCommand) edithosts.sh execution result: true > 2015-05-05 06:38:46,461 INFO [c.c.h.v.u.VmwareContextPool] (DirectAgent-= 415:ctx-d5cb637f fed33-madhoused-grv04.zonex.cloudsand.com, job-8077/job-80= 78, cmd: DhcpEntryCommand) Recycle VmwareContext into idle pool: root@fed33= -madhousedev-lvc01.zonex.cloudsand.com, current idle pool size: 7, outstand= ing count: 8 > 2015-05-05 06:38:46,461 DEBUG [c.c.a.t.Request] (Work-Job-Executor-112:ct= x-ff983380 job-8077/job-8078 ctx-c097f8e0) Seq 4-3564599105063748816: Recei= ved: { Ans: , MgmtId: 345049223690, via: 4, Ver: v1, Flags: 10, { Answer }= } > 2015-05-05 06:38:46,489 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor= -112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Service SecurityGroup is = not supported in the network id=3D224 > 2015-05-05 06:38:46,504 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl= ] (Work-Job-Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Apply= ing userdata and password entry in network Ntwk[224|Guest|7] > 2015-05-05 06:38:46,537 DEBUG [c.c.a.t.Request] (Work-Job-Executor-112:ct= x-ff983380 job-8077/job-8078 ctx-c097f8e0) Seq 4-3564599105063748817: Sendi= ng { Cmd , MgmtId: 345049223690, via: 4(fed33-madhoused-grv04.zonex.clouds= and.com), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.SavePasswo= rdCommand":{,"vmIpAddress":"172.83.24.37","vmName":"test","executeInSequenc= e":false,"accessDetails":{"router.guest.ip":"172.83.24.10","zone.network.ty= pe":"Advanced","router.ip":"10.178.84.38","router.name":"r-1017-VM"},"wait"= :0}},{"com.cloud.agent.api.routing.VmDataCommand":{"vmIpAddress":"172.83.24= .37","vmName":"test","executeInSequence":false,"accessDetails":{"router.gue= st.ip":"172.83.24.10","zone.network.type":"Advanced","router.ip":"10.178.84= .38","router.name":"r-1017-VM"},"wait":0}}] } > 2015-05-05 06:38:46,543 DEBUG [c.c.a.t.Request] (Work-Job-Executor-112:ct= x-ff983380 job-8077/job-8078 ctx-c097f8e0) Seq 4-3564599105063748817: Execu= ting: { Cmd , MgmtId: 345049223690, via: 4(fed33-madhoused-grv04.zonex.clo= udsand.com), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.SavePas= swordCommand":{,"vmIpAddress":"172.83.24.37","vmName":"test","executeInSequ= ence":false,"accessDetails":{"router.guest.ip":"172.83.24.10","zone.network= .type":"Advanced","router.ip":"10.178.84.38","router.name":"r-1017-VM"},"wa= it":0}},{"com.cloud.agent.api.routing.VmDataCommand":{"vmIpAddress":"172.83= .24.37","vmName":"test","executeInSequence":false,"accessDetails":{"router.= guest.ip":"172.83.24.10","zone.network.type":"Advanced","router.ip":"10.178= .84.38","router.name":"r-1017-VM"},"wait":0}}] } > 2015-05-05 06:38:46,543 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-414= :ctx-b48e6903 fed33-madhoused-grv04.zonex.cloudsand.com, job-8077/job-8078,= cmd: SavePasswordCommand) Use router's private IP for SSH control. 
IP : 10= .178.84.38 > 2015-05-05 06:38:46,543 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-414= :ctx-b48e6903 fed33-madhoused-grv04.zonex.cloudsand.com, job-8077/job-8078,= cmd: SavePasswordCommand) Run command on VR: 10.178.84.38, script: savepas= sword.sh with args: -v 172.83.24.37 -p saved_password > 2015-05-05 06:38:46,543 INFO [c.c.h.v.u.VmwareContextPool] (DirectAgent-= 414:ctx-b48e6903 fed33-madhoused-grv04.zonex.cloudsand.com, job-8077/job-80= 78, cmd: SavePasswordCommand) Return a VmwareContext from the idle pool: ro= ot@fed33-madhousedev-lvc01.zonex.cloudsand.com. current pool size: 6, outst= anding count: 8 > 2015-05-05 06:38:46,920 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-414= :ctx-b48e6903 fed33-madhoused-grv04.zonex.cloudsand.com, job-8077/job-8078,= cmd: SavePasswordCommand) savepassword.sh execution result: true > 2015-05-05 06:38:46,920 INFO [c.c.h.v.u.VmwareContextPool] (DirectAgent-= 414:ctx-b48e6903 fed33-madhoused-grv04.zonex.cloudsand.com, job-8077/job-80= 78, cmd: SavePasswordCommand) Recycle VmwareContext into idle pool: root@fe= d33-madhousedev-lvc01.zonex.cloudsand.com, current idle pool size: 6, outst= anding count: 8 > 2015-05-05 06:38:46,921 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-414= :ctx-b48e6903 fed33-madhoused-grv04.zonex.cloudsand.com, job-8077/job-8078,= cmd: VmDataCommand) Use router's private IP for SSH control. IP : 10.178.8= 4.38 > 2015-05-05 06:38:46,921 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-414= :ctx-b48e6903 fed33-madhoused-grv04.zonex.cloudsand.com, job-8077/job-8078,= cmd: VmDataCommand) Run command on VR: 10.178.84.38, script: vmdata.py wit= h args: -d eyIxMC4xODUuMjQuMzciOltbInVzZXJkYXRhIiwidXNlci1kYXRhIixudWxsXSxb= Im1ldGFkYXRhIiwic2VydmljZS1vZmZlcmluZyIsIlNtYWxsIDFDUFV4MkdCIl0sWyJtZXRhZGF= 0YSIsImF2YWlsYWJpbGl0eS16b25lIiwiTW9uc29vbi1EZXYtQ0lDLXpvbmUiXSxbIm1ldGFkYX= RhIiwibG9jYWwtaXB2NCIsIjEwLjE4NS4yNC4zNyJdLFsibWV0YWRhdGEiLCJsb2NhbC1ob3N0b= mFtZSIsInRlc3QiXSxbIm1ldGFkYXRhIiwicHVibGljLWlwdjQiLCIxMC4xODUuMjQuMzciXSxb= Im1ldGFkYXRhIiwicHVibGljLWhvc3RuYW1lIixudWxsXSxbIm1ldGFkYXRhIiwiaW5zdGFuY2U= taWQiLCJiYjk1OGI1Zi1hMzc0LTRmMGEtYTdlMi1iMWVkODc3YWMwZTIiXSxbIm1ldGFkYXRhIi= widm0taWQiLCJiYjk1OGI1Zi1hMzc0LTRmMGEtYTdlMi1iMWVkODc3YWMwZTIiXSxbIm1ldGFkY= XRhIiwicHVibGljLWtleXMiLG51bGxdLFsibWV0YWRhdGEiLCJjbG91ZC1pZGVudGlmaWVyIiwi= Q2xvdWRTdGFjay17NjIxMGQ2YzMtYmM4Mi00MjAwLTg3M2QtOGI2NjVjYjM5YjdmfSJdXX0=3D > 2015-05-05 06:38:46,921 INFO [c.c.h.v.u.VmwareContextPool] (DirectAgent-= 414:ctx-b48e6903 fed33-madhoused-grv04.zonex.cloudsand.com, job-8077/job-80= 78, cmd: VmDataCommand) Return a VmwareContext from the idle pool: root@fed= 33-madhousedev-lvc01.zonex.cloudsand.com. 
current pool size: 5, outstanding= count: 8 > 2015-05-05 06:38:47,491 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-414= :ctx-b48e6903 fed33-madhoused-grv04.zonex.cloudsand.com, job-8077/job-8078,= cmd: VmDataCommand) vmdata.py execution result: true > 2015-05-05 06:38:47,491 INFO [c.c.h.v.u.VmwareContextPool] (DirectAgent-= 414:ctx-b48e6903 fed33-madhoused-grv04.zonex.cloudsand.com, job-8077/job-80= 78, cmd: VmDataCommand) Recycle VmwareContext into idle pool: root@fed33-ma= dhousedev-lvc01.zonex.cloudsand.com, current idle pool size: 7, outstanding= count: 8 > 2015-05-05 06:38:47,492 DEBUG [c.c.a.t.Request] (Work-Job-Executor-112:ct= x-ff983380 job-8077/job-8078 ctx-c097f8e0) Seq 4-3564599105063748817: Recei= ved: { Ans: , MgmtId: 345049223690, via: 4, Ver: v1, Flags: 10, { Answer, = Answer } } > 2015-05-05 06:38:47,494 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor= -112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Service SecurityGroup is = not supported in the network id=3D224 > 2015-05-05 06:38:47,497 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Ex= ecutor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Checking if we need= to prepare 1 volumes for VM[User|i-149-1350-VM] > 2015-05-05 06:38:47,534 DEBUG [o.a.c.s.i.TemplateDataFactoryImpl] (Work-J= ob-Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) template 274 i= s already in store:2, type:Image > 2015-05-05 06:38:47,539 DEBUG [o.a.c.s.i.TemplateDataFactoryImpl] (Work-J= ob-Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) template 274 i= s already in store:5, type:Primary > 2015-05-05 06:38:47,563 DEBUG [o.a.c.s.m.AncientDataMotionStrategy] (Work= -Job-Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) copyAsync in= specting src type TEMPLATE copyAsync inspecting dest type VOLUME > 2015-05-05 06:38:47,583 DEBUG [c.c.a.t.Request] (Work-Job-Executor-112:ct= x-ff983380 job-8077/job-8078 ctx-c097f8e0) Seq 1-2157787171463890287: Sendi= ng { Cmd , MgmtId: 345049223690, via: 1(fed33-madhoused-grv08.zonex.clouds= and.com), Ver: v1, Flags: 100111, [{"org.apache.cloudstack.storage.command.= CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":= {"path":"cd7d90f6cd963cbe8690d1050406d80f","origUrl":"http://17.161.41.9/OL= 6.5-AppVM-CIC-Networkable-v2.ova","uuid":"bcbcadb7-3621-4721-8bd3-4e18403d2= 3bd","id":274,"format":"OVA","accountId":2,"checksum":"888149bbce59ffc424c8= cf938bcdfe74","hvm":true,"displayText":"OL6.5-AppVM-CIC-Networkable-v2","im= ageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid= ":"b0d30331-66f3-361a-bea8-b85e0efe4055","id":5,"poolType":"NetworkFilesyst= em","host":"vlan2213.ncisi01.zonex.cloudsand.com","path":"/ifs/madhouse_dev= _poccluster_nds01/Madhouse-Dev-CIC/Madhouse-Dev-Master-Zone/primary","port"= :2049,"url":"NetworkFilesystem://vlan2213.ncisi01.zonex.cloudsand.com/ifs/m= adhouse_dev_poccluster_nds01/Madhouse-Dev-CIC/Madhouse-Dev-Master-Zone/prim= ary/?ROLE=3DPrimary&STOREUUID=3Db0d30331-66f3-361a-bea8-b85e0efe4055"}},"na= me":"274-2-f276aaee-3cde-36ea-b499-d73c1be1d2f3","hypervisorType":"VMware"}= },"destTO":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"7d19= a229-3e59-413c-9153-fbb41bc18956","volumeType":"ROOT","dataStore":{"org.apa= che.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"b0d30331-66f3-361a-b= ea8-b85e0efe4055","id":5,"poolType":"NetworkFilesystem","host":"vlan2213.nc= isi01.zonex.cloudsand.com","path":"/ifs/madhouse_dev_poccluster_nds01/Madho= 
use-Dev-CIC/Madhouse-Dev-Master-Zone/primary","port":2049,"url":"NetworkFil= esystem://vlan2213.ncisi01.zonex.cloudsand.com/ifs/madhouse_dev_poccluster_= nds01/Madhouse-Dev-CIC/Madhouse-Dev-Master-Zone/primary/?ROLE=3DPrimary&STO= REUUID=3Db0d30331-66f3-361a-bea8-b85e0efe4055"}},"name":"ROOT-1350","size":= 21474836480,"volumeId":1607,"vmName":"i-149-1350-VM","accountId":149,"forma= t":"OVA","provisioningType":"THIN","id":1607,"deviceId":0,"cacheMode":"NONE= ","hypervisorType":"VMware"}},"executeInSequence":true,"options":{},"wait":= 0}}] } > 2015-05-05 06:38:47,584 DEBUG [c.c.a.t.Request] (Work-Job-Executor-112:ct= x-ff983380 job-8077/job-8078 ctx-c097f8e0) Seq 1-2157787171463890287: Execu= ting: { Cmd , MgmtId: 345049223690, via: 1(fed33-madhoused-grv08.zonex.clo= udsand.com), Ver: v1, Flags: 100111, [{"org.apache.cloudstack.storage.comma= nd.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectT= O":{"path":"cd7d90f6cd963cbe8690d1050406d80f","origUrl":"http://17.161.41.9= /OL6.5-AppVM-CIC-Networkable-v2.ova","uuid":"bcbcadb7-3621-4721-8bd3-4e1840= 3d23bd","id":274,"format":"OVA","accountId":2,"checksum":"888149bbce59ffc42= 4c8cf938bcdfe74","hvm":true,"displayText":"OL6.5-AppVM-CIC-Networkable-v2",= "imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"u= uid":"b0d30331-66f3-361a-bea8-b85e0efe4055","id":5,"poolType":"NetworkFiles= ystem","host":"vlan2213.ncisi01.zonex.cloudsand.com","path":"/ifs/madhouse_= dev_poccluster_nds01/Madhouse-Dev-CIC/Madhouse-Dev-Master-Zone/primary","po= rt":2049,"url":"NetworkFilesystem://vlan2213.ncisi01.zonex.cloudsand.com/if= s/madhouse_dev_poccluster_nds01/Madhouse-Dev-CIC/Madhouse-Dev-Master-Zone/p= rimary/?ROLE=3DPrimary&STOREUUID=3Db0d30331-66f3-361a-bea8-b85e0efe4055"}},= "name":"274-2-f276aaee-3cde-36ea-b499-d73c1be1d2f3","hypervisorType":"VMwar= e"}},"destTO":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"7= d19a229-3e59-413c-9153-fbb41bc18956","volumeType":"ROOT","dataStore":{"org.= apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"b0d30331-66f3-361= a-bea8-b85e0efe4055","id":5,"poolType":"NetworkFilesystem","host":"vlan2213= .ncisi01.zonex.cloudsand.com","path":"/ifs/madhouse_dev_poccluster_nds01/Ma= dhouse-Dev-CIC/Madhouse-Dev-Master-Zone/primary","port":2049,"url":"Network= Filesystem://vlan2213.ncisi01.zonex.cloudsand.com/ifs/madhouse_dev_pocclust= er_nds01/Madhouse-Dev-CIC/Madhouse-Dev-Master-Zone/primary/?ROLE=3DPrimary&= STOREUUID=3Db0d30331-66f3-361a-bea8-b85e0efe4055"}},"name":"ROOT-1350","siz= e":21474836480,"volumeId":1607,"vmName":"i-149-1350-VM","accountId":149,"fo= rmat":"OVA","provisioningType":"THIN","id":1607,"deviceId":0,"cacheMode":"N= ONE","hypervisorType":"VMware"}},"executeInSequence":true,"options":{},"wai= t":0}}] } > 2015-05-05 06:38:47,585 INFO [c.c.h.v.u.VmwareContextPool] (DirectAgent-= 417:ctx-04dda96f fed33-madhoused-grv08.zonex.cloudsand.com, job-8077/job-80= 78, cmd: CopyCommand) Return a VmwareContext from the idle pool: root@fed33= -madhousedev-lvc01.zonex.cloudsand.com. 
current pool size: 6, outstanding c= ount: 8 > 2015-05-05 06:38:47,825 INFO [c.c.s.r.VmwareStorageProcessor] (DirectAge= nt-417:ctx-04dda96f fed33-madhoused-grv08.zonex.cloudsand.com, job-8077/job= -8078, cmd: CopyCommand) creating linked clone from template > 2015-05-05 06:38:55,594 INFO [c.c.s.r.VmwareStorageProcessor] (DirectAge= nt-417:ctx-04dda96f fed33-madhoused-grv08.zonex.cloudsand.com, job-8077/job= -8078, cmd: CopyCommand) Move volume out of volume-wrapper VM > 2015-05-05 06:38:57,266 INFO [c.c.s.r.VmwareStorageProcessor] (DirectAge= nt-417:ctx-04dda96f fed33-madhoused-grv08.zonex.cloudsand.com, job-8077/job= -8078, cmd: CopyCommand) detach disks from volume-wrapper VM ROOT-1350 > 2015-05-05 06:38:59,239 INFO [c.c.s.r.VmwareStorageProcessor] (DirectAge= nt-417:ctx-04dda96f fed33-madhoused-grv08.zonex.cloudsand.com, job-8077/job= -8078, cmd: CopyCommand) destroy volume-wrapper VM ROOT-1350 > 2015-05-05 06:39:00,607 INFO [c.c.h.v.m.DatastoreMO] (DirectAgent-417:ct= x-04dda96f fed33-madhoused-grv08.zonex.cloudsand.com, job-8077/job-8078, cm= d: CopyCommand) Folder i-149-1350-VM does not exist on datastore > 2015-05-05 06:39:00,608 INFO [c.c.h.v.u.VmwareContextPool] (DirectAgent-= 417:ctx-04dda96f fed33-madhoused-grv08.zonex.cloudsand.com, job-8077/job-80= 78, cmd: CopyCommand) Recycle VmwareContext into idle pool: root@fed33-madh= ousedev-lvc01.zonex.cloudsand.com, current idle pool size: 8, outstanding c= ount: 8 > 2015-05-05 06:39:00,609 DEBUG [c.c.a.t.Request] (Work-Job-Executor-112:ct= x-ff983380 job-8077/job-8078 ctx-c097f8e0) Seq 1-2157787171463890287: Recei= ved: { Ans: , MgmtId: 345049223690, via: 1, Ver: v1, Flags: 110, { CopyCmd= Answer } } > 2015-05-05 06:39:00,665 DEBUG [c.c.h.g.VMwareGuru] (Work-Job-Executor-112= :ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Nested virtualization request= ed, adding flag to vm configuration > 2015-05-05 06:39:00,697 DEBUG [c.c.a.t.Request] (Work-Job-Executor-112:ct= x-ff983380 job-8077/job-8078 ctx-c097f8e0) Seq 5-1317865840959292782: Sendi= ng { Cmd , MgmtId: 345049223690, via: 5(fed33-madhoused-grv06.zonex.clouds= and.com), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.StartCommand":{"vm= ":{"id":1350,"name":"i-149-1350-VM","bootloader":"HVM","type":"User","cpus"= :1,"minSpeed":100,"maxSpeed":100,"minRam":428867584,"maxRam":2147483648,"ho= stName":"test","arch":"x86_64","os":"Oracle Enterprise Linux 6.0 (64-bit)",= "bootArgs":"","enableHA":false,"limitCpuUse":false,"enableDynamicallyScaleV= m":false,"vncPassword":"ntXbYgEqj8YG13Eca+Lbaw=3D=3D","params":{"memoryOver= commitRatio":"5.0","keyboard":"us","Message.ReservedCapacityFreed.Flag":"fa= lse","cpuOvercommitRatio":"1.0","nicAdapter":"Vmxnet3","vmware.reserve.cpu"= :"false","nestedVirtualizationFlag":"true","rootDiskController":"scsi","vmw= are.reserve.mem":"false"},"uuid":"bb958b5f-a374-4f0a-a7e2-b1ed877ac0e2","di= sks":[{"data":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"7= d19a229-3e59-413c-9153-fbb41bc18956","volumeType":"ROOT","dataStore":{"org.= apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"b0d30331-66f3-361= a-bea8-b85e0efe4055","id":5,"poolType":"NetworkFilesystem","host":"vlan2213= .ncisi01.zonex.cloudsand.com","path":"/ifs/madhouse_dev_poccluster_nds01/Ma= dhouse-Dev-CIC/Madhouse-Dev-Master-Zone/primary","port":2049,"url":"Network= Filesystem://vlan2213.ncisi01.zonex.cloudsand.com/ifs/madhouse_dev_pocclust= er_nds01/Madhouse-Dev-CIC/Madhouse-Dev-Master-Zone/primary/?ROLE=3DPrimary&= 
STOREUUID=3Db0d30331-66f3-361a-bea8-b85e0efe4055"}},"name":"ROOT-1350","siz= e":21474836480,"path":"ROOT-1350","volumeId":1607,"vmName":"i-149-1350-VM",= "accountId":149,"format":"OVA","provisioningType":"THIN","id":1607,"deviceI= d":0,"cacheMode":"NONE","hypervisorType":"VMware"}},"diskSeq":0,"path":"ROO= T-1350","type":"ROOT","_details":{"managed":"false","storagePort":"2049","s= torageHost":"vlan2213.ncisi01.zonex.cloudsand.com","volumeSize":"2147483648= 0"}},{"data":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"id":0,"= format":"ISO","accountId":0,"hvm":false}},"diskSeq":3,"type":"ISO"}],"nics"= :[{"deviceId":0,"networkRateMbps":200,"defaultNic":true,"pxeDisable":false,= "nicUuid":"97580411-7518-4fdb-8403-0b1785f3a4c5","uuid":"79d72acd-d9f0-4f99= -ace4-6f9085db8ace","ip":"172.83.24.37","netmask":"255.255.252.0","gateway"= :"172.83.24.1","mac":"06:34:d2:00:11:af","dns1":"17.170.191.10","dns2":"17.= 170.128.10","broadcastType":"Vlan","type":"Guest","broadcastUri":"vlan://11= 51","isolationUri":"vlan://1152","isSecurityGroupEnabled":false}]},"hostIp"= :"17.169.69.151","executeInSequence":false,"wait":0}}] } > 2015-05-05 06:39:00,699 DEBUG [c.c.a.t.Request] (Work-Job-Executor-112:ct= x-ff983380 job-8077/job-8078 ctx-c097f8e0) Seq 5-1317865840959292782: Execu= ting: { Cmd , MgmtId: 345049223690, via: 5(fed33-madhoused-grv06.zonex.clo= udsand.com), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.StartCommand":{= "vm":{"id":1350,"name":"i-149-1350-VM","bootloader":"HVM","type":"User","cp= us":1,"minSpeed":100,"maxSpeed":100,"minRam":428867584,"maxRam":2147483648,= "hostName":"test","arch":"x86_64","os":"Oracle Enterprise Linux 6.0 (64-bit= )","bootArgs":"","enableHA":false,"limitCpuUse":false,"enableDynamicallySca= leVm":false,"vncPassword":"ntXbYgEqj8YG13Eca+Lbaw=3D=3D","params":{"memoryO= vercommitRatio":"5.0","keyboard":"us","Message.ReservedCapacityFreed.Flag":= "false","cpuOvercommitRatio":"1.0","nicAdapter":"Vmxnet3","vmware.reserve.c= pu":"false","nestedVirtualizationFlag":"true","rootDiskController":"scsi","= vmware.reserve.mem":"false"},"uuid":"bb958b5f-a374-4f0a-a7e2-b1ed877ac0e2",= "disks":[{"data":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid"= :"7d19a229-3e59-413c-9153-fbb41bc18956","volumeType":"ROOT","dataStore":{"o= rg.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"b0d30331-66f3-= 361a-bea8-b85e0efe4055","id":5,"poolType":"NetworkFilesystem","host":"vlan2= 213.ncisi01.zonex.cloudsand.com","path":"/ifs/madhouse_dev_poccluster_nds01= /Madhouse-Dev-CIC/Madhouse-Dev-Master-Zone/primary","port":2049,"url":"Netw= orkFilesystem://vlan2213.ncisi01.zonex.cloudsand.com/ifs/madhouse_dev_poccl= uster_nds01/Madhouse-Dev-CIC/Madhouse-Dev-Master-Zone/primary/?ROLE=3DPrima= ry&STOREUUID=3Db0d30331-66f3-361a-bea8-b85e0efe4055"}},"name":"ROOT-1350","= size":21474836480,"path":"ROOT-1350","volumeId":1607,"vmName":"i-149-1350-V= M","accountId":149,"format":"OVA","provisioningType":"THIN","id":1607,"devi= ceId":0,"cacheMode":"NONE","hypervisorType":"VMware"}},"diskSeq":0,"path":"= ROOT-1350","type":"ROOT","_details":{"managed":"false","storagePort":"2049"= ,"storageHost":"vlan2213.ncisi01.zonex.cloudsand.com","volumeSize":"2147483= 6480"}},{"data":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"id":= 0,"format":"ISO","accountId":0,"hvm":false}},"diskSeq":3,"type":"ISO"}],"ni= cs":[{"deviceId":0,"networkRateMbps":200,"defaultNic":true,"pxeDisable":fal= se,"nicUuid":"97580411-7518-4fdb-8403-0b1785f3a4c5","uuid":"79d72acd-d9f0-4= 
f99-ace4-6f9085db8ace","ip":"172.83.24.37","netmask":"255.255.252.0","gatew= ay":"172.83.24.1","mac":"06:34:d2:00:11:af","dns1":"17.170.191.10","dns2":"= 17.170.128.10","broadcastType":"Vlan","type":"Guest","broadcastUri":"vlan:/= /1151","isolationUri":"vlan://1152","isSecurityGroupEnabled":false}]},"host= Ip":"17.169.69.151","executeInSequence":false,"wait":0}}] } > 2015-05-05 06:39:00,701 INFO [c.c.h.v.r.VmwareResource] (DirectAgent-209= :ctx-ab65bdde fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8078,= cmd: StartCommand) Executing resource StartCommand: {"vm":{"id":1350,"name= ":"i-149-1350-VM","bootloader":"HVM","type":"User","cpus":1,"minSpeed":100,= "maxSpeed":100,"minRam":428867584,"maxRam":2147483648,"hostName":"test","ar= ch":"x86_64","os":"Oracle Enterprise Linux 6.0 (64-bit)","bootArgs":"","ena= bleHA":false,"limitCpuUse":false,"enableDynamicallyScaleVm":false,"vncPassw= ord":"ntXbYgEqj8YG13Eca+Lbaw=3D=3D","params":{"memoryOvercommitRatio":"5.0"= ,"keyboard":"us","Message.ReservedCapacityFreed.Flag":"false","cpuOvercommi= tRatio":"1.0","nicAdapter":"Vmxnet3","vmware.reserve.cpu":"false","nestedVi= rtualizationFlag":"true","rootDiskController":"scsi","vmware.reserve.mem":"= false"},"uuid":"bb958b5f-a374-4f0a-a7e2-b1ed877ac0e2","disks":[{"data":{"or= g.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"7d19a229-3e59-413c-= 9153-fbb41bc18956","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.= storage.to.PrimaryDataStoreTO":{"uuid":"b0d30331-66f3-361a-bea8-b85e0efe405= 5","id":5,"poolType":"NetworkFilesystem","host":"vlan2213.ncisi01.zonex.clo= udsand.com","path":"/ifs/madhouse_dev_poccluster_nds01/Madhouse-Dev-CIC/Mad= house-Dev-Master-Zone/primary","port":2049,"url":"NetworkFilesystem://vlan2= 213.ncisi01.zonex.cloudsand.com/ifs/madhouse_dev_poccluster_nds01/Madhouse-= Dev-CIC/Madhouse-Dev-Master-Zone/primary/?ROLE=3DPrimary&STOREUUID=3Db0d303= 31-66f3-361a-bea8-b85e0efe4055"}},"name":"ROOT-1350","size":21474836480,"pa= th":"ROOT-1350","volumeId":1607,"vmName":"i-149-1350-VM","accountId":149,"f= ormat":"OVA","provisioningType":"THIN","id":1607,"deviceId":0,"cacheMode":"= NONE","hypervisorType":"VMware"}},"diskSeq":0,"path":"ROOT-1350","type":"RO= OT","_details":{"managed":"false","storagePort":"2049","storageHost":"vlan2= 213.ncisi01.zonex.cloudsand.com","volumeSize":"21474836480"}},{"data":{"org= .apache.cloudstack.storage.to.TemplateObjectTO":{"id":0,"format":"ISO","acc= ountId":0,"hvm":false}},"diskSeq":3,"type":"ISO"}],"nics":[{"deviceId":0,"n= etworkRateMbps":200,"defaultNic":true,"pxeDisable":false,"nicUuid":"9758041= 1-7518-4fdb-8403-0b1785f3a4c5","uuid":"79d72acd-d9f0-4f99-ace4-6f9085db8ace= ","ip":"172.83.24.37","netmask":"255.255.252.0","gateway":"172.83.24.1","ma= c":"06:34:d2:00:11:af","dns1":"17.170.191.10","dns2":"17.170.128.10","broad= castType":"Vlan","type":"Guest","broadcastUri":"vlan://1151","isolationUri"= :"vlan://1152","isSecurityGroupEnabled":false}]},"hostIp":"17.169.69.151","= executeInSequence":false,"wait":0} > 2015-05-05 06:39:00,701 INFO [c.c.h.v.u.VmwareContextPool] (DirectAgent-= 209:ctx-ab65bdde fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-80= 78, cmd: StartCommand) Return a VmwareContext from the idle pool: root@fed3= 3-madhousedev-lvc01.zonex.cloudsand.com. 
current pool size: 7, outstanding = count: 8 > 2015-05-05 06:39:00,853 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-209:ctx-ab6= 5bdde fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8078, cmd: St= artCommand) find VM i-149-1350-VM on host > 2015-05-05 06:39:00,853 INFO [c.c.h.v.m.HostMO] (DirectAgent-209:ctx-ab6= 5bdde fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8078, cmd: St= artCommand) VM i-149-1350-VM not found in host cache > 2015-05-05 06:39:00,854 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-209:ctx-ab6= 5bdde fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8078, cmd: St= artCommand) load VM cache on host > 2015-05-05 06:39:00,886 INFO [c.c.h.v.r.VmwareResource] (DirectAgent-209= :ctx-ab65bdde fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8078,= cmd: StartCommand) VM i-149-1350-VM already exists, tear down devices for = reconfiguration > 2015-05-05 06:39:03,781 WARN [c.c.h.v.r.VmwareResource] (DirectAgent-209= :ctx-ab65bdde fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8078,= cmd: StartCommand) Guest OS mapping name is not set for guest os: Oracle E= nterprise Linux 6.0 (64-bit) > 2015-05-05 06:39:04,125 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-209= :ctx-ab65bdde fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8078,= cmd: StartCommand) Nested Virtualization enabled in configuration, checkin= g hypervisor capability > 2015-05-05 06:39:04,160 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-209= :ctx-ab65bdde fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8078,= cmd: StartCommand) Hypervisor supports nested virtualization, enabling for= VM i-149-1350-VM > 2015-05-05 06:39:04,200 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-209= :ctx-ab65bdde fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8078,= cmd: StartCommand) Prepare ISO volume at existing device {"key":3000,"devi= ceInfo":{"label":"CD/DVD drive 1","summary":"Remote device"},"backing":{"ex= clusive":false,"deviceName":""},"connectable":{"startConnected":true,"allow= GuestControl":false,"connected":true},"controllerKey":200,"unitNumber":0} > 2015-05-05 06:39:04,244 INFO [c.c.h.v.r.VmwareResource] (DirectAgent-209= :ctx-ab65bdde fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8078,= cmd: StartCommand) Found existing disk info from volume path: ROOT-1350 > 2015-05-05 06:39:04,244 INFO [c.c.h.v.r.VmwareResource] (DirectAgent-209= :ctx-ab65bdde fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8078,= cmd: StartCommand) Chose disk controller based on existing information: sc= si0:0 > 2015-05-05 06:39:04,255 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-209= :ctx-ab65bdde fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8078,= cmd: StartCommand) Prepare volume at new device {"capacityInKB":0,"key":-2= ,"backing":{"diskMode":"persistent","parent":{"diskMode":"persistent","file= Name":"[b0d3033166f3361abea8b85e0efe4055] c3eb55bcaea139e5a4bfa1cc265e3255/= c3eb55bcaea139e5a4bfa1cc265e3255.vmdk","datastore":{"value":"datastore-823"= ,"type":"Datastore"}},"fileName":"[b0d3033166f3361abea8b85e0efe4055] test/R= OOT-1350.vmdk","datastore":{"value":"datastore-823","type":"Datastore"}},"c= onnectable":{"startConnected":true,"allowGuestControl":false,"connected":tr= ue},"controllerKey":1000,"unitNumber":0} > 2015-05-05 06:39:04,255 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-209= :ctx-ab65bdde fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8078,= cmd: StartCommand) VM i-149-1350-VM will be started with NIC device type: = Vmxnet3 > 2015-05-05 06:39:04,256 
> 2015-05-05 06:39:04,265 INFO [c.c.h.v.r.VmwareResource] (DirectAgent-209:ctx-ab65bdde fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8078, cmd: StartCommand) Prepare network on vmwaresvs vSwitch0 with name prefix: cloud.guest
> 2015-05-05 06:39:04,454 DEBUG [c.c.h.v.m.HypervisorHostHelper] (DirectAgent-209:ctx-ab65bdde fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8078, cmd: StartCommand) Prepare network on other host, vlan: 1151, host: fed33-madhoused-grv03.zonex.cloudsand.com
> 2015-05-05 06:39:04,589 INFO [c.c.h.v.m.HypervisorHostHelper] (DirectAgent-209:ctx-ab65bdde fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8078, cmd: StartCommand) Network cloud.guest.1151.200.1-vSwitch0 is ready on vSwitch vSwitch0
> 2015-05-05 06:39:04,599 DEBUG [c.c.h.v.m.HypervisorHostHelper] (DirectAgent-209:ctx-ab65bdde fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8078, cmd: StartCommand) Prepare network on other host, vlan: 1151, host: fed33-madhoused-grv04.zonex.cloudsand.com
> 2015-05-05 06:39:05,001 INFO [c.c.h.v.m.HypervisorHostHelper] (DirectAgent-209:ctx-ab65bdde fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8078, cmd: StartCommand) Network cloud.guest.1151.200.1-vSwitch0 is ready on vSwitch vSwitch0
> 2015-05-05 06:39:05,010 DEBUG [c.c.h.v.m.HypervisorHostHelper] (DirectAgent-209:ctx-ab65bdde fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8078, cmd: StartCommand) Prepare network on other host, vlan: 1151, host: fed33-madhoused-grv08.zonex.cloudsand.com
> 2015-05-05 06:39:05,416 INFO [c.c.h.v.m.HypervisorHostHelper] (DirectAgent-209:ctx-ab65bdde fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8078, cmd: StartCommand) Network cloud.guest.1151.200.1-vSwitch0 is ready on vSwitch vSwitch0
> 2015-05-05 06:39:05,427 DEBUG [c.c.h.v.m.HypervisorHostHelper] (DirectAgent-209:ctx-ab65bdde fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8078, cmd: StartCommand) Prepare network on other host, vlan: 1151, host: fed33-madhoused-grv07.zonex.cloudsand.com
> 2015-05-05 06:39:05,585 INFO [c.c.h.v.m.HypervisorHostHelper] (DirectAgent-209:ctx-ab65bdde fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8078, cmd: StartCommand) Network cloud.guest.1151.200.1-vSwitch0 is ready on vSwitch vSwitch0
> 2015-05-05 06:39:05,602 DEBUG [c.c.h.v.m.HypervisorHostHelper] (DirectAgent-209:ctx-ab65bdde fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8078, cmd: StartCommand) Prepare network on other host, vlan: 1151, host: fed33-madhoused-grv05.zonex.cloudsand.com
> 2015-05-05 06:39:05,787 INFO [c.c.h.v.m.HypervisorHostHelper] (DirectAgent-209:ctx-ab65bdde fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8078, cmd: StartCommand) Network cloud.guest.1151.200.1-vSwitch0 is ready on vSwitch vSwitch0
> 2015-05-05 06:39:05,788 INFO [c.c.h.v.m.HypervisorHostHelper] (DirectAgent-209:ctx-ab65bdde fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8078, cmd: StartCommand) Network cloud.guest.1151.200.1-vSwitch0 is ready on vSwitch vSwitch0
(DirectAgent-209:ctx-ab65bdde fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8078, cmd: StartCommand) Network cloud.guest.1151.200.1-vSwitch0 is ready on vSwitch vSwitch0
> 2015-05-05 06:39:05,788 INFO  [c.c.h.v.r.VmwareResource] (DirectAgent-209:ctx-ab65bdde fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8078, cmd: StartCommand) Preparing NIC device on network cloud.guest.1151.200.1-vSwitch0
> 2015-05-05 06:39:05,789 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-209:ctx-ab65bdde fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8078, cmd: StartCommand) Prepare NIC at new device {"operation":"ADD","device":{"addressType":"Manual","macAddress":"06:34:d2:00:11:af","key":-3,"backing":{"network":{"value":"network-5856","type":"Network"},"deviceName":"cloud.guest.1151.200.1-vSwitch0"},"connectable":{"startConnected":true,"allowGuestControl":true,"connected":true},"unitNumber":0}}
> 2015-05-05 06:39:05,789 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-209:ctx-ab65bdde fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8078, cmd: StartCommand) find VM i-149-1350-VM on host
> 2015-05-05 06:39:05,790 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-209:ctx-ab65bdde fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8078, cmd: StartCommand) VM i-149-1350-VM found in host cache
> 2015-05-05 06:39:06,093 INFO  [c.c.h.v.r.VmwareResource] (DirectAgent-209:ctx-ab65bdde fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8078, cmd: StartCommand) Configure VNC port for VM i-149-1350-VM, port: 5948, host: fed33-madhoused-grv06.zonex.cloudsand.com
> 2015-05-05 06:39:06,551 WARN  [c.c.h.v.r.VmwareResource] (DirectAgent-209:ctx-ab65bdde fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8078, cmd: StartCommand) StartCommand failed due to Exception: java.lang.RuntimeException
> java.lang.RuntimeException: File []/vmfs/volumes/31a4cab6-db4ef888/test/ROOT-1350.vmdk was not found
> 	at com.cloud.hypervisor.vmware.util.VmwareClient.waitForTask(VmwareClient.java:335)
> 	at com.cloud.hypervisor.vmware.mo.VirtualMachineMO.configureVm(VirtualMachineMO.java:939)
> 	at com.cloud.hypervisor.vmware.resource.VmwareResource.execute(VmwareResource.java:1736)
> 	at com.cloud.hypervisor.vmware.resource.VmwareResource.executeRequest(VmwareResource.java:450)
> 	at com.cloud.agent.manager.DirectAgentAttache$Task.runInContext(DirectAgentAttache.java:302)
> 	at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
> 	at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
> 2015-05-05 06:39:06,554 INFO  [c.c.h.v.u.VmwareContextPool] (DirectAgent-209:ctx-ab65bdde fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8078, cmd: StartCommand) Recycle VmwareContext into idle pool: root@fed33-madhousedev-lvc01.zonex.cloudsand.com, current idle pool size: 8, outstanding count: 8
> 2015-05-05 06:39:06,556 DEBUG [c.c.a.t.Request] (DirectAgent-209:ctx-ab65bdde) Seq 5-1317865840959292782: Processing: { Ans: , MgmtId: 345049223690, via: 5, Ver: v1, Flags: 10, [{"com.cloud.agent.api.StartAnswer":{"vm":{"id":1350,"name":"i-149-1350-VM","bootloader":"HVM","type":"User","cpus":1,"minSpeed":100,"maxSpeed":100,"minRam":428867584,"maxRam":2147483648,"hostName":"test","arch":"x86_64","os":"Oracle Enterprise Linux 6.0 (64-bit)","bootArgs":"","enableHA":false,"limitCpuUse":false,"enableDynamicallyScaleVm":fa=
lse,"vncPassword":"ntXbYgEqj8YG13Eca+Lbaw=3D=3D","params":{"memoryOvercommi= tRatio":"5.0","keyboard":"us","Message.ReservedCapacityFreed.Flag":"false",= "cpuOvercommitRatio":"1.0","nicAdapter":"Vmxnet3","vmware.reserve.cpu":"fal= se","nestedVirtualizationFlag":"true","rootDiskController":"scsi","vmware.r= eserve.mem":"false"},"uuid":"bb958b5f-a374-4f0a-a7e2-b1ed877ac0e2","disks":= [{"data":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"7d19a2= 29-3e59-413c-9153-fbb41bc18956","volumeType":"ROOT","dataStore":{"org.apach= e.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"b0d30331-66f3-361a-bea= 8-b85e0efe4055","id":5,"poolType":"NetworkFilesystem","host":"vlan2213.ncis= i01.zonex.cloudsand.com","path":"/ifs/madhouse_dev_poccluster_nds01/Madhous= e-Dev-CIC/Madhouse-Dev-Master-Zone/primary","port":2049,"url":"NetworkFiles= ystem://vlan2213.ncisi01.zonex.cloudsand.com/ifs/madhouse_dev_poccluster_nd= s01/Madhouse-Dev-CIC/Madhouse-Dev-Master-Zone/primary/?ROLE=3DPrimary&STORE= UUID=3Db0d30331-66f3-361a-bea8-b85e0efe4055"}},"name":"ROOT-1350","size":21= 474836480,"path":"ROOT-1350","volumeId":1607,"vmName":"i-149-1350-VM","acco= untId":149,"format":"OVA","provisioningType":"THIN","id":1607,"deviceId":0,= "cacheMode":"NONE","hypervisorType":"VMware"}},"diskSeq":0,"path":"ROOT-135= 0","type":"ROOT","_details":{"managed":"false","storagePort":"2049","storag= eHost":"vlan2213.ncisi01.zonex.cloudsand.com","volumeSize":"21474836480"}},= {"data":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"id":0,"forma= t":"ISO","accountId":0,"hvm":false}},"diskSeq":3,"type":"ISO"}],"nics":[{"d= eviceId":0,"networkRateMbps":200,"defaultNic":true,"pxeDisable":false,"nicU= uid":"97580411-7518-4fdb-8403-0b1785f3a4c5","uuid":"79d72acd-d9f0-4f99-ace4= -6f9085db8ace","ip":"172.83.24.37","netmask":"255.255.252.0","gateway":"172= .83.24.1","mac":"06:34:d2:00:11:af","dns1":"17.170.191.10","dns2":"17.170.1= 28.10","broadcastType":"Vlan","type":"Guest","broadcastUri":"vlan://1151","= isolationUri":"vlan://1152","isSecurityGroupEnabled":false}]},"result":fals= e,"details":"StartCommand failed due to Exception: java.lang.RuntimeExcepti= on\nMessage: File []/vmfs/volumes/31a4cab6-db4ef888/test/ROOT-1350.vmdk was= not found\n","wait":0}}] } > 2015-05-05 06:39:06,556 DEBUG [c.c.a.t.Request] (Work-Job-Executor-112:ct= x-ff983380 job-8077/job-8078 ctx-c097f8e0) Seq 5-1317865840959292782: Recei= ved: { Ans: , MgmtId: 345049223690, via: 5, Ver: v1, Flags: 10, { StartAns= wer } } > 2015-05-05 06:39:06,567 INFO [c.c.v.VirtualMachineManagerImpl] (Work-Job= -Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Unable to start = VM on Host[-5-Routing] due to StartCommand failed due to Exception: java.la= ng.RuntimeException > 2015-05-05 06:39:06,572 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job= -Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Cleaning up reso= urces for the vm VM[User|i-149-1350-VM] in Starting state > 2015-05-05 06:39:06,577 DEBUG [c.c.a.t.Request] (Work-Job-Executor-112:ct= x-ff983380 job-8077/job-8078 ctx-c097f8e0) Seq 5-1317865840959292783: Sendi= ng { Cmd , MgmtId: 345049223690, via: 5(fed33-madhoused-grv06.zonex.clouds= and.com), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.StopCommand":{"isP= roxy":false,"executeInSequence":false,"checkBeforeCleanup":false,"vmName":"= i-149-1350-VM","wait":0}}] } > 2015-05-05 06:39:06,577 DEBUG [c.c.a.t.Request] (Work-Job-Executor-112:ct= x-ff983380 job-8077/job-8078 ctx-c097f8e0) Seq 5-1317865840959292783: Execu= ting: { Cmd , MgmtId: 
345049223690, via: 5(fed33-madhoused-grv06.zonex.clo= udsand.com), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.StopCommand":{"= isProxy":false,"executeInSequence":false,"checkBeforeCleanup":false,"vmName= ":"i-149-1350-VM","wait":0}}] } > 2015-05-05 06:39:06,578 INFO [c.c.h.v.r.VmwareResource] (DirectAgent-418= :ctx-bf67511e fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8078,= cmd: StopCommand) Executing resource StopCommand: {"isProxy":false,"execut= eInSequence":false,"checkBeforeCleanup":false,"vmName":"i-149-1350-VM","wai= t":0} > 2015-05-05 06:39:06,578 INFO [c.c.h.v.u.VmwareContextPool] (DirectAgent-= 418:ctx-bf67511e fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-80= 78, cmd: StopCommand) Return a VmwareContext from the idle pool: root@fed33= -madhousedev-lvc01.zonex.cloudsand.com. current pool size: 7, outstanding c= ount: 8 > 2015-05-05 06:39:06,603 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-418:ctx-bf6= 7511e fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8078, cmd: St= opCommand) find VM i-149-1350-VM on host > 2015-05-05 06:39:06,603 INFO [c.c.h.v.m.HostMO] (DirectAgent-418:ctx-bf6= 7511e fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8078, cmd: St= opCommand) VM i-149-1350-VM not found in host cache > 2015-05-05 06:39:06,603 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-418:ctx-bf6= 7511e fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8078, cmd: St= opCommand) load VM cache on host > 2015-05-05 06:39:06,724 INFO [c.c.h.v.r.VmwareResource] (DirectAgent-418= :ctx-bf67511e fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8078,= cmd: StopCommand) VM i-149-1350-VM is already in stopped state > 2015-05-05 06:39:06,725 INFO [c.c.h.v.u.VmwareContextPool] (DirectAgent-= 418:ctx-bf67511e fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-80= 78, cmd: StopCommand) Recycle VmwareContext into idle pool: root@fed33-madh= ousedev-lvc01.zonex.cloudsand.com, current idle pool size: 8, outstanding c= ount: 8 > 2015-05-05 06:39:06,726 DEBUG [c.c.a.t.Request] (Work-Job-Executor-112:ct= x-ff983380 job-8077/job-8078 ctx-c097f8e0) Seq 5-1317865840959292783: Recei= ved: { Ans: , MgmtId: 345049223690, via: 5, Ver: v1, Flags: 10, { StopAnsw= er } } > 2015-05-05 06:39:06,737 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-E= xecutor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Changing active nu= mber of nics for network id=3D224 on -1 > 2015-05-05 06:39:06,747 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job= -Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Successfully rel= eased network resources for the vm VM[User|i-149-1350-VM] > 2015-05-05 06:39:06,747 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job= -Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Successfully cle= anued up resources for the vm VM[User|i-149-1350-VM] in Starting state > 2015-05-05 06:39:06,750 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job= -Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Root volume is r= eady, need to place VM in volume's cluster > 2015-05-05 06:39:06,768 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work= -Job-Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Deploy avoid= s pods: [], clusters: [], hosts: [5] > 2015-05-05 06:39:06,769 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work= -Job-Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) DeploymentPl= anner allocation algorithm: com.cloud.deploy.FirstFitPlanner@49361de4 > 2015-05-05 06:39:06,769 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work= 
-Job-Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Trying to allocate a host and storage pools from dc:1, pod:1,cluster:2, requested cpu: 100, requested ram: 2147483648
> 2015-05-05 06:39:06,770 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Is ROOT volume READY (pool already allocated)?: Yes
> 2015-05-05 06:39:06,770 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) DeploymentPlan has host_id specified, choosing this host and making no checks on this host: 5
> 2015-05-05 06:39:06,771 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) The specified host is in avoid set
> 2015-05-05 06:39:06,772 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Cannnot deploy to specified host, returning.
> 2015-05-05 06:39:06,795 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) VM state transitted from :Starting to Stopped with event: OperationFailedvm's original host id: 5 new host id: null host id before state transition: 5
> 2015-05-05 06:39:06,802 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Hosts's actual total CPU: 70208 and CPU after applying overprovisioning: 70208
> 2015-05-05 06:39:06,803 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Hosts's actual total RAM: 274841534464 and RAM after applying overprovisioning: 1374207606784
> 2015-05-05 06:39:06,803 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) release cpu from host: 5, old used: 6100,reserved: 0, actual total: 70208, total with overprovisioning: 70208; new used: 6000,reserved:0; movedfromreserved: false,moveToReserveredfalse
> 2015-05-05 06:39:06,803 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) release mem from host: 5, old used: 179851755520,reserved: 0, total: 1374207606784; new used: 177704271872,reserved:0; movedfromreserved: false,moveToReserveredfalse
> 2015-05-05 06:39:06,834 ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-149-1350-VM]Scope=interface com.cloud.dc.DataCenter; id=1
> 2015-05-05 06:39:06,834 INFO  [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-112:ctx-ff983380 job-8077/job-8078 ctx-c097f8e0) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-149-1350-VM]Scope=interface com.cloud.dc.DataCenter; id=1
> 2015-05-05 06:39:06,835 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-112:ctx-ff983380 job-8077/job-8078) Done with run of VM work job: com.cloud.vm.VmWorkStart for VM 1350, job origin: 8077
> 2015-05-05 06:39:06,835 ERROR [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-112:ctx-ff983380 job-8077/job-8078) Unable to complete AsyncJobVO {id:8078, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3Rhcn=
R9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZ= W50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAG= aG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2F= sTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AA= JMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY= 29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1J= ZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAFRnQAGVZpcnR1YWx= NYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAU= oABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAJzcQB-AAgAA= AAAAAAABXBwcHNxAH4ACAAAAAAAAAABcHNyABFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwAC= RgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29= yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_= PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 34504922= 3690, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue= May 05 06:38:44 GMT 2015}, job origin:8077 > =09at com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMach= ineManagerImpl.java:941) > =09at com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMach= ineManagerImpl.java:4471) > =09at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcc= essorImpl.java:43) > =09at java.lang.reflect.Method.invoke(Method.java:601) > =09at com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandler= Proxy.java:107) > =09at com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachi= neManagerImpl.java:4627) > =09at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:10= 3) > =09at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run= InContext(AsyncJobManagerImpl.java:537) > 2015-05-05 06:39:06,839 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job= -Executor-112:ctx-ff983380 job-8077/job-8078) Complete async job-8078, jobS= tatus: FAILED, resultCode: 0, result: rO0ABXNyADdjb20uY2xvdWQuZXhjZXB0aW9uL= kluc3VmZmljaWVudFNlcnZlckNhcGFjaXR5RXhjZXB0aW9uAAAAAFZNT3ACAAFaABVhZmZpbml0= eUdyb3Vwc0FwcGxpZWR4cgAxY29tLmNsb3VkLmV4Y2VwdGlvbi5JbnN1ZmZpY2llbnRDYXBhY2l= 0eUV4Y2VwdGlvbgAAAABWTU91AgACTAACaWR0ABBMamF2YS9sYW5nL0xvbmc7TAAFc2NvcGV0AB= FMamF2YS9sYW5nL0NsYXNzO3hyACJjb20uY2xvdWQuZXhjZXB0aW9uLkNsb3VkRXhjZXB0aW9ue= eiOG_HkRMcCAAJMAAtjc0Vycm9yQ29kZXQAE0xqYXZhL2xhbmcvSW50ZWdlcjtMAAZpZExpc3R0= ABVMamF2YS91dGlsL0FycmF5TGlzdDt4cgATamF2YS5sYW5nLkV4Y2VwdGlvbtD9Hz4aOxzEAgA= AeHIAE2phdmEubGFuZy5UaHJvd2FibGXVxjUnOXe4ywMABEwABWNhdXNldAAVTGphdmEvbGFuZy= 9UaHJvd2FibGU7TAANZGV0YWlsTWVzc2FnZXQAEkxqYXZhL2xhbmcvU3RyaW5nO1sACnN0YWNrV= HJhY2V0AB5bTGphdmEvbGFuZy9TdGFja1RyYWNlRWxlbWVudDtMABRzdXBwcmVzc2VkRXhjZXB0= aW9uc3QAEExqYXZhL3V0aWwvTGlzdDt4cHEAfgANdAA4VW5hYmxlIHRvIGNyZWF0ZSBhIGRlcGx= veW1lbnQgZm9yIFZNW1VzZXJ8aS0xNDktMTM1MC1WTV11cgAeW0xqYXZhLmxhbmcuU3RhY2tUcm= FjZUVsZW1lbnQ7AkYqPDz9IjkCAAB4cAAAABVzcgAbamF2YS5sYW5nLlN0YWNrVHJhY2VFbGVtZ= W50YQnFmiY23YUCAARJAApsaW5lTnVtYmVyTAAOZGVjbGFyaW5nQ2xhc3NxAH4ACkwACGZpbGVO= YW1lcQB-AApMAAptZXRob2ROYW1lcQB-AAp4cAAAA610ACZjb20uY2xvdWQudm0uVmlydHVhbE1= hY2hpbmVNYW5hZ2VySW1wbHQAHlZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwuamF2YXQAEG9yY2= hlc3RyYXRlU3RhcnRzcQB-ABEAABF3cQB-ABNxAH4AFHEAfgAVc3EAfgAR_____3QAJnN1bi5yZ= WZsZWN0LkdlbmVyYXRlZE1ldGhvZEFjY2Vzc29yNDUxcHQABmludm9rZXNxAH4AEQAAACt0AChz= dW4ucmVmbGVjdC5EZWxlZ2F0aW5nTWV0aG9kQWNjZXNzb3JJbXBsdAAhRGVsZWdhdGluZ01ldGh= 
vZEFjY2Vzc29ySW1wbC5qYXZhcQB-ABlzcQB-ABEAAAJZdAAYamF2YS5sYW5nLnJlZmxlY3QuTW= V0aG9kdAALTWV0aG9kLmphdmFxAH4AGXNxAH4AEQAAAGt0ACJjb20uY2xvdWQudm0uVm1Xb3JrS= m9iSGFuZGxlclByb3h5dAAaVm1Xb3JrSm9iSGFuZGxlclByb3h5LmphdmF0AA9oYW5kbGVWbVdv= cmtKb2JzcQB-ABEAABITcQB-ABNxAH4AFHEAfgAjc3EAfgARAAAAZ3QAIGNvbS5jbG91ZC52bS5= WbVdvcmtKb2JEaXNwYXRjaGVydAAYVm1Xb3JrSm9iRGlzcGF0Y2hlci5qYXZhdAAGcnVuSm9ic3= EAfgARAAACGXQAP29yZy5hcGFjaGUuY2xvdWRzdGFjay5mcmFtZXdvcmsuam9icy5pbXBsLkFze= W5jSm9iTWFuYWdlckltcGwkNXQAGEFzeW5jSm9iTWFuYWdlckltcGwuamF2YXQADHJ1bkluQ29u= dGV4dHNxAH4AEQAAADF0AD5vcmcuYXBhY2hlLmNsb3Vkc3RhY2subWFuYWdlZC5jb250ZXh0Lk1= hbmFnZWRDb250ZXh0UnVubmFibGUkMXQAG01hbmFnZWRDb250ZXh0UnVubmFibGUuamF2YXQAA3= J1bnNxAH4AEQAAADh0AEJvcmcuYXBhY2hlLmNsb3Vkc3RhY2subWFuYWdlZC5jb250ZXh0Lmltc= GwuRGVmYXVsdE1hbmFnZWRDb250ZXh0JDF0ABpEZWZhdWx0TWFuYWdlZENvbnRleHQuamF2YXQA= BGNhbGxzcQB-ABEAAABndABAb3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5= pbXBsLkRlZmF1bHRNYW5hZ2VkQ29udGV4dHEAfgAzdAAPY2FsbFdpdGhDb250ZXh0c3EAfgARAA= AANXEAfgA2cQB-ADN0AA5ydW5XaXRoQ29udGV4dHNxAH4AEQAAAC50ADxvcmcuYXBhY2hlLmNsb= 3Vkc3RhY2subWFuYWdlZC5jb250ZXh0Lk1hbmFnZWRDb250ZXh0UnVubmFibGVxAH4AL3EAfgAw= c3EAfgARAAAB7nEAfgAqcQB-ACtxAH4AMHNxAH4AEQAAAdd0AC5qYXZhLnV0aWwuY29uY3VycmV= udC5FeGVjdXRvcnMkUnVubmFibGVBZGFwdGVydAAORXhlY3V0b3JzLmphdmFxAH4ANHNxAH4AEQ= AAAU50ACRqYXZhLnV0aWwuY29uY3VycmVudC5GdXR1cmVUYXNrJFN5bmN0AA9GdXR1cmVUYXNrL= mphdmF0AAhpbm5lclJ1bnNxAH4AEQAAAKZ0AB9qYXZhLnV0aWwuY29uY3VycmVudC5GdXR1cmVU= YXNrcQB-AEJxAH4AMHNxAH4AEQAABFZ0ACdqYXZhLnV0aWwuY29uY3VycmVudC5UaHJlYWRQb29= sRXhlY3V0b3J0ABdUaHJlYWRQb29sRXhlY3V0b3IuamF2YXQACXJ1bldvcmtlcnNxAH4AEQAAAl= t0AC5qYXZhLnV0aWwuY29uY3VycmVudC5UaHJlYWRQb29sRXhlY3V0b3IkV29ya2VycQB-AEhxA= H4AMHNxAH4AEQAAAtJ0ABBqYXZhLmxhbmcuVGhyZWFkdAALVGhyZWFkLmphdmFxAH4AMHNyACZq= YXZhLnV0aWwuQ29sbGVjdGlvbnMkVW5tb2RpZmlhYmxlTGlzdPwPJTG17I4QAgABTAAEbGlzdHE= AfgAMeHIALGphdmEudXRpbC5Db2xsZWN0aW9ucyRVbm1vZGlmaWFibGVDb2xsZWN0aW9uGUIAgM= te9x4CAAFMAAFjdAAWTGphdmEvdXRpbC9Db2xsZWN0aW9uO3hwc3IAE2phdmEudXRpbC5BcnJhe= Uxpc3R4gdIdmcdhnQMAAUkABHNpemV4cAAAAAB3BAAAAAB4cQB-AFR4c3IAEWphdmEubGFuZy5J= bnRlZ2VyEuKgpPeBhzgCAAFJAAV2YWx1ZXhyABBqYXZhLmxhbmcuTnVtYmVyhqyVHQuU4IsCAAB= 4cAAAEO9zcQB-AFMAAAAAdwQAAAAKeHNyAA5qYXZhLmxhbmcuTG9uZzuL5JDMjyPfAgABSgAFdm= FsdWV4cQB-AFYAAAAAAAAAAXZyABdjb20uY2xvdWQuZGMuRGF0YUNlbnRlcrLxCuun53i-AgAAe= HAA > 2015-05-05 06:39:06,841 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job= -Executor-112:ctx-ff983380 job-8077/job-8078) Publish async job-8078 comple= te on message bus > 2015-05-05 06:39:06,841 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job= -Executor-112:ctx-ff983380 job-8077/job-8078) Wake up jobs related to job-8= 078 > 2015-05-05 06:39:06,841 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job= -Executor-112:ctx-ff983380 job-8077/job-8078) Update db status for job-8078 > 2015-05-05 06:39:06,844 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job= -Executor-112:ctx-ff983380 job-8077/job-8078) Wake up jobs joined with job-= 8078 and disjoin all subjobs created from job- 8078 > 2015-05-05 06:39:06,857 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job= -Executor-112:ctx-ff983380 job-8077/job-8078) Done executing com.cloud.vm.V= mWorkStart for job-8078 > 2015-05-05 06:39:06,862 DEBUG [o.a.c.f.j.i.SyncQueueManagerImpl] (Work-Jo= b-Executor-112:ctx-ff983380 job-8077/job-8078) Sync queue (2149) is current= ly empty > 2015-05-05 06:39:06,864 INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Exe= cutor-112:ctx-ff983380 job-8077/job-8078) Remove job-8078 from job monitori= ng > 2015-05-05 06:39:06,881 DEBUG 
[o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-= Executor-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Sync job-8079 execution on = object VmWorkJobQueue.1350 > 2015-05-05 06:39:07,751 INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Exe= cutor-113:ctx-e236ca61 job-8077/job-8079) Add job-8079 into job monitoring > 2015-05-05 06:39:07,766 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job= -Executor-113:ctx-e236ca61 job-8077/job-8079) Executing AsyncJobVO {id:8079= , userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.c= loud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9c= MGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50= UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9= zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTm= V0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMA= AlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29t= LmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEw= AC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAFRnQAGVZpcnR1YWxNYW= NoaW5lTWFuYWdlckltcGwAAAAAAAAAAHBwcHBwcHBwc3IAEWphdmEudXRpbC5IYXNoTWFwBQfaw= cMWYNEDAAJGAApsb2FkRmFjdG9ySQAJdGhyZXNob2xkeHA_QAAAAAAADHcIAAAAEAAAAAF0AApW= bVBhc3N3b3JkdAAcck8wQUJYUUFEbk5oZG1Wa1gzQmhjM04zYjNKa3hw, cmdVersion: 0, st= atus: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid:= 345049223690, completeMsid: null, lastUpdated: null, lastPolled: null, cre= ated: Tue May 05 06:39:06 GMT 2015} > 2015-05-05 06:39:07,768 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079) Run VM work job: com.cloud.vm.VmWor= kStart for VM 1350, job origin: 8077 > 2015-05-05 06:39:07,770 DEBUG [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Exe= cutor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Execute VM work job:= com.cloud.vm.VmWorkStart{"dcId":0,"rawParams":{"VmPassword":"rO0ABXQADnNhd= mVkX3Bhc3N3b3Jk"},"userId":2,"accountId":2,"vmId":1350,"handlerName":"Virtu= alMachineManagerImpl"} > 2015-05-05 06:39:07,781 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) VM state transitted fr= om :Stopped to Starting with event: StartRequestedvm's original host id: 5 = new host id: null host id before state transition: null > 2015-05-05 06:39:07,781 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job= -Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Successfully tra= nsitioned to start state for VM[User|i-149-1350-VM] reservation id =3D 28f5= ac3f-b8bf-4adf-9437-3c8000d1e59a > 2015-05-05 06:39:07,787 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job= -Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Trying to deploy= VM, vm has dcId: 1 and podId: 1 > 2015-05-05 06:39:07,787 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job= -Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Deploy avoids po= ds: null, clusters: null, hosts: null > 2015-05-05 06:39:07,791 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job= -Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Root volume is r= eady, need to place VM in volume's cluster > 2015-05-05 06:39:07,791 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job= -Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Vol[1607|vm=3D13= 50|ROOT] is READY, changing deployment plan to use this pool's dcId: 1 , po= dId: 1 , and clusterId: 2 > 2015-05-05 06:39:07,801 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work= 
-Job-Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Deploy avoid= s pods: [], clusters: [], hosts: [] > 2015-05-05 06:39:07,802 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work= -Job-Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) DeploymentPl= anner allocation algorithm: com.cloud.deploy.FirstFitPlanner@49361de4 > 2015-05-05 06:39:07,803 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work= -Job-Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Trying to al= locate a host and storage pools from dc:1, pod:1,cluster:2, requested cpu: = 100, requested ram: 2147483648 > 2015-05-05 06:39:07,803 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work= -Job-Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Is ROOT volu= me READY (pool already allocated)?: Yes > 2015-05-05 06:39:07,803 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work= -Job-Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) This VM has = last host_id specified, trying to choose the same host: 5 > 2015-05-05 06:39:07,811 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Checking if host: 5 ha= s enough capacity for requested CPU: 100 and requested RAM: 2147483648 , cp= uOverprovisioningFactor: 1.0 > 2015-05-05 06:39:07,814 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Hosts's actual total C= PU: 70208 and CPU after applying overprovisioning: 70208 > 2015-05-05 06:39:07,814 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) We need to allocate to= the last host again, so checking if there is enough reserved capacity > 2015-05-05 06:39:07,814 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Reserved CPU: 0 , Requ= ested CPU: 100 > 2015-05-05 06:39:07,814 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Reserved RAM: 0 , Requ= ested RAM: 2147483648 > 2015-05-05 06:39:07,814 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) STATS: Failed to alloc= resource from host: 5 reservedCpu: 0, requested cpu: 100, reservedMem: 0, = requested mem: 2147483648 > 2015-05-05 06:39:07,814 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Host does not have eno= ugh reserved CPU available, cannot allocate to this host. 
> 2015-05-05 06:39:07,814 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work= -Job-Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) The last hos= t of this VM does not have enough capacity > 2015-05-05 06:39:07,814 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work= -Job-Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Cannot choos= e the last host to deploy this VM > 2015-05-05 06:39:07,815 DEBUG [c.c.d.FirstFitPlanner] (Work-Job-Executor-= 113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Searching resources only u= nder specified Cluster: 2 > 2015-05-05 06:39:07,826 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work= -Job-Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Checking res= ources in Cluster: 2 under Pod: 1 > 2015-05-05 06:39:07,826 DEBUG [c.c.a.m.a.i.FirstFitAllocator] (Work-Job-E= xecutor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllo= cator) Looking for hosts in dc: 1 pod:1 cluster:2 > 2015-05-05 06:39:07,832 DEBUG [c.c.a.m.a.i.FirstFitAllocator] (Work-Job-E= xecutor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllo= cator) FirstFitAllocator has 6 hosts to check for allocation: [Host[-5-Rout= ing], Host[-4-Routing], Host[-7-Routing], Host[-2-Routing], Host[-6-Routing= ], Host[-1-Routing]] > 2015-05-05 06:39:07,847 DEBUG [c.c.a.m.a.i.FirstFitAllocator] (Work-Job-E= xecutor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllo= cator) Found 6 hosts for allocation after prioritization: [Host[-5-Routing]= , Host[-4-Routing], Host[-7-Routing], Host[-2-Routing], Host[-6-Routing], H= ost[-1-Routing]] > 2015-05-05 06:39:07,847 DEBUG [c.c.a.m.a.i.FirstFitAllocator] (Work-Job-E= xecutor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllo= cator) Looking for speed=3D100Mhz, Ram=3D2048 > 2015-05-05 06:39:07,856 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) Host: 5 has cpu capability (cpu:32, speed:2194) to support requested CPU= : 1 and requested speed: 100 > 2015-05-05 06:39:07,856 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) Checking if host: 5 has enough capacity for requested CPU: 100 and reque= sted RAM: 2147483648 , cpuOverprovisioningFactor: 1.0 > 2015-05-05 06:39:07,859 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) Hosts's actual total CPU: 70208 and CPU after applying overprovisioning:= 70208 > 2015-05-05 06:39:07,859 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) Free CPU: 64208 , Requested CPU: 100 > 2015-05-05 06:39:07,859 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) Free RAM: 1196503334912 , Requested RAM: 2147483648 > 2015-05-05 06:39:07,859 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) Host has enough CPU and RAM available > 2015-05-05 06:39:07,860 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) STATS: Can alloc CPU from host: 5, used: 6000, reserved: 0, actual total= : 70208, total with overprovisioning: 70208; requested cpu:100,alloc_from_l= ast_host?:false ,considerReservedCapacity?: true > 
2015-05-05 06:39:07,860 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) STATS: Can alloc MEM from host: 5, used: 177704271872, reserved: 0, tota= l: 1374207606784; requested mem: 2147483648,alloc_from_last_host?:false ,co= nsiderReservedCapacity?: true > 2015-05-05 06:39:07,860 DEBUG [c.c.a.m.a.i.FirstFitAllocator] (Work-Job-E= xecutor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllo= cator) Found a suitable host, adding to list: 5 > 2015-05-05 06:39:07,869 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) Host: 4 has cpu capability (cpu:32, speed:2194) to support requested CPU= : 1 and requested speed: 100 > 2015-05-05 06:39:07,869 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) Checking if host: 4 has enough capacity for requested CPU: 100 and reque= sted RAM: 2147483648 , cpuOverprovisioningFactor: 1.0 > 2015-05-05 06:39:07,872 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) Hosts's actual total CPU: 70208 and CPU after applying overprovisioning:= 70208 > 2015-05-05 06:39:07,872 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) Free CPU: 63608 , Requested CPU: 100 > 2015-05-05 06:39:07,873 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) Free RAM: 1193282109440 , Requested RAM: 2147483648 > 2015-05-05 06:39:07,873 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) Host has enough CPU and RAM available > 2015-05-05 06:39:07,873 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) STATS: Can alloc CPU from host: 4, used: 6600, reserved: 0, actual total= : 70208, total with overprovisioning: 70208; requested cpu:100,alloc_from_l= ast_host?:false ,considerReservedCapacity?: true > 2015-05-05 06:39:07,873 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) STATS: Can alloc MEM from host: 4, used: 180925497344, reserved: 0, tota= l: 1374207606784; requested mem: 2147483648,alloc_from_last_host?:false ,co= nsiderReservedCapacity?: true > 2015-05-05 06:39:07,873 DEBUG [c.c.a.m.a.i.FirstFitAllocator] (Work-Job-E= xecutor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllo= cator) Found a suitable host, adding to list: 4 > 2015-05-05 06:39:07,883 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) Host: 7 has cpu capability (cpu:32, speed:2194) to support requested CPU= : 1 and requested speed: 100 > 2015-05-05 06:39:07,883 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) Checking if host: 7 has enough capacity for requested CPU: 100 and reque= sted RAM: 2147483648 , cpuOverprovisioningFactor: 1.0 > 2015-05-05 06:39:07,886 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) Hosts's actual total CPU: 70208 and 
CPU after applying overprovisioning:= 70208 > 2015-05-05 06:39:07,886 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) Free CPU: 63908 , Requested CPU: 100 > 2015-05-05 06:39:07,886 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) Free RAM: 1220930961408 , Requested RAM: 2147483648 > 2015-05-05 06:39:07,886 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) Host has enough CPU and RAM available > 2015-05-05 06:39:07,886 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) STATS: Can alloc CPU from host: 7, used: 6300, reserved: 0, actual total= : 70208, total with overprovisioning: 70208; requested cpu:100,alloc_from_l= ast_host?:false ,considerReservedCapacity?: true > 2015-05-05 06:39:07,886 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) STATS: Can alloc MEM from host: 7, used: 153276645376, reserved: 0, tota= l: 1374207606784; requested mem: 2147483648,alloc_from_last_host?:false ,co= nsiderReservedCapacity?: true > 2015-05-05 06:39:07,887 DEBUG [c.c.a.m.a.i.FirstFitAllocator] (Work-Job-E= xecutor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllo= cator) Found a suitable host, adding to list: 7 > 2015-05-05 06:39:07,896 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) Host: 2 has cpu capability (cpu:32, speed:2194) to support requested CPU= : 1 and requested speed: 100 > 2015-05-05 06:39:07,896 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) Checking if host: 2 has enough capacity for requested CPU: 100 and reque= sted RAM: 2147483648 , cpuOverprovisioningFactor: 1.0 > 2015-05-05 06:39:07,899 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) Hosts's actual total CPU: 70208 and CPU after applying overprovisioning:= 70208 > 2015-05-05 06:39:07,899 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) Free CPU: 62608 , Requested CPU: 100 > 2015-05-05 06:39:07,899 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) Free RAM: 1152211484672 , Requested RAM: 2147483648 > 2015-05-05 06:39:07,899 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) Host has enough CPU and RAM available > 2015-05-05 06:39:07,899 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) STATS: Can alloc CPU from host: 2, used: 7600, reserved: 0, actual total= : 70208, total with overprovisioning: 70208; requested cpu:100,alloc_from_l= ast_host?:false ,considerReservedCapacity?: true > 2015-05-05 06:39:07,899 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) STATS: Can alloc MEM from host: 2, used: 221996122112, reserved: 0, tota= l: 1374207606784; requested mem: 
2147483648,alloc_from_last_host?:false ,co= nsiderReservedCapacity?: true > 2015-05-05 06:39:07,899 DEBUG [c.c.a.m.a.i.FirstFitAllocator] (Work-Job-E= xecutor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllo= cator) Found a suitable host, adding to list: 2 > 2015-05-05 06:39:07,908 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) Host: 6 has cpu capability (cpu:32, speed:2194) to support requested CPU= : 1 and requested speed: 100 > 2015-05-05 06:39:07,909 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) Checking if host: 6 has enough capacity for requested CPU: 100 and reque= sted RAM: 2147483648 , cpuOverprovisioningFactor: 1.0 > 2015-05-05 06:39:07,911 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) Hosts's actual total CPU: 70208 and CPU after applying overprovisioning:= 70208 > 2015-05-05 06:39:07,912 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) Free CPU: 65308 , Requested CPU: 100 > 2015-05-05 06:39:07,912 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) Free RAM: 1079465476096 , Requested RAM: 2147483648 > 2015-05-05 06:39:07,912 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) Host has enough CPU and RAM available > 2015-05-05 06:39:07,912 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) STATS: Can alloc CPU from host: 6, used: 4900, reserved: 0, actual total= : 70208, total with overprovisioning: 70208; requested cpu:100,alloc_from_l= ast_host?:false ,considerReservedCapacity?: true > 2015-05-05 06:39:07,912 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) STATS: Can alloc MEM from host: 6, used: 294742130688, reserved: 0, tota= l: 1374207606784; requested mem: 2147483648,alloc_from_last_host?:false ,co= nsiderReservedCapacity?: true > 2015-05-05 06:39:07,912 DEBUG [c.c.a.m.a.i.FirstFitAllocator] (Work-Job-E= xecutor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllo= cator) Found a suitable host, adding to list: 6 > 2015-05-05 06:39:07,921 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) Host: 1 has cpu capability (cpu:32, speed:2194) to support requested CPU= : 1 and requested speed: 100 > 2015-05-05 06:39:07,922 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) Checking if host: 1 has enough capacity for requested CPU: 100 and reque= sted RAM: 2147483648 , cpuOverprovisioningFactor: 1.0 > 2015-05-05 06:39:07,925 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) Hosts's actual total CPU: 70208 and CPU after applying overprovisioning:= 70208 > 2015-05-05 06:39:07,925 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) Free CPU: 64108 , Requested CPU: 100 > 2015-05-05 06:39:07,925 
DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) Free RAM: 828478324736 , Requested RAM: 2147483648 > 2015-05-05 06:39:07,925 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) Host has enough CPU and RAM available > 2015-05-05 06:39:07,925 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) STATS: Can alloc CPU from host: 1, used: 6100, reserved: 0, actual total= : 70208, total with overprovisioning: 70208; requested cpu:100,alloc_from_l= ast_host?:false ,considerReservedCapacity?: true > 2015-05-05 06:39:07,925 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllocato= r) STATS: Can alloc MEM from host: 1, used: 545729282048, reserved: 0, tota= l: 1374207606784; requested mem: 2147483648,alloc_from_last_host?:false ,co= nsiderReservedCapacity?: true > 2015-05-05 06:39:07,925 DEBUG [c.c.a.m.a.i.FirstFitAllocator] (Work-Job-E= xecutor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllo= cator) Found a suitable host, adding to list: 1 > 2015-05-05 06:39:07,926 DEBUG [c.c.a.m.a.i.FirstFitAllocator] (Work-Job-E= xecutor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71 FirstFitRoutingAllo= cator) Host Allocator returning 6 suitable hosts > 2015-05-05 06:39:07,929 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work= -Job-Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Checking sui= table pools for volume (Id, Type): (1607,ROOT) > 2015-05-05 06:39:07,929 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work= -Job-Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Volume has p= ool already allocated, checking if pool can be reused, poolId: 5 > 2015-05-05 06:39:07,931 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work= -Job-Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Planner need= not allocate a pool for this volume since its READY > 2015-05-05 06:39:07,931 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work= -Job-Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Trying to fi= nd a potenial host and associated storage pools from the suitable host/pool= lists for this VM > 2015-05-05 06:39:07,931 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work= -Job-Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Checking if = host: 5 can access any suitable storage pool for volume: ROOT > 2015-05-05 06:39:07,932 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work= -Job-Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Host: 5 can = access pool: 5 > 2015-05-05 06:39:07,934 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work= -Job-Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Found a pote= ntial host id: 5 name: fed33-madhoused-grv06.zonex.cloudsand.com and associ= ated storage pools for this VM > 2015-05-05 06:39:07,936 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work= -Job-Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Returning De= ployment Destination: Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Vo= lume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(2)-Host(5)-Storage(= )] > 2015-05-05 06:39:07,936 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job= -Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Deployment found= - P0=3DVM[User|i-149-1350-VM], P0=3DDest[Zone(Id)-Pod(Id)-Cluster(Id)-Hos= 
t(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(2)-= Host(5)-Storage()] > 2015-05-05 06:39:07,972 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) VM state transitted fr= om :Starting to Starting with event: OperationRetryvm's original host id: 5= new host id: 5 host id before state transition: null > 2015-05-05 06:39:07,973 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) VM starting again on t= he last host it was stopped on > 2015-05-05 06:39:07,977 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Hosts's actual total C= PU: 70208 and CPU after applying overprovisioning: 70208 > 2015-05-05 06:39:07,977 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) We are allocating VM, = increasing the used capacity of this host:5 > 2015-05-05 06:39:07,977 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Current Used CPU: 6000= , Free CPU:64208 ,Requested CPU: 100 > 2015-05-05 06:39:07,977 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Current Used RAM: 1777= 04271872 , Free RAM:1196503334912 ,Requested RAM: 2147483648 > 2015-05-05 06:39:07,978 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) We are allocating VM t= o the last host again, so adjusting the reserved capacity if it is not less= than required > 2015-05-05 06:39:07,978 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Reserved CPU: 0 , Requ= ested CPU: 100 > 2015-05-05 06:39:07,978 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Reserved RAM: 0 , Requ= ested RAM: 2147483648 > 2015-05-05 06:39:07,978 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) CPU STATS after alloca= tion: for host: 5, old used: 6000, old reserved: 0, actual total: 70208, to= tal with overprovisioning: 70208; new used:6100, reserved:0; requested cpu:= 100,alloc_from_last:true > 2015-05-05 06:39:07,978 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Execu= tor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) RAM STATS after alloca= tion: for host: 5, old used: 177704271872, old reserved: 0, total: 13742076= 06784; new used: 179851755520, reserved: 0; requested mem: 2147483648,alloc= _from_last:true > 2015-05-05 06:39:07,982 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job= -Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) VM is being crea= ted in podId: 1 > 2015-05-05 06:39:07,988 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-E= xecutor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Network id=3D224 i= s already implemented > 2015-05-05 06:39:07,993 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor= -113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Service SecurityGroup is = not supported in the network id=3D224 > 2015-05-05 06:39:07,996 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-E= xecutor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Changing active nu= mber of nics for network id=3D224 on 1 > 2015-05-05 06:39:08,002 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-E= xecutor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Asking VirtualRout= er to prepare for 
Nic[1357-1350-null-172.83.24.37] > 2015-05-05 06:39:08,008 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl= ] (Work-Job-Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Lock = is acquired for network id 224 as a part of router startup in Dest[Zone(Id)= -Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zo= ne(1)-Pod(1)-Cluster(2)-Host(5)-Storage()] > 2015-05-05 06:39:08,011 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl= ] (Work-Job-Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Lock = is released for network id 224 as a part of router startup in Dest[Zone(Id)= -Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zo= ne(1)-Pod(1)-Cluster(2)-Host(5)-Storage()] > 2015-05-05 06:39:08,015 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor= -113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Service SecurityGroup is = not supported in the network id=3D224 > 2015-05-05 06:39:08,023 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor= -113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Service SecurityGroup is = not supported in the network id=3D224 > 2015-05-05 06:39:08,027 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl= ] (Work-Job-Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Apply= ing dhcp entry in network Ntwk[224|Guest|7] > 2015-05-05 06:39:08,039 DEBUG [c.c.a.t.Request] (Work-Job-Executor-113:ct= x-e236ca61 job-8077/job-8079 ctx-6e1bff71) Seq 4-3564599105063748818: Sendi= ng { Cmd , MgmtId: 345049223690, via: 4(fed33-madhoused-grv04.zonex.clouds= and.com), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.DhcpEntryC= ommand":{"vmMac":"06:34:d2:00:11:af","vmIpAddress":"172.83.24.37","vmName":= "test","defaultRouter":"172.83.24.1","defaultDns":"172.83.24.10","duid":"00= :03:00:01:06:34:d2:00:11:af","isDefault":true,"executeInSequence":false,"ac= cessDetails":{"router.guest.ip":"172.83.24.10","zone.network.type":"Advance= d","router.name":"r-1017-VM","router.ip":"10.178.84.38"},"wait":0}}] } > 2015-05-05 06:39:08,039 DEBUG [c.c.a.t.Request] (Work-Job-Executor-113:ct= x-e236ca61 job-8077/job-8079 ctx-6e1bff71) Seq 4-3564599105063748818: Execu= ting: { Cmd , MgmtId: 345049223690, via: 4(fed33-madhoused-grv04.zonex.clo= udsand.com), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.DhcpEnt= ryCommand":{"vmMac":"06:34:d2:00:11:af","vmIpAddress":"172.83.24.37","vmNam= e":"test","defaultRouter":"172.83.24.1","defaultDns":"172.83.24.10","duid":= "00:03:00:01:06:34:d2:00:11:af","isDefault":true,"executeInSequence":false,= "accessDetails":{"router.guest.ip":"172.83.24.10","zone.network.type":"Adva= nced","router.name":"r-1017-VM","router.ip":"10.178.84.38"},"wait":0}}] } > 2015-05-05 06:39:08,039 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-210= :ctx-c3c50d74 fed33-madhoused-grv04.zonex.cloudsand.com, job-8077/job-8079,= cmd: DhcpEntryCommand) Use router's private IP for SSH control. IP : 10.17= 8.84.38 > 2015-05-05 06:39:08,040 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-210= :ctx-c3c50d74 fed33-madhoused-grv04.zonex.cloudsand.com, job-8077/job-8079,= cmd: DhcpEntryCommand) Run command on VR: 10.178.84.38, script: edithosts.= sh with args: -m 06:34:d2:00:11:af -4 172.83.24.37 -h test -d 172.83.24.1 = -n 172.83.24.10 > 2015-05-05 06:39:08,040 INFO [c.c.h.v.u.VmwareContextPool] (DirectAgent-= 210:ctx-c3c50d74 fed33-madhoused-grv04.zonex.cloudsand.com, job-8077/job-80= 79, cmd: DhcpEntryCommand) Return a VmwareContext from the idle pool: root@= fed33-madhousedev-lvc01.zonex.cloudsand.com. 
current pool size: 7, outstanding count: 8
> 2015-05-05 06:39:08,471 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-210:ctx-c3c50d74 fed33-madhoused-grv04.zonex.cloudsand.com, job-8077/job-8079, cmd: DhcpEntryCommand) edithosts.sh execution result: true
> 2015-05-05 06:39:08,471 INFO [c.c.h.v.u.VmwareContextPool] (DirectAgent-210:ctx-c3c50d74 fed33-madhoused-grv04.zonex.cloudsand.com, job-8077/job-8079, cmd: DhcpEntryCommand) Recycle VmwareContext into idle pool: root@fed33-madhousedev-lvc01.zonex.cloudsand.com, current idle pool size: 8, outstanding count: 8
> 2015-05-05 06:39:08,472 DEBUG [c.c.a.t.Request] (Work-Job-Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Seq 4-3564599105063748818: Received: { Ans: , MgmtId: 345049223690, via: 4, Ver: v1, Flags: 10, { Answer } }
> 2015-05-05 06:39:08,490 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Service SecurityGroup is not supported in the network id=224
> 2015-05-05 06:39:08,505 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Applying userdata and password entry in network Ntwk[224|Guest|7]
> 2015-05-05 06:39:08,530 DEBUG [c.c.a.t.Request] (Work-Job-Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Seq 4-3564599105063748819: Sending { Cmd , MgmtId: 345049223690, via: 4(fed33-madhoused-grv04.zonex.cloudsand.com), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.SavePasswordCommand":{,"vmIpAddress":"172.83.24.37","vmName":"test","executeInSequence":false,"accessDetails":{"router.guest.ip":"172.83.24.10","zone.network.type":"Advanced","router.ip":"10.178.84.38","router.name":"r-1017-VM"},"wait":0}},{"com.cloud.agent.api.routing.VmDataCommand":{"vmIpAddress":"172.83.24.37","vmName":"test","executeInSequence":false,"accessDetails":{"router.guest.ip":"172.83.24.10","zone.network.type":"Advanced","router.ip":"10.178.84.38","router.name":"r-1017-VM"},"wait":0}}] }
> 2015-05-05 06:39:08,534 DEBUG [c.c.a.t.Request] (Work-Job-Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Seq 4-3564599105063748819: Executing: { Cmd , MgmtId: 345049223690, via: 4(fed33-madhoused-grv04.zonex.cloudsand.com), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.SavePasswordCommand":{,"vmIpAddress":"172.83.24.37","vmName":"test","executeInSequence":false,"accessDetails":{"router.guest.ip":"172.83.24.10","zone.network.type":"Advanced","router.ip":"10.178.84.38","router.name":"r-1017-VM"},"wait":0}},{"com.cloud.agent.api.routing.VmDataCommand":{"vmIpAddress":"172.83.24.37","vmName":"test","executeInSequence":false,"accessDetails":{"router.guest.ip":"172.83.24.10","zone.network.type":"Advanced","router.ip":"10.178.84.38","router.name":"r-1017-VM"},"wait":0}}] }
> 2015-05-05 06:39:08,535 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-106:ctx-a304ee33 fed33-madhoused-grv04.zonex.cloudsand.com, job-8077/job-8079, cmd: SavePasswordCommand) Use router's private IP for SSH control. IP : 10.178.84.38
> 2015-05-05 06:39:08,535 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-106:ctx-a304ee33 fed33-madhoused-grv04.zonex.cloudsand.com, job-8077/job-8079, cmd: SavePasswordCommand) Run command on VR: 10.178.84.38, script: savepassword.sh with args: -v 172.83.24.37 -p saved_password
> 2015-05-05 06:39:08,535 INFO [c.c.h.v.u.VmwareContextPool] (DirectAgent-106:ctx-a304ee33 fed33-madhoused-grv04.zonex.cloudsand.com, job-8077/job-8079, cmd: SavePasswordCommand) Return a VmwareContext from the idle pool: root@fed33-madhousedev-lvc01.zonex.cloudsand.com. current pool size: 7, outstanding count: 8
> 2015-05-05 06:39:08,857 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-106:ctx-a304ee33 fed33-madhoused-grv04.zonex.cloudsand.com, job-8077/job-8079, cmd: SavePasswordCommand) savepassword.sh execution result: true
> 2015-05-05 06:39:08,857 INFO [c.c.h.v.u.VmwareContextPool] (DirectAgent-106:ctx-a304ee33 fed33-madhoused-grv04.zonex.cloudsand.com, job-8077/job-8079, cmd: SavePasswordCommand) Recycle VmwareContext into idle pool: root@fed33-madhousedev-lvc01.zonex.cloudsand.com, current idle pool size: 8, outstanding count: 8
> 2015-05-05 06:39:08,857 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-106:ctx-a304ee33 fed33-madhoused-grv04.zonex.cloudsand.com, job-8077/job-8079, cmd: VmDataCommand) Use router's private IP for SSH control. IP : 10.178.84.38
> 2015-05-05 06:39:08,858 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-106:ctx-a304ee33 fed33-madhoused-grv04.zonex.cloudsand.com, job-8077/job-8079, cmd: VmDataCommand) Run command on VR: 10.178.84.38, script: vmdata.py with args: -d eyIxMC4xODUuMjQuMzciOltbInVzZXJkYXRhIiwidXNlci1kYXRhIixudWxsXSxbIm1ldGFkYXRhIiwic2VydmljZS1vZmZlcmluZyIsIlNtYWxsIDFDUFV4MkdCIl0sWyJtZXRhZGF0YSIsImF2YWlsYWJpbGl0eS16b25lIiwiTW9uc29vbi1EZXYtQ0lDLXpvbmUiXSxbIm1ldGFkYXRhIiwibG9jYWwtaXB2NCIsIjEwLjE4NS4yNC4zNyJdLFsibWV0YWRhdGEiLCJsb2NhbC1ob3N0bmFtZSIsInRlc3QiXSxbIm1ldGFkYXRhIiwicHVibGljLWlwdjQiLCIxMC4xODUuMjQuMzciXSxbIm1ldGFkYXRhIiwicHVibGljLWhvc3RuYW1lIixudWxsXSxbIm1ldGFkYXRhIiwiaW5zdGFuY2UtaWQiLCJiYjk1OGI1Zi1hMzc0LTRmMGEtYTdlMi1iMWVkODc3YWMwZTIiXSxbIm1ldGFkYXRhIiwidm0taWQiLCJiYjk1OGI1Zi1hMzc0LTRmMGEtYTdlMi1iMWVkODc3YWMwZTIiXSxbIm1ldGFkYXRhIiwicHVibGljLWtleXMiLG51bGxdLFsibWV0YWRhdGEiLCJjbG91ZC1pZGVudGlmaWVyIiwiQ2xvdWRTdGFjay17NjIxMGQ2YzMtYmM4Mi00MjAwLTg3M2QtOGI2NjVjYjM5YjdmfSJdXX0=
> 2015-05-05 06:39:08,858 INFO [c.c.h.v.u.VmwareContextPool] (DirectAgent-106:ctx-a304ee33 fed33-madhoused-grv04.zonex.cloudsand.com, job-8077/job-8079, cmd: VmDataCommand) Return a VmwareContext from the idle pool: root@fed33-madhousedev-lvc01.zonex.cloudsand.com. current pool size: 7, outstanding count: 8
> 2015-05-05 06:39:09,219 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-106:ctx-a304ee33 fed33-madhoused-grv04.zonex.cloudsand.com, job-8077/job-8079, cmd: VmDataCommand) vmdata.py execution result: true
> 2015-05-05 06:39:09,220 INFO [c.c.h.v.u.VmwareContextPool] (DirectAgent-106:ctx-a304ee33 fed33-madhoused-grv04.zonex.cloudsand.com, job-8077/job-8079, cmd: VmDataCommand) Recycle VmwareContext into idle pool: root@fed33-madhousedev-lvc01.zonex.cloudsand.com, current idle pool size: 8, outstanding count: 8
> 2015-05-05 06:39:09,220 DEBUG [c.c.a.t.Request] (Work-Job-Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Seq 4-3564599105063748819: Received: { Ans: , MgmtId: 345049223690, via: 4, Ver: v1, Flags: 10, { Answer, Answer } }
> 2015-05-05 06:39:09,222 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Service SecurityGroup is not supported in the network id=224
> 2015-05-05 06:39:09,226 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Checking if we need to prepare 1 volumes for VM[User|i-149-1350-VM]
> 2015-05-05 06:39:09,282 DEBUG [c.c.h.g.VMwareGuru] (Work-Job-Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Nested virtualization requested, adding flag to vm configuration
> 2015-05-05 06:39:09,310 DEBUG [c.c.a.t.Request] (Work-Job-Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Seq 5-1317865840959292784: Sending { Cmd , MgmtId: 345049223690, via: 5(fed33-madhoused-grv06.zonex.cloudsand.com), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.StartCommand":{"vm":{"id":1350,"name":"i-149-1350-VM","bootloader":"HVM","type":"User","cpus":1,"minSpeed":100,"maxSpeed":100,"minRam":428867584,"maxRam":2147483648,"hostName":"test","arch":"x86_64","os":"Oracle Enterprise Linux 6.0 (64-bit)","bootArgs":"","enableHA":false,"limitCpuUse":false,"enableDynamicallyScaleVm":false,"vncPassword":"ntXbYgEqj8YG13Eca+Lbaw==","params":{"memoryOvercommitRatio":"5.0","keyboard":"us","Message.ReservedCapacityFreed.Flag":"false","cpuOvercommitRatio":"1.0","nicAdapter":"Vmxnet3","vmware.reserve.cpu":"false","nestedVirtualizationFlag":"true","rootDiskController":"scsi","vmware.reserve.mem":"false"},"uuid":"bb958b5f-a374-4f0a-a7e2-b1ed877ac0e2","disks":[{"data":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"7d19a229-3e59-413c-9153-fbb41bc18956","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"b0d30331-66f3-361a-bea8-b85e0efe4055","id":5,"poolType":"NetworkFilesystem","host":"vlan2213.ncisi01.zonex.cloudsand.com","path":"/ifs/madhouse_dev_poccluster_nds01/Madhouse-Dev-CIC/Madhouse-Dev-Master-Zone/primary","port":2049,"url":"NetworkFilesystem://vlan2213.ncisi01.zonex.cloudsand.com/ifs/madhouse_dev_poccluster_nds01/Madhouse-Dev-CIC/Madhouse-Dev-Master-Zone/primary/?ROLE=Primary&STOREUUID=b0d30331-66f3-361a-bea8-b85e0efe4055"}},"name":"ROOT-1350","size":21474836480,"path":"ROOT-1350","volumeId":1607,"vmName":"i-149-1350-VM","accountId":149,"format":"OVA","provisioningType":"THIN","id":1607,"deviceId":0,"cacheMode":"NONE","hypervisorType":"VMware"}},"diskSeq":0,"path":"ROOT-1350","type":"ROOT","_details":{"managed":"false","storagePort":"2049","storageHost":"vlan2213.ncisi01.zonex.cloudsand.com","volumeSize":"21474836480"}},{"data":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"id":0,"format":"ISO","accountId":0,"hvm":false}},"diskSeq":3,"type":"ISO"}],"nics":[{"deviceId":0,"networkRateMbps":200,"defaultNic":true,"pxeDisable":false,"nicUuid":"97580411-7518-4fdb-8403-0b1785f3a4c5","uuid":"79d72acd-d9f0-4f99-ace4-6f9085db8ace","ip":"172.83.24.37","netmask":"255.255.252.0","gateway":"172.83.24.1","mac":"06:34:d2:00:11:af","dns1":"17.170.191.10","dns2":"17.170.128.10","broadcastType":"Vlan","type":"Guest","broadcastUri":"vlan://1151","isolationUri":"vlan://1152","isSecurityGroupEnabled":false}]},"hostIp":"17.169.69.151","executeInSequence":false,"wait":0}}] }
> 2015-05-05 06:39:09,311 DEBUG [c.c.a.t.Request] (Work-Job-Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Seq 5-1317865840959292784: Executing: { Cmd , MgmtId: 345049223690, via: 5(fed33-madhoused-grv06.zonex.cloudsand.com), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.StartCommand":{"vm":{"id":1350,"name":"i-149-1350-VM","bootloader":"HVM","type":"User","cpus":1,"minSpeed":100,"maxSpeed":100,"minRam":428867584,"maxRam":2147483648,"hostName":"test","arch":"x86_64","os":"Oracle Enterprise Linux 6.0 (64-bit)","bootArgs":"","enableHA":false,"limitCpuUse":false,"enableDynamicallyScaleVm":false,"vncPassword":"ntXbYgEqj8YG13Eca+Lbaw==","params":{"memoryOvercommitRatio":"5.0","keyboard":"us","Message.ReservedCapacityFreed.Flag":"false","cpuOvercommitRatio":"1.0","nicAdapter":"Vmxnet3","vmware.reserve.cpu":"false","nestedVirtualizationFlag":"true","rootDiskController":"scsi","vmware.reserve.mem":"false"},"uuid":"bb958b5f-a374-4f0a-a7e2-b1ed877ac0e2","disks":[{"data":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"7d19a229-3e59-413c-9153-fbb41bc18956","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"b0d30331-66f3-361a-bea8-b85e0efe4055","id":5,"poolType":"NetworkFilesystem","host":"vlan2213.ncisi01.zonex.cloudsand.com","path":"/ifs/madhouse_dev_poccluster_nds01/Madhouse-Dev-CIC/Madhouse-Dev-Master-Zone/primary","port":2049,"url":"NetworkFilesystem://vlan2213.ncisi01.zonex.cloudsand.com/ifs/madhouse_dev_poccluster_nds01/Madhouse-Dev-CIC/Madhouse-Dev-Master-Zone/primary/?ROLE=Primary&STOREUUID=b0d30331-66f3-361a-bea8-b85e0efe4055"}},"name":"ROOT-1350","size":21474836480,"path":"ROOT-1350","volumeId":1607,"vmName":"i-149-1350-VM","accountId":149,"format":"OVA","provisioningType":"THIN","id":1607,"deviceId":0,"cacheMode":"NONE","hypervisorType":"VMware"}},"diskSeq":0,"path":"ROOT-1350","type":"ROOT","_details":{"managed":"false","storagePort":"2049","storageHost":"vlan2213.ncisi01.zonex.cloudsand.com","volumeSize":"21474836480"}},{"data":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"id":0,"format":"ISO","accountId":0,"hvm":false}},"diskSeq":3,"type":"ISO"}],"nics":[{"deviceId":0,"networkRateMbps":200,"defaultNic":true,"pxeDisable":false,"nicUuid":"97580411-7518-4fdb-8403-0b1785f3a4c5","uuid":"79d72acd-d9f0-4f99-ace4-6f9085db8ace","ip":"172.83.24.37","netmask":"255.255.252.0","gateway":"172.83.24.1","mac":"06:34:d2:00:11:af","dns1":"17.170.191.10","dns2":"17.170.128.10","broadcastType":"Vlan","type":"Guest","broadcastUri":"vlan://1151","isolationUri":"vlan://1152","isSecurityGroupEnabled":false}]},"hostIp":"17.169.69.151","executeInSequence":false,"wait":0}}] }
> 2015-05-05 06:39:09,314 INFO [c.c.h.v.r.VmwareResource] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) Executing resource StartCommand: {"vm":{"id":1350,"name":"i-149-1350-VM","bootloader":"HVM","type":"User","cpus":1,"minSpeed":100,"maxSpeed":100,"minRam":428867584,"maxRam":2147483648,"hostName":"test","arch":"x86_64","os":"Oracle Enterprise Linux 6.0 (64-bit)","bootArgs":"","enableHA":false,"limitCpuUse":false,"enableDynamicallyScaleVm":false,"vncPassword":"ntXbYgEqj8YG13Eca+Lbaw==","params":{"memoryOvercommitRatio":"5.0","keyboard":"us","Message.ReservedCapacityFreed.Flag":"false","cpuOvercommitRatio":"1.0","nicAdapter":"Vmxnet3","vmware.reserve.cpu":"false","nestedVirtualizationFlag":"true","rootDiskController":"scsi","vmware.reserve.mem":"false"},"uuid":"bb958b5f-a374-4f0a-a7e2-b1ed877ac0e2","disks":[{"data":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"7d19a229-3e59-413c-9153-fbb41bc18956","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"b0d30331-66f3-361a-bea8-b85e0efe4055","id":5,"poolType":"NetworkFilesystem","host":"vlan2213.ncisi01.zonex.cloudsand.com","path":"/ifs/madhouse_dev_poccluster_nds01/Madhouse-Dev-CIC/Madhouse-Dev-Master-Zone/primary","port":2049,"url":"NetworkFilesystem://vlan2213.ncisi01.zonex.cloudsand.com/ifs/madhouse_dev_poccluster_nds01/Madhouse-Dev-CIC/Madhouse-Dev-Master-Zone/primary/?ROLE=Primary&STOREUUID=b0d30331-66f3-361a-bea8-b85e0efe4055"}},"name":"ROOT-1350","size":21474836480,"path":"ROOT-1350","volumeId":1607,"vmName":"i-149-1350-VM","accountId":149,"format":"OVA","provisioningType":"THIN","id":1607,"deviceId":0,"cacheMode":"NONE","hypervisorType":"VMware"}},"diskSeq":0,"path":"ROOT-1350","type":"ROOT","_details":{"managed":"false","storagePort":"2049","storageHost":"vlan2213.ncisi01.zonex.cloudsand.com","volumeSize":"21474836480"}},{"data":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"id":0,"format":"ISO","accountId":0,"hvm":false}},"diskSeq":3,"type":"ISO"}],"nics":[{"deviceId":0,"networkRateMbps":200,"defaultNic":true,"pxeDisable":false,"nicUuid":"97580411-7518-4fdb-8403-0b1785f3a4c5","uuid":"79d72acd-d9f0-4f99-ace4-6f9085db8ace","ip":"172.83.24.37","netmask":"255.255.252.0","gateway":"172.83.24.1","mac":"06:34:d2:00:11:af","dns1":"17.170.191.10","dns2":"17.170.128.10","broadcastType":"Vlan","type":"Guest","broadcastUri":"vlan://1151","isolationUri":"vlan://1152","isSecurityGroupEnabled":false}]},"hostIp":"17.169.69.151","executeInSequence":false,"wait":0}
> 2015-05-05 06:39:09,314 INFO [c.c.h.v.u.VmwareContextPool] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) Return a VmwareContext from the idle pool: root@fed33-madhousedev-lvc01.zonex.cloudsand.com. current pool size: 7, outstanding count: 8
> 2015-05-05 06:39:09,467 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) find VM i-149-1350-VM on host
> 2015-05-05 06:39:09,468 INFO [c.c.h.v.m.HostMO] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) VM i-149-1350-VM not found in host cache
> 2015-05-05 06:39:09,468 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) load VM cache on host
> 2015-05-05 06:39:09,504 INFO [c.c.h.v.r.VmwareResource] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) VM i-149-1350-VM already exists, tear down devices for reconfiguration
> 2015-05-05 06:39:09,565 WARN [c.c.h.v.r.VmwareResource] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) Guest OS mapping name is not set for guest os: Oracle Enterprise Linux 6.0 (64-bit)
> 2015-05-05 06:39:10,000 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) Nested Virtualization enabled in configuration, checking hypervisor capability
> 2015-05-05 06:39:10,038 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) Hypervisor supports nested virtualization, enabling for VM i-149-1350-VM
> 2015-05-05 06:39:10,072 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) Prepare ISO volume at existing device {"key":3000,"deviceInfo":{"label":"CD/DVD drive 1","summary":"Remote device"},"backing":{"exclusive":false,"deviceName":""},"connectable":{"startConnected":true,"allowGuestControl":false,"connected":true},"controllerKey":200,"unitNumber":0}
> 2015-05-05 06:39:10,112 INFO [c.c.h.v.r.VmwareResource] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) Chose disk controller for vol ROOT -> scsi, based on root disk controller settings: scsi
> 2015-05-05 06:39:10,199 INFO [c.c.h.v.m.DatastoreMO] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) Folder test exists on datastore
> 2015-05-05 06:39:10,207 INFO [c.c.h.v.m.DatastoreMO] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) Search file ROOT-1350.vmdk on [b0d3033166f3361abea8b85e0efe4055]
> 2015-05-05 06:39:10,282 INFO [c.c.h.v.m.DatastoreMO] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) File [b0d3033166f3361abea8b85e0efe4055] ROOT-1350.vmdk exists on datastore
> 2015-05-05 06:39:10,292 INFO [c.c.h.v.m.DatastoreMO] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) Search file ROOT-1350-flat.vmdk on [b0d3033166f3361abea8b85e0efe4055]
> 2015-05-05 06:39:10,355 INFO [c.c.h.v.m.DatastoreMO] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) File [b0d3033166f3361abea8b85e0efe4055] ROOT-1350-flat.vmdk does not exist on datastore
> 2015-05-05 06:39:10,364 INFO [c.c.h.v.m.DatastoreMO] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) Search file ROOT-1350-delta.vmdk on [b0d3033166f3361abea8b85e0efe4055]
> 2015-05-05 06:39:10,434 INFO [c.c.h.v.m.DatastoreMO] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) File [b0d3033166f3361abea8b85e0efe4055] ROOT-1350-delta.vmdk exists on datastore
> 2015-05-05 06:39:10,435 INFO [c.c.s.r.VmwareStorageLayoutHelper] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) sync [b0d3033166f3361abea8b85e0efe4055] ROOT-1350-delta.vmdk->[b0d3033166f3361abea8b85e0efe4055] test/ROOT-1350-delta.vmdk
> 2015-05-05 06:39:10,725 INFO [c.c.h.v.m.DatastoreMO] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) Search file ROOT-1350.vmdk on [b0d3033166f3361abea8b85e0efe4055]
> 2015-05-05 06:39:10,790 INFO [c.c.h.v.m.DatastoreMO] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) File [b0d3033166f3361abea8b85e0efe4055] ROOT-1350.vmdk exists on datastore
> 2015-05-05 06:39:10,790 INFO [c.c.s.r.VmwareStorageLayoutHelper] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) sync [b0d3033166f3361abea8b85e0efe4055] ROOT-1350.vmdk->[b0d3033166f3361abea8b85e0efe4055] test/ROOT-1350.vmdk
> 2015-05-05 06:39:10,980 INFO [c.c.h.v.m.DatastoreMO] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) Search file ROOT-1350.vmdk on [b0d3033166f3361abea8b85e0efe4055] test
> 2015-05-05 06:39:11,041 INFO [c.c.h.v.m.DatastoreMO] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) File [b0d3033166f3361abea8b85e0efe4055] test/ROOT-1350.vmdk exists on datastore
> 2015-05-05 06:39:11,054 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) Prepare volume at new device {"capacityInKB":0,"key":-2,"backing":{"diskMode":"persistent","fileName":"[b0d3033166f3361abea8b85e0efe4055] test/ROOT-1350.vmdk","datastore":{"value":"datastore-823","type":"Datastore"}},"connectable":{"startConnected":true,"allowGuestControl":false,"connected":true},"controllerKey":1000,"unitNumber":0}
> 2015-05-05 06:39:11,054 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) VM i-149-1350-VM will be started with NIC device type: Vmxnet3
> 2015-05-05 06:39:11,055 INFO [c.c.h.v.r.VmwareResource] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) Prepare NIC device based on NicTO: {"deviceId":0,"networkRateMbps":200,"defaultNic":true,"pxeDisable":false,"nicUuid":"97580411-7518-4fdb-8403-0b1785f3a4c5","uuid":"79d72acd-d9f0-4f99-ace4-6f9085db8ace","ip":"172.83.24.37","netmask":"255.255.252.0","gateway":"172.83.24.1","mac":"06:34:d2:00:11:af","dns1":"17.170.191.10","dns2":"17.170.128.10","broadcastType":"Vlan","type":"Guest","broadcastUri":"vlan://1151","isolationUri":"vlan://1152","isSecurityGroupEnabled":false}
> 2015-05-05 06:39:11,063 INFO [c.c.h.v.r.VmwareResource] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) Prepare network on vmwaresvs vSwitch0 with name prefix: cloud.guest
> 2015-05-05 06:39:11,240 DEBUG [c.c.h.v.m.HypervisorHostHelper] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) Prepare network on other host, vlan: 1151, host: fed33-madhoused-grv03.zonex.cloudsand.com
> 2015-05-05 06:39:11,393 INFO [c.c.h.v.m.HypervisorHostHelper] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) Network cloud.guest.1151.200.1-vSwitch0 is ready on vSwitch vSwitch0
> 2015-05-05 06:39:11,403 DEBUG [c.c.h.v.m.HypervisorHostHelper] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) Prepare network on other host, vlan: 1151, host: fed33-madhoused-grv04.zonex.cloudsand.com
> 2015-05-05 06:39:11,803 INFO [c.c.h.v.m.HypervisorHostHelper] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) Network cloud.guest.1151.200.1-vSwitch0 is ready on vSwitch vSwitch0
> 2015-05-05 06:39:11,816 DEBUG [c.c.h.v.m.HypervisorHostHelper] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) Prepare network on other host, vlan: 1151, host: fed33-madhoused-grv08.zonex.cloudsand.com
> 2015-05-05 06:39:11,974 INFO [c.c.h.v.m.HypervisorHostHelper] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) Network cloud.guest.1151.200.1-vSwitch0 is ready on vSwitch vSwitch0
> 2015-05-05 06:39:11,982 DEBUG [c.c.h.v.m.HypervisorHostHelper] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) Prepare network on other host, vlan: 1151, host: fed33-madhoused-grv07.zonex.cloudsand.com
> 2015-05-05 06:39:12,142 INFO [c.c.h.v.m.HypervisorHostHelper] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) Network cloud.guest.1151.200.1-vSwitch0 is ready on vSwitch vSwitch0
> 2015-05-05 06:39:12,152 DEBUG [c.c.h.v.m.HypervisorHostHelper] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) Prepare network on other host, vlan: 1151, host: fed33-madhoused-grv05.zonex.cloudsand.com
> 2015-05-05 06:39:12,310 INFO [c.c.h.v.m.HypervisorHostHelper] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) Network cloud.guest.1151.200.1-vSwitch0 is ready on vSwitch vSwitch0
> 2015-05-05 06:39:12,311 INFO [c.c.h.v.m.HypervisorHostHelper] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) Network cloud.guest.1151.200.1-vSwitch0 is ready on vSwitch vSwitch0
> 2015-05-05 06:39:12,312 INFO [c.c.h.v.r.VmwareResource] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) Preparing NIC device on network cloud.guest.1151.200.1-vSwitch0
> 2015-05-05 06:39:12,312 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) Prepare NIC at new device {"operation":"ADD","device":{"addressType":"Manual","macAddress":"06:34:d2:00:11:af","key":-3,"backing":{"network":{"value":"network-5856","type":"Network"},"deviceName":"cloud.guest.1151.200.1-vSwitch0"},"connectable":{"startConnected":true,"allowGuestControl":true,"connected":true},"unitNumber":0}}
> 2015-05-05 06:39:12,312 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) find VM i-149-1350-VM on host
> 2015-05-05 06:39:12,313 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) VM i-149-1350-VM found in host cache
> 2015-05-05 06:39:12,585 INFO [c.c.h.v.r.VmwareResource] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) Configure VNC port for VM i-149-1350-VM, port: 5901, host: fed33-madhoused-grv06.zonex.cloudsand.com
> 2015-05-05 06:39:16,501 INFO [c.c.h.v.r.VmwareResource] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) Found existing disk info from volume path: ROOT-1350
> 2015-05-05 06:39:23,042 INFO [c.c.h.v.u.VmwareContextPool] (DirectAgent-420:ctx-f929d6d0 fed33-madhoused-grv06.zonex.cloudsand.com, job-8077/job-8079, cmd: StartCommand) Recycle VmwareContext into idle pool: root@fed33-madhousedev-lvc01.zonex.cloudsand.com, current idle pool size: 8, outstanding count: 8
> 2015-05-05 06:39:23,044 DEBUG [c.c.a.t.Request] (Work-Job-Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Seq 5-1317865840959292784: Received: { Ans: , MgmtId: 345049223690, via: 5, Ver: v1, Flags: 10, { StartAnswer } }
> 2015-05-05 06:39:23,073 INFO [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Update volume disk chain info. vol: 1607, ROOT-1350 -> ROOT-1350, null -> {"diskDeviceBusName":"scsi0:0","diskChain":["[b0d3033166f3361abea8b85e0efe4055] test/ROOT-1350.vmdk","[b0d3033166f3361abea8b85e0efe4055] cd7d90f6cd963cbe8690d1050406d80f/cd7d90f6cd963cbe8690d1050406d80f.vmdk"]}
> 2015-05-05 06:39:23,108 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Service SecurityGroup is not supported in the network id=224
> 2015-05-05 06:39:23,113 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Service SecurityGroup is not supported in the network id=224
> 2015-05-05 06:39:23,130 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) VM state transitted from :Starting to Running with event: OperationSucceededvm's original host id: 5 new host id: 5 host id before state transition: 5
> 2015-05-05 06:39:23,133 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Start completed for VM VM[User|i-149-1350-VM]
> 2015-05-05 06:39:23,134 DEBUG [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Done executing VM work job: com.cloud.vm.VmWorkStart{"dcId":0,"rawParams":{"VmPassword":"rO0ABXQADnNhdmVkX3Bhc3N3b3Jk"},"userId":2,"accountId":2,"vmId":1350,"handlerName":"VirtualMachineManagerImpl"}
> 2015-05-05 06:39:23,134 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Complete async job-8079, jobStatus: SUCCEEDED, resultCode: 0, result: null
> 2015-05-05 06:39:23,135 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Publish async job-8079 complete on message bus
> 2015-05-05 06:39:23,136 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Wake up jobs related to job-8079
> 2015-05-05 06:39:23,136 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Update db status for job-8079
> 2015-05-05 06:39:23,137 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-113:ctx-e236ca61 job-8077/job-8079 ctx-6e1bff71) Wake up jobs joined with job-8079 and disjoin all subjobs created from job- 8079
> 2015-05-05 06:39:23,151 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-113:ctx-e236ca61 job-8077/job-8079) Done with run of VM work job: com.cloud.vm.VmWorkStart for VM 1350, job origin: 8077
> 2015-05-05 06:39:23,151 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-113:ctx-e236ca61 job-8077/job-8079) Done executing com.cloud.vm.VmWorkStart for job-8079
> 2015-05-05 06:39:23,162 DEBUG [o.a.c.f.j.i.SyncQueueManagerImpl] (Work-Job-Executor-113:ctx-e236ca61 job-8077/job-8079) Sync queue (2149) is currently empty
> 2015-05-05 06:39:23,163 INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-113:ctx-e236ca61 job-8077/job-8079) Remove job-8079 from job monitoring
> 2015-05-05 06:39:23,193 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-Executor-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Complete async job-8077, jobStatus: SUCCEEDED, resultCode: 0, result: org.apache.cloudstack.api.response.UserVmResponse/virtualmachine/{"id":"bb958b5f-a374-4f0a-a7e2-b1ed877ac0e2","name":"test","displayname":"test","projectid":"98b2e16f-1e4f-4b19-866b-154ef5aad53d","project":"mve02","domainid":"c5073484-b111-11e3-b7ee-00505689520a","domain":"ROOT","created":"2015-05-05T06:35:15+0000","state":"Running","haenable":false,"zoneid":"0a81f11b-90ce-4f24-92a0-5a2710f42bd2","zonename":"Madhouse-Dev-CIC-zone","hostid":"0a929069-5b4b-4130-82e3-45a0b571f783","hostname":"fed33-madhoused-grv06.zonex.cloudsand.com","templateid":"bcbcadb7-3621-4721-8bd3-4e18403d23bd","templatename":"OL6.5-AppVM-CIC-Networkable-v2","templatedisplaytext":"OL6.5-AppVM-CIC-Networkable-v2","passwordenabled":false,"serviceofferingid":"9bf47fc8-030e-4c23-a136-de36c8e57d87","serviceofferingname":"Small 1CPUx2GB","cpunumber":1,"cpuspeed":100,"memory":2048,"guestosid":"c5494284-b111-11e3-b7ee-00505689520a","rootdeviceid":0,"rootdevicetype":"ROOT","securitygroup":[],"nic":[{"id":"97580411-7518-4fdb-8403-0b1785f3a4c5","networkid":"79d72acd-d9f0-4f99-ace4-6f9085db8ace","networkname":"Guest 1151","netmask":"255.255.252.0","gateway":"172.83.24.1","ipaddress":"172.83.24.37","isolationuri":"vlan://1152","broadcasturi":"vlan://1151","traffictype":"Guest","type":"Shared","isdefault":true,"macaddress":"06:34:d2:00:11:af"}],"hypervisor":"VMware","instancename":"i-149-1350-VM","tags":[],"affinitygroup":[],"displayvm":true,"isdynamicallyscalable":false,"ostypeid":148,"jobid":"c8490707-706f-4267-8520-0514ead44777","jobstatus":0}
> 2015-05-05 06:39:23,196 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-Executor-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Publish async job-8077 complete on message bus
> 2015-05-05 06:39:23,196 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-Executor-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Wake up jobs related to job-8077
> 2015-05-05 06:39:23,196 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-Executor-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Update db status for job-8077
> 2015-05-05 06:39:23,200 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-Executor-54:ctx-a7142d34 job-8077 ctx-b6fc1bbf) Wake up jobs joined with job-8077 and disjoin all subjobs created from job- 8077
> 2015-05-05 06:39:23,207 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-Executor-54:ctx-a7142d34 job-8077) Done executing org.apache.cloudstack.api.command.admin.vm.StartVMCmdByAdmin for job-8077
> 2015-05-05 06:39:23,207 INFO [o.a.c.f.j.i.AsyncJobMonitor] (API-Job-Executor-54:ctx-a7142d34 job-8077) Remove job-8077 from job monitoring

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)