cloudstack-users mailing list archives

From Chiradeep Vittal <Chiradeep.Vit...@citrix.com>
Subject Re: Cryptic Error creating VM from ISO
Date Thu, 02 Aug 2012 23:39:44 GMT
Certainly, hypervisor-on-hypervisor is not officially supported by XS.

On 8/2/12 8:01 AM, "Nik Martin" <nik.martin@nfinausa.com> wrote:

>On 08/02/2012 09:14 AM, Gijsbert Janssen van Doorn wrote:
>> What kind of OS did you select for this ISO? Also, installing XCP on
>>top of XenServer doesn't sound like something that
>> will work as expected (hypervisor on hypervisor).
>>
>> The "Expunging" VM's should be removed within 24 hours (depending on
>>you cleanup settings in the Global Settings).
>>
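The two Global Settings in question are worth spelling out. A minimal sketch of the worst-case timing, assuming the standard expunge.delay / expunge.interval settings (the 86400-second values below are illustrative, not confirmed from this deployment):

```python
# Rough model of when a destroyed VM finally leaves the "Expunging"
# state. Assumption: a VM becomes eligible for expunge after
# expunge.delay seconds, and the cleanup thread then removes it on its
# next pass, which runs every expunge.interval seconds.
def worst_case_expunge_seconds(expunge_delay: int, expunge_interval: int) -> int:
    """Upper bound on the time from destroy to actual removal."""
    return expunge_delay + expunge_interval

# With both left at 86400 (24 hours), removal can take up to ~48 hours;
# lowering both values shortens the window in a test environment.
print(worst_case_expunge_seconds(86400, 86400))  # 172800
```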
>I dug around online, and as best I could tell, XCP is built on top of
>CentOS x86_64, so I chose CentOS 6 x86_64.  That was my first suspicion
>when it failed, but the log didn't really confirm that definitively.
>>
>> -----Original Message-----
>> From: Nik Martin [mailto:nik.martin@nfinausa.com]
>> Sent: donderdag 2 augustus 2012 16:04
>> To: cloudstack-users@incubator.apache.org
>> Subject: Re: Cryptic Error creating VM from ISO
>>
>> On 08/01/2012 11:41 PM, Nitin Mehta wrote:
>>> Nik - Can you please check on the hypervisor as well if you see any
>>>reason why this is failing ?
>>> BTW which hypervisor are you using and is this ISO guest OS supported
>>>by the hypervisor version you are using ?
>>>
>>> Thanks,
>>> -Nitin
>>>
>>
>> The HV is XenServer 6.2 patched to current. I just created a VM using
>> the same offering, but using a CentOS 6.02 DVD ISO, and it worked
>> perfectly.
>>
>> Here is the output from /var/log/messages on the HV when trying to
>> create a VM with the XCP (Xen Cloud Platform) ISO image:
>>
>> Aug  2 08:58:25 hv1 kernel: svc: failed to register lockdv1 RPC service
>> (errno 97).
>> Aug  2 08:58:26 hv1 kernel: svc: failed to register lockdv1 RPC service
>> (errno 97).
>> Aug  2 08:58:27 hv1 tapdisk[31978]: tapdisk-control: init, 10 x 4k
>>buffers
>> Aug  2 08:58:27 hv1 tapdisk[31978]: I/O queue driver: lio
>> Aug  2 08:58:27 hv1 tapdisk[31978]: tapdisk-log: started, level 0
>> Aug  2 08:58:27 hv1 tapdisk[31978]: block-aio
>>
>>open('/var/run/sr-mount/cdd49fd7-b7fd-0247-e277-2c8e4530a976/205-2-bd9870
>>63-a037-3a5f-9796-f0d5ddee3c31.iso')
>> Aug  2 08:58:27 hv1 tapdisk[31978]:
>>
>>open(/var/run/sr-mount/cdd49fd7-b7fd-0247-e277-2c8e4530a976/205-2-bd98706
>>3-a037-3a5f-9796-f0d5ddee3c31.iso)
>> with O_DIRECT
>> Aug  2 08:58:27 hv1 tapdisk[31978]: Image size:       pre sector_shift
>> [379250688]   post sector_shift [740724]
>> Aug  2 08:58:27 hv1 tapdisk[31978]: opened image
>>
>>/var/run/sr-mount/cdd49fd7-b7fd-0247-e277-2c8e4530a976/205-2-bd987063-a03
>>7-3a5f-9796-f0d5ddee3c31.iso
>> (1 users, state: 0x00000003, type: 0, ro)
>> Aug  2 08:58:27 hv1 tapdisk[31978]: VBD CHAIN:
>>Aug  2 08:58:27 hv1 tapdisk[31978]:
>>
>>/var/run/sr-mount/cdd49fd7-b7fd-0247-e277-2c8e4530a976/205-2-bd987063-a03
>>7-3a5f-9796-f0d5ddee3c31.iso:
>> type:aio(0) storage:n/a(-1)
>> Aug  2 08:58:27 hv1 tapdisk[31978]: bdev: capacity=740724
>> sector_size=512/512 flags=0
>> Aug  2 08:58:27 hv1 kernel: block tde: sector-size: 512/512 capacity:
>>740724
>> Aug  2 08:58:27 hv1 kernel: blkback: event-channel 65
>> Aug  2 08:58:27 hv1 kernel: blkback: ring-ref 8
>> Aug  2 08:58:27 hv1 kernel: blkback: protocol 1 (x86_32-abi)
>> Aug  2 08:58:27 hv1 kernel: blkfront: xvda: barriers enabled
>> Aug  2 08:58:27 hv1 kernel:  xvda: unknown partition table
>> Aug  2 08:58:27 hv1 vbd.uevent[add](backend/vbd/0/51712): wrote
>> /xapi/0/hotplug/vbd/51712/hotplug = 'online'
>> Aug  2 08:58:27 hv1 vbd.uevent[add](backend/vbd/0/51712): wrote
>> backend/vbd/0/51712/hotplug-status = 'connected'
>> Aug  2 08:58:28 hv1 ELILOADER: Traceback (most recent call last):
>> Aug  2 08:58:28 hv1 ELILOADER:   File "/usr/bin/eliloader", line 909, in
>> ?     sys.exit(main())
>> Aug  2 08:58:28 hv1 ELILOADER:   File "/usr/bin/eliloader", line 899, in
>> main     handle_first_boot(vm, img, args, other_config)
>> Aug  2 08:58:28 hv1 ELILOADER:   File "/usr/bin/eliloader", line 718, in
>> handle_first_boot     kernel, ramdisk = rhel_first_boot_handler(vm,
>> repo_url)
>> Aug  2 08:58:28 hv1 ELILOADER:   File "/usr/bin/eliloader", line 493, in
>> rhel_first_boot_handler     raise InvalidSource, "Unable to access a
>> required file in the specified repository: %s." % e.name
>> Aug  2 08:58:28 hv1 ELILOADER: InvalidSource: Unable to access a
>> required file in the specified repository:
>> file:///tmp/cdrom-repo-ocZrAO/isolinux/vmlinuz.
>> Aug  2 08:58:28 hv1 fe: 32027 (/usr/bin/eliloader -q --default_args=
>> --extra_args=graphical utf8 --vm=cbc638...) exitted with code 1
>> Aug  2 08:58:28 hv1 multipathd: xvda: remove path (operator)
>> Aug  2 08:58:28 hv1 multipathd: xvda: spurious uevent, path not in
>>pathvec
>> Aug  2 08:58:28 hv1 vbd.uevent[remove](backend/vbd/0/51712): removed
>> /xapi/0/hotplug/vbd/51712/hotplug
>> Aug  2 08:58:28 hv1 vbd.uevent[remove](backend/vbd/0/51712): removed
>> backend/vbd/0/51712/hotplug-status
>> Aug  2 08:58:29 hv1 tapdisk[31978]: closed image
>>
>>/var/run/sr-mount/cdd49fd7-b7fd-0247-e277-2c8e4530a976/205-2-bd987063-a03
>>7-3a5f-9796-f0d5ddee3c31.iso
>> (0 users, state: 0x00000002, type: 0)
>> Aug  2 08:58:29 hv1 tapdisk[31978]: tapdisk-log: closing after 0 errors
>> Aug  2 08:58:29 hv1 tapdisk[31978]: tapdisk-syslog: 11 messages, 909
>> bytes, xmits: 12, failed: 0, dropped: 0
>> Aug  2 08:58:29 hv1 tapdisk[31978]: tapdisk-control: draining 1
>>connections
>> Aug  2 08:58:29 hv1 tapdisk[31978]: tapdisk-control: done
>> Aug  2 08:58:29 hv1 xapi: [error|hv1|372223|Async.VM.start_on
>> R:15a23795dd78|xapi] Vmops.start_paused caught: INVALID_SOURCE: [ Unable
>> to access a required file in the specified repository:
>> file:///tmp/cdrom-repo-ocZrAO/isolinux/vmlinuz.; ;  ]
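One sanity check on the tapdisk lines above: the "post sector_shift" value is simply the image size in bytes divided by the 512-byte sector size (a shift of 9). A minimal sketch confirming the log's numbers:

```python
# tapdisk reports the image size before and after the sector shift.
# With 512-byte sectors the shift is 9, since 2**9 == 512.
SECTOR_SHIFT = 9

def to_sectors(size_bytes: int) -> int:
    """Convert a byte count into 512-byte sectors."""
    return size_bytes >> SECTOR_SHIFT

# Values from the log: pre sector_shift [379250688], post [740724].
assert to_sectors(379250688) == 740724
```

So the ISO attached cleanly and has a plausible size; the failure comes afterwards, when eliloader treats the CD as a RHEL-style install repository and cannot read isolinux/vmlinuz from it.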
>>
>>
>>
>>> -----Original Message-----
>>> From: Nik Martin [mailto:nik.martin@nfinausa.com]
>>> Sent: Thursday, August 02, 2012 3:38 AM
>>> To: cloudstack-users@incubator.apache.org
>>> Subject: Cryptic Error creating VM from ISO
>>>
>>>     I had a request to upload an ISO of XCP for a developer to test
>>>some
>>> API work he's doing.  So I downloaded the 1.5 XCP beta and added it
>>> to CS.  When I try to create a VM from the XCP ISO, it fails with the
>>> very cryptic "Unable to create a deployment for VM[User|i-2-135-VM]"
>>>
>>> The log shows many varied errors, but none of them give a definitive
>>> answer.  Worse, when the VM creation fails, CS doesn't clean up after
>>> itself, and now I have 10 or 12 VMs in state "Expunging".  I can create
>>> VMs using other templates but with the same service offerings as I'm
>>> trying to create the XCP VM with.  Most errors I see are about not
>>> enough resources, but there are plenty of resources, and as I said,
>>> using the same service offering with other templates works fine.
>>>
>>>
>>>
>>>
>>> 2012-08-01 16:44:19,841 DEBUG [xen.resource.CitrixResourceBase]
>>> (DirectAgent-374:null) VBD c0e05d8b-bda3-49b4-400b-043c1945744f created
>>> for
>>>
>>>Vol[135|ISO|nfs://172.16.5.2/export/secondary/template/tmpl/2/205//205-2
>>>-bd987063-a037-3a5f-9796-f0d5ddee3c31.iso|0]
>>> 2012-08-01 16:44:19,841 DEBUG [xen.resource.CitrixResourceBase]
>>> (DirectAgent-374:null) Creating VIF for i-2-135-VM on nic
>>> [Nic:Guest-10.1.1.220-vlan://510]
>>> 2012-08-01 16:44:19,841 DEBUG [xen.resource.CitrixResourceBase]
>>> (DirectAgent-374:null) Looking for network named guest_pub
>>> 2012-08-01 16:44:19,896 DEBUG [xen.resource.CitrixResourceBase]
>>> (DirectAgent-374:null) Found a network called guest_pub on
>>> host=172.16.5.5;  Network=620948b7-4081-53df-a370-a28868e52c19;
>>> pif=c2847562-742c-3f95-71ff-a8f63ca6330f
>>> 2012-08-01 16:44:19,923 DEBUG [xen.resource.CitrixResourceBase]
>>> (DirectAgent-374:null) Creating VLAN 510 on host 172.16.5.5 on device
>>>eth1
>>> 2012-08-01 16:44:19,960 DEBUG [agent.manager.AgentManagerImpl]
>>> (AgentManager-Handler-15:null) Ping from 19
>>> 2012-08-01 16:44:20,102 DEBUG [agent.manager.AgentManagerImpl]
>>> (AgentManager-Handler-1:null) Ping from 18
>>> 2012-08-01 16:44:20,919 DEBUG [cloud.server.StatsCollector]
>>> (StatsCollector-1:null) VmStatsCollector is running...
>>> 2012-08-01 16:44:20,937 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-48:null) Seq 16-479526995: Executing request
>>> 2012-08-01 16:44:21,162 DEBUG [xen.resource.CitrixResourceBase]
>>> (DirectAgent-374:null) VLAN is created for 510.  The uuid is
>>> b0763cac-7c7d-4486-2279-57896801f370
>>> 2012-08-01 16:44:21,199 DEBUG [xen.resource.CitrixResourceBase]
>>> (DirectAgent-374:null) Created a vif
>>> d2c4ec16-920a-35fc-4e0b-e77f7a2f1e10 on 0
>>> 2012-08-01 16:44:21,340 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-170:null) Ping from 17
>>> 2012-08-01 16:44:21,447 DEBUG [xen.resource.CitrixResourceBase]
>>> (DirectAgent-48:null) Vm cpu utilization 0.0125
>>> 2012-08-01 16:44:21,447 DEBUG [xen.resource.CitrixResourceBase]
>>> (DirectAgent-48:null) Vm cpu utilization 0.0125
>>> 2012-08-01 16:44:21,447 DEBUG [xen.resource.CitrixResourceBase]
>>> (DirectAgent-48:null) Vm cpu utilization 0.011250000000000001
>>> 2012-08-01 16:44:21,447 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-48:null) Seq 16-479526995: Response Received:
>>> 2012-08-01 16:44:21,447 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (DirectAgent-48:null) Cleanup succeeded. Details null
>>> 2012-08-01 16:44:21,447 DEBUG [agent.transport.Request]
>>> (StatsCollector-1:null) Seq 16-479526995: Received:  { Ans: , MgmtId:
>>> 130577622632, via: 16, Ver: v1, Flags: 10, { GetVmStatsAnswer } }
>>> 2012-08-01 16:44:21,447 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (StatsCollector-1:null) Cleanup succeeded. Details null
>>> 2012-08-01 16:44:21,460 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-457:null) Ping from 16
>>> 2012-08-01 16:44:25,944 DEBUG
>>> [storage.secondary.SecondaryStorageManagerImpl] (secstorage-1:null)
>>>Zone
>>> 2 is ready to launch secondary storage VM
>>> 2012-08-01 16:44:26,038 DEBUG
>>> [cloud.consoleproxy.ConsoleProxyManagerImpl] (consoleproxy-1:null) Zone
>>> 2 is ready to launch console proxy
>>> 2012-08-01 16:44:26,232 DEBUG
>>> [network.router.VirtualNetworkApplianceManagerImpl]
>>> (RouterStatusMonitor-1:null) Found 2 routers.
>>> 2012-08-01 16:44:26,269 WARN  [xen.resource.CitrixResourceBase]
>>> (DirectAgent-374:null) Task failed! Task record:                 uuid:
>>> 4995c5df-3d6a-548f-3feb-257dbfed99a6
>>>                 created: Wed Aug 01 16:44:21 CDT 2012
>>>                finished: Wed Aug 01 16:44:27 CDT 2012
>>> 2012-08-01 16:44:26,284 WARN  [xen.resource.CitrixResourceBase]
>>> (DirectAgent-374:null) Unable to start VM(i-2-135-VM) on
>>> host(3624054e-b343-4786-95c7-d82d2aad0044) due to Task failed! Task
>>> record:                 uuid: 4995c5df-3d6a-548f-3feb-257dbfed99a6
>>>                 created: Wed Aug 01 16:44:21 CDT 2012
>>>                finished: Wed Aug 01 16:44:27 CDT 2012
>>>                 created: Wed Aug 01 16:44:21 CDT 2012
>>>                finished: Wed Aug 01 16:44:27 CDT 2012
>>> 2012-08-01 16:44:26,294 WARN  [xen.resource.CitrixResourceBase]
>>> (DirectAgent-374:null) Catch Exception: class
>>> com.cloud.utils.exception.CloudRuntimeException due to
>>> com.cloud.utils.exception.CloudRuntimeException: Unable to start
>>> VM(i-2-135-VM) on host(3624054e-b343-4786-95c7-d82d2aad0044) due to
>>>Task
>>> failed! Task record:                 uuid:
>>> 4995c5df-3d6a-548f-3feb-257dbfed99a6
>>>                 created: Wed Aug 01 16:44:21 CDT 2012
>>>                finished: Wed Aug 01 16:44:27 CDT 2012
>>>                 created: Wed Aug 01 16:44:21 CDT 2012
>>>                finished: Wed Aug 01 16:44:27 CDT 2012
>>> 2012-08-01 16:44:26,295 WARN  [xen.resource.CitrixResourceBase]
>>> (DirectAgent-374:null) Unable to start i-2-135-VM due to
>>>                 created: Wed Aug 01 16:44:21 CDT 2012
>>>                finished: Wed Aug 01 16:44:27 CDT 2012
>>> 2012-08-01 16:44:26,360 WARN  [xen.resource.CitrixResourceBase]
>>> (DirectAgent-374:null) Unable to clean up VBD due to
>>> 2012-08-01 16:44:26,375 WARN  [xen.resource.CitrixResourceBase]
>>> (DirectAgent-374:null) Unable to clean up VBD due to
>>> 2012-08-01 16:44:26,390 WARN  [xen.resource.CitrixResourceBase]
>>> (DirectAgent-374:null) Unable to cleanup VIF
>>> 2012-08-01 16:44:27,731 INFO  [xen.resource.XenServer56Resource]
>>> (DirectAgent-374:null) Catch com.xensource.xenapi.Types$InvalidValue:
>>> failed to destory VLAN eth1 on host
>>>3624054e-b343-4786-95c7-d82d2aad0044
>>> due to The value given is invalid
>>> 2012-08-01 16:44:27,731 DEBUG [xen.resource.CitrixResourceBase]
>>> (DirectAgent-374:null) The VM is in stopped state, detected problem
>>> during startup : i-2-135-VM
>>> 2012-08-01 16:44:27,731 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-374:null) Seq 17-2018508893: Response Received:
>>> 2012-08-01 16:44:27,731 DEBUG [agent.transport.Request]
>>> (DirectAgent-374:null) Seq 17-2018508893: Processing:  { Ans: , MgmtId:
>>> 130577622632, via: 17, Ver: v1, Flags: 110,
>>>
>>>[{"StartAnswer":{"vm":{"id":135,"name":"i-2-135-VM","bootloader":"CD","t
>>>ype":"User","cpus":1,"speed":1000,"minRam":1073741824,"maxRam":107374182
>>>4,"arch":"x86_64","os":"CentOS
>>> 6.0
>>>
>>>(64-bit)","bootArgs":"","rebootOnCrash":false,"enableHA":false,"limitCpu
>>>Use":false,"vncPassword":"a88cf66d4a6d9df","params":{},"disks":[{"id":14
>>>7,"name":"ROOT-135","mountPoint":"/iqn.2012-01.com.nfinausa:san01/1","pa
>>>th":"ff2e7337-72fa-4fa0-8195-14ae5de1ecef","size":21474836480,"type":"RO
>>>OT","storagePoolType":"IscsiLUN","storagePoolUuid":"776299a8-a770-30f7-9
>>>f6b-8c422790f5b7","deviceId":0},{"id":135,"name":"Xen
>>> Cloud Platform
>>>
>>>1.5","path":"nfs://172.16.5.2/export/secondary/template/tmpl/2/205//205-
>>>2-bd987063-a037-3a5f-9796-f0d5ddee3c31.iso","size":0,"type":"ISO","stora
>>>gePoolType":"ISO","deviceId":3,"guestOsType":"CentOS
>>> 6.0
>>>
>>>(64-bit)"}],"nics":[{"deviceId":0,"networkRateMbps":200,"defaultNic":tru
>>>e,"ip":"10.1.1.220","netmask":"255.255.255.0","gateway":"10.1.1.1","mac"
>>>:"02:00:3d:fc:00:0b","dns1":"8.8.8.8","dns2":"","broadcastType":"Vlan","
>>>type":"Guest","broadcastUri":"vlan://510","isolationUri":"vlan://510","i
>>>sSecurityGroupEnabled":false,"name":"guest_pub"}]},"result":false,"detai
>>>ls":"Unable
>>> to start i-2-135-VM due to ","wait":0}}] }
>>> 2012-08-01 16:44:27,732 WARN  [cloud.vm.VirtualMachineManagerImpl]
>>> (DirectAgent-374:null) Cleanup failed due to Unable to start i-2-135-VM
>>> due to
>>> 2012-08-01 16:44:27,732 DEBUG [agent.transport.Request]
>>> (Job-Executor-8:job-206) Seq 17-2018508893: Received:  { Ans: , MgmtId:
>>> 130577622632, via: 17, Ver: v1, Flags: 110, { StartAnswer } }
>>> 2012-08-01 16:44:27,732 DEBUG [agent.manager.AgentAttache]
>>> (DirectAgent-374:null) Seq 17-2018508893: No more commands found
>>> 2012-08-01 16:44:27,732 WARN  [cloud.vm.VirtualMachineManagerImpl]
>>> (Job-Executor-8:job-206) Cleanup failed due to Unable to start
>>> i-2-135-VM due to
>>> 2012-08-01 16:44:27,739 INFO  [cloud.vm.VirtualMachineManagerImpl]
>>> (Job-Executor-8:job-206) Unable to start VM on Host[-17-Routing] due to
>>> Unable to start i-2-135-VM due to
>>> 2012-08-01 16:44:27,742 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (Job-Executor-8:job-206) Cleaning up resources for the vm
>>> VM[User|i-2-135-VM] in Starting state
>>> 2012-08-01 16:44:27,744 DEBUG [agent.transport.Request]
>>> (Job-Executor-8:job-206) Seq 17-2018508894: Sending  { Cmd , MgmtId:
>>> 130577622632, via: 17, Ver: v1, Flags: 100111,
>>> [{"StopCommand":{"isProxy":false,"vmName":"i-2-135-VM","wait":0}}] }
>>> 2012-08-01 16:44:27,744 DEBUG [agent.transport.Request]
>>> (Job-Executor-8:job-206) Seq 17-2018508894: Executing:  { Cmd , MgmtId:
>>> 130577622632, via: 17, Ver: v1, Flags: 100111,
>>> [{"StopCommand":{"isProxy":false,"vmName":"i-2-135-VM","wait":0}}] }
>>> 2012-08-01 16:44:27,744 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-383:null) Seq 17-2018508894: Executing request
>>> 2012-08-01 16:44:27,911 INFO  [xen.resource.CitrixResourceBase]
>>> (DirectAgent-383:null) VM does not exist on
>>> XenServer3624054e-b343-4786-95c7-d82d2aad0044
>>> 2012-08-01 16:44:27,911 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-383:null) Seq 17-2018508894: Response Received:
>>> 2012-08-01 16:44:27,911 DEBUG [agent.transport.Request]
>>> (DirectAgent-383:null) Seq 17-2018508894: Processing:  { Ans: , MgmtId:
>>> 130577622632, via: 17, Ver: v1, Flags: 110,
>>>
>>>[{"StopAnswer":{"vncPort":0,"bytesSent":0,"bytesReceived":0,"result":tru
>>>e,"details":"VM
>>> does not exist","wait":0}}] }
>>> 2012-08-01 16:44:27,911 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (DirectAgent-383:null) Cleanup succeeded. Details VM does not exist
>>> 2012-08-01 16:44:27,911 DEBUG [agent.transport.Request]
>>> (Job-Executor-8:job-206) Seq 17-2018508894: Received:  { Ans: , MgmtId:
>>> 130577622632, via: 17, Ver: v1, Flags: 110, { StopAnswer } }
>>> 2012-08-01 16:44:27,911 DEBUG [agent.manager.AgentAttache]
>>> (DirectAgent-383:null) Seq 17-2018508894: No more commands found
>>> 2012-08-01 16:44:27,911 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (Job-Executor-8:job-206) Cleanup succeeded. Details VM does not exist
>>> 2012-08-01 16:44:27,920 DEBUG [cloud.network.NetworkManagerImpl]
>>> (Job-Executor-8:job-206) Service SecurityGroup is not supported in the
>>> network id=211
>>> 2012-08-01 16:44:27,931 DEBUG [cloud.network.NetworkManagerImpl]
>>> (Job-Executor-8:job-206) Changing active number of nics for network
>>> id=211 on -1
>>> 2012-08-01 16:44:27,933 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (Job-Executor-8:job-206) Successfully cleanued up resources for the vm
>>> VM[User|i-2-135-VM] in Starting state
>>> 2012-08-01 16:44:27,935 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (Job-Executor-8:job-206) Root volume is ready, need to place VM in
>>> volume's cluster
>>> 2012-08-01 16:44:27,935 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (Job-Executor-8:job-206) Vol[147|vm=135|ROOT] is READY, changing
>>> deployment plan to use this pool's dcId: 2 , podId: 1 , and clusterId:
>>>1
>>> 2012-08-01 16:44:27,936 DEBUG [cloud.deploy.FirstFitPlanner]
>>> (Job-Executor-8:job-206) DeploymentPlanner allocation algorithm: random
>>> 2012-08-01 16:44:27,936 DEBUG [cloud.deploy.FirstFitPlanner]
>>> (Job-Executor-8:job-206) Trying to allocate a host and storage pools
>>> from dc:2, pod:1,cluster:1, requested cpu: 1000, requested ram:
>>>1073741824
>>> 2012-08-01 16:44:27,936 DEBUG [cloud.deploy.FirstFitPlanner]
>>> (Job-Executor-8:job-206) Is ROOT volume READY (pool already
>>>allocated)?: Yes
>>> 2012-08-01 16:44:27,936 DEBUG [cloud.deploy.FirstFitPlanner]
>>> (Job-Executor-8:job-206) Searching resources only under specified
>>>Cluster: 1
>>> 2012-08-01 16:44:27,941 DEBUG [cloud.deploy.FirstFitPlanner]
>>> (Job-Executor-8:job-206) Checking resources in Cluster: 1 under Pod: 1
>>> 2012-08-01 16:44:27,941 DEBUG [cloud.deploy.FirstFitPlanner]
>>> (Job-Executor-8:job-206) Calling HostAllocators to find suitable hosts
>>> 2012-08-01 16:44:27,941 DEBUG [allocator.impl.FirstFitAllocator]
>>> (Job-Executor-8:job-206 FirstFitRoutingAllocator) Looking for hosts in
>>> dc: 2  pod:1  cluster:1
>>> 2012-08-01 16:44:27,943 DEBUG [allocator.impl.FirstFitAllocator]
>>> (Job-Executor-8:job-206 FirstFitRoutingAllocator) FirstFitAllocator has
>>> 3 hosts to check for allocation: [Host[-17-Routing], Host[-13-Routing],
>>> Host[-16-Routing]]
>>> 2012-08-01 16:44:27,946 DEBUG [allocator.impl.FirstFitAllocator]
>>> (Job-Executor-8:job-206 FirstFitRoutingAllocator) Found 3 hosts for
>>> allocation after prioritization: [Host[-17-Routing], Host[-13-Routing],
>>> Host[-16-Routing]]
>>> 2012-08-01 16:44:27,946 DEBUG [allocator.impl.FirstFitAllocator]
>>> (Job-Executor-8:job-206 FirstFitRoutingAllocator) Looking for
>>> speed=1000Mhz, Ram=1024
>>> 2012-08-01 16:44:27,946 DEBUG [allocator.impl.FirstFitAllocator]
>>> (Job-Executor-8:job-206 FirstFitRoutingAllocator) Host name: hv3,
>>> hostId: 17 is in avoid set, skipping this and trying other available
>>>hosts
>>> 2012-08-01 16:44:27,948 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (Job-Executor-8:job-206 FirstFitRoutingAllocator) Checking if host: 13
>>> has enough capacity for requested CPU: 1000 and requested RAM:
>>> 1073741824 , cpuOverprovisioningFactor: 1.0
>>> 2012-08-01 16:44:27,949 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (Job-Executor-8:job-206 FirstFitRoutingAllocator) Hosts's actual total
>>> CPU: 57456 and CPU after applying overprovisioning: 57456
>>> 2012-08-01 16:44:27,949 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (Job-Executor-8:job-206 FirstFitRoutingAllocator) Free CPU: 55956 ,
>>> Requested CPU: 1000
>>> 2012-08-01 16:44:27,949 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (Job-Executor-8:job-206 FirstFitRoutingAllocator) Free RAM: 96761591552
>>> , Requested RAM: 1073741824
>>> 2012-08-01 16:44:27,949 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (Job-Executor-8:job-206 FirstFitRoutingAllocator) Host has enough CPU
>>>and RAM available
>>> 2012-08-01 16:44:27,949 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (Job-Executor-8:job-206 FirstFitRoutingAllocator) STATS: Can alloc CPU
>>> from host: 13, used: 1500, reserved: 0, actual total: 57456, total with
>>> overprovisioning: 57456; requested cpu:1000,alloc_from_last_host?:false
>>> ,considerReservedCapacity?: true
>>> 2012-08-01 16:44:27,950 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (Job-Executor-8:job-206 FirstFitRoutingAllocator) STATS: Can alloc MEM
>>> from host: 13, used: 1476395008, reserved: 0, total: 98237986560;
>>> requested mem: 1073741824,alloc_from_last_host?:false
>>> ,considerReservedCapacity?: true
>>> 2012-08-01 16:44:27,950 DEBUG [allocator.impl.FirstFitAllocator]
>>> (Job-Executor-8:job-206 FirstFitRoutingAllocator) Found a suitable
>>>host,
>>> adding to list: 13
>>> 2012-08-01 16:44:27,951 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (Job-Executor-8:job-206 FirstFitRoutingAllocator) Checking if host: 16
>>> has enough capacity for requested CPU: 1000 and requested RAM:
>>> 1073741824 , cpuOverprovisioningFactor: 1.0
>>> 2012-08-01 16:44:27,953 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (Job-Executor-8:job-206 FirstFitRoutingAllocator) Hosts's actual total
>>> CPU: 57456 and CPU after applying overprovisioning: 57456
>>> 2012-08-01 16:44:27,953 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (Job-Executor-8:job-206 FirstFitRoutingAllocator) Free CPU: 33456 ,
>>> Requested CPU: 1000
>>> 2012-08-01 16:44:27,953 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (Job-Executor-8:job-206 FirstFitRoutingAllocator) Free RAM: 74722373184
>>> , Requested RAM: 1073741824
>>> 2012-08-01 16:44:27,953 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (Job-Executor-8:job-206 FirstFitRoutingAllocator) Host has enough CPU
>>> and RAM available
>>> 2012-08-01 16:44:27,953 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (Job-Executor-8:job-206 FirstFitRoutingAllocator) STATS: Can alloc CPU
>>> from host: 16, used: 24000, reserved: 0, actual total: 57456, total
>>>with
>>> overprovisioning: 57456; requested cpu:1000,alloc_from_last_host?:false
>>> ,considerReservedCapacity?: true
>>> 2012-08-01 16:44:27,953 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (Job-Executor-8:job-206 FirstFitRoutingAllocator) STATS: Can alloc MEM
>>> from host: 16, used: 25769803776, reserved: 0, total: 100492176960;
>>> requested mem: 1073741824,alloc_from_last_host?:false
>>> ,considerReservedCapacity?: true
>>> 2012-08-01 16:44:27,953 DEBUG [allocator.impl.FirstFitAllocator]
>>> (Job-Executor-8:job-206 FirstFitRoutingAllocator) Found a suitable
>>>host,
>>> adding to list: 16
>>> 2012-08-01 16:44:27,953 DEBUG [allocator.impl.FirstFitAllocator]
>>> (Job-Executor-8:job-206 FirstFitRoutingAllocator) Host Allocator
>>> returning 2 suitable hosts
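The allocator walkthrough above reduces to a simple first-fit scan: skip hosts in the avoid set, then admit any host whose free CPU (MHz) and free RAM (bytes) cover the request. A sketch using the log's numbers (host 17's free figures are hypothetical, since the log only says it is in the avoid set):

```python
# First-fit host selection as traced in the log: the request is
# 1000 MHz of CPU and 1073741824 bytes (1 GiB) of RAM.
def first_fit(hosts, avoid, req_cpu, req_ram):
    suitable = []
    for host_id, free_cpu, free_ram in hosts:
        if host_id in avoid:
            continue  # host 17 is skipped: the earlier start attempt failed there
        if free_cpu >= req_cpu and free_ram >= req_ram:
            suitable.append(host_id)
    return suitable

hosts = [
    (17, 56456, 97885262336),  # in avoid set; free figures hypothetical
    (13, 55956, 96761591552),  # free CPU / RAM taken from the log
    (16, 33456, 74722373184),  # free CPU / RAM taken from the log
]
# Matches the log: "Host Allocator returning 2 suitable hosts" (13, 16).
assert first_fit(hosts, avoid={17}, req_cpu=1000, req_ram=1073741824) == [13, 16]
```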
>>> 2012-08-01 16:44:27,954 DEBUG [cloud.deploy.FirstFitPlanner]
>>> (Job-Executor-8:job-206) Checking suitable pools for volume (Id, Type):
>>> (147,ROOT)
>>> 2012-08-01 16:44:27,954 DEBUG [cloud.deploy.FirstFitPlanner]
>>> (Job-Executor-8:job-206) Volume is in READY state and has pool already
>>> allocated, checking if pool can be reused, poolId: 204
>>> 2012-08-01 16:44:27,955 DEBUG [cloud.deploy.FirstFitPlanner]
>>> (Job-Executor-8:job-206) Planner need not allocate a pool for this
>>> volume since its READY
>>> 2012-08-01 16:44:27,955 DEBUG [cloud.deploy.FirstFitPlanner]
>>> (Job-Executor-8:job-206) Trying to find a potential host and associated
>>> storage pools from the suitable host/pool lists for this VM
>>> 2012-08-01 16:44:27,955 DEBUG [cloud.deploy.FirstFitPlanner]
>>> (Job-Executor-8:job-206) Checking if host: 13 can access any suitable
>>> storage pool for volume: ROOT
>>> 2012-08-01 16:44:27,956 DEBUG [cloud.deploy.FirstFitPlanner]
>>> (Job-Executor-8:job-206) Host: 13 can access pool: 204
>>> 2012-08-01 16:44:27,956 DEBUG [cloud.deploy.FirstFitPlanner]
>>> (Job-Executor-8:job-206) Found a potential host id: 13 name: hv1 and
>>> associated storage pools for this VM
>>> 2012-08-01 16:44:27,957 DEBUG [cloud.deploy.FirstFitPlanner]
>>> (Job-Executor-8:job-206) Returning Deployment Destination:
>>>
>>>Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool
>>>(Id))]
>>> : Dest[Zone(2)-Pod(1)-Cluster(1)-Host(13)-Storage()]
>>> 2012-08-01 16:44:27,957 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (Job-Executor-8:job-206) Deployment found  - P0=VM[User|i-2-135-VM],
>>>
>>>P0=Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->P
>>>ool(Id))]
>>> : Dest[Zone(2)-Pod(1)-Cluster(1)-Host(13)-Storage()]
>>> 2012-08-01 16:44:27,962 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (Job-Executor-8:job-206) VM state transitted from :Starting to Starting
>>> with event: OperationRetryvm's original host id: null new host id: 13
>>> host id before state transition: 17
>>> 2012-08-01 16:44:27,965 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (Job-Executor-8:job-206) Hosts's actual total CPU: 57456 and CPU after
>>> applying overprovisioning: 57456
>>> 2012-08-01 16:44:27,965 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (Job-Executor-8:job-206) release cpu from host: 17, old used:
>>> 1000,reserved: 0, actual total: 57456, total with overprovisioning:
>>> 57456; new used: 0,reserved:0; movedfromreserved:
>>> false,moveToReserveredfalse
>>> 2012-08-01 16:44:27,965 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (Job-Executor-8:job-206) release mem from host: 17, old used:
>>> 1073741824,reserved: 0, total: 98958404160; new used: 0,reserved:0;
>>> movedfromreserved: false,moveToReserveredfalse
>>> 2012-08-01 16:44:27,971 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (Job-Executor-8:job-206) Hosts's actual total CPU: 57456 and CPU after
>>> applying overprovisioning: 57456
>>> 2012-08-01 16:44:27,971 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (Job-Executor-8:job-206) We are allocating VM, increasing the used
>>> capacity of this host:13
>>> 2012-08-01 16:44:27,971 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (Job-Executor-8:job-206) Current Used CPU: 1500 , Free CPU:55956
>>> ,Requested CPU: 1000
>>> 2012-08-01 16:44:27,971 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (Job-Executor-8:job-206) Current Used RAM: 1476395008 , Free
>>> RAM:96761591552 ,Requested RAM: 1073741824
>>> 2012-08-01 16:44:27,971 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (Job-Executor-8:job-206) CPU STATS after allocation: for host: 13, old
>>> used: 1500, old reserved: 0, actual total: 57456, total with
>>> overprovisioning: 57456; new used:2500, reserved:0; requested
>>> cpu:1000,alloc_from_last:false
>>> 2012-08-01 16:44:27,971 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (Job-Executor-8:job-206) RAM STATS after allocation: for host: 13, old
>>> used: 1476395008, old reserved: 0, total: 98237986560; new used:
>>> 2550136832, reserved: 0; requested mem:
>>>1073741824,alloc_from_last:false
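The "STATS after allocation" entries are plain bookkeeping: new used capacity is old used plus the request. Checking the log's own numbers:

```python
# Capacity accounting for host 13 after the VM is placed there;
# all figures are taken from the log lines above.
old_used_cpu, req_cpu = 1500, 1000               # MHz
old_used_ram, req_ram = 1476395008, 1073741824   # bytes

assert old_used_cpu + req_cpu == 2500          # log: "new used:2500"
assert old_used_ram + req_ram == 2550136832    # log: "new used: 2550136832"
```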
>>> 2012-08-01 16:44:27,974 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (Job-Executor-8:job-206) VM is being started in podId: 1
>>> 2012-08-01 16:44:27,977 DEBUG [cloud.network.NetworkManagerImpl]
>>> (Job-Executor-8:job-206) Network id=211 is already implemented
>>> 2012-08-01 16:44:27,985 DEBUG [cloud.network.NetworkManagerImpl]
>>> (Job-Executor-8:job-206) Service SecurityGroup is not supported in the
>>> network id=211
>>> 2012-08-01 16:44:27,991 DEBUG [cloud.network.NetworkManagerImpl]
>>> (Job-Executor-8:job-206) Changing active number of nics for network
>>> id=211 on 1
>>> 2012-08-01 16:44:27,994 DEBUG [cloud.network.NetworkManagerImpl]
>>> (Job-Executor-8:job-206) Asking JuniperSRX to prepare for
>>> Nic[469-135-28c5f2c5-110c-4de4-98f4-fe96866a2f0d-10.1.1.220]
>>> 2012-08-01 16:44:27,996 DEBUG [cloud.network.NetworkManagerImpl]
>>> (Job-Executor-8:job-206) Asking Netscaler to prepare for
>>> Nic[469-135-28c5f2c5-110c-4de4-98f4-fe96866a2f0d-10.1.1.220]
>>> 2012-08-01 16:44:27,999 DEBUG [cloud.network.NetworkManagerImpl]
>>> (Job-Executor-8:job-206) Asking F5BigIp to prepare for
>>> Nic[469-135-28c5f2c5-110c-4de4-98f4-fe96866a2f0d-10.1.1.220]
>>> 2012-08-01 16:44:28,002 DEBUG [cloud.network.NetworkManagerImpl]
>>> (Job-Executor-8:job-206) Asking VirtualRouter to prepare for
>>> Nic[469-135-28c5f2c5-110c-4de4-98f4-fe96866a2f0d-10.1.1.220]
>>> 2012-08-01 16:44:28,008 DEBUG
>>> [network.router.VirtualNetworkApplianceManagerImpl]
>>> (Job-Executor-8:job-206) Starting a router for Ntwk[211|Guest|7] in
>>> datacenter:com.cloud.dc.DataCenterVO$$EnhancerByCGLIB$$22635440@2
>>> 2012-08-01 16:44:28,019 DEBUG [cloud.network.NetworkManagerImpl]
>>> (Job-Executor-8:job-206) Service SecurityGroup is not supported in the
>>> network id=211
>>> 2012-08-01 16:44:28,021 DEBUG
>>> [network.router.VirtualNetworkApplianceManagerImpl]
>>> (Job-Executor-8:job-206) Applying dhcp entry in network
>>>Ntwk[211|Guest|7]
>>> 2012-08-01 16:44:28,033 DEBUG [agent.transport.Request]
>>> (Job-Executor-8:job-206) Seq 13-1102643298: Sending  { Cmd , MgmtId:
>>> 130577622632, via: 13, Ver: v1, Flags: 100111,
>>>
>>>[{"routing.DhcpEntryCommand":{"vmMac":"02:00:3d:fc:00:0b","vmIpAddress":
>>>"10.1.1.220","vmName":"XCP","defaultRouter":"10.1.1.1","defaultDns":"10.
>>>1.1.1","accessDetails":{"router.guest.ip":"10.1.1.1","zone.network.type"
>>>:"Advanced","router.name":"r-126-VM","router.ip":"169.254.2.97"},"wait":
>>>0}}]
>>> }
>>> 2012-08-01 16:44:28,033 DEBUG [agent.transport.Request]
>>> (Job-Executor-8:job-206) Seq 13-1102643298: Executing:  { Cmd , MgmtId:
>>> 130577622632, via: 13, Ver: v1, Flags: 100111,
>>>
>>>[{"routing.DhcpEntryCommand":{"vmMac":"02:00:3d:fc:00:0b","vmIpAddress":
>>>"10.1.1.220","vmName":"XCP","defaultRouter":"10.1.1.1","defaultDns":"10.
>>>1.1.1","accessDetails":{"router.guest.ip":"10.1.1.1","zone.network.type"
>>>:"Advanced","router.name":"r-126-VM","router.ip":"169.254.2.97"},"wait":
>>>0}}]
>>> }
>>> 2012-08-01 16:44:28,033 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-104:null) Seq 13-1102643298: Executing request
>>> 2012-08-01 16:44:30,455 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-104:null) Seq 13-1102643298: Response Received:
>>> 2012-08-01 16:44:30,455 DEBUG [agent.transport.Request]
>>> (DirectAgent-104:null) Seq 13-1102643298: Processing:  { Ans: , MgmtId:
>>> 130577622632, via: 13, Ver: v1, Flags: 110,
>>> [{"Answer":{"result":true,"wait":0}}] }
>>> 2012-08-01 16:44:30,455 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (DirectAgent-104:null) Cleanup succeeded. Details null
>>> 2012-08-01 16:44:30,455 DEBUG [agent.transport.Request]
>>> (Job-Executor-8:job-206) Seq 13-1102643298: Received:  { Ans: , MgmtId:
>>> 130577622632, via: 13, Ver: v1, Flags: 110, { Answer } }
>>> 2012-08-01 16:44:30,455 DEBUG [agent.manager.AgentAttache]
>>> (DirectAgent-104:null) Seq 13-1102643298: No more commands found
>>> 2012-08-01 16:44:30,455 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (Job-Executor-8:job-206) Cleanup succeeded. Details null
>>> 2012-08-01 16:44:30,465 DEBUG [cloud.network.NetworkManagerImpl]
>>> (Job-Executor-8:job-206) Service SecurityGroup is not supported in the
>>> network id=211
>>> 2012-08-01 16:44:30,467 DEBUG
>>> [network.router.VirtualNetworkApplianceManagerImpl]
>>> (Job-Executor-8:job-206) Applying userdata and password entry in
>>>network
>>> Ntwk[211|Guest|7]
>>> 2012-08-01 16:44:30,480 DEBUG [agent.transport.Request]
>>> (Job-Executor-8:job-206) Seq 13-1102643299: Sending  { Cmd , MgmtId:
>>> 130577622632, via: 13, Ver: v1, Flags: 100111,
>>> [{"routing.SavePasswordCommand":{"password":"fnirq_cnffjbeq","vmIpAddress":"10.1.1.220","vmName":"XCP","accessDetails":{"router.guest.ip":"10.1.1.1","zone.network.type":"Advanced","router.ip":"169.254.2.97","router.name":"r-126-VM"},"wait":0}},{"routing.VmDataCommand":{"vmIpAddress":"10.1.1.220","vmName":"XCP","accessDetails":{"router.guest.ip":"10.1.1.1","zone.network.type":"Advanced","router.ip":"169.254.2.97","router.name":"r-126-VM"},"wait":0}}] }
>>> 2012-08-01 16:44:30,481 DEBUG [agent.transport.Request]
>>> (Job-Executor-8:job-206) Seq 13-1102643299: Executing:  { Cmd , MgmtId:
>>> 130577622632, via: 13, Ver: v1, Flags: 100111,
>>> [{"routing.SavePasswordCommand":{"password":"fnirq_cnffjbeq","vmIpAddress":"10.1.1.220","vmName":"XCP","accessDetails":{"router.guest.ip":"10.1.1.1","zone.network.type":"Advanced","router.ip":"169.254.2.97","router.name":"r-126-VM"},"wait":0}},{"routing.VmDataCommand":{"vmIpAddress":"10.1.1.220","vmName":"XCP","accessDetails":{"router.guest.ip":"10.1.1.1","zone.network.type":"Advanced","router.ip":"169.254.2.97","router.name":"r-126-VM"},"wait":0}}] }
>>> 2012-08-01 16:44:30,481 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-428:null) Seq 13-1102643299: Executing request
>>> 2012-08-01 16:44:34,029 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-58:null) Ping from 13
>>> 2012-08-01 16:44:34,961 DEBUG [agent.manager.AgentManagerImpl]
>>> (AgentManager-Handler-13:null) Ping from 19
>>> 2012-08-01 16:44:35,106 DEBUG [agent.manager.AgentManagerImpl]
>>> (AgentManager-Handler-3:null) Ping from 18
>>> 2012-08-01 16:44:36,299 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-138:null) Ping from 17
>>> 2012-08-01 16:44:36,773 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-320:null) Ping from 16
>>> 2012-08-01 16:44:38,337 DEBUG [cloud.server.StatsCollector]
>>> (StatsCollector-2:null) StorageCollector is running...
>>> 2012-08-01 16:44:38,404 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (AgentManager-Handler-4:null) Cleanup succeeded. Details null
>>> 2012-08-01 16:44:38,404 DEBUG [agent.transport.Request]
>>> (StatsCollector-2:null) Seq 19-380633146: Received:  { Ans: , MgmtId:
>>> 130577622632, via: 19, Ver: v1, Flags: 10, { GetStorageStatsAnswer } }
>>> 2012-08-01 16:44:38,404 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (StatsCollector-2:null) Cleanup succeeded. Details null
>>> 2012-08-01 16:44:38,411 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-266:null) Seq 16-479526996: Executing request
>>> 2012-08-01 16:44:39,078 DEBUG [cloud.server.StatsCollector]
>>> (StatsCollector-3:null) HostStatsCollector is running...
>>> 2012-08-01 16:44:39,089 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-464:null) Seq 13-1102643300: Executing request
>>> 2012-08-01 16:44:39,305 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-464:null) Seq 13-1102643300: Response Received:
>>> 2012-08-01 16:44:39,305 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (DirectAgent-464:null) Cleanup succeeded. Details null
>>> 2012-08-01 16:44:39,305 DEBUG [agent.transport.Request]
>>> (StatsCollector-3:null) Seq 13-1102643300: Received:  { Ans: , MgmtId:
>>> 130577622632, via: 13, Ver: v1, Flags: 10, { GetHostStatsAnswer } }
>>> 2012-08-01 16:44:39,305 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (StatsCollector-3:null) Cleanup succeeded. Details null
>>> 2012-08-01 16:44:39,310 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-336:null) Seq 16-479526997: Executing request
>>> 2012-08-01 16:44:39,326 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-266:null) Seq 16-479526996: Response Received:
>>> 2012-08-01 16:44:39,326 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (DirectAgent-266:null) Cleanup succeeded. Details null
>>> 2012-08-01 16:44:39,326 DEBUG [agent.transport.Request]
>>> (StatsCollector-2:null) Seq 16-479526996: Received:  { Ans: , MgmtId:
>>> 130577622632, via: 16, Ver: v1, Flags: 10, { GetStorageStatsAnswer } }
>>> 2012-08-01 16:44:39,326 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (StatsCollector-2:null) Cleanup succeeded. Details null
>>> 2012-08-01 16:44:39,700 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-336:null) Seq 16-479526997: Response Received:
>>> 2012-08-01 16:44:39,700 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (DirectAgent-336:null) Cleanup succeeded. Details null
>>> 2012-08-01 16:44:39,700 DEBUG [agent.transport.Request]
>>> (StatsCollector-3:null) Seq 16-479526997: Received:  { Ans: , MgmtId:
>>> 130577622632, via: 16, Ver: v1, Flags: 10, { GetHostStatsAnswer } }
>>> 2012-08-01 16:44:39,700 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (StatsCollector-3:null) Cleanup succeeded. Details null
>>> 2012-08-01 16:44:39,705 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-334:null) Seq 17-2018508895: Executing request
>>> 2012-08-01 16:44:40,091 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-334:null) Seq 17-2018508895: Response Received:
>>> 2012-08-01 16:44:40,091 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (DirectAgent-334:null) Cleanup succeeded. Details null
>>> 2012-08-01 16:44:40,091 DEBUG [agent.transport.Request]
>>> (StatsCollector-3:null) Seq 17-2018508895: Received:  { Ans: , MgmtId:
>>> 130577622632, via: 17, Ver: v1, Flags: 10, { GetHostStatsAnswer } }
>>> 2012-08-01 16:44:40,091 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (StatsCollector-3:null) Cleanup succeeded. Details null
>>> 2012-08-01 16:44:41,955 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-428:null) Seq 13-1102643299: Response Received:
>>> 2012-08-01 16:44:41,955 DEBUG [agent.transport.Request]
>>> (DirectAgent-428:null) Seq 13-1102643299: Processing:  { Ans: , MgmtId:
>>> 130577622632, via: 13, Ver: v1, Flags: 110,
>>> [{"Answer":{"result":true,"wait":0}},{"Answer":{"result":true,"wait":0}}] }
>>> 2012-08-01 16:44:41,955 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (DirectAgent-428:null) Cleanup succeeded. Details null
>>> 2012-08-01 16:44:41,955 DEBUG [agent.transport.Request]
>>> (Job-Executor-8:job-206) Seq 13-1102643299: Received:  { Ans: , MgmtId:
>>> 130577622632, via: 13, Ver: v1, Flags: 110, { Answer, Answer } }
>>> 2012-08-01 16:44:41,955 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (DirectAgent-428:null) Cleanup succeeded. Details null
>>> 2012-08-01 16:44:41,955 DEBUG [agent.manager.AgentAttache]
>>> (DirectAgent-428:null) Seq 13-1102643299: No more commands found
>>> 2012-08-01 16:44:41,955 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (Job-Executor-8:job-206) Cleanup succeeded. Details null
>>> 2012-08-01 16:44:41,955 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (Job-Executor-8:job-206) Cleanup succeeded. Details null
>>> 2012-08-01 16:44:41,955 DEBUG [cloud.network.NetworkManagerImpl]
>>> (Job-Executor-8:job-206) Asking Ovs to prepare for
>>> Nic[469-135-28c5f2c5-110c-4de4-98f4-fe96866a2f0d-10.1.1.220]
>>> 2012-08-01 16:44:41,955 DEBUG [cloud.network.NetworkManagerImpl]
>>> (Job-Executor-8:job-206) Asking ExternalDhcpServer to prepare for
>>> Nic[469-135-28c5f2c5-110c-4de4-98f4-fe96866a2f0d-10.1.1.220]
>>> 2012-08-01 16:44:41,961 DEBUG [cloud.network.NetworkManagerImpl]
>>> (Job-Executor-8:job-206) Asking BareMetal to prepare for
>>> Nic[469-135-28c5f2c5-110c-4de4-98f4-fe96866a2f0d-10.1.1.220]
>>> 2012-08-01 16:44:41,961 DEBUG [cloud.network.NetworkManagerImpl]
>>> (Job-Executor-8:job-206) Asking SecurityGroupProvider to prepare for
>>> Nic[469-135-28c5f2c5-110c-4de4-98f4-fe96866a2f0d-10.1.1.220]
>>> 2012-08-01 16:44:41,964 DEBUG [cloud.network.NetworkManagerImpl]
>>> (Job-Executor-8:job-206) Service SecurityGroup is not supported in the
>>> network id=211
>>> 2012-08-01 16:44:41,965 DEBUG [cloud.storage.StorageManagerImpl]
>>> (Job-Executor-8:job-206) Checking if we need to prepare 1 volumes for
>>> VM[User|i-2-135-VM]
>>> 2012-08-01 16:44:41,965 DEBUG [cloud.storage.StorageManagerImpl]
>>> (Job-Executor-8:job-206) No need to recreate the volume:
>>> Vol[147|vm=135|ROOT], since it already has a pool assigned: 204, adding
>>> disk to VM
>>> 2012-08-01 16:44:41,978 DEBUG [agent.transport.Request]
>>> (Job-Executor-8:job-206) Seq 13-1102643301: Sending  { Cmd , MgmtId:
>>> 130577622632, via: 13, Ver: v1, Flags: 100111,
>>> [{"StartCommand":{"vm":{"id":135,"name":"i-2-135-VM","bootloader":"CD","type":"User","cpus":1,"speed":1000,"minRam":1073741824,"maxRam":1073741824,"arch":"x86_64","os":"CentOS 6.0 (64-bit)","bootArgs":"","rebootOnCrash":false,"enableHA":false,"limitCpuUse":false,"vncPassword":"a88cf66d4a6d9df","params":{},"disks":[{"id":147,"name":"ROOT-135","mountPoint":"/iqn.2012-01.com.nfinausa:san01/1","path":"ff2e7337-72fa-4fa0-8195-14ae5de1ecef","size":21474836480,"type":"ROOT","storagePoolType":"IscsiLUN","storagePoolUuid":"776299a8-a770-30f7-9f6b-8c422790f5b7","deviceId":0},{"id":135,"name":"Xen Cloud Platform 1.5","path":"nfs://172.16.5.2/export/secondary/template/tmpl/2/205//205-2-bd987063-a037-3a5f-9796-f0d5ddee3c31.iso","size":0,"type":"ISO","storagePoolType":"ISO","deviceId":3,"guestOsType":"CentOS 6.0 (64-bit)"}],"nics":[{"deviceId":0,"networkRateMbps":200,"defaultNic":true,"ip":"10.1.1.220","netmask":"255.255.255.0","gateway":"10.1.1.1","mac":"02:00:3d:fc:00:0b","dns1":"8.8.8.8","dns2":"","broadcastType":"Vlan","type":"Guest","broadcastUri":"vlan://510","isolationUri":"vlan://510","isSecurityGroupEnabled":false,"name":"guest_pub"}]},"wait":0}}] }
>>> 2012-08-01 16:44:41,978 DEBUG [agent.transport.Request]
>>> (Job-Executor-8:job-206) Seq 13-1102643301: Executing:  { Cmd , MgmtId:
>>> 130577622632, via: 13, Ver: v1, Flags: 100111,
>>> [{"StartCommand":{"vm":{"id":135,"name":"i-2-135-VM","bootloader":"CD","type":"User","cpus":1,"speed":1000,"minRam":1073741824,"maxRam":1073741824,"arch":"x86_64","os":"CentOS 6.0 (64-bit)","bootArgs":"","rebootOnCrash":false,"enableHA":false,"limitCpuUse":false,"vncPassword":"a88cf66d4a6d9df","params":{},"disks":[{"id":147,"name":"ROOT-135","mountPoint":"/iqn.2012-01.com.nfinausa:san01/1","path":"ff2e7337-72fa-4fa0-8195-14ae5de1ecef","size":21474836480,"type":"ROOT","storagePoolType":"IscsiLUN","storagePoolUuid":"776299a8-a770-30f7-9f6b-8c422790f5b7","deviceId":0},{"id":135,"name":"Xen Cloud Platform 1.5","path":"nfs://172.16.5.2/export/secondary/template/tmpl/2/205//205-2-bd987063-a037-3a5f-9796-f0d5ddee3c31.iso","size":0,"type":"ISO","storagePoolType":"ISO","deviceId":3,"guestOsType":"CentOS 6.0 (64-bit)"}],"nics":[{"deviceId":0,"networkRateMbps":200,"defaultNic":true,"ip":"10.1.1.220","netmask":"255.255.255.0","gateway":"10.1.1.1","mac":"02:00:3d:fc:00:0b","dns1":"8.8.8.8","dns2":"","broadcastType":"Vlan","type":"Guest","broadcastUri":"vlan://510","isolationUri":"vlan://510","isSecurityGroupEnabled":false,"name":"guest_pub"}]},"wait":0}}] }
>>> 2012-08-01 16:44:41,978 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-445:null) Seq 13-1102643301: Executing request
>>> 2012-08-01 16:44:42,076 DEBUG [xen.resource.CitrixResourceBase]
>>> (DirectAgent-445:null) 1. The VM i-2-135-VM is in Starting state.
>>> 2012-08-01 16:44:42,125 DEBUG [xen.resource.XenServer56FP1Resource]
>>> (DirectAgent-445:null) Created VM 464cd918-debd-d6c0-8e53-ccaf2b63bfea
>>> for i-2-135-VM
>>> 2012-08-01 16:44:42,182 DEBUG [xen.resource.CitrixResourceBase]
>>> (DirectAgent-445:null) VBD d0be6eda-340d-9935-bef5-9313fe15d94d created
>>> for Vol[147|ROOT|ff2e7337-72fa-4fa0-8195-14ae5de1ecef|21474836480]
>>> 2012-08-01 16:44:43,353 DEBUG [xen.resource.CitrixResourceBase]
>>> (DirectAgent-445:null) VBD 13c50150-9614-4a32-fbd4-d0489ab57c76 created
>>> for Vol[135|ISO|nfs://172.16.5.2/export/secondary/template/tmpl/2/205//205-2-bd987063-a037-3a5f-9796-f0d5ddee3c31.iso|0]
>>> 2012-08-01 16:44:43,353 DEBUG [xen.resource.CitrixResourceBase]
>>> (DirectAgent-445:null) Creating VIF for i-2-135-VM on nic
>>> [Nic:Guest-10.1.1.220-vlan://510]
>>> 2012-08-01 16:44:43,353 DEBUG [xen.resource.CitrixResourceBase]
>>> (DirectAgent-445:null) Looking for network named guest_pub
>>> 2012-08-01 16:44:43,401 DEBUG [xen.resource.CitrixResourceBase]
>>> (DirectAgent-445:null) Found a network called guest_pub on
>>> host=172.16.5.3;  Network=620948b7-4081-53df-a370-a28868e52c19;
>>> pif=817e2173-007b-0f73-5a0b-122e53dc83fb
>>> 2012-08-01 16:44:43,417 DEBUG [xen.resource.CitrixResourceBase]
>>> (DirectAgent-445:null) Found a network called
>>> VLAN-620948b7-4081-53df-a370-a28868e52c19-510 on host=172.16.5.3;
>>> Network=a7fa9542-0ebb-71fd-e1fd-68980917b92b;
>>> pif=a7e75661-4223-06be-3ecd-064dedcf72db
>>> 2012-08-01 16:44:43,443 DEBUG [xen.resource.CitrixResourceBase]
>>> (DirectAgent-445:null) Created a vif
>>> fbd12fe6-7fad-eea0-dde4-08cdac23da34 on 0
>>> 2012-08-01 16:44:46,498 WARN  [xen.resource.CitrixResourceBase]
>>> (DirectAgent-445:null) Task failed! Task record:                 uuid:
>>> 4b457405-4527-777c-862c-9430899bb447
>>>                 created: Wed Aug 01 16:44:43 CDT 2012
>>>                finished: Wed Aug 01 16:44:46 CDT 2012
>>> 2012-08-01 16:44:46,508 WARN  [xen.resource.CitrixResourceBase]
>>> (DirectAgent-445:null) Unable to start VM(i-2-135-VM) on
>>> host(e0e2c7af-8adb-49d7-bae5-51dcdb43aa16) due to Task failed! Task
>>> record:                 uuid: 4b457405-4527-777c-862c-9430899bb447
>>>                 created: Wed Aug 01 16:44:43 CDT 2012
>>>                finished: Wed Aug 01 16:44:46 CDT 2012
>>>                 created: Wed Aug 01 16:44:43 CDT 2012
>>>                finished: Wed Aug 01 16:44:46 CDT 2012
>>> 2012-08-01 16:44:46,516 WARN  [xen.resource.CitrixResourceBase]
>>> (DirectAgent-445:null) Catch Exception: class
>>> com.cloud.utils.exception.CloudRuntimeException due to
>>> com.cloud.utils.exception.CloudRuntimeException: Unable to start
>>> VM(i-2-135-VM) on host(e0e2c7af-8adb-49d7-bae5-51dcdb43aa16) due to
>>>Task
>>> failed! Task record:                 uuid:
>>> 4b457405-4527-777c-862c-9430899bb447
>>>                 created: Wed Aug 01 16:44:43 CDT 2012
>>>                finished: Wed Aug 01 16:44:46 CDT 2012
>>>                 created: Wed Aug 01 16:44:43 CDT 2012
>>>                finished: Wed Aug 01 16:44:46 CDT 2012
>>> 2012-08-01 16:44:46,516 WARN  [xen.resource.CitrixResourceBase]
>>> (DirectAgent-445:null) Unable to start i-2-135-VM due to
>>>                 created: Wed Aug 01 16:44:43 CDT 2012
>>>                finished: Wed Aug 01 16:44:46 CDT 2012
>>> 2012-08-01 16:44:46,588 WARN  [xen.resource.CitrixResourceBase]
>>> (DirectAgent-445:null) Unable to clean up VBD due to
>>> 2012-08-01 16:44:46,604 WARN  [xen.resource.CitrixResourceBase]
>>> (DirectAgent-445:null) Unable to clean up VBD due to
>>> 2012-08-01 16:44:46,619 WARN  [xen.resource.CitrixResourceBase]
>>> (DirectAgent-445:null) Unable to cleanup VIF
>>> 2012-08-01 16:44:46,656 INFO  [xen.resource.XenServer56Resource]
>>> (DirectAgent-445:null) Catch com.xensource.xenapi.Types$VifInUse:
>>>failed
>>> to destory VLAN eth1 on host e0e2c7af-8adb-49d7-bae5-51dcdb43aa16 due
>>>to
>>> Network has active VIFs
>>> 2012-08-01 16:44:46,656 DEBUG [xen.resource.CitrixResourceBase]
>>> (DirectAgent-445:null) The VM is in stopped state, detected problem
>>> during startup : i-2-135-VM
>>> 2012-08-01 16:44:46,656 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-445:null) Seq 13-1102643301: Response Received:
>>> 2012-08-01 16:44:46,656 DEBUG [agent.transport.Request]
>>> (DirectAgent-445:null) Seq 13-1102643301: Processing:  { Ans: , MgmtId:
>>> 130577622632, via: 13, Ver: v1, Flags: 110,
>>> [{"StartAnswer":{"vm":{"id":135,"name":"i-2-135-VM","bootloader":"CD","type":"User","cpus":1,"speed":1000,"minRam":1073741824,"maxRam":1073741824,"arch":"x86_64","os":"CentOS 6.0 (64-bit)","bootArgs":"","rebootOnCrash":false,"enableHA":false,"limitCpuUse":false,"vncPassword":"a88cf66d4a6d9df","params":{},"disks":[{"id":147,"name":"ROOT-135","mountPoint":"/iqn.2012-01.com.nfinausa:san01/1","path":"ff2e7337-72fa-4fa0-8195-14ae5de1ecef","size":21474836480,"type":"ROOT","storagePoolType":"IscsiLUN","storagePoolUuid":"776299a8-a770-30f7-9f6b-8c422790f5b7","deviceId":0},{"id":135,"name":"Xen Cloud Platform 1.5","path":"nfs://172.16.5.2/export/secondary/template/tmpl/2/205//205-2-bd987063-a037-3a5f-9796-f0d5ddee3c31.iso","size":0,"type":"ISO","storagePoolType":"ISO","deviceId":3,"guestOsType":"CentOS 6.0 (64-bit)"}],"nics":[{"deviceId":0,"networkRateMbps":200,"defaultNic":true,"ip":"10.1.1.220","netmask":"255.255.255.0","gateway":"10.1.1.1","mac":"02:00:3d:fc:00:0b","dns1":"8.8.8.8","dns2":"","broadcastType":"Vlan","type":"Guest","broadcastUri":"vlan://510","isolationUri":"vlan://510","isSecurityGroupEnabled":false,"name":"guest_pub"}]},"result":false,"details":"Unable to start i-2-135-VM due to ","wait":0}}] }
>>> 2012-08-01 16:44:46,656 DEBUG [agent.transport.Request]
>>> (Job-Executor-8:job-206) Seq 13-1102643301: Received:  { Ans: , MgmtId:
>>> 130577622632, via: 13, Ver: v1, Flags: 110, { StartAnswer } }
>>> 2012-08-01 16:44:46,657 WARN  [cloud.vm.VirtualMachineManagerImpl]
>>> (Job-Executor-8:job-206) Cleanup failed due to Unable to start
>>> i-2-135-VM due to
>>> 2012-08-01 16:44:46,657 WARN  [cloud.vm.VirtualMachineManagerImpl]
>>> (DirectAgent-445:null) Cleanup failed due to Unable to start i-2-135-VM
>>> due to
>>> 2012-08-01 16:44:46,658 DEBUG [agent.manager.AgentAttache]
>>> (DirectAgent-445:null) Seq 13-1102643301: No more commands found
>>> 2012-08-01 16:44:46,661 INFO  [cloud.vm.VirtualMachineManagerImpl]
>>> (Job-Executor-8:job-206) Unable to start VM on Host[-13-Routing] due to
>>> Unable to start i-2-135-VM due to
>>> 2012-08-01 16:44:46,664 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (Job-Executor-8:job-206) Cleaning up resources for the vm
>>> VM[User|i-2-135-VM] in Starting state
>>> 2012-08-01 16:44:46,665 DEBUG [agent.transport.Request]
>>> (Job-Executor-8:job-206) Seq 13-1102643302: Sending  { Cmd , MgmtId:
>>> 130577622632, via: 13, Ver: v1, Flags: 100111,
>>> [{"StopCommand":{"isProxy":false,"vmName":"i-2-135-VM","wait":0}}] }
>>> 2012-08-01 16:44:46,665 DEBUG [agent.transport.Request]
>>> (Job-Executor-8:job-206) Seq 13-1102643302: Executing:  { Cmd , MgmtId:
>>> 130577622632, via: 13, Ver: v1, Flags: 100111,
>>> [{"StopCommand":{"isProxy":false,"vmName":"i-2-135-VM","wait":0}}] }
>>> 2012-08-01 16:44:46,665 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-27:null) Seq 13-1102643302: Executing request
>>> 2012-08-01 16:44:46,758 INFO  [xen.resource.CitrixResourceBase]
>>> (DirectAgent-27:null) VM does not exist on
>>> XenServere0e2c7af-8adb-49d7-bae5-51dcdb43aa16
>>> 2012-08-01 16:44:46,758 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-27:null) Seq 13-1102643302: Response Received:
>>> 2012-08-01 16:44:46,759 DEBUG [agent.transport.Request]
>>> (DirectAgent-27:null) Seq 13-1102643302: Processing:  { Ans: , MgmtId:
>>> 130577622632, via: 13, Ver: v1, Flags: 110,
>>> [{"StopAnswer":{"vncPort":0,"bytesSent":0,"bytesReceived":0,"result":true,"details":"VM does not exist","wait":0}}] }
>>> 2012-08-01 16:44:46,759 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (DirectAgent-27:null) Cleanup succeeded. Details VM does not exist
>>> 2012-08-01 16:44:46,759 DEBUG [agent.manager.AgentAttache]
>>> (DirectAgent-27:null) Seq 13-1102643302: No more commands found
>>> 2012-08-01 16:44:46,759 DEBUG [agent.transport.Request]
>>> (Job-Executor-8:job-206) Seq 13-1102643302: Received:  { Ans: , MgmtId:
>>> 130577622632, via: 13, Ver: v1, Flags: 110, { StopAnswer } }
>>> 2012-08-01 16:44:46,759 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (Job-Executor-8:job-206) Cleanup succeeded. Details VM does not exist
>>> 2012-08-01 16:44:46,768 DEBUG [cloud.network.NetworkManagerImpl]
>>> (Job-Executor-8:job-206) Service SecurityGroup is not supported in the
>>> network id=211
>>> 2012-08-01 16:44:46,775 DEBUG [cloud.network.NetworkManagerImpl]
>>> (Job-Executor-8:job-206) Changing active number of nics for network
>>> id=211 on -1
>>> 2012-08-01 16:44:46,778 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (Job-Executor-8:job-206) Successfully cleanued up resources for the vm
>>> VM[User|i-2-135-VM] in Starting state
>>> 2012-08-01 16:44:46,780 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (Job-Executor-8:job-206) Root volume is ready, need to place VM in
>>> volume's cluster
>>> 2012-08-01 16:44:46,780 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (Job-Executor-8:job-206) Vol[147|vm=135|ROOT] is READY, changing
>>> deployment plan to use this pool's dcId: 2 , podId: 1 , and clusterId:
>>>1
>>> 2012-08-01 16:44:46,781 DEBUG [cloud.deploy.FirstFitPlanner]
>>> (Job-Executor-8:job-206) DeploymentPlanner allocation algorithm: random
>>> 2012-08-01 16:44:46,781 DEBUG [cloud.deploy.FirstFitPlanner]
>>> (Job-Executor-8:job-206) Trying to allocate a host and storage pools
>>> from dc:2, pod:1,cluster:1, requested cpu: 1000, requested ram:
>>>1073741824
>>> 2012-08-01 16:44:46,781 DEBUG [cloud.deploy.FirstFitPlanner]
>>> (Job-Executor-8:job-206) Is ROOT volume READY (pool already
>>>allocated)?: Yes
>>> 2012-08-01 16:44:46,781 DEBUG [cloud.deploy.FirstFitPlanner]
>>> (Job-Executor-8:job-206) Searching resources only under specified
>>>Cluster: 1
>>> 2012-08-01 16:44:46,786 DEBUG [cloud.deploy.FirstFitPlanner]
>>> (Job-Executor-8:job-206) Checking resources in Cluster: 1 under Pod: 1
>>> 2012-08-01 16:44:46,786 DEBUG [cloud.deploy.FirstFitPlanner]
>>> (Job-Executor-8:job-206) Calling HostAllocators to find suitable hosts
>>> 2012-08-01 16:44:46,786 DEBUG [allocator.impl.FirstFitAllocator]
>>> (Job-Executor-8:job-206 FirstFitRoutingAllocator) Looking for hosts in
>>> dc: 2  pod:1  cluster:1
>>> 2012-08-01 16:44:46,788 DEBUG [allocator.impl.FirstFitAllocator]
>>> (Job-Executor-8:job-206 FirstFitRoutingAllocator) FirstFitAllocator has
>>> 3 hosts to check for allocation: [Host[-13-Routing], Host[-17-Routing],
>>> Host[-16-Routing]]
>>> 2012-08-01 16:44:46,791 DEBUG [allocator.impl.FirstFitAllocator]
>>> (Job-Executor-8:job-206 FirstFitRoutingAllocator) Found 3 hosts for
>>> allocation after prioritization: [Host[-13-Routing], Host[-17-Routing],
>>> Host[-16-Routing]]
>>> 2012-08-01 16:44:46,791 DEBUG [allocator.impl.FirstFitAllocator]
>>> (Job-Executor-8:job-206 FirstFitRoutingAllocator) Looking for
>>> speed=1000Mhz, Ram=1024
>>> 2012-08-01 16:44:46,791 DEBUG [allocator.impl.FirstFitAllocator]
>>> (Job-Executor-8:job-206 FirstFitRoutingAllocator) Host name: hv1,
>>> hostId: 13 is in avoid set, skipping this and trying other available
>>>hosts
>>> 2012-08-01 16:44:46,791 DEBUG [allocator.impl.FirstFitAllocator]
>>> (Job-Executor-8:job-206 FirstFitRoutingAllocator) Host name: hv3,
>>> hostId: 17 is in avoid set, skipping this and trying other available
>>>hosts
>>> 2012-08-01 16:44:46,793 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (Job-Executor-8:job-206 FirstFitRoutingAllocator) Checking if host: 16
>>> has enough capacity for requested CPU: 1000 and requested RAM:
>>> 1073741824 , cpuOverprovisioningFactor: 1.0
>>> 2012-08-01 16:44:46,795 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (Job-Executor-8:job-206 FirstFitRoutingAllocator) Hosts's actual total
>>> CPU: 57456 and CPU after applying overprovisioning: 57456
>>> 2012-08-01 16:44:46,795 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (Job-Executor-8:job-206 FirstFitRoutingAllocator) Free CPU: 33456 ,
>>> Requested CPU: 1000
>>> 2012-08-01 16:44:46,795 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (Job-Executor-8:job-206 FirstFitRoutingAllocator) Free RAM: 74722373184
>>> , Requested RAM: 1073741824
>>> 2012-08-01 16:44:46,795 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (Job-Executor-8:job-206 FirstFitRoutingAllocator) Host has enough CPU
>>> and RAM available
>>> 2012-08-01 16:44:46,795 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (Job-Executor-8:job-206 FirstFitRoutingAllocator) STATS: Can alloc CPU
>>> from host: 16, used: 24000, reserved: 0, actual total: 57456, total
>>>with
>>> overprovisioning: 57456; requested cpu:1000,alloc_from_last_host?:false
>>> ,considerReservedCapacity?: true
>>> 2012-08-01 16:44:46,795 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (Job-Executor-8:job-206 FirstFitRoutingAllocator) STATS: Can alloc MEM
>>> from host: 16, used: 25769803776, reserved: 0, total: 100492176960;
>>> requested mem: 1073741824,alloc_from_last_host?:false
>>> ,considerReservedCapacity?: true
>>> 2012-08-01 16:44:46,795 DEBUG [allocator.impl.FirstFitAllocator]
>>> (Job-Executor-8:job-206 FirstFitRoutingAllocator) Found a suitable
>>>host,
>>> adding to list: 16
>>> 2012-08-01 16:44:46,795 DEBUG [allocator.impl.FirstFitAllocator]
>>> (Job-Executor-8:job-206 FirstFitRoutingAllocator) Host Allocator
>>> returning 1 suitable hosts
>>> 2012-08-01 16:44:46,796 DEBUG [cloud.deploy.FirstFitPlanner]
>>> (Job-Executor-8:job-206) Checking suitable pools for volume (Id, Type):
>>> (147,ROOT)
>>> 2012-08-01 16:44:46,796 DEBUG [cloud.deploy.FirstFitPlanner]
>>> (Job-Executor-8:job-206) Volume is in READY state and has pool already
>>> allocated, checking if pool can be reused, poolId: 204
>>> 2012-08-01 16:44:46,797 DEBUG [cloud.deploy.FirstFitPlanner]
>>> (Job-Executor-8:job-206) Planner need not allocate a pool for this
>>> volume since its READY
>>> 2012-08-01 16:44:46,797 DEBUG [cloud.deploy.FirstFitPlanner]
>>> (Job-Executor-8:job-206) Trying to find a potenial host and associated
>>> storage pools from the suitable host/pool lists for this VM
>>> 2012-08-01 16:44:46,797 DEBUG [cloud.deploy.FirstFitPlanner]
>>> (Job-Executor-8:job-206) Checking if host: 16 can access any suitable
>>> storage pool for volume: ROOT
>>> 2012-08-01 16:44:46,798 DEBUG [cloud.deploy.FirstFitPlanner]
>>> (Job-Executor-8:job-206) Host: 16 can access pool: 204
>>> 2012-08-01 16:44:46,798 DEBUG [cloud.deploy.FirstFitPlanner]
>>> (Job-Executor-8:job-206) Found a potential host id: 16 name: hv2 and
>>> associated storage pools for this VM
>>> 2012-08-01 16:44:46,799 DEBUG [cloud.deploy.FirstFitPlanner]
>>> (Job-Executor-8:job-206) Returning Deployment Destination:
>>> Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(2)-Pod(1)-Cluster(1)-Host(16)-Storage()]
>>> 2012-08-01 16:44:46,799 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (Job-Executor-8:job-206) Deployment found  - P0=VM[User|i-2-135-VM],
>>> P0=Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(2)-Pod(1)-Cluster(1)-Host(16)-Storage()]
>>> 2012-08-01 16:44:46,804 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (Job-Executor-8:job-206) VM state transitted from :Starting to Starting
>>> with event: OperationRetryvm's original host id: null new host id: 16
>>> host id before state transition: 13
>>> 2012-08-01 16:44:46,807 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (Job-Executor-8:job-206) Hosts's actual total CPU: 57456 and CPU after
>>> applying overprovisioning: 57456
>>> 2012-08-01 16:44:46,807 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (Job-Executor-8:job-206) release cpu from host: 13, old used:
>>> 2500,reserved: 0, actual total: 57456, total with overprovisioning:
>>> 57456; new used: 1500,reserved:0; movedfromreserved:
>>> false,moveToReserveredfalse
>>> 2012-08-01 16:44:46,807 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (Job-Executor-8:job-206) release mem from host: 13, old used:
>>> 2550136832,reserved: 0, total: 98237986560; new used:
>>> 1476395008,reserved:0; movedfromreserved: false,moveToReserveredfalse
>>> 2012-08-01 16:44:46,813 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (Job-Executor-8:job-206) Hosts's actual total CPU: 57456 and CPU after
>>> applying overprovisioning: 57456
>>> 2012-08-01 16:44:46,813 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (Job-Executor-8:job-206) We are allocating VM, increasing the used
>>> capacity of this host:16
>>> 2012-08-01 16:44:46,813 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (Job-Executor-8:job-206) Current Used CPU: 24000 , Free CPU:33456
>>> ,Requested CPU: 1000
>>> 2012-08-01 16:44:46,813 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (Job-Executor-8:job-206) Current Used RAM: 25769803776 , Free
>>> RAM:74722373184 ,Requested RAM: 1073741824
>>> 2012-08-01 16:44:46,814 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (Job-Executor-8:job-206) CPU STATS after allocation: for host: 16, old
>>> used: 24000, old reserved: 0, actual total: 57456, total with
>>> overprovisioning: 57456; new used:25000, reserved:0; requested
>>> cpu:1000,alloc_from_last:false
>>> 2012-08-01 16:44:46,814 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (Job-Executor-8:job-206) RAM STATS after allocation: for host: 16, old
>>> used: 25769803776, old reserved: 0, total: 100492176960; new used:
>>> 26843545600, reserved: 0; requested mem:
>>>1073741824,alloc_from_last:false
>>> 2012-08-01 16:44:46,817 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (Job-Executor-8:job-206) VM is being started in podId: 1
>>> 2012-08-01 16:44:46,819 DEBUG [cloud.network.NetworkManagerImpl]
>>> (Job-Executor-8:job-206) Network id=211 is already implemented
>>> 2012-08-01 16:44:46,828 DEBUG [cloud.network.NetworkManagerImpl]
>>> (Job-Executor-8:job-206) Service SecurityGroup is not supported in the
>>> network id=211
>>> 2012-08-01 16:44:46,833 DEBUG [cloud.network.NetworkManagerImpl]
>>> (Job-Executor-8:job-206) Changing active number of nics for network
>>> id=211 on 1
>>> 2012-08-01 16:44:46,836 DEBUG [cloud.network.NetworkManagerImpl]
>>> (Job-Executor-8:job-206) Asking JuniperSRX to prepare for
>>> Nic[469-135-28c5f2c5-110c-4de4-98f4-fe96866a2f0d-10.1.1.220]
>>> 2012-08-01 16:44:46,839 DEBUG [cloud.network.NetworkManagerImpl]
>>> (Job-Executor-8:job-206) Asking Netscaler to prepare for
>>> Nic[469-135-28c5f2c5-110c-4de4-98f4-fe96866a2f0d-10.1.1.220]
>>> 2012-08-01 16:44:46,842 DEBUG [cloud.network.NetworkManagerImpl]
>>> (Job-Executor-8:job-206) Asking F5BigIp to prepare for
>>> Nic[469-135-28c5f2c5-110c-4de4-98f4-fe96866a2f0d-10.1.1.220]
>>> 2012-08-01 16:44:46,846 DEBUG [cloud.network.NetworkManagerImpl]
>>> (Job-Executor-8:job-206) Asking VirtualRouter to prepare for
>>> Nic[469-135-28c5f2c5-110c-4de4-98f4-fe96866a2f0d-10.1.1.220]
>>> 2012-08-01 16:44:46,852 DEBUG
>>> [network.router.VirtualNetworkApplianceManagerImpl]
>>> (Job-Executor-8:job-206) Starting a router for Ntwk[211|Guest|7] in
>>> datacenter:com.cloud.dc.DataCenterVO$$EnhancerByCGLIB$$22635440@2
>>> 2012-08-01 16:44:46,862 DEBUG [cloud.network.NetworkManagerImpl]
>>> (Job-Executor-8:job-206) Service SecurityGroup is not supported in the
>>> network id=211
>>> 2012-08-01 16:44:46,864 DEBUG
>>> [network.router.VirtualNetworkApplianceManagerImpl]
>>> (Job-Executor-8:job-206) Applying dhcp entry in network
>>>Ntwk[211|Guest|7]
>>> 2012-08-01 16:44:46,875 DEBUG [agent.transport.Request]
>>> (Job-Executor-8:job-206) Seq 13-1102643303: Sending  { Cmd , MgmtId:
>>> 130577622632, via: 13, Ver: v1, Flags: 100111,
>>> [{"routing.DhcpEntryCommand":{"vmMac":"02:00:3d:fc:00:0b","vmIpAddress":"10.1.1.220","vmName":"XCP","defaultRouter":"10.1.1.1","defaultDns":"10.1.1.1","accessDetails":{"router.guest.ip":"10.1.1.1","zone.network.type":"Advanced","router.name":"r-126-VM","router.ip":"169.254.2.97"},"wait":0}}] }
>>> 2012-08-01 16:44:46,875 DEBUG [agent.transport.Request]
>>> (Job-Executor-8:job-206) Seq 13-1102643303: Executing:  { Cmd , MgmtId:
>>> 130577622632, via: 13, Ver: v1, Flags: 100111,
>>>
>>>[{"routing.DhcpEntryCommand":{"vmMac":"02:00:3d:fc:00:0b","vmIpAddress":
>>>"10.1.1.220","vmName":"XCP","defaultRouter":"10.1.1.1","defaultDns":"10.
>>>1.1.1","accessDetails":{"router.guest.ip":"10.1.1.1","zone.network.type"
>>>:"Advanced","router.name":"r-126-VM","router.ip":"169.254.2.97"},"wait":
>>>0}}]
>>> }
>>> 2012-08-01 16:44:46,876 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-15:null) Seq 13-1102643303: Executing request
>>> 2012-08-01 16:44:49,024 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-42:null) Ping from 13
>>> 2012-08-01 16:44:49,285 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-15:null) Seq 13-1102643303: Response Received:
>>> 2012-08-01 16:44:49,285 DEBUG [agent.transport.Request]
>>> (DirectAgent-15:null) Seq 13-1102643303: Processing:  { Ans: , MgmtId:
>>> 130577622632, via: 13, Ver: v1, Flags: 110,
>>> [{"Answer":{"result":true,"wait":0}}] }
>>> 2012-08-01 16:44:49,285 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (DirectAgent-15:null) Cleanup succeeded. Details null
>>> 2012-08-01 16:44:49,285 DEBUG [agent.manager.AgentAttache]
>>> (DirectAgent-15:null) Seq 13-1102643303: No more commands found
>>> 2012-08-01 16:44:49,285 DEBUG [agent.transport.Request]
>>> (Job-Executor-8:job-206) Seq 13-1102643303: Received:  { Ans: , MgmtId:
>>> 130577622632, via: 13, Ver: v1, Flags: 110, { Answer } }
>>> 2012-08-01 16:44:49,285 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (Job-Executor-8:job-206) Cleanup succeeded. Details null
>>> 2012-08-01 16:44:49,292 DEBUG [cloud.network.NetworkManagerImpl]
>>> (Job-Executor-8:job-206) Service SecurityGroup is not supported in the
>>> network id=211
>>> 2012-08-01 16:44:49,295 DEBUG
>>> [network.router.VirtualNetworkApplianceManagerImpl]
>>> (Job-Executor-8:job-206) Applying userdata and password entry in
>>>network
>>> Ntwk[211|Guest|7]
>>> 2012-08-01 16:44:49,307 DEBUG [agent.transport.Request]
>>> (Job-Executor-8:job-206) Seq 13-1102643304: Sending  { Cmd , MgmtId:
>>> 130577622632, via: 13, Ver: v1, Flags: 100111,
>>>
>>>[{"routing.SavePasswordCommand":{"password":"fnirq_cnffjbeq","vmIpAddres
>>>s":"10.1.1.220","vmName":"XCP","accessDetails":{"router.guest.ip":"10.1.
>>>1.1","zone.network.type":"Advanced","router.ip":"169.254.2.97","router.n
>>>ame":"r-126-VM"},"wait":0}},{"routing.VmDataCommand":{"vmIpAddress":"10.
>>>1.1.220","vmName":"XCP","accessDetails":{"router.guest.ip":"10.1.1.1","z
>>>one.network.type":"Advanced","router.ip":"169.254.2.97","router.name":"r
>>>-126-VM"},"wait":0}}]
>>> }
>>> 2012-08-01 16:44:49,307 DEBUG [agent.transport.Request]
>>> (Job-Executor-8:job-206) Seq 13-1102643304: Executing:  { Cmd , MgmtId:
>>> 130577622632, via: 13, Ver: v1, Flags: 100111,
>>>
>>>[{"routing.SavePasswordCommand":{"password":"fnirq_cnffjbeq","vmIpAddres
>>>s":"10.1.1.220","vmName":"XCP","accessDetails":{"router.guest.ip":"10.1.
>>>1.1","zone.network.type":"Advanced","router.ip":"169.254.2.97","router.n
>>>ame":"r-126-VM"},"wait":0}},{"routing.VmDataCommand":{"vmIpAddress":"10.
>>>1.1.220","vmName":"XCP","accessDetails":{"router.guest.ip":"10.1.1.1","z
>>>one.network.type":"Advanced","router.ip":"169.254.2.97","router.name":"r
>>>-126-VM"},"wait":0}}]
>>> }
>>> 2012-08-01 16:44:49,307 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-169:null) Seq 13-1102643304: Executing request
>>> 2012-08-01 16:44:49,921 DEBUG [agent.manager.AgentManagerImpl]
>>> (AgentManager-Handler-5:null) Ping from 19
>>> 2012-08-01 16:44:50,102 DEBUG [agent.manager.AgentManagerImpl]
>>> (AgentManager-Handler-6:null) Ping from 18
>>> 2012-08-01 16:44:51,308 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-430:null) Ping from 17
>>> 2012-08-01 16:44:51,682 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-419:null) Ping from 16
>>> 2012-08-01 16:44:55,945 DEBUG
>>> [storage.secondary.SecondaryStorageManagerImpl] (secstorage-1:null)
>>>Zone
>>> 2 is ready to launch secondary storage VM
>>> 2012-08-01 16:44:56,037 DEBUG
>>> [cloud.consoleproxy.ConsoleProxyManagerImpl] (consoleproxy-1:null) Zone
>>> 2 is ready to launch console proxy
>>> 2012-08-01 16:44:56,231 DEBUG
>>> [network.router.VirtualNetworkApplianceManagerImpl]
>>> (RouterStatusMonitor-1:null) Found 2 routers.
>>> 2012-08-01 16:44:58,092 INFO  [agent.manager.AgentMonitor]
>>> (AgentMonitor:null) Found the following agents behind on ping: [19, 13,
>>> 18, 17, 16]
>>> 2012-08-01 16:44:58,094 DEBUG [cloud.host.Status] (AgentMonitor:null)
>>> Ping timeout for host 19, do invstigation
>>> 2012-08-01 16:44:58,097 INFO  [agent.manager.AgentManagerImpl]
>>> (AgentTaskPool-10:null) Investigating why host 19 has disconnected with
>>> event PingTimeout
>>> 2012-08-01 16:44:58,097 DEBUG [agent.manager.AgentManagerImpl]
>>> (AgentTaskPool-10:null) checking if agent (19) is alive
>>> 2012-08-01 16:44:58,098 DEBUG [cloud.host.Status] (AgentMonitor:null)
>>> Ping timeout for host 13, do invstigation
>>> 2012-08-01 16:44:58,100 INFO  [agent.manager.AgentManagerImpl]
>>> (AgentTaskPool-12:null) Investigating why host 13 has disconnected with
>>> event PingTimeout
>>> 2012-08-01 16:44:58,100 DEBUG [agent.manager.AgentManagerImpl]
>>> (AgentTaskPool-12:null) checking if agent (13) is alive
>>> 2012-08-01 16:44:58,100 DEBUG [agent.transport.Request]
>>> (AgentTaskPool-10:null) Seq 19-380633147: Sending  { Cmd , MgmtId:
>>> 130577622632, via: 19, Ver: v1, Flags: 100011,
>>> [{"CheckHealthCommand":{"wait":50}}] }
>>> 2012-08-01 16:44:58,101 DEBUG [cloud.host.Status] (AgentMonitor:null)
>>> Ping timeout for host 18, do invstigation
>>> 2012-08-01 16:44:58,103 DEBUG [agent.transport.Request]
>>> (AgentTaskPool-12:null) Seq 13-1102643305: Sending  { Cmd , MgmtId:
>>> 130577622632, via: 13, Ver: v1, Flags: 100011,
>>> [{"CheckHealthCommand":{"wait":50}}] }
>>> 2012-08-01 16:44:58,103 INFO  [agent.manager.AgentManagerImpl]
>>> (AgentTaskPool-16:null) Investigating why host 18 has disconnected with
>>> event PingTimeout
>>> 2012-08-01 16:44:58,103 DEBUG [agent.manager.AgentManagerImpl]
>>> (AgentTaskPool-16:null) checking if agent (18) is alive
>>> 2012-08-01 16:44:58,103 DEBUG [agent.transport.Request]
>>> (AgentTaskPool-12:null) Seq 13-1102643305: Executing:  { Cmd , MgmtId:
>>> 130577622632, via: 13, Ver: v1, Flags: 100011,
>>> [{"CheckHealthCommand":{"wait":50}}] }
>>> 2012-08-01 16:44:58,103 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-218:null) Seq 13-1102643305: Executing request
>>> 2012-08-01 16:44:58,104 DEBUG [cloud.host.Status] (AgentMonitor:null)
>>> Ping timeout for host 17, do invstigation
>>> 2012-08-01 16:44:58,106 INFO  [agent.manager.AgentManagerImpl]
>>> (AgentTaskPool-11:null) Investigating why host 17 has disconnected with
>>> event PingTimeout
>>> 2012-08-01 16:44:58,106 DEBUG [agent.manager.AgentManagerImpl]
>>> (AgentTaskPool-11:null) checking if agent (17) is alive
>>> 2012-08-01 16:44:58,107 DEBUG [cloud.host.Status] (AgentMonitor:null)
>>> Ping timeout for host 16, do invstigation
>>> 2012-08-01 16:44:58,107 DEBUG [agent.transport.Request]
>>> (AgentTaskPool-16:null) Seq 18-202899485: Sending  { Cmd , MgmtId:
>>> 130577622632, via: 18, Ver: v1, Flags: 100011,
>>> [{"CheckHealthCommand":{"wait":50}}] }
>>> 2012-08-01 16:44:58,109 INFO  [agent.manager.AgentManagerImpl]
>>> (AgentTaskPool-3:null) Investigating why host 16 has disconnected with
>>> event PingTimeout
>>> 2012-08-01 16:44:58,109 DEBUG [agent.manager.AgentManagerImpl]
>>> (AgentTaskPool-3:null) checking if agent (16) is alive
>>> 2012-08-01 16:44:58,109 DEBUG [agent.transport.Request]
>>> (AgentTaskPool-11:null) Seq 17-2018508896: Sending  { Cmd , MgmtId:
>>> 130577622632, via: 17, Ver: v1, Flags: 100011,
>>> [{"CheckHealthCommand":{"wait":50}}] }
>>> 2012-08-01 16:44:58,109 DEBUG [agent.transport.Request]
>>> (AgentTaskPool-11:null) Seq 17-2018508896: Executing:  { Cmd , MgmtId:
>>> 130577622632, via: 17, Ver: v1, Flags: 100011,
>>> [{"CheckHealthCommand":{"wait":50}}] }
>>> 2012-08-01 16:44:58,109 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-411:null) Seq 17-2018508896: Executing request
>>> 2012-08-01 16:44:58,111 DEBUG [agent.transport.Request]
>>> (AgentTaskPool-3:null) Seq 16-479526998: Sending  { Cmd , MgmtId:
>>> 130577622632, via: 16, Ver: v1, Flags: 100011,
>>> [{"CheckHealthCommand":{"wait":50}}] }
>>> 2012-08-01 16:44:58,112 DEBUG [agent.transport.Request]
>>> (AgentTaskPool-3:null) Seq 16-479526998: Executing:  { Cmd , MgmtId:
>>> 130577622632, via: 16, Ver: v1, Flags: 100011,
>>> [{"CheckHealthCommand":{"wait":50}}] }
>>> 2012-08-01 16:44:58,114 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-129:null) Seq 16-479526998: Executing request
>>> 2012-08-01 16:44:58,143 DEBUG [agent.transport.Request]
>>> (AgentManager-Handler-7:null) Seq 19-380633147: Processing:  { Ans: ,
>>> MgmtId: 130577622632, via: 19, Ver: v1, Flags: 10,
>>> [{"CheckHealthAnswer":{"result":true,"details":"resource is
>>> alive","wait":0}}] }
>>> 2012-08-01 16:44:58,144 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (AgentManager-Handler-7:null) Cleanup succeeded. Details resource is
>>>alive
>>> 2012-08-01 16:44:58,144 DEBUG [agent.transport.Request]
>>> (AgentTaskPool-10:null) Seq 19-380633147: Received:  { Ans: , MgmtId:
>>> 130577622632, via: 19, Ver: v1, Flags: 10, { CheckHealthAnswer } }
>>> 2012-08-01 16:44:58,144 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (AgentTaskPool-10:null) Cleanup succeeded. Details resource is alive
>>> 2012-08-01 16:44:58,144 DEBUG [agent.manager.AgentManagerImpl]
>>> (AgentTaskPool-10:null) Details from executing class
>>> com.cloud.agent.api.CheckHealthCommand: resource is alive
>>> 2012-08-01 16:44:58,144 DEBUG [agent.manager.AgentManagerImpl]
>>> (AgentTaskPool-10:null) agent (19) responded to checkHeathCommand,
>>> reporting that agent is Up
>>> 2012-08-01 16:44:58,144 INFO  [agent.manager.AgentManagerImpl]
>>> (AgentTaskPool-10:null) The state determined is Up
>>> 2012-08-01 16:44:58,144 INFO  [agent.manager.AgentManagerImpl]
>>> (AgentTaskPool-10:null) Agent is determined to be up and running
>>> 2012-08-01 16:44:58,144 DEBUG [cloud.host.Status]
>>> (AgentTaskPool-10:null) Transition:[Resource state = Enabled, Agent
>>> event = Ping, Host id = 19, name = s-121-VM]
>>> 2012-08-01 16:44:58,150 DEBUG [cloud.host.Status]
>>> (AgentTaskPool-10:null) Agent status update: [id = 19; name = s-121-VM;
>>> old status = Up; event = Ping; new status = Up; old update count = 79;
>>> new update count = 80]
>>> 2012-08-01 16:44:58,151 DEBUG [agent.transport.Request]
>>> (AgentManager-Handler-8:null) Seq 18-202899485: Processing:  { Ans: ,
>>> MgmtId: 130577622632, via: 18, Ver: v1, Flags: 10,
>>> [{"CheckHealthAnswer":{"result":true,"details":"resource is
>>> alive","wait":0}}] }
>>> 2012-08-01 16:44:58,151 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (AgentManager-Handler-8:null) Cleanup succeeded. Details resource is
>>>alive
>>> 2012-08-01 16:44:58,152 DEBUG [agent.transport.Request]
>>> (AgentTaskPool-16:null) Seq 18-202899485: Received:  { Ans: , MgmtId:
>>> 130577622632, via: 18, Ver: v1, Flags: 10, { CheckHealthAnswer } }
>>> 2012-08-01 16:44:58,152 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (AgentTaskPool-16:null) Cleanup succeeded. Details resource is alive
>>> 2012-08-01 16:44:58,152 DEBUG [agent.manager.AgentManagerImpl]
>>> (AgentTaskPool-16:null) Details from executing class
>>> com.cloud.agent.api.CheckHealthCommand: resource is alive
>>> 2012-08-01 16:44:58,152 DEBUG [agent.manager.AgentManagerImpl]
>>> (AgentTaskPool-16:null) agent (18) responded to checkHeathCommand,
>>> reporting that agent is Up
>>> 2012-08-01 16:44:58,152 INFO  [agent.manager.AgentManagerImpl]
>>> (AgentTaskPool-16:null) The state determined is Up
>>> 2012-08-01 16:44:58,152 INFO  [agent.manager.AgentManagerImpl]
>>> (AgentTaskPool-16:null) Agent is determined to be up and running
>>> 2012-08-01 16:44:58,152 DEBUG [cloud.host.Status]
>>> (AgentTaskPool-16:null) Transition:[Resource state = Enabled, Agent
>>> event = Ping, Host id = 18, name = v-122-VM]
>>> 2012-08-01 16:44:58,155 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-218:null) Seq 13-1102643305: Response Received:
>>> 2012-08-01 16:44:58,155 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-411:null) Seq 17-2018508896: Response Received:
>>> 2012-08-01 16:44:58,156 DEBUG [agent.transport.Request]
>>> (DirectAgent-218:null) Seq 13-1102643305: Processing:  { Ans: , MgmtId:
>>> 130577622632, via: 13, Ver: v1, Flags: 10,
>>> [{"CheckHealthAnswer":{"result":true,"details":"resource is
>>> alive","wait":0}}] }
>>> 2012-08-01 16:44:58,156 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (DirectAgent-218:null) Cleanup succeeded. Details resource is alive
>>> 2012-08-01 16:44:58,156 DEBUG [agent.transport.Request]
>>> (DirectAgent-411:null) Seq 17-2018508896: Processing:  { Ans: , MgmtId:
>>> 130577622632, via: 17, Ver: v1, Flags: 10,
>>> [{"CheckHealthAnswer":{"result":true,"details":"resource is
>>> alive","wait":0}}] }
>>> 2012-08-01 16:44:58,156 DEBUG [agent.transport.Request]
>>> (AgentTaskPool-12:null) Seq 13-1102643305: Received:  { Ans: , MgmtId:
>>> 130577622632, via: 13, Ver: v1, Flags: 10, { CheckHealthAnswer } }
>>> 2012-08-01 16:44:58,156 DEBUG [agent.transport.Request]
>>> (AgentTaskPool-11:null) Seq 17-2018508896: Received:  { Ans: , MgmtId:
>>> 130577622632, via: 17, Ver: v1, Flags: 10, { CheckHealthAnswer } }
>>> 2012-08-01 16:44:58,156 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (AgentTaskPool-11:null) Cleanup succeeded. Details resource is alive
>>> 2012-08-01 16:44:58,156 DEBUG [agent.manager.AgentManagerImpl]
>>> (AgentTaskPool-11:null) Details from executing class
>>> com.cloud.agent.api.CheckHealthCommand: resource is alive
>>> 2012-08-01 16:44:58,156 DEBUG [agent.manager.AgentManagerImpl]
>>> (AgentTaskPool-11:null) agent (17) responded to checkHeathCommand,
>>> reporting that agent is Up
>>> 2012-08-01 16:44:58,156 INFO  [agent.manager.AgentManagerImpl]
>>> (AgentTaskPool-11:null) The state determined is Up
>>> 2012-08-01 16:44:58,156 INFO  [agent.manager.AgentManagerImpl]
>>> (AgentTaskPool-11:null) Agent is determined to be up and running
>>> 2012-08-01 16:44:58,156 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (DirectAgent-411:null) Cleanup succeeded. Details resource is alive
>>> 2012-08-01 16:44:58,156 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (AgentTaskPool-12:null) Cleanup succeeded. Details resource is alive
>>> 2012-08-01 16:44:58,156 DEBUG [agent.manager.AgentManagerImpl]
>>> (AgentTaskPool-12:null) Details from executing class
>>> com.cloud.agent.api.CheckHealthCommand: resource is alive
>>> 2012-08-01 16:44:58,156 DEBUG [agent.manager.AgentManagerImpl]
>>> (AgentTaskPool-12:null) agent (13) responded to checkHeathCommand,
>>> reporting that agent is Up
>>> 2012-08-01 16:44:58,156 INFO  [agent.manager.AgentManagerImpl]
>>> (AgentTaskPool-12:null) The state determined is Up
>>> 2012-08-01 16:44:58,156 INFO  [agent.manager.AgentManagerImpl]
>>> (AgentTaskPool-12:null) Agent is determined to be up and running
>>> 2012-08-01 16:44:58,158 DEBUG [cloud.host.Status]
>>> (AgentTaskPool-16:null) Agent status update: [id = 18; name = v-122-VM;
>>> old status = Up; event = Ping; new status = Up; old update count = 103;
>>> new update count = 104]
>>> 2012-08-01 16:44:58,158 DEBUG [cloud.host.Status]
>>> (AgentTaskPool-11:null) Transition:[Resource state = Enabled, Agent
>>> event = Ping, Host id = 17, name = hv3]
>>> 2012-08-01 16:44:58,162 DEBUG [cloud.host.Status]
>>> (AgentTaskPool-11:null) Agent status update: [id = 17; name = hv3; old
>>> status = Up; event = Ping; new status = Up; old update count = 955; new
>>> update count = 956]
>>> 2012-08-01 16:44:58,162 DEBUG [cloud.host.Status]
>>> (AgentTaskPool-12:null) Transition:[Resource state = Enabled, Agent
>>> event = Ping, Host id = 13, name = hv1]
>>> 2012-08-01 16:44:58,163 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-129:null) Seq 16-479526998: Response Received:
>>> 2012-08-01 16:44:58,163 DEBUG [agent.transport.Request]
>>> (DirectAgent-129:null) Seq 16-479526998: Processing:  { Ans: , MgmtId:
>>> 130577622632, via: 16, Ver: v1, Flags: 10,
>>> [{"CheckHealthAnswer":{"result":true,"details":"resource is
>>> alive","wait":0}}] }
>>> 2012-08-01 16:44:58,163 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (DirectAgent-129:null) Cleanup succeeded. Details resource is alive
>>> 2012-08-01 16:44:58,163 DEBUG [agent.transport.Request]
>>> (AgentTaskPool-3:null) Seq 16-479526998: Received:  { Ans: , MgmtId:
>>> 130577622632, via: 16, Ver: v1, Flags: 10, { CheckHealthAnswer } }
>>> 2012-08-01 16:44:58,163 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (AgentTaskPool-3:null) Cleanup succeeded. Details resource is alive
>>> 2012-08-01 16:44:58,164 DEBUG [agent.manager.AgentManagerImpl]
>>> (AgentTaskPool-3:null) Details from executing class
>>> com.cloud.agent.api.CheckHealthCommand: resource is alive
>>> 2012-08-01 16:44:58,164 DEBUG [agent.manager.AgentManagerImpl]
>>> (AgentTaskPool-3:null) agent (16) responded to checkHeathCommand,
>>> reporting that agent is Up
>>> 2012-08-01 16:44:58,164 INFO  [agent.manager.AgentManagerImpl]
>>> (AgentTaskPool-3:null) The state determined is Up
>>> 2012-08-01 16:44:58,164 INFO  [agent.manager.AgentManagerImpl]
>>> (AgentTaskPool-3:null) Agent is determined to be up and running
>>> 2012-08-01 16:44:58,167 DEBUG [cloud.host.Status]
>>> (AgentTaskPool-12:null) Agent status update: [id = 13; name = hv1; old
>>> status = Up; event = Ping; new status = Up; old update count = 1914;
>>>new
>>> update count = 1915]
>>> 2012-08-01 16:44:58,167 DEBUG [cloud.host.Status]
>>>(AgentTaskPool-3:null)
>>> Transition:[Resource state = Enabled, Agent event = Ping, Host id = 16,
>>> name = hv2]
>>> 2012-08-01 16:44:58,171 DEBUG [cloud.host.Status]
>>>(AgentTaskPool-3:null)
>>> Agent status update: [id = 16; name = hv2; old status = Up; event =
>>> Ping; new status = Up; old update count = 936; new update count = 937]
>>> 2012-08-01 16:44:59,257 INFO  [cloud.ha.HighAvailabilityManagerImpl]
>>> (HA-Worker-2:work-110) Processing HAWork[110-HA-124-Stopped-Scheduled]
>>> 2012-08-01 16:44:59,261 INFO  [cloud.ha.HighAvailabilityManagerImpl]
>>> (HA-Worker-2:work-110) HA on VM[DomainRouter|r-124-VM]
>>> 2012-08-01 16:44:59,269 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (HA-Worker-2:work-110) VM state transitted from :Stopped to Starting
>>> with event: StartRequestedvm's original host id: 13 new host id: null
>>> host id before state transition: null
>>> 2012-08-01 16:44:59,269 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (HA-Worker-2:work-110) Successfully transitioned to start state for
>>> VM[DomainRouter|r-124-VM] reservation id =
>>> 10b7874f-8f09-4a00-8f51-3a9375c7e1a7
>>> 2012-08-01 16:44:59,273 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (HA-Worker-2:work-110) Trying to deploy VM, vm has dcId: 2 and podId: 1
>>> 2012-08-01 16:44:59,274 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (HA-Worker-2:work-110) Deploy avoids pods: null, clusters: null,
>>>hosts: null
>>> 2012-08-01 16:44:59,276 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (HA-Worker-2:work-110) Root volume is ready, need to place VM in
>>> volume's cluster
>>> 2012-08-01 16:44:59,276 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (HA-Worker-2:work-110) Vol[132|vm=124|ROOT] is READY, changing
>>> deployment plan to use this pool's dcId: 2 , podId: 1 , and clusterId:
>>>1
>>> 2012-08-01 16:44:59,277 DEBUG [cloud.deploy.FirstFitPlanner]
>>> (HA-Worker-2:work-110) DeploymentPlanner allocation algorithm: random
>>> 2012-08-01 16:44:59,277 DEBUG [cloud.deploy.FirstFitPlanner]
>>> (HA-Worker-2:work-110) Trying to allocate a host and storage pools from
>>> dc:2, pod:1,cluster:1, requested cpu: 500, requested ram: 134217728
>>> 2012-08-01 16:44:59,277 DEBUG [cloud.deploy.FirstFitPlanner]
>>> (HA-Worker-2:work-110) Is ROOT volume READY (pool already allocated)?:
>>>Yes
>>> 2012-08-01 16:44:59,277 DEBUG [cloud.deploy.FirstFitPlanner]
>>> (HA-Worker-2:work-110) This VM has last host_id specified, trying to
>>> choose the same host: 13
>>> 2012-08-01 16:44:59,278 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (HA-Worker-2:work-110) Checking if host: 13 has enough capacity for
>>> requested CPU: 500 and requested RAM: 134217728 ,
>>> cpuOverprovisioningFactor: 1.0
>>> 2012-08-01 16:44:59,280 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (HA-Worker-2:work-110) Hosts's actual total CPU: 57456 and CPU after
>>> applying overprovisioning: 57456
>>> 2012-08-01 16:44:59,280 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (HA-Worker-2:work-110) We need to allocate to the last host again, so
>>> checking if there is enough reserved capacity
>>> 2012-08-01 16:44:59,280 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (HA-Worker-2:work-110) Reserved CPU: 0 , Requested CPU: 500
>>> 2012-08-01 16:44:59,280 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (HA-Worker-2:work-110) Reserved RAM: 0 , Requested RAM: 134217728
>>> 2012-08-01 16:44:59,280 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (HA-Worker-2:work-110) STATS: Failed to alloc resource from host: 13
>>> reservedCpu: 0, requested cpu: 500, reservedMem: 0, requested mem:
>>>134217728
>>> 2012-08-01 16:44:59,280 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (HA-Worker-2:work-110) Host does not have enough reserved CPU
>>>available,
>>> cannot allocate to this host.
>>> 2012-08-01 16:44:59,280 DEBUG [cloud.deploy.FirstFitPlanner]
>>> (HA-Worker-2:work-110) The last host of this VM does not have enough
>>> capacity
>>> 2012-08-01 16:44:59,280 DEBUG [cloud.deploy.FirstFitPlanner]
>>> (HA-Worker-2:work-110) Cannot choose the last host to deploy this VM
>>> 2012-08-01 16:44:59,280 DEBUG [cloud.deploy.FirstFitPlanner]
>>> (HA-Worker-2:work-110) Searching resources only under specified
>>>Cluster: 1
>>> 2012-08-01 16:44:59,285 DEBUG [cloud.deploy.FirstFitPlanner]
>>> (HA-Worker-2:work-110) Checking resources in Cluster: 1 under Pod: 1
>>> 2012-08-01 16:44:59,285 DEBUG [cloud.deploy.FirstFitPlanner]
>>> (HA-Worker-2:work-110) Calling HostAllocators to find suitable hosts
>>> 2012-08-01 16:44:59,285 DEBUG [allocator.impl.FirstFitAllocator]
>>> (HA-Worker-2:work-110 FirstFitRoutingAllocator) Looking for hosts in
>>>dc:
>>> 2  pod:1  cluster:1
>>> 2012-08-01 16:44:59,286 DEBUG [allocator.impl.FirstFitAllocator]
>>> (HA-Worker-2:work-110 FirstFitRoutingAllocator) FirstFitAllocator has 3
>>> hosts to check for allocation: [Host[-16-Routing], Host[-17-Routing],
>>> Host[-13-Routing]]
>>> 2012-08-01 16:44:59,289 DEBUG [allocator.impl.FirstFitAllocator]
>>> (HA-Worker-2:work-110 FirstFitRoutingAllocator) Found 3 hosts for
>>> allocation after prioritization: [Host[-16-Routing], Host[-17-Routing],
>>> Host[-13-Routing]]
>>> 2012-08-01 16:44:59,289 DEBUG [allocator.impl.FirstFitAllocator]
>>> (HA-Worker-2:work-110 FirstFitRoutingAllocator) Looking for
>>> speed=500Mhz, Ram=128
>>> 2012-08-01 16:44:59,290 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (HA-Worker-2:work-110 FirstFitRoutingAllocator) Checking if host: 16
>>>has
>>> enough capacity for requested CPU: 500 and requested RAM: 134217728 ,
>>> cpuOverprovisioningFactor: 1.0
>>> 2012-08-01 16:44:59,292 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (HA-Worker-2:work-110 FirstFitRoutingAllocator) Hosts's actual total
>>> CPU: 57456 and CPU after applying overprovisioning: 57456
>>> 2012-08-01 16:44:59,292 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (HA-Worker-2:work-110 FirstFitRoutingAllocator) Free CPU: 32456 ,
>>> Requested CPU: 500
>>> 2012-08-01 16:44:59,292 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (HA-Worker-2:work-110 FirstFitRoutingAllocator) Free RAM: 73648631360 ,
>>> Requested RAM: 134217728
>>> 2012-08-01 16:44:59,292 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (HA-Worker-2:work-110 FirstFitRoutingAllocator) Host has enough CPU and
>>> RAM available
>>> 2012-08-01 16:44:59,292 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (HA-Worker-2:work-110 FirstFitRoutingAllocator) STATS: Can alloc CPU
>>> from host: 16, used: 25000, reserved: 0, actual total: 57456, total
>>>with
>>> overprovisioning: 57456; requested cpu:500,alloc_from_last_host?:false
>>> ,considerReservedCapacity?: true
>>> 2012-08-01 16:44:59,292 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (HA-Worker-2:work-110 FirstFitRoutingAllocator) STATS: Can alloc MEM
>>> from host: 16, used: 26843545600, reserved: 0, total: 100492176960;
>>> requested mem: 134217728,alloc_from_last_host?:false
>>> ,considerReservedCapacity?: true
>>> 2012-08-01 16:44:59,292 DEBUG [allocator.impl.FirstFitAllocator]
>>> (HA-Worker-2:work-110 FirstFitRoutingAllocator) Found a suitable host,
>>> adding to list: 16
>>> 2012-08-01 16:44:59,293 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (HA-Worker-2:work-110 FirstFitRoutingAllocator) Checking if host: 17
>>>has
>>> enough capacity for requested CPU: 500 and requested RAM: 134217728 ,
>>> cpuOverprovisioningFactor: 1.0
>>> 2012-08-01 16:44:59,295 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (HA-Worker-2:work-110 FirstFitRoutingAllocator) Hosts's actual total
>>> CPU: 57456 and CPU after applying overprovisioning: 57456
>>> 2012-08-01 16:44:59,295 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (HA-Worker-2:work-110 FirstFitRoutingAllocator) Free CPU: 57456 ,
>>> Requested CPU: 500
>>> 2012-08-01 16:44:59,295 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (HA-Worker-2:work-110 FirstFitRoutingAllocator) Free RAM: 98958404160 ,
>>> Requested RAM: 134217728
>>> 2012-08-01 16:44:59,295 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (HA-Worker-2:work-110 FirstFitRoutingAllocator) Host has enough CPU and
>>> RAM available
>>> 2012-08-01 16:44:59,295 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (HA-Worker-2:work-110 FirstFitRoutingAllocator) STATS: Can alloc CPU
>>> from host: 17, used: 0, reserved: 0, actual total: 57456, total with
>>> overprovisioning: 57456; requested cpu:500,alloc_from_last_host?:false
>>> ,considerReservedCapacity?: true
>>> 2012-08-01 16:44:59,295 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (HA-Worker-2:work-110 FirstFitRoutingAllocator) STATS: Can alloc MEM
>>> from host: 17, used: 0, reserved: 0, total: 98958404160; requested mem:
>>> 134217728,alloc_from_last_host?:false ,considerReservedCapacity?: true
>>> 2012-08-01 16:44:59,295 DEBUG [allocator.impl.FirstFitAllocator]
>>> (HA-Worker-2:work-110 FirstFitRoutingAllocator) Found a suitable host,
>>> adding to list: 17
>>> 2012-08-01 16:44:59,296 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (HA-Worker-2:work-110 FirstFitRoutingAllocator) Checking if host: 13
>>>has
>>> enough capacity for requested CPU: 500 and requested RAM: 134217728 ,
>>> cpuOverprovisioningFactor: 1.0
>>> 2012-08-01 16:44:59,298 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (HA-Worker-2:work-110 FirstFitRoutingAllocator) Hosts's actual total
>>> CPU: 57456 and CPU after applying overprovisioning: 57456
>>> 2012-08-01 16:44:59,298 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (HA-Worker-2:work-110 FirstFitRoutingAllocator) Free CPU: 55956 ,
>>> Requested CPU: 500
>>> 2012-08-01 16:44:59,298 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (HA-Worker-2:work-110 FirstFitRoutingAllocator) Free RAM: 96761591552 ,
>>> Requested RAM: 134217728
>>> 2012-08-01 16:44:59,298 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (HA-Worker-2:work-110 FirstFitRoutingAllocator) Host has enough CPU and
>>> RAM available
>>> 2012-08-01 16:44:59,298 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (HA-Worker-2:work-110 FirstFitRoutingAllocator) STATS: Can alloc CPU
>>> from host: 13, used: 1500, reserved: 0, actual total: 57456, total with
>>> overprovisioning: 57456; requested cpu:500,alloc_from_last_host?:false
>>> ,considerReservedCapacity?: true
>>> 2012-08-01 16:44:59,298 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (HA-Worker-2:work-110 FirstFitRoutingAllocator) STATS: Can alloc MEM
>>> from host: 13, used: 1476395008, reserved: 0, total: 98237986560;
>>> requested mem: 134217728,alloc_from_last_host?:false
>>> ,considerReservedCapacity?: true
>>> 2012-08-01 16:44:59,298 DEBUG [allocator.impl.FirstFitAllocator]
>>> (HA-Worker-2:work-110 FirstFitRoutingAllocator) Found a suitable host,
>>> adding to list: 13
>>> 2012-08-01 16:44:59,298 DEBUG [allocator.impl.FirstFitAllocator]
>>> (HA-Worker-2:work-110 FirstFitRoutingAllocator) Host Allocator
>>>returning
>>> 3 suitable hosts
>>> 2012-08-01 16:44:59,299 DEBUG [cloud.deploy.FirstFitPlanner]
>>> (HA-Worker-2:work-110) Checking suitable pools for volume (Id, Type):
>>> (132,ROOT)
>>> 2012-08-01 16:44:59,299 DEBUG [cloud.deploy.FirstFitPlanner]
>>> (HA-Worker-2:work-110) Volume is in READY state and has pool already
>>> allocated, checking if pool can be reused, poolId: 204
>>> 2012-08-01 16:44:59,300 DEBUG [cloud.deploy.FirstFitPlanner]
>>> (HA-Worker-2:work-110) Planner need not allocate a pool for this volume
>>> since its READY
>>> 2012-08-01 16:44:59,300 DEBUG [cloud.deploy.FirstFitPlanner]
>>> (HA-Worker-2:work-110) Trying to find a potenial host and associated
>>> storage pools from the suitable host/pool lists for this VM
>>> 2012-08-01 16:44:59,300 DEBUG [cloud.deploy.FirstFitPlanner]
>>> (HA-Worker-2:work-110) Checking if host: 16 can access any suitable
>>> storage pool for volume: ROOT
>>> 2012-08-01 16:44:59,300 DEBUG [cloud.deploy.FirstFitPlanner]
>>> (HA-Worker-2:work-110) Host: 16 can access pool: 204
>>> 2012-08-01 16:44:59,300 DEBUG [cloud.deploy.FirstFitPlanner]
>>> (HA-Worker-2:work-110) Found a potential host id: 16 name: hv2 and
>>> associated storage pools for this VM
>>> 2012-08-01 16:44:59,301 DEBUG [cloud.deploy.FirstFitPlanner]
>>> (HA-Worker-2:work-110) Returning Deployment Destination:
>>>
>>>Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool
>>>(Id))]
>>> : Dest[Zone(2)-Pod(1)-Cluster(1)-Host(16)-Storage()]
>>> 2012-08-01 16:44:59,301 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (HA-Worker-2:work-110) Deployment found  -
>>>P0=VM[DomainRouter|r-124-VM],
>>>
>>>P0=Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->P
>>>ool(Id))]
>>> : Dest[Zone(2)-Pod(1)-Cluster(1)-Host(16)-Storage()]
>>> 2012-08-01 16:44:59,305 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (HA-Worker-2:work-110) VM state transitted from :Starting to Starting
>>> with event: OperationRetryvm's original host id: 13 new host id: 16
>>>host
>>> id before state transition: null
>>> 2012-08-01 16:44:59,310 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (HA-Worker-2:work-110) Hosts's actual total CPU: 57456 and CPU after
>>> applying overprovisioning: 57456
>>> 2012-08-01 16:44:59,310 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (HA-Worker-2:work-110) We are allocating VM, increasing the used
>>> capacity of this host:16
>>> 2012-08-01 16:44:59,310 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (HA-Worker-2:work-110) Current Used CPU: 25000 , Free CPU:32456
>>> ,Requested CPU: 500
>>> 2012-08-01 16:44:59,310 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (HA-Worker-2:work-110) Current Used RAM: 26843545600 , Free
>>> RAM:73648631360 ,Requested RAM: 134217728
>>> 2012-08-01 16:44:59,310 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (HA-Worker-2:work-110) CPU STATS after allocation: for host: 16, old
>>> used: 25000, old reserved: 0, actual total: 57456, total with
>>> overprovisioning: 57456; new used:25500, reserved:0; requested
>>> cpu:500,alloc_from_last:false
>>> 2012-08-01 16:44:59,310 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (HA-Worker-2:work-110) RAM STATS after allocation: for host: 16, old
>>> used: 26843545600, old reserved: 0, total: 100492176960; new used:
>>> 26977763328, reserved: 0; requested mem: 134217728,alloc_from_last:false
>>> 2012-08-01 16:44:59,313 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (HA-Worker-2:work-110) VM is being started in podId: 1
>>> 2012-08-01 16:44:59,315 DEBUG [cloud.network.NetworkManagerImpl]
>>> (HA-Worker-2:work-110) Network id=204 is already implemented
>>> 2012-08-01 16:44:59,331 DEBUG [cloud.network.NetworkManagerImpl]
>>> (HA-Worker-2:work-110) Asking JuniperSRX to prepare for
>>> Nic[454-124-null-172.16.15.190]
>>> 2012-08-01 16:44:59,331 DEBUG [cloud.network.NetworkManagerImpl]
>>> (HA-Worker-2:work-110) Asking Netscaler to prepare for
>>> Nic[454-124-null-172.16.15.190]
>>> 2012-08-01 16:44:59,331 DEBUG [cloud.network.NetworkManagerImpl]
>>> (HA-Worker-2:work-110) Asking F5BigIp to prepare for
>>> Nic[454-124-null-172.16.15.190]
>>> 2012-08-01 16:44:59,331 DEBUG [cloud.network.NetworkManagerImpl]
>>> (HA-Worker-2:work-110) Asking VirtualRouter to prepare for
>>> Nic[454-124-null-172.16.15.190]
>>> 2012-08-01 16:44:59,331 DEBUG [cloud.network.NetworkManagerImpl]
>>> (HA-Worker-2:work-110) Asking Ovs to prepare for
>>> Nic[454-124-null-172.16.15.190]
>>> 2012-08-01 16:44:59,331 DEBUG [cloud.network.NetworkManagerImpl]
>>> (HA-Worker-2:work-110) Asking ExternalDhcpServer to prepare for
>>> Nic[454-124-null-172.16.15.190]
>>> 2012-08-01 16:44:59,331 DEBUG [cloud.network.NetworkManagerImpl]
>>> (HA-Worker-2:work-110) Asking BareMetal to prepare for
>>> Nic[454-124-null-172.16.15.190]
>>> 2012-08-01 16:44:59,331 DEBUG [cloud.network.NetworkManagerImpl]
>>> (HA-Worker-2:work-110) Asking SecurityGroupProvider to prepare for
>>> Nic[454-124-null-172.16.15.190]
>>> 2012-08-01 16:44:59,332 DEBUG [cloud.network.NetworkManagerImpl]
>>> (HA-Worker-2:work-110) Network id=206 is already implemented
>>> 2012-08-01 16:44:59,349 DEBUG [cloud.network.NetworkManagerImpl]
>>> (HA-Worker-2:work-110) Asking JuniperSRX to prepare for
>>> Nic[456-124-10b7874f-8f09-4a00-8f51-3a9375c7e1a7-169.254.0.230]
>>> 2012-08-01 16:44:59,349 DEBUG [cloud.network.NetworkManagerImpl]
>>> (HA-Worker-2:work-110) Asking Netscaler to prepare for
>>> Nic[456-124-10b7874f-8f09-4a00-8f51-3a9375c7e1a7-169.254.0.230]
>>> 2012-08-01 16:44:59,349 DEBUG [cloud.network.NetworkManagerImpl]
>>> (HA-Worker-2:work-110) Asking F5BigIp to prepare for
>>> Nic[456-124-10b7874f-8f09-4a00-8f51-3a9375c7e1a7-169.254.0.230]
>>> 2012-08-01 16:44:59,349 DEBUG [cloud.network.NetworkManagerImpl]
>>> (HA-Worker-2:work-110) Asking VirtualRouter to prepare for
>>> Nic[456-124-10b7874f-8f09-4a00-8f51-3a9375c7e1a7-169.254.0.230]
>>> 2012-08-01 16:44:59,349 DEBUG [cloud.network.NetworkManagerImpl]
>>> (HA-Worker-2:work-110) Asking Ovs to prepare for
>>> Nic[456-124-10b7874f-8f09-4a00-8f51-3a9375c7e1a7-169.254.0.230]
>>> 2012-08-01 16:44:59,350 DEBUG [cloud.network.NetworkManagerImpl]
>>> (HA-Worker-2:work-110) Asking ExternalDhcpServer to prepare for
>>> Nic[456-124-10b7874f-8f09-4a00-8f51-3a9375c7e1a7-169.254.0.230]
>>> 2012-08-01 16:44:59,350 DEBUG [cloud.network.NetworkManagerImpl]
>>> (HA-Worker-2:work-110) Asking BareMetal to prepare for
>>> Nic[456-124-10b7874f-8f09-4a00-8f51-3a9375c7e1a7-169.254.0.230]
>>> 2012-08-01 16:44:59,350 DEBUG [cloud.network.NetworkManagerImpl]
>>> (HA-Worker-2:work-110) Asking SecurityGroupProvider to prepare for
>>> Nic[456-124-10b7874f-8f09-4a00-8f51-3a9375c7e1a7-169.254.0.230]
>>> 2012-08-01 16:44:59,351 DEBUG [cloud.storage.StorageManagerImpl]
>>> (HA-Worker-2:work-110) Checking if we need to prepare 1 volumes for
>>> VM[DomainRouter|r-124-VM]
>>> 2012-08-01 16:44:59,351 DEBUG [cloud.storage.StorageManagerImpl]
>>> (HA-Worker-2:work-110) No need to recreate the volume:
>>> Vol[132|vm=124|ROOT], since it already has a pool assigned: 204, adding
>>> disk to VM
>>> 2012-08-01 16:44:59,354 ERROR [cloud.vm.VirtualMachineManagerImpl]
>>> (HA-Worker-2:work-110) Failed to start instance VM[DomainRouter|r-124-VM]
>>> 2012-08-01 16:44:59,357 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (HA-Worker-2:work-110) Cleaning up resources for the vm
>>> VM[DomainRouter|r-124-VM] in Starting state
>>> 2012-08-01 16:44:59,358 DEBUG [agent.transport.Request]
>>> (HA-Worker-2:work-110) Seq 16-479526999: Sending  { Cmd , MgmtId:
>>> 130577622632, via: 16, Ver: v1, Flags: 100111,
>>> [{"StopCommand":{"isProxy":false,"vmName":"r-124-VM","wait":0}}] }
>>> 2012-08-01 16:44:59,358 DEBUG [agent.transport.Request]
>>> (HA-Worker-2:work-110) Seq 16-479526999: Executing:  { Cmd , MgmtId:
>>> 130577622632, via: 16, Ver: v1, Flags: 100111,
>>> [{"StopCommand":{"isProxy":false,"vmName":"r-124-VM","wait":0}}] }
>>> 2012-08-01 16:44:59,358 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-122:null) Seq 16-479526999: Executing request
>>> 2012-08-01 16:44:59,522 INFO  [xen.resource.CitrixResourceBase]
>>> (DirectAgent-122:null) VM does not exist on
>>> XenServer1d86d063-10b3-41c2-a681-3ce3ce78760a
>>> 2012-08-01 16:44:59,522 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-122:null) Seq 16-479526999: Response Received:
>>> 2012-08-01 16:44:59,522 DEBUG [agent.transport.Request]
>>> (DirectAgent-122:null) Seq 16-479526999: Processing:  { Ans: , MgmtId:
>>> 130577622632, via: 16, Ver: v1, Flags: 110,
>>> [{"StopAnswer":{"vncPort":0,"bytesSent":0,"bytesReceived":0,"result":true,"details":"VM does not exist","wait":0}}] }
>>> 2012-08-01 16:44:59,522 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (DirectAgent-122:null) Cleanup succeeded. Details VM does not exist
>>> 2012-08-01 16:44:59,523 DEBUG [agent.manager.AgentAttache]
>>> (DirectAgent-122:null) Seq 16-479526999: No more commands found
>>> 2012-08-01 16:44:59,523 DEBUG [agent.transport.Request]
>>> (HA-Worker-2:work-110) Seq 16-479526999: Received:  { Ans: , MgmtId:
>>> 130577622632, via: 16, Ver: v1, Flags: 110, { StopAnswer } }
>>> 2012-08-01 16:44:59,523 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (HA-Worker-2:work-110) Cleanup succeeded. Details VM does not exist
>>> 2012-08-01 16:44:59,526 DEBUG
>>> [network.router.VirtualNetworkApplianceManagerImpl]
>>> (HA-Worker-2:work-110) Successfully updated user statistics as a part of domR VM[DomainRouter|r-124-VM] reboot/stop
>>> 2012-08-01 16:44:59,544 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (HA-Worker-2:work-110) Successfully cleanued up resources for the vm
>>> VM[DomainRouter|r-124-VM] in Starting state
>>> 2012-08-01 16:44:59,551 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (HA-Worker-2:work-110) VM state transitted from :Starting to Stopped
>>> with event: OperationFailedvm's original host id: 13 new host id: null
>>> host id before state transition: 16
>>> 2012-08-01 16:44:59,554 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (HA-Worker-2:work-110) Hosts's actual total CPU: 57456 and CPU after
>>> applying overprovisioning: 57456
>>> 2012-08-01 16:44:59,554 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (HA-Worker-2:work-110) release cpu from host: 16, old used: 25500,reserved: 0, actual total: 57456, total with overprovisioning: 57456; new used: 25000,reserved:0; movedfromreserved: false,moveToReserveredfalse
>>> 2012-08-01 16:44:59,554 DEBUG [cloud.capacity.CapacityManagerImpl]
>>> (HA-Worker-2:work-110) release mem from host: 16, old used: 26977763328,reserved: 0, total: 100492176960; new used: 26843545600,reserved:0; movedfromreserved: false,moveToReserveredfalse
>>> 2012-08-01 16:44:59,557 WARN  [cloud.ha.HighAvailabilityManagerImpl]
>>> (HA-Worker-2:work-110) Unable to restart VM[DomainRouter|r-124-VM] due
>>> to Resource [Host:16] is unreachable: Host 16: Unable to start instance
>>> due to null
>>> 2012-08-01 16:44:59,561 INFO  [cloud.ha.HighAvailabilityManagerImpl]
>>> (HA-Worker-2:work-110) Rescheduling HAWork[110-HA-124-Stopped-Scheduled]
>>> to try again at Wed Aug 01 16:46:00 CDT 2012
>>> [root@control1 /]#
>>>
>>> --
>>> Regards,
>>>
>>> Nik
>>>
>>> Nik Martin
>>> VP Business Development
>>> Nfina Technologies, Inc.
>>> +1.251.243.0043 x1003
>>> Relentless Reliability
>>>
>>
>>
>
>


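Incidentally, the CapacityManagerImpl numbers in the log are internally consistent, so capacity is not the problem here. A minimal sketch (hypothetical, not CloudStack's actual code) of the simple used/free accounting the log reports:

```python
def allocate(total, used, requested):
    """Return (new_used, free_before) for one allocation.

    Mirrors the arithmetic visible in the CapacityManagerImpl log lines:
    free = total - used, and allocating just adds the request to "used".
    (Overprovisioning factor is 1 here, as in the log: total == total
    with overprovisioning.)
    """
    free = total - used
    if requested > free:
        raise RuntimeError("insufficient capacity")
    return used + requested, free

# CPU figures from the log: total 57456, used 25000, requested 500
new_cpu_used, cpu_free = allocate(57456, 25000, 500)
print(new_cpu_used, cpu_free)   # 25500 32456 -- matches "new used:25500" / "Free CPU:32456"

# RAM figures: total 100492176960, used 26843545600, requested 134217728
new_ram_used, ram_free = allocate(100492176960, 26843545600, 134217728)
print(new_ram_used, ram_free)   # matches "new used: 26977763328" / "Free RAM:73648631360"
```

Since the host has ample headroom, the "Unable to start instance due to null" failure points at the hypervisor-side start (the unsupported hypervisor-on-hypervisor ISO) rather than at placement or capacity.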