cloudstack-users mailing list archives

From cs user <acldstk...@gmail.com>
Subject Re: Cloudstack 4.4.2 - unable to start new systemvm's
Date Thu, 26 Mar 2015 12:32:45 GMT
Hello All,

As I've just posted elsewhere, but to record the answer in this thread as
well, the fix for us was:

update networks set broadcast_uri="vlan://untagged" where mode='Dhcp';

These were set to null for each of our zones after the upgrade.
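For anyone hitting the same problem, it may be worth checking which rows were
actually affected before running the update. This is only a sketch against the
standard CloudStack `cloud` database, with the table and column names taken
from the fix above; scoping the update to the NULL rows is my own assumption,
not something tested here:

```sql
-- Inspect which Dhcp-mode networks lost their broadcast_uri in the upgrade
SELECT id, name, mode, broadcast_uri
FROM networks
WHERE mode = 'Dhcp' AND broadcast_uri IS NULL;

-- If those are the networks that used to be untagged, restore the value
-- (same fix as above, but limited to the rows that are currently NULL)
UPDATE networks
SET broadcast_uri = 'vlan://untagged'
WHERE mode = 'Dhcp' AND broadcast_uri IS NULL;
```

A management-server restart may be needed afterwards for the change to be
picked up.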

Cheers!


On Thu, Mar 26, 2015 at 8:50 AM, cs user <acldstkusr@gmail.com> wrote:

> Hello,
>
> Just for some more detail, this section appears to have some useful info
> in there:
>
> 2015-03-25 22:29:45,723 DEBUG [c.c.a.m.a.i.FirstFitAllocator]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739
>>> FirstFitRoutingAllocator) Host Allocator returning 1 suitable hosts
>>
>> 2015-03-25 22:29:45,726 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739)
>>> Checking suitable pools for volume (Id, Type): (65231,ROOT)
>>
>> 2015-03-25 22:29:45,726 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739)
>>> Volume has pool already allocated, checking if pool can be reused, poolId:
>>> 217
>>
>> 2015-03-25 22:29:45,729 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739)
>>> Planner need not allocate a pool for this volume since its READY
>>
>> 2015-03-25 22:29:45,729 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739)
>>> Trying to find a potenial host and associated storage pools from the
>>> suitable host/pool lists for this VM
>>
>> 2015-03-25 22:29:45,729 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739)
>>> Checking if host: 137 can access any suitable storage pool for volume: ROOT
>>
>> 2015-03-25 22:29:45,732 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739)
>>> Host: 137 can access pool: 217
>>
>> 2015-03-25 22:29:45,735 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739)
>>> Found a potential host id: 137 name: xen008.live.dc1.internal.net and
>>> associated storage pools for this VM
>>
>> 2015-03-25 22:29:45,739 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739)
>>> Returning Deployment Destination:
>>> Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))]
>>> : Dest[Zone(20)-Pod(26)-Cluster(11)-Host(137)-Storage()]
>>
>> 2015-03-25 22:29:45,739 DEBUG [c.c.v.VirtualMachineManagerImpl]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739)
>>> Deployment found  - P0=VM[ConsoleProxy|v-10830-VM],
>>> P0=Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))]
>>> : Dest[Zone(20)-Pod(26)-Cluster(11)-Host(137)-Storage()]
>>
>> 2015-03-25 22:29:45,811 DEBUG [c.c.c.CapacityManagerImpl]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739) VM
>>> state transitted from :Starting to Starting with event: OperationRetryvm's
>>> original host id: null new host id: 137 host id before state transition: 143
>>
>> 2015-03-25 22:29:45,827 DEBUG [c.c.c.CapacityManagerImpl]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739)
>>> Hosts's actual total CPU: 69600 and CPU after applying overprovisioning:
>>> 278400
>>
>> 2015-03-25 22:29:45,827 DEBUG [c.c.c.CapacityManagerImpl]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739)
>>> Hosts's actual total RAM: 98942627584 and RAM after applying
>>> overprovisioning: 98942623744
>>
>> 2015-03-25 22:29:45,827 DEBUG [c.c.c.CapacityManagerImpl]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739)
>>> release cpu from host: 143, old used: 3000,reserved: 0, actual total:
>>> 69600, total with overprovisioning: 278400; new used: 2500,reserved:0;
>>> movedfromreserved: false,moveToReserveredfalse
>>
>> 2015-03-25 22:29:45,827 DEBUG [c.c.c.CapacityManagerImpl]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739)
>>> release mem from host: 143, old used: 3758096384,reserved: 0, total:
>>> 98942623744; new used: 2684354560,reserved:0; movedfromreserved:
>>> false,moveToReserveredfalse
>>
>> 2015-03-25 22:29:45,853 DEBUG [c.c.c.CapacityManagerImpl]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739)
>>> Hosts's actual total CPU: 69600 and CPU after applying overprovisioning:
>>> 278400
>>
>> 2015-03-25 22:29:45,853 DEBUG [c.c.c.CapacityManagerImpl]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739) We
>>> are allocating VM, increasing the used capacity of this host:137
>>
>> 2015-03-25 22:29:45,853 DEBUG [c.c.c.CapacityManagerImpl]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739)
>>> Current Used CPU: 16500 , Free CPU:261900 ,Requested CPU: 500
>>
>> 2015-03-25 22:29:45,853 DEBUG [c.c.c.CapacityManagerImpl]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739)
>>> Current Used RAM: 17314086912 , Free RAM:131551952896 ,Requested RAM:
>>> 1073741824
>>
>> 2015-03-25 22:29:45,853 DEBUG [c.c.c.CapacityManagerImpl]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739) CPU
>>> STATS after allocation: for host: 137, old used: 16500, old reserved: 0,
>>> actual total: 69600, total with overprovisioning: 278400; new used:17000,
>>> reserved:0; requested cpu:500,alloc_from_last:false
>>
>> 2015-03-25 22:29:45,853 DEBUG [c.c.c.CapacityManagerImpl]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739) RAM
>>> STATS after allocation: for host: 137, old used: 17314086912, old reserved:
>>> 0, total: 148866039808; new used: 18387828736, reserved: 0; requested mem:
>>> 1073741824,alloc_from_last:false
>>
>> 2015-03-25 22:29:45,860 DEBUG [c.c.v.VirtualMachineManagerImpl]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739) VM
>>> is being created in podId: 26
>>
>> 2015-03-25 22:29:45,884 DEBUG [o.a.c.e.o.NetworkOrchestrator]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739)
>>> Network id=214 is already implemented
>>
>> 2015-03-25 22:29:45,948 DEBUG [o.a.c.e.o.NetworkOrchestrator]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739)
>>> Asking VirtualRouter to prepare for
>>> Nic[68711-10830-d53d4a9c-6aa4-445e-840e-beeb2d734301-192.168.64.116]
>>
>> 2015-03-25 22:29:45,955 DEBUG [o.a.c.e.o.NetworkOrchestrator]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739)
>>> Asking SecurityGroupProvider to prepare for
>>> Nic[68711-10830-d53d4a9c-6aa4-445e-840e-beeb2d734301-192.168.64.116]
>>
>> 2015-03-25 22:29:45,970 DEBUG [o.a.c.e.o.NetworkOrchestrator]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739)
>>> Network id=212 is already implemented
>>
>> 2015-03-25 22:29:46,025 DEBUG [o.a.c.e.o.NetworkOrchestrator]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739)
>>> Network id=211 is already implemented
>>
>> 2015-03-25 22:29:46,068 DEBUG [c.c.n.g.PodBasedNetworkGuru]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739)
>>> Allocated a nic
>>> NicProfile[68727-10830-d53d4a9c-6aa4-445e-840e-beeb2d734301-192.168.97.214-null
>>> for VM[ConsoleProxy|v-10830-VM]
>>
>> 2015-03-25 22:29:46,082 DEBUG [o.a.c.e.o.VolumeOrchestrator]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739)
>>> Checking if we need to prepare 1 volumes for VM[ConsoleProxy|v-10830-VM]
>>
>> 2015-03-25 22:29:46,085 DEBUG [o.a.c.e.o.VolumeOrchestrator]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739) No
>>> need to recreate the volume: Vol[65231|vm=10830|ROOT], since it already has
>>> a pool assigned: 217, adding disk to VM
>>
>> 2015-03-25 22:29:46,142 DEBUG [c.c.c.ConsoleProxyManagerImpl]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739)
>>> Boot Args for VM[ConsoleProxy|v-10830-VM]:  template=domP type=consoleproxy
>>> host=cloudstack-internal.internal.net port=8250 name=v-10830-VM
>>> premium=true zone=20 pod=26 guid=Proxy.10830 proxy_vm=10830
>>> disable_rp_filter=true vmpassword=J3aGeYd5 eth2ip=192.168.64.116
>>> eth2mask=255.255.254.0 gateway=192.168.65.254 eth0ip=169.254.3.145
>>> eth0mask=255.255.0.0 eth1ip=192.168.97.214 eth1mask=255.255.255.0 mgmtcidr=
>>> 192.168.225.0/24 localgw=192.168.97.254 internaldns1=192.168.0.224
>>> internaldns2=192.168.0.224 dns1=192.168.0.224 dns2=192.168.0.224
>>
>> 2015-03-25 22:29:46,226 DEBUG [c.c.a.t.Request]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739) Seq
>>> 137-6516145710851687335: Sending  { Cmd , MgmtId: 345049290103, via: 137(
>>> xen008.live.dc1.internal.net), Ver: v1, Flags: 100011,
>>> [{"com.cloud.agent.api.StartCommand":{"vm":{"id":10830,"name":"v-10830-VM","bootloader":"PyGrub","type":"ConsoleProxy","cpus":1,"minSpeed":125,"maxSpeed":500,"minRam":1073741824,"maxRam":1073741824,"arch":"x86_64","os":"Debian
>>> GNU/Linux 7(64-bit)","platformEmulator":"Debian Squeeze 6.0
>>> (64-bit)","bootArgs":" template=domP type=consoleproxy host=
>>> cloudstack-internal.internal.net port=8250 name=v-10830-VM premium=true
>>> zone=20 pod=26 guid=Proxy.10830 proxy_vm=10830 disable_rp_filter=true
>>> vm","rebootOnCrash":false,"enableHA":false,"limitCpuUse":false,"enableDynamicallyScaleVm":false,"vncPassword":"89c5c58244f552e6","params":{"memoryOvercommitRatio":"1","cpuOvercommitRatio":"4"},"uuid":"596eacf8-2cde-4f15-97d5-69e94e5f549b","disks":[{"data":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"9a0c4813-5aa0-40cd-9107-950d52dad0eb","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"a2a715ac-9e9b-35a3-b6d5-b542d24f663f","id":217,"poolType":"NetworkFilesystem","host":"
>>> cs-pri-nonprod-01.data.dc.internal.net
>>> ","path":"/CS_dc_NONPROD_PRI_01","port":2049,"url":"NetworkFilesystem://
>>> cs-pri-nonprod-01.data.dc.internal.net/CS_dc_NONPROD_PRI_01/?ROLE=Primary&STOREUUID=a2a715ac-9e9b-35a3-b6d5-b542d24f663f
>>> "}},"name":"ROOT-10830","size":2621440000,"path":"391d7844-1973-488c-a41d-d8e6a5928b98","volumeId":65231,"vmName":"v-10830-VM","accountId":1,"format":"VHD","id":65231,"deviceId":0,"cacheMode":"NONE","hypervisorType":"XenServer"}},"diskSeq":0,"path":"391d7844-1973-488c-a41d-d8e6a5928b98","type":"ROOT","_details":{"managed":"false","storagePort":"2049","storageHost":"
>>> cs-pri-nonprod-01.data.dc.internal.net","volumeSize":"2621440000"}}],"nics":[{"deviceId":2,"networkRateMbps":-1,"defaultNic":true,"uuid":"74a7a202-1b98-4c33-a1f9-99674545cdbb","ip":"192.168.64.116","netmask":"255.255.254.0","gateway":"192.168.65.254","mac":"06:44:d2:00:00:83","dns1":"192.168.0.224","dns2":"192.168.0.224","broadcastType":"Vlan","type":"Guest","isSecurityGroupEnabled":true,"name":"CS-Guest-Public"},{"deviceId":0,"networkRateMbps":-1,"defaultNic":false,"uuid":"da48f94f-f38d-4a44-9945-936139a99a88","ip":"169.254.3.145","netmask":"255.255.0.0","gateway":"169.254.0.1","mac":"0e:00:a9:fe:03:91","broadcastType":"LinkLocal","type":"Control","isSecurityGroupEnabled":false},{"deviceId":1,"networkRateMbps":-1,"defaultNic":false,"uuid":"46756e8d-bc6b-4463-967f-bfc28976a569","ip":"192.168.97.214","netmask":"255.255.255.0","gateway":"192.168.97.254","mac":"06:71:10:00:00:0e","broadcastType":"Native","type":"Management","isSecurityGroupEnabled":false,"name":"CS-Mgmt"}]},"hostIp":"192.168.97.8","executeInSequence":false,"wait":0}},{"com.cloud.agent.api.check.CheckSshCommand":{"ip":"169.254.3.145","port":3922,"interval":6,"retries":100,"name":"v-10830-VM","wait":0}}]
>>> }
>>
>> 2015-03-25 22:29:46,227 DEBUG [c.c.a.t.Request]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739) Seq
>>> 137-6516145710851687335: Executing:  { Cmd , MgmtId: 345049290103, via: 137(
>>> xen008.live.dc1.internal.net), Ver: v1, Flags: 100011,
>>> [{"com.cloud.agent.api.StartCommand":{"vm":{"id":10830,"name":"v-10830-VM","bootloader":"PyGrub","type":"ConsoleProxy","cpus":1,"minSpeed":125,"maxSpeed":500,"minRam":1073741824,"maxRam":1073741824,"arch":"x86_64","os":"Debian
>>> GNU/Linux 7(64-bit)","platformEmulator":"Debian Squeeze 6.0
>>> (64-bit)","bootArgs":" template=domP type=consoleproxy host=
>>> cloudstack-internal.internal.net port=8250 name=v-10830-VM premium=true
>>> zone=20 pod=26 guid=Proxy.10830 proxy_vm=10830 disable_rp_filter=true
>>> vm","rebootOnCrash":false,"enableHA":false,"limitCpuUse":false,"enableDynamicallyScaleVm":false,"vncPassword":"89c5c58244f552e6","params":{"memoryOvercommitRatio":"1","cpuOvercommitRatio":"4"},"uuid":"596eacf8-2cde-4f15-97d5-69e94e5f549b","disks":[{"data":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"9a0c4813-5aa0-40cd-9107-950d52dad0eb","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"a2a715ac-9e9b-35a3-b6d5-b542d24f663f","id":217,"poolType":"NetworkFilesystem","host":"
>>> cs-pri-nonprod-01.data.dc.internal.net
>>> ","path":"/CS_dc_NONPROD_PRI_01","port":2049,"url":"NetworkFilesystem://
>>> cs-pri-nonprod-01.data.dc.internal.net/CS_dc_NONPROD_PRI_01/?ROLE=Primary&STOREUUID=a2a715ac-9e9b-35a3-b6d5-b542d24f663f
>>> "}},"name":"ROOT-10830","size":2621440000,"path":"391d7844-1973-488c-a41d-d8e6a5928b98","volumeId":65231,"vmName":"v-10830-VM","accountId":1,"format":"VHD","id":65231,"deviceId":0,"cacheMode":"NONE","hypervisorType":"XenServer"}},"diskSeq":0,"path":"391d7844-1973-488c-a41d-d8e6a5928b98","type":"ROOT","_details":{"managed":"false","storagePort":"2049","storageHost":"
>>> cs-pri-nonprod-01.data.dc.internal.net","volumeSize":"2621440000"}}],"nics":[{"deviceId":2,"networkRateMbps":-1,"defaultNic":true,"uuid":"74a7a202-1b98-4c33-a1f9-99674545cdbb","ip":"192.168.64.116","netmask":"255.255.254.0","gateway":"192.168.65.254","mac":"06:44:d2:00:00:83","dns1":"192.168.0.224","dns2":"192.168.0.224","broadcastType":"Vlan","type":"Guest","isSecurityGroupEnabled":true,"name":"CS-Guest-Public"},{"deviceId":0,"networkRateMbps":-1,"defaultNic":false,"uuid":"da48f94f-f38d-4a44-9945-936139a99a88","ip":"169.254.3.145","netmask":"255.255.0.0","gateway":"169.254.0.1","mac":"0e:00:a9:fe:03:91","broadcastType":"LinkLocal","type":"Control","isSecurityGroupEnabled":false},{"deviceId":1,"networkRateMbps":-1,"defaultNic":false,"uuid":"46756e8d-bc6b-4463-967f-bfc28976a569","ip":"192.168.97.214","netmask":"255.255.255.0","gateway":"192.168.97.254","mac":"06:71:10:00:00:0e","broadcastType":"Native","type":"Management","isSecurityGroupEnabled":false,"name":"CS-Mgmt"}]},"hostIp":"192.168.97.8","executeInSequence":false,"wait":0}},{"com.cloud.agent.api.check.CheckSshCommand":{"ip":"169.254.3.145","port":3922,"interval":6,"retries":100,"name":"v-10830-VM","wait":0}}]
>>> }
>>
>> 2015-03-25 22:29:46,530 DEBUG [c.c.a.t.Request]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739) Seq
>>> 137-6516145710851687335: Received:  { Ans: , MgmtId: 345049290103, via:
>>> 137, Ver: v1, Flags: 10, { StartAnswer } }
>>
>> 2015-03-25 22:29:46,538 INFO  [c.c.v.VirtualMachineManagerImpl]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739)
>>> Unable to start VM on Host[-137-Routing] due to Unable to start v-10830-VM
>>> due to
>>
>> 2015-03-25 22:29:46,547 DEBUG [c.c.v.VirtualMachineManagerImpl]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739)
>>> Cleaning up resources for the vm VM[ConsoleProxy|v-10830-VM] in Starting
>>> state
>>
>> 2015-03-25 22:29:46,557 DEBUG [c.c.a.t.Request]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739) Seq
>>> 137-6516145710851687336: Sending  { Cmd , MgmtId: 345049290103, via: 137(
>>> xen008.live.dc1.internal.net), Ver: v1, Flags: 100011,
>>> [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":false,"checkBeforeCleanup":false,"vmName":"v-10830-VM","wait":0}}]
>>> }
>>
>> 2015-03-25 22:29:46,557 DEBUG [c.c.a.t.Request]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739) Seq
>>> 137-6516145710851687336: Executing:  { Cmd , MgmtId: 345049290103, via: 137(
>>> xen008.live.dc1.internal.net), Ver: v1, Flags: 100011,
>>> [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":false,"checkBeforeCleanup":false,"vmName":"v-10830-VM","wait":0}}]
>>> }
>>
>> 2015-03-25 22:29:46,585 DEBUG [c.c.a.t.Request]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739) Seq
>>> 137-6516145710851687336: Received:  { Ans: , MgmtId: 345049290103, via:
>>> 137, Ver: v1, Flags: 10, { StopAnswer } }
>>
>> 2015-03-25 22:29:46,619 DEBUG [o.a.c.e.o.NetworkOrchestrator]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739)
>>> Asking VirtualRouter to release
>>> NicProfile[68711-10830-d53d4a9c-6aa4-445e-840e-beeb2d734301-192.168.64.116-null
>>
>> 2015-03-25 22:29:46,653 DEBUG [o.a.c.e.o.NetworkOrchestrator]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739)
>>> Asking SecurityGroupProvider to release
>>> NicProfile[68711-10830-d53d4a9c-6aa4-445e-840e-beeb2d734301-192.168.64.116-null
>>
>> 2015-03-25 22:29:46,661 DEBUG [c.c.n.g.ControlNetworkGuru]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739)
>>> Released nic: NicProfile[68719-10830-null-null-null
>>
>> 2015-03-25 22:29:46,678 DEBUG [c.c.d.d.DataCenterIpAddressDaoImpl]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739)
>>> Releasing ip address for
>>> reservationId=d53d4a9c-6aa4-445e-840e-beeb2d734301, instance=68727
>>
>> 2015-03-25 22:29:46,679 DEBUG [c.c.n.g.PodBasedNetworkGuru]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739)
>>> Released nic: NicProfile[68727-10830-null-null-null
>>
>> 2015-03-25 22:29:46,688 DEBUG [c.c.v.VirtualMachineManagerImpl]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739)
>>> Successfully released network resources for the vm
>>> VM[ConsoleProxy|v-10830-VM]
>>
>> 2015-03-25 22:29:46,688 DEBUG [c.c.v.VirtualMachineManagerImpl]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739)
>>> Successfully cleanued up resources for the vm VM[ConsoleProxy|v-10830-VM]
>>> in Starting state
>>
>> 2015-03-25 22:29:46,695 DEBUG [c.c.v.VirtualMachineManagerImpl]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739)
>>> Root volume is ready, need to place VM in volume's cluster
>>
>> 2015-03-25 22:29:46,695 DEBUG [c.c.v.VirtualMachineManagerImpl]
>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739)
>>> Vol[65231|vm=10830|ROOT] is READY, changing deployment plan to use this
>>> pool's dcId: 20 , podId: 26 , and clusterId: 11
>>
>>
>>
>
> Notice this line:
>
> 2015-03-25 22:29:46,538 INFO  [c.c.v.VirtualMachineManagerImpl]
>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739)
>> Unable to start VM on Host[-137-Routing] due to Unable to start v-10830-VM
>> due to
>>
>
> It seems that the VM does attempt to start on the host, but it doesn't
> come up. From within XenCenter, the VM appears in a powered-down state for
> a brief moment and then dies.
>
> Thanks!
>
>
> On Thu, Mar 26, 2015 at 8:36 AM, cs user <acldstkusr@gmail.com> wrote:
>
>> Hi Ilya,
>>
>> Many thanks for getting back to me, found the following:
>>
>> 2015-03-25 22:29:46,754 DEBUG [c.c.a.m.a.i.FirstFitAllocator]
>>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739
>>>> FirstFitRoutingAllocator) Found 3 hosts for allocation after
>>>> prioritization: [Host[-154-Routing], Host[-137-Routing], Host[-143-Routing]]
>>>
>>> 2015-03-25 22:29:46,754 DEBUG [c.c.a.m.a.i.FirstFitAllocator]
>>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739
>>>> FirstFitRoutingAllocator) Looking for speed=500Mhz, Ram=1024
>>>
>>> 2015-03-25 22:29:46,754 DEBUG [c.c.a.m.a.i.FirstFitAllocator]
>>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739
>>>> FirstFitRoutingAllocator) Host name: xen006.live.dc1.internal.net,
>>>> hostId: 154 is in avoid set, skipping this and trying other available hosts
>>>
>>> 2015-03-25 22:29:46,754 DEBUG [c.c.a.m.a.i.FirstFitAllocator]
>>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739
>>>> FirstFitRoutingAllocator) Host name: xen008.live.dc1.internal.net,
>>>> hostId: 137 is in avoid set, skipping this and trying other available hosts
>>>
>>> 2015-03-25 22:29:46,754 DEBUG [c.c.a.m.a.i.FirstFitAllocator]
>>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739
>>>> FirstFitRoutingAllocator) Host name: xen007.live.dc1.internal.net,
>>>> hostId: 143 is in avoid set, skipping this and trying other available hosts
>>>
>>> 2015-03-25 22:29:46,754 DEBUG [c.c.a.m.a.i.FirstFitAllocator]
>>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739
>>>> FirstFitRoutingAllocator) Host Allocator returning 0 suitable hosts
>>>
>>> 2015-03-25 22:29:46,754 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
>>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739) No
>>>> suitable hosts found
>>>
>>> 2015-03-25 22:29:46,754 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
>>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739) No
>>>> suitable hosts found under this Cluster: 11
>>>
>>> 2015-03-25 22:29:46,758 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
>>>> (Work-Job-Executor-105:ctx-d52c9025 job-184934/job-186529 ctx-9bb73739)
>>>> Could not find suitable Deployment Destination for this VM under any
>>>> clusters, returning.
>>>
>>>
>>>
>> Any reason why these would be in the avoid set?
>>
>> Thanks!
>>
>>
>> On Wed, Mar 25, 2015 at 11:11 PM, ilya <ilya.mailing.lists@gmail.com>
>> wrote:
>>
>>> I see insufficient capacity..
>>>
>>> post content of
>>>
>>> grep -E 'job-184934|job-185963' management-server.log
>>>
>>>
>>>
>>>
>>>
>>> On 3/25/15 2:52 PM, cs user wrote:
>>>
>>>> Hello,
>>>>
>>>> We have upgraded from 4.3 to 4.4.2 and can no longer start new system
>>>> VMs (or normal instances in zones which have both a secondary storage
>>>> VM and a router running). Existing system VMs appear to have been
>>>> upgraded.
>>>>
>>>> We are using Xenserver version 6.1.
>>>>
>>>> They appear for a split second in XenCenter but then immediately die.
>>>>
>>>> There isn't much in the logs to go on; I will paste it in below.
>>>>
>>>> One thing I have noticed is that a bunch of files got created in this
>>>> directory on our xenservers today during the upgrade.
>>>>
>>>> /opt/cloud/bin/  (All new files have been placed here)
>>>>
>>>> Whereas we already have a bunch of files in this location with similar
>>>> names.
>>>>
>>>> /opt/xensource/bin/ (Old files with dates from 2013)
>>>>
>>>>
>>>> For example the vhd-util file is present in both, but an older version
>>>> of
>>>> it exists in /opt/xensource/bin/.
>>>>
>>>> Does CloudStack 4.4.2 support XenServer 6.1?
>>>>
>>>> Have the above files been moved to the correct location for my version
>>>> of XenServer? Should I try to move them into /opt/xensource/bin/?
>>>>
>>>> I've tried this with the vhd-util file, but it didn't seem to make a
>>>> difference. Perhaps some of the scripts need updating?
>>>>
>>>>
>>>> As promised, here are the logs we see when an instance tries to start.
>>>> All hosts are up and running and their state is green.
>>>>
>>>> Any help would be appreciated, thanks!
>>>>
>>>> ERROR [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-8:ctx-1e15b764
>>>>
>>>>> job-184934/job-185963) Unable to complete AsyncJobVO {id:185963,
>>>>>> userId: 1,
>>>>>> accountId: 1, instanceType: null, instanceId: null, cmd:
>>>>>> com.cloud.vm.VmWorkStart, cmdInfo:
>>>>>> rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oA
>>>>>> BGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50
>>>>>> UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5n
>>>>>> L0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYX
>>>>>> ZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-
>>>>>> AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-
>>>>>> AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-
>>>>>> AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkS
>>>>>> gAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-
>>>>>> AAN4cAAAAAAAAAABAAAAAAAAAAEAAAAAAAAqTnQAGVZpcnR1YWxNYWNoaW5l
>>>>>> TWFuYWdlckltcGwAAAAAAAAAAHBwcHBwcHBwcHA,
>>>>>> cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0,
>>>>>> result: null, initMsid: 345049290103, completeMsid: null, lastUpdated:
>>>>>> null, lastPolled: null, created: Wed Mar 25 21:41:13 GMT 2015}, job
>>>>>> origin:184934
>>>>>>
>>>>> com.cloud.exception.InsufficientServerCapacityException: Unable to
>>>>> create
>>>>>
>>>>>> a deployment for VM[ConsoleProxy|v-10830-VM]Scope=interface
>>>>>> com.cloud.dc.DataCenter; id=20
>>>>>>
>>>>>          at
>>>>>
>>>>>> com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(
>>>>>> VirtualMachineManagerImpl.java:947)
>>>>>>
>>>>>          at
>>>>>
>>>>>> com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(
>>>>>> VirtualMachineManagerImpl.java:5195)
>>>>>>
>>>>>          at sun.reflect.GeneratedMethodAccessor281.invoke(Unknown
>>>>> Source)
>>>>>
>>>>>          at
>>>>>
>>>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(
>>>>>> DelegatingMethodAccessorImpl.java:43)
>>>>>>
>>>>>          at java.lang.reflect.Method.invoke(Method.java:601)
>>>>>
>>>>>          at
>>>>>
>>>>>> com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(
>>>>>> VmWorkJobHandlerProxy.java:107)
>>>>>>
>>>>>          at
>>>>>
>>>>>> com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(
>>>>>> VirtualMachineManagerImpl.java:5340)
>>>>>>
>>>>>          at
>>>>>
>>>>>> com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:102)
>>>>>>
>>>>>          at
>>>>>
>>>>>> org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.
>>>>>> runInContext(AsyncJobManagerImpl.java:503)
>>>>>>
>>>>>          at
>>>>>
>>>>>> org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(
>>>>>> ManagedContextRunnable.java:49)
>>>>>>
>>>>>          at
>>>>>
>>>>>> org.apache.cloudstack.managed.context.impl.
>>>>>> DefaultManagedContext$1.call(DefaultManagedContext.java:56)
>>>>>>
>>>>>          at
>>>>>
>>>>>> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.
>>>>>> callWithContext(DefaultManagedContext.java:103)
>>>>>>
>>>>>          at
>>>>>
>>>>>> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.
>>>>>> runWithContext(DefaultManagedContext.java:53)
>>>>>>
>>>>>          at
>>>>>
>>>>>> org.apache.cloudstack.managed.context.ManagedContextRunnable.run(
>>>>>> ManagedContextRunnable.java:46)
>>>>>>
>>>>>          at
>>>>>
>>>>>> org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(
>>>>>> AsyncJobManagerImpl.java:460)
>>>>>>
>>>>>          at
>>>>>
>>>>>> java.util.concurrent.Executors$RunnableAdapter.
>>>>>> call(Executors.java:471)
>>>>>>
>>>>>          at
>>>>>
>>>>>> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>>>>>>
>>>>>          at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>>>>>
>>>>>          at
>>>>>
>>>>>> java.util.concurrent.ThreadPoolExecutor.runWorker(
>>>>>> ThreadPoolExecutor.java:1110)
>>>>>>
>>>>>          at
>>>>>
>>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(
>>>>>> ThreadPoolExecutor.java:603)
>>>>>>
>>>>>          at java.lang.Thread.run(Thread.java:722)
>>>>>
>>>>> WARN  [c.c.v.SystemVmLoadScanner] (consoleproxy-1:ctx-df8209d7)
>>>>> Unexpected
>>>>>
>>>>>> exception Job failed due to exception Unable to create a deployment
>>>>>> for
>>>>>> VM[ConsoleProxy|v-10830-VM]
>>>>>>
>>>>> java.lang.RuntimeException: Job failed due to exception Unable to
>>>>> create a
>>>>>
>>>>>> deployment for VM[ConsoleProxy|v-10830-VM]
>>>>>>
>>>>>          at
>>>>>
>>>>>> com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:114)
>>>>>>
>>>>>          at
>>>>>
>>>>>> org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.
>>>>>> runInContext(AsyncJobManagerImpl.java:503)
>>>>>>
>>>>>          at
>>>>>
>>>>>> org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(
>>>>>> ManagedContextRunnable.java:49)
>>>>>>
>>>>>          at
>>>>>
>>>>>> org.apache.cloudstack.managed.context.impl.
>>>>>> DefaultManagedContext$1.call(DefaultManagedContext.java:56)
>>>>>>
>>>>>          at
>>>>>
>>>>>> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.
>>>>>> callWithContext(DefaultManagedContext.java:103)
>>>>>>
>>>>>          at
>>>>>
>>>>>> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.
>>>>>> runWithContext(DefaultManagedContext.java:53)
>>>>>>
>>>>>          at
>>>>>
>>>>>> org.apache.cloudstack.managed.context.ManagedContextRunnable.run(
>>>>>> ManagedContextRunnable.java:46)
>>>>>>
>>>>>          at
>>>>>
>>>>>> org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(
>>>>>> AsyncJobManagerImpl.java:460)
>>>>>>
>>>>>          at
>>>>>
>>>>>> java.util.concurrent.Executors$RunnableAdapter.
>>>>>> call(Executors.java:471)
>>>>>>
>>>>>          at
>>>>>
>>>>>> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>>>>>>
>>>>>          at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>>>>>
>>>>>          at
>>>>>
>>>>>> java.util.concurrent.ThreadPoolExecutor.runWorker(
>>>>>> ThreadPoolExecutor.java:1110)
>>>>>>
>>>>>          at
>>>>>
>>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(
>>>>>> ThreadPoolExecutor.java:603)
>>>>>>
>>>>>          at java.lang.Thread.run(Thread.java:722)
>>>>>
>>>>>
>>>>>
>>>>>
>>>
>>
>
