incubator-cloudstack-users mailing list archives

From Caleb Call <calebc...@me.com>
Subject Re: Cannot use NIC bond on guest/public networks
Date Thu, 06 Sep 2012 22:35:30 GMT
Good info, thanks.

On Sep 6, 2012, at 4:22 PM, Anthony Xu <Xuefei.Xu@citrix.com> wrote:

> You can add a host with a bonded interface to a cluster by executing the
> "xe pool-join" command.  You then need to move the IP configuration from
> the slave eth device to the bonding device by running xsconsole on the
> new host.
> 
> 
> Anthony
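
Anthony's two steps could be sketched with plain xe like this (a sketch only; the addresses and names below are placeholders, and xsconsole performs step 2 interactively):

```shell
# 1. Join the new host (bond already configured locally) to the pool:
xe pool-join master-address=<pool-master-ip> master-username=root \
    master-password=<password>

# 2. Move the management IP off the slave PIF onto the bond PIF,
#    then point the management interface at it:
BOND_PIF=$(xe pif-list device=bond0 host-name-label=<new-host> --minimal)
xe pif-reconfigure-ip uuid="$BOND_PIF" mode=static \
    IP=<host-ip> netmask=255.255.255.0 gateway=<gateway-ip>
xe host-management-reconfigure pif-uuid="$BOND_PIF"
```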
> 
>> -----Original Message-----
>> From: Caleb Call [mailto:calebcall@me.com]
>> Sent: Thursday, September 06, 2012 2:40 PM
>> To: cloudstack-users@incubator.apache.org
>> Subject: Re: Cannot use NIC bond on guest/public networks
>> 
>> In a convoluted way, yes.  You can't add a host with a bonded interface
>> to a cluster in XenCenter; it prompts you and says bonding is not
>> supported.  If you add the host to the cluster without bonding and then
>> bond the host after it's added, it works fine.  Beware: in our
>> experience, when you use XenCenter to bond the interface, it will go
>> through and bond the entire cluster.
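
That order of operations, sketched as xe commands run from the pool master (names and addresses are placeholders; XenCenter's bond wizard does the equivalent, pool-wide):

```shell
# Add the host to the pool while it is still unbonded:
xe pool-join master-address=<pool-master-ip> master-username=root \
    master-password=<password>

# Then create the bond. Done through XenCenter, this propagates to the
# whole pool, so expect every host's eth1+eth3 to end up bonded:
NET=$(xe network-create name-label=bond-net)
PIFS="$(xe pif-list host-name-label=<new-host> device=eth1 --minimal),\
$(xe pif-list host-name-label=<new-host> device=eth3 --minimal)"
xe bond-create network-uuid="$NET" pif-uuids="$PIFS"
```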
>> 
>> 
>> On Sep 6, 2012, at 12:12 PM, Anthony Xu <Xuefei.Xu@citrix.com> wrote:
>> 
>>> You can use a bonded management interface.
>>> You cannot use a VLAN management interface; XenServer doesn't support
>>> it at this point.
>>> 
>>> 
>>> Anthony
>>> 
>>>> -----Original Message-----
>>>> From: Nik Martin [mailto:nik.martin@nfinausa.com]
>>>> Sent: Thursday, September 06, 2012 7:25 AM
>>>> To: cloudstack-users@incubator.apache.org
>>>> Subject: Re: Cannot use NIC bond on guest/public networks
>>>> 
>>>> On 09/05/2012 10:58 PM, William Clark wrote:
>>>>> This is correct, you cannot use a bonded interface for the
>> management
>>>> interface.  Not sure why this is the case.
>>>>> 
>>>>> Bill Clark
>>>>> Sent from my iPhone
>>>>> 
>>>>> On Sep 5, 2012, at 3:22 PM, Nik Martin <nik.martin@nfinausa.com>
>>>> wrote:
>>>>> 
>>>> 
>>>> Bill,
>>>> 
>>>> Is this a limitation of CloudStack?  XenServer has TONS of documents
>>>> recommending a bonded interface for management.  It has explicit
>>>> details about how bonded management interfaces *should* work, but
>>>> they don't.  The main limitation is that they have to be
>>>> active-passive (bond mode 1).
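
(For what it's worth, the bond mode can be checked and forced from the CLI; a sketch, with the UUID a placeholder:)

```shell
xe bond-list params=uuid,mode
xe bond-set-mode uuid=<bond-uuid> mode=active-backup   # bond mode 1
```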
>>>> 
>>>> Nik
>>>> 
>>>> 
>>>> 
>>>>>> On 09/05/2012 01:30 PM, Anthony Xu wrote:
>>>>>>> Hi Nik,
>>>>>>> 
>>>>>>> You need to move the IP configuration from the eth* device to the bond* device.
>>>>>>> 
>>>>>>> Check NIC Bonding for XenServer (Optional) in
>>>> http://support.citrix.com/servlet/KbServlet/download/31038-102-
>>>> 685337/CloudPlatform3.0.3_3.0.4InstallGuide.pdf
>>>>>>> 
>>>>>>> 
>>>>>>> Anthony
>>>>>>> 
>>>>>> 
>>>>>> Thank you.  XenCenter makes it LOOK like it renames the slave
>>>>>> interfaces, when in fact they are still named "guest_pub"; XenCenter
>>>>>> shows them as "guest_pub (slave)".  I can get VMs added and started
>>>>>> properly now, but I still cannot communicate with XenServer 6.0.2
>>>>>> hosts with bonded management interfaces.  That is a
>>>>>> XenServer/Open vSwitch issue, though.
>>>>>> 
>>>>>> Regards,
>>>>>> 
>>>>>> Nik
>>>>>> 
>>>>>> 
>>>>>>> 
>>>>>>>> -----Original Message-----
>>>>>>>> From: Nik Martin [mailto:nik.martin@nfinausa.com]
>>>>>>>> Sent: Wednesday, September 05, 2012 11:11 AM
>>>>>>>> To: cloudstack-users@incubator.apache.org
>>>>>>>> Subject: Cannot use NIC bond on guest/public networks
>>>>>>>> 
>>>>>>>> I'm having an issue using bonded interfaces on guest and public
>>>>>>>> networks.  Details:
>>>>>>>> 
>>>>>>>> 3 HVs, XenServer 6.0.2 with all patches applied, Open vSwitch
>>>>>>>> backend, six NICs in each, configured as:
>>>>>>>> eth0: management
>>>>>>>> eth1+3: bonded, tagged as guest_pub
>>>>>>>> eth2: unused (management backup)
>>>>>>>> eth4+5: bonded, tagged as storage, which is iSCSI based
>>>>>>>> 
>>>>>>>> Each bond slave connects to a pair of L3 switches that are
>>>>>>>> properly stacked.
>>>>>>>> 
>>>>>>>> If I unbond eth1+3, VMs are fine; guest and public traffic flows
>>>>>>>> fine.  If I stop the cloud-management service, open XenCenter,
>>>>>>>> bond eth1+3, and then name the resulting network "guest_pub", it
>>>>>>>> carries the label for my network tag for guest and public traffic.
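
For reference, that manual re-bonding step can be sketched without XenCenter (names are illustrative; the name-label must match the CloudStack network tag):

```shell
# Create the network that will carry the guest/public tag, then bond
# eth1 and eth3 onto it:
NET=$(xe network-create name-label=guest_pub)
PIFS="$(xe pif-list device=eth1 --minimal),$(xe pif-list device=eth3 --minimal)"
xe bond-create network-uuid="$NET" pif-uuids="$PIFS"
```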
>>>>>>>> 
>>>>>>>> When I restart cloud-management, as soon as my system VMs try to
>>>>>>>> start, they start bouncing wildly in XenCenter.  The VIF is
>>>>>>>> created and looks like "VLAN-SOMEUUID-15", where 15 is the VLAN
>>>>>>>> for my public network.  The VIF never gets assigned to the bond.
>>>>>>>> If I look in the CloudStack log, I see:
>>>>>>>> (DirectAgent-97:null) PV args are -- quiet
>>>>>>>> console=hvc0%template=domP%type=consoleproxy%host=172.16.5.2%port=8250%name=v-2-VM%premium=true%zone=1%pod=1%guid=Proxy.2%proxy_vm=2%disable_rp_filter=true%eth2ip=172.16.15.118%eth2mask=255.255.255.0%gateway=172.16.15.1%eth0ip=169.254.2.227%eth0mask=255.255.0.0%eth1ip=172.16.5.34%eth1mask=255.255.255.0%mgmtcidr=172.16.5.0/24%localgw=172.16.5.1%internaldns1=8.8.8.8%internaldns2=8.8.4.4%dns1=8.8.8.8%dns2=8.8.4.4
>>>>>>>> 2012-09-05 12:54:31,762 DEBUG [xen.resource.CitrixResourceBase]
>>>>>>>> (DirectAgent-97:null) VBD 14e0d8bc-8b5b-2316-cd1d-07c04389e242
>>>> created
>>>>>>>> for Vol[2|ROOT|a77d8900-aa57-4ce9-9c73-6db35d985f48|2147483648]
>>>>>>>> 2012-09-05 12:54:31,824 DEBUG [xen.resource.CitrixResourceBase]
>>>>>>>> (DirectAgent-97:null) Creating VIF for v-2-VM on nic
>>>>>>>> [Nic:Public-172.16.15.118-vlan://15]
>>>>>>>> 2012-09-05 12:54:31,824 DEBUG [xen.resource.CitrixResourceBase]
>>>>>>>> (DirectAgent-97:null) Looking for network named guest_pub
>>>>>>>> 2012-09-05 12:54:31,829 DEBUG [xen.resource.CitrixResourceBase]
>>>>>>>> (DirectAgent-97:null) Found more than one network with the name
>>>>>>>> guest_pub
>>>>>>>> 2012-09-05 12:54:31,845 DEBUG [xen.resource.CitrixResourceBase]
>>>>>>>> (DirectAgent-97:null) Found a network called guest_pub on
>>>>>>>> host=172.16.5.5;  Network=f008aa20-1ea1-f6e5-c881-08911e3549e5;
>>>>>>>> pif=64d27657-d3be-df29-d365-0baa2dab5303
>>>>>>>> 2012-09-05 12:54:31,855 DEBUG [xen.resource.CitrixResourceBase]
>>>>>>>> (DirectAgent-97:null) Creating VLAN 15 on host 172.16.5.5 on
>>>>>>>> device eth1
>>>>>>>> 2012-09-05 12:54:31,885 WARN  [xen.resource.CitrixResourceBase]
>>>>>>>> (DirectAgent-97:null) Catch Exception: class
>>>>>>>> com.xensource.xenapi.Types$XenAPIException due to
>>>>>>>> CANNOT_ADD_VLAN_TO_BOND_SLAVEOpaqueRef:70faa00e-7444-ee41-3cf3-
>>>>>>>> 7d151b6fd561
>>>>>>>> CANNOT_ADD_VLAN_TO_BOND_SLAVEOpaqueRef:70faa00e-7444-ee41-3cf3-
>>>>>>>> 7d151b6fd561
>>>>>>>>      at
>> com.xensource.xenapi.Types.checkResponse(Types.java:1731)
>>>>>>>>      at
>>>> com.xensource.xenapi.Connection.dispatch(Connection.java:372)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> com.cloud.hypervisor.xen.resource.XenServerConnectionPool$XenServerConn
>>>>>>>> ection.dispatch(XenServerConnectionPool.java:905)
>>>>>>>>      at com.xensource.xenapi.VLAN.create(VLAN.java:349)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> com.cloud.hypervisor.xen.resource.CitrixResourceBase.enableVlanNetwork(
>>>>>>>> CitrixResourceBase.java:3646)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> com.cloud.hypervisor.xen.resource.CitrixResourceBase.getNetwork(CitrixR
>>>>>>>> esourceBase.java:635)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> com.cloud.hypervisor.xen.resource.CitrixResourceBase.createVif(CitrixRe
>>>>>>>> sourceBase.java:671)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> com.cloud.hypervisor.xen.resource.CitrixResourceBase.execute(CitrixReso
>>>>>>>> urceBase.java:1104)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> com.cloud.hypervisor.xen.resource.CitrixResourceBase.executeRequest(Cit
>>>>>>>> rixResourceBase.java:466)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> com.cloud.hypervisor.xen.resource.XenServer56Resource.executeRequest(Xe
>>>>>>>> nServer56Resource.java:69)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> com.cloud.agent.manager.DirectAgentAttache$Task.run(DirectAgentAttache.
>>>>>>>> java:187)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>>>>>>>>      at
>>>>>>>> 
>> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>>>>>>>>      at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.ac
>>>>>>>> cess$101(ScheduledThreadPoolExecutor.java:165)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.ru
>>>>>>>> n(ScheduledThreadPoolExecutor.java:266)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.ja
>>>>>>>> va:1110)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.j
>>>>>>>> ava:603)
>>>>>>>>      at java.lang.Thread.run(Thread.java:679)
>>>>>>>> 2012-09-05 12:54:31,886 WARN  [xen.resource.CitrixResourceBase]
>>>>>>>> (DirectAgent-97:null) Unable to start v-2-VM due to
>>>>>>>> CANNOT_ADD_VLAN_TO_BOND_SLAVEOpaqueRef:70faa00e-7444-ee41-3cf3-
>>>>>>>> 7d151b6fd561
>>>>>>>>      at
>> com.xensource.xenapi.Types.checkResponse(Types.java:1731)
>>>>>>>>      at
>>>> com.xensource.xenapi.Connection.dispatch(Connection.java:372)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> com.cloud.hypervisor.xen.resource.XenServerConnectionPool$XenServerConn
>>>>>>>> ection.dispatch(XenServerConnectionPool.java:905)
>>>>>>>>      at com.xensource.xenapi.VLAN.create(VLAN.java:349)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> com.cloud.hypervisor.xen.resource.CitrixResourceBase.enableVlanNetwork(
>>>>>>>> CitrixResourceBase.java:3646)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> com.cloud.hypervisor.xen.resource.CitrixResourceBase.getNetwork(CitrixR
>>>>>>>> esourceBase.java:635)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> com.cloud.hypervisor.xen.resource.CitrixResourceBase.createVif(CitrixRe
>>>>>>>> sourceBase.java:671)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> com.cloud.hypervisor.xen.resource.CitrixResourceBase.execute(CitrixReso
>>>>>>>> urceBase.java:1104)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> com.cloud.hypervisor.xen.resource.CitrixResourceBase.executeRequest(Cit
>>>>>>>> rixResourceBase.java:466)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> com.cloud.hypervisor.xen.resource.XenServer56Resource.executeRequest(Xe
>>>>>>>> nServer56Resource.java:69)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> com.cloud.agent.manager.DirectAgentAttache$Task.run(DirectAgentAttache.
>>>>>>>> java:187)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>>>>>>>>      at
>>>>>>>> 
>> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>>>>>>>>      at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.ac
>>>>>>>> cess$101(ScheduledThreadPoolExecutor.java:165)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.ru
>>>>>>>> n(ScheduledThreadPoolExecutor.java:266)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.ja
>>>>>>>> va:1110)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.j
>>>>>>>> ava:603)
>>>>>>>>      at java.lang.Thread.run(Thread.java:679)
>>>>>>>> 2012-09-05 12:54:31,977 WARN  [xen.resource.CitrixResourceBase]
>>>>>>>> (DirectAgent-97:null) Unable to clean up VBD due to
>>>>>>>> You gave an invalid object reference.  The object may have
>>>>>>>> recently been deleted.  The class parameter gives the type of
>>>>>>>> reference given, and the handle parameter echoes the bad value
>>>>>>>> given.
>>>>>>>>      at com.xensource.xenapi.Types.checkResponse(Types.java:211)
>>>>>>>>      at
>>>> com.xensource.xenapi.Connection.dispatch(Connection.java:372)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> com.cloud.hypervisor.xen.resource.XenServerConnectionPool$XenServerConn
>>>>>>>> ection.dispatch(XenServerConnectionPool.java:905)
>>>>>>>>      at com.xensource.xenapi.VBD.unplug(VBD.java:1058)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> com.cloud.hypervisor.xen.resource.CitrixResourceBase.handleVmStartFailu
>>>>>>>> re(CitrixResourceBase.java:929)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> com.cloud.hypervisor.xen.resource.CitrixResourceBase.execute(CitrixReso
>>>>>>>> urceBase.java:1167)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> com.cloud.hypervisor.xen.resource.CitrixResourceBase.executeRequest(Cit
>>>>>>>> rixResourceBase.java:466)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> com.cloud.hypervisor.xen.resource.XenServer56Resource.executeRequest(Xe
>>>>>>>> nServer56Resource.java:69)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> com.cloud.agent.manager.DirectAgentAttache$Task.run(DirectAgentAttache.
>>>>>>>> java:187)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>>>>>>>>      at
>>>>>>>> 
>> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>>>>>>>>      at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.ac
>>>>>>>> cess$101(ScheduledThreadPoolExecutor.java:165)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.ru
>>>>>>>> n(ScheduledThreadPoolExecutor.java:266)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.ja
>>>>>>>> va:1110)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.j
>>>>>>>> ava:603)
>>>>>>>>      at java.lang.Thread.run(Thread.java:679)
>>>>>>>> 2012-09-05 12:54:31,996 WARN  [xen.resource.CitrixResourceBase]
>>>>>>>> (DirectAgent-97:null) Unable to clean up VBD due to
>>>>>>>> You gave an invalid object reference.  The object may have
>>>>>>>> recently been deleted.  The class parameter gives the type of
>>>>>>>> reference given, and the handle parameter echoes the bad value
>>>>>>>> given.
>>>>>>>>      at com.xensource.xenapi.Types.checkResponse(Types.java:211)
>>>>>>>>      at
>>>> com.xensource.xenapi.Connection.dispatch(Connection.java:372)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> com.cloud.hypervisor.xen.resource.XenServerConnectionPool$XenServerConn
>>>>>>>> ection.dispatch(XenServerConnectionPool.java:905)
>>>>>>>>      at com.xensource.xenapi.VBD.unplug(VBD.java:1058)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> com.cloud.hypervisor.xen.resource.CitrixResourceBase.handleVmStartFailu
>>>>>>>> re(CitrixResourceBase.java:929)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> com.cloud.hypervisor.xen.resource.CitrixResourceBase.execute(CitrixReso
>>>>>>>> urceBase.java:1167)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> com.cloud.hypervisor.xen.resource.CitrixResourceBase.executeRequest(Cit
>>>>>>>> rixResourceBase.java:466)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> com.cloud.hypervisor.xen.resource.XenServer56Resource.executeRequest(Xe
>>>>>>>> nServer56Resource.java:69)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> com.cloud.agent.manager.DirectAgentAttache$Task.run(DirectAgentAttache.
>>>>>>>> java:187)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>>>>>>>>      at
>>>>>>>> 
>> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>>>>>>>>      at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.ac
>>>>>>>> cess$101(ScheduledThreadPoolExecutor.java:165)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.ru
>>>>>>>> n(ScheduledThreadPoolExecutor.java:266)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.ja
>>>>>>>> va:1110)
>>>>>>>>      at
>>>>>>>> 
>>>> 
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.j
>>>>>>>> ava:603)
>>>>>>>>      at java.lang.Thread.run(Thread.java:679)
>>>>>>>> 2012-09-05 12:54:31,996 DEBUG [xen.resource.CitrixResourceBase]
>>>>>>>> (DirectAgent-97:null) The VM is in stopped state, detected problem
>>>>>>>> during startup : v-2-VM
>>>>>>>> 2012-09-05 12:54:31,996 DEBUG [agent.manager.DirectAgentAttache]
>>>>>>>> (DirectAgent-97:null) Seq 4-204669124: Cancelling because one of
>>>>>>>> the answers is false and it is stop on error.
>>>>>>>> 2012-09-05 12:54:31,996 DEBUG [agent.manager.DirectAgentAttache]
>>>>>>>> (DirectAgent-97:null) Seq 4-204669124: Response Received:
>>>>>>>> 2012-09-05 12:54:31,997 DEBUG [agent.transport.Request]
>>>>>>>> (DirectAgent-97:null) Seq 4-204669124: Processing:  { Ans: ,
>>>>>>>> MgmtId: 130577622632, via: 4, Ver: v1, Flags: 110,
>>>>>>>> [{"StartAnswer":{"vm":{"id":2,"name":"v-2-VM","bootloader":"PyGrub","type":"ConsoleProxy","cpus":1,"speed":500,"minRam":1073741824,"maxRam":1073741824,"arch":"x86_64","os":"Debian GNU/Linux 5.0 (32-bit)","bootArgs":" template=domP type=consoleproxy host=172.16.5.2 port=8250 name=v-2-VM premium=true zone=1 pod=1 guid=Proxy.2 proxy_vm=2 disable_rp_filter=true eth2ip=172.16.15.118 eth2mask=255.255.255.0 gateway=172.16.15.1 eth0ip=169.254.2.227 eth0mask=255.255.0.0 eth1ip=172.16.5.34 eth1mask=255.255.255.0 mgmtcidr=172.16.5.0/24 localgw=172.16.5.1 internaldns1=8.8.8.8 internaldns2=8.8.4.4 dns1=8.8.8.8 dns2=8.8.4.4","rebootOnCrash":false,"enableHA":false,"limitCpuUse":false,"vncPassword":"9d8805d384dc10ce","params":{},"disks":[{"id":2,"name":"ROOT-2","mountPoint":"/iqn.2012-01.com.nfinausa:san01/1","path":"a77d8900-aa57-4ce9-9c73-6db35d985f48","size":2147483648,"type":"ROOT","storagePoolType":"IscsiLUN","storagePoolUuid":"776299a8-a770-30f7-9f6b-8c422790f5b7","deviceId":0}],"nics":[{"deviceId":2,"networkRateMbps":-1,"defaultNic":true,"ip":"172.16.15.118","netmask":"255.255.255.0","gateway":"172.16.15.1","mac":"06:18:16:00:00:27","dns1":"8.8.8.8","dns2":"8.8.4.4","broadcastType":"Vlan","type":"Public","broadcastUri":"vlan://15","isolationUri":"vlan://15","isSecurityGroupEnabled":false,"name":"guest_pub"},{"deviceId":0,"networkRateMbps":-1,"defaultNic":false,"ip":"169.254.2.227","netmask":"255.255.0.0","gateway":"169.254.0.1","mac":"0e:00:a9:fe:02:e3","broadcastType":"LinkLocal","type":"Control","isSecurityGroupEnabled":false},{"deviceId":1,"networkRateMbps":-1,"defaultNic":false,"ip":"172.16.5.34","netmask":"255.255.255.0","gateway":"172.16.5.1","mac":"06:58:58:00:00:0f","broadcastType":"Native","type":"Management","isSecurityGroupEnabled":false,"name":"management"}]},"result":false,"details":"Unable to start v-2-VM due to ","wait":0}}] }
>>>>>>>> 2012-09-05 12:54:31,997 WARN  [cloud.vm.VirtualMachineManagerImpl]
>>>>>>>> (DirectAgent-97:null) Cleanup failed due to Unable to start v-2-VM
>>>>>>>> due to
>>>>>>>> 2012-09-05 12:54:31,997 DEBUG [agent.transport.Request]
>>>>>>>> (consoleproxy-1:null) Seq 4-204669124: Received:  { Ans: , MgmtId:
>>>>>>>> 130577622632, via: 4, Ver: v1, Flags: 110, { StartAnswer } }
>>>>>>>> 2012-09-05 12:54:31,997 DEBUG [agent.manager.AgentAttache]
>>>>>>>> (DirectAgent-97:null) Seq 4-204669124: No more commands found
>>>>>>>> 2012-09-05 12:54:31,997 WARN  [cloud.vm.VirtualMachineManagerImpl]
>>>>>>>> (consoleproxy-1:null) Cleanup failed due to Unable to start v-2-VM
>>>>>>>> due to
>>>>>>>> 2012-09-05 12:54:32,002 INFO  [cloud.vm.VirtualMachineManagerImpl]
>>>>>>>> (consoleproxy-1:null) Unable to start VM on Host[-4-Routing] due to
>>>>>>>> Unable to start v-2-VM due to
>>>>>>>> 2012-09-05 12:54:32,005 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>>>>>>> (consoleproxy-1:null) Cleaning up resources for the vm
>>>>>>>> VM[ConsoleProxy|v-2-VM] in Starting state
>>>>>>>> 2012-09-05 12:54:32,006 DEBUG [agent.transport.Request]
>>>>>>>> (consoleproxy-1:null) Seq 4-204669125: Sending  { Cmd , MgmtId:
>>>>>>>> 130577622632, via: 4, Ver: v1, Flags: 100111,
>>>>>>>> [{"StopCommand":{"isProxy":false,"vmName":"v-2-VM","wait":0}}] }
>>>>>>>> 2012-09-05 12:54:32,006 DEBUG [agent.transport.Request]
>>>>>>>> (consoleproxy-1:null) Seq 4-204669125: Executing:  { Cmd , MgmtId:
>>>>>>>> 130577622632, via: 4, Ver: v1, Flags: 100111,
>>>>>>>> [{"StopCommand":{"isProxy":false,"vmName":"v-2-VM","wait":0}}] }
>>>>>>>> 2012-09-05 12:54:32,006 DEBUG [agent.manager.DirectAgentAttache]
>>>>>>>> (DirectAgent-104:null) Seq 4-204669125: Executing request
>>>>>>>> 2012-09-05 12:54:32,097 INFO  [xen.resource.CitrixResourceBase]
>>>>>>>> (DirectAgent-104:null) VM does not exist on
>>>>>>>> XenServer0c091805-ebac-4279-9899-52018163a557
>>>>>>>> 2012-09-05 12:54:32,097 DEBUG [agent.manager.DirectAgentAttache]
>>>>>>>> (DirectAgent-104:null) Seq 4-204669125: Response Received:
>>>>>>>> 2012-09-05 12:54:32,098 DEBUG [agent.transport.Request]
>>>>>>>> (DirectAgent-104:null) Seq 4-204669125: Processing:  { Ans: ,
>>>>>>>> MgmtId: 130577622632, via: 4, Ver: v1, Flags: 110,
>>>>>>>> [{"StopAnswer":{"vncPort":0,"bytesSent":0,"bytesReceived":0,"result":true,"details":"VM does not exist","wait":0}}] }
>>>>>>>> 2012-09-05 12:54:32,098 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>>>>>>> (DirectAgent-104:null) Cleanup succeeded. Details VM does not exist
>>>>>>>> 2012-09-05 12:54:32,098 DEBUG [agent.transport.Request]
>>>>>>>> (consoleproxy-1:null) Seq 4-204669125: Received:  { Ans: , MgmtId:
>>>>>>>> 130577622632, via: 4, Ver: v1, Flags: 110, { StopAnswer } }
>>>>>>>> 2012-09-05 12:54:32,098 DEBUG [agent.manager.AgentAttache]
>>>>>>>> (DirectAgent-104:null) Seq 4-204669125: No more commands found
>>>>>>>> 2012-09-05 12:54:32,098 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>>>>>>> (consoleproxy-1:null) Cleanup succeeded. Details VM does not exist
>>>>>>>> 2012-09-05 12:54:32,119 DEBUG [dc.dao.DataCenterIpAddressDaoImpl]
>>>>>>>> (consoleproxy-1:null) Releasing ip address for
>>>>>>>> reservationId=636f327b-5183-46bd-adf2-57bf0d5fae80, instance=7
>>>>>>>> 2012-09-05 12:54:32,125 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>>>>>>> (consoleproxy-1:null) Successfully cleanued up resources for the vm
>>>>>>>> VM[ConsoleProxy|v-2-VM] in Starting state
>>>>>>>> 2012-09-05 12:54:32,126 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>>>>>>> (consoleproxy-1:null) Root volume is ready, need to place VM in
>>>>>>>> volume's cluster
>>>>>>>> 2012-09-05 12:54:32,126 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>>>>>>> (consoleproxy-1:null) Vol[2|vm=2|ROOT] is READY, changing deployment
>>>>>>>> plan to use this pool's dcId: 1 , podId: 1 , and clusterId: 1
>>>>>>>> 2012-09-05 12:54:32,127 DEBUG [cloud.deploy.FirstFitPlanner]
>>>>>>>> (consoleproxy-1:null) DeploymentPlanner allocation algorithm: random
>>>>>>>> 2012-09-05 12:54:32,127 DEBUG [cloud.deploy.FirstFitPlanner]
>>>>>>>> (consoleproxy-1:null) Trying to allocate a host and storage pools
>>>>>>>> from dc:1, pod:1,cluster:1, requested cpu: 500, requested ram:
>>>>>>>> 1073741824
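
Reading the log above: CloudStack found more than one network labelled guest_pub and resolved it to a PIF on bare eth1, which is now a bond slave, so VLAN.create fails with CANNOT_ADD_VLAN_TO_BOND_SLAVE. One way to check for (and remove) a stale duplicate network, sketched with xe (UUIDs are placeholders):

```shell
# List every network carrying the guest_pub label:
xe network-list name-label=guest_pub params=uuid,name-label,bridge

# For each candidate, see whether its PIF is the bond or a slave:
xe pif-list network-uuid=<network-uuid> \
    params=device,VLAN,bond-master-of,bond-slave-of

# If a leftover network still points at the bare eth1 PIF, removing it
# (assuming nothing else uses it) should leave only the bond's network
# for CloudStack to find:
xe network-destroy uuid=<stale-network-uuid>
```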
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>>> --
>>>>>>>> Regards,
>>>>>>>> 
>>>>>>>> Nik
>>>>>>>> 
>>>>>>>> Nik Martin
>>>>>>>> VP Business Development
>>>>>>>> Nfina Technologies, Inc.
>>>>>>>> +1.251.243.0043 x1003
>>>>>>>> Relentless Reliability
>>>>>> 
>>>>>> 
>>>>>> --
>>>>>> Regards,
>>>>>> 
>>>>>> Nik
>>>>>> 
>>>>>> Nik Martin
>>>>>> VP Business Development
>>>>>> Nfina Technologies, Inc.
>>>>>> +1.251.243.0043 x1003
>>>>>> Relentless Reliability
>>>> 
>>>> 
>>>> --
>>>> Regards,
>>>> 
>>>> Nik
>>>> 
>>>> Nik Martin
>>>> VP Business Development
>>>> Nfina Technologies, Inc.
>>>> +1.251.243.0043 x1003
>>>> Relentless Reliability
> 

