cloudstack-dev mailing list archives

From John Burwell <jburw...@basho.com>
Subject Re: SSVM Network Configuration Issue
Date Mon, 17 Dec 2012 17:01:46 GMT
Prasanna,

I applied the changes suggested below, and the host now fails to start up properly with
the following error in the log:

2012-12-17 08:59:56,408 WARN  [cloud.resource.ResourceManagerImpl] (AgentTaskPool-2:null) Unable to connect due to
com.cloud.exception.ConnectionException: Incorrect Network setup on agent, Reinitialize agent after network names are setup, details: For Physical Network id:200, Guest Network is not configured on the backend by name cloud-guest
        at com.cloud.network.NetworkManagerImpl.processConnect(NetworkManagerImpl.java:6656)
        at com.cloud.agent.manager.AgentManagerImpl.notifyMonitorsOfConnection(AgentManagerImpl.java:611)
        at com.cloud.agent.manager.AgentManagerImpl.handleDirectConnectAgent(AgentManagerImpl.java:1502)
        at com.cloud.resource.ResourceManagerImpl.createHostAndAgent(ResourceManagerImpl.java:1648)
        at com.cloud.resource.ResourceManagerImpl.createHostAndAgent(ResourceManagerImpl.java:1685)
        at com.cloud.agent.manager.AgentManagerImpl$SimulateStartTask.run(AgentManagerImpl.java:1152)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
        at java.lang.Thread.run(Thread.java:679)
2012-12-17 08:59:56,408 DEBUG [cloud.host.Status] (AgentTaskPool-2:null) Transition:[Resource state = Enabled, Agent event = AgentDisconnected, Host id = 1, name = devcloud]


I have also attached a copy of my Marvin configuration for your reference.

Thanks for your help,
-John



On Dec 15, 2012, at 9:45 PM, Prasanna Santhanam <tsp@apache.org> wrote:

> Traffic types carry the label information for the physical NIC they are
> associated with. It is via the traffic label that you tell CloudStack about
> the labels you have given to the hypervisor's interfaces.
> 
> Your devcloud.cfg will have to be altered for this:
> 
> After altering, it should look something like:
> 
> .... SNIP ....
> "traffictypes": [
>                        {
>                            "xen": "cloud-guest", 
>                            "typ": "Guest"
>                        }, 
>                        {
>                            "typ": "Management",
>                            "xen" : "cloud-mgmt"
>                        }
>                    ], 
> .... SNIP ....
> 
> An example is available under 
> incubator-cloudstack/tools/marvin/marvin/configGenerator.py
> 
> Look for the method: describe_setup_in_eip_mode()
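> 
> If you would rather script the change than hand-edit the JSON, here is a
> minimal sketch. The surrounding keys ("zones", "physical_networks") and the
> single-zone indexing are assumptions based on a typical devcloud.cfg, so
> adjust them to match your file:
> 
>     # Illustrative only: the file path and the surrounding keys are assumptions,
>     # not taken from the actual configuration attached to this thread.
>     import json
> 
>     traffictypes = [
>         {"typ": "Guest",      "xen": "cloud-guest"},  # XenServer name-label for guest traffic
>         {"typ": "Management", "xen": "cloud-mgmt"},   # XenServer name-label for management traffic
>     ]
> 
>     with open("devcloud.cfg") as f:
>         cfg = json.load(f)
> 
>     # Assumes a basic zone with a single physical network.
>     cfg["zones"][0]["physical_networks"][0]["traffictypes"] = traffictypes
> 
>     with open("devcloud.cfg", "w") as f:
>         json.dump(cfg, f, indent=4)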
> 
> 
> HTH
> -- 
> Prasanna.,
> 
> 
> On Sun, Dec 16, 2012 at 02:55:30AM +0530, John Burwell wrote:
>> Rohit,
>> 
>> As I stated below, I know which VIF->PIF->device Xen mapping needs
>> to be used for each network.  My question is how to configure
>> CloudStack with that information.
>> 
>> Thanks for your help,
>> -John
>> 
>> 
>> 
>> 
>> On Dec 15, 2012, at 2:51 PM, Rohit Yadav <rohit.yadav@citrix.com> wrote:
>> 
>>> About Xen bridging, these can help:
>>> 
>>> brctl show (bridge/VIF mappings)
>>> ip addr show xenbr0 (bridge-specific info)
>>> brctl showmacs br0 (bridge MAC mappings)
>>> 
>>> Wiki:
>>> http://wiki.xen.org/wiki/Xen_FAQ_Networking
>>> http://wiki.xen.org/wiki/XenNetworking
>>> 
>>> Regards.
>>> 
>>> ________________________________________
>>> From: John Burwell [jburwell@basho.com]
>>> Sent: Saturday, December 15, 2012 8:35 PM
>>> To: cloudstack-dev@incubator.apache.org
>>> Subject: Re: SSVM Network Configuration Issue
>>> 
>>> Marcus,
>>> 
>>> That's what I thought.  The Xen physical bridge names are xenbr0 (to
>>> eth0) and xenbr1 (to eth1).  Using the basic network configuration, I set
>>> the Xen network traffic labels for each to the appropriate bridge
>>> device name.  I receive errors regarding an invalid network device when
>>> it attempts to create a VM.  Does anyone else know how to determine the
>>> mapping of physical devices to CloudStack Xen network traffic labels?
>>> 
>>> Thanks,
>>> -John
>>> 
>>> On Dec 15, 2012, at 1:20 AM, Marcus Sorensen <shadowsor@gmail.com> wrote:
>>> 
>>>> Vlans in advanced/KVM should only be required for the guest networks. If I
>>>> create a bridge on physical eth0, name it 'br0', and a bridge on physical
>>>> eth1, name it 'br1', and then set my management network to label 'br0', my
>>>> public network to 'br1', and my guest network to 'br1', it should use the
>>>> bridges you asked for when connecting the system VMs for the specified
>>>> traffic. I'd just leave the 'vlan' blank when specifying public and
>>>> pod (management) IPs. In this scenario, the only place you need to enter
>>>> vlans is on the guest, and it should create new tagged interfaces/bridges
>>>> on eth1 (per your label of br1) as new guest networks are brought online.
>>>> This is how my dev VMs are usually set up.
>>>> 
>>>> 
>>>> On Thu, Dec 6, 2012 at 11:03 AM, John Burwell <jburwell@basho.com> wrote:
>>>> 
>>>>> Marcus,
>>>>> 
>>>>> My question, more specifically, is: are VLANs required to implement traffic
>>>>> labels?  Also, can traffic labels be configured in Basic networking mode, or
>>>>> do I need to switch my configuration to Advanced?
>>>>> 
>>>>> I am not disagreeing on how DNS servers should be associated with
>>>>> interfaces, nor do I think a network operator should be required to make any
>>>>> upstream router configuration changes.  I am simply saying that CloudStack
>>>>> should not make assumptions about the gateways that have been specified.
>>>>> The behavior I experienced of CloudStack attempting to "correct" my
>>>>> configuration by injecting another route fails the rule of least surprise
>>>>> and is based on incomplete knowledge.  In my opinion, CloudStack (or any
>>>>> system of its ilk) should faithfully (or slavishly) realize the routes on
>>>>> the system VM as specified.  If the configuration is incorrect, networking
>>>>> will fail in an expected manner, and the operator can adjust their
>>>>> environment as necessary.  Otherwise, there is an upstream router
>>>>> configuration to which CloudStack has no visibility, but with which it is
>>>>> completely compatible.  Essentially, I am asking CloudStack to do less,
>>>>> assume I know what I am doing, and break in a manner consistent with other
>>>>> network applications.
>>>>> 
>>>>> Thanks,
>>>>> -John
>>>>> 
>>>>> On Dec 6, 2012, at 12:30 PM, Marcus Sorensen <shadowsor@gmail.com> wrote:
>>>>> 
>>>>>> Traffic labels essentially tell the system which physical network to use.
>>>>>> So if you've allocated a vlan for a specific traffic type, it will first
>>>>>> look at the tag associated with that traffic type, figure out which
>>>>>> physical interface goes with that, and then create a tagged interface and
>>>>>> bridge on that physical interface as well.
>>>>>> 
>>>>>> I guess we'll just have to disagree; I think the current behavior makes
>>>>>> total sense.  To me, internal DNS should always use the management
>>>>>> interface, since it's internally facing. There's no sane way to do that
>>>>>> other than a static route on the system vm (it seems you're suggesting
>>>>>> that the network operator force something like this on the upstream
>>>>>> router, which seems really strange, as it would require everyone to create
>>>>>> static routes on their public network to force specific IPs back into
>>>>>> their internal networks, so correct me if I have the wrong impression).
>>>>>> CloudStack is doing exactly what you tell it to. You told it that 10.0.3.2
>>>>>> should be accessible via your internal network by setting it as your
>>>>>> internal DNS.  The fact that a broken config doesn't work isn't
>>>>>> CloudStack's fault.
>>>>>> 
>>>>>> Note that internal DNS is just the default for the ssvm; public DNS is
>>>>>> still offered as a backup, so had you not said that 10.0.3.2 was available
>>>>>> on your internal network (by offering a dummy internal DNS address or
>>>>>> 192.168.56.1, for example), lookups would fall back to public and
>>>>>> everything would work as expected as well.
>>>>>> 
>>>>>> There is also a global config called 'use.external.dns', but after setting
>>>>>> this, restarting the management server, and recreating the system VMs, I
>>>>>> don't see a noticeable difference in any of this, so perhaps that would
>>>>>> solve your issue as well, but it's either broken or doesn't do what I
>>>>>> thought it would.
>>>>>> 
>>>>>> 
>>>>>> On Thu, Dec 6, 2012 at 8:39 AM, John Burwell <jburwell@basho.com> wrote:
>>>>>> 
>>>>>>> Marcus,
>>>>>>> 
>>>>>>> Are traffic labels independent of VLANs?  I ask because my current XCP
>>>>>>> network configuration is bridged, and I am not using Open vSwitch.
>>>>>>> 
>>>>>>> I disagree on the routing issue.  CloudStack should do what it's told
>>>>>>> because it does not have insight into or control of the configuration of
>>>>>>> the routes in the layers beneath it.  If CloudStack simply did as it was
>>>>>>> told, it would fail as expected in a typical networking environment while
>>>>>>> preserving the flexibility of configuration expected by a network engineer.
>>>>>>> 
>>>>>>> Thanks,
>>>>>>> -John
>>>>>>> 
>>>>>>> On Dec 6, 2012, at 10:35 AM, Marcus Sorensen <shadowsor@gmail.com> wrote:
>>>>>>> 
>>>>>>>> I can't really tell you for Xen, although it might be similar to KVM.
>>>>>>>> During setup I would set a traffic label matching the name of the bridge;
>>>>>>>> for example, if my public interface were eth0 and the bridge I had set up
>>>>>>>> was br0, I'd go to the zone network settings, find public traffic, and set
>>>>>>>> a label on it of "br0". Maybe someone more familiar with the Xen setup can
>>>>>>>> help.
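>>>>>>>> 
>>>>>>>> As a rough sketch only, the same label change can be scripted against the
>>>>>>>> API instead of the UI. This assumes the unauthenticated integration API
>>>>>>>> port (8096) is enabled on the management server; the address and labels
>>>>>>>> below are placeholders, not values taken from this thread:
>>>>>>>> 
>>>>>>>>     # Hypothetical example; the endpoint, labels, and the integration-port
>>>>>>>>     # assumption are all placeholders.
>>>>>>>>     import requests
>>>>>>>> 
>>>>>>>>     API = "http://192.168.56.15:8096/client/api"
>>>>>>>> 
>>>>>>>>     def call(command, **params):
>>>>>>>>         params.update({"command": command, "response": "json"})
>>>>>>>>         r = requests.get(API, params=params)
>>>>>>>>         r.raise_for_status()
>>>>>>>>         return r.json()
>>>>>>>> 
>>>>>>>>     # Look up the traffic types on the zone's physical network.
>>>>>>>>     pn = call("listPhysicalNetworks")["listphysicalnetworksresponse"]["physicalnetwork"][0]
>>>>>>>>     tts = call("listTrafficTypes", physicalnetworkid=pn["id"])["listtraffictypesresponse"]["traffictype"]
>>>>>>>> 
>>>>>>>>     # Point each traffic type at the bridge/name-label that exists on the host.
>>>>>>>>     labels = {"Public": "br0", "Guest": "br1", "Management": "br0"}
>>>>>>>>     for tt in tts:
>>>>>>>>         if tt["traffictype"] in labels:
>>>>>>>>             call("updateTrafficType", id=tt["id"], xennetworklabel=labels[tt["traffictype"]])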
>>>>>>>> 
>>>>>>>> On the DNS, it makes sense from the perspective that the ssvm has access
>>>>>>>> to your internal networks, thus it uses your internal DNS. Its default
>>>>>>>> gateway is public. So if I have a DNS server on an internal network at
>>>>>>>> 10.30.20.10/24, and my management network on 192.168.10.0/24, this route
>>>>>>>> has to be set in order for the DNS server to be reachable. You would,
>>>>>>>> under normal circumstances, not want to use a DNS server on the public
>>>>>>>> net as your internal DNS setting anyway, although I agree that the route
>>>>>>>> insertion should have a bit more sanity checking and not set a static
>>>>>>>> route to your default gateway.
>>>>>>>> On Dec 6, 2012 6:31 AM, "John Burwell" <jburwell@basho.com> wrote:
>>>>>>>> 
>>>>>>>>> Marcus,
>>>>>>>>> 
>>>>>>>>> I set up a small PowerDNS recursor on 192.168.56.15, configured the DNS
>>>>>>>>> for the management network to use it, and the route table in the SSVM is
>>>>>>>>> now correct.  However, this behavior does not seem correct.  At a
>>>>>>>>> minimum, it violates the rule of least surprise.  CloudStack shouldn't be
>>>>>>>>> adding gateways that are not configured.  Therefore, I have entered a
>>>>>>>>> defect[1] to remove the behavior.
>>>>>>>>> 
>>>>>>>>> With the route table fixed, I am now experiencing a new problem.  The
>>>>>>>>> external NIC (10.0.3.0/24) on the SSVM is being connected to the internal
>>>>>>>>> NIC (192.168.56.0/24) on the host.  The host-only network (192.168.56.15)
>>>>>>>>> is configured on xenbr0 and the NAT network is configured on xenbr1.  As
>>>>>>>>> a reference, the following are the contents of the /etc/network/interfaces
>>>>>>>>> file and the ifconfig output from the devcloud host:
>>>>>>>>> 
>>>>>>>>> root@zone1:/opt/cloudstack/apache-tomcat-6.0.32/bin# cat /etc/network/interfaces
>>>>>>>>> # The loopback network interface
>>>>>>>>> auto lo
>>>>>>>>> iface lo inet loopback
>>>>>>>>> 
>>>>>>>>> auto eth0
>>>>>>>>> iface eth0 inet manual
>>>>>>>>> 
>>>>>>>>> allow-hotplug eth1
>>>>>>>>> iface eth1 inet manual
>>>>>>>>> 
>>>>>>>>> # The primary network interface
>>>>>>>>> auto xenbr0
>>>>>>>>> iface xenbr0 inet static
>>>>>>>>> address 192.168.56.15
>>>>>>>>> netmask 255.255.255.0
>>>>>>>>> network 192.168.56.0
>>>>>>>>> broadcast 192.168.56.255
>>>>>>>>> dns_nameserver 192.168.56.15
>>>>>>>>> bridge_ports eth0
>>>>>>>>> 
>>>>>>>>> auto xenbr1
>>>>>>>>> iface xenbr1 inet dhcp
>>>>>>>>> bridge_ports eth1
>>>>>>>>> dns_nameserver 8.8.8.8 8.8.4.4
>>>>>>>>> post-up route add default gw 10.0.3.2
>>>>>>>>> 
>>>>>>>>> root@zone1:/opt/cloudstack/apache-tomcat-6.0.32/bin# ifconfig
>>>>>>>>> eth0      Link encap:Ethernet  HWaddr 08:00:27:7e:74:9c
>>>>>>>>>      inet6 addr: fe80::a00:27ff:fe7e:749c/64 Scope:Link
>>>>>>>>>      UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>>>>>>>>>      RX packets:777 errors:0 dropped:0 overruns:0 frame:0
>>>>>>>>>      TX packets:188 errors:0 dropped:0 overruns:0 carrier:0
>>>>>>>>>      collisions:0 txqueuelen:1000
>>>>>>>>>      RX bytes:109977 (109.9 KB)  TX bytes:11900 (11.9 KB)
>>>>>>>>> 
>>>>>>>>> eth1      Link encap:Ethernet  HWaddr 08:00:27:df:00:00
>>>>>>>>>      inet6 addr: fe80::a00:27ff:fedf:0/64 Scope:Link
>>>>>>>>>      UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>>>>>>>>>      RX packets:4129 errors:0 dropped:0 overruns:0 frame:0
>>>>>>>>>      TX packets:3910 errors:0 dropped:0 overruns:0 carrier:0
>>>>>>>>>      collisions:0 txqueuelen:1000
>>>>>>>>>      RX bytes:478719 (478.7 KB)  TX bytes:2542459 (2.5 MB)
>>>>>>>>> 
>>>>>>>>> lo        Link encap:Local Loopback
>>>>>>>>>      inet addr:127.0.0.1  Mask:255.0.0.0
>>>>>>>>>      inet6 addr: ::1/128 Scope:Host
>>>>>>>>>      UP LOOPBACK RUNNING  MTU:16436  Metric:1
>>>>>>>>>      RX packets:360285 errors:0 dropped:0 overruns:0 frame:0
>>>>>>>>>      TX packets:360285 errors:0 dropped:0 overruns:0 carrier:0
>>>>>>>>>      collisions:0 txqueuelen:0
>>>>>>>>>      RX bytes:169128181 (169.1 MB)  TX bytes:169128181 (169.1 MB)
>>>>>>>>> 
>>>>>>>>> vif1.0    Link encap:Ethernet  HWaddr fe:ff:ff:ff:ff:ff
>>>>>>>>>      inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
>>>>>>>>>      UP BROADCAST RUNNING NOARP PROMISC  MTU:1500  Metric:1
>>>>>>>>>      RX packets:6 errors:0 dropped:0 overruns:0 frame:0
>>>>>>>>>      TX packets:152 errors:0 dropped:0 overruns:0 carrier:0
>>>>>>>>>      collisions:0 txqueuelen:32
>>>>>>>>>      RX bytes:292 (292.0 B)  TX bytes:9252 (9.2 KB)
>>>>>>>>> 
>>>>>>>>> vif1.1    Link encap:Ethernet  HWaddr fe:ff:ff:ff:ff:ff
>>>>>>>>>      inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
>>>>>>>>>      UP BROADCAST RUNNING NOARP PROMISC  MTU:1500  Metric:1
>>>>>>>>>      RX packets:566 errors:0 dropped:0 overruns:0 frame:0
>>>>>>>>>      TX packets:1405 errors:0 dropped:0 overruns:0 carrier:0
>>>>>>>>>      collisions:0 txqueuelen:32
>>>>>>>>>      RX bytes:44227 (44.2 KB)  TX bytes:173995 (173.9 KB)
>>>>>>>>> 
>>>>>>>>> vif1.2    Link encap:Ethernet  HWaddr fe:ff:ff:ff:ff:ff
>>>>>>>>>      inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
>>>>>>>>>      UP BROADCAST RUNNING NOARP PROMISC  MTU:1500  Metric:1
>>>>>>>>>      RX packets:3 errors:0 dropped:0 overruns:0 frame:0
>>>>>>>>>      TX packets:838 errors:0 dropped:0 overruns:0 carrier:0
>>>>>>>>>      collisions:0 txqueuelen:32
>>>>>>>>>      RX bytes:84 (84.0 B)  TX bytes:111361 (111.3 KB)
>>>>>>>>> 
>>>>>>>>> vif4.0    Link encap:Ethernet  HWaddr fe:ff:ff:ff:ff:ff
>>>>>>>>>      inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
>>>>>>>>>      UP BROADCAST RUNNING NOARP PROMISC  MTU:1500  Metric:1
>>>>>>>>>      RX packets:64 errors:0 dropped:0 overruns:0 frame:0
>>>>>>>>>      TX packets:197 errors:0 dropped:0 overruns:0 carrier:0
>>>>>>>>>      collisions:0 txqueuelen:32
>>>>>>>>>      RX bytes:10276 (10.2 KB)  TX bytes:18453 (18.4 KB)
>>>>>>>>> 
>>>>>>>>> vif4.1    Link encap:Ethernet  HWaddr fe:ff:ff:ff:ff:ff
>>>>>>>>>      inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
>>>>>>>>>      UP BROADCAST RUNNING NOARP PROMISC  MTU:1500  Metric:1
>>>>>>>>>      RX packets:2051 errors:0 dropped:0 overruns:0 frame:0
>>>>>>>>>      TX packets:2446 errors:0 dropped:0 overruns:0 carrier:0
>>>>>>>>>      collisions:0 txqueuelen:32
>>>>>>>>>      RX bytes:233914 (233.9 KB)  TX bytes:364243 (364.2 KB)
>>>>>>>>> 
>>>>>>>>> vif4.2    Link encap:Ethernet  HWaddr fe:ff:ff:ff:ff:ff
>>>>>>>>>      inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
>>>>>>>>>      UP BROADCAST RUNNING NOARP PROMISC  MTU:1500  Metric:1
>>>>>>>>>      RX packets:3 errors:0 dropped:0 overruns:0 frame:0
>>>>>>>>>      TX packets:582 errors:0 dropped:0 overruns:0 carrier:0
>>>>>>>>>      collisions:0 txqueuelen:32
>>>>>>>>>      RX bytes:84 (84.0 B)  TX bytes:74700 (74.7 KB)
>>>>>>>>> 
>>>>>>>>> vif4.3    Link encap:Ethernet  HWaddr fe:ff:ff:ff:ff:ff
>>>>>>>>>      inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
>>>>>>>>>      UP BROADCAST RUNNING NOARP PROMISC  MTU:1500  Metric:1
>>>>>>>>>      RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>>>>>>>>>      TX packets:585 errors:0 dropped:0 overruns:0 carrier:0
>>>>>>>>>      collisions:0 txqueuelen:32
>>>>>>>>>      RX bytes:0 (0.0 B)  TX bytes:74826 (74.8 KB)
>>>>>>>>> 
>>>>>>>>> xapi0     Link encap:Ethernet  HWaddr fe:ff:ff:ff:ff:ff
>>>>>>>>>      inet addr:169.254.0.1  Bcast:169.254.255.255  Mask:255.255.0.0
>>>>>>>>>      inet6 addr: fe80::c870:1aff:fec2:22b/64 Scope:Link
>>>>>>>>>      UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>>>>>>>>>      RX packets:568 errors:0 dropped:0 overruns:0 frame:0
>>>>>>>>>      TX packets:1132 errors:0 dropped:0 overruns:0 carrier:0
>>>>>>>>>      collisions:0 txqueuelen:0
>>>>>>>>>      RX bytes:76284 (76.2 KB)  TX bytes:109085 (109.0 KB)
>>>>>>>>> 
>>>>>>>>> xenbr0    Link encap:Ethernet  HWaddr 08:00:27:7e:74:9c
>>>>>>>>>      inet addr:192.168.56.15  Bcast:192.168.56.255  Mask:255.255.255.0
>>>>>>>>>      inet6 addr: fe80::a00:27ff:fe7e:749c/64 Scope:Link
>>>>>>>>>      UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>>>>>>>>>      RX packets:4162 errors:0 dropped:0 overruns:0 frame:0
>>>>>>>>>      TX packets:3281 errors:0 dropped:0 overruns:0 carrier:0
>>>>>>>>>      collisions:0 txqueuelen:0
>>>>>>>>>      RX bytes:469199 (469.1 KB)  TX bytes:485688 (485.6 KB)
>>>>>>>>> 
>>>>>>>>> xenbr1    Link encap:Ethernet  HWaddr 08:00:27:df:00:00
>>>>>>>>>      inet addr:10.0.3.15  Bcast:10.0.3.255  Mask:255.255.255.0
>>>>>>>>>      inet6 addr: fe80::a00:27ff:fedf:0/64 Scope:Link
>>>>>>>>>      UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>>>>>>>>>      RX packets:4129 errors:0 dropped:0 overruns:0 frame:0
>>>>>>>>>      TX packets:3114 errors:0 dropped:0 overruns:0 carrier:0
>>>>>>>>>      collisions:0 txqueuelen:0
>>>>>>>>>      RX bytes:404327 (404.3 KB)  TX bytes:2501443 (2.5 MB)
>>>>>>>>> 
>>>>>>>>> These physical NICs on the host translate to the following Xen PIFs:
>>>>>>>>> 
>>>>>>>>> root@zone1:/opt/cloudstack/apache-tomcat-6.0.32/bin# xe pif-list
>>>>>>>>> uuid ( RO)                  : 207413c9-5058-7a40-6c96-2dab21057f30
>>>>>>>>>            device ( RO): eth1
>>>>>>>>> currently-attached ( RO): true
>>>>>>>>>              VLAN ( RO): -1
>>>>>>>>>      network-uuid ( RO): 1679ddb1-5a21-b827-ab07-c16275d5ce72
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> uuid ( RO)                  : c0274787-e768-506f-3191-f0ac17b0c72b
>>>>>>>>>            device ( RO): eth0
>>>>>>>>> currently-attached ( RO): true
>>>>>>>>>              VLAN ( RO): -1
>>>>>>>>>      network-uuid ( RO): 8ee927b1-a35d-ac10-4471-d7a6a475839a
>>>>>>>>> 
>>>>>>>>> The following is the ifconfig from the SSVM:
>>>>>>>>> 
>>>>>>>>> root@s-5-TEST:~# ifconfig
>>>>>>>>> eth0      Link encap:Ethernet  HWaddr 0e:00:a9:fe:03:8b
>>>>>>>>>      inet addr:169.254.3.139  Bcast:169.254.255.255  Mask:255.255.0.0
>>>>>>>>>      UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>>>>>>>>>      RX packets:235 errors:0 dropped:0 overruns:0 frame:0
>>>>>>>>>      TX packets:92 errors:0 dropped:0 overruns:0 carrier:0
>>>>>>>>>      collisions:0 txqueuelen:1000
>>>>>>>>>      RX bytes:21966 (21.4 KiB)  TX bytes:16404 (16.0 KiB)
>>>>>>>>>      Interrupt:8
>>>>>>>>> 
>>>>>>>>> eth1      Link encap:Ethernet  HWaddr 06:bc:62:00:00:05
>>>>>>>>>      inet addr:192.168.56.104  Bcast:192.168.56.255  Mask:255.255.255.0
>>>>>>>>>      UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>>>>>>>>>      RX packets:2532 errors:0 dropped:0 overruns:0 frame:0
>>>>>>>>>      TX packets:2127 errors:0 dropped:0 overruns:0 carrier:0
>>>>>>>>>      collisions:0 txqueuelen:1000
>>>>>>>>>      RX bytes:341242 (333.2 KiB)  TX bytes:272183 (265.8 KiB)
>>>>>>>>>      Interrupt:10
>>>>>>>>> 
>>>>>>>>> eth2      Link encap:Ethernet  HWaddr 06:12:72:00:00:37
>>>>>>>>>      inet addr:10.0.3.204  Bcast:10.0.3.255  Mask:255.255.255.0
>>>>>>>>>      UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>>>>>>>>>      RX packets:600 errors:0 dropped:0 overruns:0 frame:0
>>>>>>>>>      TX packets:3 errors:0 dropped:0 overruns:0 carrier:0
>>>>>>>>>      collisions:0 txqueuelen:1000
>>>>>>>>>      RX bytes:68648 (67.0 KiB)  TX bytes:126 (126.0 B)
>>>>>>>>>      Interrupt:11
>>>>>>>>> 
>>>>>>>>> eth3      Link encap:Ethernet  HWaddr 06:25:e2:00:00:15
>>>>>>>>>      inet addr:192.168.56.120  Bcast:192.168.56.255  Mask:255.255.255.0
>>>>>>>>>      UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>>>>>>>>>      RX packets:603 errors:0 dropped:0 overruns:0 frame:0
>>>>>>>>>      TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>>>>>>>>>      collisions:0 txqueuelen:1000
>>>>>>>>>      RX bytes:68732 (67.1 KiB)  TX bytes:0 (0.0 B)
>>>>>>>>>      Interrupt:12
>>>>>>>>> 
>>>>>>>>> lo        Link encap:Local Loopback
>>>>>>>>>      inet addr:127.0.0.1  Mask:255.0.0.0
>>>>>>>>>      UP LOOPBACK RUNNING  MTU:16436  Metric:1
>>>>>>>>>      RX packets:61 errors:0 dropped:0 overruns:0 frame:0
>>>>>>>>>      TX packets:61 errors:0 dropped:0 overruns:0 carrier:0
>>>>>>>>>      collisions:0 txqueuelen:0
>>>>>>>>>      RX bytes:5300 (5.1 KiB)  TX bytes:5300 (5.1 KiB)
>>>>>>>>> 
>>>>>>>>> Finally, the following are the vif params for the eth2 device on the SSVM
>>>>>>>>> depicting its connection to eth0 instead of eth1:
>>>>>>>>> 
>>>>>>>>> root@zone1:/opt/cloudstack/apache-tomcat-6.0.32/bin# !1243
>>>>>>>>> xe vif-param-list uuid=be44bb30-5700-b461-760e-10fe93079210
>>>>>>>>> uuid ( RO)                        : be44bb30-5700-b461-760e-10fe93079210
>>>>>>>>>                 vm-uuid ( RO): 7958d91f-e52d-a25d-718c-7f831ae701d7
>>>>>>>>>           vm-name-label ( RO): s-5-TEST
>>>>>>>>>      allowed-operations (SRO): attach; unplug_force; unplug
>>>>>>>>>      current-operations (SRO):
>>>>>>>>>                  device ( RO): 2
>>>>>>>>>                     MAC ( RO): 06:12:72:00:00:37
>>>>>>>>>       MAC-autogenerated ( RO): false
>>>>>>>>>                     MTU ( RO): 1500
>>>>>>>>>      currently-attached ( RO): true
>>>>>>>>>      qos_algorithm_type ( RW): ratelimit
>>>>>>>>>    qos_algorithm_params (MRW): kbps: 25600
>>>>>>>>> qos_supported_algorithms (SRO):
>>>>>>>>>            other-config (MRW): nicira-iface-id: 3d68b9f8-98d1-4ac7-92d8-fb57cb8b0adc; nicira-vm-id: 7958d91f-e52d-a25d-718c-7f831ae701d7
>>>>>>>>>            network-uuid ( RO): 8ee927b1-a35d-ac10-4471-d7a6a475839a
>>>>>>>>>      network-name-label ( RO): Pool-wide network associated with eth0
>>>>>>>>>             io_read_kbs ( RO): 0.007
>>>>>>>>>            io_write_kbs ( RO): 0.000
>>>>>>>>> 
>>>>>>>>> How do I configure CloudStack such that the guest network NIC on the VM
>>>>>>>>> will be connected to the correct physical NIC?
>>>>>>>>> 
>>>>>>>>> Thanks for your help,
>>>>>>>>> -John
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> [1]: https://issues.apache.org/jira/browse/CLOUDSTACK-590
>>>>>>>>> 
>>>>>>>>> On Dec 5, 2012, at 2:47 PM, Marcus Sorensen <shadowsor@gmail.com> wrote:
>>>>>>>>> 
>>>>>>>>>> Yes, see your cmdline: internaldns1=10.0.3.2, so it is forcing the use
>>>>>>>>>> of the management network to route to 10.0.3.2 for DNS. That's where the
>>>>>>>>>> route is coming from. You will want to use something on your management
>>>>>>>>>> net for internal DNS, or something other than that router.
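>>>>>>>>>> 
>>>>>>>>>> Purely as an illustration (using the addresses from this thread), this
>>>>>>>>>> kind of mismatch is easy to spot up front; the internal DNS entry is the
>>>>>>>>>> one that gets flagged here:
>>>>>>>>>> 
>>>>>>>>>>     # Illustrative sketch: checks whether each gateway/DNS address sits
>>>>>>>>>>     # on the network it is configured for.
>>>>>>>>>>     import ipaddress
>>>>>>>>>> 
>>>>>>>>>>     networks = {
>>>>>>>>>>         "management": ipaddress.ip_network("192.168.56.0/24"),
>>>>>>>>>>         "guest": ipaddress.ip_network("10.0.3.0/24"),
>>>>>>>>>>     }
>>>>>>>>>>     settings = [
>>>>>>>>>>         ("management gateway", "192.168.56.1", "management"),
>>>>>>>>>>         ("internal DNS",       "10.0.3.2",     "management"),  # the problem entry
>>>>>>>>>>         ("guest gateway",      "10.0.3.2",     "guest"),
>>>>>>>>>>     ]
>>>>>>>>>>     for name, addr, net in settings:
>>>>>>>>>>         on_net = ipaddress.ip_address(addr) in networks[net]
>>>>>>>>>>         print(name, addr, "ok" if on_net else "NOT on the " + net + " network")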
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>>> On Wed, Dec 5, 2012 at 11:59 AM, John Burwell <jburwell@basho.com> wrote:
>>>>>>>>>> 
>>>>>>>>>>> Anthony,
>>>>>>>>>>> 
>>>>>>>>>>> I apologize for forgetting to respond to the part of your answer
>>>>>>>>>>> addressing the first part of the question.  I had set the
>>>>>>>>>>> management.network.cidr and host global settings to 192.168.0.0/24 and
>>>>>>>>>>> 192.168.56.18, respectively.  Please see the zone1.devcloud.cfg Marvin
>>>>>>>>>>> configuration attached to my original email for the actual settings, as
>>>>>>>>>>> well as the network configurations used when this problem occurs.
>>>>>>>>>>> 
>>>>>>>>>>> Thanks,
>>>>>>>>>>> -John
>>>>>>>>>>> 
>>>>>>>>>>>> On Dec 5, 2012, at 12:46 PM, Anthony Xu <Xuefei.Xu@citrix.com> wrote:
>>>>>>>>>>> 
>>>>>>>>>>>> Hi John,
>>>>>>>>>>>> 
>>>>>>>>>>>> Try the following:
>>>>>>>>>>>> 
>>>>>>>>>>>> Set the global configuration management.network.cidr to your management
>>>>>>>>>>>> server CIDR; if this configuration is not available in the UI, you can
>>>>>>>>>>>> change it in the DB directly.
>>>>>>>>>>>> 
>>>>>>>>>>>> Restart management,
>>>>>>>>>>>> Stop/Start SSVM and CPVM.
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>> And could you post "cat /proc/cmdline" from the SSVM?
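>>>>>>>>>>>> 
>>>>>>>>>>>> If you prefer not to touch the DB, a rough sketch of setting it through
>>>>>>>>>>>> the API instead (this assumes the unauthenticated integration port 8096
>>>>>>>>>>>> is enabled; the address and CIDR below are placeholders for your setup):
>>>>>>>>>>>> 
>>>>>>>>>>>>     # Hypothetical example; adjust the host and value to your environment.
>>>>>>>>>>>>     import requests
>>>>>>>>>>>> 
>>>>>>>>>>>>     resp = requests.get("http://192.168.56.15:8096/client/api", params={
>>>>>>>>>>>>         "command": "updateConfiguration",
>>>>>>>>>>>>         "name": "management.network.cidr",
>>>>>>>>>>>>         "value": "192.168.56.0/24",
>>>>>>>>>>>>         "response": "json",
>>>>>>>>>>>>     })
>>>>>>>>>>>>     resp.raise_for_status()
>>>>>>>>>>>>     print(resp.json())
>>>>>>>>>>>>     # Then restart the management server and stop/start the SSVM and CPVM
>>>>>>>>>>>>     # as described above.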
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>> Anthony
>>>>>>>>>>>> 
>>>>>>>>>>>>> -----Original Message-----
>>>>>>>>>>>>> From: John Burwell [mailto:jburwell@basho.com]
>>>>>>>>>>>>> Sent: Wednesday, December 05, 2012 9:11 AM
>>>>>>>>>>>>> To: cloudstack-dev@incubator.apache.org
>>>>>>>>>>>>> Subject: Re: SSVM Network Configuration Issue
>>>>>>>>>>>>> 
>>>>>>>>>>>>> All,
>>>>>>>>>>>>> 
>>>>>>>>>>>>> I was wondering if anyone else is experiencing this problem when using
>>>>>>>>>>>>> secondary storage on a devcloud-style VM with a host-only and NAT
>>>>>>>>>>>>> adapter.  One aspect of this issue that seems interesting is the
>>>>>>>>>>>>> following route table from the SSVM:
>>>>>>>>>>>>> 
>>>>>>>>>>>>> root@s-5-TEST:~# route
>>>>>>>>>>>>> Kernel IP routing table
>>>>>>>>>>>>> Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
>>>>>>>>>>>>> 10.0.3.2        192.168.56.1    255.255.255.255 UGH   0      0        0 eth1
>>>>>>>>>>>>> 10.0.3.0        *               255.255.255.0   U     0      0        0 eth2
>>>>>>>>>>>>> 192.168.56.0    *               255.255.255.0   U     0      0        0 eth1
>>>>>>>>>>>>> 192.168.56.0    *               255.255.255.0   U     0      0        0 eth3
>>>>>>>>>>>>> link-local      *               255.255.0.0     U     0      0        0 eth0
>>>>>>>>>>>>> default         10.0.3.2        0.0.0.0         UG    0      0        0 eth2
>>>>>>>>>>>>> 
>>>>>>>>>>>>> In particular, the gateways for the management and guest networks do
>>>>>>>>>>>>> not match the configuration provided to the management server (i.e.
>>>>>>>>>>>>> 10.0.3.2 is the gateway for the 10.0.3.0/24 network and 192.168.56.1
>>>>>>>>>>>>> is the gateway for the 192.168.56.0/24 network).  With this
>>>>>>>>>>>>> configuration, the SSVM has a socket connection to the management
>>>>>>>>>>>>> server, but is in alert state.  Finally, when I remove the host-only
>>>>>>>>>>>>> NIC and use only a NAT adapter, the SSVM's networking works as
>>>>>>>>>>>>> expected, leading me to believe that the segregated network
>>>>>>>>>>>>> configuration is at the root of the problem.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Until I can get the networking on the SSVM configured, I am unable to
>>>>>>>>>>>>> complete the testing of the S3-backed Secondary Storage enhancement.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Thank you for your help,
>>>>>>>>>>>>> -John
>>>>>>>>>>>>> 
>>>>>>>>>>>>> On Dec 3, 2012, at 4:46 PM, John Burwell <jburwell@basho.com> wrote:
>>>>>>>>>>>>> 
>>>>>>>>>>>>>> All,
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> I am setting up a multi-zone devcloud configuration on VirtualBox
>>>>>>>>>>>>>> 4.2.4 using Ubuntu 12.04.1 and Xen 4.1.  I have configured the base
>>>>>>>>>>>>>> management server VM (zone1) to serve as both zone1 and the
>>>>>>>>>>>>>> management server (running MySQL), with eth0 as a host-only adapter
>>>>>>>>>>>>>> and a static IP of 192.168.56.15, and eth1 as a NAT adapter (see the
>>>>>>>>>>>>>> attached zone1-interfaces file for the exact network configuration on
>>>>>>>>>>>>>> the VM).  The management and guest networks are configured as follows:
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Zone 1
>>>>>>>>>>>>>>   Management: 192.168.56.100-149 gw 192.168.56.1 dns 10.0.3.2 (?)
>>>>>>>>>>>>>>   Guest: 10.0.3.200-10.0.3.220 gw 10.0.3.2 dns 8.8.8.8
>>>>>>>>>>>>>> Zone 2
>>>>>>>>>>>>>>   Management: 192.168.56.150-200 gw 192.168.56.1 dns 10.0.3.2 (?)
>>>>>>>>>>>>>>   Guest: 10.0.3.221-240 gw 10.0.3.2 dns 8.8.8.8
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> The management server deploys and starts without error.  I then
>>>>>>>>>>>>>> populate the configuration using the attached Marvin configuration
>>>>>>>>>>>>>> file (zone1.devcloud.cfg) and restart the management server in order
>>>>>>>>>>>>>> to allow the global configuration option changes to take effect.
>>>>>>>>>>>>>> Following the restart, the CPVM and SSVM start without error.
>>>>>>>>>>>>>> Unfortunately, they drop into alert status, and the SSVM is unable to
>>>>>>>>>>>>>> connect outbound through the guest network (very important for my
>>>>>>>>>>>>>> tests because I am testing S3-backed secondary storage).
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> From the diagnostic checks I have performed on the management server
>>>>>>>>>>>>>> and the SSVM, it appears that the daemon on the SSVM is connecting
>>>>>>>>>>>>>> back to the management server.  I have attached a set of diagnostic
>>>>>>>>>>>>>> information from the management server (mgmtsvr-zone1-diagnostics.log)
>>>>>>>>>>>>>> and the SSVM (ssvm-zone1-diagnostics.log) that includes the results of
>>>>>>>>>>>>>> ifconfig, route, netstat, and ping checks, as well as other
>>>>>>>>>>>>>> information (e.g. the contents of /var/cache/cloud/cmdline on the
>>>>>>>>>>>>>> SSVM).  Finally, I have attached the vmops log from the management
>>>>>>>>>>>>>> server (vmops-zone1.log).
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> What changes need to be made to the management server configuration
>>>>>>>>>>>>>> in order to start up an SSVM that can communicate with the secondary
>>>>>>>>>>>>>> storage NFS volumes and the management server, and connect to hosts
>>>>>>>>>>>>>> on the Internet?
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Thanks for your help,
>>>>>>>>>>>>>> -John
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> <ssvm-zone1-diagnostics.log>
>>>>>>>>>>>>>> <vmops-zone1.tar.gz>
>>>>>>>>>>>>>> <mgmtsvr-zone1-diagnostics.log>
>>>>>>>>>>>>>> <zone1-interfaces>
>>>>>>>>>>>>>> <zone1.devcloud.cfg>
>>>>> 
>>>>> 
> 

