From: John Burwell <jburwell@basho.com>
Date: Mon, 17 Dec 2012 12:01:46 -0500
Subject: Re: SSVM Network Configuration Issue
To: cloudstack-dev@incubator.apache.org

Prasanna,

I applied the changes suggested below, and the host now fails to start up properly with the following error in the log:

2012-12-17 08:59:56,408 WARN  [cloud.resource.ResourceManagerImpl] (AgentTaskPool-2:null) Unable to connect due to
com.cloud.exception.ConnectionException: Incorrect Network setup on agent, Reinitialize agent after network names are setup, details
: For Physical Network id:200, Guest Network is not configured on the backend by name cloud-guest
        at com.cloud.network.NetworkManagerImpl.processConnect(NetworkManagerImpl.java:6656)
        at com.cloud.agent.manager.AgentManagerImpl.notifyMonitorsOfConnection(AgentManagerImpl.java:611)
        at com.cloud.agent.manager.AgentManagerImpl.handleDirectConnectAgent(AgentManagerImpl.java:1502)
        at com.cloud.resource.ResourceManagerImpl.createHostAndAgent(ResourceManagerImpl.java:1648)
        at com.cloud.resource.ResourceManagerImpl.createHostAndAgent(ResourceManagerImpl.java:1685)
        at com.cloud.agent.manager.AgentManagerImpl$SimulateStartTask.run(AgentManagerImpl.java:1152)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
        at java.lang.Thread.run(Thread.java:679)
2012-12-17 08:59:56,408 DEBUG [cloud.host.Status] (AgentTaskPool-2:null) Transition:[Resource state = Enabled, Agent event = AgentDisconnected, Host id = 1, name = devcloud]

I have also attached a copy of my Marvin configuration for your reference.

Thanks for your help,
-John

On Dec 15, 2012, at 9:45 PM, Prasanna Santhanam wrote:

> Traffic types carry label information on the physical NIC and the
> label that is associated with it. It is via the traffic label that you
> tell CloudStack about the labels you've given to the hypervisor's
> interfaces.
>
> Your devcloud.cfg will have to be altered for this.
>
> After altering, it should look something like:
>
> .... SNIP ....
> "traffictypes": [
>     {
>         "xen": "cloud-guest",
>         "typ": "Guest"
>     },
>     {
>         "typ": "Management",
>         "xen": "cloud-mgmt"
>     },
> ],
> .... SNIP ....
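Prasanna's fragment maps each CloudStack traffic type to the network label configured on the hypervisor. As a minimal sketch, the same mapping can be built and emitted programmatically when templating a devcloud.cfg (the surrounding file layout is not shown here; only the "traffictypes" fragment below mirrors the snippet above):

```python
import json

# Traffic-type -> hypervisor-label mapping from the snippet above:
# "typ" names the CloudStack traffic type, "xen" names the XenServer
# network label CloudStack should look for on the backend.
traffictypes = [
    {"typ": "Guest",      "xen": "cloud-guest"},
    {"typ": "Management", "xen": "cloud-mgmt"},
]

# Emit the fragment as it would appear inside devcloud.cfg.
print(json.dumps({"traffictypes": traffictypes}, indent=4))
```

The error in the log above ("Guest Network is not configured on the backend by name cloud-guest") indicates the other half of the contract: a network with that label must also exist on the Xen host itself.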
>
> An example is available under
> incubator-cloudstack/tools/marvin/marvin/configGenerator.py
>
> Look for the method: describe_setup_in_eip_mode()
>
> HTH
> --
> Prasanna.,
>
> On Sun, Dec 16, 2012 at 02:55:30AM +0530, John Burwell wrote:
>> Rohit,
>>
>> As I stated below, I know which VIF->PIF->device Xen mapping needs
>> to be used for each network. My question is how do I configure
>> CloudStack with that information.
>>
>> Thanks for your help,
>> -John
>>
>> On Dec 15, 2012, at 2:51 PM, Rohit Yadav wrote:
>>
>>> About Xen bridging, these can help:
>>>
>>> brctl show (bridge-to-VIF mappings)
>>> ip addr show xenbr0 (bridge-specific info)
>>> brctl showmacs br0 (bridge MAC mappings)
>>>
>>> Wiki:
>>> http://wiki.xen.org/wiki/Xen_FAQ_Networking
>>> http://wiki.xen.org/wiki/XenNetworking
>>>
>>> Regards.
>>>
>>> ________________________________________
>>> From: John Burwell [jburwell@basho.com]
>>> Sent: Saturday, December 15, 2012 8:35 PM
>>> To: cloudstack-dev@incubator.apache.org
>>> Subject: Re: SSVM Network Configuration Issue
>>>
>>> Marcus,
>>>
>>> That's what I thought. The Xen physical bridge names are xenbr0 (to
>>> eth0) and xenbr1 (to eth1). Using basic network configuration, I set
>>> the Xen network traffic labels for each to the appropriate bridge
>>> device name. I receive errors regarding an invalid network device when
>>> it attempts to create a VM. Does anyone else know how to determine the
>>> mapping of physical devices to CloudStack Xen network traffic labels?
>>>
>>> Thanks,
>>> -John
>>>
>>> On Dec 15, 2012, at 1:20 AM, Marcus Sorensen wrote:
>>>
>>>> VLANs in advanced/KVM should only be required for the guest networks.
>>>> If I create a bridge on physical eth0, name it 'br0', and a bridge on
>>>> physical eth1, naming it 'br1', and then set my management network to
>>>> label 'br0', my public network to 'br1', and my guest network to 'br1',
>>>> it should use the bridges you asked for when connecting the system VMs
>>>> for the specified traffic. I'd just leave the 'vlan' blank when
>>>> specifying public and pod (management) IPs. In this scenario, the only
>>>> place you need to enter VLANs is on the guest, and it should create new
>>>> tagged interfaces/bridges on eth1 (per your label of br1) as new guest
>>>> networks are brought online. This is how my dev VMs are usually set up.
>>>>
>>>> On Thu, Dec 6, 2012 at 11:03 AM, John Burwell wrote:
>>>>
>>>>> Marcus,
>>>>>
>>>>> My question, more specifically, is: are VLANs required to implement
>>>>> traffic labels? Also, can traffic labels be configured in Basic
>>>>> networking mode, or do I need to switch my configuration to Advanced?
>>>>>
>>>>> I am not disagreeing on how DNS servers should be associated with
>>>>> interfaces, nor do I think a network operator should be required to
>>>>> make any upstream router configuration changes. I am simply saying
>>>>> that CloudStack should not make assumptions about the gateways that
>>>>> have been specified. The behavior I experienced of CloudStack
>>>>> attempting to "correct" my configuration by injecting another route
>>>>> fails the rule of least surprise and is based on incomplete knowledge.
>>>>> In my opinion, CloudStack (or any system of its ilk) should faithfully
>>>>> (or slavishly) realize the routes on the system VM as specified. If
>>>>> the configuration is incorrect, networking will fail in an expected
>>>>> manner, and the operator can adjust their environment as necessary.
>>>>> Otherwise, there is an upstream router configuration to which
>>>>> CloudStack has no visibility, but with which it is completely
>>>>> compatible. Essentially, I am asking CloudStack to do less, assume I
>>>>> know what I am doing, and break in a manner consistent with other
>>>>> network applications.
>>>>>
>>>>> Thanks,
>>>>> -John
>>>>>
>>>>> On Dec 6, 2012, at 12:30 PM, Marcus Sorensen wrote:
>>>>>
>>>>>> Traffic labels essentially tell the system which physical network to
>>>>>> use. So if you've allocated a VLAN for a specific traffic type, it
>>>>>> will first look at the tag associated with that traffic type, figure
>>>>>> out which physical interface goes with that, and then create a tagged
>>>>>> interface and bridge also on that physical.
>>>>>>
>>>>>> I guess we'll just have to disagree; I think the current behavior
>>>>>> makes total sense. To me, internal DNS should always use the
>>>>>> management interface, since it's internally facing. There's no sane
>>>>>> way to do that other than a static route on the system VM (it seems
>>>>>> you're suggesting that the network operator force something like this
>>>>>> on the upstream router, which seems really strange: requiring
>>>>>> everyone to create static routes on their public network to force
>>>>>> specific IPs back into their internal networks. Correct me if I have
>>>>>> the wrong impression). CloudStack is doing exactly what you tell it
>>>>>> to. You told it that 10.0.3.2 should be accessible via your internal
>>>>>> network by setting it as your internal DNS. The fact that a broken
>>>>>> config doesn't work isn't CloudStack's fault.
>>>>>>
>>>>>> Note that internal DNS is just the default for the SSVM; public DNS
>>>>>> is still offered as a backup. So had you not said that 10.0.3.2 was
>>>>>> available on your internal network (perhaps by offering a dummy
>>>>>> internal DNS address, or 192.168.56.1), lookups would fall back to
>>>>>> public and everything would work as expected as well.
>>>>>>
>>>>>> There is also a global config called 'use.external.dns', but after
>>>>>> setting this, restarting the management server, and recreating the
>>>>>> system VMs, I don't see a noticeable difference in any of this. So
>>>>>> perhaps that would solve your issue as well, but it's either broken
>>>>>> or doesn't do what I thought it would.
>>>>>>
>>>>>> On Thu, Dec 6, 2012 at 8:39 AM, John Burwell wrote:
>>>>>>
>>>>>>> Marcus,
>>>>>>>
>>>>>>> Are traffic labels independent of VLANs? I ask because my current
>>>>>>> XCP network configuration is bridged, and I am not using Open
>>>>>>> vSwitch.
>>>>>>>
>>>>>>> I disagree on the routing issue. CloudStack should do what it's told
>>>>>>> because it does not have insight into or control of the
>>>>>>> configuration of the routes in the layers beneath it. If CloudStack
>>>>>>> simply did as it was told, it would fail as expected in a typical
>>>>>>> networking environment while preserving the flexibility of
>>>>>>> configuration expected by a network engineer.
>>>>>>>
>>>>>>> Thanks,
>>>>>>> -John
>>>>>>>
>>>>>>> On Dec 6, 2012, at 10:35 AM, Marcus Sorensen wrote:
>>>>>>>
>>>>>>>> I can't really tell you for Xen, although it might be similar to
>>>>>>>> KVM. During setup I would set a traffic label matching the name of
>>>>>>>> the bridge; for example, if my public interface were eth0 and the
>>>>>>>> bridge I had set up was br0, I'd go to the zone network settings,
>>>>>>>> find public traffic, and set a label on it of "br0".
>>>>>>>> Maybe someone more familiar with the Xen setup can help.
>>>>>>>>
>>>>>>>> On the DNS, it makes sense from the perspective that the SSVM has
>>>>>>>> access to your internal networks, thus it uses your internal DNS.
>>>>>>>> Its default gateway is public. So if I have a DNS server on an
>>>>>>>> internal network at 10.30.20.10/24, and my management network on
>>>>>>>> 192.168.10.0/24, this route has to be set in order for the DNS
>>>>>>>> server to be reachable. You would under normal circumstances not
>>>>>>>> want to use a DNS server on the public net as your internal DNS
>>>>>>>> setting anyway, although I agree that the route insertion should
>>>>>>>> have a bit more sanity checking and not set a static route to your
>>>>>>>> default gateway.
>>>>>>>>
>>>>>>>> On Dec 6, 2012 6:31 AM, "John Burwell" wrote:
>>>>>>>>
>>>>>>>>> Marcus,
>>>>>>>>>
>>>>>>>>> I set up a small PowerDNS recursor on 192.168.56.15, configured
>>>>>>>>> the DNS for the management network to use it, and the route table
>>>>>>>>> in the SSVM is now correct. However, this behavior does not seem
>>>>>>>>> correct. At a minimum, it violates the rule of least surprise.
>>>>>>>>> CloudStack shouldn't be adding gateways that are not configured.
>>>>>>>>> Therefore, I have entered a defect[1] to remove the behavior.
>>>>>>>>>
>>>>>>>>> With the route table fixed, I am now experiencing a new problem.
>>>>>>>>> The external NIC (10.0.3.0/24) on the SSVM is being connected to
>>>>>>>>> the internal NIC (192.168.56.0/24) on the host. The host-only
>>>>>>>>> network (192.168.56.15) is configured on xenbr0 and the NAT
>>>>>>>>> network is configured on xenbr1.
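Marcus's point that the route insertion "should have a bit more sanity checking" can be made concrete. The following is an illustrative sketch of such a check, not CloudStack code: the helper name and the third example gateway (1.2.3.4) are hypothetical, while the John/Marcus addresses come from the thread.

```python
import ipaddress

def dns_route_needed(internal_dns: str, mgmt_cidr: str, default_gw: str) -> bool:
    """Hypothetical sanity check: only pin a host route for the internal
    DNS server over the management network if the DNS address is neither
    on the management subnet already (no extra route needed) nor the
    default gateway itself (the inserted route would shadow the default
    route, which is what happened in this thread)."""
    dns = ipaddress.ip_address(internal_dns)
    if dns in ipaddress.ip_network(mgmt_cidr):
        return False  # directly reachable on the management net
    if dns == ipaddress.ip_address(default_gw):
        return False  # would insert a static route to the default gateway
    return True

# John's setup: internal DNS 10.0.3.2 *is* the NAT default gateway,
# so no host route via the management net should be added.
print(dns_route_needed("10.0.3.2", "192.168.56.0/24", "10.0.3.2"))    # False
# Marcus's example: DNS on 10.30.20.10, management on 192.168.10.0/24,
# some unrelated default gateway: here the pinned route is justified.
print(dns_route_needed("10.30.20.10", "192.168.10.0/24", "1.2.3.4"))  # True
```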
>>>>>>>>> As a reference, the following is the contents of the
>>>>>>>>> /etc/network/interfaces file and ifconfig from the devcloud host:
>>>>>>>>>
>>>>>>>>> root@zone1:/opt/cloudstack/apache-tomcat-6.0.32/bin# cat /etc/network/interfaces
>>>>>>>>> # The loopback network interface
>>>>>>>>> auto lo
>>>>>>>>> iface lo inet loopback
>>>>>>>>>
>>>>>>>>> auto eth0
>>>>>>>>> iface eth0 inet manual
>>>>>>>>>
>>>>>>>>> allow-hotplug eth1
>>>>>>>>> iface eth1 inet manual
>>>>>>>>>
>>>>>>>>> # The primary network interface
>>>>>>>>> auto xenbr0
>>>>>>>>> iface xenbr0 inet static
>>>>>>>>>     address 192.168.56.15
>>>>>>>>>     netmask 255.255.255.0
>>>>>>>>>     network 192.168.56.0
>>>>>>>>>     broadcast 192.168.56.255
>>>>>>>>>     dns_nameserver 192.168.56.15
>>>>>>>>>     bridge_ports eth0
>>>>>>>>>
>>>>>>>>> auto xenbr1
>>>>>>>>> iface xenbr1 inet dhcp
>>>>>>>>>     bridge_ports eth1
>>>>>>>>>     dns_nameserver 8.8.8.8 8.8.4.4
>>>>>>>>>     post-up route add default gw 10.0.3.2
>>>>>>>>>
>>>>>>>>> root@zone1:/opt/cloudstack/apache-tomcat-6.0.32/bin# ifconfig
>>>>>>>>> eth0      Link encap:Ethernet  HWaddr 08:00:27:7e:74:9c
>>>>>>>>>           inet6 addr: fe80::a00:27ff:fe7e:749c/64 Scope:Link
>>>>>>>>>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>>>>>>>>>           RX packets:777 errors:0 dropped:0 overruns:0 frame:0
>>>>>>>>>           TX packets:188 errors:0 dropped:0 overruns:0 carrier:0
>>>>>>>>>           collisions:0 txqueuelen:1000
>>>>>>>>>           RX bytes:109977 (109.9 KB)  TX bytes:11900 (11.9 KB)
>>>>>>>>>
>>>>>>>>> eth1      Link encap:Ethernet  HWaddr 08:00:27:df:00:00
>>>>>>>>>           inet6 addr: fe80::a00:27ff:fedf:0/64 Scope:Link
>>>>>>>>>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>>>>>>>>>           RX packets:4129 errors:0 dropped:0 overruns:0 frame:0
>>>>>>>>>           TX packets:3910 errors:0 dropped:0 overruns:0 carrier:0
>>>>>>>>>           collisions:0 txqueuelen:1000
>>>>>>>>>           RX bytes:478719 (478.7 KB)  TX bytes:2542459 (2.5 MB)
>>>>>>>>>
>>>>>>>>> lo        Link encap:Local Loopback
>>>>>>>>>           inet addr:127.0.0.1  Mask:255.0.0.0
>>>>>>>>>           inet6 addr: ::1/128 Scope:Host
>>>>>>>>>           UP LOOPBACK RUNNING  MTU:16436  Metric:1
>>>>>>>>>           RX packets:360285 errors:0 dropped:0 overruns:0 frame:0
>>>>>>>>>           TX packets:360285 errors:0 dropped:0 overruns:0 carrier:0
>>>>>>>>>           collisions:0 txqueuelen:0
>>>>>>>>>           RX bytes:169128181 (169.1 MB)  TX bytes:169128181 (169.1 MB)
>>>>>>>>>
>>>>>>>>> vif1.0    Link encap:Ethernet  HWaddr fe:ff:ff:ff:ff:ff
>>>>>>>>>           inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
>>>>>>>>>           UP BROADCAST RUNNING NOARP PROMISC  MTU:1500  Metric:1
>>>>>>>>>           RX packets:6 errors:0 dropped:0 overruns:0 frame:0
>>>>>>>>>           TX packets:152 errors:0 dropped:0 overruns:0 carrier:0
>>>>>>>>>           collisions:0 txqueuelen:32
>>>>>>>>>           RX bytes:292 (292.0 B)  TX bytes:9252 (9.2 KB)
>>>>>>>>>
>>>>>>>>> vif1.1    Link encap:Ethernet  HWaddr fe:ff:ff:ff:ff:ff
>>>>>>>>>           inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
>>>>>>>>>           UP BROADCAST RUNNING NOARP PROMISC  MTU:1500  Metric:1
>>>>>>>>>           RX packets:566 errors:0 dropped:0 overruns:0 frame:0
>>>>>>>>>           TX packets:1405 errors:0 dropped:0 overruns:0 carrier:0
>>>>>>>>>           collisions:0 txqueuelen:32
>>>>>>>>>           RX bytes:44227 (44.2 KB)  TX bytes:173995 (173.9 KB)
>>>>>>>>>
>>>>>>>>> vif1.2    Link encap:Ethernet  HWaddr fe:ff:ff:ff:ff:ff
>>>>>>>>>           inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
>>>>>>>>>           UP BROADCAST RUNNING NOARP PROMISC  MTU:1500  Metric:1
>>>>>>>>>           RX packets:3 errors:0 dropped:0 overruns:0 frame:0
>>>>>>>>>           TX packets:838 errors:0 dropped:0 overruns:0 carrier:0
>>>>>>>>>           collisions:0 txqueuelen:32
>>>>>>>>>           RX bytes:84 (84.0 B)  TX bytes:111361 (111.3 KB)
>>>>>>>>>
>>>>>>>>> vif4.0    Link encap:Ethernet  HWaddr fe:ff:ff:ff:ff:ff
>>>>>>>>>           inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
>>>>>>>>>           UP BROADCAST RUNNING NOARP PROMISC  MTU:1500  Metric:1
>>>>>>>>>           RX packets:64 errors:0 dropped:0 overruns:0 frame:0
>>>>>>>>>           TX packets:197 errors:0 dropped:0 overruns:0 carrier:0
>>>>>>>>>           collisions:0 txqueuelen:32
>>>>>>>>>           RX bytes:10276 (10.2 KB)  TX bytes:18453 (18.4 KB)
>>>>>>>>>
>>>>>>>>> vif4.1    Link encap:Ethernet  HWaddr fe:ff:ff:ff:ff:ff
>>>>>>>>>           inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
>>>>>>>>>           UP BROADCAST RUNNING NOARP PROMISC  MTU:1500  Metric:1
>>>>>>>>>           RX packets:2051 errors:0 dropped:0 overruns:0 frame:0
>>>>>>>>>           TX packets:2446 errors:0 dropped:0 overruns:0 carrier:0
>>>>>>>>>           collisions:0 txqueuelen:32
>>>>>>>>>           RX bytes:233914 (233.9 KB)  TX bytes:364243 (364.2 KB)
>>>>>>>>>
>>>>>>>>> vif4.2    Link encap:Ethernet  HWaddr fe:ff:ff:ff:ff:ff
>>>>>>>>>           inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
>>>>>>>>>           UP BROADCAST RUNNING NOARP PROMISC  MTU:1500  Metric:1
>>>>>>>>>           RX packets:3 errors:0 dropped:0 overruns:0 frame:0
>>>>>>>>>           TX packets:582 errors:0 dropped:0 overruns:0 carrier:0
>>>>>>>>>           collisions:0 txqueuelen:32
>>>>>>>>>           RX bytes:84 (84.0 B)  TX bytes:74700 (74.7 KB)
>>>>>>>>>
>>>>>>>>> vif4.3    Link encap:Ethernet  HWaddr fe:ff:ff:ff:ff:ff
>>>>>>>>>           inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
>>>>>>>>>           UP BROADCAST RUNNING NOARP PROMISC  MTU:1500  Metric:1
>>>>>>>>>           RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>>>>>>>>>           TX packets:585 errors:0 dropped:0 overruns:0 carrier:0
>>>>>>>>>           collisions:0 txqueuelen:32
>>>>>>>>>           RX bytes:0 (0.0 B)  TX bytes:74826 (74.8 KB)
>>>>>>>>>
>>>>>>>>> xapi0     Link encap:Ethernet  HWaddr fe:ff:ff:ff:ff:ff
>>>>>>>>>           inet addr:169.254.0.1  Bcast:169.254.255.255  Mask:255.255.0.0
>>>>>>>>>           inet6 addr: fe80::c870:1aff:fec2:22b/64 Scope:Link
>>>>>>>>>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>>>>>>>>>           RX packets:568 errors:0 dropped:0 overruns:0 frame:0
>>>>>>>>>           TX packets:1132 errors:0 dropped:0 overruns:0 carrier:0
>>>>>>>>>           collisions:0 txqueuelen:0
>>>>>>>>>           RX bytes:76284 (76.2 KB)  TX bytes:109085 (109.0 KB)
>>>>>>>>>
>>>>>>>>> xenbr0    Link encap:Ethernet  HWaddr 08:00:27:7e:74:9c
>>>>>>>>>           inet addr:192.168.56.15  Bcast:192.168.56.255  Mask:255.255.255.0
>>>>>>>>>           inet6 addr: fe80::a00:27ff:fe7e:749c/64 Scope:Link
>>>>>>>>>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>>>>>>>>>           RX packets:4162 errors:0 dropped:0 overruns:0 frame:0
>>>>>>>>>           TX packets:3281 errors:0 dropped:0 overruns:0 carrier:0
>>>>>>>>>           collisions:0 txqueuelen:0
>>>>>>>>>           RX bytes:469199 (469.1 KB)  TX bytes:485688 (485.6 KB)
>>>>>>>>>
>>>>>>>>> xenbr1    Link encap:Ethernet  HWaddr 08:00:27:df:00:00
>>>>>>>>>           inet addr:10.0.3.15  Bcast:10.0.3.255  Mask:255.255.255.0
>>>>>>>>>           inet6 addr: fe80::a00:27ff:fedf:0/64 Scope:Link
>>>>>>>>>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>>>>>>>>>           RX packets:4129 errors:0 dropped:0 overruns:0 frame:0
>>>>>>>>>           TX packets:3114 errors:0 dropped:0 overruns:0 carrier:0
>>>>>>>>>           collisions:0 txqueuelen:0
>>>>>>>>>           RX bytes:404327 (404.3 KB)  TX bytes:2501443 (2.5 MB)
>>>>>>>>>
>>>>>>>>> These physical NICs on the host translate to the following Xen PIFs:
>>>>>>>>>
>>>>>>>>> root@zone1:/opt/cloudstack/apache-tomcat-6.0.32/bin# xe pif-list
>>>>>>>>> uuid ( RO)                  : 207413c9-5058-7a40-6c96-2dab21057f30
>>>>>>>>>         device ( RO): eth1
>>>>>>>>>         currently-attached ( RO): true
>>>>>>>>>         VLAN ( RO): -1
>>>>>>>>>         network-uuid ( RO): 1679ddb1-5a21-b827-ab07-c16275d5ce72
>>>>>>>>>
>>>>>>>>> uuid ( RO)                  : c0274787-e768-506f-3191-f0ac17b0c72b
>>>>>>>>>         device ( RO): eth0
>>>>>>>>>         currently-attached ( RO): true
>>>>>>>>>         VLAN ( RO): -1
>>>>>>>>>         network-uuid ( RO): 8ee927b1-a35d-ac10-4471-d7a6a475839a
>>>>>>>>>
>>>>>>>>> The following is the ifconfig from the SSVM:
>>>>>>>>>
>>>>>>>>> root@s-5-TEST:~# ifconfig
>>>>>>>>> eth0      Link encap:Ethernet  HWaddr 0e:00:a9:fe:03:8b
>>>>>>>>>           inet addr:169.254.3.139  Bcast:169.254.255.255  Mask:255.255.0.0
>>>>>>>>>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>>>>>>>>>           RX packets:235 errors:0 dropped:0 overruns:0 frame:0
>>>>>>>>>           TX packets:92 errors:0 dropped:0 overruns:0 carrier:0
>>>>>>>>>           collisions:0 txqueuelen:1000
>>>>>>>>>           RX bytes:21966 (21.4 KiB)  TX bytes:16404 (16.0 KiB)
>>>>>>>>>           Interrupt:8
>>>>>>>>>
>>>>>>>>> eth1      Link encap:Ethernet  HWaddr 06:bc:62:00:00:05
>>>>>>>>>           inet addr:192.168.56.104  Bcast:192.168.56.255  Mask:255.255.255.0
>>>>>>>>>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>>>>>>>>>           RX packets:2532 errors:0 dropped:0 overruns:0 frame:0
>>>>>>>>>           TX packets:2127 errors:0 dropped:0 overruns:0 carrier:0
>>>>>>>>>           collisions:0 txqueuelen:1000
>>>>>>>>>           RX bytes:341242 (333.2 KiB)  TX bytes:272183 (265.8 KiB)
>>>>>>>>>           Interrupt:10
>>>>>>>>>
>>>>>>>>> eth2      Link encap:Ethernet  HWaddr 06:12:72:00:00:37
>>>>>>>>>           inet addr:10.0.3.204  Bcast:10.0.3.255  Mask:255.255.255.0
>>>>>>>>>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>>>>>>>>>           RX packets:600 errors:0 dropped:0 overruns:0 frame:0
>>>>>>>>>           TX packets:3 errors:0 dropped:0 overruns:0 carrier:0
>>>>>>>>>           collisions:0 txqueuelen:1000
>>>>>>>>>           RX bytes:68648 (67.0 KiB)  TX bytes:126 (126.0 B)
>>>>>>>>>           Interrupt:11
>>>>>>>>>
>>>>>>>>> eth3      Link encap:Ethernet  HWaddr 06:25:e2:00:00:15
>>>>>>>>>           inet addr:192.168.56.120  Bcast:192.168.56.255  Mask:255.255.255.0
>>>>>>>>>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>>>>>>>>>           RX packets:603 errors:0 dropped:0 overruns:0 frame:0
>>>>>>>>>           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>>>>>>>>>           collisions:0 txqueuelen:1000
>>>>>>>>>           RX bytes:68732 (67.1 KiB)  TX bytes:0 (0.0 B)
>>>>>>>>>           Interrupt:12
>>>>>>>>>
>>>>>>>>> lo        Link encap:Local Loopback
>>>>>>>>>           inet addr:127.0.0.1  Mask:255.0.0.0
>>>>>>>>>           UP LOOPBACK RUNNING  MTU:16436  Metric:1
>>>>>>>>>           RX packets:61 errors:0 dropped:0 overruns:0 frame:0
>>>>>>>>>           TX packets:61 errors:0 dropped:0 overruns:0 carrier:0
>>>>>>>>>           collisions:0 txqueuelen:0
>>>>>>>>>           RX bytes:5300 (5.1 KiB)  TX bytes:5300 (5.1 KiB)
>>>>>>>>>
>>>>>>>>> Finally, the following are the vif params for the eth2 device on
>>>>>>>>> the SSVM depicting its connection to eth0 instead of eth1:
>>>>>>>>> root@zone1:/opt/cloudstack/apache-tomcat-6.0.32/bin# xe vif-param-list uuid=be44bb30-5700-b461-760e-10fe93079210
>>>>>>>>> uuid ( RO)                        : be44bb30-5700-b461-760e-10fe93079210
>>>>>>>>>              vm-uuid ( RO): 7958d91f-e52d-a25d-718c-7f831ae701d7
>>>>>>>>>        vm-name-label ( RO): s-5-TEST
>>>>>>>>>   allowed-operations (SRO): attach; unplug_force; unplug
>>>>>>>>>   current-operations (SRO):
>>>>>>>>>               device ( RO): 2
>>>>>>>>>                  MAC ( RO): 06:12:72:00:00:37
>>>>>>>>>    MAC-autogenerated ( RO): false
>>>>>>>>>                  MTU ( RO): 1500
>>>>>>>>>   currently-attached ( RO): true
>>>>>>>>>   qos_algorithm_type ( RW): ratelimit
>>>>>>>>> qos_algorithm_params (MRW): kbps: 25600
>>>>>>>>> qos_supported_algorithms (SRO):
>>>>>>>>>         other-config (MRW): nicira-iface-id: 3d68b9f8-98d1-4ac7-92d8-fb57cb8b0adc; nicira-vm-id: 7958d91f-e52d-a25d-718c-7f831ae701d7
>>>>>>>>>         network-uuid ( RO): 8ee927b1-a35d-ac10-4471-d7a6a475839a
>>>>>>>>>   network-name-label ( RO): Pool-wide network associated with eth0
>>>>>>>>>          io_read_kbs ( RO): 0.007
>>>>>>>>>         io_write_kbs ( RO): 0.000
>>>>>>>>>
>>>>>>>>> How do I configure CloudStack such that the guest network NIC on
>>>>>>>>> the VM will be connected to the correct physical NIC?
>>>>>>>>>
>>>>>>>>> Thanks for your help,
>>>>>>>>> -John
>>>>>>>>>
>>>>>>>>> [1]: https://issues.apache.org/jira/browse/CLOUDSTACK-590
>>>>>>>>>
>>>>>>>>> On Dec 5, 2012, at 2:47 PM, Marcus Sorensen wrote:
>>>>>>>>>
>>>>>>>>>> Yes, see your cmdline: internaldns1=10.0.3.2, so it is forcing
>>>>>>>>>> the use of the management network to route to 10.0.3.2 for DNS.
>>>>>>>>>> That's where the route is coming from. You will want to use
>>>>>>>>>> something on your management net for internal DNS, or something
>>>>>>>>>> other than that router.
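Marcus reads internaldns1=10.0.3.2 off the SSVM's boot command line (the same key=value string Anthony asks for via "cat /proc/cmdline", cached in /var/cache/cloud/cmdline). A minimal sketch for pulling values out of it when debugging; the sample string below is abridged and partly hypothetical, apart from the internaldns1 value quoted in the thread:

```python
def parse_cmdline(cmdline: str) -> dict:
    """Split space-separated key=value pairs as found on a system VM's
    boot command line; bare tokens are kept with an empty value."""
    params = {}
    for token in cmdline.split():
        key, _, value = token.partition("=")
        params[key] = value
    return params

# Abridged, partly hypothetical sample; internaldns1 is the value
# Marcus quotes from this SSVM.
sample = "template=domP type=secstorage host=192.168.56.15 internaldns1=10.0.3.2"
params = parse_cmdline(sample)
print(params["internaldns1"])  # 10.0.3.2
```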
>>>>>>>>>>
>>>>>>>>>> On Wed, Dec 5, 2012 at 11:59 AM, John Burwell wrote:
>>>>>>>>>>
>>>>>>>>>>> Anthony,
>>>>>>>>>>>
>>>>>>>>>>> I apologize for forgetting to respond to the part of your answer
>>>>>>>>>>> addressing the first part of the question. I had set the
>>>>>>>>>>> management.network.cidr and host global settings to
>>>>>>>>>>> 192.168.0.0/24 and 192.168.56.18 respectively. Please see the
>>>>>>>>>>> zone1.devcloud.cfg Marvin configuration attached to my original
>>>>>>>>>>> email for the actual settings, as well as the network
>>>>>>>>>>> configurations used when this problem occurs.
>>>>>>>>>>>
>>>>>>>>>>> Thanks,
>>>>>>>>>>> -John
>>>>>>>>>>>
>>>>>>>>>>> On Dec 5, 2012, at 12:46 PM, Anthony Xu wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Hi John,
>>>>>>>>>>>>
>>>>>>>>>>>> Try the following:
>>>>>>>>>>>>
>>>>>>>>>>>> Set the global configuration management.network.cidr to your
>>>>>>>>>>>> management server CIDR; if this configuration is not available
>>>>>>>>>>>> in the UI, you can change it in the DB directly.
>>>>>>>>>>>>
>>>>>>>>>>>> Restart management,
>>>>>>>>>>>> Stop/Start SSVM and CPVM.
>>>>>>>>>>>>
>>>>>>>>>>>> And could you post "cat /proc/cmdline" from the SSVM?
>>>>>>>>>>>>
>>>>>>>>>>>> Anthony
>>>>>>>>>>>>
>>>>>>>>>>>>> -----Original Message-----
>>>>>>>>>>>>> From: John Burwell [mailto:jburwell@basho.com]
>>>>>>>>>>>>> Sent: Wednesday, December 05, 2012 9:11 AM
>>>>>>>>>>>>> To: cloudstack-dev@incubator.apache.org
>>>>>>>>>>>>> Subject: Re: SSVM Network Configuration Issue
>>>>>>>>>>>>>
>>>>>>>>>>>>> All,
>>>>>>>>>>>>>
>>>>>>>>>>>>> I was wondering if anyone else is experiencing this problem
>>>>>>>>>>>>> when using secondary storage on a devcloud-style VM with a
>>>>>>>>>>>>> host-only and NAT adapter.
>>>>>>>>>>>>> One aspect of this issue that seems interesting is the
>>>>>>>>>>>>> following route table from the SSVM:
>>>>>>>>>>>>>
>>>>>>>>>>>>> root@s-5-TEST:~# route
>>>>>>>>>>>>> Kernel IP routing table
>>>>>>>>>>>>> Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
>>>>>>>>>>>>> 10.0.3.2        192.168.56.1    255.255.255.255 UGH   0      0        0 eth1
>>>>>>>>>>>>> 10.0.3.0        *               255.255.255.0   U     0      0        0 eth2
>>>>>>>>>>>>> 192.168.56.0    *               255.255.255.0   U     0      0        0 eth1
>>>>>>>>>>>>> 192.168.56.0    *               255.255.255.0   U     0      0        0 eth3
>>>>>>>>>>>>> link-local      *               255.255.0.0     U     0      0        0 eth0
>>>>>>>>>>>>> default         10.0.3.2        0.0.0.0         UG    0      0        0 eth2
>>>>>>>>>>>>>
>>>>>>>>>>>>> In particular, the gateways for the management and guest
>>>>>>>>>>>>> networks do not match the configuration provided to the
>>>>>>>>>>>>> management server (i.e. 10.0.3.2 is the gateway for the
>>>>>>>>>>>>> 10.0.3.0/24 network and 192.168.56.1 is the gateway for the
>>>>>>>>>>>>> 192.168.56.0/24 network). With this configuration, the SSVM
>>>>>>>>>>>>> has a socket connection to the management server, but is in
>>>>>>>>>>>>> alert state. Finally, when I remove the host-only NIC and use
>>>>>>>>>>>>> only a NAT adapter, the SSVM's networking works as expected,
>>>>>>>>>>>>> leading me to believe that the segregated network
>>>>>>>>>>>>> configuration is at the root of the problem.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Until I can get the networking on the SSVM configured, I am
>>>>>>>>>>>>> unable to complete the testing of the S3-backed Secondary
>>>>>>>>>>>>> Storage enhancement.
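The route table above explains the surprising DNS path: the kernel selects routes by longest-prefix match, so the /32 host route pins 10.0.3.2 to eth1 even though 10.0.3.2 is also the default gateway on eth2. A small illustrative sketch of that lookup (plain Python mimicking the kernel's behavior, not CloudStack code; the table is transcribed from the SSVM output):

```python
import ipaddress

# The SSVM route table, as (destination, gateway, iface).
# "link-local" is 169.254.0.0/16; 0.0.0.0/0 is the default route.
ROUTES = [
    ("10.0.3.2/32",     "192.168.56.1", "eth1"),
    ("10.0.3.0/24",     None,           "eth2"),
    ("192.168.56.0/24", None,           "eth1"),
    ("192.168.56.0/24", None,           "eth3"),
    ("169.254.0.0/16",  None,           "eth0"),
    ("0.0.0.0/0",       "10.0.3.2",     "eth2"),
]

def lookup(dest: str):
    """Return the (network, gateway, iface) entry that wins by
    longest-prefix match, as the kernel does; first entry breaks ties."""
    dest_ip = ipaddress.ip_address(dest)
    matches = [(ipaddress.ip_network(net), gw, dev)
               for net, gw, dev in ROUTES
               if dest_ip in ipaddress.ip_network(net)]
    return max(matches, key=lambda m: m[0].prefixlen)

# DNS traffic to 10.0.3.2 is forced out the management NIC (eth1),
# while everything else bound for the Internet takes the default via eth2.
print(lookup("10.0.3.2")[2])  # eth1
print(lookup("8.8.8.8")[2])   # eth2
```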
>>>>>>>>>>>>>
>>>>>>>>>>>>> Thank you for your help,
>>>>>>>>>>>>> -John
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Dec 3, 2012, at 4:46 PM, John Burwell wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> All,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I am setting up a multi-zone devcloud configuration on
>>>>>>>>>>>>>> VirtualBox 4.2.4 using Ubuntu 12.04.1 and Xen 4.1. I have
>>>>>>>>>>>>>> configured the base management server VM (zone1) to serve as
>>>>>>>>>>>>>> both zone1 and the management server (running MySQL), with
>>>>>>>>>>>>>> eth0 as a host-only adapter with a static IP of 192.168.56.15
>>>>>>>>>>>>>> and eth1 as a NAT adapter (see the attached zone1-interfaces
>>>>>>>>>>>>>> file for the exact network configuration on the VM). The
>>>>>>>>>>>>>> management and guest networks are configured as follows:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Zone 1
>>>>>>>>>>>>>>   Management: 192.168.56.100-149     gw 192.168.56.1  dns 10.0.3.2 (?)
>>>>>>>>>>>>>>   Guest:      10.0.3.200-10.0.3.220  gw 10.0.3.2      dns 8.8.8.8
>>>>>>>>>>>>>> Zone 2
>>>>>>>>>>>>>>   Management: 192.168.56.150-200     gw 192.168.56.1  dns 10.0.3.2 (?)
>>>>>>>>>>>>>>   Guest:      10.0.3.221-240         gw 10.0.3.2      dns 8.8.8.8
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> The management server deploys and starts without error. I
>>>>>>>>>>>>>> then populate the configuration using the attached Marvin
>>>>>>>>>>>>>> configuration file (zone1.devcloud.cfg) and restart the
>>>>>>>>>>>>>> management server in order to allow the global configuration
>>>>>>>>>>>>>> option changes to take effect. Following the restart, the
>>>>>>>>>>>>>> CPVM and SSVM start without error. Unfortunately, they drop
>>>>>>>>>>>>>> into alert status, and the SSVM is unable to connect outbound
>>>>>>>>>>>>>> through the guest network (very important for my tests
>>>>>>>>>>>>>> because I am testing S3-backed secondary storage).
>>>>>>>>>>>>>>=20 >>>>>>>>>>>>>> =46rom the diagnostic checks I have performed on the = management >>>>>>> server >>>>>>>>>>>>> and the SSVM, it appears that the daemon on the SSVM is = connecting >>>>>>>>> back >>>>>>>>>>>>> to the management server. I have attached a set of = diagnostic >>>>>>>>>>>>> information from the management server >>>>>>> (mgmtsvr-zone1-diagnostics.log) >>>>>>>>>>>>> and SSVM server (ssvm-zone1-diagnostics.log) that includes = the >>>>>>> results >>>>>>>>>>>>> of ifconfig, route, netstat and ping checks, as well as, = other >>>>>>>>>>>>> information (e.g. the contents of /var/cache/cloud/cmdline = on the >>>>>>>>> SSVM). >>>>>>>>>>>>> Finally, I have attached the vmops log from the management = server >>>>>>>>>>>>> (vmops-zone1.log). >>>>>>>>>>>>>>=20 >>>>>>>>>>>>>> What changes need to be made to management server = configuration >>>>> in >>>>>>>>>>>>> order to start up an SSVM that can communicate with the = secondary >>>>>>>>>>>>> storage NFS volumes, management server, and connect to = hosts on >>>>> the >>>>>>>>>>>>> Internet? >>>>>>>>>>>>>>=20 >>>>>>>>>>>>>> Thanks for your help, >>>>>>>>>>>>>> -John >>>>>>>>>>>>>>=20 >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>=20 >>>>>=20 >=20 --Apple-Mail=_0C178030-E06A-4642-9F92-4F62CF733056 Content-Type: multipart/mixed; boundary="Apple-Mail=_D24C43B8-CD5D-48D5-8CBE-01FA1F4A13D5" --Apple-Mail=_D24C43B8-CD5D-48D5-8CBE-01FA1F4A13D5 Content-Transfer-Encoding: quoted-printable Content-Type: text/html; charset=us-ascii
2012-12-17 08:59:56,408 WARN  [cloud.resource.ResourceManagerImpl] (AgentTaskPool-2:null) Unable to connect due to
com.cloud.exception.ConnectionException: Incorrect Network setup on agent, Reinitialize agent after network names are setup, details
: For Physical Network id:200, Guest Network is not configured on the backend by name cloud-guest
        at com.cloud.network.NetworkManagerImpl.processConnect(NetworkManagerImpl.java:6656)
        at com.cloud.agent.manager.AgentManagerImpl.notifyMonitorsOfConnection(AgentManagerImpl.java:611)
        at com.cloud.agent.manager.AgentManagerImpl.handleDirectConnectAgent(AgentManagerImpl.java:1502)
        at com.cloud.resource.ResourceManagerImpl.createHostAndAgent(ResourceManagerImpl.java:1648)
        at com.cloud.resource.ResourceManagerImpl.createHostAndAgent(ResourceManagerImpl.java:1685)
        at com.cloud.agent.manager.AgentManagerImpl$SimulateStartTask.run(AgentManagerImpl.java:1152)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
        at java.lang.Thread.run(Thread.java:679)
2012-12-17 08:59:56,408 DEBUG [cloud.host.Status] (AgentTaskPool-2:null) Transition:[Resource state = Enabled, Agent event = AgentDisconnected, Host id = 1, name = devcloud]


I have also attached a copy of my Marvin configuration for your reference.

Thanks for your help,
-John

Attachment: zone1.devcloud.cfg

{
    "zones": [
        {
            "name": "zone00",
            "physical_networks": [
                {
                    "broadcastdomainrange": "Zone",
                    "name": "zone00-pn00",
                    "traffictypes": [
                        {
                            "xen": "cloud-guest",
                            "typ": "Guest"
                        },
                        {
                            "xen": "cloud-mgmt",
                            "typ": "Management"
                        }
                    ],
                    "providers": [
                        {
                            "broadcastdomainrange": "ZONE",
                            "name": "VirtualRouter"
                        },
                        {
                            "broadcastdomainrange": "Pod",
                            "name": "SecurityGroupProvider"
                        }
                    ]
                }
            ],
            "dns1": "8.8.8.8",
            "securitygroupenabled": "true",
            "networktype": "Basic",
            "pods": [
                {
                    "name": "zone00-pod00",
                    "startip": "192.168.56.100",
                    "endip": "192.168.56.149",
                    "netmask": "255.255.255.0",
                    "gateway": "192.168.56.1",
                    "guestIpRanges": [
                        {
                            "startip": "10.0.3.200",
                            "endip": "10.0.3.220",
                            "netmask": "255.255.255.0",
                            "gateway": "10.0.3.2"
                        }
                    ],
                    "clusters": [
                        {
                            "clustername": "zone00-pod00-cluster00",
                            "hypervisor": "XenServer",
                            "hosts": [
                                {
                                    "name": "192.168.56.15",
                                    "username": "root",
                                    "url": "http://192.168.56.15/",
                                    "password": "password"
                                }
                            ],
                            "clustertype": "CloudManaged"
                        }
                    ]
                }
            ],
            "internaldns1": "192.168.56.15",
            "localstorageenabled": "true",
            "secondaryStorages": [
                {
                    "url": "nfs://192.168.56.15/opt/storage/secondary"
                }
            ]
        }
    ],
    "logger": [
        {
            "name": "TestClient",
            "file": "/tmp/testclient.log"
        },
        {
            "name": "TestCase",
            "file": "/tmp/testcase.log"
        }
    ],
    "globalConfig": [
        {
            "name": "expunge.workers",
            "value": "3"
        },
        {
            "name": "expunge.delay",
            "value": "60"
        },
        {
            "name": "expunge.interval",
            "value": "60"
        },
        {
            "name": "system.vm.use.local.storage",
            "value": "true"
        },
        {
            "name": "s3.enable",
            "value": "true"
        },
        {
            "name": "cpu.overprovisioning.factor",
            "value": "30"
        },
        {
            "name": "mem.overprovisioning.factor",
            "value": "30"
        },
        {
            "name": "storage.overprovisioning.factor",
            "value": "30"
        },
        {
            "name": "management.network.cidr",
            "value": "192.168.56.0/24"
        },
        {
            "name": "host",
            "value": "192.168.56.15"
        },
        {
            "name": "enable.ec2.api",
            "value": "false"
        },
        {
            "name": "enable.s3.api",
            "value": "false"
        }
    ],
    "mgtSvr": [
        {
            "mgtSvrIp": "127.0.0.1",
            "port": 8096
        }
    ]
}

On Dec 16, 2012, <tsp@apache.org> wrote:
Traffic types carry label information on the physical NIC and the
label that is associated with it. It is via the traffic label that you
tell CloudStack about the labels you've given to the hypervisor's interfaces.

Your devcloud.cfg will have to be altered for this:

After altering, it should look something like:

.... SNIP ....
"traffictypes": [
=             &n= bsp;          {
=             &n= bsp;           &nbs= p;  "xen": "cloud-guest",
=             &n= bsp;           &nbs= p;  "typ": "Guest"
=             &n= bsp;          },
=             &n= bsp;          {
=             &n= bsp;           &nbs= p;  "typ": "Management",
=             &n= bsp;           &nbs= p;  "xen" : "cloud-mgmt"
=             &n= bsp;          },
=             &n= bsp;      ],
.... SNIP ....

An example is available under
incubator-cloudstack/tools/marvin/marvin/configGenerator.py

Look for the method: describe_setup_in_eip_mode()
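The shape Prasanna describes can be sketched in a few lines of Python. This is only an illustration of the config structure from the snippet above; `traffic_types` and `physical_network` are hypothetical helper names, not part of Marvin's API:

```python
import json

# Sketch: build the traffic-type entries shown above, mapping each
# CloudStack traffic type to the Xen network label configured on the
# hypervisor. Keys "typ" and "xen" come from the devcloud.cfg format.
def traffic_types(guest_label, mgmt_label):
    return [
        {"typ": "Guest", "xen": guest_label},
        {"typ": "Management", "xen": mgmt_label},
    ]

physical_network = {
    "broadcastdomainrange": "Zone",
    "name": "zone00-pn00",
    "traffictypes": traffic_types("cloud-guest", "cloud-mgmt"),
}

print(json.dumps(physical_network, indent=4))
```

The important point is that the "xen" value must match the network name label actually set on the hypervisor, which is what the "Guest Network is not configured on the backend by name cloud-guest" error is complaining about.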


HTH
--
Prasanna.


On Sun, Dec 16, 2012 at 02:55:30AM +0530, John Burwell wrote:
Rohit,

As I stated below, I know which VIF->PIF->device Xen mapping needs
to be used for each network. My question is how do I configure
CloudStack with that information.

Thanks for your = help,
-John




On Dec 15, 2012, at 2:51 PM, Rohit Yadav <rohit.yadav@citrix.com> wrote:

About Xen bridging, these can help:

brctl show (bridge/VIF mappings)
ip addr show xenbr0 (bridge-specific info)
brctl showmacs br0 (bridge MAC mappings)

Wiki:
http://wiki.xen.org/wiki/Xen_FAQ_Networking
http://wiki.xen.org/wiki/XenNetworking

Regards.
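When correlating these commands by hand gets tedious, the `brctl show` output can be parsed into a bridge-to-interfaces map. This is a sketch; `parse_brctl_show` is a hypothetical helper, and the sample is a plausible abbreviation of the host in this thread, not captured output:

```python
def parse_brctl_show(text):
    """Parse `brctl show` output into {bridge: [interfaces]}.

    Rows starting at column 0 open a new bridge; indented rows list
    additional interfaces attached to the most recent bridge."""
    bridges, current = {}, None
    for line in text.splitlines()[1:]:      # skip the header row
        if not line.strip():
            continue
        fields = line.split()
        if line[0] not in " \t":            # new bridge row
            current = fields[0]
            bridges[current] = fields[3:]   # interface column, if present
        elif current is not None:           # continuation: extra interface
            bridges[current].extend(fields)
    return bridges

sample = (
    "bridge name\tbridge id\t\tSTP enabled\tinterfaces\n"
    "xenbr0\t\t8000.0800277e749c\tno\t\teth0\n"
    "\t\t\t\t\t\t\tvif1.1\n"
    "xenbr1\t\t8000.080027df0000\tno\t\teth1\n"
    "\t\t\t\t\t\t\tvif1.2\n"
)
print(parse_brctl_show(sample))  # {'xenbr0': ['eth0', 'vif1.1'], 'xenbr1': ['eth1', 'vif1.2']}
```

A map like this makes it easy to verify which physical NIC each system VM VIF actually landed on.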

________________________________________
From: John Burwell [jburwell@basho.com]
Sent: Saturday, December 15, 2012 8:35 PM
To: cloudstack-dev@incubator.apache.org
Subject: Re: SSVM Network Configuration Issue

Marcus,

That's what I thought. The Xen physical bridge names are xenbr0 (to
eth0) and xenbr1 (to eth1). Using basic network configuration, I set
the Xen network traffic labels for each to the appropriate bridge
device name. I receive errors regarding an invalid network device when
it attempts to create a VM. Does anyone else know how to determine the
mapping of physical devices to CloudStack Xen network traffic labels?

Thanks,
-John

On Dec 15, 2012, at 1:20 AM, Marcus Sorensen <shadowsor@gmail.com> wrote:

VLANs in advanced/KVM should only be required for the guest networks.
If I create a bridge on physical eth0, naming it 'br0', and a bridge
on physical eth1, naming it 'br1', and then set my management network
to label 'br0', my public network to 'br1', and my guest network to
'br1', it should use the bridges you asked for when connecting the
system VMs for the specified traffic. I'd just leave the 'vlan' blank
when specifying public and pod (management) IPs. In this scenario, the
only place you need to enter VLANs is on the guest, and it should
create new tagged interfaces/bridges on eth1 (per your label of 'br1')
as new guest networks are brought online. This is how my dev VMs are
usually set up.
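The label resolution Marcus walks through can be sketched as a lookup. This is illustrative only, not CloudStack's agent code; the dictionaries and the `eth1.210`-style tagged-interface naming are assumptions made for the example:

```python
# Sketch of Marcus's setup: traffic types map to bridge labels, and
# guest traffic carrying a VLAN gets a tagged sub-interface created on
# the physical device behind the labeled bridge.
TRAFFIC_LABELS = {"Management": "br0", "Public": "br1", "Guest": "br1"}
PHYSICAL_FOR_BRIDGE = {"br0": "eth0", "br1": "eth1"}

def device_for_traffic(traffic_type, vlan=None):
    bridge = TRAFFIC_LABELS[traffic_type]
    if vlan is None:
        return bridge                 # untagged: the labeled bridge is used as-is
    phys = PHYSICAL_FOR_BRIDGE[bridge]
    return f"{phys}.{vlan}"           # tagged sub-interface on the physical NIC

print(device_for_traffic("Management"))  # br0
print(device_for_traffic("Guest", 210))  # eth1.210
```

The point of the sketch is the indirection: the operator only supplies labels per traffic type, and the hypervisor-side device is derived from them.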


On Thu, Dec 6, 2012 at 11:03 AM, John Burwell <jburwell@basho.com> wrote:

Marcus,

My question, more specifically, is: are VLANs required to implement
traffic labels? Also, can traffic labels be configured in Basic
networking mode, or do I need to switch my configuration to Advanced?

I am not disagreeing about how DNS servers should be associated with
interfaces, nor do I think a network operator should be required to
make any upstream router configuration changes. I am simply saying
that CloudStack should not make assumptions about the gateways that
have been specified. The behavior I experienced, of CloudStack
attempting to "correct" my configuration by injecting another route,
fails the rule of least surprise and is based on incomplete knowledge.
In my opinion, CloudStack (or any system of its ilk) should faithfully
(or slavishly) realize the routes on the system VM as specified. If
the configuration is incorrect, networking will fail in an expected
manner, and the operator can adjust their environment as necessary.
Otherwise, there is an upstream router configuration to which
CloudStack has no visibility, but with which it is completely
compatible. Essentially, I am asking CloudStack to do less, assume I
know what I am doing, and break in a manner consistent with other
network applications.

Thanks,
-John

On Dec 6, 2012, at 12:30 PM, Marcus Sorensen <shadowsor@gmail.com> wrote:

Traffic labels essentially tell the system which physical network to
use. So if you've allocated a VLAN for a specific traffic type, it
will first look at the tag associated with that traffic type, figure
out which physical interface goes with that, and then create a tagged
interface and bridge on that physical interface.

I guess we'll just have to disagree; I think the current behavior
makes total sense. To me, internal DNS should always use the
management interface, since it's internally facing. There's no sane
way to do that other than a static route on the system VM (it seems
you're suggesting that the network operator force something like this
on the upstream router, which seems really strange, requiring everyone
to create static routes on their public network to force specific IPs
back into their internal networks, so correct me if I have the wrong
impression). CloudStack is doing exactly what you tell it to. You told
it that 10.0.3.2 should be accessible via your internal network by
setting it as your internal DNS. The fact that a broken config doesn't
work isn't CloudStack's fault.

Note that internal DNS is just the default for the SSVM; public DNS is
still offered as a backup. So had you not said that 10.0.3.2 was
available on your internal network (perhaps by offering a dummy
internal DNS address or 192.168.56.1), lookups would fall back to
public and everything would work as expected as well.

There is also a global config called 'use.external.dns', but after
setting this, restarting the management server, and recreating the
system VMs, I don't see a noticeable difference in any of this.
Perhaps that would solve your issue as well, but it's either broken or
doesn't do what I thought it would.


On Thu, Dec 6, 2012 at 8:39 AM, John Burwell <jburwell@basho.com> wrote:

Marcus,

Are traffic labels independent of VLANs? I ask because my current XCP
network configuration is bridged, and I am not using Open vSwitch.

I disagree on the routing issue. CloudStack should do what it's told
because it does not have insight into, or control of, the
configuration of the routes in the layers beneath it. If CloudStack
simply did as it was told, it would fail as expected in a typical
networking environment while preserving the flexibility of
configuration expected by a network engineer.

Thanks,
-John

On Dec 6, 2012, at 10:35 AM, Marcus Sorensen <shadowsor@gmail.com> wrote:

I can't really tell you for Xen, although it might be similar to KVM.
During setup I would set a traffic label matching the name of the
bridge; for example, if my public interface were eth0 and the bridge I
had set up was br0, I'd go to the zone network settings, find public
traffic, and set a label on it of "br0". Maybe someone more familiar
with the Xen setup can help.

On the DNS, it makes sense from the perspective that the SSVM has
access to your internal networks, thus it uses your internal DNS. Its
default gateway is public. So if I have a DNS server on an internal
network at 10.30.20.10/24, and my management network on
192.168.10.0/24, this route has to be set in order for the DNS server
to be reachable. You would, under normal circumstances, not want to
use a DNS server on the public net as your internal DNS setting
anyway, although I agree that the route insertion should have a bit
more sanity checking and not set a static route to your default
gateway.
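The sanity check suggested above could look something like the following. This is a sketch of the idea, not CloudStack's actual logic; `internal_dns_route` is a hypothetical function, and the addresses are the ones from this thread:

```python
import ipaddress

def internal_dns_route(dns_ip, mgmt_cidr, mgmt_gateway):
    """Decide whether a host route to the internal DNS server via the
    management gateway is needed. Returns None when the DNS server is
    on-link (no static route required), else a (destination, gateway)
    pair for the route that would be injected."""
    dns = ipaddress.ip_address(dns_ip)
    net = ipaddress.ip_network(mgmt_cidr)
    if dns in net:
        return None                       # on-link: no static route needed
    return (f"{dns}/32", str(mgmt_gateway))  # host route via management gw

# DNS server on the management net: nothing to inject.
print(internal_dns_route("192.168.56.15", "192.168.56.0/24", "192.168.56.1"))  # None

# The configuration from this thread: internal DNS 10.0.3.2 is NOT on
# the 192.168.56.0/24 management network, so a host route via
# 192.168.56.1 gets injected, the surprising behavior reported above.
print(internal_dns_route("10.0.3.2", "192.168.56.0/24", "192.168.56.1"))  # ('10.0.3.2/32', '192.168.56.1')
```

Note how the second call reproduces exactly the `10.0.3.2 via 192.168.56.1 dev eth1` entry in the SSVM route table at the top of the thread.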
On Dec 6, 2012 6:31 AM, "John Burwell" <jburwell@basho.com> wrote:

Marcus,

I set up a small PowerDNS recursor on 192.168.56.15, configured the
DNS for the management network to use it, and the route table in the
SSVM is now correct. However, this behavior does not seem correct. At
a minimum, it violates the rule of least surprise. CloudStack
shouldn't be adding gateways that are not configured. Therefore, I
have entered a defect [1] to remove the behavior.

With the route table fixed, I am now experiencing a new problem. The
external NIC (10.0.3.0/24) on the SSVM is being connected to the
internal NIC (192.168.56.0/24) on the host. The host-only network
(192.168.56.15) is configured on xenbr0 and the NAT network is
configured on xenbr1. As a reference, the following are the contents
of the /etc/network/interfaces file and the ifconfig output from the
devcloud host:

root@zone1:/opt/cloudstack/apache-tomcat-6.0.32/bin# cat /etc/network/interfaces
# The loopback network interface
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

allow-hotplug eth1
iface eth1 inet manual

# The primary network interface
auto xenbr0
iface xenbr0 inet static
address 192.168.56.15
netmask 255.255.255.0
network 192.168.56.0
broadcast 192.168.56.255
dns_nameserver 192.168.56.15
bridge_ports eth0

auto xenbr1
iface xenbr1 inet dhcp
bridge_ports eth1
dns_nameserver 8.8.8.8 8.8.4.4
post-up route add default gw 10.0.3.2

root@zone1:/opt/cloudstack/apache-tomcat-6.0.32/bin# ifconfig
eth0      Link encap:Ethernet  HWaddr 08:00:27:7e:74:9c
          inet6 addr: fe80::a00:27ff:fe7e:749c/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:777 errors:0 dropped:0 overruns:0 frame:0
          TX packets:188 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:109977 (109.9 KB)  TX bytes:11900 (11.9 KB)

eth1      Link encap:Ethernet  HWaddr 08:00:27:df:00:00
          inet6 addr: fe80::a00:27ff:fedf:0/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4129 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3910 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:478719 (478.7 KB)  TX bytes:2542459 (2.5 MB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:360285 errors:0 dropped:0 overruns:0 frame:0
          TX packets:360285 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:169128181 (169.1 MB)  TX bytes:169128181 (169.1 MB)

vif1.0    Link encap:Ethernet  HWaddr fe:ff:ff:ff:ff:ff
          inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
          UP BROADCAST RUNNING NOARP PROMISC  MTU:1500  Metric:1
          RX packets:6 errors:0 dropped:0 overruns:0 frame:0
          TX packets:152 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:32
          RX bytes:292 (292.0 B)  TX bytes:9252 (9.2 KB)

vif1.1    Link encap:Ethernet  HWaddr fe:ff:ff:ff:ff:ff
          inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
          UP BROADCAST RUNNING NOARP PROMISC  MTU:1500  Metric:1
          RX packets:566 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1405 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:32
          RX bytes:44227 (44.2 KB)  TX bytes:173995 (173.9 KB)

vif1.2    Link encap:Ethernet  HWaddr fe:ff:ff:ff:ff:ff
          inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
          UP BROADCAST RUNNING NOARP PROMISC  MTU:1500  Metric:1
          RX packets:3 errors:0 dropped:0 overruns:0 frame:0
          TX packets:838 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:32
          RX bytes:84 (84.0 B)  TX bytes:111361 (111.3 KB)

vif4.0    Link encap:Ethernet  HWaddr fe:ff:ff:ff:ff:ff
          inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
          UP BROADCAST RUNNING NOARP PROMISC  MTU:1500  Metric:1
          RX packets:64 errors:0 dropped:0 overruns:0 frame:0
          TX packets:197 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:32
          RX bytes:10276 (10.2 KB)  TX bytes:18453 (18.4 KB)

vif4.1    Link encap:Ethernet  HWaddr fe:ff:ff:ff:ff:ff
          inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
          UP BROADCAST RUNNING NOARP PROMISC  MTU:1500  Metric:1
          RX packets:2051 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2446 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:32
          RX bytes:233914 (233.9 KB)  TX bytes:364243 (364.2 KB)

vif4.2    Link encap:Ethernet  HWaddr fe:ff:ff:ff:ff:ff
          inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
          UP BROADCAST RUNNING NOARP PROMISC  MTU:1500  Metric:1
          RX packets:3 errors:0 dropped:0 overruns:0 frame:0
          TX packets:582 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:32
          RX bytes:84 (84.0 B)  TX bytes:74700 (74.7 KB)

vif4.3    Link encap:Ethernet  HWaddr fe:ff:ff:ff:ff:ff
          inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
          UP BROADCAST RUNNING NOARP PROMISC  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:585 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:32
          RX bytes:0 (0.0 B)  TX bytes:74826 (74.8 KB)

xapi0     Link encap:Ethernet  HWaddr fe:ff:ff:ff:ff:ff
          inet addr:169.254.0.1  Bcast:169.254.255.255  Mask:255.255.0.0
          inet6 addr: fe80::c870:1aff:fec2:22b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:568 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1132 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:76284 (76.2 KB)  TX bytes:109085 (109.0 KB)

xenbr0    Link encap:Ethernet  HWaddr 08:00:27:7e:74:9c
          inet addr:192.168.56.15  Bcast:192.168.56.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe7e:749c/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4162 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3281 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:469199 (469.1 KB)  TX bytes:485688 (485.6 KB)

xenbr1    Link encap:Ethernet  HWaddr 08:00:27:df:00:00
          inet addr:10.0.3.15  Bcast:10.0.3.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fedf:0/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4129 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3114 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:404327 (404.3 KB)  TX bytes:2501443 (2.5 MB)

These physical NICs on the host translate to the following Xen PIFs:

root@zone1:/opt/cloudstack/apache-tomcat-6.0.32/bin# xe pif-list
uuid ( RO)                  : 207413c9-5058-7a40-6c96-2dab21057f30
              device ( RO): eth1
  currently-attached ( RO): true
                VLAN ( RO): -1
        network-uuid ( RO): 1679ddb1-5a21-b827-ab07-c16275d5ce72


uuid ( RO)                  : c0274787-e768-506f-3191-f0ac17b0c72b
              device ( RO): eth0
  currently-attached ( RO): true
                VLAN ( RO): -1
        network-uuid ( RO): 8ee927b1-a35d-ac10-4471-d7a6a475839a

The following is the ifconfig output from the SSVM:

root@s-5-TEST:~# ifconfig
eth0      Link encap:Ethernet  HWaddr 0e:00:a9:fe:03:8b
          inet addr:169.254.3.139  Bcast:169.254.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:235 errors:0 dropped:0 overruns:0 frame:0
          TX packets:92 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:21966 (21.4 KiB)  TX bytes:16404 (16.0 KiB)
          Interrupt:8

eth1      Link encap:Ethernet  HWaddr 06:bc:62:00:00:05
          inet addr:192.168.56.104  Bcast:192.168.56.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2532 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2127 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:341242 (333.2 KiB)  TX bytes:272183 (265.8 KiB)
          Interrupt:10

eth2      Link encap:Ethernet  HWaddr 06:12:72:00:00:37
          inet addr:10.0.3.204  Bcast:10.0.3.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:600 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:68648 (67.0 KiB)  TX bytes:126 (126.0 B)
          Interrupt:11

eth3      Link encap:Ethernet  HWaddr 06:25:e2:00:00:15
          inet addr:192.168.56.120  Bcast:192.168.56.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:603 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:68732 (67.1 KiB)  TX bytes:0 (0.0 B)
          Interrupt:12

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:61 errors:0 dropped:0 overruns:0 frame:0
          TX packets:61 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:5300 (5.1 KiB)  TX bytes:5300 (5.1 KiB)

Finally, the following are the vif params for the eth2 device on the
SSVM, depicting its connection to eth0 instead of eth1:

root@zone1:/opt/cloudstack/apache-tomcat-6.0.32/bin# !1243
xe vif-param-list uuid=be44bb30-5700-b461-760e-10fe93079210
uuid ( RO)                          : be44bb30-5700-b461-760e-10fe93079210
             vm-uuid ( RO): 7958d91f-e52d-a25d-718c-7f831ae701d7
       vm-name-label ( RO): s-5-TEST
  allowed-operations (SRO): attach; unplug_force; unplug
  current-operations (SRO):
              device ( RO): 2
                 MAC ( RO): 06:12:72:00:00:37
   MAC-autogenerated ( RO): false
                 MTU ( RO): 1500
  currently-attached ( RO): true
  qos_algorithm_type ( RW): ratelimit
qos_algorithm_params (MRW): kbps: 25600
qos_supported_algorithms (SRO):
        other-config (MRW): nicira-iface-id: 3d68b9f8-98d1-4ac7-92d8-fb57cb8b0adc; nicira-vm-id: 7958d91f-e52d-a25d-718c-7f831ae701d7
        network-uuid ( RO): 8ee927b1-a35d-ac10-4471-d7a6a475839a
  network-name-label ( RO): Pool-wide network associated with eth0
         io_read_kbs ( RO): 0.007
        io_write_kbs ( RO): 0.000

How do I configure CloudStack such that the guest network NIC on the
VM will be connected to the correct physical NIC?
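For what it's worth, the network a VIF is plugged into can be pulled
out of the vif-param-list output with a little shell. My understanding
(worth verifying) is that CloudStack picks the XenServer network whose
name-label matches the traffic label configured on the physical
network's traffic type, so that label is the knob to check. A minimal
sketch; the sample line is copied from the output above, and on a live
host you would pipe `xe vif-param-list` itself:

```shell
# Sketch: extract the backing network of a VIF from saved
# `xe vif-param-list` output. The sample line is from this thread;
# on a live XenServer host, pipe the command output instead.
vif_output='      network-name-label ( RO): Pool-wide network associated with eth0'
label=$(printf '%s\n' "$vif_output" | sed -n 's/.*network-name-label ( RO): //p')
printf '%s\n' "$label"   # Pool-wide network associated with eth0
```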

Thanks for your help,
-John


[1]: https://issues.apache.org/jira/browse/CLOUDSTACK-590

On Dec 5, 2012, at 2:47 PM, Marcus Sorensen <shadowsor@gmail.com>
wrote:

Yes, see your cmdline: internaldns1=10.0.3.2, so it is forcing use of
the management network to route to 10.0.3.2 for DNS. That's where the
route is coming from. You will want to use something on your
management net for internal DNS, or something other than that router.
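To illustrate what Marcus describes: the internaldns1 value can be read
straight out of the SSVM's boot arguments, and the system VM scripts add
a host route to that address via the management gateway. In the sketch
below, only internaldns1=10.0.3.2 and the management CIDR come from this
thread; the rest of the sample cmdline string is abbreviated and
illustrative:

```shell
# Sketch: parse internaldns1 out of a (sample, abbreviated) SSVM
# kernel cmdline. On the SSVM itself you would read /proc/cmdline or
# /var/cache/cloud/cmdline instead of using this sample string.
cmdline='template=domP type=secstorage internaldns1=10.0.3.2 mgmtcidr=192.168.56.0/24'
dns=$(printf '%s\n' "$cmdline" | tr ' ' '\n' | sed -n 's/^internaldns1=//p')
printf '%s\n' "$dns"   # 10.0.3.2
```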


On Wed, Dec 5, 2012 at 11:59 AM, John Burwell <jburwell@basho.com>
wrote:

Anthony,

I apologize for forgetting to respond to the part of your answer
addressing the first part of the question. I had set the
management.network.cidr and host global settings to 192.168.0.0/24 and
192.168.56.18, respectively. Please see the zone1.devcloud.cfg Marvin
configuration attached to my original email for the actual settings,
as well as the network configurations used when this problem occurs.

Thanks,
-John

On Dec 5, 2012, at 12:46 PM, Anthony Xu <Xuefei.Xu@citrix.com>
wrote:

Hi John,

Try the following:

Set the global configuration management.network.cidr to your
management server CIDR. If this configuration is not available in the
UI, you can change it in the DB directly.

Restart the management server, then stop/start the SSVM and CPVM.
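A sketch of what that direct DB edit might look like. The database
name ("cloud"), credentials, and CIDR value below are assumptions for
illustration; only the setting name comes from this thread, and
restarting the management server afterwards is what makes the new
value take effect:

```shell
# Hypothetical sketch of the direct DB change mentioned above; the
# database name, credentials, and CIDR value are assumptions.
SQL="UPDATE configuration SET value='192.168.56.0/24' WHERE name='management.network.cidr';"
printf '%s\n' "$SQL"
# mysql -u cloud -p cloud -e "$SQL"   # run against the management server DB
```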


And could you post the output of "cat /proc/cmdline" from the SSVM?



Anthony

-----Original Message-----
From: John Burwell [mailto:jburwell@basho.com]
Sent: Wednesday, December 05, 2012 9:11 AM
To: cloudstack-dev@incubator.apache.org
Subject: Re: SSVM Network Configuration Issue

All,

I was wondering if anyone else is experiencing this problem when using
secondary storage on a devcloud-style VM with a host-only and a NAT
adapter. One aspect of this issue that seems interesting is the
following route table from the SSVM:

root@s-5-TEST:~# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.0.3.2        192.168.56.1    255.255.255.255 UGH   0      0        0 eth1
10.0.3.0        *               255.255.255.0   U     0      0        0 eth2
192.168.56.0    *               255.255.255.0   U     0      0        0 eth1
192.168.56.0    *               255.255.255.0   U     0      0        0 eth3
link-local      *               255.255.0.0     U     0      0        0 eth0
default         10.0.3.2        0.0.0.0         UG    0      0        0 eth2

In particular, the gateways for the management and guest networks do
not match the configuration provided to the management server (i.e.
10.0.3.2 is the gateway for the 10.0.3.0/24 network and 192.168.56.1
is the gateway for the 192.168.56.0/24 network). With this
configuration, the SSVM has a socket connection to the management
server, but is in alert state. Finally, when I remove the host-only
NIC and use only a NAT adapter, the SSVM's networking works as
expected, leading me to believe that the segregated network
configuration is at the root of the problem.
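A quick way to confirm which interface carries the default route,
using the table above (the sample line below is copied from it; on the
SSVM you would pipe `route -n` instead):

```shell
# Sketch: pull the default route's interface out of route output.
# With the segregated setup it lands on eth2 (the NAT adapter), while
# the DNS host route to 10.0.3.2 goes out eth1.
route_line='default         10.0.3.2        0.0.0.0         UG    0      0        0 eth2'
iface=$(printf '%s\n' "$route_line" | awk '{print $NF}')
printf '%s\n' "$iface"   # eth2
```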

Until I can get the networking on the SSVM configured, I am unable to
complete the testing of the S3-backed Secondary Storage enhancement.

Thank you for your help,
-John

On Dec 3, 2012, at 4:46 PM, John Burwell <jburwell@basho.com>
wrote:

All,

I am setting up a multi-zone devcloud configuration on VirtualBox
4.2.4 using Ubuntu 12.04.1 and Xen 4.1. I have configured the base
management server VM (zone1) to serve as both zone1 and the management
server (running MySQL), with eth0 as a host-only adapter with a static
IP of 192.168.56.15, and eth1 as a NAT adapter (see the attached
zone1-interfaces file for the exact network configuration on the VM).
The management and guest networks are configured as follows:

Zone 1
Management: 192.168.56.100-149 gw 192.168.56.1 dns 10.0.3.2 (?)
Guest: 10.0.3.200-10.0.3.220 gw 10.0.3.2 dns 8.8.8.8

Zone 2
Management: 192.168.56.150-200 gw 192.68.56.1 dns 10.0.3.2 (?)
Guest: 10.0.3.221-240 gw 10.0.3.2 dns 8.8.8.8
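A crude sanity check on this layout: for /24 networks like these, each
gateway should share its first three octets with its pod range. A
sketch (note that zone 2's management gateway is written as
192.68.56.1 above, which would fail such a check):

```shell
# Sketch: for /24 networks only, compare the first three octets of a
# gateway against an address in its pod range. Values are taken from
# the zone layout above, including the 192.68.56.1 spelling.
in_24() { [ "${1%.*}" = "${2%.*}" ]; }   # strip last octet, compare
in_24 192.168.56.1 192.168.56.100 && echo "zone1 mgmt gw in range"
in_24 192.68.56.1  192.168.56.150 || echo "zone2 mgmt gw outside range"
```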

The management server deploys and starts without error. I then
populate the configuration using the attached Marvin configuration
file (zone1.devcloud.cfg) and restart the management server in order
to allow the global configuration option changes to take effect.
Following the restart, the CPVM and SSVM start without error.
Unfortunately, they drop into alert status, and the SSVM is unable to
connect outbound through the guest network (very important for my
tests because I am testing S3-backed secondary storage).

From the diagnostic checks I have performed on the management server
and the SSVM, it appears that the daemon on the SSVM is connecting
back to the management server. I have attached a set of diagnostic
information from the management server (mgmtsvr-zone1-diagnostics.log)
and the SSVM (ssvm-zone1-diagnostics.log) that includes the results of
ifconfig, route, netstat, and ping checks, as well as other
information (e.g. the contents of /var/cache/cloud/cmdline on the
SSVM). Finally, I have attached the vmops log from the management
server (vmops-zone1.log).

What changes need to be made to the management server configuration in
order to start up an SSVM that can communicate with the secondary
storage NFS volumes and the management server, and connect to hosts on
the Internet?

Thanks for your help,
-John

<ssvm-zone1-diagnostics.log>
<vmops-zone1.tar.gz>
<mgmtsvr-zone1-diagnostics.log>
<zone1-interfaces>
<zone1.devcloud.cfg>


