cloudstack-users mailing list archives

From Remi Bergsma <RBerg...@schubergphilis.com>
Subject Re: VirtualRouter has duplicated public interface and broken NAT functionality
Date Tue, 17 Nov 2015 06:42:11 GMT
Hi,

You are right, eth3 should not be there. Somehow the detection of the public interface goes
wrong and it ends up creating a new one.

In the logs, instead of ethX, I see null, which is an indication of the problem:

2015-11-17 01:38:17,760 DEBUG [cloud.agent.Agent] (agentRequest-Handler-1:null) Processing command: com.cloud.agent.api.routing.IpAssocCommand
2015-11-17 01:38:17,769 DEBUG [kvm.resource.OvsVifDriver] (agentRequest-Handler-1:null) plugging nic=[Nic:Public-null-vlan://104]
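
If you want to double-check on the KVM host where that vNIC ended up, something like the below should show the router's interfaces and the ports on the trunk bridge (assuming the router domain is called something like r-4-VM; check the actual name with virsh list first):

# virsh domiflist r-4-VM
# ovs-vsctl list-ports ovs-trunk
# ovs-vsctl show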

I've seen this before and I think it is the combination of ovs with a tagged (as in vlan)
network. 

You probably have specified a VLAN tag (104?) in CloudStack on some bridge. If you change that
to untagged in CloudStack, but point it to an ovs bridge that carries the tag, it will most
likely work. You then move the tagging from CloudStack to ovs. I think that is how I worked
around it when I saw this.
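
As a rough sketch of what I mean (the bridge name pub104 is just an example, and I'm assuming the trunk is the ovs-trunk bridge you mention below):

# ovs-vsctl add-br pub104 ovs-trunk 104

This creates a so-called fake bridge on top of the trunk, tagged with VLAN 104. You would then set the KVM traffic label of the public network to pub104 and leave the VLAN untagged in CloudStack, so the tagging is done by ovs instead of CloudStack.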

Don't get me wrong, what you did should work, so this is a bug. The cause still needs to be
figured out, but it wasn't too obvious.

I'm also curious to know if 4.6.0 (which will be available this week) still has this issue.


This is what I remember, hope it gives some pointers. 

Regards, Remi 



> On 17 Nov 2015, at 06:56, Alexander Couzens <lynxis@fe80.eu> wrote:
> 
> Hi,
> 
> a VirtualRouter is not NAT-ing any packets. Even though there are egress
> rules, it's not possible to ping anything from a guest VM to the outside world.
> 
> I'm not sure if I'm doing something wrong or if this is a bug.
> 
> The VR has XXX.XXX.221.102 as public IP, the guest net is 10.1.1.0/24.
> The web UI + cloudmonkey show me 3 NICs on the VM, but it obviously has 4 NICs.
> Looking into the database confirms it: there isn't a fourth NIC.
> 
> Logging into the machine shows
> # ip r
> default via XXX.XXX.221.126 dev eth2 
> 10.1.1.0/24 dev eth0  proto kernel  scope link  src 10.1.1.1 
> XXX.XXX.221.96/27 dev eth2  proto kernel  scope link  src XXX.XXX.221.102 
> XXX.XXX.221.96/27 dev eth3  proto kernel  scope link  src XXX.XXX.221.102 
> 169.254.0.0/16 dev eth1  proto kernel  scope link  src 169.254.0.158 
> 
> Also, looking at iptables I found the NAT rule:
> Chain POSTROUTING (policy ACCEPT 6 packets, 415 bytes)
>  pkts bytes target     prot opt in     out     source               destination
>     0     0 SNAT       all  --  *      eth3    0.0.0.0/0            0.0.0.0/0            to:XXX.XXX.221.102
> 
> Note the eth3, which is the interface added later. But the traffic routed from the guest VM
> goes out via eth2. There is policy routing which uses the table Table_eth3, but that also uses
> eth2 for traffic instead of eth3.
> # ip r s ta Table_eth3
> default via XXX.XXX.221.126 dev eth2  proto static 
> throw 10.1.1.0/24  proto static 
> throw XXX.XXX.221.96/27  proto static 
> throw 169.254.0.0/16  proto static 
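
(As a quick check it would also be interesting to see which policy rules actually point at Table_eth3:

# ip rule show

Since that table routes the default via eth2 while the SNAT rule only matches eth3 as outgoing interface, the SNAT never applies, which would explain the missing NAT.)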
> 
> So I would say that eth3 is wrong.
> When the VM is created it only has 3 network interfaces and looks correct, but later CloudStack
> generates the last one, as can be seen in the log [1].
> 
> Any ideas?
> 
> Best
> lynxis
> 
> 
> [1]
> 2015-11-17 01:38:17,760 DEBUG [cloud.agent.Agent] (agentRequest-Handler-1:null) Processing command: com.cloud.agent.api.routing.AggregationControlCommand
> 2015-11-17 01:38:17,760 DEBUG [cloud.agent.Agent] (agentRequest-Handler-1:null) Processing command: com.cloud.agent.api.routing.IpAssocCommand
> 2015-11-17 01:38:17,769 DEBUG [kvm.resource.OvsVifDriver] (agentRequest-Handler-1:null) plugging nic=[Nic:Public-null-vlan://104]
> 2015-11-17 01:38:17,898 DEBUG [kvm.resource.LibvirtComputingResource] (agentRequest-Handler-1:null) Executing: /usr/share/cloudstack-common/scripts/network/domr/router_proxy.sh netusage.sh 169.254.0.158 -a eth3
> 2015-11-17 01:38:18,033 DEBUG [kvm.resource.LibvirtComputingResource] (agentRequest-Handler-1:null) Execution is successful.
> 2015-11-17 01:38:18,034 DEBUG [cloud.agent.Agent] (agentRequest-Handler-1:null) Processing command: com.cloud.agent.api.routing.SetFirewallRulesCommand
> 2015-11-17 01:38:18,034 DEBUG [cloud.agent.Agent] (agentRequest-Handler-1:null) Processing command: com.cloud.agent.api.routing.SetMonitorServiceCommand
> 2015-11-17 01:38:18,034 DEBUG [cloud.agent.Agent] (agentRequest-Handler-1:null) Processing command: com.cloud.agent.api.routing.DhcpEntryCommand
> 2015-11-17 01:38:18,034 DEBUG [cloud.agent.Agent] (agentRequest-Handler-1:null) Processing command: com.cloud.agent.api.routing.VmDataCommand
> 2015-11-17 01:38:18,035 DEBUG [cloud.agent.Agent] (agentRequest-Handler-1:null) Processing command: com.cloud.agent.api.routing.AggregationControlCommand
> 2015-11-17 01:38:18,271 DEBUG [kvm.resource.LibvirtComputingResource] (agentRequest-Handler-1:null) Executing: /usr/share/cloudstack-common/scripts/network/domr/router_proxy.sh vr_cfg.sh 169.254.0.158 -c /var/cache/cloud/VR-1c6cc724-79ac-4155-bb73-284514213d10.cfg
> 2015-11-17 01:38:19,062 DEBUG [kvm.resource.LibvirtComputingResource] (agentRequest-Handler-1:null) Execution is successful.
> 
> ## versions and setup
> using ubuntu 14.04 machines with packages:
> cloudstack-management 4.5.2
> cloudstack-agent 4.5.2 using kvm
> 
> system vm:
> Cloudstack Release 4.5.2 Tue Aug 11 00:42:47 UTC 2015
> 
> network configuration
> advanced setup, with management, guest, internet
> # kvm-label - descriptions
> cloudbr0 - management.
> ovs-trunk - guest,internet. trunk interface using openvswitch + vlans
> 
> As primary storage I'm using a shared mountpoint,
> and as secondary storage I'm using NFS over the management network.
> 
> console proxy + storage VM are working.
> -- 
> Alexander Couzens
> 
> mail: lynxis@fe80.eu
> jabber: lynxis@fe80.eu
> mobile: +4915123277221
> gpg: 390D CF78 8BF9 AA50 4F8F  F1E2 C29E 9DA6 A0DF 8604
