[openstack-dev] [neutron] VXLAN with single-NIC compute nodes: Avoiding the MTU pitfalls

Ian Wells ijw.ubuntu at cack.org.uk
Fri Mar 13 08:54:26 UTC 2015

On 12 March 2015 at 05:33, Fredy Neeser <Fredy.Neeser at solnet.ch> wrote:

>  2.  I'm using policy routing on my hosts to steer VXLAN traffic (UDP
> dest. port 4789) to interface br-ex.12 -- all other traffic is sourced
> from br-ex.1, presumably because br-ex.1 is a lower-numbered interface
> than br-ex.12 (?) -- an interesting question is whether I'm relying here
> on the order in which I created these two interfaces.

OK, I have to admit I've never used policy routing in anger, so I can't
speak with much confidence here.  I do wonder, though, whether anything
(a link going down, for instance) might cause Linux to change its
preference behaviour; from what you say, your to-the-world packets aren't
covered by a policy rule, so a preference change would be a disaster.
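For reference, a policy-routing setup along those lines might look like
the sketch below - the mark value, table number and interface names are
my assumptions, not your actual config:

```shell
# Sketch only: steer VXLAN traffic (UDP dst port 4789) out via br-ex.12
# by marking it and routing marked packets through a dedicated table.
iptables -t mangle -A OUTPUT -p udp --dport 4789 -j MARK --set-mark 0x12
ip rule add fwmark 0x12 table 112 priority 100
ip route add default dev br-ex.12 table 112
# Everything unmarked falls through to the main table - which is exactly
# where the implicit br-ex.1 preference you describe would come from.
```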

>  3.  It's not clear to me how to set up multiple nodes with packstack if a
> node's tunnel IP does not equal its admin IP (or the OpenStack API IP in
> case of a controller node).  With packstack, I can only specify the compute
> node IPs through CONFIG_COMPUTE_HOSTS.  Presumably, these IPs are used for
> both packstack deployment (admin IP) and for configuring the VXLAN tunnel
> IPs (local_ip and remote_ip parameters).  How would I specify different IPs
> for these purposes?  (Recall that my hosts have a single NIC).

I don't think the single NIC is an issue, particularly, and even less so
since you have multiple interfaces - even VLAN interfaces - with
different addresses.  At that point you should be able to use
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1.12 , which would need to be created and
given an address before you run packstack, as packstack expects the
interface to be there already.  In fact the closed bug
https://bugs.launchpad.net/packstack/+bug/1279517 suggests that you're not
the first to try this and that it does work (though since the change it
refers to isn't merged, you might need to say ...=eth1_12 to keep
packstack happy).
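As a sketch of that workflow (interface name as above - substitute the
one on your box; the answer-file name is a placeholder):

```shell
# Sketch: point packstack's tunnel interface at the VLAN subinterface.
# The interface must already exist and have an address before this runs.
packstack --gen-answer-file=answers.txt
sed -i 's/^CONFIG_NEUTRON_OVS_TUNNEL_IF=.*/CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1.12/' \
    answers.txt
packstack --answer-file=answers.txt
# If packstack rejects the dotted name, try eth1_12 per the bug above.
```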

You may find that configuring a VLAN interface for eth1.12 (not in a
> bridge, with a local address suitable for communication with compute nodes,
> for VXLAN traffic) and eth1.1 (in br-ex, for external traffic to use) does
> better for you.
>   Hmm, I only have one NIC (eth0).

Apparently I can't read - where I've written eth1, I mean eth0 in your
setup; I must have misread it early on.  I'll try to make the switch.

eth0.1 is shorthand notation for VLAN 1 on eth0, and there are a number
of interface management commands to create interfaces of this type.  It
also appears to be possible to configure this in the network setup
scripts - there's documentation describing the Red Hat way - though I've
only done this on Ubuntu and Debian.
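For example, on the command line (a non-persistent sketch; the address is
a placeholder), with a Red Hat-style persistent equivalent alongside:

```shell
# Create the VLAN 12 subinterface on eth0 with ip(8) - not persistent:
ip link add link eth0 name eth0.12 type vlan id 12
ip addr add 192.0.2.10/24 dev eth0.12   # placeholder address
ip link set eth0.12 up

# Red Hat-style persistent version, as a network setup script
# (/etc/sysconfig/network-scripts/ifcfg-eth0.12):
#   DEVICE=eth0.12
#   VLAN=yes
#   ONBOOT=yes
#   BOOTPROTO=none
#   IPADDR=192.0.2.10
#   NETMASK=255.255.255.0
```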

In order to attach eth0 to br-ex, I had to configure it as an OVSPort.
> Maybe I misunderstand your alternative, but are you suggesting  to
> configure  eth0.1 as an OVSPort (connected to br-ex), and  eth0.12 as a
> standalone interface?  (Not sure a physical interface can be "brain split"
> in such a way.)

eth0.1 is a fully fledged network interface and should work as an OVS
port.  You would configure the external network in OpenStack as flat,
rather than as containing VLAN segments, because with this approach the
tagging is done outside of OpenStack (otherwise you'd end up with
double-tagged packets).

And yes, eth0.12 would be a standalone interface.
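Concretely, that would look something like this sketch (the bridge,
physical-network label and network name are common defaults, not
necessarily yours):

```shell
# Attach the tagged subinterface to the external bridge:
ovs-vsctl add-port br-ex eth0.1
# Map the bridge to a physical network label in the OVS agent config,
# e.g. bridge_mappings = external:br-ex, then create the external
# network as flat (no VLAN segmentation inside OpenStack):
neutron net-create ext-net --router:external=True \
    --provider:network_type flat --provider:physical_network external
```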

Note that my physical switch uses a native VLAN of 1  and is configured
> with "Untag all ports" for VLAN 1. Moreover, OVSPort eth0 (attached to
> br-ex) is configured for VLAN trunking with a native VLAN of 1 (vlan_mode:
> native-untagged, trunks: [1,12], tag: 1), so within bridge br-ex, native
> packets are tagged 1.

Yes, as I say, if you moved over to the eth0.1 mechanism above, you'd
want the packets to be untagged at the eth0.1 OVS port: receiving them
via eth0.1 untags them (and sending them tags them), so OVS no longer
needs to help you out on the VLAN front.

I'm still not a fan of your setup, though I don't know if that's just
because it's not where my natural preference lies.  You may be inches
from making it work perfectly, and I'm not sure I would be able to tell.
That said, policy routing seems like a workaround for a problem you're
having with packstack; I would definitely go with two addresses if there
were any way to make packstack configure them properly.  If I were doing
this there would also be quite a lot of experimentation to verify my
guesswork, I have to admit, so it's not an easy answer.