[Openstack] Compute host management IP and VLAN (OpenVSwitch) provider networks on a single bonded interface

Adnan Smajlovic a.smajlovic at gmail.com
Fri Jul 28 17:28:46 UTC 2017


Hi,

We have the following setup:

 - OpenStack Icehouse (Ubuntu 14.04 LTS)
   - Deployed via puppet-openstack module
 - Neutron (OpenVSwitch version 2.0.2)
   - L3 networking is all handled by physical devices (no Neutron L3 components in use)
   - Neutron VLAN provider networks
   - No tunnelling of any sort

In order to preserve network ports on our Cisco Nexus 9000 switches we opted to limit
each compute host to a single bonded interface (eth0 and eth1, combined into bond0
via LACP across two switches) and to provide some isolation using VLANs, accepting
that it is risky to rely on a single NIC with 2 x 10G ports.  That aside, the network
configuration of a compute host:

########################
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual
    bond-master bond0

auto eth1
iface eth1 inet manual
    bond-master bond0

# Trunk interface used for VLAN provider networks and any other local interfaces
auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1
    bond-mode 4
    bond-miimon 80
    bond-downdelay 200
    bond-updelay 200
    bond-lacp-rate 1
    up ifconfig $IFACE 0.0.0.0 up
    up ip link set $IFACE promisc on
    down ip link set $IFACE promisc off
    down ifconfig $IFACE down

# Management IP
auto bond0.10
iface bond0.10 inet static
    address 10.0.10.100
    netmask 255.255.255.0
    gateway 10.0.10.1

# Storage network
auto bond0.20
iface bond0.20 inet static
    address 10.0.20.100
    netmask 255.255.255.0

########################
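
For completeness, this is roughly how we check the bond and the 802.1q
subinterfaces on a host (standard bonding sysfs and iproute2 output, nothing
exotic):

    # LACP negotiation and slave state
    cat /proc/net/bonding/bond0

    # VLAN subinterfaces, their VLAN IDs and addresses
    ip -d link show bond0.10
    ip addr show bond0.10
    ip addr show bond0.20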

Next, the ML2 configuration on the controller:

[ml2]
type_drivers = vlan
tenant_network_types = vlan
mechanism_drivers = openvswitch

[ml2_type_flat]

[ml2_type_vlan]
network_vlan_ranges = default:1:1001,default:1006:4094

[ml2_type_gre]

[ml2_type_vxlan]

[securitygroup]
enable_security_group = True
firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

[agent]
l2_population=False
polling_interval=2
arp_responder=False

[ovs]
enable_tunneling=False
integration_bridge=br-int
bridge_mappings=default:br-ex

########################
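
For reference, the provider networks themselves are created along these lines
(Icehouse-era neutron CLI; the name, VLAN ID, and CIDR below are just
placeholders, not our production values):

    neutron net-create vlan30-net \
        --provider:network_type vlan \
        --provider:physical_network default \
        --provider:segmentation_id 30

    neutron subnet-create vlan30-net 10.0.30.0/24 \
        --name vlan30-subnet --gateway 10.0.30.1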

Finally, the OVS bridges:

    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port int-br-ex
            Interface int-br-ex
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
        Port "bond0"
            Interface "bond0"
    ovs_version: "2.0.2"

########################

In general, this works absolutely fine for us.  Neutron networks are
created with relevant segmentation IDs to isolate various ranges/subnets
and instances.  Security groups are enabled but rules have been added to
permit all inbound ICMP, TCP, and UDP traffic.  An additional layer of
security is provided by host firewalls (iptables) on the VMs.
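
The permissive rules mentioned above amount to something like the following
against the default security group (exact invocations from memory):

    neutron security-group-rule-create --direction ingress --protocol icmp default
    neutron security-group-rule-create --direction ingress --protocol tcp \
        --port-range-min 1 --port-range-max 65535 default
    neutron security-group-rule-create --direction ingress --protocol udp \
        --port-range-min 1 --port-range-max 65535 default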

The problem we are now facing: for specific reasons, we wish to deploy a
virtual network appliance with an interface on the OpenStack management
network, so that hypervisor hosts can reach the device directly on the
relevant range (10.0.10.0/24) without any routing in between.  Here is what we
have noted after creating a Neutron network with segmentation ID 10:

- All segmentation IDs (VLAN IDs) used by VM instances, other than VLAN 10,
continue to function as expected
   - The virtual network appliance interface on VLAN 10 is unreachable from
any device on the same range
      - ARP requests are seen leaving the compute host and being answered
(they appear in tcpdump output on every host with a management network
interface), but nothing comes back over the OVS integration bridge (a few of
the commands we have been using to chase this are shown after this list)
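
For what it's worth, this is the sort of thing we have been looking at while
chasing the ARP traffic (all run on the compute host; the tap device name
below is a placeholder for the appliance's actual port):

    # ARP for the appliance as it leaves/enters the physical bond (tagged 10)
    tcpdump -nne -i bond0 vlan 10 and arp

    # The same traffic from the appliance's side, on its tap device in br-int
    tcpdump -nne -i tapXXXXXXXX-XX arp

    # Flows and port tags on the provider and integration bridges
    ovs-ofctl dump-flows br-ex
    ovs-ofctl dump-flows br-int
    ovs-vsctl --columns=name,tag list Port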

After a bit of OVS doc reading, and a suggestion that 'VLAN splintering' might
help, we enabled the feature on bond0.  The effect is that traffic starts to
register on the virtual network appliance interface, but the compute host
management interface (bond0.10) goes down.
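
For reference, splintering was enabled with something along these lines (from
memory, so the exact option name may be slightly off):

    ovs-vsctl set interface bond0 other_config:enable-vlan-splinters=true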

Using a single trunk interface both for Neutron VLAN provider networks and for
an 802.1q (vconfig) subinterface carrying the compute host's management IP
appears to be a no-no.  The reason could be completely obvious to the
networking elite out there, but we don't completely understand why this is,
and some clarification would be very much appreciated :)
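
If this combination really is unsupported, one option we are considering (not
yet tested) is to drop the kernel 802.1q subinterface altogether and hang the
management IP off a tagged OVS internal port on br-ex, so that OVS owns all
tagging on bond0.  Roughly:

    # Remove bond0.10 from /etc/network/interfaces, then:
    ovs-vsctl add-port br-ex mgmt0 tag=10 -- set interface mgmt0 type=internal
    ip link set mgmt0 up
    ip addr add 10.0.10.100/24 dev mgmt0
    ip route add default via 10.0.10.1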

A side note: it seems that James Denton alluded to an incompatibility with
this approach a while back on the OpenStack list
(http://markmail.org/message/fcaklkctwmaeagbw), at least with respect to the
Linux bridge ML2 plugin, but no additional details were provided.

Happy to provide any additional information to get to the bottom of this.

Regards,

--
Adnan