[neutron] Allowing multiple segments from a routed network to the same host

Kris G. Lindgren klindgren at godaddy.com
Tue Nov 20 22:06:02 UTC 2018


Hello all,
 
We have a use case where we need to allow multiple segments from the same routed network on the same host.  We have a spine-leaf L3 topology, where the L2 domain exists only at the leaf level.  In our use case we need to have multiple L2 vlans off the same switch pair mapped to all the servers on that switch pair.  This is mainly done to split up the L2 broadcast domains for that switch pair.  Currently, on our older clouds with non-routed networks, we might have between 3 and 7 vlans, each containing a /22 of ip address space, mapped to all the hosts on that switch.  Each of those vlans/subnets belongs to the same L3 network segment; they are just used to keep the broadcast domains on the switches small.
 
We are currently trying to implement the multi-segment-to-the-same-host approach using ml2 + linuxbridge + vlan.  We are running into a number of problems that we are working through, but before we go too much further I wanted to reach out and see if we are possibly just doing something wrong.
 
Setup:
1.) Create a routed network with 2 segments.  Each segment is created with network_type: vlan and physical_network: fixed-net, using a different vlan per segment, i.e. vlan 401 and vlan 402 (see the CLI sketch after this list).
2.) Add one subnet per segment.
3.) Configure the linuxbridge agent - ml2.conf:
       [ml2]
       type_drivers = vlan
       tenant_network_types = vlan
       mechanism_drivers = linuxbridge
       extension_drivers = port_security
       
       [ml2_type_vlan]
       network_vlan_ranges = fixed-net:1:4095
       
       [ml2_type_flat]
       flat_networks = physnet-mah
       
       [securitygroup]
       firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
       
       [linux_bridge]
       #physical_interface_mappings = fixed-net:br0
       bridge_mappings = fixed-net:br0
4.)  Attempt to boot a vm
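 
For reference, steps 1 and 2 above look roughly like the following from the CLI (names, vlan ids, and subnet ranges are purely illustrative):
 
       openstack network create --provider-network-type vlan \
           --provider-physical-network fixed-net --provider-segment 401 routed-net
       openstack network segment create --network routed-net --network-type vlan \
           --physical-network fixed-net --segment 402 routed-net-vlan402
       openstack subnet create --network routed-net --network-segment <vlan-401-segment-id> \
           --subnet-range 10.0.0.0/22 subnet-vlan401
       openstack subnet create --network routed-net --network-segment <vlan-402-segment-id> \
           --subnet-range 10.0.4.0/22 subnet-vlan402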
 
Problems:
1.) The first issue that you hit is the (arbitrary) check that disallows more than 1 segment per host.  If I remove https://github.com/openstack/neutron/blob/master/neutron/objects/subnet.py#L319-L328 the vm will boot with an ip from the first segment/subnet and everything is bound correctly.
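 
For those who don't want to click through: roughly speaking, that guard bails out as soon as a host maps to more than one segment of the network.  Paraphrased from memory (not the exact source), it boils down to something like:
 
        # loose paraphrase of the removed check, not the verbatim neutron code
        if 1 < len(segment_ids):
            raise segment_exc.HostConnectedToMultipleSegments(
                host=host, network_id=network_id)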
 
2.) If I create a port on the second segment and boot the vm using that port, the vm will boot successfully, however on the hypervisor br0.402 is not mapped into the bridge.  The logs indicate that the port was bound using the first segment, vlan 401.  This is because https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/mech_agent.py#L106-L110 receives 2 segments for the network object, both segments are bindable to the host, and the first segment returned is the vlan 401 segment, so neutron says it's bindable and uses that.  The fix that I have working here is two-fold.  First, the port object has a fixed_ips dict containing a subnet_id, and that subnet carries the segment_id the port's ip is supposed to be associated with, so I modify https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L525 to add the segment_id into the fixed_ip dict, if the subnet has one:
<snip>
        )
        # Copy the segment_id from each fixed ip's subnet onto the fixed_ip
        # dict so the mechanism driver can tell which segment the ip is on.
        for ip in port['fixed_ips']:
            if 'subnet_id' in ip:
                subnet = self.get_subnet(orig_context._plugin_context,
                                         ip['subnet_id'])
                ip['segment_id'] = subnet['segment_id']
        self._update_port_dict_binding(port, new_binding)
<snip>
Then, in the mech_agent, I check whether any of the fixed_ips has a segment_id that matches one of the segments on the network object, and if so bind using that segment:
<snip>
            if agent['alive']:
                # If a fixed ip carries a segment_id, try to bind the segment
                # that ip actually belongs to before falling back to the
                # normal "first bindable segment" loop below.
                ips = context.current['fixed_ips']
                for ip in ips:
                    if 'segment_id' in ip:
                        for segment in context.segments_to_bind:
                            if ip['segment_id'] == segment['id']:
                                if self.try_to_bind_segment_for_agent(
                                        context, segment, agent):
                                    LOG.debug("Bound using segment: %s",
                                              segment)
                                    return
                for segment in context.segments_to_bind:
<snip>
This causes the correct segment_id to get sent to the agent and the agent creates br0.402 and assigns it to the bridge for the network.  However, the agent assigns both br0.401 and br0.402 to the same linux bridge:

-bash-4.2# brctl show
<snip>
brq3188bc9c-49          8000.9e8444c8e353       no              br0.401
                                                                br0.402
                                                                tap1789c805-1c
                                                                tap770e7357-be
<snip>
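 
What we would actually like to end up with on the host, hypothetically, is one bridge per segment - something like this (made-up bridge names and tap placeholders):
 
brq<vlan-401-segment>   8000.xxxxxxxxxxxx       no              br0.401
                                                                <taps for ports on the vlan 401 subnet>
brq<vlan-402-segment>   8000.xxxxxxxxxxxx       no              br0.402
                                                                <taps for ports on the vlan 402 subnet>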
 
My guess is that in order to handle this correctly, neutron needs to create the brq interface based upon the segment rather than the network.  I haven't yet started working on this particular issue.  However, at this point I wanted to reach out to you all to:
1.) Explain our use case.
2.) Get feedback on the path taken so far, because to be honest - while it works... it seems hacky.  What we are trying to do is use the vlan driver with segments and have the linuxbridge agents create the br0.<vlan_id> interfaces for us dynamically, based upon the vlan id we create the segment with.  That would allow us to have multiple of those segments (from the same routed network) per switch mapped into a single host.  We are trying to avoid creating all the vlan interfaces/bridges outside of neutron and then just giving neutron the mappings in physical_interface_mappings or bridge_mappings.  My understanding was that if these were normal provider networks using the vlan driver, I could have both networks map to the same interface on different vlans by using the same physical_network name, and neutron would do the correct thing for me (see the sketch after this list).
3.) Determine the correct process to follow to start working on this upstream.  Does this need a spec?  An RFE?
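 
For reference, the non-routed setup I'm comparing against in point 2 would be roughly (illustrative names/vlans):
 
       openstack network create --provider-network-type vlan \
           --provider-physical-network fixed-net --provider-segment 401 net-vlan401
       openstack network create --provider-network-type vlan \
           --provider-physical-network fixed-net --provider-segment 402 net-vlan402
 
where the linuxbridge agent creates br0.401 and br0.402 (and a separate brq bridge per network) on its own.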

Regards,
Kris Lindgren


