[neutron] Use the same network interface with multiple ML2 mechanism drivers
Dear all,

I know that the ML2 Neutron core plugin is designed to support multiple mechanism and type drivers simultaneously. But I'd like to know: is it possible to use the same network interface configured with different ML2 mechanism drivers?

I'm planning to use openvswitch and linuxbridge as mechanism drivers along with VLAN as the type driver. Would it be possible to have the following configuration for that purpose?

ml2_conf.ini:
[ml2]
mechanism_drivers = openvswitch,linuxbridge
[ml2_type_vlan]
network_vlan_ranges = physnet1:40:60,physnet2:60:80

eth3 is a port of the provider bridge:
ovs-vsctl add-port br-provider eth3

openvswitch_agent.ini:
[ovs]
bridge_mappings = physnet1:br-provider

linuxbridge_agent.ini:
[linux_bridge]
physical_interface_mappings = physnet2:eth3

If it's mandatory to use different network interfaces, any guide or sample reference about implementing multiple mechanism drivers would be highly appreciated.

Thanks in advance,
Gabriel Gamero
Hi,

On Sunday, 8 November 2020 at 04:06:50 CET, Gabriel Omar Gamero Montenegro wrote:
Dear all,
I know that the ML2 Neutron core plugin is designed to support multiple mechanism and type drivers simultaneously. But I'd like to know: is it possible to use the same network interface configured with different ML2 mechanism drivers?
I'm planning to use openvswitch and linuxbridge as mechanism drivers along with VLAN as the type driver. Would it be possible to have the following configuration for that purpose?
ml2_conf.ini:
[ml2]
mechanism_drivers = openvswitch,linuxbridge
[ml2_type_vlan]
network_vlan_ranges = physnet1:40:60,physnet2:60:80
eth3 is a port of the provider bridge: ovs-vsctl add-port br-provider eth3
openvswitch_agent.ini: [ovs] bridge_mappings = physnet1:br-provider
linuxbridge_agent.ini: [linux_bridge] physical_interface_mappings = physnet2:eth3
I don't think it will work, because you would need to have the same interface in the OVS bridge (br-provider) and also have it used by linuxbridge. But TBH this is a bit of a strange configuration to me. I can imagine different compute nodes using different backends, but why do you want to use the linuxbridge and openvswitch agents together on the same compute node?
If it's mandatory to use different network interfaces any guide or sample reference about implementing multiple mechanism drivers would be highly appreciated.
Thanks in advance, Gabriel Gamero
--
Slawek Kaplonski
Principal Software Engineer
Red Hat
I'm planning to deploy OpenStack with two mechanism drivers, openvswitch and SR-IOV, on physical servers that have only one network interface card. According to what I read, it is possible to use the same physical interface with these two mechanism drivers. I assume this is possible because a NIC with SR-IOV capabilities can be divided into a Physical Function (which I'm using for openvswitch) and many Virtual Functions (which I'm using for SR-IOV).

Before editing anything on the physical servers I was planning to use a test environment with virtual machines, where I do not have a NIC with SR-IOV capabilities. Since the openvswitch mechanism driver works with security groups and the SR-IOV mechanism driver can't have security groups enabled, I was planning to use linuxbridge as a stand-in and disable the security group feature. That way I can run security tests with an SDN module that I'm developing for SR-IOV networks in OpenStack.

Thanks,
Gabriel Gamero
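A minimal sketch of the linuxbridge stand-in described above, assuming the test nodes expose eth3 on a physnet named physnet2 (both names reused from the original post purely for illustration); turning off security group filtering in the agent is done with the noop firewall driver:

linuxbridge_agent.ini:
[linux_bridge]
physical_interface_mappings = physnet2:eth3

[securitygroup]
# disable security group filtering so the port behaves more like an SR-IOV VF
enable_security_group = false
firewall_driver = noop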
On Mon, 2020-11-09 at 08:36 +0100, Slawek Kaplonski wrote:
Hi,
On Sunday, 8 November 2020 at 04:06:50 CET, Gabriel Omar Gamero Montenegro wrote:
Dear all,
I know that the ML2 Neutron core plugin is designed to support multiple mechanism and type drivers simultaneously. But I'd like to know: is it possible to use the same network interface configured with different ML2 mechanism drivers?

You can use the SR-IOV NIC agent to manage the VFs and use either the linuxbridge agent or the OVS agent to manage the PF on the same host.
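A rough sketch of that PF/VF split on a single SR-IOV-capable NIC, reusing the eth3/physnet2 names from the original post purely for illustration (the VLAN range is made up):

ml2_conf.ini:
[ml2]
mechanism_drivers = openvswitch,sriovnicswitch
[ml2_type_vlan]
network_vlan_ranges = physnet2:40:60

openvswitch_agent.ini (the PF is a port of br-provider, added with ovs-vsctl add-port):
[ovs]
bridge_mappings = physnet2:br-provider

sriov_agent.ini (the VFs are handled by the SR-IOV NIC agent):
[sriov_nic]
physical_device_mappings = physnet2:eth3

Which backend a port lands on is then chosen by its vnic_type: normal ports go to OVS, direct ports go to a VF.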
I'm planning to use openvswitch and linuxbridge as mechanism drivers along with VLAN as the type driver. Would it be possible to have the following configuration for that purpose?
ml2_conf.ini:
[ml2]
mechanism_drivers = openvswitch,linuxbridge
[ml2_type_vlan]
network_vlan_ranges = physnet1:40:60,physnet2:60:80
eth3 is a port of the provider bridge: ovs-vsctl add-port br-provider eth3
openvswitch_agent.ini: [ovs] bridge_mappings = physnet1:br-provider
linuxbridge_agent.ini: [linux_bridge] physical_interface_mappings = physnet2:eth3
I don't think it will work, because you would need to have the same interface in the OVS bridge (br-provider) and also have it used by linuxbridge. But TBH this is a bit of a strange configuration to me. I can imagine different compute nodes using different backends, but why do you want to use the linuxbridge and openvswitch agents together on the same compute node?
Ya, in this case it won't work, although there are cases where it would. Linux bridge is better for multicast-heavy workloads, so you might want all VXLAN traffic to be handled by linuxbridge while all VLAN traffic is handled by OVS.
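A rough sketch of that kind of split, under the assumption that the OVS agent is left without tunnel_types so VXLAN binding falls through to the linuxbridge agent (interface names and the local_ip are placeholders):

ml2_conf.ini:
[ml2]
mechanism_drivers = openvswitch,linuxbridge
tenant_network_types = vxlan,vlan

openvswitch_agent.ini:
[ovs]
bridge_mappings = physnet1:br-provider
# no tunnel_types configured, so this agent only serves VLAN/flat segments

linuxbridge_agent.ini:
[vxlan]
enable_vxlan = true
local_ip = 192.0.2.10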
If it's mandatory to use different network interfaces any guide or sample reference about implementing multiple mechanism drivers would be highly appreciated.
It's really intended that you have different ML2 drivers on different hosts, with one exception: if the vnic types supported by each ML2 driver do not overlap, then you can have two different ML2 drivers on the same host, e.g. SR-IOV plus something else. It should also be noted that mixed deployments really only work properly for VLAN and flat networks; tunnelled networks like VXLAN will not form the required mesh to allow communication between hosts with different backends.

If you want to run linuxbridge and OVS on the same host, you could do it by using two NICs with different physnets, e.g. using eth2 for OVS and eth3 for linuxbridge:

openvswitch_agent.ini:
[ovs]
bridge_mappings = ovs-physnet:br-provider
(with eth2 added to br-provider)
linuxbridge_agent.ini:
[linux_bridge]
physical_interface_mappings = linuxbridge-physnet:eth3
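For completeness, the matching server-side ml2_conf.ini for that two-NIC layout might look like this (the VLAN ranges are made up for the example):

[ml2]
mechanism_drivers = openvswitch,linuxbridge
[ml2_type_vlan]
network_vlan_ranges = ovs-physnet:40:60,linuxbridge-physnet:60:80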
Tunnels will only ever be served by one of the two ML2 drivers on any one host, determined by mechanism_drivers = openvswitch,linuxbridge; in this case OVS would be used for all tunnel traffic unless you segregate the VNI ranges so that linuxbridge and OVS have different ranges. Neutron will still basically allocate networks in ascending order of VNIs, so it will fill one range before using the other. As an operator you could specify the VNI to use when creating a network, but I don't believe normal users can do that.

In general I would not advise doing this and would just use one ML2 driver for vnic_type=normal on a single host, as it makes debugging much, much harder and vendors won't generally support this custom config. It's possible to do, but you really, really need to know how the different backends work. Simple is normally better when it comes to networking.
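As a sketch of those two knobs, the VNI pools live in ml2_conf.ini and an admin can pin a specific VNI at network creation time (the ranges, VNI and network name below are only examples):

[ml2_type_vxlan]
vni_ranges = 1:1000,2001:3000

openstack network create --provider-network-type vxlan --provider-segment 2042 demo-net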
Thanks in advance, Gabriel Gamero
participants (3):
- Gabriel Omar Gamero Montenegro
- Sean Mooney
- Slawek Kaplonski