I'm planning to deploy OpenStack with two mechanism drivers, openvswitch and SR-IOV, on physical servers that have only one network interface card. According to what I've read, it is possible to use the same physical interface with these two mechanism drivers. I assume this is possible because a NIC with SR-IOV capabilities can be divided into a Physical Function (which I'd use for openvswitch) and many Virtual Functions (which I'd use for SR-IOV).

Before changing anything on the physical servers, I was planning to use a test environment with virtual machines, where I don't have a NIC with SR-IOV capabilities. Since the openvswitch mechanism driver works with security groups and the SR-IOV mechanism driver can't have security groups enabled, I was planning to use linuxbridge as a replacement and disable the security group feature. That way I can run security tests against an SDN module that I'm developing for OpenStack networks with SR-IOV.
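Roughly, the production configuration I have in mind looks like this (the interface and physnet names are just placeholders; I'm assuming the standard sriovnicswitch mechanism driver and the Neutron SR-IOV NIC agent):

ml2_conf.ini:
[ml2]
mechanism_drivers = openvswitch,sriovnicswitch

openvswitch_agent.ini (the PF attached to the OVS provider bridge):
[ovs]
bridge_mappings = physnet1:br-provider

sriov_agent.ini (the VFs exposed through the same NIC):
[sriov_nic]
physical_device_mappings = physnet2:eth3

And in the test environment, disabling the firewall on the linuxbridge side would look something like:

linuxbridge_agent.ini:
[securitygroup]
enable_security_group = false
firewall_driver = noop

Thanks,
Gabriel Gamero

On Mon, Nov 9, 2020 at 2:36 AM Slawek Kaplonski (<skaplons@redhat.com>) wrote: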
Hi,
On Sunday, November 8, 2020 at 04:06:50 CET, Gabriel Omar Gamero Montenegro wrote:
Dear all,
I know that the ML2 Neutron core plugin is designed to support multiple mechanism and type drivers simultaneously. But I'd like to know: is it possible to use the same network interface configured with different ML2 mechanism drivers?
I'm planning to use openvswitch and linuxbridge as mechanism drivers, along with VLAN as the type driver. Would the following configuration work for that purpose?
ml2_conf.ini:
[ml2]
mechanism_drivers = openvswitch,linuxbridge
[ml2_type_vlan]
network_vlan_ranges = physnet1:40:60,physnet2:60:80

eth3 is a port of the provider bridge:
ovs-vsctl add-port br-provider eth3

openvswitch_agent.ini:
[ovs]
bridge_mappings = physnet1:br-provider

linuxbridge_agent.ini:
[linux_bridge]
physical_interface_mappings = physnet2:eth3
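If this worked, I would expect "openstack network agent list" to show both the Open vSwitch and Linux bridge agents alive on the node, and I'd test each path by creating one VLAN network per physnet, something like (network names and segment IDs are just examples within the ranges above):

openstack network create --provider-network-type vlan \
  --provider-physical-network physnet1 --provider-segment 45 net-ovs
openstack network create --provider-network-type vlan \
  --provider-physical-network physnet2 --provider-segment 65 net-lb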
I don't think it will work, because you would need to have the same interface in the OVS bridge (br-provider) and also have it used by linuxbridge. But TBH this is a bit of a strange configuration to me. I can imagine different compute nodes using different backends, but why do you want to use the linuxbridge and openvswitch agents together on the same compute node?
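To illustrate the conflict (using the interface name from your example; the brq bridge name is a placeholder, the agent derives it from the network UUID):

# the OVS agent needs eth3 attached to the provider bridge:
ovs-vsctl add-port br-provider eth3
# while for each VLAN network the linuxbridge agent creates a VLAN
# subinterface on the same eth3 and plugs it into a per-network bridge:
ip link add link eth3 name eth3.65 type vlan id 65
ip link set eth3.65 master brqXXXXXXXXXXX

Both agents would be trying to manage the same physical device at the same time.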
If it's mandatory to use different network interfaces, any guide or sample reference about implementing multiple mechanism drivers would be highly appreciated.
Thanks in advance,
Gabriel Gamero
--
Slawek Kaplonski
Principal Software Engineer
Red Hat