Hi!

So, what are your doubts? This kind of setup is totally possible, at least when using ml2.ovs or ml2.lxb as the network driver.

Assuming the interface you're going to use for VLANs is named bond0, provider_networks can look like this:

provider_networks:
    ...
    - network:
        container_bridge: "br-vlan"
        container_type: "veth"
        network_interface: "bond0"
        net_name: "vlan-net"
        type: "vlan"
        range: "200:1200"
        group_binds:
          - neutron_openvswitch_agent

With that config you don't need to create a br-vlan bridge anywhere; just having the bond0 interface present consistently across all compute and network nodes is enough.
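For reference, here's a minimal sketch of what bond0 could look like in /etc/network/interfaces on Debian 11. This assumes the ifenslave package is installed and uses two hypothetical member NICs eno1/eno2; 802.3ad mode also assumes LACP is configured on the switch side:

    # bond0 only carries tagged VLAN traffic, so no IP address is assigned here
    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-mode 802.3ad
        bond-miimon 100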

After that, you can create a network in Neutron like this:
openstack network create --provider-network-type vlan --provider-physical-network vlan-net --provider-segment 200 vlan-200
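Since your instances should join an existing network, you'd usually also add a subnet whose allocation pool doesn't overlap with addresses already in use on that VLAN. The CIDR, pool, and gateway below are placeholders for your actual network:

openstack subnet create --network vlan-200 --subnet-range 192.168.200.0/24 --allocation-pool start=192.168.200.100,end=192.168.200.200 --gateway 192.168.200.1 vlan-200-subnet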

You can check more docs on OVS setup here:
https://docs.openstack.org/openstack-ansible-os_neutron/latest/app-openvswitch.html#openstack-ansible-user-variables
https://docs.openstack.org/openstack-ansible/latest/user/network-arch/example.html

But keep in mind that VXLANs are more commonly used and are the recommended way to connect VMs between compute nodes and to Neutron L3 routers for floating IP functionality.
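For comparison, a VXLAN entry in provider_networks typically looks roughly like this; the container_interface name and the tunnel ID range here are assumptions based on the example docs linked above, so adjust them for your environment:

    - network:
        container_bridge: "br-vxlan"
        container_type: "veth"
        container_interface: "eth10"
        ip_from_q: "tunnel"
        type: "vxlan"
        range: "1:1000"
        net_name: "vxlan"
        group_binds:
          - neutron_openvswitch_agent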

I'm not familiar enough with ml2.ovn to say how to set up VLANs that pass directly to the computes there, as it might be slightly different, at least in terms of group binds. But according to the doc https://docs.openstack.org/neutron/latest/install/ovn/manual_install.html it should be pretty much the same otherwise.
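If I had to guess, the same VLAN network entry under OVN would mostly differ in the group bind, something like the snippet below. Note that neutron_ovn_controller is my assumption for the OVN equivalent of the OVS agent group, so please verify it against the os_neutron OVN appendix:

        group_binds:
          - neutron_ovn_controller    # assumption: OVN counterpart of neutron_openvswitch_agent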

Hope this helps.

Sat, 20 May 2023, 01:23 Murilo Morais <murilo@evocorp.com.br>:
Good evening everyone!

I'm trying to set up a lab for testing using OpenStack-Ansible (OSA) and I'm having a lot of trouble understanding and setting up the network.

I'm trying something similar to AIO (all-in-one) but with customizations (2 compute nodes).

I'm using Debian 11 as the OS.

My problem is that I need the instances to communicate through VLANs that are being delivered directly to the interface of each compute node, as I need the same instances to participate in an existing network.

I have a lot of doubts about this type of setup and what the configuration of provider_networks should look like.

Thanks in advance!