Hi,

We have an HCI deployment with 3 controllers and 9 compute/storage nodes.
Two of the controllers run the Neutron server role.
The platform uses two bonded interfaces:
bond1: used for neutron_external_interface

bond0: carries several VLANs on top of it to segregate the rest of the networks:
     - bond0 : native VLAN, used for node deployment (DHCP, TFTP, PXE boot)
     - bond0.10 : VLAN 10, Ceph public
     - bond0.20 : VLAN 20, Ceph cluster
     - bond0.30 : VLAN 30, API
     - bond0.40 : VLAN 40, tunnel
     - bond0.50 : VLAN 50, public network; the public IPs of the three controllers live here, and the public Horizon VIP interface is created on this VLAN
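
For reference, the relevant interface settings in our globals.yml look roughly like this (simplified sketch; the subinterface-to-role mapping is our reading of the layout above):

    # globals.yml (simplified)
    network_interface: "bond0"                 # native VLAN: deployment (DHCP, TFTP, PXE)
    storage_interface: "bond0.10"              # VLAN 10: Ceph public
    api_interface: "bond0.30"                  # VLAN 30: API
    tunnel_interface: "bond0.40"               # VLAN 40: tunnel
    kolla_external_vip_interface: "bond0.50"   # VLAN 50: public Horizon VIP
    neutron_external_interface: "bond1"        # provider/external networks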

In our configuration we have "enable_neutron_provider_networks = yes", which should allow an instance to get a public IP directly, without a virtual router + NAT. But it does not work.
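
For context, the public network and its subnet were created along these lines (a sketch only; the names, the flat network type, and the address ranges are illustrative, and physnet1 is Kolla's default mapping for neutron_external_interface):

    openstack network create public \
      --external \
      --provider-network-type flat \
      --provider-physical-network physnet1
    openstack subnet create public-subnet \
      --network public \
      --subnet-range 203.0.113.0/24 \
      --allocation-pool start=203.0.113.10,end=203.0.113.250 \
      --dhcp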

If we create an instance on a private network and then attach a floating IP to it, the VM is reachable from the Internet. But if we attach the VM directly to the public network, it does not get an IP address from the public pool. We suspect a DHCP problem, but we could not find the source; we think the VLAN layout is the culprit.
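
Concretely (commands simplified; image and flavor names are just examples):

    # Works: private network + floating IP
    openstack server create --image cirros --flavor m1.tiny --network private vm1
    openstack floating ip create public
    openstack server add floating ip vm1 203.0.113.15

    # Does not work: instance plugged directly into the public network
    openstack server create --image cirros --flavor m1.tiny --network public vm2
    # -> vm2 never gets an address from the public pool over DHCP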

The controllers are in VLAN 50, and if we create a virtual router it gets its public IP without any problem. But, if we are not mistaken, an instance plugged directly into the public network sends its DHCP requests over bond1, and since that interface is not in VLAN 50, the requests never reach the controllers. Is this right? If so, is there a solution? For example, can we use bond1.50 as the interface for Kolla's neutron_external_interface instead (see the sketch below)?
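
That is, something like this (untested sketch of the change we are considering):

    # globals.yml -- proposed change (untested)
    neutron_external_interface: "bond1.50"

With bond1.50, DHCP requests from instances on the public network would leave the hosts already tagged with VLAN 50, which is what we suspect is missing today.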



Regards.