[Kolla-ansible][Neutron] VMs not getting public IPs if attached directly to public subnet
Hi,

We have an HCI deployment with 3 controllers and 9 compute/storage nodes. Two of the controllers have the neutron server role. The platform uses two bonded interfaces:

- bond1: used as the *neutron_external_interface*
- bond0: carries several VLANs on top of it to segregate the rest of the networks:
  - bond0 (native VLAN): node deployment (DHCP, TFTP, PXE boot)
  - bond0.10: VLAN 10, Ceph public
  - bond0.20: VLAN 20, Ceph cluster
  - bond0.30: VLAN 30, API
  - bond0.40: VLAN 40, tunnel
  - bond0.50: VLAN 50, public network; the public IPs of the 3 controllers are here, and the public Horizon VIP interface is created here

In our configuration we have *"enable_neutron_provider_networks = yes"*, which means that an instance can have a public IP directly, without using a virtual router + NAT. But it does not work.

If we create an instance on a private network and then attach a floating IP to it, the VM is reachable from the Internet. But if we attach the VM directly to the public network, it does not get an IP address from the public pool. We think it is a DHCP problem, but we could not find the source; we suspect the VLAN part.

The controllers are in VLAN 50, and if we create a virtual router it gets its public IP without any problem. But if we are not mistaken, an instance plugged directly into the public network uses bond1 to send its DHCP requests, and since this interface is not in VLAN 50, the requests don't reach the controllers. Is this right? If yes, is there a solution? Can we use bond1.50 as the interface for kolla's *neutron_external_interface* instead?

Regards.
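For reference, a minimal sketch of the kolla-ansible settings described above, plus a quick check of how the public network is mapped onto the physical interfaces; the interface names are the ones from this message, and the network name "public" is an assumption:

    # Relevant excerpt from /etc/kolla/globals.yml for this setup
    grep -E 'neutron_external_interface|enable_neutron_provider_networks' /etc/kolla/globals.yml
    # neutron_external_interface: "bond1"
    # enable_neutron_provider_networks: "yes"

    # Check how the public provider network is defined: a 'flat' network is sent
    # untagged on bond1, while a 'vlan' network with segmentation ID 50 is sent
    # tagged, which decides whether instance/DHCP traffic ever lands in VLAN 50.
    openstack network show public -c "provider:network_type" \
        -c "provider:physical_network" -c "provider:segmentation_id"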
Hi,

This question has been asked multiple times; you should be able to find a couple of threads. We use config-drive for provider networks to inject the metadata (IP, gateway, etc.) into the instances.

Regards,
Eugen
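A minimal sketch of the config-drive approach Eugen describes; the image, flavor, and network names are placeholders:

    # Boot the instance with a config drive so cloud-init can read the IP,
    # gateway, etc. from the drive instead of depending on DHCP/metadata over
    # the provider network.
    openstack server create \
        --image <image> --flavor <flavor> \
        --network public \
        --config-drive true \
        provider-net-test

    # Alternatively, nova can always attach a config drive if nova.conf sets
    # force_config_drive = true in the [DEFAULT] section.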
Hi,

Thanks for the reply.

Is the analysis of the problem correct?

We tried this: we created an instance with an interface on the public network, and the interface did not get initialized. Then we did the following (sketched below):
1 - set a public IP address statically on the interface: the instance did not connect to the Internet.
2 - created a VLAN interface (VLAN 50) with a public IP: the instance still did not connect to the Internet.

It seems the analysis is wrong, or we are missing something?

Regards.
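Roughly what those two guest-side tests correspond to, as a sketch; the IP addresses and interface names are placeholders, and VLAN 50 matches the thread:

    # 1) Static public IP configured directly on the instance NIC
    ip addr add 203.0.113.10/24 dev eth0
    ip route add default via 203.0.113.1

    # 2) VLAN 50 sub-interface inside the guest, carrying the public IP
    ip link add link eth0 name eth0.50 type vlan id 50
    ip link set eth0.50 up
    ip addr add 203.0.113.10/24 dev eth0.50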
I asked a similar question recently and included a summary of my conclusions in the last post:
https://lists.openstack.org/pipermail/openstack-discuss/2022-November/031230...

If you must create an instance in the public subnet for some reason (rather than assign a floating IP), the workaround I found was to disable port security entirely (I do not recommend this).

Tobias McNulty
Chief Executive Officer
www.caktusgroup.com
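A sketch of the workaround Tobias mentions (explicitly not recommended); the port ID and the network name "public" are placeholders:

    # Disable port security on a single instance port (its security groups have
    # to be removed at the same time, hence --no-security-group):
    openstack port set --no-security-group --disable-port-security <port-id>

    # Or disable it for every new port created on the network:
    openstack network set --disable-port-security public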
participants (3)
- Eugen Block
- Tobias McNulty
- wodel youchi