OpenStack instances cannot access the Internet
# Openstack instances cannot access Internet [linuxbridge]

I am having serious issues with the deployment of an OpenStack scenario related to the Linux bridge. This is the scenario:

- Controller machine:
  - Management interface `enp2s0`: 138.100.10.25.
- Compute machine:
  - Management interface `enp2s0`: 138.100.10.26.
  - Provider interface `enp0s20f0u4`: 138.100.10.27.

An OpenStack Train scenario has been successfully deployed on CentOS 8, choosing networking option 2 (self-service networks). To verify the functionality, an image has been uploaded, an OpenStack flavor and a security group have been created, and a couple of CirrOS instances have been launched for connection testing.

We have created a provider network following [this tutorial](https://docs.openstack.org/newton/install-guide-rdo/launch-instance-networks...) and a self-service network following [this one](https://docs.openstack.org/newton/install-guide-rdo/launch-instance-networks...).

The network scenario is the following. As can be seen in the network topology, an external network 138.100.10.0/21 (provider) and an internal network with gateway 192.168.1.1 (self-service) have been created, connected through a router via the interfaces 138.100.10.198 and 192.168.1.1, both active.

Our problem is that our Linux bridge is not working as expected: the OpenStack CirrOS instances have no Internet access.

This is the controller `ip a` and `brctl show` command output:

This is the compute `ip a` and `brctl show` command output:

(The output of the `ovs-vsctl show` command is empty on both machines.)

**Are the Linux bridges correctly created?** These are the Linux bridge configuration files:

* Controller `/etc/neutron/plugins/ml2/linuxbridge_agent.ini`:

```
[linux_bridge]
physical_interface_mappings = provider:enp2s0 # enp2s0 is the interface associated to 138.100.10.25

[vxlan]
enable_vxlan = true
local_ip = 138.100.10.25 # controller has only 1 IP
l2_population = true
```

* Compute `/etc/neutron/plugins/ml2/linuxbridge_agent.ini`:

```
[linux_bridge]
physical_interface_mappings = provider:enp0s20f0u4 # interface associated to 138.100.10.26

[vxlan]
enable_vxlan = true
local_ip = 138.100.10.27
l2_population = true
```

An **observation** to keep in mind is that the compute management interface (`138.100.10.26`) is inaccessible from anywhere, which I think is not correct, since it prevents us, for example, from accessing the instance console through its URL.

I have made some connection tests and these are the results:

- There is **connection** between CirrOS A and CirrOS B (in both directions).
- There is **connection** between CirrOS A/B and the self-service gateway (192.168.1.1) (in both directions).
- There is **connection** between CirrOS A/B and the provider gateway (138.100.10.198) (in both directions).
- There is **connection** between CirrOS A/B and the controller management interface (138.100.10.25) (in both directions).
- There is **no connection** between CirrOS A/B and the compute management interface (138.100.10.26). This interface is not accessible.
- There is **connection** between CirrOS A/B and the compute provider interface (138.100.10.27) (in both directions).

I do not know if there is a problem in the Linux bridge configuration files, or whether maybe I need another network interface on the controller machine.
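(For reference, a minimal diagnostic sketch for this kind of "instances cannot reach the Internet" symptom; it assumes the standard Train layout where the L3 agent and the `qrouter` namespace live on the controller, and `<ROUTER_UUID>` is a placeholder to be substituted from your own deployment:)

```
# Check that all Neutron agents (linuxbridge, L3, DHCP, metadata) are alive
openstack network agent list

# Find the router that links the self-service and provider networks
openstack router list

# The L3 agent creates a qrouter-<router UUID> namespace on the node it runs on
ip netns list

# Verify both router interfaces (192.168.1.1 and 138.100.10.198) are up inside it
ip netns exec qrouter-<ROUTER_UUID> ip addr

# Verify the SNAT rule that should translate instance traffic to 138.100.10.198
ip netns exec qrouter-<ROUTER_UUID> iptables -t nat -S | grep SNAT

# Test outbound connectivity from inside the router namespace itself; if this
# fails, the problem is upstream of the instances (provider network or default
# route), not the Linux bridges on the compute node
ip netns exec qrouter-<ROUTER_UUID> ping -c 3 8.8.8.8
```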
On 2/11/21 12:21 PM, Jaime wrote:
# Openstack instances cannot access Internet [linuxbridge]
I am having serious issues in the deployment of the Openstack scenario related to the Linux Bridge. This is the scenario:
- Controller machine:
  - Management interface `enp2s0`: 138.100.10.25.
- Compute machine:
  - Management interface `enp2s0`: 138.100.10.26.
  - Provider interface `enp0s20f0u4`: 138.100.10.27.
I'm not sure what you got wrong, but if I may...

You should *not* expose your compute machines to the internet (and probably not your controller either, except the API). You should set them up with a private network address (192.168.x.x or 10.x.x.x, for example). Only your VMs should have access to the internet.

I would strongly recommend revisiting your network setup.

I hope this helps,

Cheers,

Thomas Goirand (zigo)
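(As an illustration only: the 10.0.0.x addresses below are hypothetical and not taken from this thread, and the NetworkManager connection names are assumed to match the device names. On CentOS 8, re-addressing the management interfaces onto a private network could look roughly like this:)

```
# Hypothetical private management addressing (10.0.0.0/24 is an example)
# On the controller:
nmcli connection modify enp2s0 ipv4.method manual ipv4.addresses 10.0.0.11/24
nmcli connection up enp2s0

# On the compute node:
nmcli connection modify enp2s0 ipv4.method manual ipv4.addresses 10.0.0.31/24
nmcli connection up enp2s0

# Any management IPs referenced in nova.conf, neutron.conf and
# linuxbridge_agent.ini (e.g. local_ip) would then need updating to match.
```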
On 2021-02-11 18:45:48 +0100 (+0100), Thomas Goirand wrote: [...]
You should *not* expose your compute machines to the internet (and probably not your controller either, except the API). You should set them up with a private network address (192.168.x.x or 10.x.x.x for example). Only your VMs should have access to internet. I would strongly recommend revisiting your network setup. [...]
I'm not trying to be pedantic, but just because something has an RFC 1918 address doesn't mean it's not also exposed to the Internet (for example via a port forward, or 1:1 NAT, or through a proxy, or an interface alias, or another interface, or another address family like inet6, or...). Conversely, using globally routable addresses doesn't mean those systems are necessarily exposed to the Internet either (they could be secured behind this new-fangled contraption called a "network firewall" which is a far more thorough means of policy enforcement than merely hopes and wishes that certain addresses won't be reachable thanks to loosely obeyed routing conventions).

While not wasting global IPv4 addresses on systems which don't need to be generally reachable is probably a sensible idea from the perspective of v4 address exhaustion/conservation, it's dangerous to assume or suggest that something is secure from remote tampering just because it happens to have an RFC 1918 address on it.

-- Jeremy Stanley
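(A minimal sketch of this kind of explicit, address-agnostic policy, assuming firewalld on CentOS 8; the trusted admin subnet and the choice of ports are examples only, not something prescribed in this thread:)

```
# Hypothetical example: only allow SSH and selected OpenStack APIs from a
# trusted admin network, regardless of whether the host carries an RFC 1918
# or a globally routable address.
firewall-cmd --permanent --new-zone=mgmt
firewall-cmd --permanent --zone=mgmt --add-source=138.100.8.0/24   # example subnet
firewall-cmd --permanent --zone=mgmt --add-service=ssh
firewall-cmd --permanent --zone=mgmt --add-port=5000/tcp           # keystone
firewall-cmd --permanent --zone=mgmt --add-port=8774/tcp           # nova API
firewall-cmd --reload
```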
On 2/11/21 8:11 PM, Jeremy Stanley wrote:
On 2021-02-11 18:45:48 +0100 (+0100), Thomas Goirand wrote: [...]
You should *not* expose your compute machines to the internet (and probably not your controller either, except the API). You should set them up with a private network address (192.168.x.x or 10.x.x.x for example). Only your VMs should have access to internet. I would strongly recommend revisiting your network setup. [...]
I'm not trying to be pedantic, but just because something has an RFC 1918 address doesn't mean it's not also exposed to the Internet (for example via a port forward, or 1:1 NAT, or through a proxy, or an interface alias, or another interface, or another address family like inet6, or...). Conversely, using globally routable addresses doesn't mean those systems are necessarily exposed to the Internet either (they could be secured behind this new-fangled contraption called a "network firewall" which is a far more thorough means of policy enforcement than merely hopes and wishes that certain addresses won't be reachable thanks to loosely obeyed routing conventions).
While not wasting global IPv4 addresses on systems which don't need to be generally reachable is probably a sensible idea from the perspective of v4 address exhaustion/conservation, it's dangerous to assume or suggest that something is secure from remote tampering just because it happens to have an RFC 1918 address on it.
I very much agree with that.

On top of this, I would also suggest that the compute nodes (and in fact, any component of the infrastructure) have no outbound access to the internet either, simply because they don't need it. To get things installed, just set up a package mirror or a proxy.

For the VMs' connectivity, if using DVR, the br-ex of the compute nodes (and the network nodes) can be connected to a VLAN that is different from the one used for managing the infrastructure. Neutron manages this pretty well.

Cheers,

Thomas Goirand (zigo)
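(As a small illustration of the "package proxy instead of outbound internet" idea, assuming a hypothetical proxy at 10.0.0.5:3128 that is not part of this thread, dnf on CentOS 8 can be pointed at it like this:)

```
# Hypothetical proxy address; adjust to your own mirror/proxy host
echo "proxy=http://10.0.0.5:3128" >> /etc/dnf/dnf.conf

# Packages can then be installed without giving the node a default
# route to the internet, e.g.:
dnf install -y chrony
```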
Participants (3):

- Jaime
- Jeremy Stanley
- Thomas Goirand