[openstack-ansible] Dedicated gateway hosts not working with OVN
Dmitriy Rabotyagov
noonedeadpunk at gmail.com
Sat Sep 2 16:08:23 UTC 2023
Hi,
I think this is a known issue, which should be fixed by the following patch:
https://review.opendev.org/c/openstack/openstack-ansible/+/892540
In the meantime, you should be able to work around the issue by creating an
/etc/openstack_deploy/env.d/nova.yml file with the following content:
nova_compute_container:
  belongs_to:
    - compute_containers
    - kvm-compute_containers
    - qemu-compute_containers
  contains:
    - neutron_sriov_nic_agent
    - neutron_ovn_controller
    - nova_compute
  properties:
    is_metal: true
You might also need to remove the computes from the inventory using
/opt/openstack-ansible/scripts/inventory-manage.py -r cmp03
They will be re-added the next time you run openstack-ansible or
dynamic-inventory.py. Removing them is needed to ensure that they're no
longer part of the ovn-gateway related group.
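For example, something like this should do it (a sketch; host names are the ones from your openstack_user_config.yml, and paths assume a standard /opt/openstack-ansible checkout):

```shell
# List the current inventory to confirm the compute host names
/opt/openstack-ansible/scripts/inventory-manage.py -l

# Remove the compute hosts so their group membership gets regenerated
/opt/openstack-ansible/scripts/inventory-manage.py -r cmp3
/opt/openstack-ansible/scripts/inventory-manage.py -r cmp4

# Running the dynamic inventory re-adds them with the corrected groups
/opt/openstack-ansible/inventory/dynamic_inventory.py > /dev/null
```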
You might also need to stop the ovn-gateway service on these computes
manually, but I'm not 100% sure about that.
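If you want to check that, the gateway role of a chassis is advertised through the ovn-cms-options external_id, so something like the following should show the current state (a sketch; the exact keys depend on how the chassis was configured):

```shell
# On a compute node: see whether the local chassis still advertises
# itself as a gateway candidate (look for "enable-chassis-as-gw")
ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options

# Drop the gateway flag from the chassis if it is still set
ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options

# On an OVN DB host: "ovn-sbctl show" lists each chassis with its bound
# ports; a router's external gateway port shows up as cr-lrp-<uuid>
# under whichever chassis OVN scheduled it on
ovn-sbctl show
```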
On Sat, Sep 2, 2023, 17:47 Roger Rivera <roger.riverac at gmail.com> wrote:
> Hello,
>
> We have deployed an openstack-ansible cluster to test it on_metal with
> OVN and defined *dedicated gateway hosts* connecting to the external
> network with the *network-gateway_hosts* host group. Unfortunately, we
> are not able to connect to the external/provider networks. It seems that
> traffic wants to reach external networks via the hypervisor nodes and not
> the gateway hosts.
>
> Any suggestions on changes needed to our configuration will be highly
> appreciated.
>
> Environment:
> - OpenStack Antelope
> - Ubuntu 22 on all hosts
> - 3 infra hosts - 1x NIC (ens1)
> - 2 compute hosts - 1x NIC (ens1)
> - 2 gateway hosts - 2x NIC (ens1 internal, ens2 external)
> - No Linux bridges are created.
>
> The gateway hosts are the only ones physically connected to the external
> network via physical interface ens2. Therefore, we need all external
> provider network traffic to traverse via these gateway hosts.
>
> Tenant networks work fine and VMs can talk to each other. However, when a
> VM is spawned with a floating IP on the external network, it is unable
> to reach the outside network.
>
> Relevant content from openstack-ansible configuration files:
>
>
> =.=.=.=.=.=.=.=
> openstack_user_config.yml
> =.=.=.=.=.=.=.=
> ```
> ...
> provider_networks:
>   - network:
>       container_bridge: "br-mgmt"
>       container_type: "veth"
>       container_interface: "ens1"
>       ip_from_q: "management"
>       type: "raw"
>       group_binds:
>         - all_containers
>         - hosts
>       is_management_address: true
>   - network:
>       container_bridge: "br-vxlan"
>       container_type: "veth"
>       container_interface: "ens1"
>       ip_from_q: "tunnel"
>       #type: "vxlan"
>       type: "geneve"
>       range: "1:1000"
>       net_name: "geneve"
>       group_binds:
>         - neutron_ovn_controller
>   - network:
>       container_bridge: "br-flat"
>       container_type: "veth"
>       container_interface: "ens1"
>       type: "flat"
>       net_name: "flat"
>       group_binds:
>         - neutron_ovn_controller
>   - network:
>       container_bridge: "br-vlan"
>       container_type: "veth"
>       container_interface: "ens1"
>       type: "vlan"
>       range: "101:300,401:500"
>       net_name: "vlan"
>       group_binds:
>         - neutron_ovn_controller
>   - network:
>       container_bridge: "br-storage"
>       container_type: "veth"
>       container_interface: "ens1"
>       ip_from_q: "storage"
>       type: "raw"
>       group_binds:
>         - glance_api
>         - cinder_api
>         - cinder_volume
>         - nova_compute
>
> ...
>
> compute-infra_hosts:
>   inf1:
>     ip: 172.16.0.1
>   inf2:
>     ip: 172.16.0.2
>   inf3:
>     ip: 172.16.0.3
>
> compute_hosts:
>   cmp4:
>     ip: 172.16.0.21
>   cmp3:
>     ip: 172.16.0.22
>
> network_hosts:
>   inf1:
>     ip: 172.16.0.1
>   inf2:
>     ip: 172.16.0.2
>   inf3:
>     ip: 172.16.0.3
>
> network-gateway_hosts:
>   net1:
>     ip: 172.16.0.31
>   net2:
>     ip: 172.16.0.32
>
> ```
>
>
> =.=.=.=.=.=.=.=
> user_variables.yml
> =.=.=.=.=.=.=.=
> ```
> ---
> debug: false
> install_method: source
> rabbitmq_use_ssl: False
> haproxy_use_keepalived: False
> ...
> neutron_plugin_type: ml2.ovn
> neutron_plugin_base:
>   - neutron.services.ovn_l3.plugin.OVNL3RouterPlugin
>
> neutron_ml2_drivers_type: geneve,vlan,flat
> neutron_ml2_conf_ini_overrides:
>   ml2:
>     tenant_network_types: geneve
>
> ...
> ```
>
> =.=.=.=.=.=.=.=
> env.d/neutron.yml
> =.=.=.=.=.=.=.=
> ```
> component_skel:
>   neutron_ovn_controller:
>     belongs_to:
>       - neutron_all
>   neutron_ovn_northd:
>     belongs_to:
>       - neutron_all
>
> container_skel:
>   neutron_agents_container:
>     contains: {}
>     properties:
>       is_metal: true
>   neutron_ovn_northd_container:
>     belongs_to:
>       - network_containers
>     contains:
>       - neutron_ovn_northd
>
> ```
>
> =.=.=.=.=.=.=.=
> env.d/nova.yml
> =.=.=.=.=.=.=.=
> ```
> component_skel:
>   nova_compute_container:
>     belongs_to:
>       - compute_containers
>       - kvm-compute_containers
>       - lxd-compute_containers
>       - qemu-compute_containers
>     contains:
>       - neutron_ovn_controller
>       - nova_compute
>     properties:
>       is_metal: true
> ```
>
> =.=.=.=.=.=.=.=
> group_vars/network_hosts
> =.=.=.=.=.=.=.=
> ```
> openstack_host_specific_kernel_modules:
>   - name: "openvswitch"
>     pattern: "CONFIG_OPENVSWITCH"
>
> The nodes layout is like this:
>
> [image: image.png]
>
>
> Any guidance on what we have wrong or how to improve this configuration
> will be appreciated. We need to make external traffic for VMs to go out via
> the gateway nodes and not the compute/hypervisor nodes.
>
> Thank you.
>
> Roger
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image.png
Type: image/png
Size: 16574 bytes
Desc: not available
URL: <https://lists.openstack.org/pipermail/openstack-discuss/attachments/20230902/cbdcfe1a/attachment-0001.png>