Re: [openstack-ansible] Dedicated gateway hosts not working with OVN
Hey,

1. Sorry, my bad - I was copying from my phone, so the required extra section (container_skel) slipped out of my paste. /etc/openstack_deploy/env.d/nova.yml should look like this:

container_skel:
  nova_compute_container:
    belongs_to:
      - compute_containers
      - kvm-compute_containers
      - qemu-compute_containers
    contains:
      - neutron_sriov_nic_agent
      - neutron_ovn_controller
      - nova_compute
    properties:
      is_metal: true

2. I now see more issues in the defined openstack_user_config. I'm not sure whether they are the cause of the error, but they still need to be adjusted:

a) Replace network_hosts with network-infra_hosts. Defining network_hosts also adds the infra servers to neutron_l3_agent (and other agents), which in fact has no effect but triggers a bug where run_once is handled incorrectly. That would cause a failure further down the line, so I assume it is not your current error yet. You may need to clean up the inventory as a result.

b) Also define network-northd_hosts - this is usually set to the infra nodes and spawns inside LXC. I would also suggest checking the documentation on OVN configuration: https://docs.openstack.org/openstack-ansible-os_neutron/latest/app-ovn.html

c) As for the issue itself: most likely the role is looking for a `container_bridge` or `host_bind_override` key for some network, since one of these keys is expected in order to create the mapping and the OVS bridges for you. It combines net_name with one of these keys. So it would be interesting to see the adjusted openstack_user_config once the above issues are sorted out. I can also suggest defining the mappings directly in neutron_provider_networks, as described in the documentation above.
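For illustration, a minimal sketch of defining those mappings directly via neutron_provider_networks in user_variables.yml - the physnet, bridge, and interface names below are placeholders, not values from any specific deployment:

neutron_provider_networks:
  network_types: "geneve"
  network_geneve_ranges: "1:1000"
  # placeholder physnet:bridge pair for the external network
  network_mappings: "physnet1:br-provider"
  # placeholder bridge:interface pair to plumb the bridge to a NIC
  network_interface_mappings: "br-provider:eth1"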
Hello,

I appreciate the prompt feedback. Unfortunately, after making multiple changes, we still cannot make external networks connect via the gateway hosts. Our follow-up investigation showed the following:

1. Removed the flat and vlan provider_networks from /etc/openstack_deploy/openstack_user_config.yml. Only the management provider network is defined there:

provider_networks:
  - network:
      container_bridge: "br-mgmt"
      container_type: "veth"
      container_interface: "ens4"
      ip_from_q: "management"
      type: "raw"
      group_binds:
        - all_containers
        - hosts
      is_management_address: true

2. Defined the ML2 information and network types in /etc/openstack_deploy/user_variables.yml:

neutron_ml2_conf_ini_overrides:
  ml2:
    tenant_network_types: geneve
  ml2_type_flat:
    flat_networks: flat
  ml2_type_geneve:
    vni_ranges: 1:1000
    max_header_size: 38

3. Moved the neutron_provider_networks configuration to a per-host basis and removed network_mappings and network_interface_mappings for the compute hosts in /etc/openstack_deploy/host_vars/:

compute node /etc/openstack_deploy/host_vars/cmp3:

neutron_provider_networks:
  network_types: "geneve"
  network_geneve_ranges: "1:1000"

gateway node /etc/openstack_deploy/host_vars/net1:

neutron_provider_networks:
  network_types: "geneve"
  network_geneve_ranges: "1:1000"
  network_mappings: "flat:br-flat"
  network_interface_mappings: "br-flat:ens2"

4. The newly recreated inventory targets the correct neutron_ovn_gateway hosts in /etc/openstack_deploy/openstack_inventory.json:

…
"component": "neutron_ovn_gateway",
"container_name": "net1",
"container_networks": {
    "management_address": {
        "address": "172.16.0.31",
        "bridge": "br-mgmt",
--
"component": "neutron_ovn_gateway",
"container_name": "net2",
"container_networks": {
    "management_address": {
        "address": "172.16.0.32",
        "bridge": "br-mgmt",
--
"neutron_ovn_gateway": {
    "children": [],
    "hosts": [
        "net1",
        "net2"
…
5. The correct ovn-cms-options=enable-chassis-as-gw is set on the gateway nodes only:

# ovn-sbctl list chassis | grep 'hostname\|ovn-cms-options'

hostname            : net2
other_config        : {ct-no-masked-label="true", datapath-type=system, iface-types="afxdp,afxdp-nonpmd,bareudp,erspan,geneve,gre,gtpu,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan", is-interconn="false", mac-binding-timestamp="true", ovn-bridge-mappings="flat:br-flat", ovn-chassis-mac-mappings="", ovn-cms-options=enable-chassis-as-gw, ovn-ct-lb-related="true", ovn-enable-lflow-cache="true", ovn-limit-lflow-cache="", ovn-memlimit-lflow-cache-kb="", ovn-monitor-all="false", ovn-trim-limit-lflow-cache="", ovn-trim-timeout-ms="", ovn-trim-wmark-perc-lflow-cache="", port-up-notif="true"}

hostname            : net1
other_config        : {ct-no-masked-label="true", datapath-type=system, iface-types="afxdp,afxdp-nonpmd,bareudp,erspan,geneve,gre,gtpu,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan", is-interconn="false", mac-binding-timestamp="true", ovn-bridge-mappings="flat:br-flat", ovn-chassis-mac-mappings="", ovn-cms-options=enable-chassis-as-gw, ovn-ct-lb-related="true", ovn-enable-lflow-cache="true", ovn-limit-lflow-cache="", ovn-memlimit-lflow-cache-kb="", ovn-monitor-all="false", ovn-trim-limit-lflow-cache="", ovn-trim-timeout-ms="", ovn-trim-wmark-perc-lflow-cache="", port-up-notif="true"}

hostname            : cmp3
other_config        : {ct-no-masked-label="true", datapath-type=system, iface-types="afxdp,afxdp-nonpmd,bareudp,erspan,geneve,gre,gtpu,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan", is-interconn="false", mac-binding-timestamp="true", ovn-bridge-mappings="", ovn-chassis-mac-mappings="", ovn-cms-options="", ovn-ct-lb-related="true", ovn-enable-lflow-cache="true", ovn-limit-lflow-cache="", ovn-memlimit-lflow-cache-kb="", ovn-monitor-all="false", ovn-trim-limit-lflow-cache="", ovn-trim-timeout-ms="", ovn-trim-wmark-perc-lflow-cache="", port-up-notif="true"}

hostname            : cmp4
other_config        : {ct-no-masked-label="true", datapath-type=system, iface-types="afxdp,afxdp-nonpmd,bareudp,erspan,geneve,gre,gtpu,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan", is-interconn="false", mac-binding-timestamp="true", ovn-bridge-mappings="", ovn-chassis-mac-mappings="", ovn-cms-options="", ovn-ct-lb-related="true", ovn-enable-lflow-cache="true", ovn-limit-lflow-cache="", ovn-memlimit-lflow-cache-kb="", ovn-monitor-all="false", ovn-trim-limit-lflow-cache="", ovn-trim-timeout-ms="", ovn-trim-wmark-perc-lflow-cache="", port-up-notif="true"}

RESULT: VMs fail to launch with the external (flat) network.
Error logs showing "Binding failed for port":

Sep 6 19:42:37 net1 nova-conductor[4270]: 2023-09-06 19:42:37.599 4270 ERROR nova.scheduler.utils [None req-25a6a8d6-8122-4621-a2c2-8ca0be5e594c 52059c7247434072b6823d1701fec23e 116579f970b242b996ac717fa7580311 - - default default] [instance: 8760706e-d38f-454d-b90f-b9d5d322ba99] Error from last host: dev-usc1-ost-cmp4 (node dev-usc1-ost-cmp4.openstack.local):

Traceback (most recent call last):
  File "/openstack/venvs/nova-0.1.0.dev8112/lib/python3.10/site-packages/nova/compute/manager.py", line 2607, in _build_and_run_instance
    self.driver.spawn(context, instance, image_meta,
  File "/openstack/venvs/nova-0.1.0.dev8112/lib/python3.10/site-packages/nova/virt/libvirt/driver.py", line 4383, in spawn
    xml = self._get_guest_xml(context, instance, network_info,
  File "/openstack/venvs/nova-0.1.0.dev8112/lib/python3.10/site-packages/nova/virt/libvirt/driver.py", line 7516, in _get_guest_xml
    network_info_str = str(network_info)
  File "/openstack/venvs/nova-0.1.0.dev8112/lib/python3.10/site-packages/nova/network/model.py", line 620, in __str__
    return self._sync_wrapper(fn, *args, **kwargs)
  File "/openstack/venvs/nova-0.1.0.dev8112/lib/python3.10/site-packages/nova/network/model.py", line 603, in _sync_wrapper
    self.wait()
  File "/openstack/venvs/nova-0.1.0.dev8112/lib/python3.10/site-packages/nova/network/model.py", line 635, in wait
    self[:] = self._gt.wait()
  File "/openstack/venvs/nova-0.1.0.dev8112/lib/python3.10/site-packages/eventlet/greenthread.py", line 181, in wait
    return self._exit_event.wait()
  File "/openstack/venvs/nova-0.1.0.dev8112/lib/python3.10/site-packages/eventlet/event.py", line 132, in wait
    current.throw(*self._exc)
  File "/openstack/venvs/nova-0.1.0.dev8112/lib/python3.10/site-packages/eventlet/greenthread.py", line 221, in main
    result = function(*args, **kwargs)
  File "/openstack/venvs/nova-0.1.0.dev8112/lib/python3.10/site-packages/nova/utils.py", line 654, in context_wrapper
    return func(*args, **kwargs)
  File "/openstack/venvs/nova-0.1.0.dev8112/lib/python3.10/site-packages/nova/compute/manager.py", line 1987, in _allocate_network_async
    raise e
  File "/openstack/venvs/nova-0.1.0.dev8112/lib/python3.10/site-packages/nova/compute/manager.py", line 1965, in _allocate_network_async
    nwinfo = self.network_api.allocate_for_instance(
  File "/openstack/venvs/nova-0.1.0.dev8112/lib/python3.10/site-packages/nova/network/neutron.py", line 1216, in allocate_for_instance
    created_port_ids = self._update_ports_for_instance(
  File "/openstack/venvs/nova-0.1.0.dev8112/lib/python3.10/site-packages/nova/network/neutron.py", line 1352, in _update_ports_for_instance
    with excutils.save_and_reraise_exception():
  File "/openstack/venvs/nova-0.1.0.dev8112/lib/python3.10/site-packages/oslo_utils/excutils.py", line 227, in __exit__
    self.force_reraise()
  File "/openstack/venvs/nova-0.1.0.dev8112/lib/python3.10/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
    raise self.value
  File "/openstack/venvs/nova-0.1.0.dev8112/lib/python3.10/site-packages/nova/network/neutron.py", line 1327, in _update_ports_for_instance
    updated_port = self._update_port(
  File "/openstack/venvs/nova-0.1.0.dev8112/lib/python3.10/site-packages/nova/network/neutron.py", line 585, in _update_port
    _ensure_no_port_binding_failure(port)
  File "/openstack/venvs/nova-0.1.0.dev8112/lib/python3.10/site-packages/nova/network/neutron.py", line 294, in _ensure_no_port_binding_failure
    raise exception.PortBindingFailed(port_id=port['id'])
nova.exception.PortBindingFailed: Binding failed for port b82f4518-ecba-49d9-a21d-2646d3f33efd, please check neutron logs for more information.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/openstack/venvs/nova-0.1.0.dev8112/lib/python3.10/site-packages/nova/compute/manager.py", line 2428, in _do_build_and_run_instance
    self._build_and_run_instance(context, instance, image,
  File "/openstack/venvs/nova-0.1.0.dev8112/lib/python3.10/site-packages/nova/compute/manager.py", line 2703, in _build_and_run_instance
    raise exception.RescheduledException(
nova.exception.RescheduledException: Build of instance 8760706e-d38f-454d-b90f-b9d5d322ba99 was re-scheduled: Binding failed for port b82f4518-ecba-49d9-a21d-2646d3f33efd, please check neutron logs for more information.

All we need is to make sure that external networks are routed via the gateway hosts and not via the compute nodes. In our case, the compute nodes have only one physical interface with an IP address and no connectivity to the flat network; no layer 2 connectivity is available on the compute nodes either. That is why external traffic must traverse the gateway nodes only. It is worth noting that tenant/internal networks work fine.

What are we doing wrong?

Thank you.
Hey,

If you are using standalone gateway hosts and the compute nodes have no access to the external networks, I think it is expected that you cannot bind a port from the external network directly to a VM. In this case, access to the external network happens only through L3 routers. So the idea is the following: you attach only private (geneve) networks to the VMs. Then you create a neutron router, which acts as a gateway for the private network and is attached to the external network as well. Then you allocate a floating IP from the external network and attach it to the VM's port on the internal network. This way the external network is routed via the gateway hosts.

Also, regarding your original issue "The task includes an option with an undefined variable. The error was: list object has no element 1" - we had a similar case in IRC yesterday, and James Denton found a workaround by defining the bridge as br-ex instead of naming it br-flat; here is the paste that worked for that person: https://paste.opendev.org/show/bjw3b5ncP6dbhj34ltJU/ He also found an issue in our logic that caused this behaviour and has proposed a patch: https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/893924
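For what it's worth, a rough sketch of that workflow with the openstack CLI - the network, subnet, router, and server names here are placeholders, not values from this deployment:

# create a tenant (geneve) network and subnet for the VM
openstack network create tenant-net
openstack subnet create --network tenant-net --subnet-range 192.168.10.0/24 tenant-subnet

# create a router, set the external network as its gateway, attach the tenant subnet
openstack router create gw-router
openstack router set --external-gateway <external-net> gw-router
openstack router add subnet gw-router tenant-subnet

# boot the VM on tenant-net, then give it a floating IP from the external network
openstack floating ip create <external-net>
openstack server add floating ip <vm-name> <floating-ip-address>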
Hello Dimitry,

Thanks again for your help. Unfortunately, we've tried everything that has been suggested, to no avail. It seems plausible that external connectivity cannot be achieved on the compute nodes if there are no bridges mapped to the external network on those hosts. Keep in mind these compute hosts do not have the ens2 physical interface to bind the ext-br or br-flat bridges to.

Having said that, we would love to see a complete OVN scenario reference configuration with dedicated networking/gateway nodes. The documentation we have reviewed assumes that compute nodes act as gateways and that bridges can be set up on the compute nodes, which is not our case. We rely 100% on a single L3 interface on the compute nodes with GENEVE as the tunneling protocol, and it is because of GENEVE that private east/west traffic works without a problem. Only the networking nodes have that second ens2 network interface physically connected to the external network, hence the need to make those chassis the gateway nodes.

Again, our setup has the following configuration:
- Compute nodes with 1x L3 NIC and IP.
- Network/gateway nodes with 1x L3 NIC and 1x L2 NIC connected to the external network.

Thank you.
I'm not a huge expert in OVN, but I believe this specific part works in pretty much the same way for OVS and LXB. We have exactly the same use case as you do, but with OVS for now. And the only way to get external connectivity is to create a neutron router, which is used as the gateway to the public networks. The router should be created on the OVN gateway nodes, from what I know. So your VMs always have only the geneve network, which is plugged into the router, and the router is connected to the external network on the gateway nodes. A floating IP is essentially a 1-to-1 NAT on the router, which allows your VM to be reached through the external network (and the router). Attaching a public network to a VM directly in your scenario should not be possible by design.

Feel free to join us in the #openstack-ansible channel on the OFTC IRC network and we will be glad to answer your questions.
Hi Roger,

I believe there is an expectation that if a compute node will host a router or an instance connected to a VLAN (provider or tenant network), it should have the provider network interface plumbed to it (and mapped accordingly). On the compute node, you can look at the external_ids field of the 'ovs-vsctl list open_vswitch' output and see ovn-bridge-mappings populated. If it's also a gateway node, you'd see 'ovn-cms-options=enable-chassis-as-gw'.

The consensus among those I've talked to in the past is that network nodes should be the gateway nodes, rather than also enabling the compute nodes as gateway nodes. Others might feel differently. There are some things you can do with the neutron provider setup in OSA to treat network/gateway nodes differently from compute nodes from a plumbing point of view; heterogeneous vs homogeneous network and bridge configuration. This doc, https://docs.openstack.org/openstack-ansible/latest/user/prod/provnet_groups..., might help - but don't hesitate to ask for more help if that's what you're looking for.

--
James Denton
Principal Architect
Rackspace Private Cloud - OpenStack
james.denton@rackspace.com
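A quick sketch of the check James describes, run on a compute or gateway node; the external_ids keys are the same ones visible in the ovn-sbctl output earlier in the thread:

# dump the whole external_ids column:
ovs-vsctl --columns=external_ids list open_vswitch

# or query the individual keys directly:
ovs-vsctl get open_vswitch . external_ids:ovn-bridge-mappings
ovs-vsctl get open_vswitch . external_ids:ovn-cms-options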
Hello James,

Thank you for the information. Unfortunately, nothing seems to work in this scenario/environment. We have worked on deployments where compute nodes have a direct interface to the external networks, but this dedicated network/gateway scenario has proven difficult to implement with OVN.

The main issue appears when we remove all bridge mappings from the compute nodes (neutron_ovn_controller host group), leaving the provider network mappings exclusively on the network/gateway hosts (neutron_ovn_gateway host group). Upon attempting to attach an interface to the external network, neutron complains with a PortBindingFailed error.

Our relevant configuration files/content: https://paste.opendev.org/show/821535/

I have run out of ideas here. Any help would be appreciated.

Thanks
Hi Roger,

I have been watching this thread for the last week and it just keeps getting more interesting. As James and Dimitry mentioned earlier, whichever node has "ovn-cms-options=enable-chassis-as-gw" set will act as the gateway for all VMs and will handle all external in/out traffic.

Can you post the output of ovs-vsctl show and ovs-vsctl list Open_vSwitch from your gateway node? I will see if I can set up my lab to recreate a scenario similar to yours and untangle this mystery.
Hello Satish,

I appreciate your feedback, and any help will be greatly appreciated. Please find the requested outputs pasted here: https://paste.opendev.org/show/bHWvGMUYW35sU43zUxem/

I've included outputs for one compute node and one network/gateway node. As a recap, among other nodes, the environment includes:

- 2x compute nodes: 1x NIC ens1 with IPv4 (geneve), no bridges
- 2x network/gateway nodes: 2x NICs - ens1 with IPv4 (geneve), ens2 as the external network interface, with the br-vlan bridge connected to ens2

Let me know if you need further information. Much appreciated.

Thank you.
Hi Roger,

That output looks as I would expect, thank you. Can you please provide the output of 'openstack network show' for the network being attached to the VM?

Thanks,
James
Hello James,

I appreciate the prompt response. Please see the output of openstack network show <net>, pasted at https://paste.opendev.org/show/bIhYhu6fDWoMaIiyaRMJ/

Thank you
Thanks, Roger. Super helpful.

If you're attempting to launch the VM on that network, it will fail, since that network is only plumbed to the net nodes as an 'external' network. For the VM, you will want to create a non-provider (tenant) network, which would likely be geneve, and then create a neutron router that connects to the external network and the tenant network. Your VM traffic would then traverse that router from compute -> net node and out over the geneve overlay.

Keep us posted.

James
Hello everyone,

The openstack-ansible deployment was all good. It turns out it was a network configuration problem for the external network. Aside from the issues below, which have been addressed, everything runs as expected now:

- OVN deployments fail when the provider bridge is defined for gateway hosts and not for compute nodes.
- Add a split(',') python list for supported_provider_types in templates/horizon_local_settings.py.j2.

Thank you everyone for all the help. This cleared up a lot of doubts.

Best regards.
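For context, a hedged sketch of what that Horizon template fix amounts to: 'supported_provider_types' is a real Horizon setting under OPENSTACK_NEUTRON_NETWORK, but the Jinja variable name below is illustrative, not necessarily the one used by the OSA role:

# templates/horizon_local_settings.py.j2 (sketch)
OPENSTACK_NEUTRON_NETWORK = {
    # the role renders a comma-separated string such as "geneve,flat",
    # which must be split into a python list for Horizon to consume:
    'supported_provider_types': '{{ horizon_provider_types }}'.split(','),
}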
participants (4)
- Dmitriy Rabotyagov
- James Denton
- Roger Rivera
- Satish Patel