Hi,

On Wednesday, 25 October 2023 20:12:38 CEST, ddorra@t-online.de wrote:
Hi,
many thanks for the hint! I'm wondering how the original instructions can work at all, given that they put the interface name into the bridge mapping (https://docs.openstack.org/neutron/2023.2/install/compute-install-option2-ub...)
It can't. This was a mistake in our docs and I just proposed a fix for it: https://review.opendev.org/c/openstack/neutron/+/899376
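In essence (the actual change is in the review linked above), the mapping should point at a bridge, not at the interface itself:

    bridge_mappings = provider:eth1     # interface name - this cannot work
    bridge_mappings = provider:br-ex    # bridge name - correct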
I assume this needs to be done on both the controller and the compute node?
Yes.
However, now the error says "Failed to bind port ...".
To find the reason why the port binding failed, You need to check the neutron-server logs. This is the important line:

74d f94c5c69fe0e4456a19426da0877431f - default default] Failed to bind port 64d76a96-e739-4680-b367-e46f2fd46639 on host loscompute1 for vnic_type normal using segments [{'id': 'ce394d12-4255-4298-8332-5f39fbea7444', 'network_type': 'vxlan', 'physical_network': None, 'segmentation_id': 805, 'network_id': '32f53edc-a394-419f-a438-a664183ee618'}]

From that it seems that You are using a vxlan network, not a provider network, so bridge_mappings isn't needed for You at all. But You need to make sure that the vxlan tunnel type is enabled in the agent's config. Please check https://docs.openstack.org/neutron/2023.2/admin/deploy-ovs-selfservice.html for more details.
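For reference, the matching part of openvswitch_agent.ini from the linked guide looks roughly like this (a sketch, not Your exact config; note that tunnel_types belongs in the [agent] section and local_ip in [ovs], and local_ip should be the overlay IP of each node):

    [ovs]
    local_ip = 192.168.2.70

    [agent]
    tunnel_types = vxlan
    l2_population = true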
What I did:
root@loscontrol:~# ovs-vsctl list-br
br-ex
br-int
root@loscontrol:~# ovs-vsctl list-ports br-ex
eth1
phy-br-ex
root@loscontrol:~#
root@loscontrol:/var/log/neutron.bug# ovs-vsctl list-ports br-int
int-br-ex
int-br-int
phy-br-int
qg-08258e2f-e4
qr-6a730cc3-2f
tap32da418c-66
tapfa41c9d6-c7
root@loscontrol:~# cat /etc/neutron/plugins/ml2/openvswitch_agent.ini
[DEFAULT]
[agent]
[dhcp]
[network_log]
[ovs]
bridge_mappings = provider:br-ex
[securitygroup]
enable_security_group = true
firewall_driver = openvswitch
[vxlan]
local_ip = 192.168.2.70
l2_population = true
root@loscontrol:~#
root@loscompute1:~# ovs-vsctl list-ports br-ex
eth1
phy-br-ex
root@loscompute1:/var/log/neutron.bug# ovs-vsctl list-ports br-int
int-br-ex
int-br-int
phy-br-int
root@loscompute1:~# cat /etc/neutron/plugins/ml2/openvswitch_agent.ini
[DEFAULT]
[agent]
[dhcp]
[network_log]
[ovs]
bridge_mappings = provider:br-ex
[securitygroup]
enable_security_group = true
firewall_driver = openvswitch
[vxlan]
local_ip = 192.168.2.71
l2_population = true
root@loscompute1:~#
Result:
control/neutron-server.log
74d f94c5c69fe0e4456a19426da0877431f - default default] Failed to bind port 64d76a96-e739-4680-b367-e46f2fd46639 on host loscompute1 for vnic_type normal using segments [{'id': 'ce394d12-4255-4298-8332-5f39fbea7444', 'network_type': 'vxlan', 'physical_network': None, 'segmentation_id': 805, 'network_id': '32f53edc-a394-419f-a438-a664183ee618'}]
2023-10-24 19:51:39.209 2295 INFO neutron.plugins.ml2.plugin [req-4a27d80e-1e3a-4e13-9833-5865b11a60a8 0ab49c1bf051415d8f98a3d61f38974d f94c5c69fe0e4456a19426da0877431f - default default] Attempt 10 to bind port 64d76a96-e739-4680-b367-e46f2fd46639
vvvvvvvvvvvvvv
2023-10-24 19:51:39.215 2295 ERROR neutron.plugins.ml2.managers [req-4a27d80e-1e3a-4e13-9833-5865b11a60a8 0ab49c1bf051415d8f98a3d61f38974d f94c5c69fe0e4456a19426da0877431f - default default] Failed to bind port 64d76a96-e739-4680-b367-e46f2fd46639 on host loscompute1 for vnic_type normal using segments [{'id': 'ce394d12-4255-4298-8332-5f39fbea7444', 'network_type': 'vxlan', 'physical_network': None, 'segmentation_id': 805, 'network_id': '32f53edc-a394-419f-a438-a664183ee618'}]
^^^^^^^^^^^^^
compute1/nova-compute.log
2023-10-24 19:51:39.901 1162 ERROR nova.compute.manager [req-b55b8f60-c0fc-4160-a367-8f6985a5410b 4611881642e94bf391f9895ecef81b8c 0b73c02d65d241fd8fa66ced30065027 - default default] [instance: 4b0af7d3-0be4-4aa5-a220-5bcad9cf4549] Failed to build and run instance: nova.exception.PortBindingFailed: Binding failed for port 64d76a96-e739-4680-b367-e46f2fd46639, please check neutron logs for more information.
compute1/neutron-openvswitch-agent.log
2023-10-24 20:10:06.605 2086 INFO os_ken.base.app_manager [-] instantiating app os_ken.app.ofctl.service of OfctlService
2023-10-24 20:10:06.605 2086 INFO neutron.agent.agent_extensions_manager [-] Loaded agent extensions: []
2023-10-24 20:10:06.657 2086 INFO neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_bridge [-] Bridge br-int has datapath-ID 00005e65b1943a49
2023-10-24 20:10:09.016 2086 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [-] Mapping physical network provider to bridge br-ex
2023-10-24 20:10:09.016 2086 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [-] Bridge br-ex datapath-id = 0x0000080027916058
2023-10-24 20:10:09.020 2086 INFO neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_bridge [-] Bridge br-ex has datapath-ID 0000080027916058

.... NO ERRORS except:

2023-10-24 20:10:03.545 1168 ERROR neutron.agent.common.async_process [-] Error received from [ovsdb-client monitor tcp:127.0.0.1:6640 Interface name,ofport,external_ids --format=json]: 2023-10-24T20:10:03Z|00001|fatal_signal|WARN|terminating with signal 15 (signal 15)
2023-10-24 20:10:03.546 1168 ERROR neutron.agent.common.async_process [-] Error received from [ovsdb-client monitor tcp:127.0.0.1:6640 Interface name,ofport,external_ids --format=json]: None
root@loscontrol:/var/log/neutron.bug# openstack port list
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------+--------+
| ID                                   | Name | MAC Address       | Fixed IP Addresses                                                       | Status |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------+--------+
| 08258e2f-e4c8-4874-9961-72fbdaee9790 |      | fa:16:3e:82:6c:01 | ip_address='10.0.0.80', subnet_id='2d3c3de4-9a0d-4a21-9af1-8ecb9f6f16d5' | ACTIVE |
| 32da418c-6653-4850-b286-fd54fce15205 |      | fa:16:3e:d4:e8:a8 | ip_address='172.0.0.2', subnet_id='e38e25c2-4683-48fb-a7a0-7cbd7d276ee1' | DOWN   |
| 488a5ec3-7c49-4ee6-9986-0acac27575aa |      | fa:16:3e:15:4a:e0 | ip_address='10.0.0.66', subnet_id='2d3c3de4-9a0d-4a21-9af1-8ecb9f6f16d5' | N/A    |
| 6a730cc3-2f14-4431-9652-97898431e561 |      | fa:16:3e:61:2b:40 | ip_address='172.0.0.1', subnet_id='e38e25c2-4683-48fb-a7a0-7cbd7d276ee1' | DOWN   |
| fa41c9d6-c710-4415-a87e-98c1649aaac4 |      | fa:16:3e:b0:2c:6c | ip_address='10.0.0.50', subnet_id='2d3c3de4-9a0d-4a21-9af1-8ecb9f6f16d5' | ACTIVE |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------+--------+
root@loscontrol:/var/log/neutron.bug#
-----Original Message-----
Subject: Re: [neutron][openvswitch][antelope] Bridge eth1 for physical network provider does not exist
Date: 2023-10-23T22:46:37+0200
From: "Sławek Kapłoński" <skaplons@redhat.com>
To: "openstack-discuss@lists.openstack.org" <openstack-discuss@lists.openstack.org>
Hi,
You need to create a bridge (e.g. br-ex), add your eth1 to that bridge, and put the name of the bridge in bridge_mappings.
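For example (a sketch, assuming eth1 is the provider interface and Open vSwitch is already running; repeat on each node):

    ovs-vsctl add-br br-ex
    ovs-vsctl add-port br-ex eth1

and then in /etc/neutron/plugins/ml2/openvswitch_agent.ini:

    [ovs]
    bridge_mappings = provider:br-ex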
On Monday, 23 October 2023 21:44:43 CEST, ddorra@t-online.de wrote:
Hello,
I'm installing OpenStack Antelope with networking option 2 (following https://docs.openstack.org/neutron/2023.2/install/compute-install-option2-ub...). The interface name of the provider network is eth1, so I put that into bridge_mappings. The local IP is from the management network.
#------------------------------------------------------
# /etc/neutron/plugins/ml2/openvswitch_agent.ini
#
[DEFAULT]
[agent]
[dhcp]
[network_log]
[ovs]
bridge_mappings = provider:eth1
[securitygroup]
enable_security_group = true
firewall_driver = openvswitch
[vxlan]
local_ip = 192.168.2.71
l2_population = true
#--------------------------------------------------------
However, the neutron log complains that bridge eth1 does not exist, and launching instances fails.

neutron-openvswitch-agent.log:

2023-10-23 19:26:00.062 17604 INFO os_ken.base.app_manager [-] instantiating app os_ken.app.ofctl.service of OfctlService
2023-10-23 19:26:00.062 17604 INFO neutron.agent.agent_extensions_manager [-] Loaded agent extensions: []
2023-10-23 19:26:00.108 17604 INFO neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_bridge [-] Bridge br-int has datapath-ID 00005e65b1943a49
2023-10-23 19:26:02.438 17604 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [-] Mapping physical network provider to bridge eth1
vvvvvvv
2023-10-23 19:26:02.438 17604 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [-] Bridge eth1 for physical network provider does not exist. Agent terminated!
^^^^^^^
2023-10-23 19:26:03.914 17619 INFO neutron.common.config [-] Logging enabled!
2023-10-23 19:26:03.914 17619 INFO neutron.common.config [-] /usr/bin/neutron-openvswitch-agent version 20.4.0
2023-10-23 19:26:03.914 17619 INFO os_ken.base.app_manager [-] loading app neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_oskenapp
-----------------------------------------
Additional information

root@control:/var/log/neutron# openstack network list
+--------------------------------------+----------+--------------------------------------+
| ID                                   | Name     | Subnets                              |
+--------------------------------------+----------+--------------------------------------+
| 32f53edc-a394-419f-a438-a664183ee618 | doznet   | e38e25c2-4683-48fb-a7a0-7cbd7d276ee1 |
| 74e3ee6a-1116-4ff6-9e99-530c3cbaef28 | provider | 2d3c3de4-9a0d-4a21-9af1-8ecb9f6f16d5 |
+--------------------------------------+----------+--------------------------------------+
root@control:/var/log/neutron#
root@compute1:/etc/neutron/plugins/ml2# ip a
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:9f:c7:89 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.71/24 brd 192.168.2.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe9f:c789/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:91:60:58 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.71/24 brd 10.0.0.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe91:6058/64 scope link
       valid_lft forever preferred_lft forever
4: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether da:3b:3a:a0:59:97 brd ff:ff:ff:ff:ff:ff
5: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 5e:65:b1:94:3a:49 brd ff:ff:ff:ff:ff:ff
6: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:53:87:59 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
What are the proper settings? Any help appreciated.

Dieter
--
Slawek Kaplonski
Principal Software Engineer
Red Hat
--
Slawek Kaplonski
Principal Software Engineer
Red Hat