No, reading from the OVN SB.
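
For reference, the metadata agent talks directly to the OVN Southbound database; in neutron_ovn_metadata_agent.ini that connection is set with something like the following (the address here is only a placeholder):

[ovn]
# OVN Southbound DB endpoint the metadata agent reads from
ovn_sb_connection = tcp:192.0.2.10:6642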

On Wed, Jan 24, 2024 at 2:15 PM Dmitriy Rabotyagov <noonedeadpunk@gmail.com> wrote:
And neutron-ovn-metadata-agent is not using RPC either?

On Wed, Jan 24, 2024 at 12:54 PM Rodolfo Alonso Hernandez <ralonsoh@redhat.com> wrote:
>
> If you are not using a DHCP agent (for baremetal ports) then RPC is not needed when using ML2/OVN.
>
> uWSGI is still not working with ML2/OVN.
>
> On Wed, Jan 24, 2024 at 12:23 PM Dmitriy Rabotyagov <noonedeadpunk@gmail.com> wrote:
>>
>> Sorry, have a side-tracked question about RPC.
>>
>> I assume you then also don't need transport_url or anything else
>> related to RPC, right?
>> But what about messaging notifications? Will these still be used by
>> Neutron to report back resource creation with OVN?
>>
>> Also, is uWSGI still not working with OVN, or has that been fixed so it can be used?
>>
>> On Wed, Jan 24, 2024 at 10:03 AM Rodolfo Alonso Hernandez <ralonsoh@redhat.com> wrote:
>> >
>> > Hello:
>> >
>> > In order to avoid the RPC warning messages, please set "rpc_workers=0" [1]. With ML2/OVN there is no need for RPC communication (unless you have baremetal ports and DHCP agents). OVN has a built-in DHCP server.
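>> >
>> > For example, a minimal sketch of that setting in neutron.conf:
>> >
>> > [DEFAULT]
>> > # With ML2/OVN no agents consume RPC, so no RPC workers are needed
>> > rpc_workers = 0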
>> >
>> > "ovn-router" and "neutron.services.ovn_l3.plugin.OVNL3RouterPlugin" is the same service, please remove one from your configuration.
>> >
>> > Please open a Launchpad bug and provide the logs in DEBUG mode so we can check what is happening in your environment. If I'm not wrong, you have changed the host name in the ERROR message. Check that the hostname in your local Open vSwitch record is the same as the name Neutron has for this node:
>> > root@u22ovn:/etc/neutron# ovs-vsctl list open . | grep external_ids
>> > external_ids        : {hostname=u22ovn, ovn-bridge=br-int, ...}
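>> >
>> > You can compare that with the host Neutron has registered for the OVN controller agent, for example:
>> > root@u22ovn:/etc/neutron# openstack network agent list --host u22ovn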
>> >
>> > Regards.
>> >
>> > [1]https://review.opendev.org/c/openstack/neutron/+/823637
>> >
>> > On Tue, Jan 23, 2024 at 8:57 PM Dmitriy Rabotyagov <noonedeadpunk@gmail.com> wrote:
>> >>
>> >> Hey,
>> >>
>> >> Not sure what specifically is wrong at this point (I'm not really an OVN expert at the moment), but I would like to ask some more questions which might be relevant.
>> >>
>> >> 1. In openstack_user_config, how have you defined network-northd_hosts / network-gateway_hosts?
>> >>
>> >> 2. I do see a couple of plugins that are not compatible with OVN, at least not on existing stable releases, including vpnaas and fwaas, and I'm not sure about neutron_dynamic_routing.services.bgp.bgp_plugin.BgpPlugin, since another project intends to provide the same functionality:
>> >> https://opendev.org/openstack/ovn-bgp-agent
>> >>
>> >> And that one does not work out of the box right now; I was about to check what's needed to get it working and land support for the upcoming release.
>> >>
>> >> 3. I'm also not sure about the provided mappings, since it seems you want to have both a flat and a VLAN network while providing only one mapping.
>> >> In OVS at least (I think OVN is the same here), if you have only one interface for both cases (flat and VLAN), the only way to have both is to map the "flat" network as a tagged VLAN, because the interface is added to the bridge and then can't be re-used as another "provider" type.
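>> >>
>> >> So if you do need that second external network, creating it as a tagged VLAN on physnet1 instead of flat would look roughly like this (assuming the tag is one of your VLANs, e.g. 232):
>> >>
>> >> openstack network create --share --external --provider-physical-network physnet1 --provider-network-type vlan --provider-segment 232 public2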
>> >>
>> >> On Tue, Jan 23, 2024, 20:17 jjjamesg <jjjamesg@proton.me> wrote:
>> >>>
>> >>> I can't for the life of me get external connectivity working. OpenStack itself works; I just can't get external connectivity.
>> >>>
>> >>> I can see OVS has created br-ex with bond1 attached.
>> >>> ovs-vsctl get open . external_ids:ovn-bridge-mappings shows: "physnet1:br-ex"
>> >>>
>> >>> I have tried creating the network with both:
>> >>>
>> >>> openstack network create  --share --external --provider-physical-network physnet1 --provider-network-type vlan --provider-segment 233 public
>> >>> openstack network create  --share --external --provider-physical-network physnet1 --provider-network-type flat publicnet-flat
>> >>>
>> >>> The only errors I can see are in the neutron container, when I try to either create an instance on said network or attach a FIP:
>> >>> (similar errors appear for a VLAN-type network; this is just the error I pulled at the time)
>> >>>
>> >>> ERROR neutron.plugins.ml2.managers [req-de7f1cb8-ac8e-4dd6-ad2a-e14afdb152b5 req-c6a2aaff-ccf9-4ef3-9ef7-1a8e4c7a1e41 9c27de6bdc4449abbbbcd5e5c4951bb9 2fced1ff77f5428eb2b63879bdd608cc - - default default] Failed to bind port 03cc1708-d4a2-4b11-bca2-40ebe0acff60 on host 02 for vnic_type normal using segments [{'id': '2ef64682-b5ff-472b-935d-a3c2b615d70f', 'network_type': 'flat', 'physical_network': 'physnet1', 'segmentation_id': None, 'network_id': '52b4f4e9-47f3-431b-8c45-ec8cc17f561d'}]
>> >>>
>> >>> WARNING neutron.scheduler.dhcp_agent_scheduler [None req-aafa81d9-479c-4f56-be6f-54d4ab3b42aa 1b29a2297a224cdcaaba84e0eac30205 bafabcf2414d4adf98cab165c3f3de12 - - default default] No more DHCP agents
>> >>>
>> >>> WARNING neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api [None req-aafa81d9-479c-4f56-be6f-54d4ab3b42aa 1b29a2297a224cdcaaba84e0eac30205 bafabcf2414d4adf98cab165c3f3de12 - - default default] Unable to schedule network 5fa049f5-0ed4-4287-9c63-dd753c853c35: no agents available; will retry on subsequent port and subnet creation events.
>> >>>
>> >>> I just can't figure out WHY this isn't working.
>> >>>
>> >>> Below is my config:
>> >>>
>> >>> #############
>> >>> ### /etc/network/interfaces:
>> >>> #############
>> >>>
>> >>> auto bond1
>> >>> iface bond1 inet manual
>> >>>     bond-slaves eno2 eno4
>> >>>     bond-mode 802.3ad
>> >>>     bond-miimon 100
>> >>>     bond-downdelay 200
>> >>>     bond-updelay 200
>> >>>     bond-lacp-rate 1
>> >>>     mtu 9000
>> >>>
>> >>> auto bond1.30
>> >>> iface bond1.30 inet manual
>> >>>     vlan-raw-device bond1
>> >>>
>> >>> ## (br-pubv is for my public vip)
>> >>> auto bond1.232
>> >>> iface bond1.232 inet manual
>> >>>     vlan-raw-device bond1
>> >>>
>> >>> auto br-overlay
>> >>> iface br-overlay inet static
>> >>>     bridge_stp off
>> >>>     bridge_waitport 0
>> >>>     bridge_fd 0
>> >>>     bridge_ports bond1.30
>> >>>     address
>> >>> auto br-pubv
>> >>> iface br-pubv inet static
>> >>>     address
>> >>>     gateway
>> >>>     bridge_stp off
>> >>>     bridge_waitport 0
>> >>>     bridge_fd 0
>> >>>     bridge_ports bond1.232
>> >>>
>> >>>
>> >>> #############
>> >>> ### openstack_user_config.yml:
>> >>> #############
>> >>>
>> >>>     - network:
>> >>>         container_bridge: "br-ex"
>> >>>         network_interface: "bond1"
>> >>>         type: "vlan"
>> >>>         range: "232:332"
>> >>>         net_name: "physnet1"
>> >>>         group_binds:
>> >>>           - neutron_ovn_controller
>> >>>
>> >>> #############
>> >>> ###  user_variables.yml
>> >>> #############
>> >>>
>> >>> neutron_plugin_type: ml2.ovn
>> >>> neutron_plugin_base:
>> >>>   - ovn-router
>> >>>   - qos
>> >>>   - neutron.services.ovn_l3.plugin.OVNL3RouterPlugin
>> >>>   - neutron_dynamic_routing.services.bgp.bgp_plugin.BgpPlugin
>> >>>   - vpnaas
>> >>>   - metering
>> >>>   - firewall_v2
>> >>> neutron_ml2_drivers_type: "vlan,local,geneve,flat"
>> >>>
>> >>> neutron_provider_networks:
>> >>>   network_types: "geneve"
>> >>>   network_geneve_ranges: "1:1000"
>> >>>   network_vlan_ranges: "physnet1"
>> >>>   network_mappings: "physnet1:br-ex"
>> >>>   network_interface_mappings: "br-ex:bond1"
>> >>>
>> >>> bond1 is a trunk port (no native VLAN) that has access to both the overlay VLAN and the 232:332 VLAN range, both of which work when I set either network up as a bridge on bond1, so connectivity is there.
>> >>>
>> >>> I have network-northd/gateway_hosts defined as well as:
>> >>>
>> >>> neutron_neutron_conf_overrides:
>> >>>   ovn:
>> >>>     enable_distributed_floating_ip: True
>> >>>
>> >>> openstack_host_specific_kernel_modules:
>> >>>   - name: "openvswitch"
>> >>>     pattern: "CONFIG_OPENVSWITCH"
>> >>>
>> >>> This should by all accounts just work, but for some reason for me it's not. What steps have I missed?
>> >>>
>>