<div dir="ltr">Dmitriy, good night!<br><br>I decided to start from scratch and remove the things that I won't be using (at least not for now).<br><br>After removing the flat network, the errors are gone and the VMs start.<br><br>I have one more question related to Neutron, but I'll start another thread for it.<br><br>Thanks a lot for the help!</div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, 18 Jul 2023 at 14:12, Dmitriy Rabotyagov <<a href="mailto:noonedeadpunk@gmail.com">noonedeadpunk@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hey,<br>
<br>
Thanks for providing the output.<br>
Regarding the neutron-openvswitch-agent CPU utilization - that might be a<br>
side effect of the neutron-server failures.<br>
<br>
Regarding the neutron-server failures, reading the stack trace again, it seems<br>
that neutron-server is failing to reach the placement service:<br>
HTTPConnectionPool(host='172.29.236.2', port=8780) Failed to<br>
establish a new connection: [Errno 32] EPIPE<br>
<br>
I assume that 172.29.236.2 is your internal VIP on haproxy? Are<br>
you able to reach that IP and connect to port 8780 from inside the<br>
neutron-server container? You can use telnet or a simple curl there<br>
to verify that connectivity is fine.<br>
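To make that check concrete, here is a small sketch; the IP and port are taken from the traceback above, and it assumes `timeout` and bash's `/dev/tcp` redirection are available inside the container:

```shell
# TCP reachability check from inside the neutron-server container.
# 172.29.236.2:8780 is the internal VIP / placement port from the traceback.
if timeout 3 bash -c 'exec 3<>/dev/tcp/172.29.236.2/8780' 2>/dev/null; then
  echo "port 8780 reachable"
else
  echo "port 8780 NOT reachable"
fi
```

A plain `curl -i http://172.29.236.2:8780/` works just as well if curl is installed; "Connection refused" or a timeout points at haproxy or the network path rather than neutron itself.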
<br>
It's also good to ensure that haproxy has the placement backends UP; you<br>
can verify that by running `echo "show stat" | nc -U<br>
/run/haproxy.stat | grep placement`<br>
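For reference, `show stat` emits CSV where field 1 is the proxy (backend) name, field 2 the server name, and field 18 the status, so you can filter just those columns with awk. The here-doc below is a hypothetical one-line sample of that CSV for illustration; on a live node you would pipe `echo "show stat" | nc -U /run/haproxy.stat` into the same awk instead:

```shell
# Print backend name, server name and status (UP/DOWN) for placement.
# The here-doc is hypothetical sample data standing in for the live socket.
awk -F',' '$1 ~ /placement/ {print $1, $2, $18}' <<'EOF'
placement-back,dcn2_placement_container,0,0,0,1,,10,0,0,0,0,0,0,0,0,0,UP
EOF
# → placement-back dcn2_placement_container UP
```

Any backend showing DOWN here explains the EPIPE/connection failures seen in the neutron-server traceback.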
<br>
On Tue, 18 Jul 2023 at 15:14, Murilo Morais <<a href="mailto:murilo@evocorp.com.br" target="_blank">murilo@evocorp.com.br</a>> wrote:<br>
><br>
> I noticed that the "neutron-openvswitch-agent" process constantly spikes to 100% CPU usage. Is this normal?<br>
><br>
> On Tue, 18 Jul 2023 at 09:44, Murilo Morais <<a href="mailto:murilo@evocorp.com.br" target="_blank">murilo@evocorp.com.br</a>> wrote:<br>
>><br>
>> Hi Dmitriy, thanks for answering!<br>
>><br>
>> I really didn't send any details about my setup; apologies for that.<br>
>><br>
>> I'm using OVS with the following configuration:<br>
>><br>
>> provider_networks:<br>
>>   - network:<br>
>>       container_bridge: br-provider<br>
>>       container_type: veth<br>
>>       type: vlan<br>
>>       range: "100:200"<br>
>>       net_name: vlan<br>
>>       group_binds:<br>
>>         - neutron_openvswitch_agent<br>
>><br>
>>   - network:<br>
>>       container_bridge: br-provider<br>
>>       container_type: veth<br>
>>       type: flat<br>
>>       net_name: flat<br>
>>       group_binds:<br>
>>         - neutron_openvswitch_agent<br>
>><br>
>><br>
>> neutron_plugin_type: ml2.ovs<br>
>> neutron_ml2_drivers_type: "flat,vlan"<br>
>> neutron_plugin_base:<br>
>> - router<br>
>> - metering<br>
>><br>
>><br>
>> root@dcn2-utility-container-c45f8b09:/# openstack network agent list<br>
>> +--------------------------------------+----------------+------+-------------------+-------+-------+------------------------+<br>
>> | ID | Agent Type | Host | Availability Zone | Alive | State | Binary |<br>
>> +--------------------------------------+----------------+------+-------------------+-------+-------+------------------------+<br>
>> | 9a4625ef-2988-4b96-a927-30a9bb0244a4 | Metadata agent | dcn2 | None | :-) | UP | neutron-metadata-agent |<br>
>> | a222c0ae-c2e5-44cc-b478-ca8176daad19 | Metering agent | dcn2 | None | :-) | UP | neutron-metering-agent |<br>
>> | c6be1985-a67e-4099-a1d6-fa517810e138 | L3 agent | dcn2 | nova | :-) | UP | neutron-l3-agent |<br>
>> | da97e2a6-535b-4f1e-828d-9b2fbb3e036b | DHCP agent | dcn2 | nova | :-) | UP | neutron-dhcp-agent |<br>
>> +--------------------------------------+----------------+------+-------------------+-------+-------+------------------------+<br>
>><br>
>><br>
>> root@dcn2-utility-container-c45f8b09:/# openstack compute service list<br>
>> +--------------------------------------+----------------+----------------------------------+----------+---------+-------+----------------------------+<br>
>> | ID | Binary | Host | Zone | Status | State | Updated At |<br>
>> +--------------------------------------+----------------+----------------------------------+----------+---------+-------+----------------------------+<br>
>> | a0524af5-bc88-4793-aee2-c2c87cd0e8cc | nova-conductor | dcn2-nova-api-container-aac2b913 | internal | enabled | up | 2023-07-18T12:32:26.000000 |<br>
>> | c860f25e-bd30-49f9-8289-076b230bbc2d | nova-scheduler | dcn2-nova-api-container-aac2b913 | internal | enabled | up | 2023-07-18T12:32:27.000000 |<br>
>> | 6457e0a1-b075-4999-8855-0f36e2e3a95a | nova-compute | dcn2 | nova | enabled | up | 2023-07-18T12:32:27.000000 |<br>
>> +--------------------------------------+----------------+----------------------------------+----------+---------+-------+----------------------------+<br>
>><br>
>><br>
>> The br-provider bridge exists and is UP.<br>
</blockquote></div>