Thanks for your feedback, Sean.
On Mon, Jun 19, 2023 at 14:01, smooney@redhat.com wrote:
On Mon, 2023-06-19 at 12:03 -0300, Roberto Bartzen Acosta wrote:
Hello Neutron folks,
In the Operators feedback session we discussed the OVN heartbeat and the use of "infinity" values for large-scale deployments, because a short 'agent_down_time' setting has a significant infrastructure impact.
agent_down_time is intended to specify how long the heartbeat can be missed before the agent is considered down. It was not intended to control the interval at which the heartbeat is sent.
https://opendev.org/openstack/neutron/commit/628442aed7400251f12809a45605bd7... introduced a correlation between the two, but it resulted in the agent incorrectly being considered down and caused port binding failures if agent_down_time was set too large.
The merged patch [1] limited the maximum delay to 10 seconds. I understand the requirement to use random values to avoid load spikes, but why does this fix limit the heartbeat to 10 seconds? What is the goal of the agent_down_time parameter in this case? How will it work for someone who has hundreds of compute nodes / metadata agents?
The change in [1] should only alter the delay before _update_chassis is invoked; at least that was the intent. I'm expecting the interval between heartbeats to be rate-limited via the mechanism that was used before https://opendev.org/openstack/neutron/commit/628442aed7400251f12809a45605bd7... was implemented.
I.e., when a SbGlobalUpdateEvent is generated, we now clamp the maximum wait to 10 seconds instead of cfg.CONF.agent_down_time // 2, which was causing port binding failures.
The timer object will run the passed-in function after the timer interval has expired.
https://docs.python.org/3/library/threading.html#timer-objects
But it will not re-run multiple times, and the function we are invoking does not loop internally, so only one update will happen per invocation of run.
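A minimal sketch of that one-shot behaviour (illustrative only; the 10-second clamp mirrors the patch, while the callback name and the short demo delay are made up):

```python
import random
import threading

MAX_DELAY = 10  # seconds; the clamp introduced by [1]

calls = []


def update_chassis():
    # Stand-in for the real _update_chassis: just records that it ran.
    calls.append("updated")


# A random delay spreads the load across chassis; it is now capped at
# MAX_DELAY instead of agent_down_time // 2. (A tiny delay is used here
# so the demo finishes quickly.)
delay = min(random.uniform(0, 0.2), MAX_DELAY)

# threading.Timer is one-shot: it fires the callback exactly once after
# the delay and never reschedules itself.
timer = threading.Timer(delay, update_chassis)
timer.start()
timer.join()  # Timer subclasses Thread, so we can wait for it

print(calls)  # → ['updated']
```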
I believe the actual heartbeat/reporting interval is controlled by cfg.CONF.AGENT.report_interval:
https://github.com/openstack/neutron/blob/cbb89fdb1414a1b3a8e8b3a9a4154ef627...
So I think if you want to reduce the interval in a large environment you can do that by setting:
[AGENT] report_interval=<your value>
I agree that the mechanism for sending heartbeats is controlled by report_interval. However, from what I understand, the original idea was to configure complementary values: report_interval and agent_down_time would be associated with the status of network agents.
https://docs.openstack.org/neutron/2023.1/configuration/neutron.html
report_interval: "Seconds between nodes reporting state to server; should be less than agent_down_time, best if it is half or less than agent_down_time."
agent_down_time: "Seconds to regard the agent is down; should be at least twice report_interval, to be sure the agent is down for good."
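Following the documented half-or-less rule, a complementary pair might look like this in neutron.conf (the values and the section placement are illustrative, not a recommendation):

```ini
# neutron.conf -- illustrative values only
[DEFAULT]
# Agent is declared down after this many seconds without a heartbeat;
# should be at least twice report_interval.
agent_down_time = 150

[AGENT]
# Seconds between state reports; half or less of agent_down_time.
report_interval = 75
```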
I'm not that familiar with this code, but that was my original understanding. The sleep before it is re-run is calculated in oslo.service:
https://github.com/openstack/oslo.service/blob/1.38.0/oslo_service/loopingca...
https://github.com/openstack/oslo.service/blob/1.38.0/oslo_service/loopingca...
The Neutron core team can correct me if that is incorrect, but I would not expect this to negatively impact large clouds.
Note 1: My point is that SbGlobalUpdateEvent seems to use agent_down_time disassociated from its original purpose (the double/half relation with report_interval).
Note 2: I'm curious about the behavior of this modification with more than 200 chassis and thousands of OVN routers. In that scenario, with many configurations being applied at the same time (a lot of events in SB_Global), the agent running on each chassis must respond within report_interval while it is still transitioning configs (probably millions of OpenFlow flow entries). Is 10 seconds enough?
Regards, Roberto
[1] - https://review.opendev.org/c/openstack/neutron/+/883687