[nova][neutron][cyborg] Bandwidth (and accel) providers are broken if CONF.host is set
balazs.gibizer at est.tech
Thu Nov 28 08:54:06 UTC 2019
On Wed, Nov 27, 2019 at 17:03, Sean Mooney <smooney at redhat.com> wrote:
> On Wed, 2019-11-27 at 16:20 +0100, Bence Romsics wrote:
>> > > resource_provider_hypervisors = br-physnet0:hypervisor0,...
>> > this also won't work as the same bridge name will exist on
>> > multiple hosts
>> Of course the same bridge/nic name can exist on multiple hosts. And
>> each report_state message clearly belongs to a single agent and the
>> configurations field is persisted per agent, so there will never be
>> a collision.
> That is in the non-ironic smart NIC case. In the ironic smart NIC
> case with the OVS super agent, which is the only case where multiple
> hypervisors would be managed by the same agent, the agent will be
> remote.
When you say "ironic smart nic case with the ovs super agent", do you
refer to this abandoned spec?
> So in the non-ironic case it does not need to be a list.
> In the smart NIC case it might need to be a list.
In that spec the author proposes not to break the 1-1 mapping between
the OVS agent and the remote OVS. So as far as I can see there is no
need for a list in this case either.
> But a mapping of bridge or physnet won't be unique, and an agent
> hostname (CONF.host) to hypervisor host would be 1:N, so it's not
> clear how you would select from the N RPs if all you know from nova
> is the binding host, which is the service host, not the hypervisor
> hostname.
Are we talking about a problem during binding here? That feels like a
different problem from creating device RPs under the proper compute
node RP.
Anyhow, my simple understanding is the following:
* a physical NIC or an OVS integration bridge always belongs to one
single hypervisor, while a hypervisor might have more than one
physical NIC or OVS bridge
* the identity (e.g. hypervisor hostname) of such a hypervisor is
known at deployment time
* the neutron agent config can have a mapping between the device (NIC
or OVS bridge) and the hypervisor identity, and this mapping can be
sent up to the neutron server via RPC
* the neutron agent already reports the service host name it runs on
to the neutron server via RPC
* the neutron server, knowing the service host and the device ->
hypervisor identity mapping, can find the compute node RP under which
the device RP needs to be created (see the sketch below).
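
To make the last two points concrete, here is a minimal sketch. The
resource_provider_hypervisors option format is the one proposed earlier
in this thread; the server-side helper is hypothetical pseudo-Python
(the placement client call and all names are assumptions, not existing
neutron code), and it assumes the compute node RP in placement is named
after the hypervisor hostname:

  # agent side, ml2 ovs agent ini (the hypervisors option is the
  # proposed one, not an existing option):
  [ovs]
  bridge_mappings = physnet0:br-physnet0
  resource_provider_bandwidths = br-physnet0:10000000:10000000
  resource_provider_hypervisors = br-physnet0:hypervisor0

  # server side, hypothetical sketch:
  def find_compute_node_rp_uuid(placement_client, agent, device):
      # the device -> hypervisor mapping arrives in the agent's
      # report_state 'configurations' blob; fall back to the agent's
      # service host (CONF.host) when no mapping is configured
      hypervisors = agent['configurations'].get(
          'resource_provider_hypervisors', {})
      hypervisor = hypervisors.get(device, agent['host'])
      # nova names the compute node RP after the hypervisor hostname,
      # so a name-based lookup in placement gives the parent RP for
      # the device RP
      rps = placement_client.list_resource_providers(name=hypervisor)
      return rps['resource_providers'][0]['uuid']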
@Sean: Where does my chain of reasoning break from your perspective?