[openstack-dev] [nova][placement] PTG Summary and Rocky Priorities
Matt Riedemann
mriedemos at gmail.com
Thu Mar 8 19:57:39 UTC 2018
On 3/8/2018 6:51 AM, Jay Pipes wrote:
> - VGPU_DISPLAY_HEAD resource class should be removed and replaced with
> a set of os-traits traits that indicate the maximum supported number of
> display heads for the vGPU type
>
How does a trait express a quantifiable limit? Would we end up having
several different traits, each encoding a different limit value?
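To make the concern concrete, here is a minimal sketch of what "traits with varying levels of limits" could look like: one boolean trait per supported maximum, with the scheduler checking whether any advertised trait covers the request. The trait names are hypothetical for illustration; they are not real os-traits entries.

```python
# Hypothetical sketch: encoding "max display heads" as a family of
# boolean traits instead of a countable resource class. One trait is
# needed per distinct limit value, which is the scaling concern above.
# These CUSTOM_* names are made up and are not actual os-traits.
MAX_HEADS_TRAITS = {
    "CUSTOM_VGPU_MAX_DISPLAY_HEADS_1": 1,
    "CUSTOM_VGPU_MAX_DISPLAY_HEADS_2": 2,
    "CUSTOM_VGPU_MAX_DISPLAY_HEADS_4": 4,
}

def satisfies_display_heads(provider_traits, requested_heads):
    """Return True if any max-heads trait on the provider covers the
    requested number of display heads."""
    return any(
        MAX_HEADS_TRAITS[trait] >= requested_heads
        for trait in provider_traits
        if trait in MAX_HEADS_TRAITS
    )
```

Every new limit value a vGPU type supports would require minting another trait, which is exactly the proliferation the question is getting at.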
>
> - Multiple agreements about strict minimum bandwidth support feature in
> nova - Spec has already been updated accordingly:
> https://review.openstack.org/#/c/502306/
>
> - For now we keep the hostname as the information connecting the
> nova-compute and the neutron-agent on the same host but we are aiming
> for having the hostname as an FQDN to avoid possible ambiguity.
>
> - We agreed not to make this feature dependent on moving the nova
> port create to the conductor. The current scope is to support
> pre-created neutron port only.
I could rat-hole in the spec, but figured it would be good to also
mention it here. When we were talking about this in Dublin, someone also
mentioned that depending on the network on which nova-compute creates a
port, the port could have a QoS policy applied to it for bandwidth, and
then nova-compute would need to allocate resources in Placement for that
port (with the instance as the consumer). So then we'd be doing
allocations both in the scheduler for pre-created ports and in the
compute for ports that nova creates. So the scope statement here isn't
entirely true, and leaves us with some technical debt until we move port
creation to conductor. Or am I missing something?
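As a sketch of the second code path described above: after the scheduler has already claimed the instance's compute resources, nova-compute would have to amend that same consumer's allocations to add bandwidth for a port it created itself. The helper name and the bandwidth resource class below are illustrative, not actual nova code.

```python
# Sketch of the compute-side follow-up allocation described above:
# nova-compute merging port bandwidth into the allocations the
# scheduler already made for the instance (the same consumer). The
# resource class name and helper are illustrative only.

def add_port_allocation(existing_allocations, port_rp_uuid, bw_kbps):
    """Return a new allocations mapping with port bandwidth merged in.

    existing_allocations is the {rp_uuid: {"resources": {...}}} dict
    the scheduler claimed for the instance; the port's resources are
    added under the neutron-managed provider so the instance ends up
    as the single consumer of one combined allocation.
    """
    merged = {
        rp: {"resources": dict(body["resources"])}
        for rp, body in existing_allocations.items()
    }
    port_body = merged.setdefault(port_rp_uuid, {"resources": {}})
    port_body["resources"]["NET_BW_EGR_KILOBIT_PER_SEC"] = bw_kbps
    return merged
```

Having this write happen in the compute for nova-created ports, while pre-created ports are handled in the scheduler, is the split that amounts to technical debt until port creation moves to the conductor.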
>
> - Neutron will provide the resource request in the port API so this
> feature does not depend on the neutron port binding API work
>
> - Neutron will create resource providers in placement under the
> compute RP. Also Neutron will report inventories on those RPs
>
> - Nova will do the claim of the port related resources in placement
> and the consumer_id will be the instance UUID
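For illustration, the claim in that last point might look like the following allocation body, which nova would PUT to placement with the instance UUID as the consumer in the URL path. This is a hedged sketch: the resource amounts, the nested network RP, and the bandwidth resource class name are assumptions for the example, not settled API details.

```python
# Illustrative sketch of the combined claim: compute resources plus
# port bandwidth on a neutron-created provider nested under the
# compute RP, all allocated to one consumer (the instance). The
# bandwidth resource class name and amounts are made up.

def build_claim(compute_rp_uuid, net_rp_uuid, project_id, user_id):
    """Build an allocation body keyed by resource provider UUID; the
    consumer (instance) UUID goes in the request path, not the body."""
    return {
        "allocations": {
            compute_rp_uuid: {
                "resources": {"VCPU": 2, "MEMORY_MB": 4096},
            },
            net_rp_uuid: {
                # Inventory reported by neutron on its own RP,
                # claimed here by nova on behalf of the instance.
                "resources": {"NET_BW_EGR_KILOBIT_PER_SEC": 10000},
            },
        },
        "project_id": project_id,
        "user_id": user_id,
    }
```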
--
Thanks,
Matt