[openstack-dev] [nova][neutron] Rocky PTG summary - nova/neutron

melanie witt melwittt at gmail.com
Thu Mar 15 20:30:00 UTC 2018


Hello Stackers,

I've put together an etherpad [0] summarizing the nova/neutron session 
from the PTG (held in the Croke Park Hotel breakfast area) and included 
a plain text export of it in this email. Please feel free to edit the 
etherpad or reply to this thread to add or correct anything I've missed.

Cheers,
-melanie

[0] https://etherpad.openstack.org/p/nova-ptg-rocky-neutron-summary

*Nova/Neutron: Rocky PTG Summary

https://etherpad.openstack.org/p/nova-ptg-rocky (L159)

*Key topics

   * NUMA-aware vSwitches
   * Minimum bandwidth-based scheduling
   * New port binding API in Neutron
   * Filtering instances by floating IP in Nova
   * Nova bug around re-attaching network interfaces on nova-compute 
restart -- port re-plug and re-create results in loss of some 
configuration like VLANs
     * https://bugs.launchpad.net/nova/+bug/1670628
   * Routed provider networks needs to move to placement aggregates

*Agreements and decisions

   * For NUMA-aware vSwitches, we'll go forward with a config-based 
solution for Rocky and deprecate it later, once placement supports the 
necessary inventory reporting (which will be implemented as part of the 
bandwidth-based scheduling work). We'll use dynamic attributes like 
"physnet_mapping_[name] = nodes" to avoid the JSON blob problem (Cinder 
and Manila do this), so when the placement support is available we 
won't have to deprecate an additional YAML config file or JSON blob on 
top of the config options.
     * Spec: https://review.openstack.org/#/c/541290/
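To make the dynamic-attribute idea concrete, a hypothetical nova.conf sketch (the section and option names here are illustrative only; the real names will come out of the spec review):

```ini
# Hypothetical sketch: one dynamically-named config section per
# physnet, instead of a single JSON blob or a separate YAML file.
[neutron_physnet_physnet0]
numa_nodes = 0

[neutron_physnet_physnet1]
numa_nodes = 0,1
```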
   * On minimum bandwidth-based scheduling:
     * Neutron will create the network related RPs under the compute RP 
in placement
       * It's reasonable to require unique hostnames (across cells, the 
internet, the world) and we'll solve the host -- compute uuid issue 
separately
     * Neutron will report the bandwidth inventory to placement
     * On the interaction of Neutron and Nova to communicate the 
requested bandwidth per port:
       * The requested minimum bandwidth for a neutron port will be 
available in the neutron port API 
https://review.openstack.org/#/c/396297/7/specs/pike/strict-minimum-bandwidth-support.rst@68
       * The work does not depend on the new neutron port binding API
       * We'll need not just resources but traits as well on the neutron 
port and neutron should add the physnet to the port as a trait. We'll 
assume that the requested resources and traits are from a single 
provider per port
     * We don't need to block bandwidth-based scheduling support on 
moving port creation to conductor (which is not trivial). However, if 
nova creates a port on a network with a QoS policy, nova will have to 
munge the allocations and update placement (from nova-compute) ... so 
maybe we should block this on moving port creation to conductor after all
     * Nova will merge the requested bandwidth into the 
allocation_candidates request via a new request filter
     * Nova will create the allocation in placement for bandwidth 
resources and the allocation uuid will be the instance uuid. Multiple 
ports with different QoS rules will be distinguishable because they will 
have allocations from different providers
     * As PF/VF modeling in placement has not been done yet, we can 
phase this feature: support OVS first and add SR-IOV support after the 
PF/VF modeling is done
     * Nova spec: https://review.openstack.org/#/c/502306/
     * Neutron spec: https://review.openstack.org/#/c/508149
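A hedged sketch of the data flow agreed above, in Python. The key names (NET_BW_EGR_KILOBIT_PER_SEC, CUSTOM_PHYSNET_*) and the numbered-group query shape are illustrative assumptions drawn from the specs under review, not the final interface:

```python
from urllib.parse import urlencode


def port_resource_request(egress_kbps, physnet):
    """What a neutron port with a minimum-bandwidth QoS policy might
    expose: resources plus the physnet expressed as a required trait
    (names are assumptions, per the spec drafts)."""
    return {
        "resources": {"NET_BW_EGR_KILOBIT_PER_SEC": egress_kbps},
        "required": ["CUSTOM_PHYSNET_%s" % physnet.upper()],
    }


def allocation_candidates_query(port_requests):
    """How nova's request filter could merge each port's request into a
    numbered (granular) resource group -- one group per port, so ports
    with different QoS rules get allocations from distinct providers."""
    params = {}
    for i, req in enumerate(port_requests, start=1):
        params["resources%d" % i] = ",".join(
            "%s:%s" % (rc, amount)
            for rc, amount in req["resources"].items())
        params["required%d" % i] = ",".join(req["required"])
    return "GET /allocation_candidates?" + urlencode(params)
```

Since the allocation uuid is the instance uuid, keeping each port in its own numbered group is what lets placement return per-port providers that nova can then write into a single instance allocation.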
   * On the new port binding API in Neutron, there is solid progress on 
the Neutron side and the Nova skeleton patches are making progress and 
depend on the Neutron patch, so some testing will be possible soon 
(still need to plumb in the libvirt driver changes)
     * Spec: 
https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/neutron-new-port-binding-api.html
     * Neutron patch: https://review.openstack.org/#/c/414251/
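For reference, a sketch of how nova might drive the new port binding API during a live migration. The endpoint shapes follow the spec linked above; this is an illustration of the intended call flow, not the eventual client code:

```python
import json


def create_inactive_binding(port_id, dest_host):
    """Build the request nova would send to pre-create an inactive
    binding on the migration destination before moving the guest."""
    path = "/v2.0/ports/%s/bindings" % port_id
    body = {"binding": {"host": dest_host}}
    return "POST", path, json.dumps(body)


def activate_binding(port_id, dest_host):
    """After the guest starts on the destination, activate that
    binding; neutron deactivates the source host's binding in turn."""
    path = "/v2.0/ports/%s/bindings/%s/activate" % (port_id, dest_host)
    return "PUT", path, None
```

Pre-creating the destination binding is what lets nova plumb the destination VIF before cutover, which is the gap the libvirt driver changes still need to fill.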
   * On the Nova bug about re-attaching network interfaces:
     * There was a bug in OVS back in 2014 for which a workaround was 
added: https://github.com/openstack/nova/commit/33cc64fb817
     * The bug was fixed in OVS in 2015 and is available in OVS 2.6.0 
onward: 
https://github.com/openvswitch/ovs/commit/e21c6643a02c6b446d2fbdfde366ea303b4c2730
     * The old workaround in Nova (now in os-vif) was determined to be 
causing the bug, so a fix to os-vif was made which essentially reverted 
the workaround: https://review.openstack.org/#/c/546588
     * We can close the bug in Nova once there is an os-vif release 
containing the fix and we raise the minimum required os-vif version in 
our requirements.txt
   * On routed provider networks:
     * On the Neutron side, this is already done: 
https://docs.openstack.org/neutron/latest/admin/config-routed-networks.html
     * Summit videos about routed provider networks:
       * 
https://www.openstack.org/videos/barcelona-2016/scaling-up-openstack-networking-with-routed-networks
       * 
https://www.openstack.org/videos/sydney-2017/openstack-networking-routed-networks-new-features



