[nova][ptg] main etherpad backup

Sean Mooney smooney at redhat.com
Mon May 13 22:19:50 UTC 2019


so it looks like the https://etherpad.openstack.org/p/nova-ptg-train-5 etherpad has died like the four
before it.

attached is an offline copy I took near the end of the PTG which should have the majority of the content
for those that are looking for it.


the downside is this is just a copy-paste I did into a text file, so I don't have any of the strikethroughs
or author info in it, but all the #agree: and other notes we took should still be there.

regards
sean
-------------- next part --------------
Nova Train PTG - Denver 2019

For forum session brainstorming use https://etherpad.openstack.org/p/DEN-train-nova-brainstorming

Attendance:

    efried

    sean-k-mooney

    aspiers

    stephenfin

    takashin

    helenafm

    gmann

    Sundar

    mriedem

    gibi

    melwitt

    alex_xu

    mdbooth

    lyarwood

    tssurya

    kashyap (first two days will be sparsely available; be present fully on the last day)

    artom

    egallen

    dakshina-ilangov (joining post 11:30AM on Thur, Fri)

    jaypipes

    adrianc

    IvensZambrano

    johnthetubaguy (afternoon thursday, onwards)

    amodi 

    gryf

    cfriesen (bouncing around rooms a bit)

    med_

    mnestratov

    shuquan

    bauzas

    dklyle

    jichenjc

    sorrison

    jgasparakis

    tetsuro



Team photo Friday 11:50-1200 https://ethercalc.openstack.org/3qd1fj5f3tt3


Topics - Please include your IRC nick next to your topic so we know who to talk to about that topic.

    NUMA

    Topology with placement

    Spec: https://review.openstack.org/#/c/552924/

    XPROJ see https://etherpad.openstack.org/p/ptg-train-xproj-nova-placement

    Subtree affinity with placement [efried]

    XPROJ: see https://etherpad.openstack.org/p/ptg-train-xproj-nova-placement

    completing NUMA affinity policies for Neutron SR-IOV interfaces ==> neutron XPROJ (efried 20190422)

    https://specs.openstack.org/openstack/nova-specs/specs/queens/implemented/share-pci-between-numa-nodes.html

    ^ does not work for Neutron ports, as the flavor extra specs and image properties were removed during the implementation

       and the spec was retroactively updated to document what was implemented. We should fix that by supporting NUMA policies.

       TODO sean-k-mooney to write blueprint/spec

    either repurpose the original spec or explore using the new port request/traits mechanism.

    cpu modeling in placement

    jay's spec https://review.openstack.org/#/c/555081/

    #agree approve the spec more or less as is and get moving

    how to make it work with numa affinity and cache

    RMD - Resource Management Daemon (dakshina-ilangov/IvensZambrano)

    Base enablement - https://review.openstack.org/#/c/651130/

    The following blueprints reference the base enablement blueprint above

    Power management using CPU core P state control - https://review.openstack.org/#/c/651024/

    Last-level cache - https://review.openstack.org/#/c/651233/

    #agree Generic file (inventory.yaml?) allowing $daemon (RMD) to dictate inventory to report, which can be scheduled via extra_specs

    #agree RMD to monitor (by subscribing to nova and/or libvirt notifications) and effect assignments/changes out-of-band - no communication from virt to RMD

    resource provider yaml (or how we whitelist/model host resources via config in general)

    https://review.openstack.org/#/c/612497/

    Code: https://review.openstack.org/#/c/622622/

    [efried 20190418] scrubbing from agenda for general lack of interest

    AMD SEV support  efried 20190424 - removing from agenda because approved

    Any matters arising (if we're lucky, there won't be any)

    Train spec: https://review.opendev.org/#/c/641994/

    Note this is significantly different from the...

    Stein spec: https://specs.openstack.org/openstack/nova-specs/specs/stein/approved/amd-sev-libvirt-support.html

    ...in that we're now using a resource class and making SEV contexts a quantifiable resource.

    "Guest CPU selection with hypervisor consideration" (kashyap)  efried 20190425 -- removing from agenda because spec/bp approved

    Blueprint: https://blueprints.launchpad.net/nova/+spec/cpu-selection-with-hypervisor-consideration

    Spec: https://review.openstack.org/#/c/645814/

    tl;dr: Re-work (for the better) the way Nova's libvirt driver chooses CPU models.

    Problem: Currently the CPU configuration APIs that Nova's libvirt driver uses — baselineCPU() and compareCPU() — ignore the host hypervisor's (QEMU + KVM) capabilities when determining guest CPU model

    To solve that, libvirt has introduced two new APIs that are "hypervisor-literate" — baselineHypervisorCPU() and compareHypervisorCPU().

    These newer APIs (require libvirt 4.0.0 and QEMU 2.9, both for x86_64) take into account the hypervisor's capabilities, and are therefore much more useful.

    This addresses several problems (along with multiple TODO items in the libvirt driver code; refer to the _get_guest_cpu_model_config() and _get_cpu_traits() methods in libvirt/driver.py).

    Reference: Slide-28 here: https://kashyapc.fedorapeople.org/Effective-Virtual-CPU-Configuration-in-Nova-Berlin2018.pdf
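
    For illustration, a minimal sketch (not nova code) of the two newer "hypervisor-literate" libvirt-python calls, assuming a libvirt-python recent enough to expose them; the connection URI, CPU XML, and flag choices below are just placeholders:

        import libvirt

        # Illustrative only: ask libvirt for a baseline CPU model the hypervisor
        # on this host can actually provide, rather than one derived only from
        # the host CPU as the older baselineCPU() does.
        conn = libvirt.open('qemu:///system')

        cpu_xml = "<cpu><arch>x86_64</arch><model>Skylake-Client-IBRS</model></cpu>"

        baseline = conn.baselineHypervisorCPU(
            None,        # emulator binary (None is assumed to mean "default")
            'x86_64',    # guest architecture
            None,        # machine type (None = default)
            'kvm',       # virt type
            [cpu_xml],   # candidate CPU definitions to baseline
            libvirt.VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES)
        print(baseline)

        # compareHypervisorCPU() answers "can this hypervisor run this CPU model?"
        result = conn.compareHypervisorCPU(None, 'x86_64', None, 'kvm', cpu_xml, 0)
        print(result in (libvirt.VIR_CPU_COMPARE_IDENTICAL,
                         libvirt.VIR_CPU_COMPARE_SUPERSET))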

    Making extra specs less of a forgotten child (stephenfin)

    spec: https://review.openstack.org/#/c/638734/

    Unlike config options, we have no central reference point for flavour extra specs. There are a *lot* of them and I frequently see typos, people setting them wrong etc.

    We don't? https://docs.openstack.org/nova/latest/user/flavors.html#extra-specs

    Do you intend to make this exhaustive / comprehensive / exclusive on the first pass (i.e. unknown/unregistered items trigger an error rather than being allowed)?

    I'd like this to be configurable, though I'm not sure if that's allowed (no to configurable APIs) so maybe warning-only first

    Glance supports metadata definitions but the definitions look old https://github.com/openstack/glance/blob/master/etc/metadefs/compute-cpu-pinning.json

    I think the way that the glance metadefs reference the type of resource they refer to (flavor, image, volume, host aggregate) is also tied into how heat references them.

    The glance metadefs may be old, but they are used to generate UI and validation logic in horizon too. They are available via a glance API endpoint, which is how they are consumed by other services.

    There are also several missing metadefs and documented image properties.

    https://developer.openstack.org/api-ref/image/v2/metadefs-index.html

    Do we want to start validating these on the API side (microversion) and before an instance boots?

    Rough PoC for flavour extra spec definition here: http://paste.openstack.org/show/B9unIL8e2KpeSMBGaINe/

    extracting the metadefs into an external lib that is importable by several services may be useful

    Not entirely sure if this is necessary. They change very rarely and it could be just as easy to have a "nova metadef" -> "glance metadef" translation tool. Worth discussing though

    We have json schema for scheduler hints but still allow undefined out of tree scheduler hints, do something like that?

    https://github.com/openstack/nova/blob/c7f4190/nova/api/openstack/compute/schemas/servers.py#L93

    Seems like trying to build a new metadefs type API for flavor extra specs in nova would take a lot longer than simply doing json schema validation of the syntax for known extra specs (like scheduler hints).

    (Sundar) +1 to two ideas: (a) Keep a standard list of keys which operators cannot modify and allow operators to add more to the schema (b) Use a new microversion to enforce strict extra specs key checking.

    #agree do it as part of flavor property set

    #agree first do value side validation for known keys, then validate keys, allowing admin ability to augment the schema of valid keys/values

    #agree validate the values without a microversion - doesn't fix the fat finger key issue, but helps with value validation (and doesn't need to all land at once)

    probably need something later for validating keys (strict mode or whatever)
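
    To illustrate the value-side validation for known keys agreed above, a minimal sketch using the jsonschema library; the key list and schemas here are purely illustrative (the real definitions would live in nova):

        import jsonschema

        # Illustrative only: validate values for known keys, pass unknown keys
        # through (strict key checking would come later, behind a microversion).
        KNOWN_EXTRA_SPECS = {
            'hw:cpu_policy': {'type': 'string', 'enum': ['shared', 'dedicated']},
            'hw:numa_nodes': {'type': 'string', 'pattern': r'^[0-9]+$'},
        }

        def validate_extra_specs(extra_specs):
            errors = []
            for key, value in extra_specs.items():
                schema = KNOWN_EXTRA_SPECS.get(key)
                if schema is None:
                    continue  # unknown key: allowed for now
                try:
                    jsonschema.validate(value, schema)
                except jsonschema.ValidationError as exc:
                    errors.append('%s=%r: %s' % (key, value, exc.message))
            return errors

        print(validate_extra_specs({'hw:cpu_policy': 'dedicated', 'hw:numa_nodes': '2'}))  # []
        print(validate_extra_specs({'hw:cpu_policy': 'dedicatd'}))  # value typo is caught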

    Persistent memory (alex_xu)

    spec https://review.openstack.org/601596, https://review.openstack.org/622893

    patches: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/virtual-persistent-memory

    NUMA rears its ugly head again

    #agree can "ignore NUMA" initially

    #agree Document lifecycle ops that are and aren't supported

    New virt driver for rsd:

    For management of a composable infrastructure

    bp: https://blueprints.launchpad.net/nova/+spec/rsd-virt-for-nova-implementation

    rsd-virt-for-nova virt driver: https://github.com/openstack/rsd-virt-for-nova

    spec review: https://review.openstack.org/#/c/648665/

    Questions

    Can this be done with nova + ironic?

    Why can't the driver live out of tree?

    What's the 3rd party CI story?

    Nova governance (stephenfin)

    In light of the placement split and massively reduced nova contributor base, it would be good to take the time to examine some of the reasons we do what we do and why...on a day that isn't the last day [efried] how about right after the retrospective on Thursday morning? Do we need more than 15 minutes for this? Right after the retro is fine. It's very subjective and I don't expect to make any decisions on the day. More of a sharing session.

    Ideas

    cores vs. spec cores

    This is done: https://review.openstack.org/#/admin/groups/302,members

    nova cores vs. neutron cores (the subsystem argument)

    two +2s from same company (for a change from someone in the same company?)

    (mriedem): I'm still a -1 on this. Multiple problems with this e.g. reviewing with the blinders and pressure to deliver for your downstream product roadmap ("we'll fix it downstream later") and I as a nova maintainer don't want to be responsible for maintaining technical debt rushed in by a single vendor.

    (kashyap) While I see where you're coming from, your comment implies mistrust and that people will intentionally "rush things in".  As long as a particular change is publicly advertised well enough, gives sufficient time for others to catch up, describes all necessary assumptions clearly, and responds to _every_ detail that isn't clear to a community reviewer, then it is absolutely reasonable for reviewers from a company to merge a change by a contributor from the same company.  This happens _all_ the time in other mature open source communities (kernel, QEMU, et al).

    (adrianc) -1 on that, diversity is more likely to ensure community goals.

    (adrianc) perhaps a happy middle ground, e.g. bugfixes?

    (mdbooth): The concern above is completely valid, but I don’t believe it will happen in practice, and in the meantime we’re making it harder on ourselves. I would like to trust our core reviewers, and we can address this if it actually happens. (+1)+1 +1

    separate specs repo vs. in-tree specs directory

    more effort than it's worth

    (anything else that nova does differently to other OpenStack projects and other large, multi-vendor, open source projects)

    Compute capabilities traits placement request filter (mriedem)

    Solution for https://bugs.launchpad.net/nova/+bug/1817927 and other things like booting from a multi-attach volume, we need the scheduler to pick a host with a virt driver that supports those types of requests.

    Think we agreed in https://etherpad.openstack.org/p/ptg-train-xproj-nova-placement toward the bottom that we're OK with this.

    Here's some code: https://review.opendev.org/#/c/645316/

    (gibi): I'm OK to modify the flavor extra_spec for now. I think we agreed yesterday to allow numbered groups without resources in placement as a final solution. It is also OK to me. However we have a comment about storing the unnumbered request group in the RequestSpec.requested_resources list. (https://github.com/openstack/nova/blob/master/nova/objects/request_spec.py#L93) I tried to do that to give a place where the capability traits can be stored, but failed: https://review.opendev.org/#/c/647396/5//COMMIT_MSG . Is there any reason to still try to store the unnumbered group in RequestSpec.requested_resources?

    How do you like this hack? https://review.opendev.org/#/c/656885/3/nova/scheduler/manager.py (not traits related)

    [dtroyer][sean-k-mooney] 3rd party CI for NUMA/PCI/SRIOV (mriedem)

    moved to https://etherpad.openstack.org/p/nova-ptg-train-ci

    Corner case issues with root volume detach/attach (mriedem/Kevin_Zheng)

    Go over the stuff that came up late in the Stein cycle:

    tags and multiattach volumes: http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003376.html

    https://etherpad.openstack.org/p/detach-attach-root-volume-corner-cases

    When rebuilding with a new image we reset the stashed image_* system_metadata on the instance and some other fields on the instance based on new image metadata. When attaching a new root volume, the underlying image (and its metadata) could change, so presumably we need to do the same type of updates to the instance record when attaching a new root volume with a new image - agree?

    The state of nova's documentation (stephenfin)

    There are numerous issues with our docs

    Many features aren't documented or are barely documented in nova. I've cleaned up some but there's much more to do

    metadata (the metadata service, config drives, vendordata etc.), console proxy services, man pages,

    cross_az_attach: https://review.opendev.org/#/c/650456/

    Flow diagram for resize like for live migration https://docs.openstack.org/nova/latest/reference/live-migration.html

    Loads of stuff is out-of-date

    If you're a mere user of nova, the docs are essentially useless as admin'y stuff is scattered everywhere

    Other stuff

    Testing guides

    down cells: https://review.opendev.org/#/c/650167/

    Before there's serious time sunk into this, does anyone really care and should it be a priority?

    (kashyap) I enjoy improving documentation, so, FWIW, in my "copious free time" I am willing to help chip in with areas that I know a thing or two about.

    Broader topic: how can we get our respective downstream documentation teams to engage upstream?+∞

    (kashyap) A potential first step is to agree on a "system" (and consistently stick to it).  E.g. the "Django" project's (IIRC, Stephen even mentioned this in Berlin) documentation model (described here: https://www.divio.com/blog/documentation/)

    Tutorials — learning oriented

    How-To guides — problem-oriented

    Explanation — understanding-oriented

    Reference — information-oriented

    (mriedem): I try to push patches to fix busted / incorrect stuff or add missing things when I have the context (I'm looking through our docs for some specific reason). If I don't have the time, I'll report a bug and sometimes those can be marked as low-hanging-fruit for part time contributors to work on those.

    e.g. https://bugs.launchpad.net/nova/+bug/1820283

    TODO(stephenfin): Consider making this a mini cycle goal

    Tech debt:

    Removing cells v1

    Already in progress \o/

    Removing nova-network

    CERN are moving off this entirely (as discussed on the mailing list). We can kill it now?

    (melwitt): We have had the go ahead [from CERN] since Stein to remove nova-network entirely. \o/ 🎉

    Can we remove the nova-console, nova-consoleauth, nova-xvpxvncproxy service? (mriedem, stephenfin)

    The nova-console service is xenapi-specific and was deprecated in stein: https://review.openstack.org/#/c/610075/

    There are, however, REST APIs for it: https://developer.openstack.org/api-ref/compute/#server-consoles-servers-os-consoles-os-console-auth-tokens

    (stephenfin): Maybe I misunderstood you, but I thought these APIs could also talk to the DB stuff Mel did?

    So if we drop the nova-console service, the APIs would no longer work. It seems our options are:

    Delete the nova-console service and obsolete the REST APIs (410 response on all microversions like what we're doing with nova-cells and nova-network APIs)

    Deprecate the REST APIs on a new microversion but continue to support older microversions - this means the nova-console service would live forever.

    Are people still using the nova-console service? Are there alternatives/replacements for xen users? BobBall seemed to suggest there was http://lists.openstack.org/pipermail/openstack-dev/2018-October/135422.html but it's not clear to me.

    Matt DePorter (Rackspace) said this week that they rely on it - but they are on Queens and not sure if there are alternatives (as Bob suggested in the ML).

    But they're also on cellsv1, so they have work to do to upgrade anyway, so they might as well move to whatever isn't these things we want to delete. What is that? So upgrade and migrate from xen to kvm?

    #agree: Do it

    Migrating rootwrap to privsep

    #agree: Continue getting an MVP since it's not any worse than what we have and mikal is doing the work

    Bumping the minimum microversion

    Did this ever progress in ironic?

    #agree: nope

    imagebackend/image cache another go? (mdbooth)

    (mriedem): How could we have real integration testing for this before refactoring it? Not tempest, just a minimal devstack with (for lack of a better word) exercises.

    Why not tempest? Too heavy? Tempest tests the API, you need low-level testing of the cache to see if it's doing what you expect. Whitebox?

    I refactored the unit tests to be somewhat functional-y at the time for exactly this reason. This was before functional was a thing.

    Clean up rest of the needless libvirt driver version constants and compat code(kashyap)

    Mostly an FYI (as this is a recurring item)

    WIP: https://review.opendev.org/#/q/topic:Bump_min_libvirt_and_QEMU_for_Stein+(status:open+OR+status:merged)

    (kashyap) Some more compat code to be cleaned up, it's noted at the end of this (merge) change: https://review.opendev.org/#/c/632507/

    Remove mox (takashin)

    https://review.opendev.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/mox-removal-train

    Remove explicit eventlet usage (preferably entirely), specifically to allow wsgi to no longer require it. (mdbooth)

    Removing virt drivers that no longer have third-party CI (cough XenAPI cough)

    Removing fake libvirt driver (we use that! mdbooth) "we" who? It's basically a hack that's in place only because mocking wasn't done properly. mikal has a series to do the mocking properly and remove that driver. Link: https://review.opendev.org/#/q/topic:fake-cleanup+(status:open+OR+status:merged)

    Ah, fake libvirt *driver*. The fake libvirt module is used in functional (by the real libvirt driver).

    Fixing OSC's live migrate interface (mriedem)

    OSC CLI is not like nova and defaults to 2.1 unless the user overrides on the CLI or uses an environment variable. The "openstack server migrate --live <host>" CLI therefore essentially makes all live migrations by default forced live migrations, bypassing the scheduler, which is very dangerous.

    Changing the interface is likely going to have to be a breaking change and major version bump, but it needs to happen and has been put off too long.

    Let's agree on the desired interface, taking into account that you can also specify a host with cold migration now too, using the same CLI (openstack server migrate).

    See https://review.openstack.org/#/c/627801/ and the referenced changes for attempts at fixing this.

    (dtroyer) IIRC at least part of this can be done without breaking changes and should move forward.  But yeah, it's a mess and time to fix it...

    More details in the Forum session etherpad: https://etherpad.openstack.org/p/DEN-osc-compute-api-gaps

    See the ML summary: http://lists.openstack.org/pipermail/openstack-discuss/2019-May/005783.html

    Let's plan the next steps of the bandwidth support feature (gibi)  [efried: XPROJ? does this need to involve neutron folks? If so, please add to https://etherpad.openstack.org/p/ptg-train-xproj-nova-neutron] (gibi): these items are mostly nova-only things but I added an XPROJ item to the Neutron pad about multisegment support.

    Obvious next step is supporting server move operations with bandwidth: https://blueprints.launchpad.net/nova/+spec/support-server-move-operations-with-ports-having-resource-request spec: https://review.opendev.org/#/c/652608

    Question: Do we want to add support for server move operations with ports having resource requests as a new API microversion or as bug fixes? (gibi)

    background from ML http://lists.openstack.org/pipermail/openstack-discuss/2019-January/001881.html

    Server delete and port detach work without requiring a specific microversion

    Server create works since microversion 2.72

    Server move operations rejected since https://review.openstack.org/#/c/630725

    #agree : no microversion (will be proposed in the spec, and allow people to object there)

    Question: Can the live migration support depend on the existence of the multiple port bindings extension, or do we have to support the old codepath as well, where the port binding is created by nova-compute on the destination host?

    #agree: Yes, this extension cannot be turned off

    But there are various smaller and bigger enhancements. Tracked in https://blueprints.launchpad.net/nova/+spec/enhance-support-for-ports-having-resource-request Which one seems the most important to focus on in Train?

    Use placement to figure out which RP fulfills a port resource request (currently it is done by Nova). This requires the Placement bp https://blueprints.launchpad.net/nova/+spec/placement-resource-provider-request-group-mapping-in-allocation-candidates to be implemented first.

    A consensus is emerging to do the '"mappings" dict next to "allocations"' solution+1+1

    Supporting SR-IOV ports with resource requests requires virt driver support (currently only in the libvirt driver) to include the parent interface name in the descriptor of the PCI device that represents the VFs. Introduce a new trait-based capability for the virt drivers to report whether they support SR-IOV ports with resource requests, so that scheduling of servers using such ports can be driven by it. Today only the PCI claim stops a boot if the virt driver does not support the feature, and that leads to a reschedule.

    #agree: add a new capability as a trait

    Automatically set group_policy if more than one RequestGroup is generated for an allocation_candidate query in nova. This first needs an agreement on what a good default value for such a policy is (see the example query below).

    #agree: 'none' seems to be a sensible policy

    state that default on the nova side, not placement (which wants explicit)

    # agree: priority order: 1) group_policy, 2) capability trait, 3) port mapping
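
    As a purely illustrative sketch (not nova code), this is roughly the kind of allocation candidates query that results when a server has one bandwidth-requesting port, with the agreed group_policy=none default; the specific resource amounts and custom traits are made-up examples:

        from urllib.parse import urlencode

        # The unnumbered group carries the flavor resources; numbered group 1
        # carries the port's resources/traits. With more than one request
        # group, placement requires group_policy, defaulted here to 'none'.
        params = urlencode({
            'resources': 'VCPU:2,MEMORY_MB:2048,DISK_GB:20',
            'resources1': 'NET_BW_EGR_KILOBIT_PER_SEC:1000,NET_BW_IGR_KILOBIT_PER_SEC:1000',
            'required1': 'CUSTOM_PHYSNET_PHYSNET0,CUSTOM_VNIC_TYPE_NORMAL',
            'group_policy': 'none',
        })
        print('GET /allocation_candidates?' + params)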

    (gibi) The rest is a long shot in Train but I added them for completeness:

    Support attaching a port to a server where the port has a resource request. This needs a way to increase the allocation of a running server. So this requires the in_tree allocation candidate support from placement that was implemented in Stein https://blueprints.launchpad.net/nova/+spec/alloc-candidates-in-tree Also this operation can only be supported if the new, increased allocation still fits on the current compute host the server is running on.

    Support attaching a network to a server where the network has a (default) QoS minimum bandwidth rule.

    Support creating a server with a network that has a (default) QoS minimum bandwidth rule. This requires moving the port create from nova-compute to nova-conductor first.

    Changing how server create force_hosts/nodes works (mriedem)

    Spec: https://review.openstack.org/#/c/645458/ (merged)

    Blueprint https://blueprints.launchpad.net/nova/+spec/add-host-and-hypervisor-hostname-flag-to-create-server approved

    Code https://review.openstack.org/#/c/645520/ started

    See discussion in the mailing list: http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003813.html

    API change option: Add a new parameter (or couple of parameters) to the server create API which would deprecate the weird az:host:node format for forcing a host/node and if used, would run the requested destination through the scheduler filters. This would be like how cold migrate with a target host works today. If users wanted to continue forcing the host and bypass the scheduler, they could still use an older microversion with the az:host:node format.

    Other options: config option or policy rule

    Integrating openstacksdk and replacing use of python-*client

    blueprint: https://blueprints.launchpad.net/nova/+spec/openstacksdk-in-nova

    code:

    openstacksdk patch to support ksa-conf-based connection construction: https://review.openstack.org/#/c/643601/

    Introduce SDK framework to nova (get_sdk_adapter): https://review.opendev.org/#/c/643664/

    WIP use openstacksdk instead of ksa for placement: https://review.opendev.org/#/c/656023/

    WIP/PoC start using openstacksdk instead of python-ironicclient: https://review.openstack.org/#/c/642899/

    questions:

    Community and/or project goal?

    Move from one-conf-per-service to unified conf and/or clouds.yaml

    How does the operator tell us to do this? Config options for

    location of clouds.yaml

    This should be in [DEFAULT] since it'll apply to all services (that support it)

    Which cloud region (is that the right term?) from clouds.yaml to use

    specifying this option would take precedence, ignore the ksa opts, and trigger use of clouds.yaml

    or perhaps a [DEFAULT] use_sdk_for_every_service_that_supports_it_and_use_this_cloud_region

    Process (blueprints/specs required?)
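
    For reference, a minimal sketch of how openstacksdk builds a connection either from a clouds.yaml entry or from explicit auth options (roughly the config choice discussed above); this is not the proposed nova code, and the cloud name, endpoint, and credentials are hypothetical:

        import openstack

        # 1) From clouds.yaml: 'mycloud' is a hypothetical cloud entry in
        #    /etc/openstack/clouds.yaml (or ~/.config/openstack/clouds.yaml).
        conn = openstack.connect(cloud='mycloud')

        # The service proxies replace the per-service python-*client libraries.
        for server in conn.compute.servers(limit=5):
            print(server.id, server.name)

        # 2) From explicit auth arguments (the kind of values a ksa conf
        #    section holds today).
        conn2 = openstack.connect(
            auth_url='https://keystone.example.com/v3',   # hypothetical endpoint
            project_name='demo', username='demo', password='secret',
            user_domain_name='Default', project_domain_name='Default',
            region_name='RegionOne')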

    API inconsistency cleanup (gmann)

    There were multiple API cleanups found which seem worth fixing. These cleanups are API changes, so they need a microversion bump.

    Instead of increasing the microversion separately for each cleanup, I propose to fix them under a single microversion bump.

    Current list of cleanup - https://etherpad.openstack.org/p/nova-api-cleanup

    #. 400 for unknown params, both for query params and for the request body. http://lists.openstack.org/pipermail/openstack-discuss/2019-May/

    Consensus is sure do this.

    #. Remove OS-* prefix from request and response field.

    Alternative: return both in response, accept either in request

    Dan and John are -1 on removing the old fields

    If you're using an SDK it should hide this for you anyway.

    Consensus in the room is to just not do this.

    #. Making server representation always consistent among all APIs returning the complete server representation.

    GET /servers/detail

    GET /servers/{server_id}

    PUT /servers/{server_id}

    POST /servers/{server_id} (rebuild)

    Consensus in the room is this is fine, it's just more fields in the PUT and rebuild responses.

    #. Return ``servers`` field always in response of GET /os-hypervisors

    this was nacked/deferred (i.e. not to be included in same microversion as above)

    Consensus: do it in the same microversion as the above

    #. Consistent error codes on quota exceeded

    this was nacked/deferred

    Spec - https://review.openstack.org/#/c/603969/

    Do we want to also lump https://review.opendev.org/#/c/648919/ (change flavors.swap default from '' [string] to 0 [int] in the response) into gmann's spec? It's a relatively small change. +1

    https://github.com/openstack/nova/blob/11de108daaab4a70e11f13c8adca3f5926aeb540/nova/api/openstack/compute/views/flavors.py#L55

    Consensus is yeah sure let's do this, clients already have to handle the empty string today for older clouds. This just fixes it for newer clouds.
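
    A small illustrative sketch of what consumers have to do today, since older microversions return flavor 'swap' as an empty string when no swap is set, while the proposed cleanup would return the integer 0 (the helper name here is made up):

        def normalize_swap(flavor_dict):
            # Works for both the old ('' or '512') and the cleaned-up (0 or 512) forms.
            return int(flavor_dict.get('swap') or 0)

        print(normalize_swap({'swap': ''}))    # 0   (older clouds / microversions)
        print(normalize_swap({'swap': 0}))     # 0   (after the proposed cleanup)
        print(normalize_swap({'swap': 512}))   # 512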


    Libvirt + block migration + config drive + iso9660

    Would like another option besides enabling rsync or ssh across all compute nodes due to security concerns

    In our specific case we don't use any of the --files options when booting VMs.  We would like to be able to just regenerate the config drive contents on the destination side, instead of copying the existing config drive.

    This is for live migration, cold migration, or both?

    (mriedem): Who is "we"? GoDaddy?

    Secure Boot support for QEMU- and KVM-based Nova instances (kashyap)

    Blueprint: https://blueprints.launchpad.net/nova/+spec/allow-secure-boot-for-qemu-kvm-guests

    Spec (needs to be refreshed): https://review.openstack.org/#/c/506720/ (Add UEFI Secure Boot support for QEMU/KVM guests, using OVMF)

    Use case: Prevent guests from running untrusted code ("malware") at boot time.

    Refer to the periodic updates I posted in the Nova specification over the last year, as various pieces of work in lower layers got completed

    Upstream libvirt recently (13 Mar 2019) merged support for auto-selecting guest firmware: https://libvirt.org/git/?p=libvirt.git;a=commitdiff;h=1dd24167b ("news: Document firmware autoselection for QEMU driver")

    NOTE: With the above libvirt work in place, Nova should have all the pieces ready (OVMF, QEMU, and libvirt) to integrate this.

    PS: Nova already has Secure Boot support for HyperV in-tree (http://git.openstack.org/cgit/openstack/nova/commit/?id=29dab997b4e)

    based on this: https://specs.openstack.org/openstack/nova-specs/specs/newton/approved/hyper-v-uefi-secureboot.html

    Action: Kashyap to write a summary brief to the mailing list

    John Garbutt will look at the spec

    Securing privsep (mdbooth)

    Privsep isn't currently providing any security, and is arguably worse than rootwrap: http://lists.openstack.org/pipermail/openstack-discuss/2019-March/004358.html

    Support filtering of allocation_candidates by forbidden aggregates (tpatil)

    Specs: https://review.opendev.org/#/c/609960/

    #action tpatil to answer questions on spec

    Allow compute nodes to use DISK_GB from shared storage RP (tpatil) [efried 20190422 XPROJ placement]

    Specs: https://review.opendev.org/#/c/650188/

    Disabled compute service request filter (mriedem)

    https://bugs.launchpad.net/nova/+bug/1805984

    related bug on using affinity with limits: https://bugs.launchpad.net/nova/+bug/1827628, we could just do a more generic solution for both.

    Modeling server groups in placement is a longer-term thing that requires some thought.

    Could pre-filter using in_tree filter in the strict affinity case, but that only works if there are already members of the group on a host (I think).

    Move affinity to given host in tree

    anti-affinity, retry a few times to see if you get lucky https://www.youtube.com/watch?v=mBluR6cLxJ8

    PoC using forbidden trait and a request filter: https://review.opendev.org/#/c/654596/

    Should we even do this? It would mean we'd have two sources of truth about a disabled compute and they could get out of sync (we could heal in a periodic on the compute but still).

    Using a trait for this sort of violates the "traits should only be capabilities" thing. "capable of hosting instances" seems like a fundamental capability to me

    https://review.opendev.org/#/c/623558/ attempts to address the reason why CERN limits allocation candidates to a small number (20) (should be fixed regardless)

    If one of the goals with nova's use of placement is to move as many python scheduler filters into placement filtering-in-sql, this would seem to align with that goal.

    How to deal with rolling upgrades while there are older computes? Should the API attempt to set the trait if the compute is too old to have the new code (and remove that in U)?

    Alternatives:

    Use a placement aggregate for all disabled computes and filter using negative member_of: https://docs.openstack.org/placement/latest/placement-api-microversion-history.html#support-forbidden-aggregates

    We'd have to hard-code the aggregate UUID in nova somewhere.

    This could be hard to debug for an operator since we can't put a meaningful name into a hex UUID.  I liked your suggestion: d15ab1ed-dead-dead-dead-000000000000

    Update all resource class inventory on the compute node provider and set reserved=total (like the ironic driver does when a node is in maintenance): https://github.com/openstack/nova/blob/fc3890667e4971e3f0f35ac921c2a6c25f72adec/nova/virt/ironic/driver.py#L882

    Might be a pain if we have to update all inventory classes. Baremetal providers are a bit easier since they have one custom resource class.

    What about nested provider inventory?

    Unable to configure this behavior like a request filter since it's messing with inventory.

    The compute update_provider_tree code would have to be aware of the disabled status to avoid changing the reserved value.

    #agree: Create a standard trait like COMPUTE_DISABLED and add required=!COMPUTE_DISABLED to every request

    TODO: mriedem to push a spec

    #agree: tssurya to push up the CERN downstream fix for bug 1827628 as the backportable fix for now (affinity problem)

    TODO: tssurya will push a spec for the feature
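
    To make the agreed approach concrete, a purely illustrative sketch of the resulting placement query; COMPUTE_DISABLED is the trait name from the agreement above (not yet a defined standard trait), and the resource amounts are placeholders:

        from urllib.parse import urlencode

        # Forbidden traits ("!TRAIT") are supported by placement since
        # microversion 1.22, so every scheduling request would exclude
        # providers tagged with the disabled trait.
        params = urlencode({
            'resources': 'VCPU:1,MEMORY_MB:512,DISK_GB:1',
            'required': '!COMPUTE_DISABLED',
        })
        print('GET /allocation_candidates?' + params)

        # When the operator disables/enables a compute service, nova would add
        # or remove the trait on that compute node's resource provider
        # (PUT /resource_providers/{uuid}/traits with the updated trait list).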

    Clean up orphan instances:  https://review.opendev.org/#/c/627765/   (yonglihe, alex_xu)

    Problem: even though split up, the changes are still long and boring. They need full review.

    Last time discussion link: https://etherpad.openstack.org/p/nova-ptg-stein L931

    Who could help on libvirt module?

    (mriedem): I've reviewed the big change before the split, and am still committed to reviewing this, just haven't thought about it lately - just ping me to remind me about reviews (Huawei also needs this). I don't think we really need this as a PTG item.

    (melwitt): I can also help continue to review. I don't think it's a bug (defect) but the launchpad bug has been changed to Wishlist so I guess that's OK. Will need a release note for it. The change is naturally a bit complex -- I haven't gotten around to reviewing it again lately.

    (johnthetubaguy) having attempted this manually recently, I want to review this too, if I can

    (gibi): I feel that this periodic cleanup papers over some bugs in nova that result in orphans. Can we try to fix the root cause / original bug?

    TODO: https://review.opendev.org/#/c/556751/ so you can archive everything before a given time (not recent stuff). Might help with the case that you archived while a compute was down so the compute wasn't able to delete the guest on compute while it was still in the DB.

    Question: can this be integrated into the existing _cleanup_running_deleted_instances periodic task with new values for the config option, e.g. reap_with_orphans? Rather than mostly duplicating that entire periodic for orphans.

    StarlingX

    Add server sub-resource topology API  https://review.opendev.org/#/c/621476/  (yonglihe, alex_xu)

    Problem: the internal NUMA information of nova is kind of too complex for the end user; we need to expose clear, well-defined, understandable information.

    How we define the information is open:

    a) Starting from the current bp, eliminate the fuzzy parts and keep the clear ones

    b) Come up with a new set of data, if we have a clear model for all that stuff.

    discussion link: https://docs.google.com/document/d/1kRRZFq_ha0T9mFDOEzv0PMvXgtnjGm5ii9mSzdqt1VM/edit?usp=sharing

    (alex) remove the cpu topology from the proposal or just move that out of numa topology?

    only use the CPU pinning info instead of cpuset?

    are hugepages per NUMA node or not?

    bp: https://blueprints.launchpad.net/nova/+spec/show-server-numa-topology

    Last time discussion link:  https://etherpad.openstack.org/p/nova-ptg-stein  L901

    Who could help on NUMA module?

    StarlingX

    Briefly discuss idea for transferring ownership of nova resources (melwitt)

    Want to run the idea by the team and get a sanity check or yea/nay

    From the "Change ownership of resources - followup" session from Monday: https://etherpad.openstack.org/p/DEN-change-ownership-of-resources

    Idea: build upon the implementation in https://github.com/kk7ds/oschown/tree/master/oschown

    Each project (nova, cinder, neutron) has its own dir containing the code related to transferring ownership of its resources, to be available as a plugin

    This way, each project is responsible for providing test coverage, regression testing, upgrade testing (?) of their ownership transfer code. This is meant to address concerns around testing and maintenance of transfer code over time and across releases

    Then, something (a microservice that has DB creds to nova/cinder/neutron, or maybe an adjutant workflow) will load plugins from all the projects and be able to carry out ownership changes based on a call to its REST API

    AGREE: sounds reasonable, melwitt to talk to tobberydberg and send summary to ML and figure out next steps

    Reduce RAM & CPU quota usage for shelved servers (mnestratov) - this is actually superseded by https://review.opendev.org/#/c/638073/

    Spec https://review.opendev.org/#/c/656806/

    there was a bug closed as invalid https://bugs.launchpad.net/nova/+bug/1630454 proposing to create a spec


    StarlingX Reviews

    RBD: https://review.opendev.org/#/c/640271/, https://review.opendev.org/#/c/642667/

    auto-converge spec: https://review.opendev.org/#/c/651681/

    vCPU model:spec: https://review.openstack.org/#/c/642030/

    NUMA aware live migration - this needs fixing first :(

    #action: Prioritize these for review somehow (runway, gerrit priorities, ...)




Thursday:
0900-0915: Settle, greet, caffeinate
0915-0945: Retrospective https://etherpad.openstack.org/p/nova-ptg-train-retrospective
0945-1000: Nova governance (stephenfin)
1000-1030: cpu modeling in placement
1030-1100: Persistent memory (alex_xu, rui zang)
1100-1130: Support filtering of allocation_candidates by forbidden aggregates (tpatil)
1115-1145: Corner case issues with root volume detach/attach (mriedem/Kevin_Zheng)
1145-1215: Making extra specs less of a forgotten child (stephenfin)
1215-1230: The state of nova's documentation (stephenfin)
1230-1330: Lunch
1330-1400: Let's plan the next steps of the bandwidth support feature (gibi)
1400-1430: RMD - Resource Management Daemon Part I (dakshina-ilangov/IvensZambrano)
1430-1445: Integrating openstacksdk and replacing use of python-*client (efried, dustinc, mordred)
1445-1500: RMD - Resource Management Daemon Part II (dakshina-ilangov/IvensZambrano)
1500-beer: Placement XPROJ: https://etherpad.openstack.org/p/ptg-train-xproj-nova-placement (ordered as shown in etherpad)

Friday:
0900-1000: Cyborg XPROJ (Ballroom 4!): https://etherpad.openstack.org/p/ptg-train-xproj-nova-cyborg
1015-1115: Ironic XPROJ: https://etherpad.openstack.org/p/ptg-train-xproj-nova-ironic
1115-1150: Cinder XPROJ: https://etherpad.openstack.org/p/ptg-train-xproj-nova-cinder in the Cinder room 203 they broadcast via microphones so not so portable.
1150-1200: Team Photo https://ethercalc.openstack.org/3qd1fj5f3tt3
1200-1230: API inconsistency cleanup (gmann)
1230-1330: Lunch
1330-1400: Glance topics - todo: dansmith to summarize the idea in the ML
1400-1515: Neutron XPROJ: https://etherpad.openstack.org/p/ptg-train-xproj-nova-neutron
*1430-1440: Placement team picture
1515-1615: Keystone XPROJ: https://etherpad.openstack.org/p/ptg-train-xproj-nova-keystone
1615-1630: Compute capabilities traits placement request filter (mriedem) (aspiers sched)
1630-beer: Disabled compute service request filter (mriedem) (aspiers sched)

Saturday:
0900-1000: [dtroyer][sean-k-mooney] 3rd party CI for NUMA/PCI/SRIOV (mriedem) https://etherpad.openstack.org/p/nova-ptg-train-ci
1000-1030: Clean up orphan instances, Add server sub-resource topology API (yonglihe, alex_xu)
1030-1045: Tech debt
1115-1130: Securing privsep (mdbooth)
1100-1115: StarlingX patches
1115-1130: Secure Boot support for QEMU- and KVM-based Nova instances (kashyap sched)
1130-1200: New virt driver for rsd
1200-1230: Train Theme setting https://etherpad.openstack.org/p/nova-train-themes
1230-1330: Lunch
1330-1345: Governance (single-company patch+approval, trusting cores) cont'd
1345-beer: Continue deferred discussions


Deferred

    Governance:

    two cores from same company

    Mdbooth proposed words: https://etherpad.openstack.org/p/nova-ptg-train-governance

    mini-cores: can we trust them to be SMEs and not just shove in new code? (Isn't that the same question we ask for other cores?)





