[openstack-dev] [nova] Stein forum session notes

melanie witt melwittt at gmail.com
Wed Nov 21 20:38:30 UTC 2018

On Mon, 19 Nov 2018 08:31:59 -0500, Jay Pipes wrote:
> Thanks for the highlights, Melanie. Appreciated. Some thoughts inline...
> On 11/19/2018 04:17 AM, melanie witt wrote:
>> Hey all,
>> Here are some notes I took in forum sessions I attended -- feel free to
>> add notes on sessions I missed.
>> Etherpad links: https://wiki.openstack.org/wiki/Forum/Berlin2018
>> Cheers,
>> -melanie
>> TUE
>> ---
>> Cells v2 updates
>> ================
>> - Went over the etherpad, no objections to anything
>> - Not directly related to the session, but CERN (hallway track) and
>> NeCTAR (dev ML) have both given feedback and asked that the
>> policy-driven idea for handling quota for down cells be avoided. Revived
>> the "propose counting quota in placement" spec to see if there's any way
>> forward here
> \o/
>> Getting users involved in the project
>> =====================================
>> - Disconnect between SIGs/WGs and project teams
>> - Subscribing to the ML is seen as too steep a first step toward
>> getting involved
>> - People confused about how to participate
> Seriously? If subscribing to a mailing list is seen as too much of a
> burden for users to provide feedback, I'm wondering what the point is of
> having an open source community at all.
>> Community outreach when culture, time zones, and language differ
>> ================================================================
>> - Most discussion around how to synchronize real-time communication
>> considering different time zones
>> - Best to emphasize asynchronous communication. Discussion on ML and
>> gerrit reviews
> +1
>> - Helpful to create weekly meeting agenda in advance so contributors
>> from other time zones can add notes/responses to discussion items
> +1, though I think it's also good to be able to say "look, nobody has
> brought up anything they'd like to discuss this week so let's not take
> time out of people's busy schedules if there's nothing to discuss".
>> WED
>> ---
>> NFV/HPC pain points
>> ===================
>> Top issues for immediate action: NUMA-aware live migration (spec just
>> needs re-approval), improved scheduler logging (resurrect cfriesen's
>> patch and clean it up), distant third is SRIOV live migration
>> BFV improvements
>> ================
>> - Went over the etherpad, no major objections to anything
>> - Agree: we should expose boot_index from the attachments API
>> - Unclear what to do about post-create delete_on_termination. Being able
>> to specify it for attach sounds reasonable, but is it enough for those
>> asking? Or would it end up serving no one?
>> Better expose what we produce
>> =============================
>> - Project teams should propose patches to openstack/openstack-map to
>> improve their project pages
>> - Would be ideal if project pages included a longer paragraph explaining
>> the project, a diagram, a list of SIGs/WGs related to the project, etc
>> Blazar reservations to new resource types
>> =========================================
>> - For nova compute hosts, reservations are done by putting reserved
>> hosts into "blazar" host aggregate and then a special scheduler filter
>> is used to exclude those hosts from scheduling. But how to extend that
>> concept to other projects?
>> - Note: the nova approach will change from scheduler filter => placement
>> request filter
> Didn't we agree in Denver to use a placement request filter that
> generated a forbidden aggregate request for this? I know Matt has had
> concerns about the proposed spec for forbidden aggregates not adequately
> explaining the Nova side configuration, but I was under the impression
> the general idea of using a forbidden aggregate placement request filter
> was a good one?
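
For what it's worth, the forbidden-aggregate idea can be sketched roughly
like this (the function name is illustrative, not nova's actual code;
placement accepts negative member_of values like "!in:<agg>" starting with
microversion 1.32):

```python
# Hypothetical sketch of a placement "request filter" step: exclude hosts
# in reserved (e.g. "blazar") aggregates by adding a forbidden-aggregate
# constraint to the GET /allocation_candidates query parameters.

def add_forbidden_aggregates(query_params, reserved_aggregate_uuids):
    """Append a forbidden member_of constraint to placement query params.

    query_params: dict of GET /allocation_candidates parameters.
    reserved_aggregate_uuids: aggregates whose hosts must be excluded.
    """
    if not reserved_aggregate_uuids:
        return query_params
    forbidden = "!in:" + ",".join(sorted(reserved_aggregate_uuids))
    # placement allows repeated member_of parameters; keep any existing ones
    existing = query_params.get("member_of", [])
    query_params["member_of"] = existing + [forbidden]
    return query_params
```

The point being that the exclusion happens in the placement query itself,
before scheduling, rather than in a scheduler filter after candidate hosts
have already been returned.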
>> Edge use cases and requirements
>> ===============================
>> - Showed the reference architectures again
>> - Most popular use case was "Mobile service provider 5G/4G virtual RAN
>> deployment and Edge Cloud B2B2X" with seven +1s on the etherpad
> Snore.
> Until one of those +1s is willing to uncouple nova-compute's tight use
> of rabbitmq and RDBMS-over-rabbitmq that we use as our control plane in
> Nova, all the talk of "edge" this and "MEC" that is nothing more than
> ... well, talk.
>> Deletion of project and project resources
>> =========================================
>> - What is wanted: a delete API per service that takes a project_id and
>> force-deletes all resources owned by it, with a --dry-run option
>> - Challenge to work out the dependencies for the order of deletion of
>> all resources in all projects. Disable project, then delete things in
>> order of dependency
>> - Idea: turn os-purge into a REST API and have each project implement
>> a plugin for it
> I don't see why a REST API would be needed. We could more easily
> implement the functionality by focusing on a plugin API for each service
> project and leaving it at that.
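
To make the plugin-API-per-service idea concrete, here is a rough sketch
of what it could look like (all names are illustrative, not an agreed
design): each project registers a purge plugin, and a driver walks them
in dependency order with dry-run support.

```python
# Hypothetical purge plugin API: each service implements PurgePlugin;
# purge_project() orders plugins by dependency and supports --dry-run.
import abc


class PurgePlugin(abc.ABC):
    # lower numbers are purged first (e.g. instances before networks)
    order = 100

    @abc.abstractmethod
    def list_resources(self, project_id):
        """Return identifiers of resources owned by project_id."""

    @abc.abstractmethod
    def delete_resource(self, resource_id):
        """Force-delete one resource."""


def purge_project(project_id, plugins, dry_run=False):
    """Purge all of a project's resources, honoring plugin ordering.

    Returns the list of (plugin name, resource id) pairs that were (or,
    with dry_run=True, would have been) deleted.
    """
    report = []
    for plugin in sorted(plugins, key=lambda p: p.order):
        for rid in plugin.list_resources(project_id):
            report.append((type(plugin).__name__, rid))
            if not dry_run:
                plugin.delete_resource(rid)
    return report
```

The assumption here is that the project has already been disabled in
keystone before the purge runs, per the session notes.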
>> Getting operators' bug fixes upstreamed
>> =======================================
>> - Problem: operator reports a bug and provides a solution, for example,
>> pastes a diff in launchpad or otherwise describes how to fix the bug.
>> How can we increase the chances of those fixes making it to gerrit?
>> - Concern: are there legal issues with accepting patches pasted into
>> launchpad by someone who hasn't signed the ICLA?
>> - Possible actions: create a best practices guide tailored for operators
>> and socialize it among the ops docs/meetup/midcycle group. Example:
>> guidance on how to indicate you don't have time to add test coverage,
>> etc when you propose a patch
>> THU
>> ---
>> Bug triage: why not all the community?
>> ======================================
>> - Cruft and mixing tasks with defect reports makes triage more difficult
>> to manage. Example: difference between a defect reported by a user vs an
>> effective TODO added by a developer. If New bugs were reliably from end
>> users, would we be more likely to triage?
>> - Bug deputy weekly ML reporting could help
>> - Action: copy the generic portion of the nova bug triage wiki doc into
>> the contributor guide docs. The idea/hope being that easy-to-understand
>> instructions available to the wider community might increase the chances
>> of people outside of the project team being capable of triaging bugs, so
>> all of it doesn't fall on project teams
>> - Idea: should we remove the bug supervisor requirement from nova to
>> allow people who haven't joined the bug team to set Status and Importance?
>> Current state of volume encryption
>> ==================================
>> - Feedback: public clouds can't offer encryption because keys are stored
>> in the cloud. Telcos are required to make sure admin can't access
>> secrets. Action: SecuStack has a PoC for E2E key transfer, mnaser to
>> help see what could be upstreamed
>> - Features needed: ability for users to provide their own keys or use a
>> customer-managed barbican or other key store. Thread:
>> http://lists.openstack.org/pipermail/openstack-dev/2018-November/136258.html
>> Cross-technical leadership session (OpenStack, Kata, StarlingX, Airship,
>> Zuul)
>> ========================================================================
>> - Noted on the etherpad how leadership positions work in each
>> project, to compare the differences
>> - StarlingX taking a new approach for upstreaming, New strategy: align
>> with master, analyze what they need, and address the gaps (as opposed to
>> pushing all the deltas up). Bug fixes still need to be brought forward,
>> that won't change
>> Concurrency limits for service instance creation
>> ================================================
>> - Looking for ways to test and detect changes in performance as a
>> community. Not straightforward because test hardware must stay
>> consistent in order to detect performance deltas, release to release.
>> Infra can't provide such an environment
>> - Idea: it could help to write up a doc per project with a list of the
>> usual tunables and basic info about how to use them
>> Change of ownership of resources
>> ================================
>> - Ignore the network piece for now, it's the most complicated. Being
>> able to transfer everything else would solve 90% of City Network's use
>> cases
>> - Some ideas around having this be a keystone auth-based access granting
>> instead of an update of project/user, but if keystone could hand user A
>> a token for user B, that token would apply to all resources of user B's,
>> not just the ones desired for transfer
> Whatever happened with the os-chown project Dan started in Denver?
> https://github.com/kk7ds/oschown

What we distilled from the forum session is that at the heart of it, 
what is actually wanted is to be able to grant access to a resource 
owned by project A to project B, for example. It's not so much about 
wanting to literally change project_id/user_id from A to B. So, we asked 
the question, "what if project A could grant access to its resources to 
project B via keystone?" This could work if it is OK for project B to 
gain access to _all_ of project A's resources (since we currently have 
no way to scope access to specific resources). For a use case where it 
is OK for project B to gain access to all of project A's resources, this 
keystone-only idea could work. Doing it 
auth-based through keystone-only would leave project_id/user_id and all 
dependencies intact, making the change only at the auth/project level. 
It is simpler and cleaner.

However, for a use case where it is not OK for project B to gain access 
to all of project A's resources, because we lack the ability to scope 
access to specific resources, the os-chown approach is the only proposal 
we know of that can address it.

So, depending on the use cases, we might be able to explore a keystone 
approach. From what I gathered in the forum session, it sounded like 
City Network might be OK with a project-wide access grant, but Oath 
might need a resource-specific scoped access grant. If those are both 
the case, we would find use in both a keystone access approach and the 
os-chown approach.

>> Update on placement extraction from nova
>> ========================================
>> - Upgrade step additions from integrated placement to extracted
>> placement in TripleO and OpenStackAnsible are being worked on now
>> - Reshaper patches for libvirt and xenapi drivers are up for review
>> - Lab test for vGPU upgrade and reshape + new schedule for libvirt
>> driver patch has been done already
> This is news to me. Can someone provide me a link to where I can get
> some more information about this?
>> - FFU script work needs an owner. Will need to query libvirtd to get
>> mdevs and use PlacementDirect to populate placement
>> Python bindings for the placement API
>> =====================================
>> - Placement client code replicated in different projects: nova, blazar,
>> neutron, cyborg. Want to consolidate it into a common python bindings lib
>> - Consensus was that the placement bindings should go into openstacksdk
>> and then projects will consume it from there
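
As a rough sketch of what shared bindings might look like (the class and
method names here are illustrative, not openstacksdk's actual API), the
key design point is that the authenticated session is injected, so nova,
blazar, neutron, and cyborg can all reuse the same code:

```python
# Hypothetical minimal placement bindings. "session" is anything with a
# .get(url, headers=...) method returning an object with .json() -- for
# example a keystoneauth1 Adapter configured for the placement service.
class PlacementClient:
    def __init__(self, session, microversion="1.30"):
        self.session = session
        # placement negotiates microversions via this header
        self.headers = {"OpenStack-API-Version": "placement %s" % microversion}

    def get_resource_providers(self, **filters):
        """List resource providers, optionally filtered (e.g. name=...)."""
        query = "&".join("%s=%s" % kv for kv in sorted(filters.items()))
        url = "/resource_providers"
        if query:
            url += "?" + query
        resp = self.session.get(url, headers=self.headers)
        return resp.json()["resource_providers"]
```

Each consuming project would then only need to construct a session from
its own config rather than carrying its own copy of the HTTP plumbing.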
>> T series community goal discussion
>> ==================================
>> - Most popular goal ideas: finish moving legacy python-*client CLIs to
>> python-openstackclient; delete project resources, as discussed in the
>> forum session earlier in the week; ensure all projects use ServiceTokens
>> when calling one another with an incoming token
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
