[nova] Stein forum session notes
Hey all,

Here's some notes I took in forum sessions I attended -- feel free to add notes on sessions I missed.

Etherpad links: https://wiki.openstack.org/wiki/Forum/Berlin2018

Cheers,
-melanie

TUE
---

Cells v2 updates
================
- Went over the etherpad, no objections to anything
- Not directly related to the session, but CERN (hallway track) and NeCTAR (dev ML) have both given feedback and asked that the policy-driven idea for handling quota for down cells be avoided. Revived the "propose counting quota in placement" spec to see if there's any way forward here

Getting users involved in the project
=====================================
- Disconnect between SIGs/WGs and project teams
- Too steep a first step to get involved by subscribing to ML
- People confused about how to participate

Community outreach when culture, time zones, and language differ
=================================================================
- Most discussion around how to synchronize real-time communication considering different time zones
- Best to emphasize asynchronous communication. Discussion on ML and gerrit reviews
- Helpful to create weekly meeting agenda in advance so contributors from other time zones can add notes/response to discussion items

WED
---

NFV/HPC pain points
===================
Top issues for immediate action: NUMA-aware live migration (spec just needs re-approval), improved scheduler logging (resurrect cfriesen's patch and clean it up), distant third is SRIOV live migration

BFV improvements
================
- Went over the etherpad, no major objections to anything
- Agree: we should expose boot_index from the attachments API
- Unclear what to do about post-create delete_on_termination. Being able to specify it for attach sounds reasonable, but is it enough for those asking? Or would it end up serving no one?

Better expose what we produce
=============================
- Project teams should propose patches to openstack/openstack-map to improve their project pages
- Would be ideal if project pages included a longer paragraph explaining the project, have a diagram, list SIGs/WGs related to the project, etc

Blazar reservations to new resource types
=========================================
- For nova compute hosts, reservations are done by putting reserved hosts into "blazar" host aggregate and then a special scheduler filter is used to exclude those hosts from scheduling. But how to extend that concept to other projects?
- Note: the nova approach will change from scheduler filter => placement request filter

Edge use cases and requirements
===============================
- Showed the reference architectures again
- Most popular use case was "Mobile service provider 5G/4G virtual RAN deployment and Edge Cloud B2B2X" with seven +1s on the etherpad

Deletion of project and project resources
=========================================
- What is wanted: a delete API per service that takes a project_id and force deletes all resources owned by it with --dry-run component
- Challenge to work out the dependencies for the order of deletion of all resources in all projects. Disable project, then delete things in order of dependency
- Idea: turn os-purge into a REST API and each project implement a plugin for it

Getting operators' bug fixes upstreamed
=======================================
- Problem: operator reports a bug and provides a solution, for example, pastes a diff in launchpad or otherwise describes how to fix the bug. How can we increase the chances of those fixes making it to gerrit?
- Concern: are there legal issues with accepting patches pasted into launchpad by someone who hasn't signed the ICLA?
- Possible actions: create a best practices guide tailored for operators and socialize it among the ops docs/meetup/midcycle group. Example: guidance on how to indicate you don't have time to add test coverage, etc when you propose a patch

THU
---

Bug triage: why not all the community?
======================================
- Cruft and mixing tasks with defect reports makes triage more difficult to manage. Example: difference between a defect reported by a user vs an effective TODO added by a developer. If New bugs were reliably from end users, would we be more likely to triage?
- Bug deputy weekly ML reporting could help
- Action: copy the generic portion of the nova bug triage wiki doc into the contributor guide docs. The idea/hope being that easy-to-understand instructions available to the wider community might increase the chances of people outside of the project team being capable of triaging bugs, so all of it doesn't fall on project teams
- Idea: should we remove the bug supervisor requirement from nova to allow people who haven't joined the bug team to set Status and Importance?

Current state of volume encryption
==================================
- Feedback: public clouds can't offer encryption because keys are stored in the cloud. Telcos are required to make sure admin can't access secrets. Action: SecuStack has a PoC for E2E key transfer, mnaser to help see what could be upstreamed
- Features needed: ability for users to provide keys or use customer barbican or other key store. Thread: http://lists.openstack.org/pipermail/openstack-dev/2018-November/136258.html

Cross-technical leadership session (OpenStack, Kata, StarlingX, Airship, Zuul)
==============================================================================
- Took down the structure of how leadership positions work in each project on the etherpad, look at differences
- StarlingX taking a new approach for upstreaming. New strategy: align with master, analyze what they need, and address the gaps (as opposed to pushing all the deltas up). Bug fixes still need to be brought forward, that won't change

Concurrency limits for service instance creation
=================================================
- Looking for ways to test and detect changes in performance as a community. Not straightforward because test hardware must stay consistent in order to detect performance deltas, release to release. Infra can't provide such an environment
- Idea: it could help to write up a doc per project with a list of the usual tunables and basic info about how to use them

Change of ownership of resources
================================
- Ignore the network piece for now, it's the most complicated. Being able to transfer everything else would solve 90% of City Network's use cases
- Some ideas around having this be a keystone auth-based access granting instead of an update of project/user, but if keystone could hand user A a token for user B, that token would apply to all resources of user B's, not just the ones desired for transfer

Update on placement extraction from nova
=========================================
- Upgrade step additions from integrated placement to extracted placement in TripleO and OpenStackAnsible are being worked on now
- Reshaper patches for libvirt and xenapi drivers are up for review
- Lab test for vGPU upgrade and reshape + new schedule for libvirt driver patch has been done already
- FFU script work needs an owner. Will need to query libvirtd to get mdevs and use PlacementDirect to populate placement

Python bindings for the placement API
=====================================
- Placement client code replicated in different projects: nova, blazar, neutron, cyborg. Want to commonize into python bindings lib
- Consensus was that the placement bindings should go into openstacksdk and then projects will consume it from there

T series community goal discussion
==================================
- Most popular goal ideas: Finish moving legacy python-*client CLIs to python-openstackclient, Deletion of project resources as discussed in forum session earlier in the week, ensure all projects use ServiceTokens when calling one another with incoming token
Thanks for the highlights, Melanie. Appreciated. Some thoughts inline... On 11/19/2018 04:17 AM, melanie witt wrote:
Hey all,
Here's some notes I took in forum sessions I attended -- feel free to add notes on sessions I missed.
Etherpad links: https://wiki.openstack.org/wiki/Forum/Berlin2018
Cheers, -melanie
TUE ---
Cells v2 updates ================ - Went over the etherpad, no objections to anything - Not directly related to the session, but CERN (hallway track) and NeCTAR (dev ML) have both given feedback and asked that the policy-driven idea for handling quota for down cells be avoided. Revived the "propose counting quota in placement" spec to see if there's any way forward here
\o/
Getting users involved in the project ===================================== - Disconnect between SIGs/WGs and project teams - Too steep a first step to get involved by subscribing to ML - People confused about how to participate
Seriously? If subscribing to a mailing list is seen as too much of a burden for users to provide feedback, I'm wondering what the point is of having an open source community at all.
Community outreach when culture, time zones, and language differ ================================================================ - Most discussion around how to synchronize real-time communication considering different time zones - Best to emphasize asynchronous communication. Discussion on ML and gerrit reviews
+1
- Helpful to create weekly meeting agenda in advance so contributors from other time zones can add notes/response to discussion items
+1, though I think it's also good to be able to say "look, nobody has brought up anything they'd like to discuss this week so let's not take time out of people's busy schedules if there's nothing to discuss".
WED ---
NFV/HPC pain points =================== Top issues for immediate action: NUMA-aware live migration (spec just needs re-approval), improved scheduler logging (resurrect cfriesen's patch and clean it up), distant third is SRIOV live migration
BFV improvements ================ - Went over the etherpad, no major objections to anything - Agree: we should expose boot_index from the attachments API - Unclear what to do about post-create delete_on_termination. Being able to specify it for attach sounds reasonable, but is it enough for those asking? Or would it end up serving no one?
Better expose what we produce ============================= - Project teams should propose patches to openstack/openstack-map to improve their project pages - Would be ideal if project pages included a longer paragraph explaining the project, have a diagram, list SIGs/WGs related to the project, etc
Blazar reservations to new resource types ========================================= - For nova compute hosts, reservations are done by putting reserved hosts into "blazar" host aggregate and then a special scheduler filter is used to exclude those hosts from scheduling. But how to extend that concept to other projects? - Note: the nova approach will change from scheduler filter => placement request filter
Didn't we agree in Denver to use a placement request filter that generated a forbidden aggregate request for this? I know Matt has had concerns about the proposed spec for forbidden aggregates not adequately explaining the Nova side configuration, but I was under the impression the general idea of using a forbidden aggregate placement request filter was a good one?
Edge use cases and requirements =============================== - Showed the reference architectures again - Most popular use case was "Mobile service provider 5G/4G virtual RAN deployment and Edge Cloud B2B2X" with seven +1s on the etherpad
Snore. Until one of those +1s is willing to uncouple nova-compute's tight use of rabbitmq and RDBMS-over-rabbitmq that we use as our control plane in Nova, all the talk of "edge" this and "MEC" that is nothing more than ... well, talk.
Deletion of project and project resources ========================================= - What is wanted: a delete API per service that takes a project_id and force deletes all resources owned by it with --dry-run component - Challenge to work out the dependencies for the order of deletion of all resources in all projects. Disable project, then delete things in order of dependency - Idea: turn os-purge into a REST API and each project implement a plugin for it
I don't see why a REST API would be needed. We could more easily implement the functionality by focusing on a plugin API for each service project and leaving it at that.
Getting operators' bug fixes upstreamed ======================================= - Problem: operator reports a bug and provides a solution, for example, pastes a diff in launchpad or otherwise describes how to fix the bug. How can we increase the chances of those fixes making it to gerrit? - Concern: are there legal issues with accepting patches pasted into launchpad by someone who hasn't signed the ICLA? - Possible actions: create a best practices guide tailored for operators and socialize it among the ops docs/meetup/midcycle group. Example: guidance on how to indicate you don't have time to add test coverage, etc when you propose a patch
THU ---
Bug triage: why not all the community? ====================================== - Cruft and mixing tasks with defect reports makes triage more difficult to manage. Example: difference between a defect reported by a user vs an effective TODO added by a developer. If New bugs were reliably from end users, would we be more likely to triage? - Bug deputy weekly ML reporting could help - Action: copy the generic portion of the nova bug triage wiki doc into the contributor guide docs. The idea/hope being that easy-to-understand instructions available to the wider community might increase the chances of people outside of the project team being capable of triaging bugs, so all of it doesn't fall on project teams - Idea: should we remove the bug supervisor requirement from nova to allow people who haven't joined the bug team to set Status and Importance?
Current state of volume encryption ================================== - Feedback: public clouds can't offer encryption because keys are stored in the cloud. Telcos are required to make sure admin can't access secrets. Action: SecuStack has a PoC for E2E key transfer, mnaser to help see what could be upstreamed - Features needed: ability for users to provide keys or use customer barbican or other key store. Thread: http://lists.openstack.org/pipermail/openstack-dev/2018-November/136258.html
Cross-technical leadership session (OpenStack, Kata, StarlingX, Airship, Zuul) ======================================================================== - Took down the structure of how leadership positions work in each project on the etherpad, look at differences - StarlingX taking a new approach for upstreaming, New strategy: align with master, analyze what they need, and address the gaps (as opposed to pushing all the deltas up). Bug fixes still need to be brought forward, that won't change
Concurrency limits for service instance creation ================================================ - Looking for ways to test and detect changes in performance as a community. Not straightforward because test hardware must stay consistent in order to detect performance deltas, release to release. Infra can't provide such an environment - Idea: it could help to write up a doc per project with a list of the usual tunables and basic info about how to use them
Change of ownership of resources ================================ - Ignore the network piece for now, it's the most complicated. Being able to transfer everything else would solve 90% of City Network's use cases - Some ideas around having this be a keystone auth-based access granting instead of an update of project/user, but if keystone could hand user A a token for user B, that token would apply to all resources of user B's, not just the ones desired for transfer
Whatever happened with the os-chown project Dan started in Denver? https://github.com/kk7ds/oschown
Update on placement extraction from nova ======================================== - Upgrade step additions from integrated placement to extracted placement in TripleO and OpenStackAnsible are being worked on now - Reshaper patches for libvirt and xenapi drivers are up for review - Lab test for vGPU upgrade and reshape + new schedule for libvirt driver patch has been done already
This is news to me. Can someone provide me a link to where I can get some more information about this?
- FFU script work needs an owner. Will need to query libvirtd to get mdevs and use PlacementDirect to populate placement
Python bindings for the placement API ===================================== - Placement client code replicated in different projects: nova, blazar, neutron, cyborg. Want to commonize into python bindings lib - Consensus was that the placement bindings should go into openstacksdk and then projects will consume it from there
T series community goal discussion ================================== - Most popular goal ideas: Finish moving legacy python-*client CLIs to python-openstackclient, Deletion of project resources as discussed in forum session earlier in the week, ensure all projects use ServiceTokens when calling one another with incoming token
On Mon, 2018-11-19 at 08:31 -0500, Jay Pipes wrote:
Thanks for the highlights, Melanie. Appreciated. Some thoughts inline...
On 11/19/2018 04:17 AM, melanie witt wrote:
Hey all,
Here's some notes I took in forum sessions I attended -- feel free to add notes on sessions I missed.
Etherpad links: https://wiki.openstack.org/wiki/Forum/Berlin2018
Cheers, -melanie
TUE ---
Cells v2 updates ================ - Went over the etherpad, no objections to anything - Not directly related to the session, but CERN (hallway track) and NeCTAR (dev ML) have both given feedback and asked that the policy-driven idea for handling quota for down cells be avoided. Revived the "propose counting quota in placement" spec to see if there's any way forward here
\o/
Getting users involved in the project ===================================== - Disconnect between SIGs/WGs and project teams - Too steep a first step to get involved by subscribing to ML - People confused about how to participate
Seriously? If subscribing to a mailing list is seen as too much of a burden for users to provide feedback, I'm wondering what the point is of having an open source community at all.
I know when I first started working on OpenStack I found the volume of mails, IRC meetings, and gerrit comments to be a little overwhelming. Then my company started sending 5 times the volume of internal mails, and I learned how to use Outlook filters to fix both issues and could narrow in on the topics that matter more to me. That said, I often still miss things on the mailing list. I think it can be a little daunting, but I would still prefer this to using video conferencing etc. as our primary medium for these types of discussions, as that blocks async discussion. As a side note, I personally found gerrit discussion much easier to engage with initially, as it was easier to keep track of the topics I cared about.
Community outreach when culture, time zones, and language differ ================================================================ - Most discussion around how to synchronize real-time communication considering different time zones - Best to emphasize asynchronous communication. Discussion on ML and gerrit reviews
+1
- Helpful to create weekly meeting agenda in advance so contributors from other time zones can add notes/response to discussion items
+1, though I think it's also good to be able to say "look, nobody has brought up anything they'd like to discuss this week so let's not take time out of people's busy schedules if there's nothing to discuss".
WED ---
NFV/HPC pain points =================== Top issues for immediate action: NUMA-aware live migration (spec just needs re-approval), improved scheduler logging (resurrect cfriesen's patch and clean it up), distant third is SRIOV live migration
BFV improvements ================ - Went over the etherpad, no major objections to anything - Agree: we should expose boot_index from the attachments API - Unclear what to do about post-create delete_on_termination. Being able to specify it for attach sounds reasonable, but is it enough for those asking? Or would it end up serving no one?
Better expose what we produce ============================= - Project teams should propose patches to openstack/openstack-map to improve their project pages - Would be ideal if project pages included a longer paragraph explaining the project, have a diagram, list SIGs/WGs related to the project, etc
Blazar reservations to new resource types ========================================= - For nova compute hosts, reservations are done by putting reserved hosts into "blazar" host aggregate and then a special scheduler filter is used to exclude those hosts from scheduling. But how to extend that concept to other projects? - Note: the nova approach will change from scheduler filter => placement request filter
Didn't we agree in Denver to use a placement request filter that generated a forbidden aggregate request for this? I know Matt has had concerns about the proposed spec for forbidden aggregates not adequately explaining the Nova side configuration, but I was under the impression the general idea of using a forbidden aggregate placement request filter was a good one?
Yes, that was the direction we agreed to in Denver: a prefilter that runs before the placement call, detects the presence or absence of a node or instance reservation uuid, and adds the anti-affinity request for the blazar aggregate as appropriate. We were going to add a "not in tree" syntax, ?in_tree=!<uuid of aggregate>, to placement to enable this. So all of this would happen before the placement call, not as an additional post-placement filter. There was also some discussion about whether we should hardcode a known uuid for the blazar az or make it a config option for the filter, read from nova.conf. I believe we also discussed whether the prefilter would have to interact with blazar's API to validate things such as the flavor requirement for the instance reservation case, but I think that was TBD, with the assumption that it was not necessarily required or could be done post-placement if needed. The same "not in tree" mechanism was proposed for the Windows aggregate use case that Tushar raised, I believe, but my memory is a little fuzzy on all the details. We captured them in the blazar etherpad, I think.
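To make the prefilter idea a bit more concrete, here is a rough, hypothetical sketch (not the actual nova code, the agreed spec, or the final query syntax -- the names and the forbidden-aggregate query parameter are illustrative only):

    # Hypothetical sketch of a "forbidden aggregate" request pre-filter.
    # RequestSpec handling, the config value, and the query parameter syntax
    # are all assumptions made for illustration, not real nova/placement APIs.

    BLAZAR_AGGREGATE_UUID = "11111111-2222-3333-4444-555555555555"  # assumed config value


    def apply_blazar_prefilter(request_spec, placement_query):
        """Exclude blazar-reserved hosts unless the request carries a reservation.

        request_spec: dict-like object holding hints for the boot request.
        placement_query: dict of query parameters that will be sent to
        placement's GET /allocation_candidates.
        """
        reservation_id = request_spec.get("reservation_id")
        if reservation_id:
            # The request targets a blazar reservation, so it is allowed to
            # land on the reserved hosts; no extra constraint is added.
            return placement_query

        # No reservation: ask placement to exclude providers that are members
        # of the blazar aggregate (a "forbidden aggregate" style constraint;
        # the "!<uuid>" notation here is only a stand-in for whatever syntax
        # placement ends up supporting).
        placement_query["member_of"] = "!%s" % BLAZAR_AGGREGATE_UUID
        return placement_query


    if __name__ == "__main__":
        query = {"resources": "VCPU:2,MEMORY_MB:4096"}
        print(apply_blazar_prefilter({"reservation_id": None}, query))

The open questions above (hardcoded aggregate uuid vs. nova.conf option, and whether blazar's API needs to be consulted for the instance reservation case) would live in or around a function like this.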
Edge use cases and requirements =============================== - Showed the reference architectures again - Most popular use case was "Mobile service provider 5G/4G virtual RAN deployment and Edge Cloud B2B2X" with seven +1s on the etherpad
Snore.
Until one of those +1s is willing to uncouple nova-compute's tight use of rabbitmq and RDBMS-over-rabbitmq that we use as our control plane in Nova, all the talk of "edge" this and "MEC" that is nothing more than ... well, talk.
Deletion of project and project resources ========================================= - What is wanted: a delete API per service that takes a project_id and force deletes all resources owned by it with --dry-run component - Challenge to work out the dependencies for the order of deletion of all resources in all projects. Disable project, then delete things in order of dependency - Idea: turn os-purge into a REST API and each project implement a plugin for it
I don't see why a REST API would be needed. We could more easily implement the functionality by focusing on a plugin API for each service project and leaving it at that.
Getting operators' bug fixes upstreamed ======================================= - Problem: operator reports a bug and provides a solution, for example, pastes a diff in launchpad or otherwise describes how to fix the bug. How can we increase the chances of those fixes making it to gerrit? - Concern: are there legal issues with accepting patches pasted into launchpad by someone who hasn't signed the ICLA? - Possible actions: create a best practices guide tailored for operators and socialize it among the ops docs/meetup/midcycle group. Example: guidance on how to indicate you don't have time to add test coverage, etc when you propose a patch
THU ---
Bug triage: why not all the community? ====================================== - Cruft and mixing tasks with defect reports makes triage more difficult to manage. Example: difference between a defect reported by a user vs an effective TODO added by a developer. If New bugs were reliably from end users, would we be more likely to triage? - Bug deputy weekly ML reporting could help - Action: copy the generic portion of the nova bug triage wiki doc into the contributor guide docs. The idea/hope being that easy-to-understand instructions available to the wider community might increase the chances of people outside of the project team being capable of triaging bugs, so all of it doesn't fall on project teams - Idea: should we remove the bug supervisor requirement from nova to allow people who haven't joined the bug team to set Status and Importance?
Current state of volume encryption ================================== - Feedback: public clouds can't offer encryption because keys are stored in the cloud. Telcos are required to make sure admin can't access secrets. Action: SecuStack has a PoC for E2E key transfer, mnaser to help see what could be upstreamed - Features needed: ability for users to provide keys or use customer barbican or other key store. Thread: http://lists.openstack.org/pipermail/openstack-dev/2018-November/136258.html
Cross-technical leadership session (OpenStack, Kata, StarlingX, Airship, Zuul) ======================================================================== - Took down the structure of how leadership positions work in each project on the etherpad, look at differences - StarlingX taking a new approach for upstreaming, New strategy: align with master, analyze what they need, and address the gaps (as opposed to pushing all the deltas up). Bug fixes still need to be brought forward, that won't change
Concurrency limits for service instance creation ================================================ - Looking for ways to test and detect changes in performance as a community. Not straightforward because test hardware must stay consistent in order to detect performance deltas, release to release. Infra can't provide such an environment - Idea: it could help to write up a doc per project with a list of the usual tunables and basic info about how to use them
Change of ownership of resources ================================ - Ignore the network piece for now, it's the most complicated. Being able to transfer everything else would solve 90% of City Network's use cases - Some ideas around having this be a keystone auth-based access granting instead of an update of project/user, but if keystone could hand user A a token for user B, that token would apply to all resources of user B's, not just the ones desired for transfer
Whatever happened with the os-chown project Dan started in Denver?
https://github.com/kk7ds/oschown
Update on placement extraction from nova ======================================== - Upgrade step additions from integrated placement to extracted placement in TripleO and OpenStackAnsible are being worked on now - Reshaper patches for libvirt and xenapi drivers are up for review - Lab test for vGPU upgrade and reshape + new schedule for libvirt driver patch has been done already
This is news to me. Can someone provide me a link to where I can get some more information about this?
- FFU script work needs an owner. Will need to query libvirtd to get mdevs and use PlacementDirect to populate placement
Python bindings for the placement API ===================================== - Placement client code replicated in different projects: nova, blazar, neutron, cyborg. Want to commonize into python bindings lib - Consensus was that the placement bindings should go into openstacksdk and then projects will consume it from there
T series community goal discussion ================================== - Most popular goal ideas: Finish moving legacy python-*client CLIs to python-openstackclient, Deletion of project resources as discussed in forum session earlier in the week, ensure all projects use ServiceTokens when calling one another with incoming token
Jay Pipes <jaypipes@gmail.com> writes:
Thanks for the highlights, Melanie. Appreciated. Some thoughts inline...
On 11/19/2018 04:17 AM, melanie witt wrote:
Hey all,
Here's some notes I took in forum sessions I attended -- feel free to add notes on sessions I missed.
Etherpad links: https://wiki.openstack.org/wiki/Forum/Berlin2018
Cheers, -melanie
TUE ---
Getting users involved in the project ===================================== - Disconnect between SIGs/WGs and project teams - Too steep a first step to get involved by subscribing to ML - People confused about how to participate
Seriously? If subscribing to a mailing list is seen as too much of a burden for users to provide feedback, I'm wondering what the point is of having an open source community at all.
IIRC, this was specifically about the *first* step of engaging with users and potential contributors. Throwing them into the deep end of the pool isn't a very gentle way to introduce them to the community, so even if we eventually need them to be able to join and participate on the mailing list maybe that's not the best answer to questions like "how do I get involved?"
WED ---
Deletion of project and project resources ========================================= - What is wanted: a delete API per service that takes a project_id and force deletes all resources owned by it with --dry-run component - Challenge to work out the dependencies for the order of deletion of all resources in all projects. Disable project, then delete things in order of dependency - Idea: turn os-purge into a REST API and each project implement a plugin for it
I don't see why a REST API would be needed. We could more easily implement the functionality by focusing on a plugin API for each service project and leaving it at that.
A REST API is easier to trigger from self-service operations like a user closing their account. We talked about updating os-purge to use plugins and building an OSC command that uses os-purge, and I see the possibility of adding a REST API to a service like Adjutant in the future. But, one step at a time. -- Doug
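For what it's worth, a minimal sketch of what a per-service purge plugin interface along those lines could look like -- purely illustrative, not the actual os-purge or openstackclient design, and every class/method name here is an assumption:

    # Minimal sketch of a per-service "purge project resources" plugin API.
    # Everything here (class names, method signatures, ordering scheme) is
    # hypothetical, intended only to illustrate the plugin + --dry-run idea.
    import abc


    class ProjectPurger(abc.ABC):
        """One implementation per service (nova, cinder, neutron, ...)."""

        # Lower numbers are purged first, so dependent resources
        # (e.g. instances) go before the things they depend on (e.g. networks).
        order = 100

        @abc.abstractmethod
        def list_resources(self, project_id):
            """Return a list of (resource_type, resource_id) owned by project_id."""

        @abc.abstractmethod
        def delete_resource(self, resource_type, resource_id):
            """Force-delete a single resource."""

        def purge(self, project_id, dry_run=True):
            resources = self.list_resources(project_id)
            for rtype, rid in resources:
                if dry_run:
                    print("would delete %s %s" % (rtype, rid))
                else:
                    self.delete_resource(rtype, rid)
            return resources


    def purge_project(purgers, project_id, dry_run=True):
        """Run every registered plugin in dependency order."""
        for purger in sorted(purgers, key=lambda p: p.order):
            purger.purge(project_id, dry_run=dry_run)

An OSC command, or later a REST API in something like Adjutant, could then just be a thin wrapper around purge_project().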
On 19/11/18 8:31 AM, Jay Pipes wrote:
Getting users involved in the project ===================================== - Disconnect between SIGs/WGs and project teams - Too steep a first step to get involved by subscribing to ML - People confused about how to participate
Seriously? If subscribing to a mailing list is seen as too much of a burden for users to provide feedback, I'm wondering what the point is of having an open source community at all.
The gist of the session was that the steps we have traditionally recommended were appropriate for somebody who has just been hired to work (substantially) full-time on upstream OpenStack, which at one time was not only common but arguably the best thing for us to concentrate on optimising. However, the same is not necessarily true for other types of contributors.

For example, if someone is an operator of OpenStack with no time carved out to contribute, we still want them to push any patches they have upstream where possible, and some of those folks may even go on from there to become long-term contributors. (Ditto for end-users and bug reports.) For those folks, "sign up for ~17k p/a emails straight to your inbox and then maybe we'll talk" isn't the most helpful first step.

FWIW, my input to the session was essentially this:

* It isn't actually a great mystery how you retain and grow casual contributors: you make sure they get immediate, friendly, actionable, but most importantly immediate feedback any time they interact with the rest of the community. I believe that all of us understand this on some level.
* If this were our top priority it would be completely feasible (though something else would certainly be sacrificed).
* Revealed preferences suggest that it isn't, which is why we are discussing everything but that.

cheers,
Zane.
On Mon, 19 Nov 2018 08:31:59 -0500, Jay Pipes wrote:
Thanks for the highlights, Melanie. Appreciated. Some thoughts inline...
On 11/19/2018 04:17 AM, melanie witt wrote:
Hey all,
Here's some notes I took in forum sessions I attended -- feel free to add notes on sessions I missed.
Etherpad links: https://wiki.openstack.org/wiki/Forum/Berlin2018
Cheers, -melanie
TUE ---
Cells v2 updates ================ - Went over the etherpad, no objections to anything - Not directly related to the session, but CERN (hallway track) and NeCTAR (dev ML) have both given feedback and asked that the policy-driven idea for handling quota for down cells be avoided. Revived the "propose counting quota in placement" spec to see if there's any way forward here
\o/
Getting users involved in the project ===================================== - Disconnect between SIGs/WGs and project teams - Too steep a first step to get involved by subscribing to ML - People confused about how to participate
Seriously? If subscribing to a mailing list is seen as too much of a burden for users to provide feedback, I'm wondering what the point is of having an open source community at all.
Community outreach when culture, time zones, and language differ ================================================================ - Most discussion around how to synchronize real-time communication considering different time zones - Best to emphasize asynchronous communication. Discussion on ML and gerrit reviews
+1
- Helpful to create weekly meeting agenda in advance so contributors from other time zones can add notes/response to discussion items
+1, though I think it's also good to be able to say "look, nobody has brought up anything they'd like to discuss this week so let's not take time out of people's busy schedules if there's nothing to discuss".
WED ---
NFV/HPC pain points =================== Top issues for immediate action: NUMA-aware live migration (spec just needs re-approval), improved scheduler logging (resurrect cfriesen's patch and clean it up), distant third is SRIOV live migration
BFV improvements ================ - Went over the etherpad, no major objections to anything - Agree: we should expose boot_index from the attachments API - Unclear what to do about post-create delete_on_termination. Being able to specify it for attach sounds reasonable, but is it enough for those asking? Or would it end up serving no one?
Better expose what we produce ============================= - Project teams should propose patches to openstack/openstack-map to improve their project pages - Would be ideal if project pages included a longer paragraph explaining the project, have a diagram, list SIGs/WGs related to the project, etc
Blazar reservations to new resource types ========================================= - For nova compute hosts, reservations are done by putting reserved hosts into "blazar" host aggregate and then a special scheduler filter is used to exclude those hosts from scheduling. But how to extend that concept to other projects? - Note: the nova approach will change from scheduler filter => placement request filter
Didn't we agree in Denver to use a placement request filter that generated a forbidden aggregate request for this? I know Matt has had concerns about the proposed spec for forbidden aggregates not adequately explaining the Nova side configuration, but I was under the impression the general idea of using a forbidden aggregate placement request filter was a good one?
Yes, I think that is what was meant by the note that, going forward, the nova approach will change from the present scheduler filter (excluding the "blazar" aggregate) to a placement request filter (the forbidden aggregate). The agreed idea you're talking about is what was mentioned in the session.
Edge use cases and requirements =============================== - Showed the reference architectures again - Most popular use case was "Mobile service provider 5G/4G virtual RAN deployment and Edge Cloud B2B2X" with seven +1s on the etherpad
Snore.
Until one of those +1s is willing to uncouple nova-compute's tight use of rabbitmq and RDBMS-over-rabbitmq that we use as our control plane in Nova, all the talk of "edge" this and "MEC" that is nothing more than ... well, talk.
Deletion of project and project resources ========================================= - What is wanted: a delete API per service that takes a project_id and force deletes all resources owned by it with --dry-run component - Challenge to work out the dependencies for the order of deletion of all resources in all projects. Disable project, then delete things in order of dependency - Idea: turn os-purge into a REST API and each project implement a plugin for it
I don't see why a REST API would be needed. We could more easily implement the functionality by focusing on a plugin API for each service project and leaving it at that.
Getting operators' bug fixes upstreamed ======================================= - Problem: operator reports a bug and provides a solution, for example, pastes a diff in launchpad or otherwise describes how to fix the bug. How can we increase the chances of those fixes making it to gerrit? - Concern: are there legal issues with accepting patches pasted into launchpad by someone who hasn't signed the ICLA? - Possible actions: create a best practices guide tailored for operators and socialize it among the ops docs/meetup/midcycle group. Example: guidance on how to indicate you don't have time to add test coverage, etc when you propose a patch
THU ---
Bug triage: why not all the community? ====================================== - Cruft and mixing tasks with defect reports makes triage more difficult to manage. Example: difference between a defect reported by a user vs an effective TODO added by a developer. If New bugs were reliably from end users, would we be more likely to triage? - Bug deputy weekly ML reporting could help - Action: copy the generic portion of the nova bug triage wiki doc into the contributor guide docs. The idea/hope being that easy-to-understand instructions available to the wider community might increase the chances of people outside of the project team being capable of triaging bugs, so all of it doesn't fall on project teams - Idea: should we remove the bug supervisor requirement from nova to allow people who haven't joined the bug team to set Status and Importance?
Current state of volume encryption ================================== - Feedback: public clouds can't offer encryption because keys are stored in the cloud. Telcos are required to make sure admin can't access secrets. Action: SecuStack has a PoC for E2E key transfer, mnaser to help see what could be upstreamed - Features needed: ability for users to provide keys or use customer barbican or other key store. Thread: http://lists.openstack.org/pipermail/openstack-dev/2018-November/136258.html
Cross-technical leadership session (OpenStack, Kata, StarlingX, Airship, Zuul) ======================================================================== - Took down the structure of how leadership positions work in each project on the etherpad, look at differences - StarlingX taking a new approach for upstreaming, New strategy: align with master, analyze what they need, and address the gaps (as opposed to pushing all the deltas up). Bug fixes still need to be brought forward, that won't change
Concurrency limits for service instance creation ================================================ - Looking for ways to test and detect changes in performance as a community. Not straightforward because test hardware must stay consistent in order to detect performance deltas, release to release. Infra can't provide such an environment - Idea: it could help to write up a doc per project with a list of the usual tunables and basic info about how to use them
Change of ownership of resources ================================ - Ignore the network piece for now, it's the most complicated. Being able to transfer everything else would solve 90% of City Network's use cases - Some ideas around having this be a keystone auth-based access granting instead of an update of project/user, but if keystone could hand user A a token for user B, that token would apply to all resources of user B's, not just the ones desired for transfer
Whatever happened with the os-chown project Dan started in Denver?
https://github.com/kk7ds/oschown
Update on placement extraction from nova ======================================== - Upgrade step additions from integrated placement to extracted placement in TripleO and OpenStackAnsible are being worked on now - Reshaper patches for libvirt and xenapi drivers are up for review - Lab test for vGPU upgrade and reshape + new schedule for libvirt driver patch has been done already
This is news to me. Can someone provide me a link to where I can get some more information about this?
So, I don't see a comment from Sylvain on the patch review itself, but I know that he has run the lab test on real hardware. I recall this IRC log, where he explained he ran the test before the "new schedule" patch landed:

http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2...

And here he confirms that a second test with reshape + new schedule works, with the patch either applied after "new schedule" landed or rebased on top of it:

http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2...
- FFU script work needs an owner. Will need to query libvirtd to get mdevs and use PlacementDirect to populate placement
Python bindings for the placement API ===================================== - Placement client code replicated in different projects: nova, blazar, neutron, cyborg. Want to commonize into python bindings lib - Consensus was that the placement bindings should go into openstacksdk and then projects will consume it from there
T series community goal discussion ================================== - Most popular goal ideas: Finish moving legacy python-*client CLIs to python-openstackclient, Deletion of project resources as discussed in forum session earlier in the week, ensure all projects use ServiceTokens when calling one another with incoming token
On Mon, 19 Nov 2018 08:31:59 -0500, Jay Pipes wrote:
Thanks for the highlights, Melanie. Appreciated. Some thoughts inline...
On 11/19/2018 04:17 AM, melanie witt wrote:
Hey all,
Here's some notes I took in forum sessions I attended -- feel free to add notes on sessions I missed.
Etherpad links: https://wiki.openstack.org/wiki/Forum/Berlin2018
Cheers, -melanie
TUE ---
Cells v2 updates ================ - Went over the etherpad, no objections to anything - Not directly related to the session, but CERN (hallway track) and NeCTAR (dev ML) have both given feedback and asked that the policy-driven idea for handling quota for down cells be avoided. Revived the "propose counting quota in placement" spec to see if there's any way forward here
\o/
Getting users involved in the project ===================================== - Disconnect between SIGs/WGs and project teams - Too steep a first step to get involved by subscribing to ML - People confused about how to participate
Seriously? If subscribing to a mailing list is seen as too much of a burden for users to provide feedback, I'm wondering what the point is of having an open source community at all.
Community outreach when culture, time zones, and language differ ================================================================ - Most discussion around how to synchronize real-time communication considering different time zones - Best to emphasize asynchronous communication. Discussion on ML and gerrit reviews
+1
- Helpful to create weekly meeting agenda in advance so contributors from other time zones can add notes/response to discussion items
+1, though I think it's also good to be able to say "look, nobody has brought up anything they'd like to discuss this week so let's not take time out of people's busy schedules if there's nothing to discuss".
WED ---
NFV/HPC pain points =================== Top issues for immediate action: NUMA-aware live migration (spec just needs re-approval), improved scheduler logging (resurrect cfriesen's patch and clean it up), distant third is SRIOV live migration
BFV improvements ================ - Went over the etherpad, no major objections to anything - Agree: we should expose boot_index from the attachments API - Unclear what to do about post-create delete_on_termination. Being able to specify it for attach sounds reasonable, but is it enough for those asking? Or would it end up serving no one?
Better expose what we produce ============================= - Project teams should propose patches to openstack/openstack-map to improve their project pages - Would be ideal if project pages included a longer paragraph explaining the project, have a diagram, list SIGs/WGs related to the project, etc
Blazar reservations to new resource types ========================================= - For nova compute hosts, reservations are done by putting reserved hosts into "blazar" host aggregate and then a special scheduler filter is used to exclude those hosts from scheduling. But how to extend that concept to other projects? - Note: the nova approach will change from scheduler filter => placement request filter
Didn't we agree in Denver to use a placement request filter that generated a forbidden aggregate request for this? I know Matt has had concerns about the proposed spec for forbidden aggregates not adequately explaining the Nova side configuration, but I was under the impression the general idea of using a forbidden aggregate placement request filter was a good one?
Edge use cases and requirements =============================== - Showed the reference architectures again - Most popular use case was "Mobile service provider 5G/4G virtual RAN deployment and Edge Cloud B2B2X" with seven +1s on the etherpad
Snore.
Until one of those +1s is willing to uncouple nova-compute's tight use of rabbitmq and RDBMS-over-rabbitmq that we use as our control plane in Nova, all the talk of "edge" this and "MEC" that is nothing more than ... well, talk.
Deletion of project and project resources ========================================= - What is wanted: a delete API per service that takes a project_id and force deletes all resources owned by it with --dry-run component - Challenge to work out the dependencies for the order of deletion of all resources in all projects. Disable project, then delete things in order of dependency - Idea: turn os-purge into a REST API and each project implement a plugin for it
I don't see why a REST API would be needed. We could more easily implement the functionality by focusing on a plugin API for each service project and leaving it at that.
Getting operators' bug fixes upstreamed ======================================= - Problem: operator reports a bug and provides a solution, for example, pastes a diff in launchpad or otherwise describes how to fix the bug. How can we increase the chances of those fixes making it to gerrit? - Concern: are there legal issues with accepting patches pasted into launchpad by someone who hasn't signed the ICLA? - Possible actions: create a best practices guide tailored for operators and socialize it among the ops docs/meetup/midcycle group. Example: guidance on how to indicate you don't have time to add test coverage, etc when you propose a patch
THU ---
Bug triage: why not all the community? ====================================== - Cruft and mixing tasks with defect reports makes triage more difficult to manage. Example: difference between a defect reported by a user vs an effective TODO added by a developer. If New bugs were reliably from end users, would we be more likely to triage? - Bug deputy weekly ML reporting could help - Action: copy the generic portion of the nova bug triage wiki doc into the contributor guide docs. The idea/hope being that easy-to-understand instructions available to the wider community might increase the chances of people outside of the project team being capable of triaging bugs, so all of it doesn't fall on project teams - Idea: should we remove the bug supervisor requirement from nova to allow people who haven't joined the bug team to set Status and Importance?
Current state of volume encryption ================================== - Feedback: public clouds can't offer encryption because keys are stored in the cloud. Telcos are required to make sure admin can't access secrets. Action: SecuStack has a PoC for E2E key transfer, mnaser to help see what could be upstreamed - Features needed: ability for users to provide keys or use customer barbican or other key store. Thread: http://lists.openstack.org/pipermail/openstack-dev/2018-November/136258.html
Cross-technical leadership session (OpenStack, Kata, StarlingX, Airship, Zuul) ======================================================================== - Took down the structure of how leadership positions work in each project on the etherpad, look at differences - StarlingX taking a new approach for upstreaming, New strategy: align with master, analyze what they need, and address the gaps (as opposed to pushing all the deltas up). Bug fixes still need to be brought forward, that won't change
Concurrency limits for service instance creation ================================================ - Looking for ways to test and detect changes in performance as a community. Not straightforward because test hardware must stay consistent in order to detect performance deltas, release to release. Infra can't provide such an environment - Idea: it could help to write up a doc per project with a list of the usual tunables and basic info about how to use them
Change of ownership of resources ================================ - Ignore the network piece for now, it's the most complicated. Being able to transfer everything else would solve 90% of City Network's use cases - Some ideas around having this be a keystone auth-based access granting instead of an update of project/user, but if keystone could hand user A a token for user B, that token would apply to all resources of user B's, not just the ones desired for transfer
Whatever happened with the os-chown project Dan started in Denver?
What we distilled from the forum session is that, at the heart of it, what is actually wanted is to be able to grant access to a resource owned by project A to project B, for example. It's not so much about wanting to literally change project_id/user_id from A to B. So, we asked the question, "what if project A could grant access to its resources to project B via keystone?" This could work if it is OK for project B to gain access to _all_ of project A's resources (since we currently have no way to scope access to specific resources).

For a use case where it is OK for project B to gain access to all of project A's resources, the idea of accomplishing this keystone-only could work. Doing it auth-based through keystone only would leave project_id/user_id and all dependencies intact, making the change only at the auth/project level. It is simpler and cleaner. However, for a use case where it is not OK for project B to gain access to all of project A's resources, because we lack the ability to scope access to specific resources, the os-chown approach is the only proposal we know of that can address it.

So, depending on the use cases, we might be able to explore a keystone approach. From what I gathered in the forum session, it sounded like City Network might be OK with a project-wide access grant, but Oath might need a resource-specific scoped access grant. If both of those are the case, we would find use in both a keystone access approach and the os-chown approach.
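To make the keystone-only idea concrete, a minimal sketch (not an agreed design; the endpoint and all ids below are placeholders) of granting such project-wide access via keystone's existing role assignment call:

    # Rough sketch: grant a user from "project B" a role on "project A", so that
    # user can see/manage all of project A's resources. Uses keystone's standard
    # role assignment API (PUT /v3/projects/{project}/users/{user}/roles/{role});
    # the endpoint, credentials, and ids are placeholders.
    from keystoneauth1 import identity, session

    KEYSTONE = "https://keystone.example.com/v3"  # placeholder endpoint

    auth = identity.Password(
        auth_url=KEYSTONE,
        username="admin",
        password="secret",
        project_name="admin",
        user_domain_id="default",
        project_domain_id="default",
    )
    sess = session.Session(auth=auth)

    project_a_id = "PROJECT_A_ID"   # project whose resources are being shared
    user_b_id = "USER_B_ID"         # user from project B being granted access
    role_id = "MEMBER_ROLE_ID"      # e.g. the plain 'member' role

    resp = sess.put(
        "%s/projects/%s/users/%s/roles/%s"
        % (KEYSTONE, project_a_id, user_b_id, role_id)
    )
    print(resp.status_code)  # keystone returns 204 on success

Note that this grants access to everything in project A, which is exactly the limitation described above: it only covers the cases where a project-wide grant is acceptable.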
Update on placement extraction from nova ======================================== - Upgrade step additions from integrated placement to extracted placement in TripleO and OpenStackAnsible are being worked on now - Reshaper patches for libvirt and xenapi drivers are up for review - Lab test for vGPU upgrade and reshape + new schedule for libvirt driver patch has been done already
This is news to me. Can someone provide me a link to where I can get some more information about this?
- FFU script work needs an owner. Will need to query libvirtd to get mdevs and use PlacementDirect to populate placement
Python bindings for the placement API ===================================== - Placement client code replicated in different projects: nova, blazar, neutron, cyborg. Want to commonize into python bindings lib - Consensus was that the placement bindings should go into openstacksdk and then projects will consume it from there
T series community goal discussion ================================== - Most popular goal ideas: Finish moving legacy python-*client CLIs to python-openstackclient, Deletion of project resources as discussed in forum session earlier in the week, ensure all projects use ServiceTokens when calling one another with incoming token
On 11/21/2018 2:38 PM, melanie witt wrote:
Update on placement extraction from nova ======================================== - Upgrade step additions from integrated placement to extracted placement in TripleO and OpenStackAnsible are being worked on now - Reshaper patches for libvirt and xenapi drivers are up for review - Lab test for vGPU upgrade and reshape + new schedule for libvirt driver patch has been done already
This is news to me. Can someone provide me a link to where I can get some more information about this?
It was in the original checklist from the impromptu meeting in Denver: http://lists.openstack.org/pipermail/openstack-dev/2018-September/134541.htm... -- Thanks, Matt
On 11/26/2018 11:43 AM, Matt Riedemann wrote:
On 11/21/2018 2:38 PM, melanie witt wrote:
Update on placement extraction from nova ======================================== - Upgrade step additions from integrated placement to extracted placement in TripleO and OpenStackAnsible are being worked on now - Reshaper patches for libvirt and xenapi drivers are up for review - Lab test for vGPU upgrade and reshape + new schedule for libvirt driver patch has been done already
This is news to me. Can someone provide me a link to where I can get some more information about this?
It was in the original checklist from the impromptu meeting in Denver:
http://lists.openstack.org/pipermail/openstack-dev/2018-September/134541.htm...
What was news to me was the "has been done already" part of the Lab test for vGPU upgrade. I was asking for information on where I can see that Lab test. Thanks, -jay
On 11/26/2018 10:55 AM, Jay Pipes wrote:
What was news to me was the "has been done already" part of the Lab test for vGPU upgrade.
I was asking for information on where I can see that Lab test.
Oh, heh. It's in Sylvain's brain somewhere. -- Thanks, Matt
On Mon, 2018-11-26 at 11:01 -0600, Matt Riedemann wrote:
On 11/26/2018 10:55 AM, Jay Pipes wrote:
What was news to me was the "has been done already" part of the Lab test for vGPU upgrade.
I was asking for information on where I can see that Lab test.
Oh, heh. It's in Sylvain's brain somewhere.

Sylvain posted some pastebins to IRC a while ago, sometime after Denver and before the summit -- I want to say mid to late October. I don't recall a ML post, but there may have been one. Basically, if I recall the IRC conversation and pastebin contents correctly, he deployed master, took a snapshot of the placement inventories, checked out the reshaper change, restarted the nova services, and took a snapshot of the reshaped placement inventories.
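For anyone wanting to repeat that kind of before/after check, a minimal sketch of dumping placement inventories with a keystoneauth session looks roughly like this (credentials and URLs are placeholders, and the microversion is an assumption):

    # Rough sketch: snapshot resource provider inventories from the placement
    # API before and after a reshape, so the two can be diffed.
    import json

    from keystoneauth1 import adapter, identity, session

    auth = identity.Password(
        auth_url="https://keystone.example.com/v3",  # placeholder
        username="admin",
        password="secret",
        project_name="admin",
        user_domain_id="default",
        project_domain_id="default",
    )
    placement = adapter.Adapter(
        session.Session(auth=auth),
        service_type="placement",
        default_microversion="1.30",  # assumed; any recent enough version works
    )


    def snapshot_inventories():
        """Return {provider uuid: inventories dict} for all resource providers."""
        providers = placement.get("/resource_providers").json()["resource_providers"]
        return {
            rp["uuid"]: placement.get(
                "/resource_providers/%s/inventories" % rp["uuid"]
            ).json()["inventories"]
            for rp in providers
        }


    before = snapshot_inventories()
    # ... check out the reshaper change and restart the nova services ...
    after = snapshot_inventories()
    print(json.dumps({"before": before, "after": after}, indent=2))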
On Mon, Nov 26, 2018 at 18:03, Matt Riedemann <mriedemos@gmail.com> wrote:
On 11/26/2018 10:55 AM, Jay Pipes wrote:
What was news to me was the "has been done already" part of the Lab test for vGPU upgrade.
I was asking for information on where I can see that Lab test.
Oh, heh. It's in Sylvain's brain somewhere.
Oops, sorry about the reply delay. I must have miscommunicated this when I tested it. Like Sean said, I proceeded with an existing instance, restarting the Compute service and then asking again for a vGPU flavor. I can try to find the pastebin or just test it again; it should be easy. -Sylvain
--
Thanks,
Matt
Change of ownership of resources
================================
- Ignore the network piece for now, it's the most complicated. Being able to transfer everything else would solve 90% of City Network's use cases
- Some ideas around having this be a keystone auth-based access grant instead of an update of project/user, but if keystone could hand user A a token for user B, that token would apply to all of user B's resources, not just the ones desired for transfer
Whatever happened with the os-chown project Dan started in Denver?
What we distilled from the forum session is that at the heart of it, what is actually wanted is to be able to grant access to a resource owned by project A to project B, for example. It's not so much about wanting to literally change project_id/user_id from A to B. So, we asked the question, "what if project A could grant access to its resources to project B via keystone?" This could work if it is OK for project B to gain access to _all_ of project A's resources (since we currently have no way to scope access to specific resources). For a use case where it is OK for project B to gain access to all of project A's resources, accomplishing this keystone-only could work. Doing it auth-based through keystone only would leave project_id/user_id and all dependencies intact, making the change only at the auth/project level. It is simpler and cleaner.
However, for a use case where it is not OK for project B to gain access to all of project A's resources, because we lack the ability to scope access to specific resources, the os-chown approach is the only proposal we know of that can address it.
So, depending on the use cases, we might be able to explore a keystone approach. From what I gathered in the forum session, it sounded like City Network might be OK with a project-wide access grant, but Oath might need a resource-specific scoped access grant. If those are both the case, we would find use in both a keystone access approach and the os-chown approach.
FWIW, this is not what I gathered from the discussion, and I don't see anything about that on the etherpad: https://etherpad.openstack.org/p/BER-change-ownership-of-resources
I know the self-service project-wide grant of access was brought up, but I don't recall any of the operators present saying that would actually solve their use cases (including City Network). I'm not really sure how granting one project access to all of another project's resources is anything other than a temporary solution applicable in cases where supreme trust exists.
I could be wrong, but I thought they specifically still wanted an API in each project that would forcibly transfer (i.e. actually change userid/project on) resources. Did I miss something in the hallway track afterwards?
--Dan
On Tue, 27 Nov 2018 08:32:40 -0800, Dan Smith wrote:
Change of ownership of resources
================================
- Ignore the network piece for now, it's the most complicated. Being able to transfer everything else would solve 90% of City Network's use cases
- Some ideas around having this be a keystone auth-based access grant instead of an update of project/user, but if keystone could hand user A a token for user B, that token would apply to all of user B's resources, not just the ones desired for transfer
Whatever happened with the os-chown project Dan started in Denver?
What we distilled from the forum session is that at the heart of it, what is actually wanted is to be able to grant access to a resource owned by project A to project B, for example. It's not so much about wanting to literally change project_id/user_id from A to B. So, we asked the question, "what if project A could grant access to its resources to project B via keystone?" This could work if it is OK for project B to gain access to _all_ of project A's resources (since we currently have no way to scope access to specific resources). For a use case where it is OK for project B to gain access to all of project A's resources, accomplishing this keystone-only could work. Doing it auth-based through keystone only would leave project_id/user_id and all dependencies intact, making the change only at the auth/project level. It is simpler and cleaner.
However, for a use case where it is not OK for project B to gain access to all of project A's resources, because we lack the ability to scope access to specific resources, the os-chown approach is the only proposal we know of that can address it.
So, depending on the use cases, we might be able to explore a keystone approach. From what I gathered in the forum session, it sounded like City Network might be OK with a project-wide access grant, but Oath might need a resource-specific scoped access grant. If those are both the case, we would find use in both a keystone access approach and the os-chown approach.
FWIW, this is not what I gathered from the discussion, and I don't see anything about that on the etherpad:
https://etherpad.openstack.org/p/BER-change-ownership-of-resources
I know the self-service project-wide grant of access was brought up, but I don't recall any of the operators present saying that would actually solve their use cases (including City Network). I'm not really sure how granting one project access to all of another project's resources is anything other than a temporary solution applicable in cases where supreme trust exists.
I could be wrong, but I thought they specifically still wanted an API in each project that would forcibly transfer (i.e. actually change userid/project on) resources. Did I miss something in the hallway track afterwards?
No, you didn't miss additional discussion after the session. I realize now from your and Tobias's replies that I must have misunderstood the access grant part of the discussion. What I had interpreted when I brought up the idea of a keystone-based access grant was that Adrian thought it could solve their ownership transfer use case (and it's possible I misunderstood his response as well). And I don't recall Tobias saying anything in objection to the idea, so I wrongly thought it could work for his use case too. I apologize for my misunderstanding and for muddying the waters for everyone on this. Correcting myself: what is really wanted is to literally change project_id and user_id for resources; allowing the addition of another owner for a project's resources is not sufficient. Best, -melanie
On 2018-11-21 21:38, melanie witt wrote:
Change of ownership of resources
================================
- Ignore the network piece for now, it's the most complicated. Being able to transfer everything else would solve 90% of City Network's use cases
- Some ideas around having this be a keystone auth-based access grant instead of an update of project/user, but if keystone could hand user A a token for user B, that token would apply to all of user B's resources, not just the ones desired for transfer
Whatever happened with the os-chown project Dan started in Denver?
What we distilled from the forum session is that at the heart of it, what is actually wanted is to be able to grant access to a resource owned by project A to project B, for example. It's not so much about wanting to literally change project_id/user_id from A to B. So, we asked the question, "what if project A could grant access to its resources to project B via keystone?" This could work if it is OK for project B to gain access to _all_ of project A's resources (since we currently have no way to scope access to specific resources). For a use case where it is OK for project B to gain access to all of project A's resources, accomplishing this keystone-only could work. Doing it auth-based through keystone only would leave project_id/user_id and all dependencies intact, making the change only at the auth/project level. It is simpler and cleaner.
However, for a use case where it is not OK for project B to gain access to all of project A's resources, because we lack the ability to scope access to specific resources, the os-chown approach is the only proposal we know of that can address it.
So, depending on the use cases, we might be able to explore a keystone approach. From what I gathered in the forum session, it sounded like City Network might be OK with a project-wide access grant, but Oath might need a resource-specific scoped access grant. If those are both the case, we would find use in both a keystone access approach and the os-chown approach.
If you and others understood me that way, I might have expressed myself in the wrong way. For us, the use case we see is actually "transfer resources between tenants". I would say that basically all requests are for VMs and volumes, so that would cover most of it. As formulated in the forum session description, "as seamless as possible" can mean many different things to people, but an offline approach is totally fine for my use case. What we discussed (or touched on) during the session was to use a similar approach as for glance image access, but in this case use that invite/accept approach for the actual ownership transfer (to be clear - not to allow someone else access to my resources).

Tobias Rydberg
Senior Developer
Twitter & IRC: tobberydberg
www.citynetwork.eu | www.citycloud.com
INNOVATION THROUGH OPEN IT INFRASTRUCTURE
ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED
On 11/19/2018 3:17 AM, melanie witt wrote:
- Not directly related to the session, but CERN (hallway track) and NeCTAR (dev ML) have both given feedback and asked that the policy-driven idea for handling quota for down cells be avoided. Revived the "propose counting quota in placement" spec to see if there's any way forward here
Should this be abandoned then? https://review.openstack.org/#/c/614783/
Since there is no microversion impact to that change, it could be added separately as a bug fix for the down cell case if other operators want that functionality. But maybe we don't know what other operators want since no one else is at multi-cell cells v2 yet.
--
Thanks,
Matt
On Mon, Nov 19, 2018 at 2:39 PM Matt Riedemann <mriedemos@gmail.com> wrote:
On 11/19/2018 3:17 AM, melanie witt wrote:
- Not directly related to the session, but CERN (hallway track) and NeCTAR (dev ML) have both given feedback and asked that the policy-driven idea for handling quota for down cells be avoided. Revived the "propose counting quota in placement" spec to see if there's any way forward here
Should this be abandoned then?
https://review.openstack.org/#/c/614783/
Since there is no microversion impact to that change, it could be added separately as a bug fix for the down cell case if other operators want that functionality. But maybe we don't know what other operators want since no one else is at multi-cell cells v2 yet.
I thought the policy check was needed as a workaround until "propose counting quota in placement" has been implemented, and that is what the "handling down cell" spec also proposed, unless the former spec is implemented within this cycle, in which case we do not need the policy check.
--
Regards,
Surya.
On Mon, 19 Nov 2018 17:19:22 +0100, Surya Seetharaman wrote:
On Mon, Nov 19, 2018 at 2:39 PM Matt Riedemann <mriedemos@gmail.com <mailto:mriedemos@gmail.com>> wrote:
On 11/19/2018 3:17 AM, melanie witt wrote:
- Not directly related to the session, but CERN (hallway track) and NeCTAR (dev ML) have both given feedback and asked that the policy-driven idea for handling quota for down cells be avoided. Revived the "propose counting quota in placement" spec to see if there's any way forward here
Should this be abandoned then?
https://review.openstack.org/#/c/614783/
Since there is no microversion impact to that change, it could be added separately as a bug fix for the down cell case if other operators want that functionality. But maybe we don't know what other operators want since no one else is at multi-cell cells v2 yet.
I thought the policy check was needed until the "propose counting quota in placement" has been implemented as a workaround and that is what the "handling down cell" spec also proposed, unless the former spec would be implemented within this cycle in which case we do not need the policy check.
Right, I don't think that anyone _wants_ the policy check approach. That was just the workaround, last resort idea we had for dealing with down cells in the absence of being able to count quota usage from placement. The operators we've discussed with (CERN, NeCTAR, Oath) would like quota counting not to depend on cell databases, if possible. But they are understanding and will accept the policy-driven workaround if we can't move forward with counting quota usage from placement. If we can get agreement on the count quota usage from placement spec (I have updated it with new proposed details), then we should abandon the policy-driven behavior patch. I am eager to find out what everyone thinks of the latest proposal. Cheers, -melanie
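For anyone wondering what "counting quota usage from placement" would look like mechanically: placement can already report per-project usage (GET /usages?project_id=..., microversion 1.9 and later). A minimal sketch, assuming an existing keystoneauth session, is below; how those resource classes map onto the cores/ram quota limits is exactly what the spec still has to settle.

    from keystoneauth1 import session as ks_session

    def placement_usages(sess: ks_session.Session, project_id: str) -> dict:
        """Return per-resource-class usage for a project from placement."""
        resp = sess.get('/usages',
                        endpoint_filter={'service_type': 'placement'},
                        headers={'OpenStack-API-Version': 'placement 1.9'},
                        params={'project_id': project_id})
        # Example payload: {"usages": {"VCPU": 4, "MEMORY_MB": 8192, "DISK_GB": 80}}
        return resp.json()['usages']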
participants (10)
- Dan Smith
- Doug Hellmann
- Jay Pipes
- Matt Riedemann
- melanie witt
- Sean Mooney
- Surya Seetharaman
- Sylvain Bauza
- Tobias Rydberg
- Zane Bitter