[tc][all] Technical Committee 2023.1 Antelope cycle Virtual PTG discussions summary
gmann at ghanshyammann.com
Sun Oct 23 00:58:40 UTC 2022
I am summarizing the Technical Committee discussion that happened in the 2023.1
Antelope cycle PTG last week.
Also, in case anyone wants to search the project etherpads later after ptgbot cleans
them up, I have added them to the wiki.
* Attendance: ~30 on Monday, ~25 on Thursday, ~20 on Friday
Improvement in project governance:
We continued the discussion from Monday and discussed a few high-level criteria which
could be used to monitor inactive projects. Along with contributor activity, event and
meeting occurrence are among them, but not the only ones. JayF will work on listing the
exact criteria/things the TC can monitor to find and help inactive projects.
* 2021 User Survey result analysis:
We started with the pending review of the 2021 survey analysis. There was no 2020
analysis, so the comparison was done against 2019. Overall, the trends in the survey were
positive. We discussed adding a question about UI participation: especially, if users
need new features to be implemented in the UI, what prevents them from helping and contributing?
* 2022 User Survey result:
knikolla and I will be working on the analysis of this survey's results. jungleboyj will help
document the process and tips for analyzing the survey xls, which is not an easy task.
Thanks to jungleboyj for doing the survey analysis and helping with the 2022 survey too.
* 2023 User Survey question review:
While doing the 2021 survey analysis, we discussed the below changes for 2023:
** Remove "Other ways users participate:"
** Add/modify the below questions:
1. How are users consuming OpenStack? From PyPI packages? From RHOSP? etc.
2. How are users interacting with OpenStack? Through Horizon? CLI? Skyline? An
internally developed tool? etc.
3. Adding a sub-question about UI development help in "To which projects does your
organization contribute maintenance resources such as patches for bugs and reviews
on master or stable branches?"
4. Asking what OSS/software/services are used in your OpenStack-provisioned cloud.
As the next step, we will work on the exact wording for the above questions and send it
to Allison before the end of October.
Next steps for OSC work
We started the discussion on whether sdk/osc should continue supporting the old interfaces even
if we are not able to test them properly. Anyone running a very old cloud should be able to use
the latest sdk/osc. We agreed, and Artem mentioned being more careful about removing things.
Next, we discussed where we stand on the OSC work and direction. On the project side, there is
good progress from nova, neutron, and manila (and possibly a few more), but at the same time, many
other projects need work on fixing the feature gaps. Glance is also in agreement to work towards OSC.
There was a question on when and how we should work towards deprecating/removing the
projects' python clients. The best way is to start implementing new features in OSC and try
our best to fill the old feature gaps. Based on the current state, we agreed to start this work as
a community-wide goal, and Artem volunteered to help here.
** Propose this as a community-wide goal: "focus on fixing the feature gaps in osc, not removing
the pythonclient, which can be another goal"
** Look at devstack's use of OSC to see if we can enable token caching and perhaps use a smaller
venv for performance
Consistent and Secure Default RBAC
As a good number of the services are ready with the project personas, we will be testing them in
the integrated gate (tempest and tempest plugin jobs), and based on the results we will enable
scope enforcement and the new defaults by default in this cycle. Also, services can start
implementing phase 2 of the goal, which is the service-to-service role.
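As a rough sketch of what this switch looks like on a service (the exact rollout steps per service may differ; these two options are the standard oslo.policy flags involved):

```
[oslo_policy]
# Reject tokens whose scope does not match the scope the policy expects.
enforce_scope = True
# Use the new secure RBAC default policies instead of the deprecated ones.
enforce_new_defaults = True
```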
On Horizon's question of moving to the new defaults: because of the explicit policy files (with
defaults) on the Horizon side, operators will not be able to just switch the flag until they change
the policy file. For a better migration path, and to get one more cycle of testing the new defaults
by default on the service side, it is better for Horizon to move to the new defaults in the next cycle.
FIPS:
Ade Lee provided the current status of the FIPS goal, which is making good progress. Work is going
on to have an Ubuntu-based FIPS-enabled job, and that will help achieve the goal of having a
voting job in the check/gate pipeline.
Migrate CI/CD jobs to Ubuntu 22.04 (Jammy Jellyfish)
Projects are testing their jobs on Jammy, but not all of them; we encourage projects to start
testing their jobs before the deadline, which is milestone 1 (Nov 18), and to fix any issues.
One known issue is the Ceph job failing on Jammy, as there are no Jammy upstream packages. There
are packages in UCA and Ubuntu proper that could be used instead.
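As an illustrative sketch (the job name here is hypothetical; the nodeset is the Jammy one provided by devstack), moving a devstack-based job usually means pointing it at a Jammy nodeset in the project's Zuul config:

```
# Hypothetical project job switched to the Jammy nodeset
- job:
    name: myproject-tempest-jammy
    parent: devstack-tempest
    nodeset: openstack-single-node-jammy
```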
The community-wide goal process in a world with few contributors
This topic is about how we can improve community-wide goal progress and productivity while, at the
same time, reducing the overhead on projects. Also, to highlight that the champion of a goal is not
required to do all the work; how much to take on is up to the champion. knikolla will work on the
goal documentation to try to keep goal overhead to a minimum. We will be more careful with goal
selection to make sure that if any work can be done without a community-wide goal, we proceed that way.
knikolla: look for opportunities to reword the community goal document; mention striving to keep
goals to a minimum to reduce overhead on teams, and try to build consensus first.
Pop-Up Team checks
We have two active popup teams: 1. Image Encryption, 2. Policy (RBAC). Both have pending work to
finish, so they will continue in this cycle as well.
Support policy for 32-bit platforms support
Zigo is testing OpenStack on 32-bit and has filed bugs. As we do not test it in the upstream CI/CD,
we will not be committing to its support, but it is completely OK to fix the bugs or add skips in tests.
Thanks to zigo for testing and reporting the bugs.
Fixing docs build issues with recent Sphinx
Sphinx has been capped at 4.5.0 in upper constraints for a long time, and the question is whether
we should move our docs to 6.0 or not. Moving to 6.0 needs more work in openstackdocstheme. Until
we have someone to fix things, it is OK to continue on Sphinx 4.5.0.
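For reference, such a cap is just a pinned line in upper-constraints.txt in the openstack/requirements repo (illustrative excerpt):

```
# openstack/requirements: upper-constraints.txt (excerpt)
Sphinx===4.5.0
```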
Improvements in the elections process:
We had a good amount of discussion on elections and many ideas for improving them in the future,
including discussing the k8s election process with k8s steering committee members. We also discussed
the TC Chair election process, to record the nominations in a better way. This is still not concluded,
and we will continue the discussion in the TC. Overall it was a productive discussion, and we came up
with the below:
1. Extend the nomination and voting periods to two weeks, and also communicate the election well in
advance.
2. Along with the existing projects/SIG repos, add a few more related repos (governance, etc.) to the
election tooling to count the electorate.
3. Add to the election process: a call for ACs on openstack-discuss or any other ML before the deadline.
4. Appoint a TC liaison as a point of contact to track the election dates more carefully; sending/adding
an ICAL for the election tasks would be a good idea.
5. Update the TC charter to mention the election period and deadlines explicitly.
Cross-community sessions with k8s steering committee team:
We invited Kubernetes Steering Committee members to TC sessions. Tim and Christoph joined the
sessions. This is a great way to collaborate between the two communities. Following the
introductions from both sides, we discussed various topics and shared the processes, challenges,
and feedback from both sides: the election process, contributor recruiting, and how to engage
recruits as long-term contributors. Having part-time or non-corporate contributors is still a
challenge for both communities. We also discussed the operator engagement challenges OpenStack is
facing. On the Kubernetes side, the "operator" picture is a bit more complicated: they have app dev,
cluster, app, and infra operators. Similar to the role the OpenInfra Foundation plays for the
OpenStack community, the CNCF Foundation plays an important role for them in connecting operators
with the community as much as possible.
Discuss and clarify the supported upgrade-path testing in PTI:
To provide a better upgrade path, we decided to also test the old distro version whenever we
bump the distro version in our CI/CD. Below are the details of the upgrade testing we will follow:
* Support two distro versions whenever we bump to a new distro version in any release (this applies
only to the release where the distro version is bumped; after that, we can go back to
single-distro-version testing):
** Run a single tempest job in the project gate on the old distro version, not all jobs.
** Add the previous python version's unit test job to the new release testing template.
* For non-SLURP releases, we will try not to change the testing runtime unless it is very much
required due to the EOL of the versions we are using in testing.
* If a project has to add a feature that needs a new version of a dependency which is not supported
on the old distro version, they need to be explicit about it and communicate it well.
I will document the above in PTI.
Guidelines for using the OpenStack release version/name and project version in 1. releasenotes 2. Documentation
In Zed cycle, TC passed a resolution and also prepared the guidelines on using the release number as
a primary identifier. During the Nova PTG, there was a question on what the recommendation is for
using the OpenStack version and package version (say Nova 27.0.0) in project documentation, release
notes, etc. After discussing the multiple options listed in the etherpad, we agreed to go with the below:
* OpenStack <release version> (<project> <project version>)
Example: OpenStack 2023.1 (Nova 27.0.0)
I will add this to the release identifier page so that all projects can use it consistently.
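As a trivial illustration of the agreed format (this helper is hypothetical, not part of any project):

```python
def release_identifier(release: str, project: str, version: str) -> str:
    """Format the agreed identifier: OpenStack <release> (<Project> <version>)."""
    return f"OpenStack {release} ({project} {version})"

print(release_identifier("2023.1", "Nova", "27.0.0"))
# prints: OpenStack 2023.1 (Nova 27.0.0)
```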
Discussion on projects (like neutron, ceilometer) in Upper Constraints:
Having projects in u-c makes it difficult for users to have a consistent deployment of
those services. However, we do not recommend using u-c in production, and this can be
explicitly mentioned in the requirements and project-team-guide documents.
tonyb will document the upper constraints usage expectations (especially regarding production
usage) in the requirements document as well as in the project-team-guide.
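For context, upper constraints are the coordinated set of version pins used in upstream testing; a typical development/CI install looks roughly like the below (illustrative command; the per-series constraints URL is published by the releases site):

```
# Install a project's requirements pinned to the tested upper constraints
pip install -c https://releases.openstack.org/constraints/upper/2023.1 -r requirements.txt
```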
In the end, we discussed the Zed cycle retrospective.
* What went well?
** Good amount of work done in the Zed cycle
** New TC members
** Good participation in meetings, especially the video calls
** TC & community engagement (leaders interaction) improving
*** Having the i18n SIG team there helped to move the i18n SIG work forward
* What to improve?
** Explicitly call out teams/members with a courtesy ping for future PTGs if there is any related
discussion. We do send the agenda to the ML in advance, but there is no harm in a courtesy ping as well.
Meeting time check:
TC weekly video calls are more productive compared to text meetings, so we will do two video calls
a month. We will also start a poll to select the meeting time.
2023.1 cycle TC Tracker
I prepared the TC tracker for 2023.1 and listed all the actionable work items that came up during
the PTG discussions. This will be helpful for tracking the work items.
Thank you for reading the summary (or I should say, the detailed summary :)). Have a nice weekend, everyone.