openstack-discuss search results for query "#eventlet-removal"
openstack-discuss@lists.openstack.org - 186 messages
[tc][all] OpenStack Technical Committee Weekly Summary and Meeting Agenda (2025.2/R-18)
by Goutham Pacha Ravi
Hello Stackers,
We're 18 weeks away from the release date for OpenStack 2025.2
"Flamingo" [1]. Next week is the deadline for "cycle-trailing" [2]
projects to tag their 2025.1 "Epoxy" deliverables. Elsewhere, service
project teams are busy wrapping up design specifications for features
expected to be implemented in this release cycle. A call to action
regarding the cross-community goal on "eventlet removal" was made to
this mailing list [3]. Please join the #openstack-eventlet-removal
channel on OFTC and participate in the effort.
Several OpenStack governance changes are currently underway. A major
proposal among them is a transition [4] from the Contributor License
Agreement (CLA) [5] to the Developer Certificate of Origin [6]. This
change will affect every OpenStack contributor. The OpenStack
Technical Committee is working with the OpenInfra Foundation and the
OpenDev Infrastructure teams to enforce DCO compliance starting
2025-07-01. Please take some time to consider its implications and
provide your opinions on the TC resolution [4]. Project maintainers
are not expected to enforce DCO compliance today. If you spot a
"Signed-off-by" line in a commit message, there's a good chance
reviewers have simply looked past it, as it hasn't been required so
far. Now is a good time to review what may be necessary and to
prepare for the upcoming change [7].
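For contributors getting ready for the switch, the DCO sign-off is the standard Git mechanism. A generic illustration (the names and message below are made up, not OpenStack-specific guidance):

```shell
set -e
# Work in a throwaway repository for the demonstration.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git config user.name "Ada Lovelace"
git config user.email "ada@example.org"

echo "demo" > notes.txt
git add notes.txt

# -s / --signoff appends a "Signed-off-by:" trailer built from your
# configured identity; that trailer is what DCO enforcement checks for.
git commit -q -s -m "Fix typo in release notes"

# The last commit message now ends with the trailer:
git log -1 --format=%B | grep "Signed-off-by: Ada Lovelace <ada@example.org>"
```

The trailer asserts the Developer Certificate of Origin [6]; unlike the CLA, no separate agreement needs to be signed in Gerrit.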
=== Weekly Meeting ===
The weekly IRC meeting of the OpenStack Technical Committee occurred
on 2025-05-20 [8]. An action item regarding relinquishing the
"quantum" name on PyPI was discussed. The resolution in this regard
was acknowledged by the requester and merged shortly after. The
OpenDev infra administrators deleted OpenStack artifacts and handed
over the project namespace. The majority of the meeting later focused
on the transition from CLA (Contributor License Agreement) to DCO
(Developer Certificate of Origin). This move is part of a broader
transition into the Linux Foundation, with an effective date of June
1, 2025. The TC needed to reconfirm its desire to move to DCO,
preferably within the next two weeks, as the previous resolution on
this topic was from 2014. A new resolution confirming the board's
recommendation was deemed helpful for community feedback. We discussed
many aspects of this transition—a key concern being the smoothness of
the transition for contributors. While the technical implementation
(Gerrit enforcing Signed-off-by in commit messages and turning off CLA
enforcement) is relatively simple, the human and organizational impact
is not trivial. The short timeline for the switchover was a major
point of contention, as downstream organizations may need to re-engage
legal teams and update internal contribution policies. The possibility
of having multiple CLAs active in Gerrit (allowing existing
contributors to continue under the old CLA while new contributors use
a new CLA for the new entity) was raised as a potential solution to
mitigate the immediate impact of the short deadline. However, mixing
CLA and DCO enforcement was generally seen as undesirable and hard to
implement. Post-meeting, the resolution was proposed [4], and the
timeline for implementation has been pushed out by a month to allow
the community time to prepare and react accordingly. Please expect
more communication regarding this in the next few days.
The next meeting of the OpenStack TC is on 2025-05-27 at 1700 UTC.
This meeting will be held over IRC on the #openstack-tc channel on
OFTC. Please find the agenda and other details on the meeting's wiki
page [9]. I hope you'll be able to join us there!
=== Governance Proposals ===
==== Merged ====
- [resolution] Relinquish "quantum" project on PyPI |
https://review.opendev.org/c/openstack/governance/+/949783
==== Open for Review ====
- Require declaration of affiliation from TC Candidates |
https://review.opendev.org/c/openstack/governance/+/949432
- [resolution] Replace CLA with DCO for all contributions |
https://review.opendev.org/c/openstack/governance/+/950463
- Clarify actions when no elections are required |
https://review.opendev.org/c/openstack/governance/+/949431
- Fix outdated info on the tc-guide |
https://review.opendev.org/c/openstack/governance/+/950446
=== Upcoming Events ===
- 2025-06-03: 15 ans d'OpenStack - OpenInfra UG, Paris:
https://www.meetup.com/openstack-france/events/307492285
- 2025-06-05: OpenStack 15 ans! - OpenInfra UG, Rennes:
https://www.meetup.com/openstack-rennes/events/306903998
- 2025-06-28: OpenInfra+Cloud Native Day, Vietnam:
https://www.vietopeninfra.org/void2025
Thank you very much for reading!
On behalf of the OpenStack TC,
Goutham Pacha Ravi (gouthamr)
OpenStack TC Chair
[1] 2025.2 "Flamingo" Release Schedule:
https://releases.openstack.org/flamingo/schedule.html
[2] "cycle-trailing":
https://releases.openstack.org/reference/release_models.html#cycle-trailing
[3] "eventlet-removal" status:
https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack…
[4] TC resolution to replace CLA with DCO for all contributions:
https://review.opendev.org/c/openstack/governance/+/950463
[5] OpenStack CLA:
https://docs.openstack.org/contributors/common/setup-gerrit.html#individual…
[6] Developer Certificate of Origin: https://developercertificate.org/
[7] DCO documentation draft:
https://review.opendev.org/c/openstack/contributor-guide/+/950839
[8] TC Meeting IRC Log 2025-05-20:
https://meetings.opendev.org/meetings/tc/2025/tc.2025-05-20-17.00.log.html
[9] TC Meeting Agenda, 2025-05-27:
https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting
Re: [tc][all][security] Supporting Post-Quantum Cryptography in OpenStack code (all projects)
by Jean-Philippe Jung
Hi,
Thanks for the comments, and let me try to group some & reply ahead of
the PTG session.
1- Quantum computers are no threat, they are underpowered, so what’s
the risk & why should we do anything now?
* The issue is time. The day a quantum computer is powerful enough to
break traditional cryptography (known as Q-day), it will be too late:
OpenStack will lie naked and vulnerable.
* We can argue about dates, but my current opinion is that, to be on
the safe side, any software product should implement PQC by 2029.
* We have a large code base, and we need to give the community time to
review & adjust it, and then OpenStack operators time to implement
quantum-safe cryptography.
* There is no snake oil or panic-driven agenda here: either we protect
access to OpenStack infrastructure & between services, preferably by
2028, or we don't. We have time, but we should start the work now.
2- It’s a problem for Operating System vendors, we just consume their
cryptography.
* I wish that were true. But the initial analysis I made shows that we
have libraries that perform cryptographic operations themselves; maybe
these libraries end up properly calling an OS cryptographic module,
maybe not. I know of paramiko, which does not call any OS layer but
carries its own copy of OpenSSL code. We cannot assume OpenStack has
nothing to do until we analyse our cryptographic usage.
* I totally support the idea of using OS cryptography as much as
possible, because we offload the maintenance problem to OS vendors.
* Also, the fact that new cryptographic algorithms have been finalized
(key encapsulation: ML-KEM) or nearly finalized (digital signatures:
ML-DSA) does not mean they are available in all the protocols that use
them (IPsec, Kerberos, DNSSEC…) in operating systems.
3- Some of the libraries listed might no longer be in use.
* The code should be removed from OpenStack. We have customers doing
Static Application Security Testing (SAST) and Dynamic Application
Security Testing (DAST): these will flag those unsafe modules.
* Other side of that point: some of these libraries are still used. Do
they rely on an operating system cryptography, or do they pull their own
cryptographic call? Are they maintained? Should we reduce the amount of
cryptographic modules in OpenStack?
4- There is no plan proposed, what’s next?
* Initial email was to express my concern about supporting PQC in
OpenStack, understanding if my concern was shared, and raise awareness
on this topic right ahead of the PTG.
* The first step in solving a problem is knowing there is one. I'd
start with a per-project effort, beginning with a simple question for
each project: "Does your project do anything in relation to keys,
encryption, or interaction with any encrypted data, either at rest or
in transit?"
* Next would be to use those results to plan for protecting OpenStack
against store-now, decrypt-later attacks: protecting key exchanges by
relying on ML-KEM (well, hybrid X25519MLKEM768 to start with),
ensuring symmetric keys are long enough (256 bits minimum), and then
dealing later with digital-signature code (if any).
5- The goal is premature.
* I consider this a community goal because all OpenStack components
must achieve support in roughly the same time frame. I am not saying
it should take precedence over other critical work (e.g., eventlet
removal), but I think now is the time to scope the work and plan to
meet the 2027/2028 deadline.
* OpenStack customers planning to run it well into the 2030s are
asking about the plans for PQC support, and new customers considering
OpenStack are asking about them too. Either we have an answer and a
roadmap, or at some point OpenStack will not be part of the discussion
(ask me how I know :-)).
Regards,
JP
[manila] 2025.1 Epoxy PTG summary
by Carlos Silva
Hello everyone! Thank you for the great participation at the PTG last week.
We've had great discussions and a good turnout. The recordings for the
sessions are available on YouTube [0]. If you would like to check on the
notes, please take a look at the PTG etherpad [1].
*2024.2 Dalmatian Retrospective*
==========================
- New core reviewers in the manila group were impactful in reviews, we
should continue actively working on maintaining/growing the core reviewer
team.
- We had the mid-cycle and managed to combine it with our well-known
collaborative review sessions around feature proposal freeze. This had
a good impact on raising awareness of the changes being proposed, as
well as on prioritizing the reviews.
- Great contributions ranging from new third party drivers to successful
internships on manila-ui, bandit and the ongoing OpenAPI internships.
*Action items:*
- Carlos (carloss) will work with the manila team to help people gain
context on the bug czar role and work with the team to rotate it.
- Vida Haririan (vhari) will jot down the details of the Bug Czar role
- Follow the discussions on teams joining the VMT and get Manila included
too.
- Spread the word on the removal of the manila client and switch to
OpenStackClient
*Share backup enhancements*
=========================
- Out of place restore isn't supported currently. We have agreed that this
is a good use case and that a design specification should be proposed to
document this.
- DataManager / BackupDriver - forcing the backup process to go through the
DataManager service is supported through a config option, but Manila is
currently not honoring it. We agreed that this is an issue in the code, and
we will review the proposed change [2] to make the data manager honor this
config.
- DataManager to allow for a backup driver to provide reports on API call
progress: Currently, the data manager fetches the progress of a backup
using a generic get progress call, but it is failing with the generic
backup driver. We suggested that this should be fixed in the base driver.
- Context for Backup API calls: currently, only objects representing a
Share and Backup are passed to the backup driver. The request context
should also be forwarded in these calls. The backup driver interface can be
changed for this, but we should be mindful of out of tree drivers that
could break.
*Action items:*
- Zach Goggins (zachgoggins) will look into:
- Proposing a spec for the share backup out of place restore.
- Updating the backup driver interface and adding context to the methods
that need it.
- Updating the backup driver interface and adding the abstract
methods/capabilities that will help with the `get_restore_progress` and
`get_backup_progress` methods.
- The manila team will provide feedback on [2]
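The backup-driver changes above (passing the request context and adding progress-reporting hooks) could be sketched roughly like this. The method names `get_backup_progress` and `get_restore_progress` come from the notes; the class shape is hypothetical, not Manila's actual interface:

```python
import abc

class BackupDriverBase(abc.ABC):
    """Hypothetical sketch of a backup driver interface that accepts a
    request context and exposes progress reporting, per the PTG notes."""

    @abc.abstractmethod
    def backup(self, context, share, backup):
        """Create a backup of a share; context carries request identity."""

    @abc.abstractmethod
    def get_backup_progress(self, context, backup):
        """Return a 0-100 progress figure for an in-flight backup."""

    @abc.abstractmethod
    def get_restore_progress(self, context, backup):
        """Return a 0-100 progress figure for an in-flight restore."""

class DemoDriver(BackupDriverBase):
    """Toy in-tree driver showing the shape of the contract."""
    def backup(self, context, share, backup):
        return {"id": backup["id"], "status": "creating"}
    def get_backup_progress(self, context, backup):
        return 100
    def get_restore_progress(self, context, backup):
        return 100

driver = DemoDriver()
print(driver.get_backup_progress(None, {"id": "b1"}))  # -> 100
```

Making the progress methods abstract is exactly the kind of interface change that can break out-of-tree drivers, which is why the notes flag that concern.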
*All things CephFS*
===============
*Updates from previous cycles*
------------------------------------------
*State of the Standalone NFS Ganesha protocol helper:*
- We added a deprecation warning at the end of the previous SLURP
release, and we are planning to complete the removal during the
2025.1/Epoxy release. There were no objections to this so far at the PTG.
When this is removed, CephFS-via-NFS will only work with cephadm deployed
ceph-nfs clusters.
*Testing and stabilization:*
- devstack-plugin-ceph has been refactored to deploy a standalone
NFS-Ganesha service with a ceph orch deployed cluster. We also dropped
support for package-based and container-based installation of ceph. cephadm
is used to deploy/orchestrate ceph.
- Bumped Ceph version to Reef in Antelope, Bobcat, Caracal, Dalmatian,
as well as started testing with Squid.
- There are some failures on stable branches jobs which are being
triaged and fixed.
*Manage/unmanage:*
- Implementation completed in Dalmatian and the documentation has been
updated. We are currently working to enable the tests on CI permanently, as
well as doing some small refactors to the CI jobs.
*Ensure shares:*
- Merged in Dalmatian, but testing is still challenging: running the
tests means that the service would temporarily have a different status
and shares within the backend would have their status changed, which
is harmful for test concurrency.
*Preferred export locations and export location metadata:*
- The core feature merged, but we are still working to get the newly
implemented tests passing and merged.
*Plans for 2025.1/Epoxy*
--------------------------------
- NFSv3 + testing: we are looking into enabling NFSv3 support as soon as
the patch is merged in Ceph. We agreed that we should enable the tests
within manila-tempest-plugin and make any necessary changes to the tests
structure, so we can ensure that we are testing some scenarios with both
NFSv3 and NFSv4.
- We will start to investigate support for SMB/CIFS shares and look at
the necessary changes for setting up devstack and testing.
*Action items:*
- Carlos (carloss) will write an email to the openstack-discuss mailing
list announcing the removal of the deprecated ganesha helper
- Carlos will pursue the manage/unmanage testing patches to have tests
enabled in the CephFS jobs during Epoxy.
- Carlos will look into approaches to test ensure shares APIs.
- Ashley (ashrod98) will continue working on the export location metadata
tempest changes and drive them to completion.
- The manila team will look into updating manila-tempest-plugin tests and
enabling NFSv3 tests in the Ceph NFS jobs
- Goutham (gouthamr) will be submitting a prototype of the SMB/CIFS
integration
*Tech Debt*
========
*Eventlet removal*
----------------------
Our main concerns:
- Performance should not be degraded with the default configuration
when we switch.
- Synchronous calls should not take a big hit and become asynchronous.
- Impact to the SSH pool (used by many drivers) should be minimal.
*Action items for 2025.1 Epoxy:*
- Tackle the low-hanging-fruit changes.
- Participate in the pop-up team discussions.
- Remove the affected console scripts in Manila.
- Work on performance tests to understand the impact on the SSH pool
that is used by some drivers.
- Look into enhancing our rally/browbeat test coverage.
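To illustrate the SSH-pool concern: after eventlet, a pool like this is typically backed by native threads, where a blocked borrower parks an OS thread rather than a green thread. A minimal, generic sketch (not Manila's actual pool; the `connect` callable stands in for an SSH client factory):

```python
import queue
import threading

class ConnectionPool:
    """Tiny thread-safe pool: callers borrow a connection, use it, and
    return it. queue.Queue provides the blocking and the locking."""

    def __init__(self, connect, size=4):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(connect())

    def run(self, func):
        conn = self._pool.get()          # blocks if the pool is drained
        try:
            return func(conn)
        finally:
            self._pool.put(conn)         # always return the connection

# Stand-in for an SSH client factory; real code would open a session.
counter = iter(range(1000))
pool = ConnectionPool(connect=lambda: {"conn_id": next(counter)}, size=2)

results = []
def worker():
    results.append(pool.run(lambda c: c["conn_id"]))

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(set(results)))  # -> [0, 1]: only the 2 pooled connections
```

Measuring how such a pool behaves under driver-like concurrency is essentially what the performance-test action item above is about.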
*CI and testing images*
-------------------------------
We started working on the migration of the CI to Ubuntu 24.04 in all of the
manila repositories (manila-image-elements, python-manilaclient, manila-ui,
manila, manila-specs).
Currently, the Ceph job is broken [3].
*Action items:*
- We should clean up our CI job variants, as they have a lot of
workarounds and we can start moving away from them.
*Stable branches*
----------------------
We currently have 5 "unmaintained" branches, so we should be looking at
sunsetting them.
*Action items:*
- Carlos (carloss) will start the conversation for the transition of some
of these branches in the openstack-discuss mailing list.
*Allowing force delete of a share network subnet*
=======================================
We can currently add subnets (which translates to adding new network
interfaces) to a share server, but we can't remove them. This proposal
adds that removal capability, making it possible to detach a network
interface from a share server.
We agreed that:
- This is a good use case and something that can be enhanced.
- The enhancement should add a force-delete API.
- We should not allow the last subnet to be deleted, otherwise the shares
won't have an export path.
- A bug should be filed for a tangential issue: the NetApp driver is
using "neutron_net_id" (and possibly "neutron_subnet_id") to name
resources on the backend (ipspaces, broadcast domains) and possibly
for concurrency control / locks.
*Action Items:*
- sylvanld will look into proposing a spec to document this behavior
*NetApp Driver: Enforcing lifs limit per HA pair*
=====================================
- NetApp ONTAP storage has a limit on network interfaces per node in
an HA pair. If the sum of allocated network interfaces across the two
nodes of the HA pair exceeds the limit of a single node, the failover
operation is compromised and will fail.
- NetApp maintainers would like to fix this issue, and we agreed that:
- The fix should be as dynamic as possible, not relying on users/admin
input or configuration.
- The ONTAP driver must look up all of the interfaces already created
and allow/deny the request in case it would compromise the failover.
- The NetApp ONTAP driver should keep an updated capability with the
maximum supported number of network interfaces, and possibly the
number of network interfaces allocated at the moment.
*NetApp Driver: Implement Certificate based authentication*
================================================
- The NetApp ONTAP driver currently handles only user/password
authentication, but in an environment where the password must change
quarterly, this means updating local.conf at least every three months.
This enhancement proposes also adding support for certificate-based
authentication.
- We agreed that this is something that is going to be important for
operators and will allow them to add their certificates with a longer
expiration date, avoiding the disruptions caused by needing to update the
user/password.
*Manage Share affinity relationships by annotation/label*
=============================================
Currently the manila scheduler's affinity/anti-affinity hints are
based on share IDs. The idea now is to base the affinity hints on an
affinity policy, as is possible with Nova.
We considered the proposed approaches, and agreed that:
- If we are adding new policies, they should end up becoming a new
resource/entity within the manila database
- If there is a way to reuse the share groups mechanism, we should
prioritize it
*Action items:*
- Chuan (chuanm) will propose a design spec to document this new behavior.
*Share encryption*
==============
This feature is currently waiting for more reviews and testing on gerrit.
In the Dalmatian release mid-cycle we talked about the importance of
testing this feature against a first party driver, to ensure that the APIs
and integration with Barbican and Castellan work.
We agreed that:
- We should do some research on how to do this testing with the generic
driver (which uses Cinder and Nova)
- The testing will focus on the APIs and behavior of this feature, not the
encryption of the shares.
*Action items:*
- gouthamr will help with some research on how to test this with the
generic driver
- The manila team will discuss this again in the upcoming manila weekly
meetings.
[0]
https://www.youtube.com/watch?v=8UxrjEr6yik&list=PLnpzT0InFrqDHGfSDPhiGtSeX…
[1] https://etherpad.opendev.org/p/epoxy-ptg-manila
[2] https://review.opendev.org/c/openstack/manila/+/907983
[3] https://www.spinics.net/lists/ceph-users/msg83201.html
Re: [watcher] 2025.2 Flamingo PTG summary
by Dmitriy Rabotyagov
Hey,
Have a comment on one AI from the list.
> AI: (jgilaber) Mark Monasca and Grafana as deprecated, unless someone
steps up to maintain them, which should include a minimal CI job running.
So, on the OpenStack-Ansible side we were planning to revive the
Watcher role support in the project.
The way we usually test deployment is by spawning an all-in-one
environment with drivers and executing a couple of tempest scenarios
to ensure basic functionality of the service.
Given that, having a native OpenStack telemetry datastore is very
beneficial for such a goal, as we already maintain the means for
spawning a telemetry stack, while a requirement for Prometheus would
be unfortunate, for us at least.
While I was writing that, I partially realized that testing Watcher on
an all-in-one is pretty much impossible as well...
But at the very least, I can propose looking into adding an OSA job
with Gnocchi as non-voting (NV) to the project, to show the state of
the deployment with this driver.
On Wed, 16 Apr 2025, 21:53 Douglas Viroel, <viroel(a)gmail.com> wrote:
> Hello everyone,
>
> Last week's PTG had very interesting topics. Thank you all that joined.
> The Watcher PTG etherpad with all notes is available here:
> https://etherpad.opendev.org/p/apr2025-ptg-watcher
> Here is a summary of the discussions that we had, including the great
> cross-project sessions with Telemetry, Horizon and Nova team:
>
> Tech Debt (chandankumar/sean-k-mooney)
> =================================
> a) Croniter
>
> - Project is being abandoned as per
> https://pypi.org/project/croniter/#disclaimer
> - Watcher uses croniter to calculate a new schedule time to run an
> audit (continuous). It is also used to validate cron like syntax
> - Agreed: replace croniter with apscheduler's cron methods.
> - *AI*: (chandankumar) Fix in master branch and backport to 2025.1
>
> b) Support status of Watcher Datasources
>
> - Only Gnocchi and Prometheus have a CI job running tempest tests
> (with scenario tests)
> - Monasca has been inactive since 2024.1
> - *AI*: (jgilaber) Mark Monasca and Grafana as deprecated, unless
> someone steps up to maintain them, which should include a minimal CI job
> running.
> - *AI*: (dviroel) Document a support matrix between Strategies and
> Datasources, which ones are production ready or experimental, and testing
> coverage.
>
> c) Eventlet Removal
>
> - The team is going to look at how eventlet is used in Watcher and
> start a PoC of its removal.
> - Chandan Kumar and dviroel volunteered to help in this effort.
> - Planned for 2026.1 cycle.
>
> Workflow/API Improvements (amoralej)
> ==============================
> a) Actions states
>
> - Currently an Action moves from Pending to Succeeded or Failed, but
> these states do not cover some important scenarios
> - If an Action's pre_conditions fail, the action is set to FAILED,
> but in some scenarios it could simply be SKIPPED, letting the
> workflow continue.
> - Proposal: a new SKIPPED state for actions. E.g.: in a Nova
> Migration Action, if the instance doesn't exist on the source host,
> the action can be skipped instead of failing.
> - Proposal: Users could also manually skip specific actions from an
> action plan.
> - A skip_reason field could also be added to document the reason
> behind the skip: user's request, pre-condition check, etc.
> - *AI*: (amoralej) Create a spec to describe the proposed changes.
>
> b) Meaning of SUCCEEDED state in Action Plan
>
> - Currently it means that all actions were triggered, even if all of
> them failed, which can be confusing for users.
> - Docs mention that SUCCEEDED state means that all actions have been
> successfully executed.
> - *AI*: (amoralej) Document the current behavior as a bug (Priority
> High)
> - done: https://bugs.launchpad.net/watcher/+bug/2106407
>
> Watcher-Dashboard: Priorities to next release (amoralej)
> ===========================================
> a) Add integration/functional tests
>
> - Project is missing integration/functional tests and a CI job running
> against changes in the repo
> - No general conclusion and we will follow up with Horizon team
> - *AI*: (chandankumar/rlandy) sync with Horizon team about testing the
> plugin with horizon.
> - *AI*: (chandankumar/rlandy) devstack job running on new changes for
> watcher-dashboard repo.
>
> b) Add parameters to Audits
>
> - It is missing on the watcher-dashboard side. Without it, it is not
> possible to define some important parameters.
> - Should be addressed by a blueprint
> - Contributors to this feature: chandankumar
>
> Watcher cluster model collector improvement ideas (dviroel)
> =============================================
>
> - Brainstorm ideas to improve watcher collector process, since we
> still see a lot of issues due to outdated models when running audits
> - Both scheduled model update and event-based updates are enabled in
> CI today
> - The current state of event-based updates from Nova notifications
> is unknown. The code needs to be reviewed, and improvements/fixes
> can be proposed
> - e.g.: https://bugs.launchpad.net/watcher/+bug/2104220/comments/3 -
> We need to check whether we are processing the right notifications
> or if it is a bug in Nova
> - Proposal: Refresh the model before running an audit. A rate limit
> should be considered to avoid refreshing too often.
> - *AI*: (dviroel) new spec for cluster model refresh, based on audit
> trigger
> - *AI:* (dviroel) investigate the processing of nova events in Watcher
>
> Watcher and Nova's visible constraints (dviroel)
> ====================================
>
> - Currently, Watcher can propose solutions that include server
> migrations that violate some Nova constraints like: scheduler_hints,
> server_groups, pinned_az, etc.
> - In Epoxy release, Nova's API was improved to also show
> scheduler_hints and image_properties, allowing external services, like
> watcher, to query and use this information when calculating new solutions.
> -
> https://docs.openstack.org/releasenotes/nova/2025.1.html#new-features
> - Proposal: Extend compute instance model to include new properties,
> which can be retrieved via novaclient. Update strategies to filter invalid
> migration destinations based on these new properties.
> - *AI*: (dviroel) Propose a spec to better document the proposal. No
> API changes are expected here.
>
> Replacement for noisy neighbor policy (jgilaber)
> ====================================
>
> - The existing noisy neighbor strategy is based on L3 Cache metrics,
> which is not available anymore, since the support for it was dropped from
> the kernel and from Nova.
> - In order to keep this strategy, new metrics need to be considered:
> cpu_steal? io_wait? cache_misses?
> - *AI*: (jgilaber) Mark the strategy as deprecated during this cycle
> - *AI*: (TBD) Identify new metrics to be used
> - *AI*: (TBD) Work on a replacement for the current strategy
>
>
> Host Maintenance strategy new use case (jeno8)
> =====================================
>
> - New use case for the Host Maintenance strategy: instances with
> ephemeral disks should not be migrated at all.
> - Spec proposed:
> https://review.opendev.org/c/openstack/watcher-specs/+/943873
> - New action to stop instances when both live/cold migration are
> disabled by the user
> - *AI*: (All) Review the spec and continue with discussion there.
>
> Missing Contributor Docs (sean-k-mooney)
> ================================
>
> - Doc missing: Scope of the project, e.g:
> https://docs.openstack.org/nova/latest/contributor/project-scope.html
> - *AI*: (rlandy) Create a scope of the project doc for Watcher
> - Doc missing: PTL Guide, e.g:
> https://docs.openstack.org/nova/latest/contributor/ptl-guide.html
> - *AI*: (TBD) Create a PTL Guide for Watcher project
> - Document: When to create a spec vs blueprint vs bug
> - *AI*: (TBD) Create a doc section to describe the process based on
> what is being modified in the code.
>
> Retrospective
> ==========
>
> - The DPL approach seems to be working for Watcher
> - New core members added: sean-k-mooney, dviroel, marios and
> chandankumar
> - We plan to add more cores in the next cycle, based on reviews and
> engagement.
> - We plan to remove members who have not been active in the last 2
> cycles (starting at 2026.1)
> - A new datasource was added: Prometheus
> - Prometheus job now also runs scenario tests, along with Gnocchi.
> - We triaged all old bugs from launchpad
> - Needs improvement:
> - current team is still learning about details in the code, much of
> the historical knowledge was lost with the previous maintainers
> - core team still needs to grow
> - we need to focus on creating stable releases
>
>
> Cross-project session with Horizon team
> ===============================
>
> - Combined session with Telemetry and Horizon team, focused on how to
> provide a tenant and an admin dashboard to visualize metrics.
> - Watcher team presented some ideas of new panels for both admin and
> tenants, and sean-k-mooney raised a discussion about frameworks that can be
> used to implement them
> - Use-cases that were discussed:
> - a) Admin would benefit from a visualization of the infrastructure
> utilization (real usage metrics), so they can identify bottlenecks and plan
> optimization
> - b) A tenant would like to view their workload performance,
> checking real usage of cpu/ram/disk of instances, to properly adjust
> their resource allocation.
> - c) An admin user of watcher service would like to visualize
> metrics generated by watcher strategies like standard deviation of host
> metrics.
> - sean-k-mooney presented an initial PoC of how a Hypervisor Metrics
> dashboard would look.
> - Proposal for next steps:
> - start a new horizon plugin as an official deliverable of
> telemetry project
> - still unclear which framework to use for building charts
> - dashboard will integrate with Prometheus, as metric store
> - it is expected that only short term metrics will be supported (7
> days)
> - python-observability-client will be used to query Prometheus
>
>
> Cross-project session with Nova team
> =============================
>
> - sean-k-mooney led topics on how to evolve Nova to better assist
> other services, like Watcher, to take actions on instances. The team agreed
> on a proposal of using the existing metadata API to annotate instance's
> supported lifecycle operations. This information is very useful to improve
> Watcher's strategy's algorithms. Some example of instance's metadata could
> be:
> - lifecycle:cold-migratable=true|false
> - ha:maintenance-strategy:in_place|power_off|migrate
> - It was discussed that Nova could infer which operations are valid or
> not, based on information like: virt driver, flavor, image properties, etc.
> This feature was initially named 'instance capabilities' and will require a
> spec for further discussions.
> - Another topic of interest, also raised by Sean, was about adding new
> standard traits to resource providers, like PRESSURE_CPU and PRESSURE_DISK.
> These traits can be used to weight hosts when placing new VMs. Watcher and
> the libvirt driver could work on annotating them, but the team generally
> agreed that the libvirt driver is preferred here.
> - More info at Nova PTG etherpad [0] and sean's summary blog [1]
>
> [0] https://etherpad.opendev.org/p/r.bf5f1185e201e31ed8c3adeb45e3cf6d
> [1] https://www.seanmooney.info/blog/2025.2-ptg/#watcher-topics
>
>
> Please let me know if I missed something.
> Thanks!
>
> --
> Douglas Viroel - dviroel
>
8 months, 2 weeks
Re: [watcher] 2025.2 Flamingo PTG summary
by Dmitriy Rabotyagov
> well gnocchi is also not a native OpenStack telemetry datastore, it left
> our community to pursue its own goals and is now a third party datastore
> just like Grafana or Prometheus.
Yeah, well, true. It is still somehow treated as the "default" thing for
Telemetry, likely due to its existing integration with Keystone and
multi-tenancy support. And beyond it, all other options become
opinionated too fast - ie, some do OpenTelemetry, some do Zabbix,
VictoriaMetrics, etc. Also, from what I got, Telemetry still relies on
Ceilometer metrics?
And then Prometheus is obviously not the best storage for them, as it
requires a pushgateway, and afaik the Prometheus maintainers are
strictly against the "push" concept and treat it as conceptually wrong
(in contrast to OpenTelemetry). So the metric timestamp issue is likely
to remain unaddressed.
So that's why I'd say leaving Gnocchi as the "base" implementation might
be valuable (and very handy for us, as we wouldn't need to implement a
Prometheus job specifically for Watcher).
> but for example watcher can integrate with both ironic an canonical maas
component
> to do some level of host power management.
That sounds really interesting... We do maintain infrastructure using
MAAS, and playing with such an integration would be extremely
interesting. I hope I will be able to find some time for this, though...
On Thu, 17 Apr 2025 at 13:52, Sean Mooney <smooney(a)redhat.com> wrote:
>
>
> On 16/04/2025 21:04, Dmitriy Rabotyagov wrote:
> >
> > Hey,
> >
> > Have a comment on one AI from the list.
> >
> > > AI: (jgilaber) Mark Monasca and Grafana as deprecated, unless
> > someone steps up to maintain them, which should include a minimal CI
> > job running.
> >
> > So eventually, on OpenStack-Ansible we were planning to revive the
> > Watcher role support to the project.
> > How we usually test deployment, is by spawning an all-in-one
> > environment with drivers and executing a couple of tempest scenarios
> > to ensure basic functionality of the service.
> >
> > With that, having a native OpenStack telemetry datastore is very
> > beneficial for such goal, as we already do maintain means for spawning
> > telemetry stack. While a requirement for Prometheus will be
> > unfortunate for us at least.
> >
> > While I was writing that, I partially realized that testing Watcher on
> > all-in-one is pretty much impossible as well...
> >
> you can certainly test some of watcher with an all-in-one deployment,
> i.e. the apis, and you can use the dummy test strategies.
>
> but ya, in general, like nova, you need at least 2 nodes to be able to
> test it properly, ideally 3, so that if you're doing a live migration
> there is actually a choice of host.
>
> in general however watcher, like heat, just asks nova to actually move
> the vms. sure, it will ask nova to move it to a specific host, but
> fundamentally, if you have tested live migration with nova via tempest
> separately, there is no reason to expect it would not work for live
> migration triggered by watcher or heat or anything else that just calls
> nova's api.
>
> so you could still get some valuable testing in an all-in-one, but
> ideally there would be at least 2 compute hosts.
>
>
> > But at the very least, I can propose looking into adding an OSA job
> > with Gnocchi as NV to the project, to show the state of the deployment
> > with this driver.
> >
> well gnocchi is also not a native OpenStack telemetry datastore, it left
> our community to pursue its own goals and is now a third party datastore
>
> just like Grafana or Prometheus.
>
> monasca is currently marked as inactive
> https://review.opendev.org/c/openstack/governance/+/897520 and is in the
> process of being retired.
>
> but it also has no testing on the watcher side, so the combination of the
> two is why we are deprecating it going forward.
>
> if both change im happy to see the support continue.
>
> Gnocchi has testing but we are not actively working on extending its
> functionality going forward.
>
> as long as it continues to work i see no reason to change its support
> status.
>
> watcher has quite a lot of untested integrations which is unfortunate
>
> we are planning to build out a feature/test/support matrix in the docs
> this cycle
>
> but for example watcher can integrate with both ironic and canonical's
> maas component to do some level of host power management. none of that is
> tested and we are likely going to mark them as experimental and reflect
> on whether we can continue to support them or not going forward.
>
> it also has the ability to do cinder storage pool balancing, which is i
> think also untested right now.
>
> one of the things we hope to do is extend the existing testing in our
> current jobs to cover gaps like that where it is practical to do so. but
> creating a devstack plugin to deploy maas with fake infrastructure is
> likely a lot more than we can do with our existing contributors, so
> expect that to go to experimental, then deprecated, and finally it will
> be removed if no one turns up to support it.
>
> ironic is in the same boat, however there are devstack jobs with fake
> ironic nodes, so i could see a path to us having an ironic job down the
> line. it's just not high on our current priority list to address the
> support status or testing of this currently.
>
> eventlet removal and other tech debt/community goals are definitely
> higher, but i hope the new support/testing matrix will at least help
> folks make informed decisions on what features to use and which backends
> are recommended going forward.
>
> >
> > On Wed, 16 Apr 2025, 21:53 Douglas Viroel, <viroel(a)gmail.com> wrote:
> >
> > Hello everyone,
> >
> > Last week's PTG had very interesting topics. Thank you all that
> > joined.
> > The Watcher PTG etherpad with all notes is available here:
> > https://etherpad.opendev.org/p/apr2025-ptg-watcher
> > Here is a summary of the discussions that we had, including the
> > great cross-project sessions with Telemetry, Horizon and Nova team:
> >
> > Tech Debt (chandankumar/sean-k-mooney)
> > =================================
> > a) Croniter
> >
> > * Project is being abandoned as per
> > https://pypi.org/project/croniter/#disclaimer
> > * Watcher uses croniter to calculate a new schedule time to run
> > an audit (continuous). It is also used to validate cron like
> > syntax
> > * Agreed: replace croniter with apscheduler's cron methods.
> > * *AI*: (chandankumar) Fix in master branch and backport to 2025.1
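The agreed replacement is apscheduler's cron support; as a rough stdlib illustration of the validation half of the use case (this is a simplified stand-in, not apscheduler's actual API, and it only handles `*`, plain numbers, and `*/n` steps):

```python
# Allowed value ranges for the five classic cron fields:
# minute, hour, day-of-month, month, day-of-week.
FIELD_RANGES = [(0, 59), (0, 23), (1, 31), (1, 12), (0, 6)]

def valid_cron(expr):
    """Very small cron-expression validity check (illustrative only)."""
    fields = expr.split()
    if len(fields) != len(FIELD_RANGES):
        return False
    for field, (lo, hi) in zip(fields, FIELD_RANGES):
        if field == "*":
            continue
        # Accept "*/n" steps and bare numbers; reject everything else.
        token = field[2:] if field.startswith("*/") else field
        if not token.isdigit() or not lo <= int(token) <= hi:
            return False
    return True
```

A real migration would lean on the library's own parser (e.g. building a trigger from the crontab string and catching the parse error) rather than re-implementing validation.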
> >
> > b) Support status of Watcher Datasources
> >
> > * Only Gnocchi and Prometheus have CI job running tempest tests
> > (with scenario tests)
> > * Monasca has been inactive since 2024.1
> > * *AI*: (jgilaber) Mark Monasca and Grafana as deprecated,
> > unless someone steps up to maintain them, which should include
> > a minimal CI job running.
> > * *AI*: (dviroel) Document a support matrix between Strategies
> > and Datasources, which ones are production ready or
> > experimental, and testing coverage.
> >
> > c) Eventlet Removal
> >
> > * Team is going to look at how the eventlet is used in Watcher
> > and start a PoC of its removal.
> > * Chandan Kumar and dviroel volunteer to help in this effort.
> > * Planned for 2026.1 cycle.
> >
> > Workflow/API Improvements (amoralej)
> > ==============================
> > a) Actions states
> >
> > * Currently, Actions update from PENDING to SUCCEEDED or FAILED,
> > but these states do not cover some important scenarios
> > * If an Action's pre_conditions fails, the action is set to
> > FAILED, but for some scenarios, it could be just SKIPPED and
> > continue the workflow.
> > * Proposal: New SKIPPED state for action. E.g: In a Nova
> > Migration Action, if the instance doesn't exist in the source
> > host, it can be skipped instead of failing.
> > * Proposal: Users could also manually skip specific actions from
> > an action plan.
> > * A skip_reason field could also be added to document the reason
> > behind the skip: user's request, pre-condition check, etc.
> > * *AI*: (amoralej) Create a spec to describe the proposed changes.
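The proposed SKIPPED state and `skip_reason` field could look roughly like this; a minimal sketch of the proposal, not current Watcher behaviour (class and attribute names are illustrative):

```python
from enum import Enum

class State(Enum):
    PENDING = "PENDING"
    SUCCEEDED = "SUCCEEDED"
    FAILED = "FAILED"
    SKIPPED = "SKIPPED"  # proposed new state

class Action:
    def __init__(self, name, pre_condition=lambda: True):
        self.name = name
        self.pre_condition = pre_condition
        self.state = State.PENDING
        self.skip_reason = None  # proposed field documenting why

    def execute(self):
        if not self.pre_condition():
            # Instead of FAILED, mark the action SKIPPED and let the
            # rest of the action plan continue.
            self.state = State.SKIPPED
            self.skip_reason = "pre-condition check failed"
            return
        self.state = State.SUCCEEDED

# A migration whose pre-condition fails (e.g. instance already gone
# from the source host) is skipped rather than failed.
migrate = Action("nova-migrate", pre_condition=lambda: False)
migrate.execute()
```

The same field could also record "user's request" when an operator manually skips an action from a plan.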
> >
> > b) Meaning of SUCCEEDED state in Action Plan
> >
> > * Currently means that all actions are triggered, even if all of
> > them fail, which can be confusing for users.
> > * Docs mention that SUCCEEDED state means that all actions have
> > been successfully executed.
> > * *AI*: (amoralej) Document the current behavior as a bug
> > (Priority High)
> > o done: https://bugs.launchpad.net/watcher/+bug/2106407
> >
> > Watcher-Dashboard: Priorities to next release (amoralej)
> > ===========================================
> > a) Add integration/functional tests
> >
> > * Project is missing integration/functional tests and a CI job
> > running against changes in the repo
> > * No general conclusion and we will follow up with Horizon team
> > * *AI*: (chandankumar/rlandy) sync with Horizon team about
> > testing the plugin with horizon.
> > * *AI*: (chandankumar/rlandy) devstack job running on new
> > changes for watcher-dashboard repo.
> >
> > b) Add parameters to Audits
> >
> > * It is missing on the watcher-dashboard side. Without it, it is
> > not possible to define some important parameters.
> > * Should be addressed by a blueprint
> > * Contributors to this feature: chandankumar
> >
> > Watcher cluster model collector improvement ideas (dviroel)
> > =============================================
> >
> > * Brainstorm ideas to improve watcher collector process, since
> > we still see a lot of issues due to outdated models when
> > running audits
> > * Both scheduled model update and event-based updates are
> > enabled in CI today
> > * The current state of event-based updates from Nova
> > notifications is unknown. The code needs to be reviewed so that
> > improvements/fixes can be proposed
> > o e.g:
> > https://bugs.launchpad.net/watcher/+bug/2104220/comments/3
> > - We need to check if we are processing the right
> > notifications or if it is a bug in Nova
> > * Proposal: Refresh the model before running an audit. A rate
> > limit should be considered to avoid refreshing too often.
> > * *AI*: (dviroel) new spec for cluster model refresh, based on
> > audit trigger
> > * *AI:* (dviroel) investigate the processing of nova events in
> > Watcher
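The refresh-before-audit proposal with a rate limit can be sketched as follows; the 60-second interval and class name are illustrative assumptions, not part of the spec:

```python
import time

class ModelRefresher:
    """Refresh the cluster model before an audit, but at most once
    per min_interval seconds (illustrative sketch)."""

    def __init__(self, min_interval=60.0, clock=time.monotonic):
        self.min_interval = min_interval
        self.clock = clock  # injectable for testing
        self._last_refresh = None
        self.refresh_count = 0

    def maybe_refresh(self):
        now = self.clock()
        if (self._last_refresh is not None
                and now - self._last_refresh < self.min_interval):
            return False  # rate-limited: reuse the current model
        self._last_refresh = now
        self.refresh_count += 1  # a real implementation rebuilds the model here
        return True

# Simulated clock to show the rate limit without sleeping.
ticks = iter([0.0, 10.0, 120.0])
refresher = ModelRefresher(min_interval=60.0, clock=lambda: next(ticks))
results = [refresher.maybe_refresh() for _ in range(3)]
```

Here the second audit at t=10s reuses the model built at t=0, while the third at t=120s triggers a fresh rebuild.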
> >
> > Watcher and Nova's visible constraints (dviroel)
> > ====================================
> >
> > * Currently, Watcher can propose solutions that include server
> > migrations that violate some Nova constraints like:
> > scheduler_hints, server_groups, pinned_az, etc.
> > * In Epoxy release, Nova's API was improved to also show
> > scheduler_hints and image_properties, allowing external
> > services, like watcher, to query and use this information when
> > calculating new solutions.
> > o https://docs.openstack.org/releasenotes/nova/2025.1.html#new-features
> > * Proposal: Extend compute instance model to include new
> > properties, which can be retrieved via novaclient. Update
> > strategies to filter invalid migration destinations based on
> > these new properties.
> > * *AI*: (dviroel) Propose a spec to better document the
> > proposal. No API changes are expected here.
> >
> > Replacement for noisy neighbor policy (jgilaber)
> > ====================================
> >
> > * The existing noisy neighbor strategy is based on L3 Cache
> > metrics, which is not available anymore, since the support for
> > it was dropped from the kernel and from Nova.
> > * In order to keep this strategy, new metrics need to be
> > considered: cpu_steal? io_wait? cache_misses?
> > * *AI*: (jgilaber) Mark the strategy as deprecated during this cycle
> > * *AI*: (TBD) Identify new metrics to be used
> > * *AI*: (TBD) Work on a replacement for the current strategy
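One of the candidate replacement metrics above (cpu_steal) could drive a noisy-host check roughly like this; the 5% threshold and sample data are purely illustrative assumptions, not an agreed design:

```python
import statistics

STEAL_THRESHOLD = 5.0  # percent; hypothetical cut-off

def noisy_hosts(cpu_steal_samples):
    """Flag hosts whose average CPU steal time exceeds the threshold."""
    return sorted(
        host for host, samples in cpu_steal_samples.items()
        if statistics.fmean(samples) > STEAL_THRESHOLD
    )

samples = {
    "compute-0": [0.5, 1.0, 0.8],
    "compute-1": [7.5, 9.0, 8.2],  # sustained steal: likely noisy
}
```

io_wait or cache-miss counters could be folded in the same way once the team settles on which metrics the datasources can actually provide.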
> >
> >
> > Host Maintenance strategy new use case (jeno8)
> > =====================================
> >
> > * New use case for Host Maintenance strategy: instance with
> > ephemeral disks should not be migrated at all.
> > * Spec proposed:
> > https://review.opendev.org/c/openstack/watcher-specs/+/943873
> > o New action to stop instances when both live/cold migration
> > are disabled by the user
> > * *AI*: (All) Review the spec and continue with discussion there.
> >
> > Missing Contributor Docs (sean-k-mooney)
> > ================================
> >
> > * Doc missing: Scope of the project, e.g:
> > https://docs.openstack.org/nova/latest/contributor/project-scope.html
> > * *AI*: (rlandy) Create a scope of the project doc for Watcher
> > * Doc missing: PTL Guide, e.g:
> > https://docs.openstack.org/nova/latest/contributor/ptl-guide.html
> > * *AI*: (TBD) Create a PTL Guide for Watcher project
> > * Document: When to create a spec vs blueprint vs bug
> > * *AI*: (TBD) Create a doc section to describe the process based
> > on what is being modified in the code.
> >
> > Retrospective
> > ==========
> >
> > * The DPL approach seems to be working for Watcher
> > * New core members added: sean-k-mooney, dviroel, marios and
> > chandankumar
> > o We plan to add more cores in the next cycle, based on
> > reviews and engagement.
> > o We plan to remove members who have been inactive for the
> > last 2 cycles (starting at 2026.1)
> > * A new datasource was added: Prometheus
> > * Prometheus job now also runs scenario tests, along with Gnocchi.
> > * We triaged all old bugs from launchpad
> > * Needs improvement:
> > o current team is still learning about details in the code,
> > much of the historical knowledge was lost with the
> > previous maintainers
> > o core team still needs to grow
> > o we need to focus on creating stable releases
> >
> >
> > Cross-project session with Horizon team
> > ===============================
> >
> > * Combined session with Telemetry and Horizon team, focused on
> > how to provide a tenant and an admin dashboard to visualize
> > metrics.
> > * Watcher team presented some ideas of new panels for both admin
> > and tenants, and sean-k-mooney raised a discussion about
> > frameworks that can be used to implement them
> > * Use-cases that were discussed:
> > o a) Admin would benefit from a visualization of the
> > infrastructure utilization (real usage metrics), so they
> > can identify bottlenecks and plan optimization
> > o b) A tenant would like to view their workload performance,
> > checking real usage of cpu/ram/disk of instances, to
> > properly adjust their resource allocation.
> > o c) An admin user of watcher service would like to
> > visualize metrics generated by watcher strategies like
> > standard deviation of host metrics.
> > * sean-k-mooney presented an initial PoC of how a Hypervisor
> > Metrics dashboard would look.
> > * Proposal for next steps:
> > o start a new horizon plugin as an official deliverable of
> > telemetry project
> > o still unclear which framework to use for building charts
> > o dashboard will integrate with Prometheus, as metric store
> > o it is expected that only short term metrics will be
> > supported (7 days)
> > o python-observability-client will be used to query Prometheus
> >
> >
> > Cross-project session with Nova team
> > =============================
> >
> > * sean-k-mooney led topics on how to evolve Nova to better
> > assist other services, like Watcher, to take actions on
> > instances. The team agreed on a proposal of using the existing
> > metadata API to annotate instance's supported lifecycle
> > operations. This information is very useful for improving
> > Watcher's strategy algorithms. Some examples of instance
> > metadata could be:
> > o lifecycle:cold-migratable=true|false
> > o ha:maintenance-strategy:in_place|power_off|migrate
> > * It was discussed that Nova could infer which operations are
> > valid or not, based on information like: virt driver, flavor,
> > image properties, etc. This feature was initially named
> > 'instance capabilities' and will require a spec for further
> > discussions.
> > * Another topic of interest, also raised by Sean, was about
> > adding new standard traits to resource providers, like
> > PRESSURE_CPU and PRESSURE_DISK. These traits can be used to
> > weight hosts when placing new VMs. Watcher and the libvirt
> > driver could work on annotating them, but the team generally
> > agreed that the libvirt driver is preferred here.
> > * More info at Nova PTG etherpad [0] and sean's summary blog [1]
> >
> > [0] https://etherpad.opendev.org/p/r.bf5f1185e201e31ed8c3adeb45e3cf6d
> > [1] https://www.seanmooney.info/blog/2025.2-ptg/#watcher-topics
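The PRESSURE_CPU / PRESSURE_DISK idea above amounts to trait-based weighting; a minimal sketch of how hosts might be ranked, assuming the proposed (not yet standard) trait names from the PTG discussion:

```python
# Proposed trait names from the Nova cross-project session; these are
# not yet part of os-traits.
PRESSURE_TRAITS = {"PRESSURE_CPU", "PRESSURE_DISK"}

def weigh_hosts(host_traits):
    """Return hosts sorted best-first: fewer pressure traits wins."""
    return sorted(host_traits,
                  key=lambda h: len(PRESSURE_TRAITS & host_traits[h]))

hosts = {
    "compute-0": {"PRESSURE_CPU", "PRESSURE_DISK"},
    "compute-1": set(),
    "compute-2": {"PRESSURE_CPU"},
}
```

In the real design the traits would live on placement resource providers (annotated by the libvirt driver, per the team's preference) and feed a scheduler weigher rather than a plain sort.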
> >
> >
> > Please let me know if I missed something.
> > Thanks!
> >
> > --
> > Douglas Viroel - dviroel
> >
>
8 months, 2 weeks
[tc][all] OpenStack Technical Committee Weekly Summary and Meeting Agenda (2025.1/R-25)
by Goutham Pacha Ravi
Hello Stackers,
This week, we begin our 26-week endeavor towards the next SLURP
release, 2025.1 ("Epoxy") [1]. OpenStack Project Teams will meet
virtually at the Project Teams Gathering (PTG) in two weeks, starting
on 2024-10-21 [2]. The OpenStack TC plans to host cross project
meetings during the following time slots:
- 2024-10-21 (Monday): 1400 UTC - 1700 UTC
- 2024-10-25 (Friday): 1500 UTC - 1700 UTC
You'll find the proposed topics on the PTG Etherpad [3]; please add
your IRC nickname if you'd like to attend or be notified when
discussions begin.
Last week, a few community leads presented at OpenInfra Live,
recapping the 2024.2 release [4]. I encourage you to watch the
presentation and follow the themes each team is pursuing in the
"Epoxy" release cycle. I'm excited to share that the organizers of the
upcoming OpenInfra Days North America (Oct 15-16) have made it a
hybrid event. Please register if you plan to attend virtually [5].
=== Weekly Meeting ===
The last weekly meeting of the OpenStack Technical Committee was held
simultaneously on IRC [6] and video [7]. We discussed meeting times,
and the current time (Tuesdays at 1800 UTC) was retained due to a lack
of consensus on better alternatives. Sylvain Bauza (bauzas)
volunteered to be an Election Official for the 2025.2 elections, which
will be announced around February 2025. We also discussed "leaderless"
projects for the 2025.1 release and appointed leaders for the
OpenStack Mistral, OpenStack Watcher, and OpenStack Swift projects.
Additionally, we created a TC tracker for the 2025.1 release cycle [8]
to monitor the progress of community goals and other governance
initiatives.
The next OpenStack Technical Committee meeting is today (2024-10-08)
at 1800 UTC on the #openstack-tc IRC channel on OFTC. You can find the
agenda on the weekly meeting wiki page [9]. I hope you can join us!
Below is a list of governance changes that have merged in the past
week and those still pending community review.
=== Governance Proposals ===
==== Merged ====
- Appoint Tim Burke as PTL for Swift |
https://review.opendev.org/c/openstack/governance/+/928881
==== Open for Review ====
- Mark kuryr-kubernetes and kuryr-tempest-plugin inactive |
https://review.opendev.org/c/openstack/governance/+/929698
- Add Axel Vanzaghi as PTL for Mistral |
https://review.opendev.org/c/openstack/governance/+/927962
- Propose the eventlet-removal community goal |
https://review.opendev.org/c/openstack/governance/+/931254
=== Upcoming Events ===
- 2024-10-08: OpenInfra Monthly Board Meeting: https://board.openinfra.dev/
- 2024-10-15: OpenInfra Days NA, Indianapolis:
https://ittraining.iu.edu/explore-topics/titles/oid-iu/
- 2024-10-21: OpenInfra Project Teams Gathering: https://openinfra.dev/ptg/
Thank you for reading!
On behalf of the OpenStack TC,
Goutham Pacha Ravi (gouthamr)
OpenStack TC Chair
[1] 2025.1 "Epoxy" Release Schedule:
https://releases.openstack.org/epoxy/schedule.html
[2] "Epoxy" PTG Schedule: https://ptg.opendev.org/ptg.html
[3] Technical Committee PTG Etherpad:
https://etherpad.opendev.org/p/oct2024-ptg-os-tc
[4] "Introducing OpenStack Dalmatian 2024.2": https://youtu.be/6igJNIJ9yFE
[5] OpenInfra Days NA:
https://ittraining.iu.edu/explore-topics/titles/oid-iu/index.html#register
[6] TC Meeting IRC Log, 2024-10-01:
https://meetings.opendev.org/meetings/tc/2024/tc.2024-10-01-18.00.log.html
[7] TC Meeting Video Recording, 2024-10-01: https://youtu.be/6RXE1LfEv7w
[8] 2025.1 TC Tracker: https://etherpad.opendev.org/p/tc-2025.1-tracker
[9] TC Meeting Agenda, 2024-10-08:
https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting
1 year, 2 months
[tc][all] OpenStack Technical Committee Weekly Summary and Meeting Agenda (2025.1/R-20)
by Goutham Pacha Ravi
Hello Stackers,
This week is milestone-1 of the 2025.1 ("Epoxy") release cycle [1].
We've been hard at work implementing the action items from the recent
Virtual Project Teams Gathering (PTG) [2]. This past week, we didn’t
pursue any new governance changes, though several remain in draft as
follow-ups from the PTG. We’re also excited about the extended
deadline for the Call for Proposals for the OpenInfra Days event at
SCaLE 2025. The new deadline is Nov 15, 2024 [3].
=== Weekly Meeting ===
Last week, the Technical Committee met concurrently on video [4] and
IRC [5]. Allison Price (aprice) and Jimmy McArthur (jimmymcarthur)
from the OpenInfra VMware Migration Working Group [6] joined us to
discuss the group's charter and activities so far. The working group’s
goal is not only to address gaps and workarounds but also to influence
the future development of OpenStack software. The group has published
a whitepaper [7] for operators planning to migrate cloud workloads.
The TC expressed interest in collaboration between the working group
and the current maintainers of the VMware hypervisor driver in
OpenStack Nova. We also highlighted low contributor numbers in several
critical project teams mentioned in the whitepaper, including those
maintaining OpenStack Compute High Availability (Masakari), OpenStack
Storage Backup and Restore (Freezer), OpenStack Resource Optimization
(Watcher), and OpenStack Database Service (Trove). The working group
plans to hold regular meetings soon and will share details via this
mailing list.
The TC reviewed the maintenance status of the OpenStack documentation
styling extension for Python Sphinx, openstackdocstheme. This library
is outdated, as it depends on older versions, but there aren’t enough
maintainers to review changes, which could impact documentation across
all projects. Ideally, maintainers of this library should have
expertise in Sphinx and JavaScript, making the TC a less-than-ideal
group to support it. We encourage migrating documentation away from
this custom tool, as Sphinx styling extensions have advanced and may
now suffice without heavy customization. While there’s no immediate
migration plan, we’re looking for volunteers to lead this effort. In
the meantime, we’ll raise concerns with the Oslo project team about
the lack of reviews for openstackdocstheme changes.
The next meeting of the OpenStack Technical Committee is today
(2024-11-12) on OFTC's #openstack-tc channel. You can find the agenda
on the Meeting wiki [8]. I hope you'll be able to join us.
=== Governance Proposals ===
==== Open for Review ====
Propose to select the eventlet-removal community goal |
https://review.opendev.org/c/openstack/governance/+/931254
Add Cinder Huawei charm |
https://review.opendev.org/c/openstack/governance/+/867588
Thank you very much for reading!
On behalf of the OpenStack TC,
Goutham Pacha Ravi (gouthamr)
OpenStack TC Chair
[1] 2025.1 "Epoxy" Release Schedule:
https://releases.openstack.org/epoxy/schedule.html
[2] Superuser compilation of PTG summaries:
https://superuser.openinfra.dev/articles/october-2024-openinfra-ptg-summary/
[3] OpenInfra Days CFP:
https://lists.openinfra.dev/archives/list/foundation@lists.openinfra.dev/me…
[4] TC Meeting Video Recording, 2024-11-05: https://youtu.be/o_w3OutGEfM
[5] TC Meeting IRC Log 2024-11-05:
https://meetings.opendev.org/meetings/tc/2024/tc.2024-11-05-18.04.log.html
[6] OpenStack VMware Migration Group:
https://www.openstack.org/vmware-migration-to-openstack/
[7] VMware Migration to OpenStack White Paper:
https://www.openstack.org/vmware-migration-to-openstack-white-paper
[8] TC Meeting Agenda, 2024-11-12:
https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting
1 year, 1 month
Re: [watcher] 2025.2 Flamingo PTG summary
by Douglas Viroel
Hi Dmitriy,
Glad to hear that you plan to revive Watcher role support in OSA.
Right, if you plan to run tempest scenarios tests, then you will need at
least an additional compute node.
Only Gnocchi and Prometheus datasources have support for metrics injection
in watcher-tempest-plugin, which allows us to execute different strategy
tests.
We have CI jobs running scenario tests for both data sources, if you want
to take a look [1][2].
Let us know if you need anything.
[1]
https://zuul.opendev.org/t/openstack/builds?job_name=watcher-tempest-strate…
[2]
https://zuul.opendev.org/t/openstack/builds?job_name=watcher-prometheus-int…
BR,
On Wed, Apr 16, 2025 at 5:04 PM Dmitriy Rabotyagov <noonedeadpunk(a)gmail.com>
wrote:
> Hey,
>
> Have a comment on one AI from the list.
>
> > AI: (jgilaber) Mark Monasca and Grafana as deprecated, unless someone
> steps up to maintain them, which should include a minimal CI job running.
>
> So eventually, on OpenStack-Ansible we were planning to revive the Watcher
> role support to the project.
> How we usually test deployment, is by spawning an all-in-one environment
> with drivers and executing a couple of tempest scenarios to ensure basic
> functionality of the service.
>
> With that, having a native OpenStack telemetry datastore is very
> beneficial for such goal, as we already do maintain means for spawning
> telemetry stack. While a requirement for Prometheus will be unfortunate for
> us at least.
>
> While I was writing that, I partially realized that testing Watcher on
> all-in-one is pretty much impossible as well...
>
> But at the very least, I can propose looking into adding an OSA job with
> Gnocchi as NV to the project, to show the state of the deployment with this
> driver.
>
> On Wed, 16 Apr 2025, 21:53 Douglas Viroel, <viroel(a)gmail.com> wrote:
>
>> Hello everyone,
>>
>> Last week's PTG had very interesting topics. Thank you all that joined.
>> The Watcher PTG etherpad with all notes is available here:
>> https://etherpad.opendev.org/p/apr2025-ptg-watcher
>> Here is a summary of the discussions that we had, including the great
>> cross-project sessions with Telemetry, Horizon and Nova team:
>>
>> Tech Debt (chandankumar/sean-k-mooney)
>> =================================
>> a) Croniter
>>
>> - Project is being abandoned as per
>> https://pypi.org/project/croniter/#disclaimer
>> - Watcher uses croniter to calculate a new schedule time to run an
>> audit (continuous). It is also used to validate cron like syntax
>> - Agreed: replace croniter with apscheduler's cron methods.
>> - *AI*: (chandankumar) Fix in master branch and backport to 2025.1
>>
>> b) Support status of Watcher Datasources
>>
>> - Only Gnocchi and Prometheus have CI job running tempest tests (with
>> scenario tests)
>> - Monasca has been inactive since 2024.1
>> - *AI*: (jgilaber) Mark Monasca and Grafana as deprecated, unless
>> someone steps up to maintain them, which should include a minimal CI job
>> running.
>> - *AI*: (dviroel) Document a support matrix between Strategies and
>> Datasources, which ones are production ready or experimental, and testing
>> coverage.
>>
>> c) Eventlet Removal
>>
>> - Team is going to look at how the eventlet is used in Watcher and
>> start a PoC of its removal.
>> - Chandan Kumar and dviroel volunteer to help in this effort.
>> - Planned for 2026.1 cycle.
>>
>> Workflow/API Improvements (amoralej)
>> ==============================
>> a) Actions states
>>
>> - Currently, Actions update from PENDING to SUCCEEDED or FAILED, but
>> these states do not cover some important scenarios
>> - If an Action's pre_conditions fails, the action is set to FAILED,
>> but for some scenarios, it could be just SKIPPED and continue the workflow.
>> - Proposal: New SKIPPED state for action. E.g: In a Nova Migration
>> Action, if the instance doesn't exist in the source host, it can be skipped
>> instead of fail.
>> - Proposal: Users could also manually skip specific actions from an
>> action plan.
>> - A skip_reason field could also be added to document the reason
>> behind the skip: user's request, pre-condition check, etc.
>> - *AI*: (amoralej) Create a spec to describe the proposed changes.
>>
>> b) Meaning of SUCCEEDED state in Action Plan
>>
>> - Currently means that all actions are triggered, even if all of them
>> fail, which can be confusing for users.
>> - Docs mention that SUCCEEDED state means that all actions have been
>> successfully executed.
>> - *AI*: (amoralej) Document the current behavior as a bug (Priority
>> High)
>> - done: https://bugs.launchpad.net/watcher/+bug/2106407
>>
>> Watcher-Dashboard: Priorities to next release (amoralej)
>> ===========================================
>> a) Add integration/functional tests
>>
>> - Project is missing integration/functional tests and a CI job
>> running against changes in the repo
>> - No general conclusion and we will follow up with Horizon team
>> - *AI*: (chandankumar/rlandy) sync with Horizon team about testing
>> the plugin with horizon.
>> - *AI*: (chandankumar/rlandy) devstack job running on new changes for
>> watcher-dashboard repo.
>>
>> b) Add parameters to Audits
>>
>> - It is missing on the watcher-dashboard side. Without it, it is not
>> possible to define some important parameters.
>> - Should be addressed by a blueprint
>> - Contributors to this feature: chandankumar
>>
>> Watcher cluster model collector improvement ideas (dviroel)
>> =============================================
>>
>> - Brainstorm ideas to improve watcher collector process, since we
>> still see a lot of issues due to outdated models when running audits
>> - Both scheduled model update and event-based updates are enabled in
>> CI today
>> - The current state of event-based updates from Nova
>> notifications is unknown. The code needs to be reviewed so that
>> improvements/fixes can be proposed
>> - e.g: https://bugs.launchpad.net/watcher/+bug/2104220/comments/3
>> - We need to check if we are processing the right notifications or if
>> it is a bug in Nova
>> - Proposal: Refresh the model before running an audit. A rate limit
>> should be considered to avoid refreshing too often.
>> - *AI*: (dviroel) new spec for cluster model refresh, based on audit
>> trigger
>> - *AI:* (dviroel) investigate the processing of nova events in Watcher
>>
>> Watcher and Nova's visible constraints (dviroel)
>> ====================================
>>
>> - Currently, Watcher can propose solutions that include server
>> migrations that violate some Nova constraints like: scheduler_hints,
>> server_groups, pinned_az, etc.
>> - In Epoxy release, Nova's API was improved to also show
>> scheduler_hints and image_properties, allowing external services, like
>> watcher, to query and use this information when calculating new solutions.
>> -
>> https://docs.openstack.org/releasenotes/nova/2025.1.html#new-features
>> - Proposal: Extend compute instance model to include new properties,
>> which can be retrieved via novaclient. Update strategies to filter invalid
>> migration destinations based on these new properties.
>> - *AI*: (dviroel) Propose a spec to better document the proposal. No
>> API changes are expected here.
>>
>> Replacement for noisy neighbor policy (jgilaber)
>> ====================================
>>
>> - The existing noisy neighbor strategy is based on L3 Cache metrics,
>> which is not available anymore, since the support for it was dropped from
>> the kernel and from Nova.
>> - In order to keep this strategy, new metrics need to be considered:
>> cpu_steal? io_wait? cache_misses?
>> - *AI*: (jgilaber) Mark the strategy as deprecated during this cycle
>> - *AI*: (TBD) Identify new metrics to be used
>> - *AI*: (TBD) Work on a replacement for the current strategy
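As a sketch of what a cpu_steal-based signal could look like: steal time is typically exposed as a cumulative counter (e.g. the 'steal' column of /proc/stat), so a usable metric is the stolen fraction over a sampling window. Which metric the replacement strategy will actually adopt is still an open question:

```python
def steal_time_ratio(steal_prev, steal_now, total_prev, total_now):
    """Fraction of CPU time stolen by the hypervisor over a sampling
    window, computed from two snapshots of cumulative CPU counters.
    Illustrative only; not part of any existing Watcher strategy."""
    delta_total = total_now - total_prev
    if delta_total <= 0:
        return 0.0
    return (steal_now - steal_prev) / delta_total
```

A sustained high ratio on a host would flag likely noisy-neighbor contention without needing L3 cache counters.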
>>
>>
>> Host Maintenance strategy new use case (jeno8)
>> =====================================
>>
>> - New use case for the Host Maintenance strategy: instances with
>> ephemeral disks should not be migrated at all.
>> - Spec proposed:
>> https://review.opendev.org/c/openstack/watcher-specs/+/943873
>> - New action to stop instances when both live/cold migration are
>> disabled by the user
>> - *AI*: (All) Review the spec and continue with discussion there.
>>
>> Missing Contributor Docs (sean-k-mooney)
>> ================================
>>
>> - Doc missing: Scope of the project, e.g:
>> https://docs.openstack.org/nova/latest/contributor/project-scope.html
>> - *AI*: (rlandy) Create a scope of the project doc for Watcher
>> - Doc missing: PTL Guide, e.g:
>> https://docs.openstack.org/nova/latest/contributor/ptl-guide.html
>> - *AI*: (TBD) Create a PTL Guide for Watcher project
>> - Document: When to create a spec vs blueprint vs bug
>> - *AI*: (TBD) Create a doc section to describe the process based on
>> what is being modified in the code.
>>
>> Retrospective
>> ==========
>>
>> - The DPL approach seems to be working for Watcher
>> - New core members added: sean-k-mooney, dviroel, marios and
>> chandankumar
>> - We plan to add more cores in the next cycle, based on reviews
>> and engagement.
>> - We plan to remove members who have not been active in the last
>> 2 cycles (starting at 2026.1)
>> - A new datasource was added: Prometheus
>> - Prometheus job now also runs scenario tests, along with Gnocchi.
>> - We triaged all old bugs from launchpad
>> - Needs improvement:
>> - the current team is still learning the details of the code;
>> much of the historical knowledge was lost with the previous maintainers
>> - the core team still needs to grow
>> - we need to focus on creating stable releases
>>
>>
>> Cross-project session with Horizon team
>> ===============================
>>
>> - Combined session with Telemetry and Horizon team, focused on how to
>> provide a tenant and an admin dashboard to visualize metrics.
>> - Watcher team presented some ideas of new panels for both admin and
>> tenants, and sean-k-mooney raised a discussion about frameworks that can be
>> used to implement them
>> - Use-cases that were discussed:
>> - a) An admin would benefit from a visualization of
>> infrastructure utilization (real usage metrics), so they can identify
>> bottlenecks and plan optimizations
>> - b) A tenant would like to view their workload performance,
>> checking real usage of cpu/ram/disk of instances, to properly adjust
>> their resource allocation.
>> - c) An admin user of watcher service would like to visualize
>> metrics generated by watcher strategies like standard deviation of host
>> metrics.
>> - sean-k-mooney presented an initial PoC of what a Hypervisor
>> Metrics dashboard could look like.
>> - Proposal for next steps:
>> - start a new horizon plugin as an official deliverable of
>> telemetry project
>> - still unclear which framework to use for building charts
>> - the dashboard will integrate with Prometheus as the metric store
>> - it is expected that only short-term metrics will be supported
>> (7 days)
>> - python-observability-client will be used to query Prometheus
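Since the exact python-observability-client calls were not settled in the session, here is a sketch against the underlying Prometheus HTTP API that any client ultimately wraps. The base URL and PromQL expression are placeholder assumptions:

```python
def build_range_query(base_url, promql, start, end, step="5m"):
    """Return (url, params) for a Prometheus /api/v1/query_range
    request covering the short-term window the dashboard is expected
    to show. Sketch only; the real plugin would go through
    python-observability-client rather than raw HTTP."""
    return (
        base_url.rstrip("/") + "/api/v1/query_range",
        {"query": promql, "start": start, "end": end, "step": step},
    )
```

The returned pair could be fed to e.g. `requests.get(url, params=params)`; a 7-day window corresponds to `end - start == 604800` seconds.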
>>
>>
>> Cross-project session with Nova team
>> =============================
>>
>> - sean-k-mooney led topics on how to evolve Nova to better assist
>> other services, like Watcher, in taking actions on instances. The team
>> agreed on a proposal to use the existing metadata API to annotate an
>> instance's supported lifecycle operations. This information is very
>> useful for improving Watcher's strategy algorithms. Some examples of
>> instance metadata could be:
>> - lifecycle:cold-migratable=true|false
>> - ha:maintenance-strategy:in_place|power_off|migrate
>> - It was discussed that Nova could infer which operations are
>> valid, based on information like the virt driver, flavor, image
>> properties, etc. This feature was initially named 'instance
>> capabilities' and will require a spec for further discussion.
>> - Another topic of interest, also raised by Sean, was about adding
>> new standard traits to resource providers, like PRESSURE_CPU and
>> PRESSURE_DISK. These traits can be used to weight hosts when placing new
>> VMs. Watcher and the libvirt driver could work on annotating them, but the
>> team generally agreed that the libvirt driver is preferred here.
>> - More info at Nova PTG etherpad [0] and sean's summary blog [1]
>>
>> [0] https://etherpad.opendev.org/p/r.bf5f1185e201e31ed8c3adeb45e3cf6d
>> [1] https://www.seanmooney.info/blog/2025.2-ptg/#watcher-topics
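The metadata annotation discussed in the Nova session could be sketched like this. The inference rules below are invented purely for illustration (the real logic would live in Nova behind the proposed 'instance capabilities' spec); only the `lifecycle:cold-migratable=true|false` key style comes from the session notes:

```python
def lifecycle_metadata(virt_driver, flavor_extra_specs):
    """Infer a server's supported lifecycle operations and express them
    as metadata entries, following the key style discussed at the PTG.
    The rules here are hypothetical examples, not Nova behavior."""
    cold_migratable = True
    # Hypothetical rule: PCI passthrough devices block cold migration.
    if flavor_extra_specs.get("pci_passthrough:alias"):
        cold_migratable = False
    # Hypothetical rule: only some virt drivers support it at all.
    if virt_driver not in ("libvirt", "vmware"):
        cold_migratable = False
    return {"lifecycle:cold-migratable": "true" if cold_migratable else "false"}
```

Watcher could then read these keys from the server's metadata instead of guessing which operations a strategy may schedule.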
>>
>>
>> Please let me know if I missed something.
>> Thanks!
>>
>> --
>> Douglas Viroel - dviroel
>>
[tc][all] OpenStack Technical Committee Weekly Summary and Meeting Agenda (2025.1/R-15)
by Goutham Pacha Ravi
Hello Stackers,
We're 15 weeks away from the 2025.1 ("Epoxy") release date
(2025-04-02) [1]. During the last week, the Technical Committee
continued discussing the state of "unmaintained" code branches across
OpenStack repositories and election officials proposed dates for the
combined TC+PTL elections for the next release cycle [2]. We have
decided to cancel the weekly meetings previously scheduled for 24th
December and 31st December in observance of the holidays.
=== Weekly Meeting ===
The OpenStack Technical Committee met over IRC on 2024-12-10 [3]. We
revisited challenges with using CentOS 10 in the community CI system
due to lack of AVX/AVX2 CPU flags on current cloud providers.
Discussions revolved around finding providers with suitable hardware
or reducing reliance on CentOS testing. We then revisited the state of
"unmaintained" branches across OpenStack repositories. The oldest
"unmaintained" branch tracks the "victoria" release (GA: 2020-10-14).
The consensus amongst TC members was to enforce the policy to default
old branches to EOL unless explicitly opted-in with required
maintenance assurances. To do this, we need tooling to automate the
transition, and a way to track and validate volunteer opt-ins for
unmaintained branches. Volunteers stepped up to transition branches up
until the 2023.1 ("Antelope") release to "end-of-life", beginning with
the Victoria release branch [4]. If you're interested in keeping a
particular branch open, for any repository, please express your
interest directly on the gerrit change. All EOL transition patches
will be merged after 1 month of community review.
The next meeting of the OpenStack Technical Committee is on 2024-12-17
at 1800 UTC. This meeting will take place in the #openstack-tc channel
on OFTC's IRC server. Please find the meeting agenda on the wiki [5].
Please remember that you can propose topics for upcoming TC meetings
by directly editing the wiki. If you're unable to do so, please holler
at me (IRC nick: gouthamr) or in the TC's IRC channel (#openstack-tc).
=== Governance Proposals ===
==== Merged ====
- Retire Murano/Senlin/Sahara OpenStack-Ansible roles |
https://review.opendev.org/c/openstack/governance/+/935677
==== Open for Review ====
- Add ansible-role-httpd repo to OSA-owned projects |
https://review.opendev.org/c/openstack/governance/+/935694
- Rework the initial goal proposal as suggested by people |
https://review.opendev.org/c/openstack/governance/+/931254
- Resolve to adhere to non-biased language |
https://review.opendev.org/c/openstack/governance/+/934907
- Propose to select the eventlet-removal community goal |
https://review.opendev.org/c/openstack/governance/+/934936
- Allow more than 2 weeks for elections |
https://review.opendev.org/c/openstack/governance/+/937741
=== How to Contact the TC ===
You can reach the TC in several ways:
- Email: send an email with the tag [tc] on this mailing list.
- IRC: Ping us using the tc-members keyword on the #openstack-tc IRC
channel on OFTC.
- Weekly Meeting: The Technical Committee meets every week on Tuesdays
at 1800 UTC [5].
Thank you very much for reading!
On behalf of the OpenStack TC,
Goutham Pacha Ravi (gouthamr)
OpenStack TC Chair
[1] 2025.1 "Epoxy" Release Schedule:
https://releases.openstack.org/epoxy/schedule.html
[2] 2025.2/"F" elections proposal:
https://review.opendev.org/c/openstack/election/+/937408
[3] TC Meeting IRC Log 2024-12-10:
https://meetings.opendev.org/meetings/tc/2024/tc.2024-12-10-18.01.log.html
[4] Transition unmaintained/victoria to EOL:
https://review.opendev.org/c/openstack/releases/+/937515
[5] TC Meeting Agenda, 2024-12-17:
https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting
[all][tc][ptg] OpenStack Technical Committee Gazpacho PTG Summary
by Goutham Pacha Ravi
Hello Stackers,
The community had a very productive "design summit" at the Project
Teams Gathering last week. As you're reading the various summaries
posted here, and participating in new initiatives that have rolled out
since, I'd like to add a brief summary of the OpenStack Technical
Committee's meetings [0] at the PTG. We recorded all of these sessions
and have posted the recordings to the TC's YouTube channel [1].
Alongside this summary, I'd recommend reading the Eventlet Removal
Goal updates and discussion from the session that Hervé Beraud
(hberaud) hosted [2]. Remember, Etherpads are true to their name: you
can lose them :D So we made a handy reference to the PTG Etherpads on
the wiki [3].
Community Leaders' Forum
======================
We discussed active Community Goals [4], and took questions regarding
the status from various project maintainers. As it stands, the FIPS
compliance goal may no longer fit the criteria for community goals.
It's been challenging due to the lack of a stable OS to test FIPS
compliance in the CI. The community may need to explore CentOS Stream
alternatives such as Rocky Linux 10 and Alma Linux 10. We need someone
to pick up the work and replace existing test jobs. Secure RBAC has
got an active Goal Champion; however, work has slowed down in several
project teams. We're hoping to make another push through the Gazpacho
cycle.
We agreed that the current cross project liaisons information (on a
wiki) is severely outdated and needs to be migrated to the
openstack/governance repository, a centralized, programmatic location,
to avoid duplication and ensure accuracy.
We also discussed the OpenInfra AI contribution policy. Some project
teams have recently seen an uptick in AI-assisted contributions, but
there has been some confusion, and potential stigmatization of such
contributions. We identified a need for clarity on how to apply the AI
policy, specifically regarding using AI-generated code and verifying
copyrights. We could use some documentation and guidelines for
contributors and reviewers.
Action Items:
- Goutham (gouthamr) will move the FIPS goal from "selected" to
"proposed" and start a separate ML thread. Project teams can remove
any failing FIPS jobs when this change merges.
- Tony Breeds (tonyb) will enhance the project schema in the
governance repository to support adding cross-project liaisons.
- (needs volunteers) The FIPS goal needs rework, especially around
testing, and any objective refinements.
- (needs volunteers) Clarifications and instructions pertaining to AI
Contributions in the Project Team Guide.
Testing and Runtimes
=================
We discussed testing matrix updates for MariaDB/MySQL versions and
Linux distributions. The goal is to drop older, unmaintained versions
and move to modern platforms (like newer Python versions, CentOS
Stream, AlmaLinux, and Rocky Linux) to ensure compatibility and
support. AlmaLinux images are now available, but no projects are
currently testing with them. The Kolla and OpenStack-Ansible project
teams are currently interested. The biggest advantage for project teams
would be that the wide gamut of CI providers is compatible with this
Red Hat-like distribution because it supports both x86-64-v2 and
x86-64-v3 CPU architectures (as opposed to CentOS Stream or Rocky
Linux).
CI to test CentOS Stream with OpenStack, and the team noted a good
meeting between maintainers in both communities at the OpenInfra
Summit in Paris that Amy Marrich (spotz) facilitated.
A precursor to this discussion was the concern that the latest LTS
versions of MariaDB/MySQL have changed the default behavior with
respect to character sets and collations, and several OpenStack
services are non-compliant with these changes. We agreed that
OpenStack services need to specify the character set on tables
explicitly, and so creating a community goal to mandate the use of a
specific character set (and to handle character set migrations) was
seen as a good path forward.
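As a sketch of what "specify the character set explicitly" means in practice, the DDL below pins charset and collation instead of inheriting the server default that changed in recent MariaDB/MySQL LTS releases. The table and column names are placeholders; OpenStack services would express this through SQLAlchemy/alembic table arguments rather than raw DDL:

```python
def create_table_ddl(name, columns, charset="utf8mb4",
                     collation="utf8mb4_general_ci"):
    """Render a CREATE TABLE statement with an explicit character set
    and collation, so the schema no longer depends on the server
    default. Illustrative sketch only."""
    cols = ",\n  ".join(columns)
    return (
        f"CREATE TABLE {name} (\n  {cols}\n) "
        f"DEFAULT CHARACTER SET {charset} COLLATE {collation};"
    )
```

A migration pass under the proposed goal would apply the same explicit settings to existing tables, which is where the "handle character set migrations" work comes in.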
We briefly discussed OpenStack Special Interest Groups. We noted that
we could use fresh updates to the SIGs website, including refreshing
the SIG Chairs of several SIGs. There was a suggestion that, during
this exercise, we could suggest merging some SIGs, such as the
Scientific SIG and other operationally related SIGs, under a more
general OpenStack operators umbrella.
Action Items:
- Dmitriy Rabotyagov (noonedeadpunk) will propose a community goal for
project teams to support character set changes in newer LTS releases
of MariaDB/MySQL.
- tonyb will refresh the data for all the SIGs.
- gouthamr will merge the governance and governance-sigs repositories
for ease of management.
University Partnerships and Collaborations
==================================
Kendall Nelson (diablo_rojo) would like us to maintain an ongoing list
of projects and mentors for OpenStack's storied university interns.
This would alleviate bottlenecks and increase student engagement in
OpenStack. There's now an Etherpad for this purpose [5]. If you'd like
to mentor, please do sign up. We also entertained the idea of
combining efforts with the First Contact SIG, as it has not been
active in a while, to create a running list of mentors and
participants for various onboarding events and activities throughout
the community.
Action Items:
- Community members sign up and propose projects [5]
- Revive First Contact SIG by collaborating with the University
Partnership Projects.
Post-Quantum Cryptography
=======================
We discussed the potential breaking of traditional asymmetric
cryptography by quantum computers around 2030. JP Jung (JP) joined us
to detail that there are 17 cryptographic modules in OpenStack, and
the goal is to understand the problem in each project, catalog the
cryptographic algorithms used, and make the entire chain well-known to
achieve quantum safety. A significant concern was that OpenStack has
security threats today, and the energy should be directed towards
securing it for current use cases as well, not just focusing on
potential future threats like quantum computing. It was noted that the
Python ecosystem has minimal support for post-quantum cryptography,
and Python's cryptography project has not committed to a timeline for
adding support, so building a foundation with cryptography or
alternative libraries might be necessary before focusing on OpenStack.
JP proposed a document for the community to review [6]. Much of it
probably has to be less verbose and focus on specific gaps/changes.
JP's willing to be a champion of the effort to secure OpenStack in a
post-quantum world. He'd like project teams to audit the use of
encryption algorithms, key exchange libraries/mechanisms in their
deliverables. The Security SIG is a good venue to continue this
discussion, and, if necessary, a sub-team or a separate pop-up team
can be formed as well.
Action Items:
- JP will fix up the current analysis [6] to make it specific, and
highlight actionable problems.
OpenStack Universe a.k.a Affiliated Projects
==================================
Artem Goncharov (gtema) wanted to address the issue of signaling to
users that certain tools work well with OpenStack, even if they are
not official OpenStack projects. The idea is to create a connection
between OpenStack and these affiliated projects, giving them
recognition and a voice, and potentially providing some protection
from competing projects. Thierry Carrez (ttx) suggested that the
OpenInfra Universe [7] is an existing platform that lists projects in
a similar category, with criteria including OSI-recognized licenses
and integration with Open Infra projects. We brainstormed how to
evolve the current project intake form to better represent affiliated
projects, with a potential meeting to include OpenInfra Marketing
folks and explore options.
Action Items:
- ttx will set up a meeting with interested parties to improve the
OpenInfra Universe, and will share updates in a future OpenStack TC
meeting.
Bridging the gap between community and contributing orgs
=============================================
Ildiko Vancsa (ildikov), Jeremy Stanley (fungi) and Clark Boylan
(clarkb) provided an update on the effort to bridge the gap between
the community and contributing organizations. There are two surveys
that are currently open to contributors [8] and maintainers [9]. These
surveys focus on the Flamingo release cycle and will end on November
16th, 2025. (Please take them!) Once these surveys end, we're
expecting them to inform us of where things are going well and where
they are not, and to measure the effectiveness of changes we've slowly
been making across the community, with the ultimate goal of improving
the contributor experience.
There is a perception that getting a feature into OpenStack requires a
personal relationship with a maintainer, which can be a barrier to
entry for new contributors. The complexity of OpenStack's CI and
software can also be a significant technical element that makes it
difficult for external contributors to get started. Collectively,
we're having trouble finding new core maintainers and building
knowledge in parts of our code base to sustain them. Extending trust
and lowering barriers to entry can help address these issues, and
projects should consider reevaluating their processes to make it
easier for new contributors to get involved.
It was also highlighted that #openstack and #openstack-dev IRC
channels on OFTC have become less active over time. We explored the
idea of revitalizing these channels as preferred venues for
cross-project discussions, and having maintainers parked there to
engage with folks that aren't familiar with project-specific channels
or have cross-project concerns.
Action Items:
- Please take the contributor [8] and maintainer [9] surveys for the
2025.2 "Flamingo" release.
- (ildikov/fungi) Analyze survey results when surveys close and meet
with project teams and the OpenStack TC to discuss them.
Project Cascade
=============
Professor Corey Leong (profcorey) introduced the Cascade project, an
open-source unified communications project. He asked us about the
steps to make it an official OpenStack project, including potential
voting and governance requirements. The project aims to integrate
unified communications services, such as voice, phone, and video using
open-source projects, including OpenStack components. It'd allow
OpenStack users to download and install the service on their machines.
Participants drew a similarity to Trove in that Cascade would provide
a managed service to OpenStack users akin to Trove providing
database-as-a-service. We opined that the project is still in the
early stages, and the team is researching how to integrate with
OpenStack components, including identifying dependencies and
interactions between services. At this stage, all of the community's
infrastructure (hosting on OpenDev, publishing on wiki.openstack.org,
official IRC channels on OFTC, Meetpad, Gerrit, and Zuul) is
available to Project Cascade's maintainers. We tried to understand the
benefits of switching to the OpenStack namespace, i.e., bringing the
project under OpenStack governance and what it would provide that is
not currently available. We hope to continue this discussion in a
future TC meeting or on the openstack-discuss mailing list.
Action Items:
- tonyb is working on fixing OpenID auth issues with
wiki.openstack.org and will coordinate with profcorey on problems with
other maintainer infrastructure.
Now that we have that out of the way, we can focus on some other
important things [10]. Again, thank you very much for participating
and for the great discussions over the week!
Sincerely,
Goutham Pacha Ravi (gouthamr)
Chair, OpenStack Technical Committee
[0] OpenStack TC PTG Etherpad:
https://etherpad.opendev.org/p/r.5dfdc57d7bc31731ccf9d51cb80de6c5
[1] OpenStack TC Gazpacho PTG playlist:
https://www.youtube.com/playlist?list=PLhwOhbQKWT7WkK5VGgipjt4XQQlgLLci9
[2] Hervé's summary of eventlet-removal:
https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack…
[3] "Gazpacho" Oct 2025 PTG Etherpads:
https://wiki.openstack.org/wiki/PTG/Oct2025PTG/Etherpads
[4] OpenStack Wide Goals: https://governance.openstack.org/tc/goals/
[5] University Mentors Sign Up:
https://etherpad.opendev.org/p/UPP-Projects%26Mentors
[6] Post Quantum OpenStack:
https://wiki.openstack.org/wiki/Post_quantum_openstack
[7] OpenInfra Universe: https://openinfra.org/universe
[8] Flamingo Contributor Survey:
https://openinfrafoundation.formstack.com/forms/openstack_contributor_exper…
[9] Flamingo Maintainer Survey:
https://openinfrafoundation.formstack.com/forms/openstack_maintainer_experi…
[10] https://cooking.nytimes.com/recipes/1017577-best-gazpacho