openstack-discuss search results for query "#eventlet-removal"
openstack-discuss@lists.openstack.org - 149 messages
Re: [eventlet-removal] When to drop eventlet support
by Sean Mooney
On 16/06/2025 13:27, Dmitriy Rabotyagov wrote:
>
>
>
> saying it's FUD is not helpful.
>
> we got a direct ask from operators and some cores to not do a hard
> switch
> over.
>
> and while i wanted to only support one model for each binary at a
> time,
> we were specifically asked to make it configurable.
>
> > In the latter case, your only available action is to help fix
> > bugs. It
> > is not up to the operators to second-guess what may or may not
> > work.
>
> correct, we are not planning to document how to change mode; we were
> planning to only use this configuration in ci, and operators would be
>
>
> Well, we'd need to have that communicated so that deployment toolings
> could adapt their setup to the changes, as, for instance, in OSA the
> number of eventlet workers is calculated based on the system facts, so
> we'd need to change the logic and also suggest how users should treat
> this new logic for their systems.
why is OSA doing that at all today?
we generally don't recommend changing those values from the default
unless you really know what you're doing.
i don't think other installers do that.
tripleo, kolla-ansible and our new golang based installer do not, nor
does devstack, so it's surprising to me that OSA would change such low
level values
by default.
we will document any new config options we add, and we are documenting
how to tune the new options for thread pools, but we do not expect
installation
tools to modify them by default. we are explicitly not making the
options based on the amount of resources on the host, i.e. dynamically
calculated based
on the number of CPU cores.
for example, we are explicitly setting the number of scatter_gather
threads in the dedicated thread pool to 5.
why? it's a nice small number that will work for most people out of the box.
can you adjust it? yes, but it scales with the number of nova cells
you have, and 99% won't have more than 5 cells.
using information about the host where the API is deployed to infer the
value of that would be incorrect.
you can really only make an informed decision about how to tune that
based on monitoring the usage of the pool.
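To picture what such a small fixed pool looks like, here is an illustrative sketch only; the function name and shape are my own stand-ins, not nova's actual scatter_gather implementation:

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative sketch, not nova's code: fan a query out to each cell on a
# small fixed-size pool. The pool is sized 5 for the common case of five
# or fewer cells, not derived from the host's CPU count.
def scatter_gather(cells, query, pool_size=5):
    with ThreadPoolExecutor(max_workers=pool_size) as pool:
        futures = {cell: pool.submit(query, cell) for cell in cells}
        return {cell: fut.result() for cell, fut in futures.items()}

results = scatter_gather(["cell1", "cell2"], lambda c: f"result-from-{c}")
print(results)
```

With two cells the five-worker pool is never saturated, which is the point of a small static default.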
that's how we expect most of the other tuning options to go as well.
our defaults in nova tend to be higher than you would actually need in a
real environment, so while it may make sense to reduce
them, we try to make sure they work out of the box for most people.
gibi is building up
https://review.opendev.org/c/openstack/nova/+/949364/13/doc/source/admin/co…
as part of nova's move to encode this, but our goal is that deployment
tools should not need to be modified to tune these
values by default.
>
> So it will be kinda documented in a way after all.
>
>
> told, for a given release, to deploy this way.
>
> this is an internal implementation detail; however, we are not prepared
> to deprecate using eventlet until we are convinced
>
> that we can run properly without it.
>
> > For beginners, this would be a horrible nightmare if default
> options
> > simply wouldn't work. We *must* ship OpenStack working by default.
> no one is suggesting we do otherwise.
> >
> > Cheers,
> >
> > Thomas Goirand (zigo)
> >
>
1 month, 3 weeks
Re: [eventlet-removal] When to drop eventlet support
by Sean Mooney
On 16/06/2025 14:24, Dmitriy Rabotyagov wrote:
> In case you try to use a 32gb box with 16 cores as a controller for
> OpenStack - it will blow off with default amount of workers for wsgi
> and /or eventlet apps.
i think you are conflating workers and eventlet tuning, which are two
very different things.
the default for nova api depends on how you deploy it, but normally you
start with 1-2 worker processes for the api.
we do seem to be defaulting to 1 worker process per core for conductor and
scheduler, which likely should be set to 1 as well
https://github.com/openstack-k8s-operators/nova-operator/blob/main/template…
https://github.com/openstack/kolla-ansible/blob/master/ansible/roles/nova/t…
https://github.com/openstack/kolla-ansible/blob/master/ansible/roles/nova/t…
those have nothing to do with eventlet; however, the only eventlet
specific tunables nova has are the following
https://docs.openstack.org/nova/latest/configuration/config.html#DEFAULT.de…
https://docs.openstack.org/nova/latest/configuration/config.html#DEFAULT.sy…
https://docs.openstack.org/nova/latest/configuration/config.html#DEFAULT.ex…
these are what i and gibi were referring to when we said there will be
new tuning options for threaded mode.
those existing greenthread pools will be replaced with new executors
that will need to be configured differently.
the worker options are not being removed or changed as part of eventlet
removal.
although they probably should be updated to default to 1 instead of
$(nproc) and then be overridden by
the deployer based on their own knowledge of the available resources.
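For context, these worker options live in nova.conf; a deployer pinning them explicitly rather than inheriting a per-core default might set something like the following (the option names are nova's existing worker options; the values are placeholders, not recommendations):

```ini
[DEFAULT]
# fixed worker counts chosen by the deployer, not derived from core count
osapi_compute_workers = 2
metadata_workers = 2

[conductor]
workers = 2

[scheduler]
workers = 2
```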
>
> While you can argue this should not be used as production setup, this
> can be totally valid for sandboxes and we wanna provide consistent and
> reliable behavior for users.
>
> But my argument was not in if/how we want to fine-tune deployments,
> but also understand and provide means to define what's needed as well
> as potential ability to revert in worst case scenario as a temporary
> workaround.
> So still some variables and logic would be introduced from what I
> understand today.
>
>
> On Mon, 16 Jun 2025, 14:43 Sean Mooney, <smooney(a)redhat.com> wrote:
1 month, 3 weeks
Re: Eventlet and debugging
by smooney@redhat.com
On Thu, 2024-08-15 at 01:49 +0530, engineer2024 wrote:
> Thanks to both of you for the response. I try to insert a pdb breakpoint at
> some point in the code files and start the service. Then I issue a nova
> command to check how the flow works. From then on I use the pdb commands to
> check the function calls. This to me gives a very convenient way of
> learning the code connections, since I don't have an idea of how these
> functions and modules are related. You people as developers have a design to
> start with and work on from there. So you may know the purpose of each
> function, class, etc.
>
> Since nova api doesn't support pdb because of eventlet design, I learned to
> print those objects and find their information.
>
> But with pdb, it's a smooth flow to inspect the class attributes, their
> functions, arguments, types etc. I can even test those objects the python
> way since pdb supports python prompt as well. Of course I do all this on
> dev systems though...
this is going to be a long response, and it's a bit rambly, but i'm going
to provide
you with the context and pointers to try and figure out how to do this for
yourself.
first thing to note: while it's possible, it's not trivial, and the payoff
is probably not worth it in the long run. we do not have up-to-date docs
for how to do this, because
this is not how people learn, contribute to, or develop nova in general.
nova does have a remote debug facility, separate from the eventlet
backdoor, to enable you to use an ide to debug remotely.
that was added around 2012 and has worked on and off to varying degrees
since then.
the first thing to understand about nova and several other projects is
that they are distributed systems,
i.e. the applications that comprise a given service like nova run as
multiple processes which are interconnected
via a message bus within a service, and rest apis are exposed for
inter-service communication.
when you enable a debugger and stop on a breakpoint, only one of the
processes is stopped, and the others continue
to execute, which means that RPC calls, heartbeats and other asynchronous
tasks can and will time out and fail.
you cannot debug a distributed system in the same way as you would a
simple application where all state
is executed in a single main loop.
the act of observing the internal behavior of the nova api or nova-compute
agent will change its behavior.
nova has support for remote debugging that was added about 12 years ago
https://wiki.openstack.org/wiki/Nova/RemoteDebugging
https://github.com/openstack/nova/blob/master/nova/debugger.py
with that support you can use an ide like pycharm to connect to the remote
debugger and attempt to
debug a process.
i'm currently proposing removing that as part of the removal of eventlet,
partly because it should not be needed
when we don't have eventlet, and mainly because it didn't work very well
when i last used it.
some ides have support for gevent.
gevent is a fork of eventlet, which was itself forked from some of the
libraries in stackless python.
eventlet converts a single-threaded python interpreter into a cooperative,
concurrent, multi-userland-thread process.
what does that mean? each python function is not operating in a separate
os stack frame; it has heap-allocated
stack frames that allow context switches between userland threads any time
a function would do blocking io.
the reason single-step debugging does not work well with pdb is that when
you single step, the eventlet engine
is often going to end up context switching into the middle of another
function.
to work around this, you just set a breakpoint at the line you want to go
to next and use continue instead of single step.
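To picture the cooperative switching described above, here is a small stdlib-only sketch. It uses asyncio, where the switch points are explicit awaits, as a stand-in for eventlet's implicit switches at blocking I/O; this is illustrative only, not eventlet or nova code:

```python
import asyncio

# Eventlet switches greenthreads implicitly at blocking I/O calls; asyncio
# makes the same cooperative switch points explicit with "await". Both run
# many userland tasks on one OS thread, which is why a single-stepping
# debugger can land in the middle of another function after a switch.
order = []

async def task(name):
    order.append(f"{name}-start")
    await asyncio.sleep(0)   # a would-be blocking call: yields to the loop
    order.append(f"{name}-end")

async def main():
    await asyncio.gather(task("a"), task("b"))

asyncio.run(main())
print(order)  # ['a-start', 'b-start', 'a-end', 'b-end']
```

Both tasks interleave at the yield point, exactly the behavior that surprises a single-stepping debugger.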
one of the rules when using eventlet is that if you're going to monkey
patch, you need to monkey patch everything early.
in addition to the remote debugging facilities, we can disable monkey
patching entirely.
you can do that by setting OS_NOVA_DISABLE_EVENTLET_PATCHING=1
https://github.com/openstack/nova/blob/master/nova/monkey_patch.py#L87
this will entirely break nova-compute and several other nova processes, as
they effectively have infinite loops to poll for rpc messages, monitor
hypervisors, etcetera.
if the remote debugger is used, it disables patching the thread lib
https://github.com/openstack/nova/blob/master/nova/monkey_patch.py#L44-L46
that is partly why using the remote debugger is kind of buggy in itself.
the code is not intended to work that way.
this functionality mainly exists to allow debugging tests, not the actual
running code.
if i need to debug tests i normally just comment out
https://github.com/openstack/nova/blob/master/nova/monkey_patch.py#L83-L89
and run the test in my ide's (pycharm, vscode) test debugger.
i normally code in nano/emacs, but i do use ides if i really can't figure
out what's happening;
however, i generally start with just adding log lines.
when i first started working on openstack in 2013 i very much liked using
an ide and debugger, and i would often spend time trying to get nova to
work in a debugger.
over time i generally found that it was not worth the effort.
keeping a devstack deployment working that way was often more work than
doing print statements, or better, writing functional or unit tests to
reproduce the problem.
your previous mails asked about debugging nova's api.
if you want to do that, it's one of the easiest things to debug.
first, do _not_ use the wsgi script;
adding apache or mod_wsgi is just going to make your life harder.
just disable eventlet monkey patching, by commenting it out or using the
env variable,
and run the nova-api console script directly under pdb or your debugger of
choice.
that will allow you to basically single-step debug without any issues, as
it will not be using eventlet
anymore, and it has no infinite loops or periodic tasks, so it mostly just
works when not using eventlet.
that's because the api didn't use eventlet at all until about 2016, and
even today it only really uses it in one place.
in my eventlet removal series i split nova into the parts that use
eventlet directly and the parts that don't in the second patch
https://review.opendev.org/c/openstack/nova/+/904424
anything under nova/cmd/eventlet/ needs eventlet today;
everything under nova/cmd/standalone/ does not.
the approach of doing an api call and following it in a debugger is not, in
my experience, a good way to learn how something
like nova really works. it can help to a limited degree, but you can't
easily debug across multiple processes.
a minimal nova deployment has 4 processes, typically more.
you can try, but each openstack service (nova, neutron, cinder, ...) is
typically composed of several microservices.
some, like keystone and placement, are just rest apis in front of a db and
don't use eventlet at all, so they are trivial to
debug as a result. the rest have non-trivial runtime interactions that
can't be as easily decomposed.
because of that, nova and other projects invest in functional tests to
allow us to emulate that multi-process env
and better reason about the code.
the debugging functionality is documented in the contributor guide
https://github.com/openstack/nova/blob/master/doc/source/contributor/develo…
but we don't really advertise that, because none of the core maintainers
use it any more and it has not been maintained
for the better part of a decade, since we can't use this type of debugging
with our ci system.
if we can't debug why a failure happens from the ci logs, we add more
logs, because that will be useful for future us
and operators in production. so our focus is on logging/monitoring for
production rather than developer ease,
simply because of the large tech debt that eventlet creates in this area.
i know neutron has had some better luck with using debuggers than nova has;
keystone/placement are eventlet-free and are just python web apps/rest
apis, so they should "just work"
in any python debugger.
nova is perhaps the hardest to debug because of the legacy of the project,
so it's not the best place to start looking at
how this works. to be clear, i have run nova-api, nova-conductor,
nova-scheduler and nova-compute all in 4 ide windows
with 4 debuggers at once.
it was probably 3 years of working on openstack before i understood how to
hack that to work, and since i learned
how to properly debug without a debugger (logs, code inspection,
unit/functional testing)
i have never really needed to do that again.
gaining the ability to run openstack in a debugger simply again, in a
maintainable way, is definitely one of the reasons i'm
looking forward to removing eventlet, but it's still a lot of work.
i'm not sure how helpful this was, but this is probably as close to a blog
post as you will get.
>
> On Thu, 15 Aug 2024, 01:04 Jeremy Stanley, <fungi(a)yuggoth.org> wrote:
>
> > On 2024-08-14 22:38:43 +0530 (+0530), engineer2024 wrote:
> > > How many more releases are you planning to include eventlet in
> > > them? Seems they are removing any support for further development
> > > according to their official website. So, is the community thinking
> > > of any alternatives?
> > [...]
> >
> > Just to expand on the first part of Sean's reply, most of what you
> > want to know is probably answered here:
> >
> > https://governance.openstack.org/tc/goals/proposed/remove-eventlet.html
> >
> > --
> > Jeremy Stanley
> >
11 months, 4 weeks
[tc][all] OpenStack Technical Committee Weekly Summary and Meeting Agenda (2024.2/R-12)
by Goutham Pacha Ravi
Hello Stackers,
Time flies, we're over halfway through our 2024.2 (Dalmatian) release
cycle! This second half is usually chock-full of release deadlines,
specification and code freezes, and more release action [1]. The past
week was a short work week for many folks in the Technical Committee
due to local holidays, so this update will be brief. Hervé Beraud
(hberaud) and Mike Bayer (zzzeek) joined us at the TC's video meeting
last week, and they took us through the goal proposal to remove
eventlet from OpenStack [2][3][4][5][6]. The meeting was well
attended, and it involved a walkthrough of SQLAlchemy, where zzzeek
presented how it supports asyncio and a library that allows non-async
definitions to invoke awaitables inside an asyncio application. We
didn't merge any new governance proposals this week, but several are
in review as noted below.
=== Weekly Meeting ===
The weekly meeting was held simultaneously on video and IRC. Please
check out the recording posted to the TC's YouTube channel [7] as well
as the meeting log on eavesdrop [8]. We have new governance proposals
that concern Freezer and Monasca, the two projects that were marked
"inactive" during the last release cycle. We discussed providing the
Skyline team instructions and time to graduate from being an
"emerging" project. The TC reiterated its commitment to ensure that
all decisions are made asynchronously and recorded on Gerrit to
involve community members in different time zones. The rest of the
meeting was focused on the eventlet removal discussion. It was a very
informative presentation that I'd recommend all OpenStack maintainers
review. We took a note to time-box the reviews on the goal proposal.
I'd like to request reviews and any strong objections to be expressed
on the proposal as soon as possible.
The next Technical Committee meeting is today, 2024-07-09 at 1800 UTC!
The meeting will be hosted on OFTC's #openstack-tc channel, and the
agenda has been posted to the Meetings Wiki Page [9]. Please consider
joining us!
=== Governance Proposals ===
Remove Eventlet From Openstack |
https://review.opendev.org/c/openstack/governance/+/902585
Update criteria for the inactive projects to become active again |
https://review.opendev.org/c/openstack/governance/+/921500
Remove Monasca from inactive list |
https://review.opendev.org/c/openstack/governance/+/923466
Transition Watcher project to DPL |
https://review.opendev.org/c/openstack/governance/+/923583
Inactive state extensions: Freezer |
https://review.opendev.org/c/openstack/governance/+/923441
Retire Kuryr-Kubernetes and Kuryr-Tempest-Plugin |
https://review.opendev.org/c/openstack/governance/+/922507
Update to include docs and miscellaneous repos for AC status |
https://review.opendev.org/c/openstack/governance/+/915021
Adding a rule to manage the affiliation diversity requirement in the
TC | https://review.opendev.org/c/openstack/governance/+/922512
=== Upcoming Events ===
2024-08-14: Nominations open for 2025.1 OpenStack PTL+TC elections
2024-08-29: Dalmatian-3 milestone
Thank you very much for reading!
On behalf of the OpenStack TC,
Goutham Pacha Ravi (gouthamr)
OpenStack TC Chair
[1] Release Schedule: https://releases.openstack.org/dalmatian/schedule.html
[2] Remove eventlet from OpenStack:
https://review.opendev.org/c/openstack/governance/+/902585
[3] Awaitlet SQLAlchemy: https://awaitlet.sqlalchemy.org/en/latest/
[4] SQLAlchemy Asyncio Facade for oslo.db:
https://review.opendev.org/c/openstack/oslo.db/+/922976
[5] zzzeek's async demo: https://github.com/zzzeek/async_demo
[6] Presentation Slides:
https://docs.google.com/presentation/d/169Bc_Uhv-L-0HzM6WuE6uBsC-1hIlgCOQ4l…
[7] TC Meeting Recording, 2024-07-02:
https://www.youtube.com/watch?v=KnTDP7Dqit8
[8] TC Meeting IRC Logs, 2024-07-02:
https://meetings.opendev.org/meetings/tc/2024/tc.2024-07-02-18.00.log.html
[9] TC Meeting, 2024-07-09:
https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting
1 year, 1 month
[tc][all] OpenStack Technical Committee Weekly Meeting Agenda (2025.1/R-22)
by Goutham Pacha Ravi
Hello Stackers,
As you're aware, we concluded the virtual Project Teams Gathering
event last week. This week, we’ll start addressing the to-do lists we
created. The Technical Committee met for six hours, in addition to
participating in several discussions with various project teams. I’m
working on sharing a summary of the week’s proceedings with you via
this mailing list. In the meantime, we’ll be catching up on 2024-10-29
at the TC’s weekly IRC meeting in OFTC’s #openstack-tc channel. The
agenda for this meeting is available on the Meeting Wiki [1]. I hope
you’ll be able to join us.
=== Governance Proposals ===
==== Merged ====
- Retire kuryr-kubernetes and kuryr-tempest-plugin |
https://review.opendev.org/c/openstack/governance/+/922507
- Propose a pop-up team for eventlet-removal |
https://review.opendev.org/c/openstack/governance/+/931978
==== Open for Review ====
- Add Cinder Huawei charm |
https://review.opendev.org/c/openstack/governance/+/867588
- Add watcher DPL for Epoxy |
https://review.opendev.org/c/openstack/governance/+/933018
- Add Axel Vanzaghi as PTL for Mistral |
https://review.opendev.org/c/openstack/governance/+/927962
- Propose to select the eventlet-removal community goal |
https://review.opendev.org/c/openstack/governance/+/931254
Thank you very much for reading!
On behalf of the OpenStack TC,
Goutham Pacha Ravi (gouthamr)
OpenStack TC Chair
P.S.: The OpenStack TC PTG YouTube Playlist is shaping up here [2]. Please
feel free to browse through it.
[1] TC Meeting Agenda, 2024-10-29:
https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting
[2] https://youtube.com/playlist?list=PLhwOhbQKWT7XGjIwT0mtPpixpuY-tKYoh&featur…
9 months, 2 weeks
Re: [eventlet-removal] When to drop eventlet support
by Sean Mooney
On 16/06/2025 10:11, Balazs Gibizer wrote:
> On Sat, Jun 14, 2025 at 1:24 AM <thomas(a)goirand.fr> wrote:
>>
>> On Jun 13, 2025 20:52, Jay Faulkner <jay(a)gr-oss.io> wrote:
>>> I'm confused a bit -- the implementation details of our threading modules are not a public API that we owe deprecation periods for. Why are we treating it as such?
>>>
>>> -JayF
>> Right. Plus I don't get why operators get to choose what class of bugs they may experience, and how they will know better than contributors.
just to address one thing.
we don't really intend to expose the configurability to operators.
we are building it in so that we (the core team) can test both versions
and choose when to move each component
to the new mode.
The environment setting could be set by an operator to work around bugs
if/when they happen, but
our intent is we would choose the mode that it should be run in on a per
binary basis, and the env var will just be
for our internal use. having it does provide us an escape hatch, if
there is a high-severity bug, to revert back
to the old mode of operation. we still have the ability to run os-vif in
the cli mode using ovs-vsctl
instead of the ovs python bindings
https://github.com/openstack/os-vif/blob/master/vif_plug_ovs/ovs.py#L72-L82
that was vital when ovs changed their implementation such that a
reconnect would block the nova-compute agent for multiple
seconds. ironically that was also eventlet related, but having the old,
venerable, slow cli-based driver as a fallback mitigated most
of the impact until the ovs c and python bindings could be fixed. that
took the better part of a year to do and have it released/
backported. i'm not saying it will take us the same amount of time if we
have a bug in the threading mode, but it's possible.
we reported the eventlet related concurrency bug on 2021-05-24
https://bugs.launchpad.net/os-vif/+bug/1929446
the fix in ovsdbapp merged on Dec 2, 2021
https://github.com/openstack/ovsdbapp/commit/a2d3ef2a6491eb63b5ee961fc93007…
and we still had backports being merged for this up until 2023-05-22, as
distros back-ported the original ovs change into older releases of ovs.
This is the type of "nasty bugs" gibi was referring to.
i, for one, wanted to only support one mode of operation per service
binary per release, but i do see value, if for no other reason than
debugging, in being able to revert to the old behavior. the fact we had
the vsctl driver made it very clear that this ovs bug was in the ovs
lib or python bindings, as we could revert to the other implementation and
show it only happened in the native code path.
> The new concurrency model in nova (native threading) needs different
> performance tuning than the previous (eventlet). The cost of having
> 1000 eventlets is negligible but having 1000 threads to replace that
> will blow up the memory usage of the service. Operators expressed that
> having such tuning effort happening during upgrade without a temporary
> way back to the old model is scary. And honestly I agree.
>
> Similarly we expect nasty bugs in the new model as it is a significant
> architectural change. So having no way to go back to a known good
> state temporarily while the bug is fixed or worked around is scary.
>
> Third, if we want to keep green CI while we are transforming nova
> services to the new model without keeping a big feature branch then
> we need to be able to land code that passes CI while things are half
> transformed. The only way we can do that is if we support both
> concurrency modes in parallel for a while.
>
> Cheers,
> gibi
>
>> Cheers,
>>
>> Thomas Goirand (zigo)
>>
1 month, 3 weeks
[tc][all] OpenStack Technical Committee Weekly Summary and Meeting Agenda (2025.1/R-18)
by Goutham Pacha Ravi
Hello Stackers,
We're 18 weeks away from the coordinated release for 2025.1 ("Epoxy")
[1]. We're beginning a period of low activity due to year-end holidays
across the world. Last week, the TC didn't merge any new governance
proposals; however, several are currently in progress.
=== Weekly Meeting ===
The TC met in OFTC's #openstack-tc IRC channel on 2024-11-19 [2]. We
discussed a series of global requirements upper constraint bumps that
were committed and the ensuing CI flakiness. We appreciated the effort
to proactively update dependencies to their latest versions earlier in
the release cycle so we can have more time to test and stabilize
OpenStack software. Sometimes, these failures may interrupt other
project priorities. The best way we can help is if project teams
identify a relatively quick-running and stable integration job to run
against changes to the global requirements repository.
The TC and OpenDev administrators worked with Launchpad administrators
to regain control of trackers belonging to the Watcher project team.
It was noted that Launchpad trackers for any project should not be
owned/administered by individuals. Instead, there must be a Launchpad
team driving all trackers associated with an OpenStack initiative, and
"openstack-admins" must own this team. This approach ensures the
sustainability of project teams and protects projects from the impact
of changing individual priorities.
We also briefly discussed the "eventlet-removal" goal proposal. The
goal document is being rewritten [3] to simplify what is being done
and the timelines we're aiming for. I'd love for the community to take
a look at this change and review it.
The next meeting of the OpenStack Technical Committee is today,
2024-11-25, at 1800 UTC in OFTC's #openstack-tc channel. The agenda
for this meeting is in our meeting wiki [4]. I hope you're able to
join us.
=== Governance Proposals ===
==== Open for Review ====
Add Cinder Huawei charm |
https://review.opendev.org/c/openstack/governance/+/867588
Rework the initial goal proposal as suggested by people |
https://review.opendev.org/c/openstack/governance/+/931254
Propose to select the eventlet-removal community goal |
https://review.opendev.org/c/openstack/governance/+/934936
Resolve to adhere to non-biased language |
https://review.opendev.org/c/openstack/governance/+/934907
Add ansible-role-httpd repo to OSA-owned projects |
https://review.opendev.org/c/openstack/governance/+/935694
Retire Murano/Senlin/Sahara OpenStack-Ansible roles |
https://review.opendev.org/c/openstack/governance/+/935677
Add ansible-role-httpd repo to OSA-owned projects |
https://review.opendev.org/c/openstack/governance/+/935694
==== Merged ====
Fix doc job for pillow 11.0.0 |
https://review.opendev.org/c/openstack/governance/+/935967
Add Ubuntu Noble migration goal tracking etherpad |
https://review.opendev.org/c/openstack/governance/+/935461
=== How to Contact the TC ===
You can reach the TC in several ways:
- Email: Send an email with the tag [tc] on this mailing list.
- IRC: Ping us using the 'tc-members' keyword on the #openstack-tc IRC
channel on OFTC.
- Weekly Meetings: Join us at our weekly meeting. The Technical
Committee meets every week on Tuesdays at 1800 UTC [4].
Thank you very much for reading!
On behalf of the OpenStack TC,
Goutham Pacha Ravi (gouthamr)
OpenStack TC Chair
[1] 2025.1 "Epoxy" Release Schedule:
https://releases.openstack.org/epoxy/schedule.html
[2] TC Meeting IRC Log 2024-11-19:
https://meetings.opendev.org/meetings/tc/2024/tc.2024-11-19-18.01.log.html
[3] Eventlet removal Community goal:
https://review.opendev.org/c/openstack/governance/+/931254
[4] TC Meeting Agenda, 2024-11-26:
https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee
8 months, 2 weeks
[tc][ptg] 2025.1 "Epoxy" Technical Committee PTG Summary
by Goutham Pacha Ravi
Hello Stackers,
The following is a summary of the Technical Committee's virtual PTG
discussions during the last week. For longer notes / recordings,
please consult the Etherpad [1] where we kept minutes, or recordings
within a YouTube playlist [2].
=== Reviving the Inclusive language discussion ===
The TC brainstormed with the Diversity and Inclusion Working Group [3]
about the renaming of the "master" development branch in repositories.
We evaluated whether a TC resolution is necessary to provide a clear,
standardized approach. Much of OpenDev's tooling allows projects to
customize their default branch names, with the understanding that this
can sometimes confuse tools and end-users. Some
open-source communities have attempted similar renaming efforts but
faced pushback or incomplete adoption, as highlighted by Kubernetes,
Ceph, and GNOME. Additionally, there remain concerns over the
feasibility, workload, and support available in the OpenStack
community to undertake a coordinated renaming effort. The discussion
did not yield a consensus, and we will not be pursuing an
OpenStack-wide effort to rename all the development branches in OpenStack.
This doesn't prevent a future change of course, however. If we do
undertake this in the future, we will begin by assessing the
community's bandwidth for the effort and we'll identify contributors
who will lead and assist before proceeding.
The conversation also considered the broader scope of inclusive
language. The D&I WG does not strongly recommend renaming the "master"
branch of git repositories, since in this context the term is not
linked to problematic usages such as slavery. More concerning
terminology, like "slave," should be prioritized:
https://wiki.openstack.org/wiki/Diversity/Inclusivity
Action Items:
- Evaluate and Draft a TC Resolution (gouthamr): TC resolution will
propose formalizing OpenStack's stance on inclusive language, which
would include specific recommendations about branch naming.
- Encourage a timeline to deal with non-Inclusive language in code and
documentation (tc)
=== Community Leaders meeting ===
Subtopic 1: Translating OpenStack documentation and i18n SIG's challenges
The OpenStack i18n team discussed challenges and potential
improvements for translating OpenStack documentation, particularly due
to a slow shift from Zanata to Weblate. The translation team currently
has limited activity and faces infrastructure issues, such as Weblate
integration problems after a recent cloud upgrade. Key goals are to
engage Asian communities (e.g., Korea, Vietnam, Indonesia) to support
documentation translations, focusing initially on a subset of projects
like Nova, Cinder, Neutron, and Glance.
To streamline the process, the i18n team proposed starting with
machine translations, which would be refined by regional language
moderators. They also discussed AI-generated translations and
compliance with OpenInfra's AI policy, requiring clear indication of
AI involvement in commits. The i18n team needs infrastructure support,
including Zuul automation for translation updates, and guidance for
project core teams on reviewing translations. The discussion continued
in the i18n SIG meetings during the rest of the week:
https://etherpad.opendev.org/p/oct2024-ptg-i18n
Action Items
- I18n SIG will work on weblate migration with priority, and that will
need resolving infrastructure issues as well as figuring out Zuul
integration.
- Sylvain Bauza (bauzas) takes on the role of the i18n TC liaison, and
will highlight the SIG's issues to the TC
- Begin translation efforts for a subset of projects (starting with
Nova) by translating content in the “docs” folders.
- Apply machine translation as a first step, with periodic updates
pushed to each project until Zuul automation is set up.
- Clearly mark AI-assisted translations in commit messages and add a
notice on documentation pages to indicate machine translation usage,
in line with OpenInfra’s AI policy.
- Define and document processes for core teams to review translated
content, considering creating specialized review teams for translation
directories.
Subtopic 2: Vulnerability Management and your project
We discussed the processes within the OpenStack Vulnerability
Management Team (VMT) and the need for consistency/coherence in the
face of evolving regulation across the world, besides our interest in
maintaining good security hygiene. A key concern expressed was the
need to keep core security contacts up to date, and the potential for
automatic inclusion of all OpenStack projects under the VMT's
purview. Adding all project teams and deliverables under the VMT would
make our security bug management process consistent, formally allow
the VMT to help smaller project teams maintain good security hygiene,
and let cross-project bugs be resolved quickly while still maintaining
the tenets of embargoed disclosures. We also discussed the possibility of a Common
Vulnerabilities and Exposures (CVE) authority within OpenStack. Jeremy
Stanley (fungi) emphasised that as a community we must be prioritizing
alignment and avoiding premature optimizations while regulations are
still being drafted. There were suggestions that the VMT wanted the TC
to act as a path of escalation to projects where the core security
contacts were unresponsive. If we include all TC governed projects
under the VMT, there were also concerns about the VMT's workload
increasing. fungi clarified factors that drain the VMT's time (mainly
chasing project security contacts to triage issues in a timely
manner), and didn't think that adding all OpenStack projects would be
a burden on the VMT. Having consistent processes may in fact reduce
the VMT's workload.
There is a strong desire however to grow the VMT with dedicated
volunteers. So if you're reading this and are interested, please hop
into OFTC's #openstack-security channel and join us.
Action Items:
- Project PTLs (or DPLs) must update their core-sec teams on
Launchpad/Storyboard, removing inactive members where necessary (e.g.,
Nova to review its `nova-coresec` members)
- PTLs of each project must confirm or designate a security liaison
for each project, or confirm that the PTL will take on this role.
- gouthamr will propose a TC resolution to have all governed projects
automatically included in VMT oversight
- the security-sig will continue engaging with the Open Regulatory
Compliance Working Group to monitor and influence relevant regulatory
developments (without immediate process changes on our end at this
time)
- We will also create a path of escalation for projects not actively
following security processes, which may include notifying TC without
detailed specifics or, as a last resort, considering project removal
from OpenStack.
- The VMT and OpenStack Security Team will evaluate the need and
feasibility of a common CVE authority for OpenStack and assess
community feedback on its potential value.
Subtopic 3: Remove Postgres CI jobs from the projects
We discussed whether to discontinue Postgres CI jobs across projects,
with Ironic and Neutron teams already planning to do so. This stems
from a lack of testing coverage and recurring errors in some projects,
particularly Neutron. The group raised concerns about the need for a
consistent, community-wide approach to database support, especially
given the challenges of switching database backends. Current user
survey data indicate that about 5% of deployments use Postgres, but
maintaining its support is not feasible without dedicated resources.
Action Items
- Update OpenStack documentation to clearly outline the level of
support and testing for each database backend, helping users
understand support status before deploying.
- Projects (like Ironic, Neutron) must drop their postgres CI jobs
early in this cycle so we can prevent unexpected CI failures in
interconnected projects
- Reviving postgres support will need volunteers. The TC last
published a resolution in 2017 [4] explaining the state of support for
PostgreSQL. There must be a follow-up to state that non-MySQL backends
are not tested within the community.
Subtopic 4: Migrate CI testing from Ubuntu 22.04 to Ubuntu 24.04 (this
implies python3.12 support/usage)
Ghanshyam Mann (gmann) is championing the goal to ensure that the
advertised runtimes for 2025.1 are enforced in the community CI jobs.
Base CI job changes are already underway.
Action Items:
- gmann will share the goal progress with the community as this progresses
- Project teams must triage integration job failures with
Python 3.12/Ubuntu Noble and prioritize fixes. They may temporarily pin
selected test jobs to Python 3.9/Ubuntu Jammy; however, we expect these
issues to be resolved early within this release cycle.
=== Eventlet Removal ===
This was a "kick-off" cross project discussion on the proposed goal to
drop the usage of the "eventlet" library across OpenStack repositories
[5]. The migration plan includes identifying suitable alternatives
(e.g., native threads, asyncio, awaitlet) to meet different services'
needs while addressing challenges around async compatibility with
existing infrastructure (e.g., WSGI). Hervé Beraud (hberaud) led this
discussion, and we discussed the pop-up team that has been formed to
guide this transition. I'd refer you to the write-up that hberaud
shared earlier on the openstack-discuss ML for a more in-depth summary
of further discussions [6]
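As a concrete illustration of the kind of change this goal implies, one common pattern is replacing an eventlet green pool with a native thread pool from the standard library. The worker function, pool size, and before/after shapes below are illustrative sketches, not code from any OpenStack service:

```python
# Sketch of migrating a fan-out call pattern from eventlet greenthreads
# to native threads. fetch_status and the pool size are hypothetical.
from concurrent.futures import ThreadPoolExecutor

def fetch_status(cell):
    # Stand-in for a blocking call (RPC, DB query, HTTP request).
    return f"cell-{cell}: ok"

# Before (eventlet), roughly:
#   pool = eventlet.GreenPool(5)
#   results = list(pool.imap(fetch_status, range(3)))
# After (native threads):
with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(fetch_status, range(3)))

print(results)  # ['cell-0: ok', 'cell-1: ok', 'cell-2: ok']
```

Unlike greenthreads, these workers run on real OS threads, so blocking C calls no longer need monkey-patching, which is one of the motivations behind the goal.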
Action Items:
- The TC and the community must review the proposal [7] for outcomes,
timeline and select the "Eventlet Removal" goal as a cross-community
goal
- hberaud will set up working groups focused on specific migration
needs: background tasks, networking/HTTP, and database interactions.
Details are outlined on the Eventlet Removal Wiki [5]
- Mike Bayer (zzzeek) and hberaud will work with project teams to
assess where asyncio may be beneficial or problematic, and provide
guidelines on selecting alternative solutions
- We need volunteers from project teams to join the pop-up team and
evaluate eventlet usages in the projects they maintain. Please head to
OFTC's #openstack-eventlet-removal channel to participate in the
discussion
=== Bridging the gap between community and contributing organizations ===
Ildiko Vancsa (ildikov) and Jeremy Stanley (fungi) brought the fourth
of a series of community discussions to the TC and the community. We
brainstormed ways to increase contributions from OpenInfra member
organizations, improve contributor experience, and address blockers to
community growth and engagement. The goals outlined are to bridge gaps
between casual contributors and experienced community members,
optimize review processes, and enhance visibility of contribution
pathways and expectations. The challenges discussed included
visibility of documentation, contributor engagement with maintainers,
review delays, and difficulty navigating IRC. The brainstorming that
followed proposed some ideas to try, such as contributor spotlights
for maintainers (e.g., Superuser articles or video interviews). Please
review the "bridging the gap" PTG etherpad for further discussions
around the topic [8]
Action Items:
- Project PTLs must ensure we have clear documentation on patch review
and escalation paths. This includes clearly highlighting the core
maintainers, liaisons and PTL and ways to contact them. Teams must
also list any code review rules that projects have evolved in the
documentation.
- Project maintainers must consider “review days” to prioritize and
expedite small patches.
- Develop a Matrix/Element connection guide for OpenStack rooms as an
alternative to IRC, promoting the guide across "so you want to
contribute" pages.
- Formalize guidelines around code review pitfalls
=== OceanBase Database ===
Members of the OceanBase project community had a cross-project
discussion with OpenStack maintainers through this session. OceanBase
was stated to be a compatible alternative to MySQL within OpenStack,
without necessitating code changes in OpenStack services. The
OceanBase team aims to support broader community involvement by
contributing integration code for devstack and deployment tools like
Kolla, OpenStack-Helm, and OpenStack-Ansible (OSA). The team learned
about resource constraints in the community CI jobs, and we
brainstormed approaches to support integration testing.
Action Items:
- OceanBase contributors will implement OceanBase support in devstack,
potentially as a plugin if setup is complex.
- The team will explore an OpenStack CI job by working with the
OpenStack QA team, targeting Tempest testing
- The team will initiate feature requests within deployment projects
(e.g., Kolla, OpenStack-Helm, OSA), and potentially collaborate with
OpenStack's Large Scale SIG
=== Updating the OpenStack tenant policy on CI ===
As a community, we have been discouraging indiscriminate "recheck"s on
failed CI jobs; however, our recent experiences landing interdependent
CVE fixes within several repositories necessitated fighting repeated
unrelated failures. The discussion was around some desire to avoid
this in the future and several approaches were brainstormed. Clark
Boylan (clarkb) helped the TC understand OpenStack's policy regarding
"clean check" practices in the Zuul CI [9]. This policy is intended to
reduce repeated gate failures and encourage debugging. It would be an
anti-pattern to explore a way to bypass this. The idea of dropping
this policy was discussed and the attendees overwhelmingly agreed that
there were significant downsides to doing this, including the
infeasibility of re-orienting each project team towards how to
responsibly craft their CI to avoid pitfalls. There was a suggestion
to allocate a limited “infra budget” per project, allowing them to
assess re-check frequency and test requirements based on specific
project stability and job complexity. There was a tangential
discussion on why Zuul's philosophy of testing does not allow
re-running single failed jobs, and several contributors shared their
experience with CI elsewhere where this convenience came at the
expense of introducing serious bugs. Currently, the community can
reach out in OFTC's #opendev channel to get help merging a change
despite Zuul's disagreements, de-queing a change, or re-enqueuing it
on Zuul. These bypass mechanisms were deemed sufficient for the
problem at hand.
Action Items:
- The need to recheck stems from unstable test jobs. Teams must try to
move flaky tests to experimental or periodic test queues and attempt
to isolate and fix issues with them.
- If you spot common failure patterns (e.g., timeouts, mirror issues,
OOM errors), please raise awareness to the project teams or to the
Infra administrators via the openstack-discuss mailing list or the
#opendev channel on OFTC
- Zuul CI could use documentation that clarifies why re-running
individual jobs in a failed buildset is not a supported action to
manage contributor expectations.
=== Testing and shipping non-OSI compatible software within OpenStack
binary artifacts ===
This discussion was an opportunity to clarify the community's stance
on adding, testing, maintaining, supporting, documenting and shipping
OpenStack software that pertains to components that are not licensed
with an OSI-approved license [10]. A specific recent example was
Masakari's Consul integration. Consul ships with a non-OSI-compliant
BSL 1.1 license, and is as such incompatible with OpenStack's license
policy. However, the general rule has been that the community would
strive to support and test only open source solutions. While there may
be integrations to proprietary or non-OSI licensed components, they
will not be tested with Community Infrastructure, and in each case, a
free and open source alternative must be available. In addition, if an
OpenStack service supports a feature, there must be an OSI-compatible
implementation that the community will support and test with CI. The
feature may have any number of implementations based on non-OSI
components, but these must be tested by vendors or users of such
integrations.
Some projects (like Neutron and Cinder) have documented guidelines for
this, other projects (like Nova and Manila) enforce this as an
unwritten rule. This creates ambiguity around licensing and
integration expectations. At the tail end, there was a suggestion to
explore the feasibility of hosting an OpenStack container registry
(e.g., registry.openstack.org). This could help us better manage
binary artifacts and maintain container consistency across services,
though permanent registry storage remains a concern. The discussion
around this item was tabled.
Action Items:
- The TC will formally define and document a policy requiring
open-source alternatives for non-OSI software integrations. We must
ensure that it is applied consistently across all OpenStack projects.
- In continuation of this topic, we will review and clarify how to
handle existing integrations with dependencies that have changed
licenses (e.g., Redis). Determine whether to seek alternatives,
discontinue first-party CI testing, or require third-party CI for
these integrations.
That's a wrap! It was great fun seeing you all at the PTG. I look
forward to working on these AIs with you!
Thanks,
On behalf of the OpenStack TC,
Goutham Pacha Ravi (gouthamr)
OpenStack TC Chair
[1] PTG Etherpad for TC discussions:
https://etherpad.opendev.org/p/r.0f25532f564fb786a89cfe1de39b004b
[2] Recordings of the TC PTG:
https://www.youtube.com/playlist?list=PLhwOhbQKWT7XGjIwT0mtPpixpuY-tKYoh
[3] PTG Etherpad of the D&I working group:
https://etherpad.opendev.org/p/r.4f257833bbd7e0284c26f34e2bbc87c6
[4] TC Resolution on the state of testing of database systems:
https://governance.openstack.org/tc/resolutions/20170613-postgresql-status.…
[5] Eventlet removal wiki: https://wiki.openstack.org/wiki/Eventlet-removal
[6] Eventlet removal PTG Summary:
https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack…
[7] Eventlet removal Goal Proposal:
https://review.opendev.org/c/openstack/governance/+/931254
[8] "Bridging the Gap" PTG Etherpad:
https://etherpad.opendev.org/p/r.522712847f6a5a1d7bd2031566cde4e9
[9] "Clean Check" CI policy:
https://docs.openstack.org/contributors/common/zuul-status.html#why-do-chan…
[10] OSI Approved Licenses: https://opensource.org/licenses
9 months, 1 week
Re: [ops][api][all] What are we going to do about pastedeploy usage?
by Takashi Kajinami
On 7/17/25 11:24 PM, Sean Mooney wrote:
>
> On 17/07/2025 13:37, Stephen Finucane wrote:
>> oslo.service is going through a lot of churn right now as part of the
>> eventlet migration. We recently noticed that some unrelated WSGI
>> routing code had been inadvertently deprecated, and moved to
>> undeprecate this [1].
> So honestly, while I think we should likely move off that stack to FastAPI or
> Flask etc., I'm not sure we should do this as a community until the eventlet
> removal goal is completed.
>
> the main thing that paste gave that other frameworks didn't at the time was
> the ability to configure the middleware pipeline via a simple declarative file.
>
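For readers unfamiliar with that declarative style, a paste.ini pipeline looks roughly like this; the filter factories and app name below are illustrative, not taken from any particular project:

```ini
[pipeline:main]
# Requests pass through each filter, in order, before reaching the app.
pipeline = http_proxy_to_wsgi cors healthcheck authtoken apiv2

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory

[filter:cors]
paste.filter_factory = oslo_middleware.cors:CORS.factory

[app:apiv2]
# Hypothetical application factory for this sketch.
paste.app_factory = example.api:app_factory
```

Operators could reorder, add, or remove filters by editing this file, without touching code.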
>> In the long-term, however, this code doesn't
>> really belong in oslo.service and should be moved elsewhere. I took a
>> stab at bootstrapping an oslo.wsgi package, and after some hacking I
>> arrived at a dependencies list containing the following:
>>
>> * Paste [2]
>> * PasteDeploy [3]
>> * Routes [4]
>> * WebOb [5]
> Long term, I think we could consider all of those to be tech debt that we
> want to move off of.
>
> We discussed this briefly in the context of the eventlet removal 18 months
> ago, as you noted, but that is likely not the tech stack we want an
> oslo.wsgi to use long term.
>
>>
>> As some of you might know, all of these packages are minimally
>> maintained bordering on unmaintained. I'm not entirely sure we want to
>> want to bootstrap a new project using these libraries as opposed to
>> migrating off of them.
> exactly this.
>> My question is this: how important is the
>> pastedeploy framework and the paste.ini files nowadays, particularly
>> for deployers?
> I would be very interested to see ops feedback here too, as that (ignoring
> time for a moment) is the only gap I personally see with moving to a
> better-maintained project like FastAPI or Flask, for Nova at least. Flask is
> already used by Keystone and, I think, Neutron.
>
> If I had time, I would also like to move Watcher off its current
> PasteDeploy + pecan + WebOb + WSME stack to FastAPI or Flask.
>
> Pecan had a long period of inactivity; it has picked back up in 2025, so it
> is not in a bad place, but WebOb, WSME, and PasteDeploy are definitely tech debt.
>
>> While their use is relatively consistent across projects
>> (see below), not every service uses them and for those that don't, I
>> personally haven't heard complaints about their absence. Rather than
>> migrating the pastedeploy stuff elsewhere, would it make more sense for
>> affected projects to simply define a static set of middleware (with
>> some config knobs for those we want to be able to enable/disable) and
>> call it a day?
> +1. My inclination was to just have a comma-separated list of middleware to
> run, in order, as part of the standard service config, defaulting to what is
> enabled in paste today.
>
> If needed, provide a way to load extra middleware using
> https://opendev.org/openstack/stevedore, the same way we do plugins in the
> rest of OpenStack.
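That suggestion can be sketched roughly as follows. Everything here is hypothetical (the option value, the REGISTRY dict, and the middleware classes); a real service would resolve the names through stevedore entry points rather than a hard-coded dict:

```python
# Sketch of a config-driven WSGI middleware pipeline. All names are
# invented for illustration; REGISTRY stands in for a stevedore
# NamedExtensionManager lookup.

class RequestId:
    def __init__(self, app):
        self.app = app
    def __call__(self, environ, start_response):
        environ.setdefault("trace", []).append("request_id")
        return self.app(environ, start_response)

class Healthcheck:
    def __init__(self, app):
        self.app = app
    def __call__(self, environ, start_response):
        environ.setdefault("trace", []).append("healthcheck")
        return self.app(environ, start_response)

REGISTRY = {"request_id": RequestId, "healthcheck": Healthcheck}

def build_pipeline(app, middleware_option):
    """Wrap ``app`` with the middlewares named in a comma-separated option.

    Wrapping happens in reverse so requests traverse the middlewares in
    the order they appear in the config value.
    """
    names = [n.strip() for n in middleware_option.split(",") if n.strip()]
    for name in reversed(names):
        app = REGISTRY[name](app)
    return app

def api_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]

pipeline = build_pipeline(api_app, "request_id, healthcheck")
env = {}
body = pipeline(env, lambda status, headers: None)
print(env["trace"], body)  # ['request_id', 'healthcheck'] [b'ok']
```

The design choice mirrors what paste.ini's `pipeline =` line expresses, just moved into ordinary service configuration.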
I personally haven't been bothered by the absence of a configuration mechanism
for some time, since most services have introduced a basic set of middlewares
such as http_proxy_to_wsgi, cors, and healthcheck.
However, if we completely drop the mechanism to inject additional middleware
and define a static list in each service, we should probably decide on a
strategy for a few middlewares that are only partially used, or not used at
all, in OpenStack services:
- Sizelimit from oslo.middleware and Audit from keystonemiddleware are not
globally used; only some services include them in their pipelines.
- RequestNotifier from oslo.messaging and BasicAuth from oslo.middleware are
not included in any default pipeline definition.
In the past I expected that some users (especially cloud providers) might
implement their own middleware, particularly for billing, but I have no
knowledge of actual use cases.
>
>>
>> Cheers,
>> Stephen
>>
>> PS: This topic came up about 18 months ago [6], but we don't appear to
>> have reached a conclusion. Thus my bringing it up again.
>>
>> [1] https://review.opendev.org/c/openstack/oslo.service/+/954055
> I'm glad you spotted this; yes, only the eventlet web-server part was intended to be deprecated.
>> [2] https://pypi.org/project/Paste/#history
>> [3] https://pypi.org/project/PasteDeploy/#history
>> [4] https://pypi.org/project/Routes/#history
>> [5] https://pypi.org/project/WebOb/#history
>> [6] https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack…
>>
>> ---
>>
>> fwict, the following services all rely on this combo to build their own
>> frameworks, with Nova most likely the progenitor in each case (I'm
>> guessing)
>>
>> * Nova
>> * Barbican
>> * Cinder
>> * Designate
>> * Freezer
>> * Glance
>> * Heat
>> * Manila
>> * Monasca
>> * Neutron
>> * Swift
>> * Trove
>> * Vitrage
>> * Watcher
To extend the list, Aodh uses PasteDeploy but with pecan,
and Masakari uses paste+pastedeploy (although these are not in
its requirements!)
>>
>> The following services do *not* use these libraries:
>>
>> * Cyborg (pecan)
>> * Ironic (pecan)
>> * Keystone (Flask + Flask-RESTful, with some webob)
>> * Magnum (pecan)
>> * Masakari (homegrown framework using webob)
>> * Zaqar (falcon)
>> * Zun (Flask)
... and Octavia may be added to this list; it uses pecan.
>>
>
3 weeks, 5 days
[glance] Flamingo PTG summary
by Cyril Roelandt
Hi,
We have just had our virtual PTG. The full list of topics, along with
notes, can be found on our Etherpad[1]. Here is a summary of the topics
that will be targeted during this cycle.
If you would like to view one of the recordings, please reach out to us
so we can figure out a way to share it.
# glance-tempest-plugin work
There has not been a lot of activity on the Glance Tempest Plugin for a
while. We decided to put more effort into it this cycle. The first step
is to decide what stale patches to abandon or rebase. We will also try
to add tests for newer APIs added in the past cycles (location, quota,
cache).
# Eventlet removal
While Glance can already be deployed in production without eventlet,
this library is still required for half of our functional tests, as well
as for non-core features (namely the scrubber). We will be focusing on
migrating our functional tests away from eventlet during this cycle.
# Image encryption
Not much progress has been made on this topic in the past cycle. The
original authors of this feature are willing to spend time working on
the required patches during the Flamingo cycle; the Glance core dev will
make it a priority to quickly review the required patches so that we can
finally merge the feature.
# Glanceclient chunked transfer
By default, the Glance client transfers data to the Glance API in
chunks. This may force the Cinder backend to resize the volume multiple
times, slowing down the upload. Providing the total size of the image
will allow us to fix this issue.
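The resize problem can be illustrated with a toy model of a backend volume that must grow to fit incoming data. The FakeVolume class and the numbers are invented for illustration and do not model Cinder's actual allocation behaviour:

```python
# Toy illustration (not Cinder's real behaviour): streaming chunks of
# unknown total size forces one resize per chunk, while announcing the
# total size up front requires a single resize.

class FakeVolume:
    def __init__(self):
        self.capacity = 0
        self.resizes = 0

    def ensure(self, size):
        if size > self.capacity:
            self.capacity = size
            self.resizes += 1

    def write_chunks(self, chunks, total_size=None):
        if total_size is not None:
            self.ensure(total_size)   # one resize, done up front
        written = 0
        for chunk in chunks:
            written += len(chunk)
            self.ensure(written)      # grows lazily when size is unknown
        return written

image = [b"x" * 4] * 3                # three 4-byte chunks

streamed = FakeVolume()
streamed.write_chunks(image)          # size unknown: resize per chunk
sized = FakeVolume()
sized.write_chunks(image, total_size=12)

print(streamed.resizes, sized.resizes)  # 3 1
```

Passing the image's total size along with the upload collapses the repeated growth into a single allocation, which is the fix the summary describes.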
# Cinder common configuration options
Operators often have parameters set to the exact same values for
multiple Cinder backends. We plan to make it possible to define
"common configuration options", so that operators do not have to
repeat configuration snippets.
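As a sketch of what that could look like, a shared section could hold defaults that individual backend sections override. The section layout below is hypothetical (not a merged Glance design), though the option names are real glance_store Cinder options:

```ini
# Hypothetical layout: shared defaults in one section, with each
# backend only stating what differs.
[cinder_common]
cinder_catalog_info = volumev3::publicURL
cinder_state_transition_timeout = 300

[cinder_store_fast]
cinder_volume_type = fast

[cinder_store_bulk]
cinder_volume_type = bulk
# Override the shared default for this slower backend.
cinder_state_transition_timeout = 600
```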
# Freeze glanceclient development
As we approach feature parity between the Glance client and the
OpenStack unified client, we believe it will be best to freeze
development of the Glance client. In the future, we will only fix bugs
and security issues. During this cycle, we will work on actually
reaching feature parity with the unified client.
# new location API (Cinder & Nova)
Glance has introduced two new Location APIs in the Dalmatian cycle. We
can use these APIs to address OSSN-0090 and OSSN-0065. Patches for Nova
and Cinder must still be merged, hopefully during the Flamingo cycle.
# Migrate Cinder and Nova to the OpenStack SDK
Cinder and Nova will need to use the OpenStack SDK instead of the
glanceclient. There is no need to complete this work during this cycle,
but we should at least have a good idea of what APIs are currently being
used, so that we can have a plan for the next cycles.
# cache-cleaner/pruner
We discussed possible minor improvements to both the cache-pruner and
the cache-cleaner. This work is not going to be a priority for this
cycle.
# Deprecate the filesystem_store_datadirs configuration option
This option has not been needed since the introduction of multistore
support.
Happy Hacking!
Cyril Roelandt
[1] https://etherpad.opendev.org/p/apr2025-ptg-glance
3 months, 4 weeks