openstack-discuss search results for query "canceled meeting"
openstack-discuss@lists.openstack.org - 840 messages
RE: openstack-discuss Digest, Vol 60, Issue 51 Multinode Cluster setup
by Asma Naz Shariq
Dear all,
I want to deploy OpenStack with kolla-ansible, release Zed. The cluster architecture is as follows:
1- The Controller Node doubles as the Network Node.
2- The Compute Node doubles as the Storage Node.
On the Controller Node, I have followed all the steps at https://docs.openstack.org/project-deploy-guide/kolla-ansible/zed/quickstar… and set the host names with IPs in /etc/hosts on the controller node so it can resolve the compute node.
When I tried to check connectivity from the controller node to the compute node, I encountered this error:
Controller Node IP | FAILED! => {
"msg": "Missing sudo password"
}
localhost | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"ping": "pong"
}
Compute Node IP | FAILED! => {
"msg": "Missing sudo password"
}
Can anyone guide me on how to resolve this error, and which steps I should follow to set up a baseline multinode cluster of one controller node and one compute node?
Thanks,
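A common cause of this failure: Ansible is escalating privileges (become) but has no sudo password for the remote user. A minimal sketch of two possible fixes, assuming kolla-ansible's multinode inventory and a deploy user named "stack" (both names are hypothetical here):

# Option 1: grant the deploy user passwordless sudo on every node
echo "stack ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/stack

# Option 2: keep the sudo password and have Ansible prompt for it
ansible -i multinode all -m ping --ask-become-pass

With option 1 in place, the connectivity check and the subsequent kolla-ansible bootstrap-servers/deploy steps should no longer fail with "Missing sudo password".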
-----Original Message-----
From: openstack-discuss-request(a)lists.openstack.org <openstack-discuss-request(a)lists.openstack.org>
Sent: Tuesday, October 24, 2023 9:42 AM
To: openstack-discuss(a)lists.openstack.org
Subject: openstack-discuss Digest, Vol 60, Issue 51
Send openstack-discuss mailing list submissions to
openstack-discuss(a)lists.openstack.org
To subscribe or unsubscribe via email, send a message with subject or body 'help' to
openstack-discuss-request(a)lists.openstack.org
You can reach the person managing the list at
openstack-discuss-owner(a)lists.openstack.org
When replying, please edit your Subject line so it is more specific than "Re: Contents of openstack-discuss digest..."
Today's Topics:
1. [ptls] [tc] 2023 OpenStack Deployment Data (Allison Price)
2. Re: [neutron][openvswitch][antelope] Bridge eth1 for physical network provider does not exist
(Sławek Kapłoński)
3. Environmental Sustainability WG goes to the PTG! (Kendall Nelson)
4. [tacker] Cancelling IRC meetings (Yasufumi Ogawa)
5. Re: openstack Vm shutoff by itself (AJ_ sunny)
----------------------------------------------------------------------
Message: 1
Date: Mon, 23 Oct 2023 15:16:05 -0500
From: Allison Price <allison(a)openinfra.dev>
Subject: [ptls] [tc] 2023 OpenStack Deployment Data
To: openstack-discuss <openstack-discuss(a)lists.openstack.org>
Message-ID: <AB576C9A-95CB-4BCD-9215-0AC1D3E64A25(a)openinfra.dev>
Content-Type: text/plain; charset=us-ascii
Hi everyone,
Apologies for sending this during the PTG as I intended to send this prior, but below is a link to the anonymized data including responses to many project or TC contributed questions. I have removed a significant number of fields to preserve the anonymity of the organizations, but if there is a particular data point that would help your team, please let me know.
Cheers,
Allison
https://docs.google.com/spreadsheets/d/1ehlabuqOnNK4M7xtN1GXvhLoeHn4JBaISih…
------------------------------
Message: 2
Date: Mon, 23 Oct 2023 22:46:23 +0200
From: Sławek Kapłoński <skaplons(a)redhat.com>
Subject: Re: [neutron][openvswitch][antelope] Bridge eth1 for
physical network provider does not exist
To: "openstack-discuss(a)lists.openstack.org"
<openstack-discuss(a)lists.openstack.org>
Cc: "ddorra(a)t-online.de" <ddorra(a)t-online.de>
Message-ID: <5895770.MhkbZ0Pkbq@p1gen4>
Content-Type: multipart/signed; boundary="nextPart1790705.VLH7GnMWUR";
micalg="pgp-sha256"; protocol="application/pgp-signature"
Hi,
You need to create a bridge (e.g. br-ex), add your eth1 to that bridge, and put the name of the bridge in bridge_mappings.
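Roughly like this (a sketch; br-ex is the example name from above, and the agent restart at the end is my assumption about your setup):

ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth1

and in /etc/neutron/plugins/ml2/openvswitch_agent.ini:

[ovs]
bridge_mappings = provider:br-ex

then restart neutron-openvswitch-agent.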
On Monday, 23 October 2023 at 21:44:43 CEST, ddorra(a)t-online.de wrote:
>
> Hello,
>
> I'm installing OpenStack Antelope with network option 2 (following
> https://docs.openstack.org/neutron/2023.2/install/compute-install-option2-ubuntu.html).
> The interface name of the provider network is eth1, so I put this in
> bridge_mappings.
> The local IP is from the management network.
>
> #------------------------------------------------------
> # /etc/neutron/plugins/ml2/openvswitch_agent.ini
> #
> [DEFAULT]
> [agent]
> [dhcp]
> [network_log]
> [ovs]
> bridge_mappings = provider:eth1
> [securitygroup]
> enable_security_group = true
> firewall_driver = openvswitch
> [vxlan]
> local_ip = 192.168.2.71
> l2_population = true
> #--------------------------------------------------------
>
> However, the neutron log complains that bridge eth1 does not exist,
> and launching instances fails.
> neutron-openvswitch-agent.log:
> 2023-10-23 19:26:00.062 17604 INFO os_ken.base.app_manager [-] instantiating app os_ken.app.ofctl.service of OfctlService
> 2023-10-23 19:26:00.062 17604 INFO neutron.agent.agent_extensions_manager [-] Loaded agent extensions: []
> 2023-10-23 19:26:00.108 17604 INFO neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_bridge [-] Bridge br-int has datapath-ID 00005e65b1943a49
> 2023-10-23 19:26:02.438 17604 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [-] Mapping physical network provider to bridge eth1
> 2023-10-23 19:26:02.438 17604 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [-] Bridge eth1 for physical network provider does not exist. Agent terminated!
> ^^^^^^^
> 2023-10-23 19:26:03.914 17619 INFO neutron.common.config [-] Logging enabled!
> 2023-10-23 19:26:03.914 17619 INFO neutron.common.config [-] /usr/bin/neutron-openvswitch-agent version 20.4.0
> 2023-10-23 19:26:03.914 17619 INFO os_ken.base.app_manager [-] loading app neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_oskenapp
>
> -----------------------------------------
> Additional information
> root@control:/var/log/neutron# openstack network list
> +--------------------------------------+----------+--------------------------------------+
> | ID                                   | Name     | Subnets                              |
> +--------------------------------------+----------+--------------------------------------+
> | 32f53edc-a394-419f-a438-a664183ee618 | doznet   | e38e25c2-4683-48fb-a7a0-7cbd7d276ee1 |
> | 74e3ee6a-1116-4ff6-9e99-530c3cbaef28 | provider | 2d3c3de4-9a0d-4a21-9af1-8ecb9f6f16d5 |
> +--------------------------------------+----------+--------------------------------------+
> root@control:/var/log/neutron#
>
> root@compute1:/etc/neutron/plugins/ml2# ip a
> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
>     link/ether 08:00:27:9f:c7:89 brd ff:ff:ff:ff:ff:ff
>     inet 192.168.2.71/24 brd 192.168.2.255 scope global eth0
>        valid_lft forever preferred_lft forever
>     inet6 fe80::a00:27ff:fe9f:c789/64 scope link
>        valid_lft forever preferred_lft forever
> 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
>     link/ether 08:00:27:91:60:58 brd ff:ff:ff:ff:ff:ff
>     inet 10.0.0.71/24 brd 10.0.0.255 scope global eth1
>        valid_lft forever preferred_lft forever
>     inet6 fe80::a00:27ff:fe91:6058/64 scope link
>        valid_lft forever preferred_lft forever
> 4: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
>     link/ether da:3b:3a:a0:59:97 brd ff:ff:ff:ff:ff:ff
> 5: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
>     link/ether 5e:65:b1:94:3a:49 brd ff:ff:ff:ff:ff:ff
> 6: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
>     link/ether 52:54:00:53:87:59 brd ff:ff:ff:ff:ff:ff
>     inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
>        valid_lft forever preferred_lft forever
>
>
> What are the proper settings?
> Any help appreciated
> Dieter
>
>
>
--
Slawek Kaplonski
Principal Software Engineer
Red Hat
------------------------------
Message: 3
Date: Mon, 23 Oct 2023 16:36:20 -0500
From: Kendall Nelson <kennelson11(a)gmail.com>
Subject: Environmental Sustainability WG goes to the PTG!
To: OpenStack Discuss <openstack-discuss(a)lists.openstack.org>
Message-ID:
<CAJ6yrQgbdvFW7NcUGJ5zLyXagx=hMt6R7ib0tk=1Y+sm06MY=w(a)mail.gmail.com>
Content-Type: multipart/alternative;
boundary="000000000000e3ee26060869044d"
Hello Everyone!
Last minute, but I wanted to make sure you saw that we are planning to meet at the PTG, tomorrow (Tuesday Oct 24th) from 13:00 UTC. I have the room booked for two hours, but I don't think we will need all of that time.
If there are topics you would like to discuss, please add them to the agenda[1].
Hope to see you there!
-Kendall Nelson
[1] https://etherpad.opendev.org/p/oct2023-ptg-env-sus
Re: openstack-discuss Digest, Vol 60, Issue 10 ~ Fwaas in Openstack 2023.1 antelope
by Lajos Katona
Hi Asma,
The enable_plugin option is for devstack-based deployments, I suppose.
To tell the truth I am not familiar with kolla, but I found this page which
speaks about enabling neutron extensions like sfc or vpnaas:
https://docs.openstack.org/kolla-ansible/4.0.2/networking-guide.html
and these in the group_vars/all.yml:
https://opendev.org/openstack/kolla-ansible/src/branch/master/ansible/group…
So to enable vpnaas, I suppose setting enable_neutron_vpnaas: "yes" does all
the magic to install and configure neutron-vpnaas.
I can't find neutron-fwaas, but perhaps this is just my lack of experience
with kolla.
To see what is necessary to configure fwaas, I would check the devstack
plugin:
https://opendev.org/openstack/neutron-fwaas/src/branch/master/devstack
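If fwaas follows the same pattern as the other extensions, the whole change
should be a flag in /etc/kolla/globals.yml plus a reconfigure; a minimal
sketch using the vpnaas case (enable_neutron_vpnaas is a real kolla-ansible
variable, the reconfigure workflow is my assumption):

# /etc/kolla/globals.yml
enable_neutron_vpnaas: "yes"

# then re-run kolla-ansible against your inventory
kolla-ansible -i multinode reconfigure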
Best wishes.
Lajos (lajoskatona)
Asma Naz Shariq <asma.naz(a)techavenue.biz> wrote (on 6 Oct 2023, Fri, 14:37):
> Hi Openstack Community!
>
> I have set up OpenStack release 2023.1 antelope with Kolla-Ansible .
> However, I noticed that there is no enable_plugin option in the
> /etc/kolla/global.yml file. Now, I am trying to install FWaaS
> (Firewall-as-a-Service) following the instructions provided in this
> OpenStack's Firewall-as-a-Service (FWaaS) v2 scenario documentation.
>
> The documentation states, "On Ubuntu and CentOS, modify the [fwaas] section
> in the /etc/neutron/fwaas_driver.ini file instead of
> /etc/neutron/neutron.conf." Unfortunately, I cannot find the
> fwaas_driver.ini file in the neutron-server, neutron-l3-agent, or
> neutron-openvswitch-agent containers.
>
> Can someone guide me on how to properly install FWaaS in a Kolla
> environment
> using the information from the provided link?
>
> Best,
>
> -----Original Message-----
> From: openstack-discuss-request(a)lists.openstack.org
> <openstack-discuss-request(a)lists.openstack.org>
> Sent: Friday, October 6, 2023 1:27 PM
> To: openstack-discuss(a)lists.openstack.org
> Subject: openstack-discuss Digest, Vol 60, Issue 10
>
> Send openstack-discuss mailing list submissions to
> openstack-discuss(a)lists.openstack.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
>
> https://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss
>
> or, via email, send a message with subject or body 'help' to
> openstack-discuss-request(a)lists.openstack.org
>
> You can reach the person managing the list at
> openstack-discuss-owner(a)lists.openstack.org
>
> When replying, please edit your Subject line so it is more specific than
> "Re: Contents of openstack-discuss digest..."
>
>
> Today's Topics:
>
> 1. Re: [ops] [nova] "invalid argument: shares xxx must be in
> range [1, 10000]" after 1:25.2.0 to 1.25.2.1. update
> (Massimo Sgaravatto)
> 2. [TC][Monasca] Proposal to mark Monasca as an inactive project
> (Sławek Kapłoński)
> 3. [neutron] Neutron drivers meeting cancelled
> (Rodolfo Alonso Hernandez)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Fri, 6 Oct 2023 09:10:47 +0200
> From: Massimo Sgaravatto <massimo.sgaravatto(a)gmail.com>
> To: Franciszek Przewoźny <fprzewozny(a)opera.com>
> Cc: OpenStack Discuss <openstack-discuss(a)lists.openstack.org>,
> smooney(a)redhat.com
> Subject: Re: [ops] [nova] "invalid argument: shares xxx must be in
> range [1, 10000]" after 1:25.2.0 to 1.25.2.1. update
> Message-ID:
> <
> CALaZjRGh6xnzX12cMgDTYx2yJYddUD9X3oh60JWnrB33ZdEf_Q(a)mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Thanks a lot Franciszek !
> I was indeed seeing the problem with a big VM with 56 vCPUs, while I
> didn't see the issue with a tiny instance.
>
> Thanks again !
>
> Cheers, Massimo
>
> On Fri, Oct 6, 2023 at 8:58 AM Franciszek Przewoźny <fprzewozny(a)opera.com>
> wrote:
>
> > Hi Massimo,
> >
> > We are using Ubuntu for our environments and we experienced the same
> > issue during the upgrade from Yoga/Focal to Yoga/Jammy. On Yoga/Focal
> > cgroups_v1 was used, and the cpu_shares parameter value was cpu count *
> > 1024. From Jammy onwards
> > cgroups_v2 is used, and the cpu_shares value is set by
> > default to 100. It has a hard limit of 10000, so flavors with more than
> > 9 vCPUs won't fit. If you need to fix this issue without stopping VMs,
> > you can set cpu_shares with the libvirt command: virsh schedinfo $domain
> > --live cpu_shares=100
> > for more details about virsh schedinfo visit:
> > https://libvirt.org/manpages/virsh.html#schedinfo
> >
> > BR,
> > Franciszek
> >
> > On 5 Oct 2023, at 21:17, smooney(a)redhat.com wrote:
> >
> > On Thu, 2023-10-05 at 16:53 +0200, Massimo Sgaravatto wrote:
> >
> > Dear all
> >
> > We have recently updated openstack nova on some AlmaLinux9 compute
> > nodes running Yoga from 1:25.2.0 to 1.25.2.1. After this operation
> > some VMs don't start anymore. In the log it is reported:
> >
> > libvirt.libvirtError: invalid argument: shares \'57344\' must be in
> > range [1, 10000]\n'}
> >
> > libvirt version is 9.0.0-10.3
> >
> >
> > A quick google search suggests that it is something related to cgroups
> > and it is fixed in libvirt >= 9.1 (which is not yet in the almalinux9
> repos).
> > Did I get it right ?
> >
> > not quite
> >
> > it is reated to cgroups but the cause is that in cgroups_v1 the
> > maxvlaue of shares i.e. cpu_shares changed form make int to 10000 in
> > cgroups_v2 so the issue is teh vm requested a cpu share value of 57344
> > which is not vlaid on an OS that is useing cgroups_v2 libvirt will not
> > clamp the value nor will nova.
> > you have to change the volue in your flavor and resize the vm.
> >
> >
> >
> >
> > Thanks, Massimo
> >
> >
> >
> >
> >
>
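To make the flavor-side fix concrete: with the libvirt driver the shares
value comes from the flavor's quota:cpu_shares extra spec, so the change
smooney describes would look roughly like this (a sketch; the flavor name
is hypothetical):

openstack flavor set my-56vcpu-flavor --property quota:cpu_shares=100

followed by resizing the affected VMs so they pick up the new value.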
Re: [all][ptg] Pre-PTG discussion: New Keystone Middleware Feature Supporting OAuth2.0 with External Authorization Service
by Hiromu Asahina
As Keystone canceled the Monday 14 UTC timeslot [1], I'd like to hold this
discussion in the Monday 15 UTC timeslot. If it doesn't work for Ironic
members, please kindly reply with convenient timeslots.
[1] https://ptg.opendev.org/ptg.html
Thanks,
Hiromu Asahina
On 2023/03/22 20:01, Hiromu Asahina wrote:
> Thanks!
>
> I look forward to your reply.
>
> On 2023/03/22 1:29, Julia Kreger wrote:
>> No worries!
>>
>> I think that time works for me. I'm not sure it will work for
>> everyone, but
>> I can proxy information back to the whole of the ironic project as we
>> also
>> have the question of this functionality listed for our Operator Hour in
>> order to help ironic gauge interest.
>>
>> -Julia
>>
>> On Tue, Mar 21, 2023 at 9:00 AM Hiromu Asahina <
>> hiromu.asahina.az(a)hco.ntt.co.jp> wrote:
>>
>>> I apologize that I couldn't reply before the Ironic meeting on Monday.
>>>
>>> I need one slot to discuss this topic.
>>>
>>> I asked Keystone today and Monday's first Keystone slot (14 UTC Mon,
>>> 27)[1,2] works for them. Does this work for Ironic? I understand not all
>>> Ironic members will join this discussion, so I hope we can arrange a
>>> convenient date for you two at least and, hopefully, for those
>>> interested in this topic.
>>>
>>> [1]
>>>
>>> https://www.timeanddate.com/worldclock/fixedtime.html?iso=2023-03-27T14:00:…
>>> [2] https://ptg.opendev.org/ptg.html
>>>
>>> Thanks,
>>> Hiromu Asahina
>>>
>>> On 2023/03/17 23:29, Julia Kreger wrote:
>>>> I'm not sure how many Ironic contributors would be the ones to attend a
>>>> discussion, in part because this is disjointed from the items they need
>>> to
>>>> focus on. It is much more of a "big picture" item for those of us
>>>> who are
>>>> leaders in the project.
>>>>
>>>> I think it would help to understand how much time you expect the
>>> discussion
>>>> to take to determine a path forward and how we can collaborate. Ironic
>>> has
>>>> a huge number of topics we want to discuss during the PTG, and I
>>>> suspect
>>>> our team meeting on Monday next week should yield more
>>>> interest/awareness
>>>> as well as an amount of time for each topic which will aid us in
>>> scheduling.
>>>>
>>>> If you can let us know how long, then I think we can figure out when
>>>> the
>>>> best day/time will be.
>>>>
>>>> Thanks!
>>>>
>>>> -Julia
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> On Fri, Mar 17, 2023 at 2:57 AM Hiromu Asahina <
>>>> hiromu.asahina.az(a)hco.ntt.co.jp> wrote:
>>>>
>>>>> Thank you for your reply.
>>>>>
>>>>> I'd like to decide the time slot for this topic.
>>>>> I just checked PTG schedule [1].
>>>>>
>>>>> We have the following time slots. Which one is convenient to gather?
>>>>> (I didn't get a reply, but I listed Barbican, as its cores are almost the
>>>>> same as Keystone's)
>>>>>
>>>>> Mon, 27:
>>>>>
>>>>> - 14 (keystone)
>>>>> - 15 (keystone)
>>>>>
>>>>> Tue, 28
>>>>>
>>>>> - 13 (barbican)
>>>>> - 14 (keystone, ironic)
>>>>> - 15 (keystone, ironic)
>>>>> - 16 (ironic)
>>>>>
>>>>> Wed, 29
>>>>>
>>>>> - 13 (ironic)
>>>>> - 14 (keystone, ironic)
>>>>> - 15 (keystone, ironic)
>>>>> - 21 (ironic)
>>>>>
>>>>> Thanks,
>>>>>
>>>>> [1] https://ptg.opendev.org/ptg.html
>>>>>
>>>>> Hiromu Asahina
>>>>>
>>>>>
>>>>> On 2023/02/11 1:41, Jay Faulkner wrote:
>>>>>> I think it's safe to say the Ironic community would be very
>>>>>> invested in
>>>>>> such an effort. Let's make sure the time chosen for vPTG with this is
>>>>> such
>>>>>> that Ironic contributors can attend as well.
>>>>>>
>>>>>> Thanks,
>>>>>> Jay Faulkner
>>>>>> Ironic PTL
>>>>>>
>>>>>> On Fri, Feb 10, 2023 at 7:40 AM Hiromu Asahina <
>>>>>> hiromu.asahina.az(a)hco.ntt.co.jp> wrote:
>>>>>>
>>>>>>> Hello Everyone,
>>>>>>>
>>>>>>> Recently, Tacker and Keystone have been working together on a new
>>>>> Keystone
>>>>>>> Middleware that can work with external authentication
>>>>>>> services, such as Keycloak. The code has already been submitted [1],
>>> but
>>>>>>> we want to make this middleware a generic plugin that works
>>>>>>> with as many OpenStack services as possible. To that end, we would
>>> like
>>>>> to
>>>>>>> hear from other projects with similar use cases
>>>>>>> (especially Ironic and Barbican, which run as standalone
>>>>>>> services). We
>>>>>>> will make a time slot to discuss this topic at the next vPTG.
>>>>>>> Please contact me if you are interested and available to
>>>>>>> participate.
>>>>>>>
>>>>>>> [1]
>>> https://review.opendev.org/c/openstack/keystonemiddleware/+/868734
>>>>>>>
>>>>>>> --
>>>>>>> Hiromu Asahina
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>> --
>>>>> ◇-------------------------------------◇
>>>>> NTT Network Innovation Center
>>>>> Hiromu Asahina
>>>>> -------------------------------------
>>>>> 3-9-11, Midori-cho, Musashino-shi
>>>>> Tokyo 180-8585, Japan
>>>>> Phone: +81-422-59-7008
>>>>> Email: hiromu.asahina.az(a)hco.ntt.co.jp
>>>>> ◇-------------------------------------◇
>>>>>
>>>>>
>>>>
>>>
>>> --
>>> ◇-------------------------------------◇
>>> NTT Network Innovation Center
>>> Hiromu Asahina
>>> -------------------------------------
>>> 3-9-11, Midori-cho, Musashino-shi
>>> Tokyo 180-8585, Japan
>>> Phone: +81-422-59-7008
>>> Email: hiromu.asahina.az(a)hco.ntt.co.jp
>>> ◇-------------------------------------◇
>>>
>>>
>>
>
--
◇-------------------------------------◇
NTT Network Innovation Center
Hiromu Asahina
-------------------------------------
3-9-11, Midori-cho, Musashino-shi
Tokyo 180-8585, Japan
Phone: +81-422-59-7008
Email: hiromu.asahina.az(a)hco.ntt.co.jp
◇-------------------------------------◇
[placement] update 19-12
by Chris Dent
HTML: https://anticdent.org/placement-update-19-12.html
Placement update 19-12. Nearing 1/4 of the way through the year.
I won't be around on Monday, so if someone else can chair the
[meeting](http://eavesdrop.openstack.org/#Placement_Team_Meeting)
that would be great. Or feel free to cancel it.
# Most Important
An RC2 was cut earlier this week, expecting it to be the last, but
there are a [couple of
patches](https://review.openstack.org/#/q/project:openstack/placement+branc…
which could be put in an RC3 if we were inclined that way.
Discuss.
We merged a first suite of [contribution
guidelines](https://docs.openstack.org/placement/latest/contributor/contrib….
These are worth reading as they explain how to manage bugs, start
new features, and be a good reviewer. Because of the introduction of
StoryBoard, processes are different from what you may have been used
to in Nova.
Because of limited time and space and conflicting responsibilities
the Placement team will be doing a [Virtual
Pre-PTG](http://lists.openstack.org/pipermail/openstack-discuss/2019-March/….
# What's Changed
* The contribution guidelines linked above describe how to manage
specs, which will now be in-tree. If you have a spec to propose or
re-propose (from stein in nova), it now goes in
``doc/source/specs/train/approved/``.
* Some [image type traits](https://review.openstack.org/648147) have
merged (to be used in a nova-side request pre-filter), but the
change has exposed an issue we'll need to resolve: os-traits and
os-resource-classes are under the cycle-with-intermediary style
release which means that at this time in the cycle it is difficult
to make a release which can delay work. We could switch to
independent. This would make sense for libraries that are
basically lists of strings. It's hard to break that. We could also
investigate making os-traits and os-resource-classes
required-projects in job templates in zuul. This would allow them
to be "tox siblings". Or we could wait. Please express an
opinion if you have one.
* In discussion in `#openstack-nova` about the patch to [delete
placement from nova](https://review.openstack.org/#/c/618215/), it
was decided that rather than merge that after the final RC, we
would wait until the PTG. There is discussion on the patch which
attempts to explain the reasons why.
# Specs/Blueprint/Features
* The spec for [forbidden aggregate
membership](https://docs.openstack.org/placement/latest/specs/train/approve…
has merged.
* The two traits-related specs from stein will need to be
re-proposed to placement for train:
* [any traits](http://specs.openstack.org/openstack/nova-specs/specs/stein/approve…
* [mixing required traits](http://specs.openstack.org/openstack/nova-specs/specs/stein/approve…
* The spec for [request group
mapping](https://review.openstack.org/597601) will need to be
revisited.
# Bugs
* StoryBoard stories in [the placement
group](https://storyboard.openstack.org/#!/project_group/placement): 4
* Placement related [bugs not yet in progress](https://goo.gl/TgiPXb): 14.
* [In progress placement bugs](https://goo.gl/vzGGDQ) 7. +1.
We should do a bug squash day at some point. Should we wait until
after the PTG or no?
Note that the [contribution
guidelines](https://docs.openstack.org/placement/latest/contributor/contrib…
has some information on how to evaluate new stories and what tags to
add.
# osc-placement
osc-placement is currently behind by 13 microversions.
Pending changes:
* [support for 1.19](https://review.openstack.org/#/c/641094/)
* [support for 1.21](https://review.openstack.org/#/c/641123/)
* [aggregate allocation ratio
tool](https://review.openstack.org/#/c/640898/)
# Main Themes
Be thinking about what you'd like the main themes to be. Put them on
the [PTG
etherpad](https://etherpad.openstack.org/p/placement-ptg-train).
# Other Placement
* <https://review.openstack.org/#/q/topic:2005297-negative-aggregate-membership>
Negative member of aggregate filtering resource providers and
allocation candidates. This is nearly ready.
* <https://review.openstack.org/#/c/645255/>
This is a start at unit tests for the PlacementFixture. It is
proving a bit "fun" to get right, as there are many layers
involved. Making sure seemingly unrelated changes in placement
don't break the nova gate is important. Besides these unit tests,
there's discussion on the PTG etherpad of running the nova
functional tests, or a subset thereof, in placement's check run.
On the one hand this is a pain and messy, but on the other
consider what we're enabling: Functional tests that use the real
functionality of an external service (real data, real web
requests), not stubs or fakes.
* <https://review.openstack.org/641404>
Use ``code`` role in api-ref titles
# Other Service Users
There's a lot here, but it is certain this is not all of it. If I
missed something you care about, followup mentioning it.
* <https://review.openstack.org/552924>
Nova: Spec: Proposes NUMA topology with RPs
* <https://review.openstack.org/622893>
Nova: Spec: Virtual persistent memory libvirt driver
implementation
* <https://review.openstack.org/641899>
Nova: Check compute_node existence in when nova-compute reports
info to placement
* <https://review.openstack.org/601596>
Nova: spec: support virtual persistent memory
* <https://review.openstack.org/#/q/topic:bug/1790204>
Workaround doubling allocations on resize
* <https://review.openstack.org/555081>
Nova: Spec: Standardize CPU resource tracking
* <https://review.openstack.org/646029>
Nova: Spec: Use in_tree getting allocation candidates
* <https://review.openstack.org/645316>
Nova: Pre-filter hosts based on multiattach volume support
* <https://review.openstack.org/606199>
Ironic: A fresh way of looking at step retrieval
* <https://review.openstack.org/647396>
Nova: Add flavor to requested_resources in RequestSpec
* <https://review.openstack.org/633204>
Blazar: Retry on inventory update conflict
* <https://review.openstack.org/640080>
Nova: Use aggregate_add_host in nova-manage
* <https://review.openstack.org/#/q/topic:bp/count-quota-usage-from-placement>
Nova: count quota usage from placement
* <https://review.openstack.org/#/q/topic:bug/1819923>
Nova: nova-manage: heal port allocations
* <https://review.openstack.org/648500>
Tempest: Init placement client in tempest Manager object
* <https://review.openstack.org/624335>
puppet-tripleo: Initial extraction of the Placement service from Nova
* <https://review.openstack.org/#/q/topic:bug/1821824>
Nova: bug fix prevent forbidden traits from working as expected
* <https://review.openstack.org/648665>
Nova: Spec for a new nova virt driver to manage an RSD
* <https://review.openstack.org/#/q/topic:bug/1819460>
Nova: Handle placement error during re-schedule
* <https://review.openstack.org/#/c/642067/>
Helm: Allow more generic overrides for nova placement-api
* <https://review.openstack.org/647578>
Nova: add spec for image metadata prefiltering
--
Chris Dent ٩◔̯◔۶ https://anticdent.org/
freenode: cdent tw: @anticdent
[placement] update 19-20
by Chris Dent
HTML: https://anticdent.org/placement-update-19-20.html
Placement update 19-20. Lots of cleanups in progress, laying in the
groundwork to do the nested magic work (see themes below).
The poll to determine [what to do with the
weekly meeting](https://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_9599a2647c319fd4&…
will close at the end of today. Thus far the leader is
office hours. Whatever the outcome, the meeting that would happen
this coming Monday is cancelled because many people will be having a
holiday.
# Most Important
The [spec for nested magic](https://review.opendev.org/658510) is
ready for more robust review. Since most of the work happening in
placement this cycle is described by that spec, getting it reviewed
well and quickly is important.
Generally speaking: review things. This is, and always will be, the
most important thing to do.
# What's Changed
* os-resource-classes 0.4.0 was released, promptly breaking the
placement gate (the tests are broken, not os-resource-classes).
[Fixes underway](https://review.opendev.org/661131).
* [Null root provider
protections](https://review.opendev.org/657716) have been removed
and a blocker migration and status check added. This removes a few
now redundant joins in the SQL queries which should help with our
ongoing efforts to speed up and simplify getting allocation
candidates.
* I had suggested an additional core group for os-traits and
os-resource-classes but after discussion with various people it
was decided it's easier/better to be aware of the right subject
matter experts and call them in to the reviews when required.
# Specs/Features
* <https://review.opendev.org/654799>
Support Consumer Types. This is very close with a few details to
work out on what we're willing and able to query on. It's a week
later and it still only has reviews from me so far.
* <https://review.opendev.org/658510>
Spec for Nested Magic. Un-wipped.
* <https://review.opendev.org/657582>
Resource provider - request group mapping in allocation candidate.
This spec was copied over from nova. It is a requirement of the
overall nested magic theme. While it has a well-defined and
refined design, there's currently no one on the hook implement
it.
These and other features being considered can be found on the
[feature
worklist](https://storyboard.openstack.org/#!/worklist/594).
Some non-placement specs are listed in the Other section below.
# Stories/Bugs
(Numbers in () are the change since the last pupdate.)
There are 20 (-3) stories in [the placement
group](https://storyboard.openstack.org/#!/project_group/placement).
0 are [untagged](https://storyboard.openstack.org/#!/worklist/580).
2 (-2) are [bugs](https://storyboard.openstack.org/#!/worklist/574). 5 are
[cleanups](https://storyboard.openstack.org/#!/worklist/575). 11
(-1) are [rfes](https://storyboard.openstack.org/#!/worklist/594).
2 are [docs](https://storyboard.openstack.org/#!/worklist/637).
If you're interested in helping out with placement, those stories
are good places to look.
On launchpad:
* Placement related nova [bugs not yet in progress](https://goo.gl/TgiPXb)
on launchpad: 16 (0).
* Placement related nova [in progress bugs](https://goo.gl/vzGGDQ) on
launchpad: 7 (+1).
# osc-placement
osc-placement is currently behind by 11 microversions. No change
since the last report.
Pending changes:
* <https://review.openstack.org/#/c/640898/>
Add 'resource provider inventory update' command (that helps with
aggregate allocation ratios).
* <https://review.openstack.org/#/c/651783/>
Add support for 1.22 microversion
* <https://review.openstack.org/586056>
Provide a useful message in the case of 500-error
# Main Themes
## Nested Magic
At the PTG we decided that it was worth the effort, in both Nova and
Placement, to make the push to make better use of nested providers —
things like NUMA layouts, multiple devices, networks — while keeping
the "simple" case working well. The general ideas for this are
described in a [story](https://storyboard.openstack.org/#!/story/2005575)
and an evolving [spec](https://review.opendev.org/658510).
Some code has started, mostly to reveal issues:
* <https://review.opendev.org/657419>
Changing request group suffix to string
* <https://review.opendev.org/657510>
WIP: Allow RequestGroups without resources
* <https://review.opendev.org/657463>
Add NUMANetworkFixture for gabbits
* <https://review.opendev.org/658192>
Gabbi test cases for can_split
## Consumer Types
Adding a type to consumers will allow them to be grouped for various
purposes, including quota accounting. A
[spec](https://review.opendev.org/654799) has started. There are
some questions about request and response details that need to be
resolved, but the overall concept is sound.
## Cleanup
As we explore and extend nested functionality we'll need to do some
work to make sure that the code is maintainable and has suitable
performance. There's some work in progress for this that's important
enough to call out as a theme:
* <https://storyboard.openstack.org/#!/story/2005712>
Some work from Tetsuro exploring ways to remove redundancies in
the code. There's a [stack of good improvements](https://review.opendev.org/658778).
* <https://review.opendev.org/643269>
WIP: Optionally run a wsgi profiler when asked.
This was used to find some of the above issues. Should we make it
generally available or is it better as a thing to base off when
exploring?
* <https://review.opendev.org/660691>
Avoid traversing summaries in _check_traits_for_alloc_request
Ed Leafe has also been doing some intriguing work on using graph
databases with placement. It's not yet clear if or how it could be
integrated with mainline placement, but there are likely many things
to be learned from the experiment.
# Other Placement
Miscellaneous changes can be found in [the usual
place](https://review.opendev.org/#/q/project:openstack/placement+status:op….
There are several [os-traits
changes](https://review.opendev.org/#/q/project:openstack/os-traits+status:…
being discussed.
# Other Service Users
New discoveries are added to the end. Merged stuff is removed.
Starting with the next pupdate I'll also be removing anything that
has had no reviews and no activity from the author in 4 weeks.
Otherwise these lists get too long and uselessly noisy.
* <https://review.openstack.org/552924>
Nova: Spec: Proposes NUMA topology with RPs
* <https://review.openstack.org/622893>
Nova: Spec: Virtual persistent memory libvirt driver
implementation
* <https://review.openstack.org/641899>
Nova: Check compute_node existence in when nova-compute reports
info to placement
* <https://review.openstack.org/601596>
Nova: spec: support virtual persistent memory
* <https://review.openstack.org/#/q/topic:bug/1790204>
Workaround doubling allocations on resize
* <https://review.openstack.org/645316>
Nova: Pre-filter hosts based on multiattach volume support
* <https://review.openstack.org/647396>
Nova: Add flavor to requested_resources in RequestSpec
* <https://review.openstack.org/633204>
Blazar: Retry on inventory update conflict
* <https://review.openstack.org/#/q/topic:bp/count-quota-usage-from-placement>
Nova: count quota usage from placement
* <https://review.openstack.org/#/q/topic:bug/1819923>
Nova: nova-manage: heal port allocations
* <https://review.openstack.org/648665>
Nova: Spec for a new nova virt driver to manage an RSD
* <https://review.openstack.org/625284>
Cyborg: Initial readme for nova pilot
* <https://review.openstack.org/629142>
Tempest: Add QoS policies and minimum bandwidth rule client
* <https://review.openstack.org/648687>
Nova-spec: Add PENDING vm state
* <https://review.openstack.org/650188>
nova-spec: Allow compute nodes to use DISK_GB from shared storage RP
* <https://review.openstack.org/651024>
nova-spec: RMD Plugin: Energy Efficiency using CPU Core P-State control
* <https://review.openstack.org/650963>
nova-spec: Proposes NUMA affinity for vGPUs. This describes a
legacy way of doing things because affinity in placement may be a
ways off. But it also [may not
be](https://review.openstack.org/650476).
* <https://review.openstack.org/#/q/topic:heal_allocations_dry_run>
Nova: heal allocations, --dry-run
* <https://review.opendev.org/656448>
Watcher spec: Add Placement helper
* <https://review.opendev.org/659233>
Cyborg: Placement report
* <https://review.opendev.org/657884>
Nova: Spec to pre-filter disabled computes with placement
* <https://review.opendev.org/657801>
rpm-packaging: placement service
* <https://review.opendev.org/657016>
Delete resource providers for all nodes when deleting compute service
* <https://review.opendev.org/654066>
nova fix for: Drop source node allocations if finish_resize fails
* <https://review.opendev.org/660924>
neutron: Add devstack plugin for placement service plugin
* <https://review.opendev.org/661179>
ansible: Add playbook to test placement
* <https://review.opendev.org/656885>
nova: WIP: Hey let's support routed networks y'all!
# End
As indicated above, I'm going to tune these pupdates to make sure
they are reporting only active links. This doesn't mean stalled out
stuff will be ignored, just that it won't come back on the lists
until someone does some work related to it.
--
Chris Dent ٩◔̯◔۶ https://anticdent.org/
freenode: cdent
Re: [all][ptg] Pre-PTG discussion: New Keystone Middleware Feature Supporting OAuth2.0 with External Authorization Service
by Jay Faulkner
So, looking over the Ironic PTG schedule, I appear to have booked the Firmware
Upgrade interface topic in two places -- tomorrow and Wednesday 2200 UTC. This is
fortuitous: I can move the firmware upgrade conversation entirely into 2200
UTC, and give the time we had set aside to this topic.
Dave, Julia and I consulted on IRC, and decided to take this action. We'll
be adding an item to Ironic's PTG for tomorrow, Tuesday March 28 at 1500
UTC - 1525 UTC to discuss KeystoneMiddleware OAUTH support.
I will perform the following changes to the Ironic schedule to accommodate:
- Remove firmware upgrades from Ironic Tues 1630-1700 UTC, move all
discussion of it to Weds 2200 UTC - 2300 UTC (should be plenty of time).
- Move everything from Service Steps and later (after the first break)
forward 30 minutes
- Add new item for KeystoneMiddleware/OAUTH discussion into Ironic's
schedule at Wednesday, 1500 UTC - 1525 UTC (30 minutes with room for a
break)
Ironic will host the discussion in the Folsom room, and Dave will ensure
interested keystone contributors are redirected to our room for this period.
-
Jay Faulkner
Ironic PTL
On Mon, Mar 27, 2023 at 7:07 AM Dave Wilde <dwilde(a)redhat.com> wrote:
> Hi Julia,
>
> No worries!
>
> I see that several of our sessions are overlapping, perhaps we could
> combine the 15:00 UTC session tomorrow to discuss this topic?
>
> /Dave
> On Mar 27, 2023 at 8:44 AM -0500, Julia Kreger <
> juliaashleykreger(a)gmail.com>, wrote:
>
>
>
> On Fri, Mar 24, 2023 at 9:55 AM Dave Wilde <dwilde(a)redhat.com> wrote:
>
>> I’m happy to book an additional time slot(s) specifically for this
>> discussion if something other than what we currently have works better for
>> everyone. Please let me know.
>>
>> /Dave
>> On Mar 24, 2023 at 10:49 AM -0500, Hiromu Asahina <
>> hiromu.asahina.az(a)hco.ntt.co.jp>, wrote:
>>
>> As Keystone canceled the Monday 14 UTC timeslot [1], I'd like to hold this
>> discussion in the Monday 15 UTC timeslot. If it doesn't work for Ironic
>> members, please kindly reply with convenient timeslots.
>>
>>
> Unfortunately, I took the last few days off and I'm only seeing this now.
> My morning is booked up aside from the original time slot which was
> discussed.
>
> Maybe there is a time later in the week which could work?
>
>
>
>>
>> [1] https://ptg.opendev.org/ptg.html
>>
>> Thanks,
>>
>> Hiromu Asahina
>>
>> On 2023/03/22 20:01, Hiromu Asahina wrote:
>>
>> Thanks!
>>
>> I look forward to your reply.
>>
>> On 2023/03/22 1:29, Julia Kreger wrote:
>>
>> No worries!
>>
>> I think that time works for me. I'm not sure it will work for
>> everyone, but
>> I can proxy information back to the whole of the ironic project as we
>> also
>> have the question of this functionality listed for our Operator Hour in
>> order to help ironic gauge interest.
>>
>> -Julia
>>
>> On Tue, Mar 21, 2023 at 9:00 AM Hiromu Asahina <
>> hiromu.asahina.az(a)hco.ntt.co.jp> wrote:
>>
>> I apologize that I couldn't reply before the Ironic meeting on Monday.
>>
>> I need one slot to discuss this topic.
>>
>> I asked Keystone today and Monday's first Keystone slot (14 UTC Mon,
>> 27)[1,2] works for them. Does this work for Ironic? I understand not all
>> Ironic members will join this discussion, so I hope we can arrange a
>> convenient date for you two at least and, hopefully, for those
>> interested in this topic.
>>
>> [1]
>>
>>
>> https://www.timeanddate.com/worldclock/fixedtime.html?iso=2023-03-27T14:00:…
>> [2] https://ptg.opendev.org/ptg.html
>>
>> Thanks,
>> Hiromu Asahina
>>
>> On 2023/03/17 23:29, Julia Kreger wrote:
>>
>> I'm not sure how many Ironic contributors would be the ones to attend a
>> discussion, in part because this is disjointed from the items they need
>>
>> to
>>
>> focus on. It is much more of a "big picture" item for those of us
>> who are
>> leaders in the project.
>>
>> I think it would help to understand how much time you expect the
>>
>> discussion
>>
>> to take to determine a path forward and how we can collaborate. Ironic
>>
>> has
>>
>> a huge number of topics we want to discuss during the PTG, and I
>> suspect
>> our team meeting on Monday next week should yield more
>> interest/awareness
>> as well as an amount of time for each topic which will aid us in
>>
>> scheduling.
>>
>>
>> If you can let us know how long, then I think we can figure out when
>> the
>> best day/time will be.
>>
>> Thanks!
>>
>> -Julia
>>
>>
>>
>>
>>
>> On Fri, Mar 17, 2023 at 2:57 AM Hiromu Asahina <
>> hiromu.asahina.az(a)hco.ntt.co.jp> wrote:
>>
>> Thank you for your reply.
>>
>> I'd like to decide the time slot for this topic.
>> I just checked PTG schedule [1].
>>
>> We have the following time slots. Which one is convenient to gather?
>> (I didn't get a reply, but I listed Barbican, as its cores are almost the
>> same as Keystone's)
>>
>> Mon, 27:
>>
>> - 14 (keystone)
>> - 15 (keystone)
>>
>> Tue, 28
>>
>> - 13 (barbican)
>> - 14 (keystone, ironic)
>> - 15 (keystone, ironic)
>> - 16 (ironic)
>>
>> Wed, 29
>>
>> - 13 (ironic)
>> - 14 (keystone, ironic)
>> - 15 (keystone, ironic)
>> - 21 (ironic)
>>
>> Thanks,
>>
>> [1] https://ptg.opendev.org/ptg.html
>>
>> Hiromu Asahina
>>
>>
>> On 2023/02/11 1:41, Jay Faulkner wrote:
>>
>> I think it's safe to say the Ironic community would be very
>> invested in
>> such an effort. Let's make sure the time chosen for vPTG with this is
>>
>> such
>>
>> that Ironic contributors can attend as well.
>>
>> Thanks,
>> Jay Faulkner
>> Ironic PTL
>>
>> On Fri, Feb 10, 2023 at 7:40 AM Hiromu Asahina <
>> hiromu.asahina.az(a)hco.ntt.co.jp> wrote:
>>
>> Hello Everyone,
>>
>> Recently, Tacker and Keystone have been working together on a new
>>
>> Keystone
>>
>> Middleware that can work with external authentication
>> services, such as Keycloak. The code has already been submitted [1],
>>
>> but
>>
>> we want to make this middleware a generic plugin that works
>> with as many OpenStack services as possible. To that end, we would
>>
>> like
>>
>> to
>>
>> hear from other projects with similar use cases
>> (especially Ironic and Barbican, which run as standalone
>> services). We
>> will make a time slot to discuss this topic at the next vPTG.
>> Please contact me if you are interested and available to
>> participate.
>>
>> [1]
>>
>> https://review.opendev.org/c/openstack/keystonemiddleware/+/868734
>>
>>
>> --
>> Hiromu Asahina
>>
>>
>>
>>
>>
>>
>> --
>> ◇-------------------------------------◇
>> NTT Network Innovation Center
>> Hiromu Asahina
>> -------------------------------------
>> 3-9-11, Midori-cho, Musashino-shi
>> Tokyo 180-8585, Japan
>> Phone: +81-422-59-7008
>> Email: hiromu.asahina.az(a)hco.ntt.co.jp
>> ◇-------------------------------------◇
>>
>>
>>
>>
>> --
>> ◇-------------------------------------◇
>> NTT Network Innovation Center
>> Hiromu Asahina
>> -------------------------------------
>> 3-9-11, Midori-cho, Musashino-shi
>> Tokyo 180-8585, Japan
>> Phone: +81-422-59-7008
>> Email: hiromu.asahina.az(a)hco.ntt.co.jp
>> ◇-------------------------------------◇
>>
>>
>>
>>
>>
>> --
>> ◇-------------------------------------◇
>> NTT Network Innovation Center
>> Hiromu Asahina
>> -------------------------------------
>> 3-9-11, Midori-cho, Musashino-shi
>> Tokyo 180-8585, Japan
>> Phone: +81-422-59-7008
>> Email: hiromu.asahina.az(a)hco.ntt.co.jp
>> ◇-------------------------------------◇
>>
>>
>>
[manila] Summary of the Victoria Cycle Project Technical Gathering
by Goutham Pacha Ravi
Hello Zorillas and other friendly animals of the OpenStack universe,
The manila project community met virtually between 1st June and 5th June
and discussed project plans for the Victoria cycle. The detailed meeting
notes are in [1] and the recordings were published [2]. A short summary of
the discussions and the action items is below:
*== Ussuri Retrospective ==*
- We lauded the work that Vida Haririan and Jason Grosso have put into
making bug tracking and triaging a whole lot easier, and systematic. At the
beginning of the cycle, we had roughly 250 bugs, and this was brought down
to under 130 by the end of the cycle. As a community, we acted upon many
multi-release bugs and made backports as appropriate. We've now automated
the expiry of invalid and incomplete bugs thereby reducing noise. Vida's
our current Bug Czar, and is interested in mentoring anyone that wants to
contribute to this role. Please reach out to her if you're interested.
- We also had two successful Outreachy internships (Soledad
Kuczala/solkz, Maari Tamm/maaritamm) thanks to Outreachy, their sponsors,
mentors (Sofia Enriquez/enriquetaso, Victoria Martinez de la Cruz/vkmc) and
the OpenStack Foundation; and a successful Google Summer of Code internship
(Robert Vasek/gman0) - many thanks to the mentor (Tomas Smetana/tsmetana),
Google, Red Hat and other sponsoring organizations. The team learned a lot,
and vkmc encouraged all of us to consider submitting a mentorship
application for upcoming cycles and increase our involvement. Through the
interns' collective efforts in Train and Ussuri development cycles:
- manila CSI driver was built [3]
- manilaclient now provides a plugin to the OpenStackClient
- manila-ui has support for newer microversions of the manila API and,
- manila documentation has gotten a whole lot better!
- We made good core team improvements and want to continue to mentor new
contributors to become maintainers, and folks felt their PTL was doing a
good job (:D)
- The community loved the idea of PTL docs (thanks Kendall
Nelson/diablo_rojo) - a lot of tribal knowledge was documented for the
first time!
- We felt that "low-hanging-fruit" bugs [4] were lingering too long in
some cases, and must have a "resolve-by" date. These are farmed for new
contributors, and if they turn out to be annoying issues, the team may set
a resolve-by date and close them out. However, we'll continue to make a
concerted effort to triage "bugs that are tolerable" with nice-to-have
fixes and keep them handy for anyone looking to make an initial
contribution.
*== Optimize query speed for share snapshots ==*
Haixin (haixin) discovered that not all APIs are taking advantage of
filtering and pagination via sqlalchemy. There's a list of APIs that he's
compiled and would like to work through; the team agreed that this is a
valuable bug fix that can be backported to Ussuri once the fixes land in
this cycle.
*== TC Goals for Victoria cycle ==*
- We discussed a long list of items that were proposed for inclusion as
TC goals [5]. The TC has officially picked two of them for this cycle [6].
- For gating manila project repos, we make heavy use of "legacy" DSVM
jobs. We hadn't invested time and effort in converting these jobs in the
past cycles; however, we have a plan [7] and have started porting jobs to
"native" zuulv3 format already in the manila-tempest-plugin repository.
Once these jobs are complete there, we'll switch over to using them on the
main branch of manila. Older branches will get opportunistic updates beyond
milestone-2.
- Luigi Toscano (tosky) joined us for this discussion and asked us for
the status of third party CI systems. The team hasn't mandated that third
party CI systems move their testing to zuulv3-native in this cycle.
However, the OpenStack community may drop support for devstack-gate in the
Victoria release, and making things work with it will get harder - so it's
strongly encouraged that third party vendor systems that are using the
community testing infrastructure projects: zuul, nodepool and devstack-gate
move away from devstack-gate in this cycle. An option to adopt Zuulv3 in
third party CI systems could be via the Software Factory project [8]. The
RDO community runs some third party jobs and votes on OpenStack upstream
projects - so they've created a wealth of jobs and documentation that can
be of help. Maintainers of this project hang out in #softwarefactory on
FreeNode.
- All of the new zuulv3 style tempest jobs inherit from devstack-tempest
from the tempest repository, and changing the node-set in the parent would
affect all our jobs as well - this would make the transition to Ubuntu
20.04 LTS/Focal Fossa easier. (A sketch of such a job follows below.)
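To illustrate the zuulv3-native direction, a minimal tempest job definition
inheriting from devstack-tempest would look roughly like this (a sketch; the
job name and variable values are hypothetical):

- job:
    name: manila-tempest-minimal-example
    parent: devstack-tempest
    required-projects:
      - openstack/manila
      - openstack/manila-tempest-plugin
    vars:
      devstack_plugins:
        manila: https://opendev.org/openstack/manila
      tempest_test_regex: manila_tempest_tests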
*== Secure default policies and granular policies ==*
- Raildo Mascena (raildo) joined us and presented an overview of this
cross-community effort [9]
- Manila has many assumptions of what project roles should be - and over
time, we seem to have blended the idea of a deployer administrator and a
project administrator - so there are inconsistencies when, even to perform
project level administration, one needs excessive permissions across the
cloud. This is undesirable - so, a precursor to supporting the new scoped
policies from Keystone seems to be to:
- eliminate hard coded checks in the code requiring an "admin" role and
switch to performing policy checks
- eliminating empty defaults which allow anyone to execute an API -
manila has very few of these
- supporting a "reader" role with the APIs
- We can then re-calibrate the defaults to ensure a separation between
cross-tenant administration (system scope) and per-tenant administration,
following the work in oslo.policy and in keystone (see the sketch after
this list)
- gouthamr will be leading this in the Victoria cycle - other
contributors are welcome to join this effort!
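To make the direction concrete, a rough sketch of what re-calibrated
defaults could look like in a policy.yaml override (the policy names are
real manila policies; the rule strings are illustrative assumptions, not
manila's actual defaults):

# project members keep write access within their own project
"share:create": "role:member and project_id:%(project_id)s"
# a read-only role becomes sufficient for show/list operations
"share:get": "role:reader and project_id:%(project_id)s"
# deployer-level operations require a system-scoped admin token
"share:force_delete": "role:admin and system_scope:all"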
*== Oslo.privsep and other manila TODOs ==*
- We discussed another cross-community effort around transitioning all
sudo actions from rootwrap to privsep
- Currently no one in the manila team has the bandwidth to investigate
and commit to this effort, so we're happy to ask for help!
- If you are interested, please join us during one of the team meetings
or start submitting patches and we can discuss with you via code reviews.
- The team also compiled a list of backlog items in an etherpad [10].
These are great areas for new project contributors to help manila, so
please get in touch with us if you would like to work on any of these items
*== OSC Status/Completion ==*
- Victoria Martinez de la Cruz and Maari Tamm compiled the status for
the completion of the OSC plugin work in manilaclient [11]
- There's still a lot of ground to cover to get complete parity with the
manila command line client, and we need contributors
- Maari Tamm (maaritamm) will continue to work on this as time permits.
Spyros Trigazis (strigazi) and his team at CERN are interested in working
on this as well. Thank you, Maari and Spyros!
- On Friday, we were joined by Artem Goncharov (gtema) to discuss the
issue of "common commands"
- quotas, services, availability zones, limits are common concepts that
apply to other projects as well
- OSC has support to show you these resources for compute, volume and
networking
- gtema suggested we should approach this via the OpenStackSDK rather
than the plugin since plugins are slow as is, and adding anything more to
that interface is not desirable at the moment
- There's planned work in the OpenStackClient project to work on the
plugin loading mechanisms to make things faster
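For a sense of the user experience, the plugin exposes manila resources
through the unified client, e.g. (a sketch; the share protocol, size, and
name here are illustrative):

openstack share create NFS 1 --name my-share
openstack share list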
*== Graduation of Experimental Features ==*
- Last cycle Carlos Eduardo (carloss) committed the work to graduate
Share Groups APIs from their "Experimental API" status
- We have two more features behind experimental APIs: share replication
and share migration
- This cycle, carloss will work on graduating the share replication APIs
to fully supported
- Generic Share Migration still needs some work, but we've fleshed out
the API and it has stayed pretty constant in the past few releases - we
might consider graduating the API for share migration in the Wallaby
release.
*== CephFS Updates ==*
- Victoria (vkmc) took us through the updates planned for the Victoria
cycle (heh)
- Currently all dsvm/tempest based testing in OpenStack (cinder, nova,
manila) is happening on ceph luminous and older releases (hammer and jewel)
- Victoria has a patch up [12] to update the ceph cluster to using
Nautilus by default
- This patch moves to using the packages built by the ceph community via
their shaman build system [13]
- shaman does not support building nautilus on CentOS 8, or on Ubuntu
Xenial - so if older branches of the projects are tested with Ubuntu
Xenial, we'll fall back to testing with Luminous
- The Manila CephFS driver wants to take advantage of the "ceph-mgr"
daemon in the Nautilus release and beyond
- Maintaining support for "ceph-mgr" and "ceph-volume" clients in the
driver will make things messy - so, the manila driver will not support Ceph
versions prior to Nautilus in the Victoria cycle
- If you're using manila with cephfs, please upgrade your ceph clusters
to Nautilus or newer
- We're not opposed to supporting versions prior to nautilus, but the
community members cannot invest in maintaining support for these older
releases of ceph for future releases of manila
- With the ceph-mgr interface, we intend to support asynchronous
create-share-from-snapshot with manila
- Ramana Raja (rraja) provided us an update regarding the ceph-mgr and
upcoming support for nfs-ganesha interactions via that interface (ceph
pacific release)
- Currently there's a ganesha interface driver in manila, and that can
switch to using the ceph-mgr interface
- Manila provides an "ensure_shares" mechanism to migrate share export
locations when the NAS host changes - We'll need to work on that if we want
to make it easier to switch ganesha hosts.
- We also briefly discussed supporting manage and unmanage operations
with the ceph drivers - that should greatly assist day 2 operations, and
migration of shared file systems from the native cephfs protocol to nfs and
vice-versa.
*== Add/Delete/Update security services for in-use share networks ==*
- Douglas Viroel (dviroel) discussed a change to manila to support share
server modifications wrt security services
- Security services are project visible and tenant driven - however,
share servers are hidden away from project users by virtue of default policy
- dviroel's idea is that, if a share network has multiple share servers,
the share manager will enumerate and communicate with all share servers on
the share network to update a security service
- We need to make sure that all conflicting operations (such as creating
new shares, changing access rules on existing shares) are fenced off when a
share server security service is being updated
- dviroel has a spec that he's working on - and would like feedback on
his proposal [14]
*== Create shares with two (or more) subnets ==*
- dviroel proposed a design allowing a share network having multiple
subnets in a given AZ (currently you can have utmost 1 subnet in an AZ for
a given share network)
- Allowing multiple NICs on a share server may be something most drivers
can easily support
- This change is identical to the one to update security services on
existing share networks - in terms of user experience and expectations
- The use cases here include dual IP support, share server network
maintenance and migration, simultaneous access from disparate subnets
- Feedback from this discussion was to treat this as two separate
concerns for easier implementation
- Supporting multiple subnets per AZ per share network
- Supporting adding/removing subnets to/from a share network that is
in-use
- Currently, there's no way to modify an in-use share server - so adding
that would be a precursor to allowing modification of share
networks/subnets and security services
*== Technical Committee Tags ==*
- In the last cycle, the manila team worked with OpenStack VMT to
perform a vulnerability disclosure and coordinate a fix across
distributions that included manila.
- The experience was valuable in gaining control of our own "coresec"
team that had gone wayward on launchpad; and learning about VMT
- Jeremy Stanley (fungi) and other members of the VMT have been
supportive of having manila apply to the "vulnerability-managed" tag. We'll
follow up on this soon
- While we're on the subject, with Ghanshyam Mann (gmann) in the room,
we discussed other potential tags that we can assert as the project team:
- "supports-accessible-upgrade" - manila allows control plane upgrade
without disrupting the accessibility of shares, snapshots, ACLs, groups,
replicas, share servers, networks and all other resources [15]
- "supports-api-interoperability" - manila's API is microversioned and
we have hard tests that enforce backwards compatibility [16]
- We discussed "tc:approved-release" tag a bit, and felt that we needed
to bring it up in a TC meeting, and we did that with Ghanshyam's help
- The manila team's view is that we'd like to eliminate any misconception
that the project is not mature, not ready for production use, or not part
of a "core OpenStack"
- At the TC meeting, Thierry Carrez (ttx), Graham Hayes (mugsie) and
others provided historical context for this tag: it stems from a section
of the OpenStack Foundation bylaws stating that the Technical Committee
must define what an approved release is (Article 4, section 4.1 (b) i) [17]
- The TC's view was that this tag has outlived its purpose, and
core-vs-non-core discussions have happened many times. Dropping this
tag might require speaking with the Foundation, amending the bylaws and
exploring what this implies. It's a good effort to get started on, though.
- For the moment, the TC was not opposed to the manila team requesting
this change to include manila in the list of projects in
"tc-approved-release".
*== Share and share size quotas/limits per share server ==*
- carloss shared his design for enforcing share server limits via the
quotas system: administrators could define per-project share server quotas,
and the share manager would enforce them by provisioning new share servers
when the quotas are hit
- the quotas system is ill suited for this sort of enforcement,
especially given that the share manager allows the share drivers to control
which share server is used to provision a share
- it's possible to look for a global solution, like the one proposed for
the generic driver in [18], or implement this at a backend level agnostic
to the rest of manila
- another reason not to use quotas is that manila may eventually do away
with this home-grown quota system in favor of oslo.limit enforced via
keystone (see the sketch after this list)
- another alternative is to do this via share types, but this really
fits as a per-share-network limit rather than a global one
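For context on the oslo.limit direction mentioned above, here is a minimal
sketch of what unified-limit enforcement looks like. It assumes limits such as
"share_servers" are registered in keystone and the [oslo_limit] auth options
are configured; the resource name and usage lookup are hypothetical:

    # Minimal oslo.limit sketch -- assumes registered keystone limits and
    # configured [oslo_limit] options; names here are hypothetical.
    from oslo_limit import limit

    def usage_callback(project_id, resource_names):
        # Return current usage for each requested resource; count_resources
        # is a hypothetical stand-in for a real DB query.
        return {name: count_resources(project_id, name)
                for name in resource_names}

    enforcer = limit.Enforcer(usage_callback)
    # Raises an over-limit error if creating one more share server would
    # exceed the project's registered limit:
    enforcer.enforce('project-uuid', {'share_servers': 1})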
*== Optimize the quota processing logic for 'manage share' ==*
- haixin ran into a bug where quota operations are incorrectly applied
during a share import/manage operation, such that a failed manage
operation causes incorrect quota deductions
- we discussed possible solutions for the bug; the consensus was that it
can definitely be fixed
- he opened a bug for this [19]
*== Share server migration ==*
- dviroel presented his thoughts around this new feature [20] which
would be helpful for day 2 operations
- he suggested that we should not provide a generic mechanism to
perform this migration, given that users would not need it, especially if
it is not 100% reliable
- though there is a generic framework for provisioning share servers, it
is only being used by the reference driver (Generic driver) and the Windows
SMB driver
- shooting for a generic solution would require us to solve SPOF issues
that we currently have with the reference driver - and there is not much
investment in doing so
- dviroel's solution involves a multi-step migration while relying on
the share drivers to perform an atomic migration of all the shares - for
the Generic driver, you can think of this as multi-attaching all the
underlying cinder volumes and deleting the older nova instance.
- manila's share migration is multi-step, allowing for a data copy and a
cutover phase - it is cancelable through the data copy phase and before
the cutover phase is invoked
- so there were some concerns about whether that two-phased approach is
required here, given that the operation may not always be cancelable in a
generic way
*== Manila Container Storage Interface ==*
- Tom Barron (tbarron) presented a summary and demo of using the Manila
CSI driver on OpenShift to provide RWX storage to containerized applications
- Robert Vasek (gman0) explained the core design and the reasoning
behind the architecture
- Mikhail Fedosin (mfedosin) spoke about the OpenShift operator for
Manila CSI and the ease of install and day two operations [21]
- the CSI driver has support for snapshots, cloning of snapshots (nfs
only at the moment) and topology aside from provisioning, access control
and deprovisioning
- the team is prioritizing support for cephfs snapshots and creating
shares from cephfs snapshots via subvolume clones in the Victoria cycle
Thanks for reading this far! Should you have any questions, don't hesitate
to pop into #openstack-manila on freenode.net.
[1] https://etherpad.opendev.org/p/victoria-ptg-manila (Minutes of the
PTG)
[2] https://www.youtube.com/playlist?list=PLnpzT0InFrqBKkyIAQdA9RFJnx-geS3lp
(YouTube playlist of the PTG recordings)
[3]
https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/usi…
(Manila CSI driver docs)
[4] https://bugs.launchpad.net/manila/+bugs?field.tag=low-hanging-fruit
(Low hanging fruit bugs in Manila)
[5] https://etherpad.opendev.org/p/community-goals (Community goal
proposals)
[6]
http://lists.openstack.org/pipermail/openstack-discuss/2020-June/015459.html
(Chosen TC Community Goals for Victoria cycle)
[7]
https://tree.taiga.io/project/gouthampacha-manila-ci-zuul-v3-migration/kanb…
(Zuulv3-native CI migrations tracker)
[8] https://www.softwarefactory-project.io/ (Software Factory project)
[9]
https://wiki.openstack.org/wiki/Consistent_and_Secure_Default_Policies_Popu…
(Policy effort across OpenStack)
[10] https://etherpad.opendev.org/p/manila-todos (ToDo list for manila)
[11] https://etherpad.opendev.org/p/manila-openstackclient-updates (OSC
CLI catchup tracker)
[12] https://review.opendev.org/#/c/676722/ (devstack-plugin-ceph
support for Ceph Nautilus)
[13] https://shaman.ceph.com (Shaman build system for Ceph)
[14] https://review.opendev.org/#/c/729292/ (Specification to allow
security service updates)
[15]
https://governance.openstack.org/tc/reference/tags/assert_supports-accessib…
(TC tag for accessible upgrades)
[16]
https://governance.openstack.org/tc/reference/tags/assert_supports-api-inte…
(TC tag for API interoperability)
[17]
https://www.openstack.org/legal/bylaws-of-the-openstack-foundation#ARTICLE_…
(TC bylaws requiring "approved release")
[18] https://review.opendev.org/#/c/510542/ (Limiting the number of shares
per share server)
[19] https://bugs.launchpad.net/manila/+bug/1883506 (delete manage_error
share will lead to quota error)
[20] https://review.opendev.org/#/c/735970/ (specification for share server
migration)
[21] https://github.com/openshift/csi-driver-manila-operator
4 years, 9 months
[tc][all][ Technical Committee Zed Virtual PTG discussions details
by Ghanshyam Mann
Hello Everyone,
I am writing up the Technical Committee discussion details from the Zed cycle PTG this week. We had a lot of
discussions, which is why this is a long email, but I have categorized the details per topic so that you can easily spot
the discussions you are interested in.
TC + Community leader's interaction
============================
We continued the interaction session at this PTG. The main idea here is to interact with community leaders
and ask for their feedback on the TC. I am happy to see higher attendance this time. Below are the topics we discussed
in this feedback session.
* Updates from TC:
** Status on previous PTG interaction sessions' Action items/feedback:
*** TC to do more technical guides for the project: DONE
We have the unified limits guide as the first technical guide to start this[1].
*** TC to define a concrete checklist to check goal readiness before we select a goal: DONE
We decoupled the community-wide goal from the release cycle[2] and also added the goal readiness checklist[3].
*** Spread the word: DONE
We have added a new page in the project-team-guide where projects can spread the word/story[4]
** Do not do ignorant rechecks:
Rechecking without looking at the failure, even when it is unrelated to the proposed change, is very
costly and ends up consuming a lot of infra resources. Checking the failure and adding a comment on why
you are doing a recheck gives related projects a chance to learn about the frequent failures happening in
upstream CI, so that more time can be spent fixing them (an example recheck comment is sketched below).
Dan Smith has added detailed documentation on "How to Handle Test Failures"[5], which can be used
to educate contributors on rechecks and how to debug the failure reason.
*** Action Item:
**** PTLs to start spreading the word and to monitor ignorant rechecks in their weekly meetings etc.
**** On an ignorant recheck, Zuul to add a comment with a link to the recheck practice documentation.
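As a concrete illustration of the kind of recheck comment that helps (the job
name and failure reason here are hypothetical), name the failing job and say
why the failure is unrelated:

    recheck tempest-integrated-storage failed with a volume-detach timeout
    that is unrelated to this change (known gate issue)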
** Spread the word/project outreach:
This was brought up again at this PTG, but I forgot to mention that we already worked on this in the Yoga cycle and
updated the project-team-guide to list the places where projects can spread the word/story[4]. It is
ready for projects to refer to, so they know where they can post their stories/updates.
* Recognition for new contributors, and encouraging/helping them to continue contributing to OpenStack:
When a new contributor contributes a significant amount of work to any project, we should
have some way to appreciate them. It could be an appreciation certificate/badge from the foundation or the TC, or
appreciation on social media, especially on LinkedIn. In the TC, we will check what the best possible and least
costly way to do this is. We checked in the cross-community session with k8s, and they have coupon/badge
distribution from the CNCF foundation.
2021 user survey analysis
===================
Jay has prepared a summary of the TC questions in the 2021 user survey[6], and we covered it at a high level, especially the big
changes from the previous survey. As the next step, we will review the proposed summary and merge it. We also talked about
updating the upgrade question when we start the tick/tock releases.
Prepare to migrate to SQLAlchemy 2.0
==============================
SQLAlchemy 2.0 is under development and will have many incompatible changes. To prepare OpenStack in advance,
Stephen sent a mail describing what the changes are and what projects need to do[7], and also gave the per-project status
(a small example of the style change is sketched below). oslo.db is ready for SQLAlchemy 2.0, and the required neutron change
has also merged. Thanks to Stephen for driving this and taking care of the work across various projects.
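For a flavor of what projects need to change, here is a generic sketch of the
kind of pattern SQLAlchemy 2.0 removes and its replacement; this is an
illustration, not a specific OpenStack patch:

    # Generic illustration of a SQLAlchemy 2.0-style change.
    from sqlalchemy import create_engine, text

    engine = create_engine("sqlite:///:memory:")

    # 1.x implicit-execution style, removed in 2.0:
    #   result = engine.execute("SELECT 1")

    # 2.0 style: explicit connections and text() constructs.
    with engine.connect() as conn:
        result = conn.execute(text("SELECT 1"))
        print(result.scalar())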
Elections Analysis:
==============
Project elections are not going well, and we ended up with a lot of missing nominations. This cycle, we had 17 projects on
the leaderless list, and 16 of those missed the nomination deadline. Various factors lead to missed nominations: short notice
for the election (1-2 weeks), PTLs not being active on the ML, language, time zones, etc. Seeing how few of the leaderless
projects end up with new PTLs (this cycle it was just 1 project), and that the same projects repeatedly miss nominations, we
concluded that elections in those projects are not actually required. The TC agreed to change the leader assignment for such
projects: instead of finding a PTL, we will propose moving these projects to the DPL model. With the DPL model, we
do not need to repeat the PTL assignment every cycle.
Please note that moving them to the DPL model will not be done automatically; it will be done when the required liaisons
are in place to take care of the DPL responsibilities.
Community-wide goals Check In & discussion:
===================================
* FIPS goal
ade_lee gave the current status of the FIPS work in various projects[8]. Current FIPS jobs are based on CentOS 9 Stream,
which is not so stable, but we will keep monitoring it. Canonical has suggested a way to set up FIPS jobs on Ubuntu and will
give the foundation keys to do FIPS testing and rotate them periodically. One challenge is horizon, which depends on a Django
fix that will only be present in a later Django version (4.0 or later), and it is difficult for horizon to upgrade to that,
especially with the team's current bandwidth. He also proposed milestones for finishing the FIPS work[9].
The Technical Committee agreed to have this as a 'selected goal'; as the next step, ade_lee will propose it in governance with
the defined milestones and we will review it. Thanks, ade_lee, for all your work and energy on this.
* RBAC goal
I have summarised it in a separate email, please refer to it
- http://lists.openstack.org/pipermail/openstack-discuss/2022-April/028103.ht…
Release Cadence Adjustment:
=======================
In the TC and leaders' interaction session, Dan presented an overview of the new release cadence and explained what will
change. Projects then followed up in their own PTG sessions and brought questions to the Thursday TC session. There was a
question about whether we can remove things (deprecated in a tick release) in a tock release, and the answer is yes.
We discussed automating the carry-over of deprecation/upgrade release notes from tock releases into tick releases; that is fine,
but we need to make sure we do not make tick release notes so big that people start ignoring them. We will see how it goes in
the first few tick release notes. Below are the various other things that need changes:
* ACTION:
** Deprecation document in project-team-guide[10]:
*** Add the master deployment case as a recommendation
*** Intermediate release upgrade case
*** Update the 4-a paragraph to clarify the 12-month case and that things deprecated in a tick release can be removed in the next tock release.
** Testing runtime
*** In a tock release, it is OK to add/test a new Python version, but do not upgrade the distro version.
*** In a tick release: no change; keep doing what we do currently.
** Stable branch
*** Update the release and stable teams with what is proposed in
https://governance.openstack.org/tc/resolutions/20220210-release-cadence-ad…
** tick-tock name:
*** gmann to check with the foundation about a trademark check on the "tick" and "tock" names
Remove release naming and just use the number
=======================================
In the Yoga cycle, we changed the release naming convention to also add a number in <year>.<release count in that year>
format[11]. But the release name is less fun nowadays and more a source of conflict or objection from the community. Starting
from Ussuri, I have seen objections to many release names, the most recent being 'Zed'. Having been one of the release name
process coordinators for the last couple of releases, I feel that even when doing a good job, I have to accept the
blame/objections. Also, the process of selecting the release name has its own cost in terms of TC and community time, as well
as the cost of the legal check by the foundation. Having only the release number will help people understand how old a release
is and also helps in understanding the tick-tock releases.
The drawbacks of removing the release name are that it requires changes in release tooling and that it is less helpful for
marketing. The former is a one-time change, and the latter can be addressed with taglines.
For example, "'OpenStack 2022.2': The integration of your effort"
Considering all the facts, the TC agreed to start the release-name-drop process in Gerrit, where the community can add their
feedback.
Gate Health and Checks
===================
The TC started monitoring and helping with gate health, and it is going in a good direction. The TC will continue doing this
in every weekly meeting. We encourage every project to keep monitoring their rechecks, and the TC will also keep an eye on them
when we notice a large number of rechecks without comments.
* Infra resource checks:
The next thing we checked was whether we have enough infra resources for smooth development of OpenStack in
the Zed cycle. Clark and fungi gave updates on those, and we are in a good position, at least not in a panic situation. But to
monitor the required infra resources correctly, the TC will work on listing the required and good-to-have services we need for
OpenStack development. That way we will be able to judge the infra resource situation better.
* ELK service:
The new dashboard for the ELK service is ready to use; you can find the dashboard and credential information in this review[12].
We encourage community members to start using it and provide feedback. Accordingly, Clark plans to stop the older ELK servers
in a month or so. Thanks, dpawlik, for working on it.
* nested-virt capable instances for testing
We discussed it: they are available and can be requested in the job definition, but there is always a risk of slow jobs and
timing failures, so it is preferred to use them only in project gates and not in the integrated/cross-project gate.
Testing strategy:
=============
* Direction on the lower constraints
As lower constraints are not well tested and are always causing issues in upstream CI, we discussed this again to find a
permanent solution. We discussed both aspects: 1. keeping lower bounds in requirements.txt files, and 2. the
lower-constraints.txt file and its testing. We agreed to keep the first but drop the second completely. Below are the
agreements and action items:
** AGREE/ACTION ITEM:
*** Write up how downstream distros can test their constraint files with OpenStack upstream.
*** Keep the lower bounds in requirements.txt, but add a line saying that they are best-effort and not tested; if they are
wrong, then file a bug or fix them (see the illustrative snippet below).
*** Drop the lower-constraints.txt file, l-c tox env and its testing on master as well as on all stable branches.
The TC will add a resolution and communicate it on the ML too.
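As an illustration of the requirements.txt note above (package names and
version bounds here are hypothetical):

    # Lower bounds below are best-effort only and are NOT tested in CI;
    # if one is wrong, please file a bug or propose a fix.
    oslo.config>=8.0.0
    SQLAlchemy>=1.4.0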
* Release specific job template for testing runtime:
As brought up on the ML[13], release-specific templates do not actually work for repos using the independent release
model, where the OpenStack bot does not update the template. Stephen added a new generic template to use in such repos. More
broadly, these release-specific templates add an extra step of updating every repo on each new release. Having a generic
template and handling the python version jobs with a branch variant[14] will work fine. But the generic template will move all
the projects to a new testing runtime at the same time, from a central place. We agreed to do it, but with proper communication
to all the projects, and to give them some window to test/fix things before we switch to the new testing versions.
* Performance testing/monitoring
Performance testing is good to do, but with our current CI resources it is very difficult. We discussed a few ideas for keeping
an eye on the performance aspect:
** At the start, we can monitor the memory footprint via performance stats in CI jobs
** Count DB queries before and after a change (how does this patch affect the number of DB queries?) - see the sketch below
** For API performance, we can route all the API requests via a single TLS proxy gateway and collect the stats.
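A minimal sketch of the DB-query-counting idea above, using a SQLAlchemy
engine event hook; this is a generic illustration, not an agreed
implementation:

    # Count the DB queries emitted while running some code under test.
    from sqlalchemy import create_engine, event, text

    engine = create_engine("sqlite:///:memory:")
    query_count = 0

    @event.listens_for(engine, "before_cursor_execute")
    def _count(conn, cursor, statement, parameters, context, executemany):
        global query_count
        query_count += 1

    with engine.connect() as conn:
        conn.execute(text("SELECT 1"))
        conn.execute(text("SELECT 2"))

    print(query_count)  # -> 2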
* When to remove the deprecated devstack-gate?
We agreed to remove it once stable/wallaby is EM, and we will note this in the README file along with the expected date for
stable/wallaby to reach EM (2022-10-14).
Improvement in project governance (continued from the Yoga PTG)
================================================
This is about how we can better keep an eye on less active or inactive projects and detect them earlier in the cycle, to save
release and infra resources. In the Yoga release, we faced the issue of a few projects being broken right at the final release
deadline. We discussed it at the last PTG and agreed to start the 'tech-preview' idea. To define the entry and exit criteria for
any project to be in 'tech-preview', we need input from the release team on stopping the auto-release of such projects. The
release team was busy with their own PTG at the same time we had this discussion, so I will continue the discussion in the TC
weekly meetings.
Current state of translations in OpenStack
================================
Ianychoi and Seongsoocho attended this session and gave us a detailed picture of the translation work: the current state and
all the work that remains, including the migration from Zanata to Weblate. The main issue here is that we need more people
helping with this work. We also asked that those who use these translations come forward to help. As the next step, we agreed
to add a new question to the user survey ("Do you use i18n translations in your deployments? If yes, please tell us which
languages you use") to get some stats about translation usage. Meanwhile, Ian and Seongsoocho will continue maintaining it.
Thanks to Ian and Seongsoocho for their best effort to maintain it.
Cross community sessions with the k8s steering committee team:
=================================================
k8s steering committee members (Paris, Dims, Bob, and Tim) joined the OpenStack Technical Committee at the PTG. We discussed
various topics around legal support/coverage for contributors, especially in the security disclosure process and export control.
We asked whether k8s has the same level of language translation support in code/docs as we have in OpenStack; they have it only
for documentation, and only when someone proposes to do the translation. Then we discussed long-term sustainability efforts,
especially for experienced contributors who can take on higher responsibility as well as train new contributors. This is an
issue in both communities, and neither of us has a solution to it.
Lastly, we discussed how k8s recognizes contributors' good work. In k8s, along with appreciation on Slack and the ML, they
issue coupons and badges from the CNCF foundation.
TC Zed Activities checks and planning
=============================
This was the last hour of the PTG, and we started with the Yoga cycle retrospective.
* Yoga Retrospective
We are doing well in gate health monitoring, as well as doing more technical work. On the improvement side, we need to be
faster at making decisions on things that have been under discussion for a long period, instead of keeping them open.
* Pop Up Team Check In
After checking the status and needs of both active popup teams, we decided to continue with both in the Zed cycle.
* TC weekly Meeting time check
We will keep the current time until daylight saving time changes again. Also, we will continue the video call once a month.
* TC liaison continue or drop?
As we have improved interaction with the projects through the weekly meetings as well as the PTG sessions (TC + leaders
interaction session), we agreed to drop the TC liaisons completely.
* I will prepare the Zed cycle TC Tracker (activities we will do in Zed cycle)
* Next week's TC meeting is cancelled; we will resume meetings from 21st April onwards.
That is all from PTG, thanks for reading it and stay safe!
[1] https://docs.openstack.org/project-team-guide/technical-guides/unified-limi…
[2] https://review.opendev.org/c/openstack/governance/+/816387
[3] https://review.opendev.org/c/openstack/governance/+/835102
[4] https://docs.openstack.org/project-team-guide/spread-the-word.html
[5] https://docs.openstack.org/project-team-guide/testing.html#how-to-handle-te…
[6] https://review.opendev.org/c/openstack/governance/+/836888
[7] http://lists.openstack.org/pipermail/openstack-discuss/2021-August/024122.h…
[8] https://etherpad.opendev.org/p/qa-zed-ptg-fips
[9] https://etherpad.opendev.org/p/zed-ptg-fips-goal
[10] https://docs.openstack.org/project-team-guide/deprecation.html
[11] https://review.opendev.org/c/openstack/governance/+/829563
[12] https://review.opendev.org/c/openstack/governance-sigs/+/835838
[13] http://lists.openstack.org/pipermail/openstack-discuss/2022-March/027676.ht…
[14] https://review.opendev.org/c/openstack/openstack-zuul-jobs/+/833892
2 years, 11 months
Re: [all][ptg] Pre-PTG discussion: New Keystone Middleware Feature Supporting OAuth2.0 with External Authorization Service
by Dave Wilde
I’m happy to book an additional time slot(s) specifically for this discussion if something other than what we currently have works better for everyone. Please let me know.
/Dave
On Mar 24, 2023 at 10:49 AM -0500, Hiromu Asahina <hiromu.asahina.az(a)hco.ntt.co.jp>, wrote:
> As Keystone canceled Monday 14 UTC timeslot [1], I'd like to hold this
> discussion on Monday 15 UTC timeslot. If it doesn't work for Ironic
> members, please kindly reply convenient timeslots.
>
> [1] https://ptg.opendev.org/ptg.html
>
> Thanks,
>
> Hiromu Asahina
>
> On 2023/03/22 20:01, Hiromu Asahina wrote:
> > Thanks!
> >
> > I look forward to your reply.
> >
> > On 2023/03/22 1:29, Julia Kreger wrote:
> > > No worries!
> > >
> > > I think that time works for me. I'm not sure it will work for
> > > everyone, but
> > > I can proxy information back to the whole of the ironic project as we
> > > also
> > > have the question of this functionality listed for our Operator Hour in
> > > order to help ironic gauge interest.
> > >
> > > -Julia
> > >
> > > On Tue, Mar 21, 2023 at 9:00 AM Hiromu Asahina <
> > > hiromu.asahina.az(a)hco.ntt.co.jp> wrote:
> > >
> > > > I apologize that I couldn't reply before the Ironic meeting on Monday.
> > > >
> > > > I need one slot to discuss this topic.
> > > >
> > > > I asked Keystone today and Monday's first Keystone slot (14 UTC Mon,
> > > > 27)[1,2] works for them. Does this work for Ironic? I understand not all
> > > > Ironic members will join this discussion, so I hope we can arrange a
> > > > convenient date for you two at least and, hopefully, for those
> > > > interested in this topic.
> > > >
> > > > [1]
> > > >
> > > > https://www.timeanddate.com/worldclock/fixedtime.html?iso=2023-03-27T14:00:…
> > > > [2] https://ptg.opendev.org/ptg.html
> > > >
> > > > Thanks,
> > > > Hiromu Asahina
> > > >
> > > > On 2023/03/17 23:29, Julia Kreger wrote:
> > > > > I'm not sure how many Ironic contributors would be the ones to attend a
> > > > > discussion, in part because this is disjointed from the items they need
> > > > to
> > > > > focus on. It is much more of a "big picture" item for those of us
> > > > > who are
> > > > > leaders in the project.
> > > > >
> > > > > I think it would help to understand how much time you expect the
> > > > discussion
> > > > > to take to determine a path forward and how we can collaborate. Ironic
> > > > has
> > > > > a huge number of topics we want to discuss during the PTG, and I
> > > > > suspect
> > > > > our team meeting on Monday next week should yield more
> > > > > interest/awareness
> > > > > as well as an amount of time for each topic which will aid us in
> > > > scheduling.
> > > > >
> > > > > If you can let us know how long, then I think we can figure out when
> > > > > the
> > > > > best day/time will be.
> > > > >
> > > > > Thanks!
> > > > >
> > > > > -Julia
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > On Fri, Mar 17, 2023 at 2:57 AM Hiromu Asahina <
> > > > > hiromu.asahina.az(a)hco.ntt.co.jp> wrote:
> > > > >
> > > > > > Thank you for your reply.
> > > > > >
> > > > > > I'd like to decide the time slot for this topic.
> > > > > > I just checked PTG schedule [1].
> > > > > >
> > > > > > We have the following time slots. Which one is convenient for everyone?
> > > > > > (I didn't get a reply, but I listed Barbican, as its cores are almost the
> > > > > > same as Keystone's)
> > > > > >
> > > > > > Mon, 27:
> > > > > >
> > > > > > - 14 (keystone)
> > > > > > - 15 (keystone)
> > > > > >
> > > > > > Tue, 28
> > > > > >
> > > > > > - 13 (barbican)
> > > > > > - 14 (keystone, ironic)
> > > > > > - 15 (keystone, ironic)
> > > > > > - 16 (ironic)
> > > > > >
> > > > > > Wed, 29
> > > > > >
> > > > > > - 13 (ironic)
> > > > > > - 14 (keystone, ironic)
> > > > > > - 15 (keystone, ironic)
> > > > > > - 21 (ironic)
> > > > > >
> > > > > > Thanks,
> > > > > >
> > > > > > [1] https://ptg.opendev.org/ptg.html
> > > > > >
> > > > > > Hiromu Asahina
> > > > > >
> > > > > >
> > > > > > On 2023/02/11 1:41, Jay Faulkner wrote:
> > > > > > > I think it's safe to say the Ironic community would be very
> > > > > > > invested in
> > > > > > > such an effort. Let's make sure the time chosen for vPTG with this is
> > > > > > such
> > > > > > > that Ironic contributors can attend as well.
> > > > > > >
> > > > > > > Thanks,
> > > > > > > Jay Faulkner
> > > > > > > Ironic PTL
> > > > > > >
> > > > > > > On Fri, Feb 10, 2023 at 7:40 AM Hiromu Asahina <
> > > > > > > hiromu.asahina.az(a)hco.ntt.co.jp> wrote:
> > > > > > >
> > > > > > > > Hello Everyone,
> > > > > > > >
> > > > > > > > Recently, Tacker and Keystone have been working together on a new
> > > > > > Keystone
> > > > > > > > Middleware that can work with external authentication
> > > > > > > > services, such as Keycloak. The code has already been submitted [1],
> > > > but
> > > > > > > > we want to make this middleware a generic plugin that works
> > > > > > > > with as many OpenStack services as possible. To that end, we would
> > > > like
> > > > > > to
> > > > > > > > hear from other projects with similar use cases
> > > > > > > > (especially Ironic and Barbican, which run as standalone
> > > > > > > > services). We
> > > > > > > > will make a time slot to discuss this topic at the next vPTG.
> > > > > > > > Please contact me if you are interested and available to
> > > > > > > > participate.
> > > > > > > >
> > > > > > > > [1]
> > > > https://review.opendev.org/c/openstack/keystonemiddleware/+/868734
> > > > > > > >
> > > > > > > > --
> > > > > > > > Hiromu Asahina
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > > > --
> > > > > > ◇-------------------------------------◇
> > > > > > NTT Network Innovation Center
> > > > > > Hiromu Asahina
> > > > > > -------------------------------------
> > > > > > 3-9-11, Midori-cho, Musashino-shi
> > > > > > Tokyo 180-8585, Japan
> > > > > > Phone: +81-422-59-7008
> > > > > > Email: hiromu.asahina.az(a)hco.ntt.co.jp
> > > > > > ◇-------------------------------------◇
> > > > > >
> > > > > >
> > > > >
> > > >
> > > > --
> > > > ◇-------------------------------------◇
> > > > NTT Network Innovation Center
> > > > Hiromu Asahina
> > > > -------------------------------------
> > > > 3-9-11, Midori-cho, Musashino-shi
> > > > Tokyo 180-8585, Japan
> > > > Phone: +81-422-59-7008
> > > > Email: hiromu.asahina.az(a)hco.ntt.co.jp
> > > > ◇-------------------------------------◇
> > > >
> > > >
> > >
> >
>
> --
> ◇-------------------------------------◇
> NTT Network Innovation Center
> Hiromu Asahina
> -------------------------------------
> 3-9-11, Midori-cho, Musashino-shi
> Tokyo 180-8585, Japan
> Phone: +81-422-59-7008
> Email: hiromu.asahina.az(a)hco.ntt.co.jp
> ◇-------------------------------------◇
>
>
2 years
Re: [all][ptg] Pre-PTG discussion: New Keystone Middleware Feature Supporting OAuth2.0 with External Authorization Service
by Dave Wilde
Hi Julia,
No worries!
I see that several of our sessions are overlapping; perhaps we could combine them and use the 15:00 UTC session tomorrow to discuss this topic?
/Dave
On Mar 27, 2023 at 8:44 AM -0500, Julia Kreger <juliaashleykreger(a)gmail.com>, wrote:
>
>
> > On Fri, Mar 24, 2023 at 9:55 AM Dave Wilde <dwilde(a)redhat.com> wrote:
> > > I’m happy to book an additional time slot(s) specifically for this discussion if something other than what we currently have works better for everyone. Please let me know.
> > >
> > > /Dave
> > > On Mar 24, 2023 at 10:49 AM -0500, Hiromu Asahina <hiromu.asahina.az(a)hco.ntt.co.jp>, wrote:
> > > > As Keystone canceled Monday 14 UTC timeslot [1], I'd like to hold this
> > > > discussion on Monday 15 UTC timeslot. If it doesn't work for Ironic
> > > > members, please kindly reply convenient timeslots.
> >
> > Unfortunately, I took the last few days off and I'm only seeing this now. My morning is booked up aside from the original time slot which was discussed.
> >
> > Maybe there is a time later in the week which could work?
2 years