From senrique at redhat.com Fri Apr 1 01:47:03 2022 From: senrique at redhat.com (Sofia Enriquez) Date: Thu, 31 Mar 2022 22:47:03 -0300 Subject: [cinder] Cancelling cinder meeting on 6th April In-Reply-To: References: Message-ID: Same for the bug meeting ;) On Thu, Mar 31, 2022 at 3:21 PM Rajat Dhasmana wrote: > Hi, > > We will be cancelling the cinder meeting on 6th April as it is the PTG > week. > > Thanks and regards > Rajat Dhasmana > -- Sofía Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso -------------- next part -------------- An HTML attachment was scrubbed... URL: From pdeore at redhat.com Fri Apr 1 05:39:46 2022 From: pdeore at redhat.com (Pranali Deore) Date: Fri, 1 Apr 2022 11:09:46 +0530 Subject: No weekly meeting On April 7th Message-ID: Hello, We won't be having our weekly meeting on 7th April since it's the PTG week. Thanks, Pranali Deore -------------- next part -------------- An HTML attachment was scrubbed... URL: From pdeore at redhat.com Fri Apr 1 06:05:17 2022 From: pdeore at redhat.com (Pranali Deore) Date: Fri, 1 Apr 2022 11:35:17 +0530 Subject: [Glance] No weekly meeting On April 7th Message-ID: Hello, We won't be having our weekly meeting on 7th April since it's the PTG week. Thanks, Pranali Deore -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Fri Apr 1 06:19:40 2022 From: marios at redhat.com (Marios Andreou) Date: Fri, 1 Apr 2022 09:19:40 +0300 Subject: [TripleO] gate blocker - impacting all quickstart-based jobs - openstack-ansible-os_tempest In-Reply-To: References: Message-ID: On Fri, Apr 1, 2022 at 12:14 AM Ronelle Landy wrote: > Hello All, > > We have a check/gate blocker on all TripleO quickstart-based jobs, as > described in: > > https://bugs.launchpad.net/tripleo/+bug/1967430 > > The commit [1] to openstack-ansible-os_tempest removed setup.py and is causing > failures in all quickstart jobs.
> > A revert was proposed but will not be workable - we are waiting on another > fix. > > Please hold rechecks until this is resolved. > > [1] > https://review.opendev.org/c/openstack/openstack-ansible-os_tempest/+/835969 > > Unfortunately looks like the core group on that repo is empty [1]. I added some folks into CC here that merged the original patch. Folks can you please help us merge the fix at https://review.opendev.org/c/openstack/ansible-role-python_venv_build/+/836091 TripleO gate is blocked until we merge ansible-role-python_venv_build/+/836091 please help :D [1] https://review.opendev.org/admin/groups/3474fc86368161e5288be01295041a089a1060b3,members > Thank you! > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bartosz.rabiega at ovhcloud.com Fri Apr 1 08:19:33 2022 From: bartosz.rabiega at ovhcloud.com (bartosz.rabiega at ovhcloud.com) Date: Fri, 1 Apr 2022 10:19:33 +0200 Subject: [nova][os-brick] Yoga - native nvmeof multipath support Message-ID: <76d6a151-09d8-9221-4772-ed6100d674cd@ovhcloud.com> Hello, I'm trying to figure out how the new feature of Yoga works: "The libvirt driver now allows using Native NVMeoF multipathing for NVMeoF connector, via the configuration attribute in nova-cpu.conf [libvirt]/volume_use_multipath, defaulting to False (disabled)." https://review.opendev.org/c/openstack/nova/+/823941 It's said that this config value enables native NVMeoF multipathing, but I can't see any support for such configuration in os-brick connector. https://github.com/openstack/os-brick/blob/master/os_brick/initiator/connectors/nvmeof.py Am I missing something? 
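For reference, this is my reading of the release note quoted above; just a sketch of the compute-side setting (the exact config file, nova.conf vs nova-cpu.conf, depends on the deployment):

```ini
[libvirt]
# Per the Yoga release note quoted above; defaults to False (disabled)
volume_use_multipath = True
```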
BR BR From bcafarel at redhat.com Fri Apr 1 08:50:47 2022 From: bcafarel at redhat.com (Bernard Cafarelli) Date: Fri, 1 Apr 2022 10:50:47 +0200 Subject: [monasca][neutron][venus][release][kolla] Missing yoga tarballs In-Reply-To: References: Message-ID: Thanks to Lajos for the patch dropping the lower constraints job in stable/yoga [1], neutron-vpnaas-dashboard should have the release patches merged soon (in gate) [1] https://review.opendev.org/c/openstack/neutron-vpnaas-dashboard/+/836044/ On Thu, 31 Mar 2022 at 15:40, Lajos Katona wrote: > Hi, > Thanks for the heads up. > I will check neutron-vpnaas-dashboard, but it seems we lost the > maintainers of vpnaas, so even neutron-vpnaas gate seems to be red > currently. > We will anyway spend a slot on such low-activity networking projects. > > Lajos Katona (lajoskatona) > > Mark Goddard wrote (on Thu, 31 Mar 2022 at > 11:38): > >> Hi, >> >> In kolla we are trying to switch to the yoga branch tarballs in our >> images as part of our release process. The following projects lack these: >> >> * monasca-api: stable/yoga exists, but no additional commits yet, so >> no tarball. Could someone merge the release bot patches for stable/yoga? >> >> * neutron-vpnaas-dashboard: stable/yoga exists, but no additional commits >> yet, so no tarball. Could someone merge the release bot patches for >> stable/yoga? >> >> * venus: no stable/yoga branch yet. Looks like the review for the >> deliverable files is still open: >> https://review.opendev.org/c/openstack/releases/+/824394 >> >> Thanks, >> Mark >> > -- Bernard Cafarelli -------------- next part -------------- An HTML attachment was scrubbed...
URL: From geguileo at redhat.com Fri Apr 1 09:03:13 2022 From: geguileo at redhat.com (Gorka Eguileor) Date: Fri, 1 Apr 2022 11:03:13 +0200 Subject: [nova][os-brick] Yoga - native nvmeof multipath support In-Reply-To: <76d6a151-09d8-9221-4772-ed6100d674cd@ovhcloud.com> References: <76d6a151-09d8-9221-4772-ed6100d674cd@ovhcloud.com> Message-ID: <20220401090313.4smbpanf6aqloqx5@localhost> On 01/04, bartosz.rabiega at ovhcloud.com wrote: > Hello, > > I'm trying to figure out how the new feature of Yoga works: > > "The libvirt driver now allows using Native NVMeoF multipathing for NVMeoF > connector, via the configuration attribute in nova-cpu.conf > [libvirt]/volume_use_multipath, defaulting to False (disabled)." > > https://review.opendev.org/c/openstack/nova/+/823941 > > It's said that this config value enables native NVMeoF multipathing, but I > can't see any support for such configuration in os-brick connector. > > https://github.com/openstack/os-brick/blob/master/os_brick/initiator/connectors/nvmeof.py > > Am I missing something? > > BR > BR > Hi, You are not missing anything. Maybe the wording of the nova release note could have been a bit more clear, because what nova has now is the ability to let NVMe-oF use multipathing (once it becomes available in os-brick) for Cinder drivers that support it... There are 2 kinds of multipathing possible with NVMe-oF: - Native (ANA) - Device Mapper (same as with iSCSI and FC) The team is currently working on adding NVMe-oF native multipathing to os-brick [1], and the DM option will be available later, so it's not available on Yoga. Once it becomes available in os-brick, Cinder drivers will need to be updated to use the newer connection information format (except the Kioxia driver that already uses it). Cheers, Gorka.
[1]: https://review.opendev.org/c/openstack/os-brick/+/830800 From bartosz.rabiega at ovhcloud.com Fri Apr 1 09:21:56 2022 From: bartosz.rabiega at ovhcloud.com (bartosz.rabiega at ovhcloud.com) Date: Fri, 1 Apr 2022 11:21:56 +0200 Subject: [nova][os-brick] Yoga - native nvmeof multipath support In-Reply-To: <20220401090313.4smbpanf6aqloqx5@localhost> References: <76d6a151-09d8-9221-4772-ed6100d674cd@ovhcloud.com> <20220401090313.4smbpanf6aqloqx5@localhost> Message-ID: <0cd709f2-92aa-94a7-2513-e48587fd47d7@ovhcloud.com> Oh thanks for the clarification! I'm particularly interested in native multipath (ANA). Are there any plans to have a mechanism for updating/replacing failed paths? E.g. in case when one of the backend paths dies and has to be replaced with a new one - completely new target (let's say cinder driver exposes new path in place of the failed one). Is there a way to participate in the development? BR On 4/1/22 11:03, Gorka Eguileor wrote: > On 01/04, bartosz.rabiega at ovhcloud.com wrote: >> Hello, >> >> I'm trying to figure out how the new feature of Yoga works: >> >> "The libvirt driver now allows using Native NVMeoF multipathing for NVMeoF >> connector, via the configuration attribute in nova-cpu.conf >> [libvirt]/volume_use_multipath, defaulting to False (disabled)." >> >> https://review.opendev.org/c/openstack/nova/+/823941 >> >> It's said that this config value enables native NVMeoF multipathing, but I >> can't see any support for such configuration in os-brick connector. >> >> https://github.com/openstack/os-brick/blob/master/os_brick/initiator/connectors/nvmeof.py >> >> Am I missing something? >> >> BR >> BR >> > > Hi, > > You are not missing anything. > > Maybe the wording of the nova release note could have been a bit more > clear, because what nova has now is the ability to let NVMe-oF use > multipathing (once it becomes available in os-brick) for Cinder drivers > that support it... 
> > There are 2 kinds of multipathing possible with NVMe-oF: > > - Native (ANA) > > - Device Mapper (same as with iSCSI and FC) > > The team is currently working on adding NVMe-oF native multipathing to > os-brick [1], and the DM option will be available later, so it's not > available on Yoga. > > Once it becomes available in os-brick, Cinder drivers will need to be > updated to use the newer connection information format (except the > Kioxia driver that already uses it). > > Cheers, > Gorka. > > > [1]: https://review.opendev.org/c/openstack/os-brick/+/830800 > From elod.illes at est.tech Fri Apr 1 10:29:47 2022 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Fri, 1 Apr 2022 12:29:47 +0200 Subject: [release] Release Team's Zed PTG schedule & topics Message-ID: <22fb1c0e-dad8-1615-f991-4c4cd8e71993@est.tech> Hi, Release team decided to hold our Zed PTG session at the usual weekly Release Team meeting timeframe, so @ Friday 14:00 UTC, April 8th (Liberty Room). Please see the proposed topics (and add yours if you have any!) to our PTG etherpad: https://etherpad.opendev.org/p/april2022-ptg-rel-mgt Reminder: today's weekly meeting is cancelled, see you next week at the PTG! Cheers, Előd From geguileo at redhat.com Fri Apr 1 10:39:23 2022 From: geguileo at redhat.com (Gorka Eguileor) Date: Fri, 1 Apr 2022 12:39:23 +0200 Subject: [nova][os-brick] Yoga - native nvmeof multipath support In-Reply-To: <0cd709f2-92aa-94a7-2513-e48587fd47d7@ovhcloud.com> References: <76d6a151-09d8-9221-4772-ed6100d674cd@ovhcloud.com> <20220401090313.4smbpanf6aqloqx5@localhost> <0cd709f2-92aa-94a7-2513-e48587fd47d7@ovhcloud.com> Message-ID: <20220401103923.jnz5rdeo7c564n5i@localhost> On 01/04, bartosz.rabiega at ovhcloud.com wrote: > Oh thanks for the clarification! > > I'm particularly interested in native multipath (ANA). > Are there any plans to have a mechanism for updating/replacing failed paths? > E.g.
in case when one of the backend paths dies and has to be replaced with > a new one - completely new target (let's say cinder driver exposes new path > in place of the failed one). > > Is there a way to participate in the development? > > BR Hi, There is no plan to provide a generic mechanism to create new paths. Traditional storage systems have N interfaces, and the cinder driver usually exports and maps volumes on ALL available interfaces, so there is no additional path that can be created on the storage system if one of the existing paths fails. If I remember correctly, for non-traditional SDS storage systems that can potentially create additional copies of the volume on a different location and add a new path, the current approach we are taking is having an agent on the system that can talk directly with the storage and update the local nvme, circumventing Cinder. Did you have a specific storage in mind? Right now I believe both lightos and Kioxia have that approach. Today I've started on adding multipathing support to LVM with nvmet so I can test the os-brick multipathing patch. Regarding participating in the development, PLEASE DO! All contributions are welcome (including cloud providers bringing their ideas and concerns). There are upstream guides [1] on how to contribute, but you can always just drop by the Cinder team IRC and video meetings [2], and it just so happens that next week we are going to have the PTG [3]. The PTG is a great opportunity to hear about the current and future OpenStack development efforts, as well as bring topics up for discussion. The Cinder team will be meeting Tuesday to Friday from 13:00 to 17:00 UTC each day. You can see in the topics [4] and schedule [5] that on Friday we'll be discussing existing NVMe-oF issues, which makes me think that we should change the topic to a more generic one, such as "NVMe-oF efforts", since we'll probably discuss the ongoing multipathing effort as well.
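By the way, if anyone wants to check whether the kernel side of native (ANA) multipathing is even enabled on a test host, nvme_core exposes it as a module parameter in sysfs. A tiny illustrative sketch (not os-brick code; only the sysfs path is standard Linux, the helper name is made up):

```python
"""Sketch: check whether the kernel's native NVMe (ANA) multipathing is on.

Illustrative only -- the sysfs location is the standard nvme_core module
parameter; everything else here is a made-up helper for demonstration.
"""
from pathlib import Path

NVME_MULTIPATH_PARAM = Path("/sys/module/nvme_core/parameters/multipath")


def native_nvme_multipath_enabled(param: Path = NVME_MULTIPATH_PARAM) -> bool:
    """Return True if nvme_core was loaded with multipath=Y."""
    try:
        return param.read_text().strip() == "Y"
    except OSError:
        # nvme_core not loaded (or not a Linux host): no native multipathing.
        return False


if __name__ == "__main__":
    print("native NVMe multipath:", native_nvme_multipath_enabled())
```

The same check from a shell is just `cat /sys/module/nvme_core/parameters/multipath`.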
Registration is free [6] and you don't have to attend all days, so I'm looking forward to seeing you there. ;-) Cheers, Gorka. [1]: https://wiki.openstack.org/wiki/How_To_Contribute [2]: https://wiki.openstack.org/wiki/CinderMeetings [3]: https://www.openstack.org/ptg/ [4]: https://etherpad.opendev.org/p/zed-ptg-cinder [5]: https://ethercalc.openstack.org/crz6qdm7fq0v [6]: https://openinfra-ptg.eventbrite.com/ > > > On 4/1/22 11:03, Gorka Eguileor wrote: > > On 01/04, bartosz.rabiega at ovhcloud.com wrote: > > > Hello, > > > > > > I'm trying to figure out how the new feature of Yoga works: > > > > > > "The libvirt driver now allows using Native NVMeoF multipathing for NVMeoF > > > connector, via the configuration attribute in nova-cpu.conf > > > [libvirt]/volume_use_multipath, defaulting to False (disabled)." > > > > > > https://review.opendev.org/c/openstack/nova/+/823941 > > > > > > It's said that this config value enables native NVMeoF multipathing, but I > > > can't see any support for such configuration in os-brick connector. > > > > > > https://github.com/openstack/os-brick/blob/master/os_brick/initiator/connectors/nvmeof.py > > > > > > Am I missing something? > > > > > > BR > > > BR > > > > > > > Hi, > > > > You are not missing anything. > > > > Maybe the wording of the nova release note could have been a bit more > > clear, because what nova has now is the ability to let NVMe-oF use > > multipathing (once it becomes available in os-brick) for Cinder drivers > > that support it... > > > > There are 2 kinds of multipathing possible with NVMe-oF: > > > > - Native (ANA) > > - Device Mapper (same as with iSCSI and FC) > > > > The team is currently working on adding NVMe-oF native multipathing to > > os-brick [1], and the DM option will be available later, so it's not > > available on Yoga.
> > > > Once it becomes available in os-brick, Cinder drivers will need to be > > updated to use the newer connection information format (except the > > Kioxia driver that already uses it). > > > > Cheers, > > Gorka. > > > > > > [1]: https://review.opendev.org/c/openstack/os-brick/+/830800 > > > From rlandy at redhat.com Fri Apr 1 11:06:24 2022 From: rlandy at redhat.com (Ronelle Landy) Date: Fri, 1 Apr 2022 07:06:24 -0400 Subject: [TripleO] gate blocker - impacting all quickstart-based jobs - openstack-ansible-os_tempest In-Reply-To: References: Message-ID: The gate blocker is now cleared. Thank you to all who got the required patches through. On Fri, Apr 1, 2022 at 2:19 AM Marios Andreou wrote: > On Fri, Apr 1, 2022 at 12:14 AM Ronelle Landy wrote: > >> Hello All, >> >> We have a check/gate blocker on all TripleO quickstart-based jobs, as >> described in: >> >> https://bugs.launchpad.net/tripleo/+bug/1967430 >> >> [1] commit to openstack-ansible-os_tempest removed setup.py and >> is causing failings in all quickstart jobs. >> >> A revert was proposed but will not be workable - we are waiting on >> another fix. >> >> Please hold rechecks until this is resolved. >> >> [1] >> https://review.opendev.org/c/openstack/openstack-ansible-os_tempest/+/835969 >> >> > > Unfortunately looks like the core group on that repo is empty [1]. I added > some folks into CC here that merged the original patch. Folks can you > please help us merge the fix at > https://review.opendev.org/c/openstack/ansible-role-python_venv_build/+/836091 > > > TripleO gate is blocked until we > merge ansible-role-python_venv_build/+/836091 > > > please help :D > > > [1] > https://review.opendev.org/admin/groups/3474fc86368161e5288be01295041a089a1060b3,members > > > > > >> Thank you! >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sbauza at redhat.com Fri Apr 1 13:39:56 2022 From: sbauza at redhat.com (Sylvain Bauza) Date: Fri, 1 Apr 2022 15:39:56 +0200 Subject: [nova][placement] PTG schedule attempt Message-ID: Hi Compute-rs (lollilol), Before the PTG, I started to group all related topics into sections and I tried to allocate time against those. Feel free to take a look at the first schedule attempt https://etherpad.opendev.org/p/nova-zed-ptg#L45 As you will see in the etherpad, I added for every topic a courtesy ping list. Add your IRC nick in there if you are interested in attending this session, I'll ping folks at the start of the related session so people who can't be around for 3 hours can know when we discuss. Of course, I at least added the speaker ;-) -Sylvain -------------- next part -------------- An HTML attachment was scrubbed... URL: From moreira.belmiro.email.lists at gmail.com Fri Apr 1 14:01:51 2022 From: moreira.belmiro.email.lists at gmail.com (Belmiro Moreira) Date: Fri, 1 Apr 2022 16:01:51 +0200 Subject: [ptg][ptl][largescale-sig][all] PTG session: "The Scaling Journey" - Wednesday, April 6 15utc - kilo room Message-ID: Hi, the Large Scale SIG is organizing a PTG session to discuss "The Scaling Journey". The SIG worked on a "scaling journey" to guide and help operators to scale their OpenStack deployments. There are definitely different ways to scale OpenStack, and the challenges of moving from a few hundred cores to thousands, or now millions, of cores are completely different. Based on the experience of several operators, we tried to answer different common questions and identify the pain points. https://wiki.openstack.org/wiki/Large_Scale_SIG If you are interested in scalability it would be great to have your feedback. Also, it's important that PTLs join because they can give the project vision for scalability and advise on how to overcome possible bottlenecks. To discuss all of this we will have a "zoom" session on Wednesday, April 6 15utc - "kilo room".
See you there! cheers, Belmiro on behalf of the Large Scale SIG -------------- next part -------------- An HTML attachment was scrubbed... URL: From federica.fanzago at pd.infn.it Fri Apr 1 14:05:47 2022 From: federica.fanzago at pd.infn.it (federica fanzago) Date: Fri, 1 Apr 2022 16:05:47 +0200 Subject: [horizon] [glance] [ops] Openstack Xena: error deleting image and snapshot via dashboard (as user and as admin) Message-ID: Hi all, we have installed OpenStack Xena in our cloud infrastructure (OS CentOS Stream 8) and we found a problem with deleting images and snapshots via the dashboard. The delete command returns "Error: Unable to delete Image:xxx". Via the command line the delete works fine. Looking in the glance logs I don't find any error message. The only error message is in an httpd log "ssl_error_log:[Thu Mar 31 09:54:28.857689 2022] [wsgi:error] [pid 315710:tid 140368604612352] [remote 192.168.60.229:59240] Internal Server Error: /dashboard/api/glance/images/9d519edb-5690-4660-a6e0-e44f9e2ab58f/" but it doesn't provide useful info for debugging. The creation of images via the dashboard works fine. Have you experienced this problem? Do you have any suggestions about it? Thanks, cheers Federica From mark at stackhpc.com Fri Apr 1 15:45:11 2022 From: mark at stackhpc.com (Mark Goddard) Date: Fri, 1 Apr 2022 16:45:11 +0100 Subject: [monasca][neutron][venus][release][kolla] Missing yoga tarballs In-Reply-To: References: Message-ID: Thanks! On Fri, 1 Apr 2022 at 09:51, Bernard Cafarelli wrote: > Thanks to Lajos for the patch dropping the lower constraints job in > stable/yoga [1], neutron-vpnaas-dashboard should have the release patches > merged soon (in gate) > > [1] > https://review.opendev.org/c/openstack/neutron-vpnaas-dashboard/+/836044/ > > On Thu, 31 Mar 2022 at 15:40, Lajos Katona wrote: > >> Hi, >> Thanks for the heads up. >> I will check neutron-vpnaas-dashboard, but it seems we lost the >> maintainers of vpnaas, so even neutron-vpnaas gate seems to be red >> currently.
>> We will anyway spend a slot on such low-activity networking projects. >> >> Lajos Katona (lajoskatona) >> >> Mark Goddard wrote (on Thu, 31 Mar 2022 at >> 11:38): >> >>> Hi, >>> >>> In kolla we are trying to switch to the yoga branch tarballs in our >>> images as part of our release process. The following projects lack these: >>> >>> * monasca-api: stable/yoga exists, but no additional commits yet, so >>> no tarball. Could someone merge the release bot patches for stable/yoga? >>> >>> * neutron-vpnaas-dashboard: stable/yoga exists, but no additional >>> commits yet, so no tarball. Could someone merge the release bot patches for >>> stable/yoga? >>> >>> * venus: no stable/yoga branch yet. Looks like the review for the >>> deliverable files is still open: >>> https://review.opendev.org/c/openstack/releases/+/824394 >>> >>> Thanks, >>> Mark >>> >> > > -- > Bernard Cafarelli > -------------- next part -------------- An HTML attachment was scrubbed... URL: From haleyb.dev at gmail.com Fri Apr 1 16:30:31 2022 From: haleyb.dev at gmail.com (Brian Haley) Date: Fri, 1 Apr 2022 12:30:31 -0400 Subject: [neutron] [kolla] Static routes added to subnets after upgrading from Queens to Train In-Reply-To: <1362758790.769385.1648757200650@mail.yahoo.com> References: <997697892.1640950.1648587862872.ref@mail.yahoo.com> <997697892.1640950.1648587862872@mail.yahoo.com> <242099798.347726.1648668427206@mail.yahoo.com> <2f5ea4b4-73a3-b6cc-2e7c-e6b7114117e4@gmail.com> <1362758790.769385.1648757200650@mail.yahoo.com> Message-ID: <0f0d71f2-23a6-5d6b-5cef-605e25a7b26d@gmail.com> Hi Albert, Thanks for the command line, it helped me track down the code in neutron that changed, and it was really the --network-segment arg that is triggering this along with --gateway (and I haven't defined any segments so don't see it in my setup). Anyways, there are a few changes that added the update of host routes in the segment plugin code to support routed networks better.
Looking at https://bugs.launchpad.net/neutron/+bug/1766380 shows them all, but https://review.opendev.org/c/openstack/neutron/+/570405/ and https://review.opendev.org/c/openstack/neutron/+/573897 were the two main ones. It doesn't look like there's a way to disable it, but I cc'd Harald to get his thoughts on it. My only follow-on question would be are these host routes causing an issue or just something that was noticed in your upgrade? Thanks, -Brian On 3/31/22 16:06, Albert Braden wrote: > Here's what I get when I create a 4th subnet: > > $ openstack network segment create --physical-network physnet_bo-az3 > --network-type vlan --segment 1115 --network trust trust-az4 > +------------------+--------------------------------------+ > | Field | Value | > +------------------+--------------------------------------+ > | description | | > | id | 92355e6d-3406-4b29-a956-1b05c4c9a33e | > | name | private-provider-trust-az4 | > | network_id | ac30a487-bccc-c3de-93eb-c422ad9f3ce5 | > | network_type | vlan | > | physical_network | physnet_bo-az3 | > | segmentation_id | 1115 | > +------------------+--------------------------------------+ > > $ openstack subnet create --no-dhcp --network private-provider-trust > --network-segment private-provider-trust-az4 --ip-version 4 > --allocation-pool start=10.52.172.14,end=10.52.172.235 --subnet-range > 10.52.172.0/22 --dns-nameserver 10.10.10.10 --gateway 10.52.172.1 > private-provider-trust-az4-subnet > +----------------------+------------------------------------------------------+ > | Field | Value | > +----------------------+------------------------------------------------------+ > | allocation_pools | 10.52.172.10-10.52.172.245 | > | cidr | 10.52.172.0/22 | > | created_at | 2022-03-31T19:26:48Z | > | description | | > | dns_nameservers | 10.10.10.10 | > | dns_publish_fixed_ip | None | > | enable_dhcp | False | > | gateway_ip | 10.52.172.1 | > | host_routes | destination='10.52.160.0/22', gateway='10.52.172.1' | > | |
destination='10.52.164.0/22', gateway='10.52.172.1' | > | | destination='10.52.168.0/22', gateway='10.52.172.1' | > | id | 04a15cdd-d22b-4e58-8bbd-8b956d8c10ba | > | ip_version | 4 | > | ipv6_address_mode | None | > | ipv6_ra_mode | None | > | name | private-provider-trust-az4-subnet | > | network_id | ac30a487-bccc-4ac5-93eb-c422ad9f3ce5 | > | prefix_length | None | > | project_id | 561e8d2236634ece81ffa22203e80dc7 | > | revision_number | 0 | > | segment_id | 92355e6d-a5de-4b29-a956-1b05c4c9a33e | > | service_types | | > | subnetpool_id | None | > | tags | | > | updated_at | 2022-03-31T19:26:48Z | > +----------------------+------------------------------------------------------+ > > If I create the 4th subnet without specifying a gateway, then the routes > are not created. It looks like this may be what changed from Queens to > Train: > > $ openstack subnet create --no-dhcp --network private-provider-trust > --network-segment private-provider-trust-az4 --ip-version 4 > --allocation-pool start=10.52.172.10,end=10.52.172.245 --subnet-range > 10.52.172.0/22 --dns-nameserver 10.10.10.10 > private-provider-trust-az4-subnet > +----------------------+--------------------------------------+ > | Field | Value | > +----------------------+--------------------------------------+ > | allocation_pools | 10.52.172.10-10.52.172.245 | > | cidr | 10.52.172.0/22 | > | created_at | 2022-03-31T20:00:44Z | > | description | | > | dns_nameservers | 10.10.10.10 | > | dns_publish_fixed_ip | None | > | enable_dhcp | False | > | gateway_ip | 10.52.172.1 | > | host_routes | | > | id | 11757c89-2057-4c7c-9730-9b7d976e361e | > | ip_version | 4 | > | ipv6_address_mode | None | > | ipv6_ra_mode | None | > | name | private-provider-trust-az4-subnet | > | network_id | ac30a487-bccc-4ac5-93eb-c422ad9f3ce5 | > | prefix_length | None | > | project_id | 561e8d2236634ece81ffa22203e80dc7 | > | revision_number | 0 | > | segment_id | 92355e6d-a5de-4b29-a956-1b05c4c9a33e | > | service_types | | > | 
subnetpool_id | None | > | tags | | > | updated_at | 2022-03-31T20:00:44Z | > +----------------------+--------------------------------------+ > On Wednesday, March 30, 2022, 09:01:23 PM EDT, Brian Haley > wrote: > > > Hi, > > On 3/30/22 15:27, Albert Braden wrote: > > The command that we use to create subnets looks like this: > > > > openstack subnet create --no-dhcp --network trust --network-segment > > trust-az1-seg --ip-version 4 --allocation-pool > > start=10.52.160.14,end=10.52.160.235 --subnet-range 10.52.160.0/24 > > --dns-nameserver 10.10.10.10 --gateway 10.52.160.1 trust-az1 > > Since you're not specifying --host-route there should be none, can you > paste the created object returned from this call since for me > host_routes is blank (see below). > > > My co-workers tell me that we also specified "--gateway" when we created > > our Queens subnets, but this did not cause static routes to be created. > > Did the handling of "--gateway" change from Queens to Train? > > I don't believe so, and --gateway will default to the first IP in the > subnet if not given so isn't required. > > -Brian > > > $ openstack subnet create --subnet-pool > f5e3f133-a932-4adc-9592-0b525aec278f --network private private-subnet-2 > +----------------------+--------------------------------------+ > | Field | Value | > +----------------------+--------------------------------------+ > | allocation_pools | 10.0.0.66-10.0.0.126 | > | cidr | 10.0.0.64/26 | > | created_at | 2022-03-30T17:38:40Z | > | description | | > | dns_nameservers | | > | dns_publish_fixed_ip | None | > | enable_dhcp | True | > | gateway_ip | 10.0.0.65 | > | host_routes | | > | id | ce09a038-b918-4208-9a3d-c8c259ae7433 | > | ip_version | 4 | > | ipv6_address_mode | None | > | ipv6_ra_mode | None | > | name | private-subnet-2 | > | network_id | baf6c62d-4cec-464e-a768-253074df8879 | > | project_id | 657e6d647c0446438c1f06da70d79bed | > | revision_number | 0 | > | segment_id | None | > | service_types | | > | subnetpool_id | f5e3f133-a932-4adc-9592-0b525aec278f | > | tags | | > | updated_at | 2022-03-30T17:38:40Z | > +----------------------+--------------------------------------+ > > > On Wednesday, March 30, 2022, 01:45:52 PM EDT, Brian Haley > > wrote: > > > > > > Hi Albert, > > > > On 3/29/22 17:04, Albert Braden wrote: > > > After upgrading our kolla-ansible clusters from Queens to Train, we > > are seeing static routes when we create subnets. We didn't see this in > > Queens. For example, in our de6 region we have a network called 'trust' > > with 3 subnets: > > > > > > Subnet CIDR Gateway > > > trust-az1: 10.52.160.0/22 10.52.160.1 > > > trust-az2: 10.52.164.0/22 10.52.164.1 > > > trust-az3: 10.52.168.0/22 10.52.168.1 > > > > > > Each of these subnets has 2 entries under 'host_routes:' that point > > to the other two subnets. For example, subnet trust-az1 has these two > > routes: > > > > > > host_routes | destination='10.52.164.0/22', > > gateway='10.52.160.1' | > > > | | destination='10.52.168.0/22', > > gateway='10.52.160.1' | > > > > > > How can we prevent these host routes from being created in Train? Do > > we need to change something in our config? > > > > From the neutron side of things, host_routes of a subnet is not > > automatically calculated and filled-in, they have to be manually added.
> > So perhaps this is something kolla is doing? At least on my Yoga setup > > it is completely blank using 'openstack subnet create ...' even with > > multiple subnets on a network. > > > > How exactly are the subnets getting created? > > > > -Brian > > > From mdemaced at redhat.com Fri Apr 1 17:05:37 2022 From: mdemaced at redhat.com (Maysa De Macedo Souza) Date: Fri, 1 Apr 2022 19:05:37 +0200 Subject: [Kuryr] Zed PTG schedule Message-ID: Hello, The Kuryr agenda [1] with the topics to be discussed at the PTG is up. Feel free to include any topics you're interested in discussing. See you at the PTG! Cheers, Maysa Macedo [1] https://etherpad.opendev.org/p/april2022-ptg-kuryr -------------- next part -------------- An HTML attachment was scrubbed... URL: From iurygregory at gmail.com Fri Apr 1 17:47:02 2022 From: iurygregory at gmail.com (Iury Gregory) Date: Fri, 1 Apr 2022 14:47:02 -0300 Subject: [ironic] Ironic Zed PTG Schedule In-Reply-To: References: Message-ID: Hello everyone, Our schedule is available in the etherpad [1], see you next week! [1] https://etherpad.opendev.org/p/ironic-zed-ptg Iury Gregory wrote (on Tue, 29 Mar 2022 at 14:25): > Hello Ironicers and Stackers! > > I've split the topics we had in our etherpad [1] into the booked slots. > You can find the proposed schedule in [2], please provide feedback till > Thursday so I can make changes and provide the official schedule by Friday > morning =). > If you haven't registered yourself for the PTG, please do [3].
> > [1] https://etherpad.opendev.org/p/ironic-zed-ptg > [2] https://paste.opendev.org/show/bosAputU90U8RNKUbvKd/ > [3] https://openinfra-ptg.eventbrite.com/ > > -- > > > *Att[]'s Iury Gregory Melo Ferreira * > *MSc in Computer Science at UFCG* > *Part of the ironic-core and puppet-manager-core team in OpenStack* > *Software Engineer at Red Hat Czech* > *Social*: https://www.linkedin.com/in/iurygregory > *E-mail: iurygregory at gmail.com * > -- *Att[]'s Iury Gregory Melo Ferreira * *MSc in Computer Science at UFCG* *Part of the ironic-core and puppet-manager-core team in OpenStack* *Software Engineer at Red Hat Czech* *Social*: https://www.linkedin.com/in/iurygregory *E-mail: iurygregory at gmail.com * -------------- next part -------------- An HTML attachment was scrubbed... URL: From ozzzo at yahoo.com Fri Apr 1 17:50:14 2022 From: ozzzo at yahoo.com (Albert Braden) Date: Fri, 1 Apr 2022 17:50:14 +0000 (UTC) Subject: [neutron] [kolla] Static routes added to subnets after upgrading from Queens to Train In-Reply-To: <0f0d71f2-23a6-5d6b-5cef-605e25a7b26d@gmail.com> References: <997697892.1640950.1648587862872.ref@mail.yahoo.com> <997697892.1640950.1648587862872@mail.yahoo.com> <242099798.347726.1648668427206@mail.yahoo.com> <2f5ea4b4-73a3-b6cc-2e7c-e6b7114117e4@gmail.com> <1362758790.769385.1648757200650@mail.yahoo.com> <0f0d71f2-23a6-5d6b-5cef-605e25a7b26d@gmail.com> Message-ID: <278507007.1104672.1648835414452@mail.yahoo.com> Thanks for your help and advice. It doesn't appear to be causing any serious problems. The customer who complained about it was worried that it could interfere with traffic routing for hosts with dual interfaces, but he worked around that by deleting the routes from route- on his VMs and that seems to have worked. For future cluster builds we will prevent the routes from being created by not specifying --gateway and letting it default to the 1st IP in the subnet.
Now that I understand what is happening, I don't think that this is necessarily a bad change; we just need to change our config to match the new code. On Friday, April 1, 2022, 12:38:07 PM EDT, Brian Haley wrote: Hi Albert, Thanks for the command line, it helped me track down the code in neutron that changed, and it was really the --network-segment arg that is triggering this along with --gateway (and I haven't defined any segments so don't see it in my setup). Anyways, there are a few changes that added the update of host routes in the segment plugin code to support routed networks better. Looking at https://bugs.launchpad.net/neutron/+bug/1766380 shows them all, but https://review.opendev.org/c/openstack/neutron/+/570405/ and https://review.opendev.org/c/openstack/neutron/+/573897 were the two main ones. It doesn't look like there's a way to disable it, but I cc'd Harald to get his thoughts on it. My only follow-on question would be are these host routes causing an issue or just something that was noticed in your upgrade? 
Thanks, -Brian On 3/31/22 16:06, Albert Braden wrote: > Here's what I get when I create a 4th subnet: > > $ openstack network segment create --physical-network physnet_bo-az3 > --network-type vlan --segment 1115 --network trust trust-az4 > +------------------+--------------------------------------+ > | Field | Value | > +------------------+--------------------------------------+ > | description | | > | id | 92355e6d-3406-4b29-a956-1b05c4c9a33e | > | name | private-provider-trust-az4 | > | network_id | ac30a487-bccc-c3de-93eb-c422ad9f3ce5 | > | network_type | vlan | > | physical_network | physnet_bo-az3 | > | segmentation_id | 1115 | > +------------------+--------------------------------------+ > > $ openstack subnet create --no-dhcp --network private-provider-trust > --network-segment private-provider-trust-az4 --ip-version 4 > --allocation-pool start=10.52.172.14,end=10.52.172.235 --subnet-range > 10.52.172.0/22 --dns-nameserver 10.10.10.10 --gateway 10.52.172.1 > private-provider-trust-az4-subnet > +----------------------+------------------------------------------------------+ > | Field | Value | > +----------------------+------------------------------------------------------+ > | allocation_pools | 10.52.172.10-10.52.172.245 | > | cidr | 10.52.172.0/22 | > | created_at | 2022-03-31T19:26:48Z | > | description | | > | dns_nameservers | 10.10.10.10 | > | dns_publish_fixed_ip | None | > | enable_dhcp | False | > | gateway_ip | 10.52.172.1 | > | host_routes | destination='10.52.160.0/22', gateway='10.52.172.1' | > | | destination='10.52.164.0/22', gateway='10.52.172.1' | > | | destination='10.52.168.0/22', gateway='10.52.172.1' | > | id | 04a15cdd-d22b-4e58-8bbd-8b956d8c10ba | > | ip_version | 4 | > | ipv6_address_mode | None | > | ipv6_ra_mode | None | > | name | private-provider-trust-az4-subnet | > | network_id | ac30a487-bccc-4ac5-93eb-c422ad9f3ce5 | > | prefix_length | None | > | project_id | 561e8d2236634ece81ffa22203e80dc7 | > | revision_number | 0 | > | 
segment_id | 92355e6d-a5de-4b29-a956-1b05c4c9a33e | > | service_types | | > | subnetpool_id | None | > | tags | | > | updated_at | 2022-03-31T19:26:48Z | > +----------------------+------------------------------------------------------+ > > If I create the 4th subnet without specifying a gateway, then the routes > are not created. It looks like this may be what changed from Queens to > Train: > > $ openstack subnet create --no-dhcp --network private-provider-trust > --network-segment private-provider-trust-az4 --ip-version 4 > --allocation-pool start=10.52.172.10,end=10.52.172.245 --subnet-range > 10.52.172.0/22 --dns-nameserver 10.10.10.10 > private-provider-trust-az4-subnet > +----------------------+--------------------------------------+ > | Field | Value | > +----------------------+--------------------------------------+ > | allocation_pools | 10.52.172.10-10.52.172.245 | > | cidr | 10.52.172.0/22 | > | created_at | 2022-03-31T20:00:44Z | > | description | | > | dns_nameservers | 10.10.10.10 | > | dns_publish_fixed_ip | None | > | enable_dhcp | False | > | gateway_ip | 10.52.172.1 | > | host_routes | | > | id | 11757c89-2057-4c7c-9730-9b7d976e361e | > | ip_version | 4 | > | ipv6_address_mode | None | > | ipv6_ra_mode | None | > | name | private-provider-trust-az4-subnet | > | network_id | ac30a487-bccc-4ac5-93eb-c422ad9f3ce5 | > | prefix_length | None | > | project_id | 561e8d2236634ece81ffa22203e80dc7 | > | revision_number | 0 | > | segment_id | 92355e6d-a5de-4b29-a956-1b05c4c9a33e | > | service_types | | > | subnetpool_id | None | > | tags | | > | updated_at | 2022-03-31T20:00:44Z | > +----------------------+--------------------------------------+ > On Wednesday, March 30, 2022, 09:01:23 PM EDT, Brian Haley > wrote: > > > Hi, > > On 3/30/22 15:27, Albert Braden wrote: >? > The command that we use to create subnets looks like this: >? > >? > openstack subnet create --no-dhcp --network trust --network-segment >? 
> trust-az1-seg --ip-version 4 --allocation-pool >? > start=10.52.160.14,end=10.52.160.235 --subnet-range 10.52.160.0/24 >? > --dns-nameserver 10.10.10.10 --gateway 10.52.160.1 trust-az1 > > Since you're not specifying --host-route there should be none, can you > paste the created object returned from this call since for me > host_routes is blank (see below). > >? > My co-workers tell me that we also specified "--gateway" when we created >? > our Queens subnets, but this did not cause static routes to be created. >? > Did the handling of "--gateway" change from Queens to Train? > > I don't believe so, and --gateway will default to the first IP in the > subnet if not given so isn't required. > > -Brian > > > $ openstack subnet create --subnet-pool > f5e3f133-a932-4adc-9592-0b525aec278f --network private private-subnet-2 > +----------------------+---------------------------+ > | Field? ? ? ? ? ? ? ? | Value? ? ? ? ? ? ? ? ? ? | > +----------------------+---------------------------+ > | allocation_pools? ? | 10.0.0.66-10.0.0.126? ? ? | > | cidr? ? ? ? ? ? ? ? | 10.0.0.64/26? ? ? ? ? ? ? | > | created_at? ? ? ? ? | 2022-03-30T17:38:40Z? ? ? | > | description? ? ? ? ? |? ? ? ? ? ? ? ? ? ? ? ? ? | > | dns_nameservers? ? ? |? ? ? ? ? ? ? ? ? ? ? ? ? | > | dns_publish_fixed_ip | None? ? ? ? ? ? ? ? ? ? ? | > | enable_dhcp? ? ? ? ? | True? ? ? ? ? ? ? ? ? ? ? | > | gateway_ip? ? ? ? ? | 10.0.0.65? ? ? ? ? ? ? ? | > | host_routes? ? ? ? ? |? ? ? ? ? ? ? ? ? ? ? ? ? | > | id? ? ? ? ? ? ? ? ? | ce09a038-b918-4208-9a3d-c8c259ae7433 | > | ip_version? ? ? ? ? | 4? ? ? ? ? ? ? ? ? ? ? ? | > | ipv6_address_mode? ? | None? ? ? ? ? ? ? ? ? ? ? | > | ipv6_ra_mode? ? ? ? | None? ? ? ? ? ? ? ? ? ? ? | > | name? ? ? ? ? ? ? ? | private-subnet-2? ? ? ? ? | > | network_id? ? ? ? ? | baf6c62d-4cec-464e-a768-253074df8879 | > | project_id? ? ? ? ? | 657e6d647c0446438c1f06da70d79bed | > | revision_number? ? ? | 0? ? ? ? ? ? ? ? ? ? ? ? | >? ? ? ? ? | segment_id? ? ? ? ? | None? ? ? ? ? ? ? ? ? 
    | > > | service_types        |                          | > | subnetpool_id        | f5e3f133-a932-4adc-9592-0b525aec278f | > | tags                 |                          | > | updated_at           | 2022-03-30T17:38:40Z      | > > +----------------------+---------------------------+ > > > On Wednesday, March 30, 2022, 01:45:52 PM EDT, Brian Haley > > > wrote: > > > > > > Hi Albert, > > > > On 3/29/22 17:04, Albert Braden wrote: > > > After upgrading our kolla-ansible clusters from Queens to Train, we > > are seeing static routes when we create subnets. We didn't see this in > > Queens. For example, in our de6 region we have a network called 'trust' > > with 3 subnets: > > > > > > Subnet                CIDR                                  Gateway > > > trust-az1:            10.52.160.0/22  10.52.160.1 > > > trust-az2:            10.52.164.0/22  10.52.164.1 > > > trust-az3:            10.52.168.0/22  10.52.168.1 > > > > > > Each of these subnets has 2 entries under 'host_routes:' that point > > to the other two subnets. For example, subnet trust-az1 has these two > > routes: > > > > > > host_routes          | destination='10.52.164.0/22', > > gateway='10.52.160.1' | > > > |                      | destination='10.52.168.0/22', > > gateway='10.52.160.1' | > > > > > > How can we prevent these host routes from being created in Train? Do > > we need to change something in our config? > > > > > > From the neutron side of things, host_routes of a subnet is not > > automatically calculated and filled-in, they have to be manually added. > > So perhaps this is something kolla is doing? At least on my Yoga setup > > it is completely blank using 'openstack subnet create ...' even with > > multiple subnets on a network. > > > > How exactly are the subnets getting created? > > > > -Brian > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gmann at ghanshyammann.com Fri Apr 1 21:09:26 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 01 Apr 2022 16:09:26 -0500 Subject: [all][tc][goal][rbac] RBAC goal discussion in Zed PTG Message-ID: <17fe6f627a2.d1e1e5e888307.2751909585242263050@ghanshyammann.com> Hello Everyone, I have collected few of the RBAC related sessions planned in Zed PTG. Please add related sessions in below etherpad if you are planning in your project PTG. - https://etherpad.opendev.org/p/rbac-zed-ptg Also, if you have or come up with questions/discussion points during PTG, feel free to add it in same etherpad (section "Open Questions") and we will discuss it in TC PTG slot on Thursday 14-15 UTC - https://etherpad.opendev.org/p/tc-zed-ptg#L71 -gmann From gmann at ghanshyammann.com Fri Apr 1 21:18:21 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 01 Apr 2022 16:18:21 -0500 Subject: [all][tc] Zed TC-PTG Planning In-Reply-To: <17f66331e29.fda178e1386814.8033979575523031096@ghanshyammann.com> References: <17f22bcc50d.dc214987454540.4578480937628902952@ghanshyammann.com> <17f66331e29.fda178e1386814.8033979575523031096@ghanshyammann.com> Message-ID: <17fe6fe53b2.b080215e88453.9008002524468167442@ghanshyammann.com> ---- On Mon, 07 Mar 2022 15:05:00 -0600 Ghanshyam Mann wrote ---- > ---- On Tue, 22 Feb 2022 12:41:10 -0600 Ghanshyam Mann wrote ---- > > Hello Everyone, > > > > As you already know that the Zed cycle virtual PTG will be held between 4th - 8th April[1]. > > > > I have started the preparation for the Technical Committee PTG sessions. Please do the following: > > > > 1. Fill the below doodle poll as per your availability. Please fill it in soon as the deadline to book the slot is March 11th. > > > > - https://doodle.com/poll/gz4dy67vmew5wmn9 > > > > 2. Add the topics you would like to discuss to the below etherpad. 
> > > > - https://etherpad.opendev.org/p/tc-zed-ptg > > > > NOTE: this is not limited to TC members only; I would like all community members to > > fill the doodle poll and, add the topics you would like or want TC members to discuss in PTG. > > As discussed on IRC, I have booked below slots for TC discussion: > > * Monday 14 - 16 UTC (TC + PTLfor 2 hrs ) > * Thursday 13-17 UTC (TC discussion for 4 hrs) > * Friday 13-17 UTC (TC discussion for 4 hrs ) I tried to group the related topic and prepared the rough schedule for TC discussions: - https://etherpad.opendev.org/p/tc-zed-ptg Please check and let me know if any of the topic you want me to re-schedule. -gmann > > -gmann > > > > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2022-February/027051.html > > > > -gmann > > > > > > > > From gmann at ghanshyammann.com Fri Apr 1 21:34:23 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 01 Apr 2022 16:34:23 -0500 Subject: [all][tc] What's happening in Technical Committee: summary April 1st, 21: Reading: 10 min Message-ID: <17fe70d022a.b9ebbba488687.8608763910627979237@ghanshyammann.com> Hello Everyone, Here is this week's summary of the Technical Committee activities. 1. TC Meetings: ============ * We canceled this week meeting. * Next TC weekly meeting will be after PTG on April 14th Thursday 15:00 UTC, feel free to add the topic on the agenda[1] by April 13th. 2. What we completed this week: ========================= * Removed the tags framework[2]. * Defined 2022 upstream investment opportunities[3]. 3. Activities In progress: ================== TC Tracker for Yoga cycle ------------------------------ * This etherpad includes the Yoga cycle targets/working items for TC[4]. 6 out of 9 items are completed and many are in progress. Open Reviews ----------------- * Seven open reviews for ongoing activities[5]. 
Consistent and Secure Default RBAC -------------------------------------------- I have collected a few of the RBAC-related topics in the RBAC PTG etherpad[6]; please also add any questions or discussion points to that etherpad if you would like to discuss them with the TC on Thursday at 14 UTC. Community-wide goals readiness checklist -------------------------------------------------- Based on the feedback we received in the Yoga PTG TC+PTL interaction session, I have proposed a checklist to help us determine whether a goal is ready to start[7]; please review it and let us know if more items need to be added. Zed cycle Leaderless projects ---------------------------------- No updates on this. I am still waiting for the Zaqar PTL's +1 on the PTL appointment patch[8], and we will discuss Adjutant in the PTG, hoping Braden will be ready with their company-side permission[9]. PTG Preparation -------------------- I have prepared the schedule and updated the etherpad[10]; please check it, and see you all next week. Fixing Zuul config error ---------------------------- Projects with Zuul config errors are requested to look into and fix them, which should not take much time[11]. Project updates ------------------- * Add Ganesha based Ceph NFS Charm[12] 4. How to contact the TC: ==================== If you would like to discuss or give feedback to TC, you can reach out to us in multiple ways: 1. Email: you can send email tagged [tc] to the openstack-discuss ML[13]. 2. Weekly meeting: The Technical Committee conducts a weekly meeting every Thursday 15 UTC [14] 3. Ping us using 'tc-members' nickname on #openstack-tc IRC channel. 
[1] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting [2] https://review.opendev.org/c/openstack/governance/+/834798 [3] https://governance.openstack.org/tc/reference/upstream-investment-opportunities/2022/index.html [4] https://etherpad.opendev.org/p/tc-yoga-tracker [5] https://review.opendev.org/q/projects:openstack/governance+status:open [6] https://etherpad.opendev.org/p/rbac-zed-ptg [7] https://review.opendev.org/c/openstack/governance/+/835102 [8] https://review.opendev.org/c/openstack/governance/+/831123 [9] http://lists.openstack.org/pipermail/openstack-discuss/2022-March/027626.html [10] https://etherpad.opendev.org/p/tc-zed-ptg [11] https://etherpad.opendev.org/p/zuul-config-error-openstack [12] https://review.opendev.org/c/openstack/governance/+/835429 [13] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [14] http://eavesdrop.openstack.org/#Technical_Committee_Meeting -gmann From laurentfdumont at gmail.com Sat Apr 2 13:58:01 2022 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Sat, 2 Apr 2022 09:58:01 -0400 Subject: Support to allow only boot from cinder volumes Message-ID: Hey folks, I am trying to see if there are any ways to instruct Openstack to prevent the usage of local storage/ephemeral disks. There are cases where : - I don't want the added complexity of Ceph. - I don't want the added hassle of using local volumes/nfs/shared storage on the computes directly. In an ideal world, creating a VM would mean that you always have a boot-from-volume with the volume being in your chosen backend. I've seen this spec : https://blueprints.launchpad.net/nova/+spec/flavor-root-disk-none But it doesn't seem to have survived the Ocata cycle. Any thoughts? Thanks! -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From laurentfdumont at gmail.com Sat Apr 2 14:02:05 2022 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Sat, 2 Apr 2022 10:02:05 -0400 Subject: Openstack Xena: error deleting image and snapshot via dashboard (as user and as admin) In-Reply-To: References: Message-ID: I would look at the Horizon logs since it will be the one interacting with Glance through the UI. But Glance should show you the image delete attempt. On Thu, Mar 31, 2022 at 9:59 AM federica fanzago < federica.fanzago at pd.infn.it> wrote: > Hi all, > > we have installed Openstack Xena in our cloud infrastructure (OS > Centos-Stream 8) and we find a problem with the delete of images and > snapshots via dashboard. The delete command returns "Error: Unable to > delete Image:xxx" > > Via command line the delete works well. > > Looking in glance logs I don't find any error message. > > Did you experienced this problem? Have you suggestions about it? > > Thanks, > > cheers > > Federica > > > -- > Federica Fanzago > INFN Sezione di Padova > Via Marzolo, 8 > 35131 Padova - Italy > > Tel: +39 049.967.7367 > -- > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From massimo.sgaravatto at gmail.com Sat Apr 2 14:16:03 2022 From: massimo.sgaravatto at gmail.com (Massimo Sgaravatto) Date: Sat, 2 Apr 2022 16:16:03 +0200 Subject: Openstack Xena: error deleting image and snapshot via dashboard (as user and as admin) In-Reply-To: References: Message-ID: Let me answer on behalf of Federica (my colleague). Nothing is reported in horizon log. There is only this message in one httpd log: wsgi:error] [pid 315710:tid 140368481855232] [remote 192.168.60.229:41096] Internal Server Error: /dashboard/api/glance/images/b57b57ce-b484-4b75-8038-6b5f28b4dc18/ Regards, Massimo On Sat, Apr 2, 2022 at 4:07 PM Laurent Dumont wrote: > I would look at the Horizon logs since it will be the one interacting with > Glance through the UI. 
> > But Glance should show you the image delete attempt. > > On Thu, Mar 31, 2022 at 9:59 AM federica fanzago < > federica.fanzago at pd.infn.it> wrote: > >> Hi all, >> >> we have installed Openstack Xena in our cloud infrastructure (OS >> Centos-Stream 8) and we find a problem with the delete of images and >> snapshots via dashboard. The delete command returns "Error: Unable to >> delete Image:xxx" >> >> Via command line the delete works well. >> >> Looking in glance logs I don't find any error message. >> >> Did you experienced this problem? Have you suggestions about it? >> >> Thanks, >> >> cheers >> >> Federica >> >> >> -- >> Federica Fanzago >> INFN Sezione di Padova >> Via Marzolo, 8 >> 35131 Padova - Italy >> >> Tel: +39 049.967.7367 >> -- >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From laurentfdumont at gmail.com Sat Apr 2 14:18:17 2022 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Sat, 2 Apr 2022 10:18:17 -0400 Subject: Openstack Xena: error deleting image and snapshot via dashboard (as user and as admin) In-Reply-To: References: Message-ID: That is a bit strange. Can you turn on DEBUG for glance + Horizon (not sure if it's possible for that) and retry? On Sat, Apr 2, 2022 at 10:16 AM Massimo Sgaravatto < massimo.sgaravatto at gmail.com> wrote: > Let me answer on behalf of Federica (my colleague). Nothing is reported in > horizon log. There is only this message in one httpd log: > > wsgi:error] [pid 315710:tid 140368481855232] [remote 192.168.60.229:41096] > Internal Server Error: > /dashboard/api/glance/images/b57b57ce-b484-4b75-8038-6b5f28b4dc18/ > > Regards, Massimo > > On Sat, Apr 2, 2022 at 4:07 PM Laurent Dumont > wrote: > >> I would look at the Horizon logs since it will be the one interacting >> with Glance through the UI. >> >> But Glance should show you the image delete attempt. 
>> >> On Thu, Mar 31, 2022 at 9:59 AM federica fanzago < >> federica.fanzago at pd.infn.it> wrote: >> >>> Hi all, >>> >>> we have installed Openstack Xena in our cloud infrastructure (OS >>> Centos-Stream 8) and we found a problem with the delete of images and >>> snapshots via dashboard. The delete command returns "Error: Unable to >>> delete Image:xxx" >>> >>> Via command line the delete works well. >>> >>> Looking in glance logs I don't find any error message. >>> >>> Did you experience this problem? Do you have any suggestions about it? >>> >>> Thanks, >>> >>> cheers >>> >>> Federica >>> >>> >>> -- >>> Federica Fanzago >>> INFN Sezione di Padova >>> Via Marzolo, 8 >>> 35131 Padova - Italy >>> >>> Tel: +39 049.967.7367 >>> -- >>> >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Sun Apr 3 01:55:04 2022 From: amy at demarco.com (Amy Marrich) Date: Sat, 2 Apr 2022 20:55:04 -0500 Subject: OPS Meetup - next meeting at the PTG Message-ID: The OPS Meetup team will be meeting this week during the PTG on Tuesday at 13:00 UTC in the Austin room. We will be planning[0] the meetup to be held in Berlin on June 10th. See you there, Amy(spotz) 0 - https://etherpad.opendev.org/p/april2022-ptg-openstack-ops -------------- next part -------------- An HTML attachment was scrubbed... URL: From massimo.sgaravatto at gmail.com Sun Apr 3 06:58:35 2022 From: massimo.sgaravatto at gmail.com (Massimo Sgaravatto) Date: Sun, 3 Apr 2022 08:58:35 +0200 Subject: Openstack Xena: error deleting image and snapshot via dashboard (as user and as admin) In-Reply-To: References: Message-ID: After a restart of httpd and glance I am not able to reproduce the issue anymore. At any rate, thanks a lot for your help! Cheers, Massimo On Sat, Apr 2, 2022 at 4:18 PM Laurent Dumont wrote: > That is a bit strange. > > Can you turn on DEBUG for glance + Horizon (not sure if it's possible for > that) and retry? 
> > On Sat, Apr 2, 2022 at 10:16 AM Massimo Sgaravatto < > massimo.sgaravatto at gmail.com> wrote: > >> Let me answer on behalf of Federica (my colleague). Nothing is reported >> in horizon log. There is only this message in one httpd log: >> >> wsgi:error] [pid 315710:tid 140368481855232] [remote 192.168.60.229:41096] >> Internal Server Error: >> /dashboard/api/glance/images/b57b57ce-b484-4b75-8038-6b5f28b4dc18/ >> >> Regards, Massimo >> >> On Sat, Apr 2, 2022 at 4:07 PM Laurent Dumont >> wrote: >> >>> I would look at the Horizon logs since it will be the one interacting >>> with Glance through the UI. >>> >>> But Glance should show you the image delete attempt. >>> >>> On Thu, Mar 31, 2022 at 9:59 AM federica fanzago < >>> federica.fanzago at pd.infn.it> wrote: >>> >>>> Hi all, >>>> >>>> we have installed Openstack Xena in our cloud infrastructure (OS >>>> Centos-Stream 8) and we find a problem with the delete of images and >>>> snapshots via dashboard. The delete command returns "Error: Unable to >>>> delete Image:xxx" >>>> >>>> Via command line the delete works well. >>>> >>>> Looking in glance logs I don't find any error message. >>>> >>>> Did you experienced this problem? Have you suggestions about it? >>>> >>>> Thanks, >>>> >>>> cheers >>>> >>>> Federica >>>> >>>> >>>> -- >>>> Federica Fanzago >>>> INFN Sezione di Padova >>>> Via Marzolo, 8 >>>> 35131 Padova - Italy >>>> >>>> Tel: +39 049.967.7367 >>>> -- >>>> >>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Sun Apr 3 09:47:31 2022 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Sun, 3 Apr 2022 11:47:31 +0200 Subject: Support to allow only boot from cinder volumes In-Reply-To: References: Message-ID: On Sat, 2 Apr 2022 at 16:00, Laurent Dumont wrote: > > Hey folks, Hi Laurent, > I am trying to see if there are any ways to instruct Openstack to prevent the usage of local storage/ephemeral disks. 
> > There are cases where : > > I don't want the added complexity of Ceph. > I don't want the added hassle of using local volumes/nfs/shared storage on the computes directly. > > In an ideal world, creating a VM would mean that you always have a boot-from-volume with the volume being in your chosen backend. > > I've seen this spec : https://blueprints.launchpad.net/nova/+spec/flavor-root-disk-none > > But it doesn't seem to have survived the Ocata cycle. > > Any thoughts? You can set the root and ephemeral disk sizes to 0 which means the flavor is not usable without a volume. This way you are forced to use Cinder and what it offers. -yoctozepto From radoslaw.piliszek at gmail.com Sun Apr 3 09:54:35 2022 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Sun, 3 Apr 2022 11:54:35 +0200 Subject: Support to allow only boot from cinder volumes In-Reply-To: References: Message-ID: On Sun, 3 Apr 2022 at 11:47, Rados?aw Piliszek wrote: > > On Sat, 2 Apr 2022 at 16:00, Laurent Dumont wrote: > > > > Hey folks, > > Hi Laurent, > > > I am trying to see if there are any ways to instruct Openstack to prevent the usage of local storage/ephemeral disks. > > > > There are cases where : > > > > I don't want the added complexity of Ceph. > > I don't want the added hassle of using local volumes/nfs/shared storage on the computes directly. > > > > In an ideal world, creating a VM would mean that you always have a boot-from-volume with the volume being in your chosen backend. > > > > I've seen this spec : https://blueprints.launchpad.net/nova/+spec/flavor-root-disk-none > > > > But it doesn't seem to have survived the Ocata cycle. > > > > Any thoughts? > > You can set the root and ephemeral disk sizes to 0 which means the > flavor is not usable without a volume. Sorry, I somehow forgot to write the actual workaround. There should be this sentence in here: And then, make the nova's local disk store readonly / with a small quota. 
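The zero-disk-flavor approach described above can be sketched with the CLI; a rough illustration (flavor, image, and network names are made up, and `--boot-from-volume` assumes a reasonably recent python-openstackclient — older clients need `--block-device-mapping` instead):

```shell
# Flavor with no root/ephemeral/swap disk: instances using it cannot get
# a local disk, so they can only boot from a Cinder volume.
openstack flavor create --vcpus 2 --ram 4096 \
  --disk 0 --ephemeral 0 --swap 0 m1.volume-only

# Boot-from-volume request: Nova asks Cinder to build the root volume
# from the image, so no local disk is involved.
openstack server create --flavor m1.volume-only \
  --image cirros --boot-from-volume 20 \
  --network private vm-bfv
```

With such flavors, a plain `--image` boot without a volume is rejected at the API, which is the enforcement being asked about.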
> This way you are forced to use Cinder and what it offers. > > -yoctozepto From laurentfdumont at gmail.com Sun Apr 3 10:46:30 2022 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Sun, 3 Apr 2022 06:46:30 -0400 Subject: Support to allow only boot from cinder volumes In-Reply-To: References: Message-ID: Got it! By quota, do you mean reserved_host_disk_mb in nova.conf? I could make the /var/lib/nova/instances RO but I am not sure how that would impact config drive that are created locally (since I dont have ceph) Just to be clear on the behavior, this means that boot-from-image requests would fail? Looking at nova.conf, I can disable the number of local-disks supported, but this doesn't act as a behavior change when the requests are made. I assume, from what I now know, that there is no mechanism to default/transform a request to BFV. # A negative number means unlimited. Setting max_local_block_devices# to 0 means that any request that attempts to create a local disk# will fail. This option is meant to limit the number of local discs# (so root local disc that is the result of --image being used, and# any other ephemeral and swap disks). 0 does not mean that images# will be automatically converted to volumes and boot instances from# volumes - it just means that all requests that attempt to create a# local disk will fail.## Possible values:## * 0: Creating a local disk is not allowed.# * Negative number: Allows unlimited number of local discs.# * Positive number: Allows only these many number of local discs.# (Default value is 3).# (integer value)#max_local_block_devices = 3 On Sun, Apr 3, 2022 at 5:54 AM Rados?aw Piliszek < radoslaw.piliszek at gmail.com> wrote: > On Sun, 3 Apr 2022 at 11:47, Rados?aw Piliszek > wrote: > > > > On Sat, 2 Apr 2022 at 16:00, Laurent Dumont > wrote: > > > > > > Hey folks, > > > > Hi Laurent, > > > > > I am trying to see if there are any ways to instruct Openstack to > prevent the usage of local storage/ephemeral disks. 
> > > > > > There are cases where : > > > > > > I don't want the added complexity of Ceph. > > > I don't want the added hassle of using local volumes/nfs/shared > storage on the computes directly. > > > > > > In an ideal world, creating a VM would mean that you always have a > boot-from-volume with the volume being in your chosen backend. > > > > > > I've seen this spec : > https://blueprints.launchpad.net/nova/+spec/flavor-root-disk-none > > > > > > But it doesn't seem to have survived the Ocata cycle. > > > > > > Any thoughts? > > > > You can set the root and ephemeral disk sizes to 0 which means the > > flavor is not usable without a volume. > > Sorry, I somehow forgot to write the actual workaround. > There should be this sentence in here: > And then, make the nova's local disk store readonly / with a small quota. > > > This way you are forced to use Cinder and what it offers. > > > > -yoctozepto > -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Sun Apr 3 12:12:47 2022 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Sun, 3 Apr 2022 14:12:47 +0200 Subject: Support to allow only boot from cinder volumes In-Reply-To: References: Message-ID: On Sun, 3 Apr 2022 at 12:46, Laurent Dumont wrote: > > Got it! > > By quota, do you mean reserved_host_disk_mb in nova.conf? I could make the /var/lib/nova/instances RO but I am not sure how that would impact config drive that are created locally (since I dont have ceph) I meant the filesystem quota. And yes, this affects config drives. Unfortunately, the error message from nova might be confusing. > Just to be clear on the behavior, this means that boot-from-image requests would fail? > > Looking at nova.conf, I can disable the number of local-disks supported, but this doesn't act as a behavior change when the requests are made. > > I assume, from what I now know, that there is no mechanism to default/transform a request to BFV. 
> > # A negative number means unlimited. Setting max_local_block_devices > # to 0 means that any request that attempts to create a local disk > # will fail. This option is meant to limit the number of local discs > # (so root local disc that is the result of --image being used, and > # any other ephemeral and swap disks). 0 does not mean that images > # will be automatically converted to volumes and boot instances from > # volumes - it just means that all requests that attempt to create a > # local disk will fail. > # > # Possible values: > # > # * 0: Creating a local disk is not allowed. > # * Negative number: Allows unlimited number of local discs. > # * Positive number: Allows only these many number of local discs. > # (Default value is 3). > # (integer value) > #max_local_block_devices = 3 It seems this is actually the best approach (the error message now makes sense). Also confirmed by the faq - https://opendev.org/openstack/nova/src/commit/b0851b0e9c82446aec2ea0317514766fbc53abc0/doc/source/user/block-device-mapping.rst#faqs -yoctozepto From noonedeadpunk at gmail.com Fri Apr 1 06:47:49 2022 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Fri, 1 Apr 2022 08:47:49 +0200 Subject: [TripleO] gate blocker - impacting all quickstart-based jobs - openstack-ansible-os_tempest In-Reply-To: References: Message-ID: Hey there! I have quick question - do you think it's valid approach to install Ansible roles as python packages? This smells sooooo fishy since ansible-galaxy is a thing along with requirements.yml... So actual question is - do you have any plans on changing this approach to more Ansible way anytime soon? ??, 1 ???. 
2022 ?., 8:19 Marios Andreou : > On Fri, Apr 1, 2022 at 12:14 AM Ronelle Landy wrote: > >> Hello All, >> >> We have a check/gate blocker on all TripleO quickstart-based jobs, as >> described in: >> >> https://bugs.launchpad.net/tripleo/+bug/1967430 >> >> [1] commit to openstack-ansible-os_tempest removed setup.py and >> is causing failings in all quickstart jobs. >> >> A revert was proposed but will not be workable - we are waiting on >> another fix. >> >> Please hold rechecks until this is resolved. >> >> [1] >> https://review.opendev.org/c/openstack/openstack-ansible-os_tempest/+/835969 >> >> > > Unfortunately looks like the core group on that repo is empty [1]. I added > some folks into CC here that merged the original patch. Folks can you > please help us merge the fix at > https://review.opendev.org/c/openstack/ansible-role-python_venv_build/+/836091 > > > TripleO gate is blocked until we > merge ansible-role-python_venv_build/+/836091 > > > please help :D > > > [1] > https://review.opendev.org/admin/groups/3474fc86368161e5288be01295041a089a1060b3,members > > > > > >> Thank you! >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gagehugo at gmail.com Sun Apr 3 18:06:02 2022 From: gagehugo at gmail.com (Gage Hugo) Date: Sun, 3 Apr 2022 13:06:02 -0500 Subject: [security-sig] No meeting this week Message-ID: Since this week is the PTG, there will not be a security SIG meeting. Hope to see you all at the session! - Gage -------------- next part -------------- An HTML attachment was scrubbed... URL: From laurentfdumont at gmail.com Sun Apr 3 22:28:54 2022 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Sun, 3 Apr 2022 18:28:54 -0400 Subject: Support to allow only boot from cinder volumes In-Reply-To: References: Message-ID: Bummer :( We have a couple of use cases where I would like this to be transparent and things to automagically-unicorn and rainbows ;) Thanks for the insight Radoslaw! 
On Sun, Apr 3, 2022 at 8:12 AM Radosław Piliszek < radoslaw.piliszek at gmail.com> wrote: > On Sun, 3 Apr 2022 at 12:46, Laurent Dumont > wrote: > > > > Got it! > > > > By quota, do you mean reserved_host_disk_mb in nova.conf? I could make > the /var/lib/nova/instances RO but I am not sure how that would impact > config drive that are created locally (since I dont have ceph) > > I meant the filesystem quota. > And yes, this affects config drives. > Unfortunately, the error message from nova might be confusing. > > > Just to be clear on the behavior, this means that boot-from-image > requests would fail? > > > > Looking at nova.conf, I can disable the number of local-disks supported, > but this doesn't act as a behavior change when the requests are made. > > > > I assume, from what I now know, that there is no mechanism to > default/transform a request to BFV. > > > > # A negative number means unlimited. Setting max_local_block_devices > > # to 0 means that any request that attempts to create a local disk > > # will fail. This option is meant to limit the number of local discs > > # (so root local disc that is the result of --image being used, and > > # any other ephemeral and swap disks). 0 does not mean that images > > # will be automatically converted to volumes and boot instances from > > # volumes - it just means that all requests that attempt to create a > > # local disk will fail. > > # > > # Possible values: > > # > > # * 0: Creating a local disk is not allowed. > > # * Negative number: Allows unlimited number of local discs. > > # * Positive number: Allows only these many number of local discs. > > # (Default value is 3). > > # (integer value) > > #max_local_block_devices = 3 > > It seems this is actually the best approach (the error message now > makes sense).
Also confirmed by the faq - > > https://opendev.org/openstack/nova/src/commit/b0851b0e9c82446aec2ea0317514766fbc53abc0/doc/source/user/block-device-mapping.rst#faqs > > -yoctozepto > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Mon Apr 4 06:08:02 2022 From: marios at redhat.com (Marios Andreou) Date: Mon, 4 Apr 2022 09:08:02 +0300 Subject: [TripleO] gate blocker - impacting all quickstart-based jobs - openstack-ansible-os_tempest In-Reply-To: References: Message-ID: On Sun, Apr 3, 2022 at 8:10 PM Dmitriy Rabotyagov wrote: > Hey there! > > I have quick question - do you think it's valid approach to install > Ansible roles as python packages? > This smells sooooo fishy since ansible-galaxy is a thing along with > requirements.yml... > > So actual question is - do you have any plans on changing this approach to > more Ansible way anytime soon? > > Hi yes agreed and it was discussed in the team... the focus was on unblocking the gate for now but what you suggest is being worked on there https://review.opendev.org/c/openstack/tripleo-quickstart/+/836104 regards > ??, 1 ???. 2022 ?., 8:19 Marios Andreou : > >> On Fri, Apr 1, 2022 at 12:14 AM Ronelle Landy wrote: >> >>> Hello All, >>> >>> We have a check/gate blocker on all TripleO quickstart-based jobs, as >>> described in: >>> >>> https://bugs.launchpad.net/tripleo/+bug/1967430 >>> >>> [1] commit to openstack-ansible-os_tempest removed setup.py and >>> is causing failings in all quickstart jobs. >>> >>> A revert was proposed but will not be workable - we are waiting on >>> another fix. >>> >>> Please hold rechecks until this is resolved. >>> >>> [1] >>> https://review.opendev.org/c/openstack/openstack-ansible-os_tempest/+/835969 >>> >>> >> >> Unfortunately looks like the core group on that repo is empty [1]. I >> added some folks into CC here that merged the original patch. 
Folks can you >> please help us merge the fix at >> https://review.opendev.org/c/openstack/ansible-role-python_venv_build/+/836091 >> >> >> TripleO gate is blocked until we >> merge ansible-role-python_venv_build/+/836091 >> >> >> please help :D >> >> >> [1] >> https://review.opendev.org/admin/groups/3474fc86368161e5288be01295041a089a1060b3,members >> >> >> >> >> >>> Thank you! >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From jpodivin at redhat.com Mon Apr 4 06:17:22 2022 From: jpodivin at redhat.com (Jiri Podivin) Date: Mon, 4 Apr 2022 08:17:22 +0200 Subject: [TripleO] gate blocker - impacting all quickstart-based jobs - openstack-ansible-os_tempest In-Reply-To: References: Message-ID: Full disclosure: I have only surface level understanding of how ansible galaxy actually works on the inside. My exposure to it is rather limited and it's possible that all of my concerns have perfectly valid responses I'm not aware of. Furthermore, I do believe that we could utilize ansible galaxy a bit more than we do. That being said, I do think that we should be cautious when changing the way we package and deliver. Even if everything works out we are possibly setting ourselves up for a whole new set of possible problems we are unfamiliar with. Whether that is an acceptable risk or not is a question for a different avenue however. On Sun, Apr 3, 2022 at 7:10 PM Dmitriy Rabotyagov wrote: > Hey there! > > I have quick question - do you think it's valid approach to install > Ansible roles as python packages? > This smells sooooo fishy since ansible-galaxy is a thing along with > requirements.yml... > > So actual question is - do you have any plans on changing this approach to > more Ansible way anytime soon? > > ??, 1 ???. 
2022 ?., 8:19 Marios Andreou : > >> On Fri, Apr 1, 2022 at 12:14 AM Ronelle Landy wrote: >> >>> Hello All, >>> >>> We have a check/gate blocker on all TripleO quickstart-based jobs, as >>> described in: >>> >>> https://bugs.launchpad.net/tripleo/+bug/1967430 >>> >>> [1] commit to openstack-ansible-os_tempest removed setup.py and >>> is causing failings in all quickstart jobs. >>> >>> A revert was proposed but will not be workable - we are waiting on >>> another fix. >>> >>> Please hold rechecks until this is resolved. >>> >>> [1] >>> https://review.opendev.org/c/openstack/openstack-ansible-os_tempest/+/835969 >>> >>> >> >> Unfortunately looks like the core group on that repo is empty [1]. I >> added some folks into CC here that merged the original patch. Folks can you >> please help us merge the fix at >> https://review.opendev.org/c/openstack/ansible-role-python_venv_build/+/836091 >> >> >> TripleO gate is blocked until we >> merge ansible-role-python_venv_build/+/836091 >> >> >> please help :D >> >> >> [1] >> https://review.opendev.org/admin/groups/3474fc86368161e5288be01295041a089a1060b3,members >> >> >> >> >> >>> Thank you! >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Mon Apr 4 06:35:39 2022 From: marios at redhat.com (Marios Andreou) Date: Mon, 4 Apr 2022 09:35:39 +0300 Subject: [TripleO] gate blocker - impacting all quickstart-based jobs - openstack-ansible-os_tempest In-Reply-To: References: Message-ID: On Mon, Apr 4, 2022 at 9:27 AM Jiri Podivin wrote: > Full disclosure: I have only surface level understanding of how ansible > galaxy actually works on the inside. > My exposure to it is rather limited and it's possible that all of my > concerns have perfectly valid responses I'm not aware of. > Furthermore, I do believe that we could utilize ansible galaxy a bit more > than we do. > > That being said, I do think that we should be cautious when changing the > way we package and deliver. 
> Even if everything works out we are possibly setting ourselves up for a > whole new set of possible problems we are unfamiliar with. > Whether that is an acceptable risk or not is a question for a different > avenue however. > > In this particular case, we can get away with installing the ansible galaxy collections because we have 'nested' ansible so something like zuul (ansible) calling bash (tripleo-quickstart) calling ansible. There are other cases (zuul/ansible 'native', not nested) where we have to install such dependencies as python utilities because of the security concerns around allowing collections to be installed on the ansible controller (e.g. see http://lists.zuul-ci.org/pipermail/zuul-discuss/2021-November/001752.html). In this case, we can do the installation of the required ansible bits during the middle "bash" part of the workflow (as you can see in https://review.opendev.org/c/openstack/tripleo-quickstart/+/836104). There are other cases where we can't (yet?) regards, marios > On Sun, Apr 3, 2022 at 7:10 PM Dmitriy Rabotyagov > wrote: > >> Hey there! >> >> I have quick question - do you think it's valid approach to install >> Ansible roles as python packages? >> This smells sooooo fishy since ansible-galaxy is a thing along with >> requirements.yml... >> >> So actual question is - do you have any plans on changing this approach >> to more Ansible way anytime soon? >> >> ??, 1 ???. 2022 ?., 8:19 Marios Andreou : >> >>> On Fri, Apr 1, 2022 at 12:14 AM Ronelle Landy wrote: >>> >>>> Hello All, >>>> >>>> We have a check/gate blocker on all TripleO quickstart-based jobs, as >>>> described in: >>>> >>>> https://bugs.launchpad.net/tripleo/+bug/1967430 >>>> >>>> [1] commit to openstack-ansible-os_tempest removed setup.py and >>>> is causing failings in all quickstart jobs. >>>> >>>> A revert was proposed but will not be workable - we are waiting on >>>> another fix. >>>> >>>> Please hold rechecks until this is resolved. 
>>>> >>>> [1] >>>> https://review.opendev.org/c/openstack/openstack-ansible-os_tempest/+/835969 >>>> >>>> >>> >>> Unfortunately looks like the core group on that repo is empty [1]. I >>> added some folks into CC here that merged the original patch. Folks can you >>> please help us merge the fix at >>> https://review.opendev.org/c/openstack/ansible-role-python_venv_build/+/836091 >>> >>> >>> TripleO gate is blocked until we >>> merge ansible-role-python_venv_build/+/836091 >>> >>> >>> please help :D >>> >>> >>> [1] >>> https://review.opendev.org/admin/groups/3474fc86368161e5288be01295041a089a1060b3,members >>> >>> >>> >>> >>>> Thank you! >>>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralonsoh at redhat.com Mon Apr 4 07:45:19 2022 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Mon, 4 Apr 2022 09:45:19 +0200 Subject: [neutron] Bug deputy March 28 to April 3 Message-ID: Hello Neutrinos: This is the bug list of week 13 (March 28 to April 3). Critical: - https://bugs.launchpad.net/neutron/+bug/1967472: [OVN] OVN compilation is failing in "neutron-tempest-plugin-scenario-ovn" job - Fixed Medium: - https://bugs.launchpad.net/neutron/+bug/1967142: No way to set quotas for neutron-vpnaas resources using openstack CLI tool. - Not assigned. - This bug affects OSC. Neutron API provides the needed information. - https://bugs.launchpad.net/neutron/+bug/1967144: [OVN] Live migration can fail due to wrong revision id during setting requested chassis in ovn. - Assigned to Slawek. Invalid/opinion/incomplete: - https://bugs.launchpad.net/neutron/+bug/1966858: `ovn-metadata-agent` not starting due to missing module `neutron.privileged.agent` - Most probably this is a problem with the installation of OpenStack and the Python version used (3.10). The recommendation is to use v3.8 or v3.9 as default binary. - The Python version used to install the libraries must be the same as the one running the services. Regards.
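On the first Medium bug above: while the openstack CLI cannot set neutron-vpnaas quotas, the Neutron API already accepts them, so an operator can build the quota-update request directly. A rough sketch of that request follows; the quotas path is the standard Neutron one, but the vpnaas resource name used here ("vpnservice") is an assumption and should be checked against the deployment's extensions:

```python
import json

# Sketch only: constructs the PUT request for Neutron's quota API.
# Resource names such as "vpnservice" are assumed from the vpnaas
# extension and are not verified here.

def quota_update_request(project_id, **quotas):
    """Return (method, path, body) for a Neutron quota update call."""
    path = "/v2.0/quotas/%s" % project_id
    body = json.dumps({"quota": quotas})
    return ("PUT", path, body)

method, path, body = quota_update_request("abc123", vpnservice=5)
print(method, path, body)
```

The resulting tuple can then be sent with any authenticated HTTP client against the Neutron endpoint.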
-------------- next part -------------- An HTML attachment was scrubbed... URL: From peljasz at yahoo.co.uk Mon Apr 4 08:04:54 2022 From: peljasz at yahoo.co.uk (lejeczek) Date: Mon, 4 Apr 2022 09:04:54 +0100 Subject: wireguard - ? - puzzle References: Message-ID: Hi guys. Has anybody solved that puzzle? Or perhaps it's not a puzzle at all, I'd imagine it might be trivial to experts. First I thought - and only thought so far thus asking here - 'allowed_address_pairs' I'd need but that obviously does not do anything as 'wireguard' creates its own ifaces. So.. how do you get your 'wireguard' in openstack to route (no NAT) to instances' local network(s)? many thanks, L. From tobias.urdin at binero.com Mon Apr 4 08:41:56 2022 From: tobias.urdin at binero.com (Tobias Urdin) Date: Mon, 4 Apr 2022 08:41:56 +0000 Subject: Support to allow only boot from cinder volumes In-Reply-To: References: Message-ID: <7D17FF18-FDB5-4A7A-89AD-9E0682429BC1@binero.com> Another way to make it transparent is using the RBD image backend in Nova, so the instances look like they are running on local disk but are spawned on, for example, Ceph; however, that assumes you have such a backend. In the future, which I've wanted for a long time, is an images backend in Nova that simply is a proxy to calling Cinder and getting a volume. That way it would be volumes in both cases, but that's a lot of work and edge cases that might need to be tuned. On 4 Apr 2022, at 00:28, Laurent Dumont > wrote: Bummer :( We have a couple of use cases where I would like this to be transparent and things to automagically-unicorn and rainbows ;) Thanks for the insight Radoslaw! On Sun, Apr 3, 2022 at 8:12 AM Radosław Piliszek > wrote: On Sun, 3 Apr 2022 at 12:46, Laurent Dumont > wrote: > > Got it! > > By quota, do you mean reserved_host_disk_mb in nova.conf?
I could make the /var/lib/nova/instances RO but I am not sure how that would impact config drive that are created locally (since I dont have ceph) I meant the filesystem quota. And yes, this affects config drives. Unfortunately, the error message from nova might be confusing. > Just to be clear on the behavior, this means that boot-from-image requests would fail? > > Looking at nova.conf, I can disable the number of local-disks supported, but this doesn't act as a behavior change when the requests are made. > > I assume, from what I now know, that there is no mechanism to default/transform a request to BFV. > > # A negative number means unlimited. Setting max_local_block_devices > # to 0 means that any request that attempts to create a local disk > # will fail. This option is meant to limit the number of local discs > # (so root local disc that is the result of --image being used, and > # any other ephemeral and swap disks). 0 does not mean that images > # will be automatically converted to volumes and boot instances from > # volumes - it just means that all requests that attempt to create a > # local disk will fail. > # > # Possible values: > # > # * 0: Creating a local disk is not allowed. > # * Negative number: Allows unlimited number of local discs. > # * Positive number: Allows only these many number of local discs. > # (Default value is 3). > # (integer value) > #max_local_block_devices = 3 It seems this is actually the best approach (the error message now makes sense). Also confirmed by the faq - https://opendev.org/openstack/nova/src/commit/b0851b0e9c82446aec2ea0317514766fbc53abc0/doc/source/user/block-device-mapping.rst#faqs -yoctozepto -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From marc-antoine.godde at viarezo.fr Mon Apr 4 09:11:05 2022 From: marc-antoine.godde at viarezo.fr (Marc-Antoine Godde) Date: Mon, 4 Apr 2022 11:11:05 +0200 Subject: Upgrading Openstack nodes Message-ID: <9683CBBC-AC0D-4FAF-BF29-30FEC3CD18D1@viarezo.fr> Hello, We are running an Openstack cloud composed of 3 controller nodes and 4 compute nodes. Our deployment was realized with OpenStack-ansible and we are running OpenStack Ussuri on Ubuntu 18.04. Our plan is to upgrade nodes to Ubuntu 20.04, that way we would be able to update to OpenStack Victoria and further. We would like to withdraw each node from the cluster, reinstall a clean linux and redeploy the nodes. There is garbage remaining from previous upgrades. We figured out the way in the documentation to remove a compute node from the cluster with Openstack-ansible but we can?t find any related documentation for controller nodes. Any help would be very much appreciated. By the way, if you?d have any other suggestions on how to perform that upgrade, fell free to help. Best, Marc-Antoine Godde From fungi at yuggoth.org Mon Apr 4 12:16:22 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 4 Apr 2022 12:16:22 +0000 Subject: [TripleO] gate blocker - impacting all quickstart-based jobs - openstack-ansible-os_tempest In-Reply-To: References: Message-ID: <20220404121621.wglu7uhluj6yfdcg@yuggoth.org> On 2022-04-04 09:35:39 +0300 (+0300), Marios Andreou wrote: [...] > In this particular case, we can get away with installing the > ansible galaxy collections because we have 'nested' ansible so > something like zuul (ansible) calling bash (tripleo-quickstart) > calling ansible. There are other cases (zuul/ansible 'native', > not nested) where we have to install such dependencies as python > utilities because of the security concerns around allowing > collections to be installed on the ansible controller (e.g. see > http://lists.zuul-ci.org/pipermail/zuul-discuss/2021-November/001752.html). [...] 
We hope this will get simpler soon as we work toward Zuul v6: https://zuul-ci.org/docs/zuul/latest/developer/specs/unrestricted-ansible.html -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From jpodivin at redhat.com Mon Apr 4 12:29:25 2022 From: jpodivin at redhat.com (Jiri Podivin) Date: Mon, 4 Apr 2022 14:29:25 +0200 Subject: [TripleO] gate blocker - impacting all quickstart-based jobs - openstack-ansible-os_tempest In-Reply-To: <20220404121621.wglu7uhluj6yfdcg@yuggoth.org> References: <20220404121621.wglu7uhluj6yfdcg@yuggoth.org> Message-ID: I understand. The question is how far back, if at all, should we backport the change. Provided that it is merged into master of course. On Mon, Apr 4, 2022 at 2:21 PM Jeremy Stanley wrote: > On 2022-04-04 09:35:39 +0300 (+0300), Marios Andreou wrote: > [...] > > In this particular case, we can get away with installing the > > ansible galaxy collections because we have 'nested' ansible so > > something like zuul (ansible) calling bash (tripleo-quickstart) > > calling ansible. There are other cases (zuul/ansible 'native', > > not nested) where we have to install such dependencies as python > > utilities because of the security concerns around allowing > > collections to be installed on the ansible controller (e.g. see > > > http://lists.zuul-ci.org/pipermail/zuul-discuss/2021-November/001752.html > ). > [...] > > We hope this will get simpler soon as we work toward Zuul v6: > > > https://zuul-ci.org/docs/zuul/latest/developer/specs/unrestricted-ansible.html > > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From marios at redhat.com Mon Apr 4 12:47:33 2022 From: marios at redhat.com (Marios Andreou) Date: Mon, 4 Apr 2022 15:47:33 +0300 Subject: [TripleO] gate blocker - impacting all quickstart-based jobs - openstack-ansible-os_tempest In-Reply-To: References: <20220404121621.wglu7uhluj6yfdcg@yuggoth.org> Message-ID: On Mon, Apr 4, 2022 at 3:36 PM Jiri Podivin wrote: > I understand. > The question is how far back, if at all, should we backport the change. > Provided that it is merged into master of course. > > well the proposed fix on our side is in the ci tooling https://review.opendev.org/c/openstack/tripleo-quickstart/+/836104 and that repo is branchless so we should be good > > On Mon, Apr 4, 2022 at 2:21 PM Jeremy Stanley wrote: > >> On 2022-04-04 09:35:39 +0300 (+0300), Marios Andreou wrote: >> [...] >> > In this particular case, we can get away with installing the >> > ansible galaxy collections because we have 'nested' ansible so >> > something like zuul (ansible) calling bash (tripleo-quickstart) >> > calling ansible. There are other cases (zuul/ansible 'native', >> > not nested) where we have to install such dependencies as python >> > utilities because of the security concerns around allowing >> > collections to be installed on the ansible controller (e.g. see >> > >> http://lists.zuul-ci.org/pipermail/zuul-discuss/2021-November/001752.html >> ). >> [...] >> >> We hope this will get simpler soon as we work toward Zuul v6: >> >> >> https://zuul-ci.org/docs/zuul/latest/developer/specs/unrestricted-ansible.html >> >> -- >> Jeremy Stanley >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jpodivin at redhat.com Mon Apr 4 12:52:29 2022 From: jpodivin at redhat.com (Jiri Podivin) Date: Mon, 4 Apr 2022 14:52:29 +0200 Subject: [TripleO] gate blocker - impacting all quickstart-based jobs - openstack-ansible-os_tempest In-Reply-To: References: <20220404121621.wglu7uhluj6yfdcg@yuggoth.org> Message-ID: Right, makes sense. 
On Mon, Apr 4, 2022 at 2:47 PM Marios Andreou wrote: > > > On Mon, Apr 4, 2022 at 3:36 PM Jiri Podivin wrote: > >> I understand. >> The question is how far back, if at all, should we backport the change. >> Provided that it is merged into master of course. >> >> > well the proposed fix on our side is in the ci tooling > https://review.opendev.org/c/openstack/tripleo-quickstart/+/836104 and > that repo is branchless so we should be good > > > > >> >> On Mon, Apr 4, 2022 at 2:21 PM Jeremy Stanley wrote: >> >>> On 2022-04-04 09:35:39 +0300 (+0300), Marios Andreou wrote: >>> [...] >>> > In this particular case, we can get away with installing the >>> > ansible galaxy collections because we have 'nested' ansible so >>> > something like zuul (ansible) calling bash (tripleo-quickstart) >>> > calling ansible. There are other cases (zuul/ansible 'native', >>> > not nested) where we have to install such dependencies as python >>> > utilities because of the security concerns around allowing >>> > collections to be installed on the ansible controller (e.g. see >>> > >>> http://lists.zuul-ci.org/pipermail/zuul-discuss/2021-November/001752.html >>> ). >>> [...] >>> >>> We hope this will get simpler soon as we work toward Zuul v6: >>> >>> >>> https://zuul-ci.org/docs/zuul/latest/developer/specs/unrestricted-ansible.html >>> >>> -- >>> Jeremy Stanley >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Mon Apr 4 12:53:04 2022 From: smooney at redhat.com (Sean Mooney) Date: Mon, 04 Apr 2022 13:53:04 +0100 Subject: wireguard - ? - puzzle In-Reply-To: References: Message-ID: On Mon, 2022-04-04 at 09:04 +0100, lejeczek wrote: > Hi guys. > > Has anybody solved that puzzle? > Or perhaps it's not a puzzle at all, I'd imagine might be > trivial to experts. 
> > First I thought - and only thought so far thus asking here - > 'allowed_address_pairs' I'd need but that obviously does not > do anything as 'wireguard' creates its own ifaces. > So.. how do you get your 'wireguard' in openstack to route > (no NAT) to instances' local network(s)? I have not done this, but I suspect you would need to enable the subnet used by wireguard in the allowed address pairs, as you said, on the instance that is hosting the wireguard endpoint. Then set a static route in the neutron router so other instances know how to access it: openstack router set --route destination=<wireguard subnet CIDR>,gateway=<wireguard VM IP> <router> You might also need to configure some security group rules, but I'm not certain on the last point. If you run wireguard in a VM it is basically becoming a router, which is not something that we typically expect VMs to do, but other services like Octavia do this when they deploy load balancers, and the VPN-as-a-service extension did something similar in the past, so this should be possible with the existing API. > > many thanks, L. > From marios at redhat.com Mon Apr 4 12:58:19 2022 From: marios at redhat.com (Marios Andreou) Date: Mon, 4 Apr 2022 15:58:19 +0300 Subject: [TripleO] gate blocker - impacting all quickstart-based jobs - openstack-ansible-os_tempest In-Reply-To: References: <20220404121621.wglu7uhluj6yfdcg@yuggoth.org> Message-ID: On Mon, Apr 4, 2022 at 3:52 PM Jiri Podivin wrote: > Right, makes sense.
>>> >>> >> well the proposed fix on our side is in the ci tooling >> https://review.opendev.org/c/openstack/tripleo-quickstart/+/836104 and >> that repo is branchless so we should be good >> >> >> >> >>> >>> On Mon, Apr 4, 2022 at 2:21 PM Jeremy Stanley wrote: >>> >>>> On 2022-04-04 09:35:39 +0300 (+0300), Marios Andreou wrote: >>>> [...] >>>> > In this particular case, we can get away with installing the >>>> > ansible galaxy collections because we have 'nested' ansible so >>>> > something like zuul (ansible) calling bash (tripleo-quickstart) >>>> > calling ansible. There are other cases (zuul/ansible 'native', >>>> > not nested) where we have to install such dependencies as python >>>> > utilities because of the security concerns around allowing >>>> > collections to be installed on the ansible controller (e.g. see >>>> > >>>> http://lists.zuul-ci.org/pipermail/zuul-discuss/2021-November/001752.html >>>> ). >>>> [...] >>>> >>>> We hope this will get simpler soon as we work toward Zuul v6: >>>> >>>> >>>> https://zuul-ci.org/docs/zuul/latest/developer/specs/unrestricted-ansible.html >>>> >>>> forgot to add thanks for the pointer fungi - interesting - from a quick skim it doesn't appear to be completely unrestricted but will allow you to add some files/roles/collections into a special ("bubblewrap") env ? adding to reading list for more careful scanning later ;) regards, marios > -- >>>> Jeremy Stanley >>>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon Apr 4 13:18:55 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 4 Apr 2022 13:18:55 +0000 Subject: [TripleO] gate blocker - impacting all quickstart-based jobs - openstack-ansible-os_tempest In-Reply-To: References: <20220404121621.wglu7uhluj6yfdcg@yuggoth.org> Message-ID: <20220404131855.jwolsodyefjyjjxy@yuggoth.org> On 2022-04-04 15:58:19 +0300 (+0300), Marios Andreou wrote: [...] 
> from a quick skim it doesn't appear to be completely unrestricted > but will allow you to add some files/roles/collections into a > special ("bubblewrap") env ? adding to reading list for more > careful scanning later ;) Currently, the Zuul executors run Ansible in per-build containers in order to provide some separation so that jobs hopefully won't interfere with one another. In addition, Zuul uses a forked copy of Ansible's stdlib in order to prevent "unsafe" modules from being called in that container, or to remove "unsafe" features from some allowed modules. What the spec proposes, in summary, is to drop that separate fork we're maintaining of the Ansible stdlib, and just allow jobs to call any module within the existing container on the executor. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From ekuvaja at redhat.com Mon Apr 4 13:43:03 2022 From: ekuvaja at redhat.com (Erno Kuvaja) Date: Mon, 4 Apr 2022 14:43:03 +0100 Subject: [glance] import RBD image In-Reply-To: References: Message-ID: On Mon, Mar 28, 2022 at 6:16 PM Tony Liu wrote: > Thank you Erno for pointing it out! > I wonder if I can get a bit more clarifications. > > ``` > [image_import_opts] > image_import_plugins = ['image_decompression', 'image_conversion'] > > [image_conversion] > output_format = raw > ``` > For example, with the above configuration and web-download method, > to download image.qcow2.gz2 (2GB) and create a raw image (40GB). > The uncompressed image.qcow2 is 10GB. > My understanding is that, image.qcow2.gz2 is downloaded and takes 2GB > space. > Then it's decompressed to image.qcow2, and 12GB space is used now. > Then it's converted to raw image to Ceph directly. So the max space > required for > this process is 12GB. Is that correct? > > Not exactly. 
The decompression plugin will clean the original compressed image after itself, so in your example it will need 12GB at peak, and once its work is finished the 10GB image will be left; the conversion plugin will do the same. The conversion to raw will utilize 50GB of disk space during its operation, and once it's done the 40GB RAW image will be in staging. That will then be uploaded to the destination store(s) before it's cleaned up at the end of the Import taskflow. The reason for this is that there might be other plugins in the chain that still need to have access to that image data before it's sent to the store. > Regarding to recommended local staging directory, doc says "you must > configure > each worker with the URL by which the other workers can reach it directly". > Would you mind giving a link to that part of the doc. Sounds like it needs a little clarification. That shared access was required for the 'glance-direct' import method as the stage and import calls are different and might land on different nodes. There was work done a couple of cycles ago to mitigate this need and it was never a requirement for the 'web-download' method as that is processed fully on the import request. > Is that because the image processing (download, convert, upload) may be > split to > different workers? I would expect the whole process/task is tied to one > worker, > because the process is triggered by single request from client. It would > be good > to get some clarifications on how this works and why workers need to > connect > to each other during image processing. Initially there was discussion of executing these flows in separate workers, but it has never been tested as no-one has expressed a real need for it. I doubt it would work as is considering how coupled some of the import code is currently with the g-api code. > For worker_self_reference_url, would "http://<host>:<port>" work? > Yes, that should work. - Erno > > > Thanks!
> Tony > ________________________________________ > From: Erno Kuvaja > Sent: March 28, 2022 04:13 AM > To: Tony Liu > Cc: openstack-discuss > Subject: Re: [glance] import RBD image > > On Sun, Mar 27, 2022 at 2:10 AM Tony Liu tonyliu0592 at hotmail.com>> wrote: > Hi, > > There used to be a way to import RBD image with API v1, but not any more > with v2. > Is there any other way to do that? > In case given QCOW2 image, It would be much faster and easier to convert > it directly > to RBD image with RAW format, than convert it to RAW format on local file > system and > upload the RAW image to Glance. > > > Thanks! > Tony > > > Hi Tony, > > There is an image-conversion plugin for Interoperable Image Import for > that. Please see > https://docs.openstack.org/glance/latest/admin/interoperable-image-import.html > for the documentation. > > jokke > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From k.parenteau at connectria.com Mon Apr 4 13:40:21 2022 From: k.parenteau at connectria.com (Kelsi Parenteau) Date: Mon, 4 Apr 2022 13:40:21 +0000 Subject: plain text config parameters encryption feature In-Reply-To: References: Message-ID: Good morning Openstack, I hope this message finds you well. I wanted to follow up from Alex's last email below to help to clarify our questions here. We're reaching out to ask your reviewers for their feedback on what had changed on your side during our course of work. https://review.opendev.org/c/openstack/openstack-ansible-os_glance/+/814865 We had been working with your team over many months, and had been tracking to commit the code upstream. We were not sure why the Openstack reviewers had not brought up this potential concern for us earlier on in our discussions to be addressed. Can you please advise us why that particular comment regarding the requirement for this to be an ansible plugin stops us from being able to commit the code? 
We look forward to your feedback here, and would be happy to schedule a call
as well to talk this through. Please let us know if you have any questions.

Thank you,

Kelsi Parenteau, PMP, PMI-ACP, CSM
Senior Project Manager
d: 586.473.1230 I m: 313.404.3214

________________________________
From: Alexander Yeremko
Sent: Tuesday, March 29, 2022 4:10 PM
To: openstack-discuss at lists.openstack.org
Cc: Tina Wisbiski ; Kelsi Parenteau ; Yuliia Romanova
Subject: plain text config parameters encryption feature

Dear OpenStack community,

we are developing a plain text config secrets encryption feature according
to the following specification:

https://specs.openstack.org/openstack/openstack-ansible-specs/specs/xena/protecting-plaintext-configs.html

We started with the Glance service and have submitted two patchsets already:

https://review.opendev.org/c/openstack/openstack-ansible-os_glance/+/814865

Now we have two questions that we need to clarify to proceed with our work
on that feature and finish our development:

1. Is it correct that we need to develop more patchsets to rework some
logic of the encryption mechanism, according to the comment on the
'files/encypt_secrets.py' script that arose on the second patchset
(PatchSet 2) dated Nov/30/2021?
The comment is by Dmitry Rabotyagov: "We _really_ should make it as an
ansible plugin and re-work logic"

2. We wish to have such a feature in previous releases also, not just in
the upcoming Yoga or Zed. Stein, Train and Victoria - it would be excellent
to have plain text secrets encryption with these releases also.
So the question is: how is it possible to use our feature in those releases
also? Can we push some backports to those releases' openstack-ansible repo?

Could someone be so kind and give us answers?

Best regards and wishes,
Alex Yeremko

This E-Mail (including any attachments) may contain privileged or
confidential information.
It is intended only for the addressee(s) indicated above. The sender does not waive any of its rights, privileges or other protections respecting this information. Any distribution, copying or other use of this E-Mail or the information it contains, by other than an intended recipient, is not sanctioned and is prohibited. If you received this E-Mail in error, please delete it and advise the sender (by return E-Mail or otherwise) immediately. Any calls held by you with Connectria may be recorded by an automated note taking system to ensure prompt follow up and for information collection purposes, and your attendance on any calls with Connectria confirms your consent to this. Any E-mail received by or sent from Connectria is subject to review by Connectria supervisory personnel. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan.rosser at rd.bbc.co.uk Mon Apr 4 14:45:15 2022 From: jonathan.rosser at rd.bbc.co.uk (Jonathan Rosser) Date: Mon, 4 Apr 2022 15:45:15 +0100 Subject: [openstack-ansible] Re: plain text config parameters encryption feature In-Reply-To: References: Message-ID: <2824929a-4116-a58a-2428-21b8874e2451@rd.bbc.co.uk> Hello, I think these messages have gone un-noticed by the openstack-ansible team due to the missing tags in the topic line of these messages, see https://docs.openstack.org/project-team-guide/open-community.html#mailing-lists. In general stable branches only have bugfixes backported, not new features. The openstack stable branches are described here https://docs.openstack.org/project-team-guide/stable-branches.html#appropriate-fixes. Regarding the patch sets you have created, review of those should happen in the gerrit comments, as Dimitry has already started. The changes would need to be appropriate in the wider context of openstack-ansible. Please join the IRC channel #openstack-ansible if you'd like to discuss more in real-time. Regards, Jonathan. 
On 04/04/2022 14:40, Kelsi Parenteau wrote:
> Good morning Openstack,
>
> I hope this message finds you well. I wanted to follow up from Alex's
> last email below to help to clarify our questions here. We're reaching
> out to ask your reviewers for their feedback on what had changed on
> your side during our course of work.
> https://review.opendev.org/c/openstack/openstack-ansible-os_glance/+/814865
>
> We had been working with your team over many months, and had been
> tracking to commit the code upstream. We were not sure why the
> Openstack reviewers had not brought up this potential concern for us
> earlier on in our discussions to be addressed.
>
> Can you please advise us why that particular comment regarding the
> requirement for this to be an ansible plugin stops us from being able
> to commit the code?
>
> We look forward to your feedback here, and would be happy to schedule
> a call as well to talk this through. Please let us know if you have
> any questions.
>
> Thank you,
>
> *Kelsi Parenteau, PMP, PMI-ACP, CSM*
>
> Senior Project Manager
>
> d: 586.473.1230 I m: 313.404.3214
>
> ------------------------------------------------------------------------
> *From:* Alexander Yeremko
> *Sent:* Tuesday, March 29, 2022 4:10 PM
> *To:* openstack-discuss at lists.openstack.org
> *Cc:* Tina Wisbiski ; Kelsi Parenteau ; Yuliia Romanova
> *Subject:* plain text config parameters encryption feature
> Dear OpenStack community,
>
> we are developing plain text config secrets encryption feature according to
> the next specification:
>
> https://specs.openstack.org/openstack/openstack-ansible-specs/specs/xena/protecting-plaintext-configs.html
>
> We started from Glance OS service and submitted two patchsets already:
>
> https://review.opendev.org/c/openstack/openstack-ansible-os_glance/+/814865
>
> Now we have two questions that we need to clarify to proceed our work
> on that feature and finish our development:
>
> 1.
Is it correct that we need to develop more patchsets to rework some
> logic of encryption mechanism according
> to comment to 'files/encypt_secrets.py' script that arised at the
> second patchset (PatchSet 2) dated Nov/30/2021 ?
> Comment is by Dmitry Rabotyagov: "We _really_ should make it as an
> ansible plugin and re-work logic"
>
> 2. We wish to have such feature in previous releases also, not just in
> upcoming Yoga or Zed.
> Stein, Train and Victoria - it would be excellent to
> have plain text secrets encryption with these releases also.
> So question is how is it possible to use our feature in those releases
> also? Can we push some backports to those releases openstack-ansible repo?
>
> Could someone be so kind and give us answers?
>
> Best regards and wishes,
> Alex Yeremko
> This E-Mail (including any attachments) may contain privileged or
> confidential information. It is intended only for the addressee(s)
> indicated above. The sender does not waive any of its rights,
> privileges or other protections respecting this information. Any
> distribution, copying or other use of this E-Mail or the information
> it contains, by other than an intended recipient, is not sanctioned
> and is prohibited. If you received this E-Mail in error, please delete
> it and advise the sender (by return E-Mail or otherwise) immediately.
> Any calls held by you with Connectria may be recorded by an automated
> note taking system to ensure prompt follow up and for information
> collection purposes, and your attendance on any calls with Connectria
> confirms your consent to this. Any E-mail received by or sent from
> Connectria is subject to review by Connectria supervisory personnel.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From arne.wiebalck at cern.ch Mon Apr 4 15:06:32 2022 From: arne.wiebalck at cern.ch (Arne Wiebalck) Date: Mon, 4 Apr 2022 17:06:32 +0200 Subject: [Ironic][Bare Metal SIG] Community Feedback Survey (1 minute) In-Reply-To: <870961726.31831.1648395796384@cernmail.cern.ch> References: <870961726.31831.1648395796384@cernmail.cern.ch> Message-ID: Dear all, As the feedback turns out to be *really* useful for the community, we decided to keep the survey open, so it is not too late to respond! Whether you run Ironic already or are just interested in bare metal provisioning/management with Ironic, keep the responses coming! Thanks! Arne On 27.03.22 17:43, Arne Wiebalck wrote: > Dear all, > > As input for the PTG and to align priorities for the next cycle, the > Ironic team is looking for > feedback from the community (users, operators, admins, deployers ... > everyone interested > in bare metal really!). > > So, if you have 1 minute, we would greatly appreciate your input by > filling: > > https://de.surveymonkey.com/r/XCR2TN5 > > > If you have any additional input for the Ironic team, do not hesitate to > reach out (via mail, on IRC, > or come along to the PTG!). > > Thanks! > ?Arne -- Dr. Arne Wiebalck CERN IT From noonedeadpunk at ya.ru Mon Apr 4 15:15:14 2022 From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov) Date: Mon, 04 Apr 2022 18:15:14 +0300 Subject: plain text config parameters encryption feature In-Reply-To: References: Message-ID: <1632421649083792@mail.yandex.ru> An HTML attachment was scrubbed... 
URL: 
From pramchan at yahoo.com Mon Apr 4 15:54:24 2022
From: pramchan at yahoo.com (prakash RAMCHANDRAN)
Date: Mon, 4 Apr 2022 15:54:24 +0000 (UTC)
Subject: PTG - Kickstart with Mistral Workflow review by eOTF/CoreStack teams
References: <1200802170.861648.1649087664125.ref@mail.yahoo.com>
Message-ID: <1200802170.861648.1649087664125@mail.yahoo.com>

Hi all,

It's a great pleasure to kickstart the PTG event by the "eMerging Open Tech
Foundation" (eOTF), having become an Associate Member to promote and
collaborate with the Open Infrastructure Foundation.
https://meetpad.opendev.org/eotf

Here is the link to today's Mistral - Workflow Engine presentation with Q&A.
https://us02web.zoom.us/rec/share/lhcRPhpQYGbR_0QjizgOnPhozW2VOZlSBkhXjkLEsPYQ4cyx6QsD5Q09VfMihzG9.SS7UNhpMmmzNLEKR?startTime=1649082466000
(Passcode: H=Tx2KdN)

We would like interested community members to join in our efforts to revive
maintenance and development of Mistral for CSPs for Hybrid Cloud usage.
The next meeting is scheduled for Friday April 8th 7.30 PM IST / 7.00 AM
Pacific Daylight Time, when we will focus on KupeStack as well as follow up
on plans for maintenance and development of Mistral in Zed and future
releases.

Thanks
For the eOTF team
https://eotf.info
R.Prakash
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From gagehugo at gmail.com Mon Apr 4 16:02:13 2022
From: gagehugo at gmail.com (Gage Hugo)
Date: Mon, 4 Apr 2022 11:02:13 -0500
Subject: [openstack-helm] No Meeting This Week
Message-ID:

Hey team,

Since the PTG is this week, the weekly meeting has been cancelled. We will
meet for the PTG session on April 6th 1500-1700 UTC, hope to see you there!
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From noonedeadpunk at ya.ru Mon Apr 4 16:19:43 2022
From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov)
Date: Mon, 04 Apr 2022 19:19:43 +0300
Subject: Upgrading Openstack nodes
In-Reply-To: <9683CBBC-AC0D-4FAF-BF29-30FEC3CD18D1@viarezo.fr>
References: <9683CBBC-AC0D-4FAF-BF29-30FEC3CD18D1@viarezo.fr>
Message-ID: <1663331649088955@mail.yandex.ru>

An HTML attachment was scrubbed...
URL: 
From k.parenteau at connectria.com Mon Apr 4 15:37:31 2022
From: k.parenteau at connectria.com (Kelsi Parenteau)
Date: Mon, 4 Apr 2022 15:37:31 +0000
Subject: plain text config parameters encryption feature
In-Reply-To: <1632421649083792@mail.yandex.ru>
References: <1632421649083792@mail.yandex.ru>
Message-ID:

Hello Dmitriy,

Thank you for your prompt reply! We appreciate your input on this, and will
review internally.

Thank you,

Kelsi Parenteau, PMP, PMI-ACP, CSM
Senior Project Manager
d: 586.473.1230 I m: 313.404.3214

________________________________
From: Dmitriy Rabotyagov
Sent: Monday, April 4, 2022 11:15 AM
To: Kelsi Parenteau ; openstack-discuss at lists.openstack.org
Cc: Tina Wisbiski ; Yuliia Romanova ; Alexander Yeremko
Subject: Re: plain text config parameters encryption feature

[EXTERNAL] This email came from an external sender

Hi there. Sorry, I totally missed that email, since we usually use tags to
address specific teams, so please use "[${PROJECT}]" in the topic if you
address a ML to a specific group in the future :)

1. There are a bunch of issues with the proposed code, actually, which have
been commented on [1], and none of them have been addressed in any way
since 10 December. The Gerrit Code-Review [2] system is the place where
proposed code is reviewed by Core Reviewers, which has been done in a quite
timely manner if you refer to the timestamps in the patch in question.
The reason I mentioned an ansible module is that the currently proposed
solution is not idempotent and is hard to maintain. If you want to fix or
change something in the script that manages vault tokens, you will need to
edit it in every role that uses it, which is really hard to manage. On the
contrary, an ansible module is managed from a single place, so you just
call it from the role and don't need to duplicate code for each role.
Also, the current solution would create a new vault secret each time the
role runs, even when the secret has already been stored, which is not
idempotent. That is not to mention the other 8 comments, and that the
patches were never passing CI. So from my perspective the solution requires
some effort before it can be considered ready. And we are quite picky when
it comes to the quality of the code we merge.

2. According to the OpenStack Releases guidelines [3], new features are not
eligible for being backported. Also, the branches you're mentioning are
under Extended Maintenance, which means only security patching is generally
provided for them. However, OpenStack-Ansible is flexible enough that you
should be able to deploy older OpenStack code with recent roles. We define
the services that are deployed by OSA using SHAs [4], so technically it
should be possible to use the Yoga version of OpenStack-Ansible and
override the OpenStack version to Stein to get the Stein version of
OpenStack services deployed. It could be quite tricky in practice though,
since we could have dropped some required variables that are now
deprecated, but in most cases that can be fixed trivially. So what I'm
saying is that technically there's a way to use your code from master for
older versions.

As Jonathan mentioned, we're quite open for communication in the
#openstack-ansible channel on IRC.
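The idempotency described here (write a secret only when it is absent or
its value changed, and report whether anything was modified) is the
contract an Ansible module is expected to follow. A minimal illustrative
sketch in plain Python; the dict stands in for a real Vault backend, and
the helper name is hypothetical, not part of the proposed patch:

```python
def ensure_secret(vault, path, value):
    """Store `value` at `path` only when needed; return True if changed.

    Illustrative only: `vault` is a plain dict standing in for a real
    secret store, so re-running never creates a new secret version.
    """
    if vault.get(path) == value:
        return False  # already in the desired state: nothing written
    vault[path] = value  # create or update exactly once
    return True


vault = {}
assert ensure_secret(vault, "glance/db_password", "s3cret") is True
assert ensure_secret(vault, "glance/db_password", "s3cret") is False  # re-run: no-op
assert ensure_secret(vault, "glance/db_password", "rotated") is True  # change detected
```

An Ansible module wrapping this check-then-write pattern would report
`changed` from the return value, which is what makes repeated role runs
safe.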
[1] https://review.opendev.org/c/openstack/openstack-ansible-os_glance/+/814865 [2] https://review.opendev.org/ [3] https://docs.openstack.org/project-team-guide/stable-branches.html#maintained [4] https://opendev.org/openstack/openstack-ansible/src/branch/master/playbooks/defaults/repo_packages/openstack_services.yml 04.04.2022, 17:33, "Kelsi Parenteau" : Good morning Openstack, I hope this message finds you well. I wanted to follow up from Alex's last email below to help to clarify our questions here. We're reaching out to ask your reviewers for their feedback on what had changed on your side during our course of work. https://review.opendev.org/c/openstack/openstack-ansible-os_glance/+/814865 We had been working with your team over many months, and had been tracking to commit the code upstream. We were not sure why the Openstack reviewers had not brought up this potential concern for us earlier on in our discussions to be addressed. Can you please advise us why that particular comment regarding the requirement for this to be an ansible plugin stops us from being able to commit the code? We look forward to your feedback here, and would be happy to schedule a call as well to talk this through. Please let us know if you have any questions. 
Thank you, Kelsi Parenteau, PMP, PMI-ACP, CSM Senior Project Manager d: 586.473.1230 I m: 313.404.3214 ________________________________ From: Alexander Yeremko > Sent: Tuesday, March 29, 2022 4:10 PM To: openstack-discuss at lists.openstack.org > Cc: Tina Wisbiski >; Kelsi Parenteau >; Yuliia Romanova > Subject: plain text config parameters encryption feature Dear OpenStack community, we are developing plain text config secrets encryption feature according to the next specification: https://specs.openstack.org/openstack/openstack-ansible-specs/specs/xena/protecting-plaintext-configs.html We started from Glance OS service and submitted two patchsets already: https://review.opendev.org/c/openstack/openstack-ansible-os_glance/+/814865 Now we have two questions that we need to clarify to proceed our work on that feature and finish our development: 1. Is it correct that we need to develop more patchsets to rework some logic of encryption mechanism according to comment to 'files/encypt_secrets.py' script that arised at the second patchset (PatchSet 2) dated Nov/30/2021 ? Comment is by Dmitry Rabotyagov: "We _really_ should make it as an ansible plugin and re-work logic" 2. We wish to have such feature in previous releases also, not just in upcoming Yoga or Zed. Stein, Train and Victoria - it would be excellent to have plain text secrets encryption with these releases also. So question is how is it possible to use our feature in those releases also? Can we push some backports to those releases openstack-ansible repo? Could someone be so kind and give us answers? Best regards and wishes, Alex Yeremko This E-Mail (including any attachments) may contain privileged or confidential information. It is intended only for the addressee(s) indicated above. The sender does not waive any of its rights, privileges or other protections respecting this information. 
Any distribution, copying or other use of this E-Mail or the information it
contains, by other than an intended recipient, is not sanctioned and is
prohibited. If you received this E-Mail in error, please delete it and
advise the sender (by return E-Mail or otherwise) immediately. Any calls
held by you with Connectria may be recorded by an automated note taking
system to ensure prompt follow up and for information collection purposes,
and your attendance on any calls with Connectria confirms your consent to
this. Any E-mail received by or sent from Connectria is subject to review
by Connectria supervisory personnel.

--
Kind Regards,
Dmitriy Rabotyagov
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From elod.illes at est.tech Mon Apr 4 19:30:32 2022
From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=)
Date: Mon, 4 Apr 2022 21:30:32 +0200
Subject: [ptl][release][stable][EM] Extended Maintenance - Victoria
Message-ID: <4846a89c-b370-02e3-6c43-c539d11687cd@est.tech>

Hi,

As Yoga was released last week and we are in a less busy period (except the
PTG :)), now it is a good time to call your attention to the following:

In a month Victoria is planned to transition to the Extended Maintenance
phase [1] (planned date: 2022-04-27). I have generated the list of the
current *open* and *unreleased* changes in stable/victoria for all
repositories [2] (where there are such patches). These lists could help the
teams who are planning to do a *final* release on Victoria before moving
the stable/victoria branches to Extended Maintenance. Feel free to edit and
extend these lists to track your progress!

* next week the Release Team will tag the *latest* Victoria *releases* of
repositories with the *victoria-em* tag.
* at the planned deadline (April 27th) the Release Team will merge all the
transition patches (even the ones without any response!)
* After the transition stable/victoria will still be open for bug fixes,
but there won't be official releases anymore.

*NOTE*: teams, please focus on wrapping up your libraries first if there is
any concern about the changes, in order to avoid broken (final!) releases!

Thanks,

Előd

[1] https://releases.openstack.org/
[2] https://etherpad.opendev.org/p/victoria-final-release-before-em
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From rafaelweingartner at gmail.com Mon Apr 4 20:00:00 2022
From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=)
Date: Mon, 4 Apr 2022 17:00:00 -0300
Subject: [CloudKitty] April 2022 - PTG Summary
Message-ID:

Hello, CloudKitty community,

Below is my summary of the CloudKitty session during the PTG meeting in
April. More details can be found at [1].

- CloudKitty Yoga release: we started with a discussion reviewing the Yoga
release, where we had major interesting new features introduced, such as
the reprocessing API, the V2 API for the already existing V1 endpoints, and
so on. Moreover, we reviewed the review process for patches, and it seems
to be working nicely for everybody and producing results for CloudKitty;
therefore, we will maintain it.

- Create scope API: there is no blueprint yet, but this is one of the
requests we got. The idea is that one can create the scope (project) in
CloudKitty before it is discovered by CloudKitty. Therefore, one could mark
test/ignored projects for rating before they start to be used and are
picked up by CloudKitty.

- ElasticSearch improvements: there are some improvements already going on,
and Pierre's team is the one conducting them.

- Expand the "custom_fields" in the Summary GET API to support
ElasticSearch backends. As we discussed, this feature might be interesting
to expand and add for other backends such as ElasticSearch. However, we
still need to evaluate if that would be feasible.
- Where do we go from here?: And last, but not least, we discussed the
future of CloudKitty and where we want to go from our current state. The
consensus is that we need to improve the default configurations for other
systems such as Monasca and Prometheus to make it easier for newcomers to
use; keep users happy; and continue innovating and improving CloudKitty.
And, as the next step for that, we will propose a Forum panel for
CloudKitty at the OpenStack summit to see if we can help people's
onboarding and to have a broader discussion regarding the next big
developments we will be doing in CloudKitty.

Those are the main topics we covered during the PTG. If I forgot something,
please let me know. Now, we just gotta work to keep evolving this awesome
billing stack :)

Link for the PTG Etherpad: [1]

[1] https://etherpad.opendev.org/p/cloudkitty-ptg-zed

--
Rafael Weingärtner
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From laurentfdumont at gmail.com Mon Apr 4 21:20:18 2022
From: laurentfdumont at gmail.com (Laurent Dumont)
Date: Mon, 4 Apr 2022 17:20:18 -0400
Subject: Support to allow only boot from cinder volumes
In-Reply-To: <7D17FF18-FDB5-4A7A-89AD-9E0682429BC1@binero.com>
References: <7D17FF18-FDB5-4A7A-89AD-9E0682429BC1@binero.com>
Message-ID:

That is usually what we do, so that boot-from-image also ends up in
something more solid than local-storage / NFS-backed stuff. But having Ceph
is a hassle, especially if I use Swift for the glance images.

On Mon, Apr 4, 2022 at 4:45 AM Tobias Urdin wrote:

> Another way to make it transparent is using the RBD image backend in Nova,
> and thus the instances look like they are running on local disk but are
> spawned on, for example, Ceph; however, that assumes you have such a
> backend.
>
> In the future, which I've wanted for a long time, is an images backend in
> Nova that simply
> is a proxy to calling Cinder and getting a volume, that way it would be
> volumes in both cases
> but that's a lot of work and edge cases that might need to be tuned.
>
> On 4 Apr 2022, at 00:28, Laurent Dumont wrote:
>
> Bummer :(
>
> We have a couple of use cases where I would like this to be transparent
> and things to automagically-unicorn and rainbows ;) Thanks for the insight
> Radoslaw!
>
> On Sun, Apr 3, 2022 at 8:12 AM Radosław Piliszek <
> radoslaw.piliszek at gmail.com> wrote:
>
>> On Sun, 3 Apr 2022 at 12:46, Laurent Dumont
>> wrote:
>> >
>> > Got it!
>> >
>> > By quota, do you mean reserved_host_disk_mb in nova.conf? I could make
>> the /var/lib/nova/instances RO but I am not sure how that would impact
>> config drives that are created locally (since I don't have Ceph)
>>
>> I meant the filesystem quota.
>> And yes, this affects config drives.
>> Unfortunately, the error message from nova might be confusing.
>>
>> > Just to be clear on the behavior, this means that boot-from-image
>> requests would fail?
>> >
>> > Looking at nova.conf, I can disable the number of local disks
>> supported, but this doesn't act as a behavior change when the requests are
>> made.
>> >
>> > I assume, from what I now know, that there is no mechanism to
>> default/transform a request to BFV.
>> >
>> > # A negative number means unlimited. Setting max_local_block_devices
>> > # to 0 means that any request that attempts to create a local disk
>> > # will fail. This option is meant to limit the number of local discs
>> > # (so root local disc that is the result of --image being used, and
>> > # any other ephemeral and swap disks). 0 does not mean that images
>> > # will be automatically converted to volumes and boot instances from
>> > # volumes - it just means that all requests that attempt to create a
>> > # local disk will fail.
>> > #
>> > # Possible values:
>> > #
>> > # * 0: Creating a local disk is not allowed.
>> > # * Negative number: Allows unlimited number of local discs.
>> > # * Positive number: Allows only these many number of local discs.
>> > # (Default value is 3).
>> > # (integer value)
>> > #max_local_block_devices = 3
>>
>> It seems this is actually the best approach (the error message now
>> makes sense). Also confirmed by the faq -
>>
>> https://opendev.org/openstack/nova/src/commit/b0851b0e9c82446aec2ea0317514766fbc53abc0/doc/source/user/block-device-mapping.rst#faqs
>>
>> -yoctozepto
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From johnsomor at gmail.com Mon Apr 4 23:01:08 2022
From: johnsomor at gmail.com (Michael Johnson)
Date: Mon, 4 Apr 2022 16:01:08 -0700
Subject: Constraints and docs requirements
Message-ID:

Hello,

The Designate project recently discovered that Wallaby docs builds were
failing. This started happening because a jinja2 newer than is specified in
the constraints file was being installed, and there was a
non-backward-compatible change that broke the doc builds.

After some debugging we have determined the reason for this is that
`tox -e docs` has two steps in its installation process. The first is
the dependency installation, which respects constraints, and then after
that is the installation of the package itself (in this case
Designate). This second step does not supply constraints info, and it
is expected that all dependencies of the package have been
preinstalled by the first step.

Where we get in trouble is that the docs requirements for projects like
Designate (and we suspect others) are a subset of the package requirements
necessary to install the packages. This means the second step in this
install process discovers that many dependencies need to be installed, and
this happens without constraints, allowing unexpected versions to be pulled
in.
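Both remedies discussed in this thread come down to making the project's
own install happen under upper-constraints as well. A hedged sketch of what
such a docs environment can look like in tox.ini; the file names and
constraints URL follow common OpenStack conventions and are assumptions,
not Designate's exact configuration:

```ini
# tox.ini docs env sketch: constrain *both* install steps by listing the
# project requirements next to the doc requirements, so nothing is left
# for the unconstrained second install. Paths/URL are assumptions.
[testenv:docs]
deps =
  -c{env:TOX_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master}
  -r{toxinidir}/requirements.txt
  -r{toxinidir}/doc/requirements.txt
commands =
  sphinx-build -W -b html doc/source doc/build/html
```

Alternatively, as suggested later in the thread, setting usedevelop = False
and adding "." to the same constrained deps list avoids the second pip
install entirely.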
To resolve this issue, make sure you are including the project requirements along with the doc requirements in your tox environment dependencies section[1]. This will allow pip to install all of the required dependencies, those required for the documentation generation and those for the project, to be all installed using the upper constraints. Thanks to clarkb and fungi for helping track down this issue and composing this email. [1] https://review.opendev.org/c/openstack/designate/+/836410/1/tox.ini From fungi at yuggoth.org Tue Apr 5 00:03:50 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 5 Apr 2022 00:03:50 +0000 Subject: Constraints and docs requirements In-Reply-To: References: Message-ID: <20220405000349.hxcvht676gskc63p@yuggoth.org> On 2022-04-04 16:01:08 -0700 (-0700), Michael Johnson wrote: > After some debugging we have determined the reason for this is that > `tox -e docs` has two steps in its installation process. The first is > the dependency installation which respects constraints and then after > that is the installation of the package itself (in this case > Designate). This second step does not supply constraints info, and it > is expected that all dependencies of the package have been > preinstalled by the first step. [...] Do note that the second pip install is occurring because usedevelop is set to True in the tox.ini. If a project doesn't set usedevelop (which defaults to false), or explicitly sets it to false, the project is not installed unless included in the testenv's deps. > To resolve this issue, make sure you are including the project > requirements along with the doc requirements in your tox environment > dependencies section[1]. This will allow pip to install all of the > required dependencies, those required for the documentation generation > and those for the project, to be all installed using the upper > constraints. [...] An alternative would be to turn off usedevelop and add something like {toxinidir} or "." 
to the deps list in the testenv:docs entry, making sure that the -c option to apply constraints is also included in that deps list. This also avoids the second pip install phase, so may shave a few seconds off the run. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From yasufum.o at gmail.com Tue Apr 5 01:12:56 2022 From: yasufum.o at gmail.com (Yasufumi Ogawa) Date: Tue, 5 Apr 2022 10:12:56 +0900 Subject: [tacker] No IRC meeting on Apr 5th Message-ID: Hi, I'd like to skip IRC meeting today since it's PTG week. Cheers, Yasufumi From rdhasman at redhat.com Tue Apr 5 06:47:51 2022 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Tue, 5 Apr 2022 12:17:51 +0530 Subject: [cinder] Zed PTG schedule In-Reply-To: References: Message-ID: Hi, An update on the Cinder PTG etherpad[1] (inspired from nova), I've added a *Courtesy ping *section for every topic so if you would like to be notified when a topic of your interest is starting, you can add your name there. The ping will be in the #openstack-cinder channel so make sure you've joined it during the Cinder PTG. [1] https://etherpad.opendev.org/p/zed-ptg-cinder Thanks and Regards Rajat Dhasmana On Tue, Mar 29, 2022 at 9:14 PM Rajat Dhasmana wrote: > Hello Argonauts, > > Zed PTG is going to start from next week i.e. 4th April, 2022 to 8th > April, 2022. Kindly register if you haven't already[1]. The registration is > free and acts as a count of the people participating. > > The Cinder team will be meeting from 5th April (Tuesday) till 8th April > (Friday) in the 1300-1700 UTC time slot. > I've arranged the topics as per participants availability and in the > following manner[2]. The connection details are on the etherpad along with > the day wise schedule of topics. I've also arranged them in a tabular > structure[3] (the time allotted to each topic here might vary). 
> > Hope to see everyone next week! > > [1] https://openinfra-ptg.eventbrite.com/ > [2] https://etherpad.opendev.org/p/zed-ptg-cinder > [3] https://ethercalc.openstack.org/crz6qdm7fq0v > > Thanks and regards > Rajat Dhasmana > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonykarera at gmail.com Tue Apr 5 07:06:23 2022 From: tonykarera at gmail.com (Karera Tony) Date: Tue, 5 Apr 2022 09:06:23 +0200 Subject: Kubernetes cluster fails with timed out error Message-ID: Hello Team, I deployed Openstack Wallaby using kolla-ansible and enabled Magnum. The Installation was Ok but when I try to deploy a k8s cluster with fedora-coreos-32.20200601.3.0 image, it fails with a timeout error. When I check the /var/log/heat-config/heat-config-script logs on the master node, I get the error below. ++ kubectl get --raw=/healthz The connection to the server localhost:8080 was refused - did you specify the right host or port? + '[' ok = '' ']' + sleep 5 ++ kubectl get --raw=/healthz The connection to the server localhost:8080 was refused - did you specify the right host or port? + '[' ok = '' ']' + sleep 5 ++ kubectl get --raw=/healthz Any idea on what could be the issue. Regards Regards Tony Karera -------------- next part -------------- An HTML attachment was scrubbed... URL: From vikarnatathe at gmail.com Tue Apr 5 07:21:06 2022 From: vikarnatathe at gmail.com (Vikarna Tathe) Date: Tue, 5 Apr 2022 12:51:06 +0530 Subject: Kubernetes cluster fails with timed out error In-Reply-To: References: Message-ID: Hi Karera, Please try with fcos-33 image and see if it works. Vikarna On Tue, 5 Apr 2022 at 12:45, Karera Tony wrote: > Hello Team, > > I deployed Openstack Wallaby using kolla-ansible and enabled Magnum. > > The Installation was Ok but when I try to deploy a k8s cluster with > fedora-coreos-32.20200601.3.0 image, it fails with a timeout error. 
> > When I check the /var/log/heat-config/heat-config-script logs on the > master node, I get the error below. > > ++ kubectl get --raw=/healthz > The connection to the server localhost:8080 was refused - did you specify > the right host or port? > + '[' ok = '' ']' > + sleep 5 > ++ kubectl get --raw=/healthz > The connection to the server localhost:8080 was refused - did you specify > the right host or port? > + '[' ok = '' ']' > + sleep 5 > ++ kubectl get --raw=/healthz > > > Any idea on what could be the issue. > > Regards > > Regards > > Tony Karera > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonykarera at gmail.com Tue Apr 5 07:42:07 2022 From: tonykarera at gmail.com (Karera Tony) Date: Tue, 5 Apr 2022 09:42:07 +0200 Subject: Kubernetes cluster fails with timed out error In-Reply-To: References: Message-ID: Hi Vikarna, Do you have any links I can download from please ? Regards Tony Karera On Tue, Apr 5, 2022 at 9:21 AM Vikarna Tathe wrote: > Hi Karera, > > Please try with fcos-33 image and see if it works. > > Vikarna > > On Tue, 5 Apr 2022 at 12:45, Karera Tony wrote: > >> Hello Team, >> >> I deployed Openstack Wallaby using kolla-ansible and enabled Magnum. >> >> The Installation was Ok but when I try to deploy a k8s cluster with >> fedora-coreos-32.20200601.3.0 image, it fails with a timeout error. >> >> When I check the /var/log/heat-config/heat-config-script logs on the >> master node, I get the error below. >> >> ++ kubectl get --raw=/healthz >> The connection to the server localhost:8080 was refused - did you specify >> the right host or port? >> + '[' ok = '' ']' >> + sleep 5 >> ++ kubectl get --raw=/healthz >> The connection to the server localhost:8080 was refused - did you specify >> the right host or port? >> + '[' ok = '' ']' >> + sleep 5 >> ++ kubectl get --raw=/healthz >> >> >> Any idea on what could be the issue. 
>> >> Regards >> >> Regards >> >> Tony Karera >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From vikarnatathe at gmail.com Tue Apr 5 08:20:37 2022 From: vikarnatathe at gmail.com (Vikarna Tathe) Date: Tue, 5 Apr 2022 13:50:37 +0530 Subject: Kubernetes cluster fails with timed out error In-Reply-To: References: Message-ID: Hi Karera, You can find the images on the below link...scroll and load all the images https://builds.coreos.fedoraproject.org/browser?stream=stable&arch=x86_64 I used the following image earlier fedora-coreos-33.20210426.3.0-openstack.x86_64.qcow2 Vikarna On Tue, 5 Apr 2022 at 13:12, Karera Tony wrote: > Hi Vikarna, > > Do you have any links I can download from please ? > > Regards > > Tony Karera > > > > > On Tue, Apr 5, 2022 at 9:21 AM Vikarna Tathe > wrote: > >> Hi Karera, >> >> Please try with fcos-33 image and see if it works. >> >> Vikarna >> >> On Tue, 5 Apr 2022 at 12:45, Karera Tony wrote: >> >>> Hello Team, >>> >>> I deployed Openstack Wallaby using kolla-ansible and enabled Magnum. >>> >>> The Installation was Ok but when I try to deploy a k8s cluster with >>> fedora-coreos-32.20200601.3.0 image, it fails with a timeout error. >>> >>> When I check the /var/log/heat-config/heat-config-script logs on the >>> master node, I get the error below. >>> >>> ++ kubectl get --raw=/healthz >>> The connection to the server localhost:8080 was refused - did you >>> specify the right host or port? >>> + '[' ok = '' ']' >>> + sleep 5 >>> ++ kubectl get --raw=/healthz >>> The connection to the server localhost:8080 was refused - did you >>> specify the right host or port? >>> + '[' ok = '' ']' >>> + sleep 5 >>> ++ kubectl get --raw=/healthz >>> >>> >>> Any idea on what could be the issue. >>> >>> Regards >>> >>> Regards >>> >>> Tony Karera >>> >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tonykarera at gmail.com Tue Apr 5 08:26:43 2022 From: tonykarera at gmail.com (Karera Tony) Date: Tue, 5 Apr 2022 10:26:43 +0200 Subject: Kubernetes cluster fails with timed out error In-Reply-To: References: Message-ID: Hello Team, I have been able to resolve the issue after not selecting Disable TLS on the template creation. Thanks Regards Tony Karera On Tue, Apr 5, 2022 at 10:20 AM Vikarna Tathe wrote: > Hi Karera, > > You can find the images on the below link...scroll and load all the images > https://builds.coreos.fedoraproject.org/browser?stream=stable&arch=x86_64 > > I used the following image earlier > fedora-coreos-33.20210426.3.0-openstack.x86_64.qcow2 > > Vikarna > > On Tue, 5 Apr 2022 at 13:12, Karera Tony wrote: > >> Hi Vikarna, >> >> Do you have any links I can download from please ? >> >> Regards >> >> Tony Karera >> >> >> >> >> On Tue, Apr 5, 2022 at 9:21 AM Vikarna Tathe >> wrote: >> >>> Hi Karera, >>> >>> Please try with fcos-33 image and see if it works. >>> >>> Vikarna >>> >>> On Tue, 5 Apr 2022 at 12:45, Karera Tony wrote: >>> >>>> Hello Team, >>>> >>>> I deployed Openstack Wallaby using kolla-ansible and enabled Magnum. >>>> >>>> The Installation was Ok but when I try to deploy a k8s cluster with >>>> fedora-coreos-32.20200601.3.0 image, it fails with a timeout error. >>>> >>>> When I check the /var/log/heat-config/heat-config-script logs on the >>>> master node, I get the error below. >>>> >>>> ++ kubectl get --raw=/healthz >>>> The connection to the server localhost:8080 was refused - did you >>>> specify the right host or port? >>>> + '[' ok = '' ']' >>>> + sleep 5 >>>> ++ kubectl get --raw=/healthz >>>> The connection to the server localhost:8080 was refused - did you >>>> specify the right host or port? >>>> + '[' ok = '' ']' >>>> + sleep 5 >>>> ++ kubectl get --raw=/healthz >>>> >>>> >>>> Any idea on what could be the issue. 
>>>> >>>> Regards >>>> >>>> Regards >>>> >>>> Tony Karera >>>> >>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Tue Apr 5 09:07:27 2022 From: marios at redhat.com (Marios Andreou) Date: Tue, 5 Apr 2022 12:07:27 +0300 Subject: [TripleO] Move tripleo repos stable/ussuri to End Of Life OK? In-Reply-To: References: Message-ID: On Tue, Mar 22, 2022 at 5:06 PM Marios Andreou wrote: > > > On Fri, Mar 18, 2022 at 5:40 PM Jiri Podivin wrote: > >> I suppose the corresponding rdo distgit branches will follow suit? >> Once again most of the open reviews[6] are backports. >> >> [6]https://review.rdoproject.org/r/q/branch:ussuri-rdo+status:open >> >> > well those won't be removed as part of > https://review.opendev.org/c/openstack/releases/+/834049 - that will only > remove the source code branches. > However indeed the rdo team will likely want to remove the corresponding > distgit for each of the tripleo repos being eol for ussuri after we move > forward. > > At this point I think we'll probably hold on this until ptg which is just > under 2 weeks away but I haven't heard any pushback against this proposal > so far, > > k... just to close this out... this was also discussed in yesterday's TripleO PTG meetup and we are going to go ahead as proposed I'll send another message to request halt on posting patches to stable/ussuri for better visibility regards > marios > > > >> On Fri, Mar 18, 2022 at 4:35 PM Marios Andreou wrote: >> >>> >>> >>> On Thu, Mar 17, 2022 at 9:20 AM Jiri Podivin >>> wrote: >>> >>>> Hi, >>>> As far as Tripleo-Validations repo[5] is concerned I don't believe we >>>> should see any issues. >>>> For the most part we are treating the stable/ussuri branch as a >>>> necessary, but not very meaningful, step when backporting changes to >>>> stable/train. 
>>>> >>>> [5]https://review.opendev.org/admin/repos/openstack/tripleo-validations >>>> >>>> >>> thanks for confirming Jirka >>> >>> so assuming we go ahead with this we will at some point need to close >>> all open reviews against stable ussuri - there aren't too many as we would >>> expect (just things being backported - validations has 3 currently). For >>> convenience, links for all the repos: >>> >>> >>> >>> https://review.opendev.org/q/project:openstack%252Fos-apply-config+status:open+branch:stable/ussuri >>> >>> https://review.opendev.org/q/project:openstack%252Fos-collect-config+status:open+branch:stable/ussuri >>> >>> https://review.opendev.org/q/project:openstack%252Fos-net-config+status:open+branch:stable/ussuri >>> >>> https://review.opendev.org/q/project:openstack%252Fos-refresh-config+status:open+branch:stable/ussuri >>> >>> https://review.opendev.org/q/project:openstack%252Fpaunch+status:open+branch:stable/ussuri >>> >>> https://review.opendev.org/q/project:openstack%252Fpuppet-tripleo+status:open+branch:stable/ussuri >>> >>> https://review.opendev.org/q/project:openstack%252Fpython-tripleoclient+status:open+branch:stable/ussuri >>> >>> https://review.opendev.org/q/project:openstack%252Ftripleo-ansible+status:open+branch:stable/ussuri >>> >>> https://review.opendev.org/q/project:openstack%252Ftripleo-common+status:open+branch:stable/ussuri >>> >>> https://review.opendev.org/q/project:openstack%252Ftripleo-heat-templates+status:open+branch:stable/ussuri >>> >>> https://review.opendev.org/q/project:openstack%252Ftripleo-image-elements+status:open+branch:stable/ussuri >>> >>> https://review.opendev.org/q/project:openstack%252Ftripleo-ipsec+status:open+branch:stable/ussuri >>> >>> https://review.opendev.org/q/project:openstack%252Ftripleo-puppet-elements+status:open+branch:stable/ussuri >>> >>> https://review.opendev.org/q/project:openstack%252Ftripleo-validations+status:open+branch:stable/ussuri >>> >>> >>> I'll bring this up again in #tripleo next week 
and see if there are any >>> objections, otherwise we can call it and start getting folks to close out >>> those open reviews and stop posting new ones, >>> >>> thanks, marios >>> >>> >>> >>> >>>> On Wed, Mar 16, 2022 at 5:45 PM Marios Andreou >>>> wrote: >>>> >>>>> Hello TripleO o/ >>>>> >>>>> The tripleo-ci team proposes that we move the stable/ussuri branch for >>>>> all tripleo repos [1] to End of Life [2]. >>>>> >>>>> The branch was moved to extended maintenance with [3] so we can >>>>> already no longer make any new releases for ussuri/tripleo repos. >>>>> >>>>> Are there any objections or concerns about this? If so please speak up >>>>> here or directly on the patch. I think that any patches posted to ussuri >>>>> nowadays are mainly about cherrypicking back to train, rather than merging >>>>> something to ussuri specifically. >>>>> >>>>> I have posted the proposal to move to EOL at [4] and have WF-1 to wait >>>>> for this discussion. >>>>> >>>>> If there are no objections then once we move to EOL the tripleo-ci >>>>> team will remove all Ussuri related CI, check/gate/promotions. >>>>> >>>>> regards, marios >>>>> >>>>> [1] https://releases.openstack.org/teams/tripleo.html#ussuri >>>>> [2] >>>>> https://docs.openstack.org/project-team-guide/stable-branches.html#maintenance-phases >>>>> [3] https://review.opendev.org/c/openstack/releases/+/817623 >>>>> [4] https://review.opendev.org/c/openstack/releases/+/834049 >>>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Tue Apr 5 09:25:53 2022 From: marios at redhat.com (Marios Andreou) Date: Tue, 5 Apr 2022 12:25:53 +0300 Subject: [TripleO] please stop posting patches for tripleo* stable/ussuri (going EOL) Message-ID: Hello TripleO As proposed at [1] and also discussed in yesterday's TripleO Z PTG meet [2] we are going to move stable/ussuri for all tripleo repos to EOL. In order to move ahead we need to have no open patches against stable/ussuri. 
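The per-repo dashboard links below answer this in a browser; for checking from a script, the same query can go against Gerrit's REST API. A rough sketch, not official tooling — the `/changes/` query endpoint and the `)]}'` anti-XSSI prefix are standard Gerrit REST behavior, but the sample change data below is made up:

```python
import json

GERRIT = "https://review.opendev.org"

def open_reviews_url(project, branch="stable/ussuri"):
    # Build the query URL for open changes on one branch of one project.
    return f"{GERRIT}/changes/?q=project:{project}+branch:{branch}+status:open"

def parse_gerrit_json(payload):
    # Gerrit prefixes JSON responses with )]}' to defeat XSSI;
    # strip it before decoding.
    prefix = ")]}'"
    if payload.startswith(prefix):
        payload = payload[len(prefix):]
    return json.loads(payload)

# Canned response standing in for a live GET (this change is invented):
sample_body = ")]}'\n" + json.dumps(
    [{"_number": 800001, "subject": "Example backport", "status": "NEW"}]
)

print(open_reviews_url("openstack/tripleo-common"))
for change in parse_gerrit_json(sample_body):
    print(change["_number"], change["subject"])
```

Feeding the body of a real GET against that URL through the same parser would list the changes that still need merging or abandoning.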
** please stop posting patches to stable/ussuri tripleo repos ** If you have open patches can you please either get them merged, or abandon them by next Tuesday 12th. After this we will have to abandon any open patches (e.g. folks moved on/not even looking there) and then I can update the EOL proposal at [3] with the latest commit hashes so we can proceed. For reference the repos with open reviews at time of writing are at [4] below. Please speak up if you need more time or with any other comments regards, marios [1] http://lists.openstack.org/pipermail/openstack-discuss/2022-April/028025.html [2] https://etherpad.opendev.org/p/tripleo-zed-ci-load [3] https://review.opendev.org/c/openstack/releases/+/834049 [4] ( list of repos with open reviews for stable/ussuri): * https://review.opendev.org/q/project:openstack%252Fos-net-config+status:open+branch:stable/ussuri * https://review.opendev.org/q/project:openstack%252Fpaunch+status:open+branch:stable/ussuri * https://review.opendev.org/q/project:openstack%252Fpuppet-tripleo+status:open+branch:stable/ussuri * https://review.opendev.org/q/project:openstack%252Fpython-tripleoclient+status:open+branch:stable/ussuri * https://review.opendev.org/q/project:openstack%252Ftripleo-ansible+status:open+branch:stable/ussuri * https://review.opendev.org/q/project:openstack%252Ftripleo-common+status:open+branch:stable/ussuri * https://review.opendev.org/q/project:openstack%252Ftripleo-heat-templates+status:open+branch:stable/ussuri * https://review.opendev.org/q/project:openstack%252Ftripleo-puppet-elements+status:open+branch:stable/ussuri * https://review.opendev.org/q/project:openstack%252Ftripleo-validations+status:open+branch:stable/ussuri -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From christian.rohmann at inovex.de Tue Apr 5 09:56:58 2022 From: christian.rohmann at inovex.de (Christian Rohmann) Date: Tue, 5 Apr 2022 11:56:58 +0200 Subject: Problems running Watcher on DevStack Message-ID: <881258d5-fc52-1f1f-d57a-f897603d0363@inovex.de> Hello there, I am trying to set up a devstack which is running Watcher, but I always run into problems with the setup not finishing and throwing errors (see below). To clarify: I am trying to set up the devstack with multiple nodes (three nodes to be exact, one control/compute and two compute) following the instructions at https://docs.openstack.org/watcher/latest/contributor/devstack.html which leads to this configuration: control / first compute node: > [[local|localrc]] > ADMIN_PASSWORD="" > DATABASE_PASSWORD=$ADMIN_PASSWORD > RABBIT_PASSWORD=$ADMIN_PASSWORD > SERVICE_PASSWORD=$ADMIN_PASSWORD > SERVICE_HOST="" > HOST_IP="" > FIXED_RANGE="10.4.128.0/20" > FLOATING_RANGE="subnet range" > > DEVSTACK_RELEASE="stable/yoga" > > LOGFILE=/opt/stack/logs/stack.sh.log > LOGDAYS=7 > LOG_COLOR=False > > enable_plugin watcher https://opendev.org/openstack/watcher > $DEVSTACK_RELEASE > > enable_plugin watcher-dashboard > https://opendev.org/openstack/watcher-dashboard $DEVSTACK_RELEASE > > enable_plugin ceilometer https://opendev.org/openstack/ceilometer.git > $DEVSTACK_RELEASE > CEILOMETER_BACKEND=gnocchi > > enable_service ceilometer-api > enable_service ceilometer-acompute > > enable_plugin gnocchi https://github.com/gnocchixyz/gnocchi > > # OpenStack Telemetry (Ceilometer) Alarming > # enable_plugin aodh https://opendev.org/openstack/aodh $DEVSTACK_RELEASE > > # I did not use the panko project which is mentioned in the > documentation because it is deprecated > > [[post-config|$NOVA_CONF]] > [DEFAULT] > compute_monitors=cpu.virt_driver > [scheduler] > discover_hosts_in_cells_interval=2
additional compute node(s): > [[local|localrc]] > HOST_IP="" > FIXED_RANGE=10.4.128.0/20 > FLOATING_RANGE="" > ADMIN_PASSWORD="" > DATABASE_PASSWORD=$ADMIN_PASSWORD > RABBIT_PASSWORD=$ADMIN_PASSWORD > SERVICE_PASSWORD=$ADMIN_PASSWORD > DATABASE_TYPE=mysql > SERVICE_HOST="" > MYSQL_HOST=$SERVICE_HOST > RABBIT_HOST=$SERVICE_HOST > GLANCE_HOSTPORT=$SERVICE_HOST:9292 > Q_HOST=$SERVICE_HOST > OVN_SB_REMOTE=tcp:$SERVICE_HOST:6642 > disable_all_services > ENABLED_SERVICES=n-cpu,placement-client,ovn-controller,q-ovn-metadata-agent > NOVA_VNC_ENABLED=True > NOVNCPROXY_URL="http://$SERVICE_HOST:6080/vnc_lite.html" > VNCSERVER_LISTEN=$HOST_IP > VNCSERVER_PROXYCLIENT_ADDRESS=$VNCSERVER_LISTEN > > DEVSTACK_RELEASE="stable/yoga" > > LOGFILE=/opt/stack/logs/stack.sh.log > LOGDAYS=7 > LOG_COLOR=False > > enable_plugin ceilometer https://opendev.org/openstack/ceilometer > $DEVSTACK_RELEASE > disable_service ceilometer-acentral > disable_service ceilometer-collector > disable_service ceilometer-api > > [[post-config|$NOVA_CONF]] > [DEFAULT] > compute_monitors=cpu.virt_driver In addition I have tried to use the configuration that can be found in the repository of the watcher project at https://opendev.org/openstack/watcher/src/branch/master/devstack. The error is always the same, with stack.sh ending in: > ++ /opt/stack/ceilometer/devstack/plugin.sh:configure_ceilometer:244 > : cp /opt/stack/ceilometer/etc/ceilometer/polling_all.yaml > /etc/ceilometer/polling.yaml > ++ /opt/stack/ceilometer/devstack/plugin.sh:configure_ceilometer:246 > : cp > /opt/stack/ceilometer/ceilometer/pipeline/data/event_definitions.yaml > /opt/stack/ceilometer/ceilometer/pipeline/data/event_pipeline.yaml > /opt/stack/ceilometer/ceilometer/pipeline/data/pipeline.yaml > /etc/ceilometer > ++ /opt/stack/ceilometer/devstack/plugin.sh:configure_ceilometer:248 > : '[' '' ']' > ++ /opt/stack/ceilometer/devstack/plugin.sh:configure_ceilometer:251 > :
'[' False == True ']' > ++ /opt/stack/ceilometer/devstack/plugin.sh:configure_ceilometer:259 > : iniset /etc/ceilometer/ceilometer.conf service_credentials > auth_type password > ++ /opt/stack/ceilometer/devstack/plugin.sh:configure_ceilometer:260 > : iniset /etc/ceilometer/ceilometer.conf service_credentials > user_domain_id default > ++ /opt/stack/ceilometer/devstack/plugin.sh:configure_ceilometer:261 > : iniset /etc/ceilometer/ceilometer.conf service_credentials > project_domain_id default > ++ /opt/stack/ceilometer/devstack/plugin.sh:configure_ceilometer:262 > : iniset /etc/ceilometer/ceilometer.conf service_credentials > project_name service > ++ /opt/stack/ceilometer/devstack/plugin.sh:configure_ceilometer:263 > : iniset /etc/ceilometer/ceilometer.conf service_credentials > username ceilometer > ++ /opt/stack/ceilometer/devstack/plugin.sh:configure_ceilometer:264 > : iniset /etc/ceilometer/ceilometer.conf service_credentials > password > ++ /opt/stack/ceilometer/devstack/plugin.sh:configure_ceilometer:265 > : iniset /etc/ceilometer/ceilometer.conf service_credentials > region_name RegionOne > ++ /opt/stack/ceilometer/devstack/plugin.sh:configure_ceilometer:266 > : iniset /etc/ceilometer/ceilometer.conf service_credentials > auth_url http://192.168.0.100/identity > ++ /opt/stack/ceilometer/devstack/plugin.sh:configure_ceilometer:268 > : configure_auth_token_middleware /etc/ceilometer/ceilometer.conf > ceilometer /var/cache/ceilometer > ++ lib/keystone:configure_auth_token_middleware:467 :
echo 'WARNING: > configure_auth_token_middleware is deprecated, use > configure_keystone_authtoken_middleware instead' > WARNING: configure_auth_token_middleware is deprecated, use > configure_keystone_authtoken_middleware instead > ++ lib/keystone:configure_auth_token_middleware:468 : > configure_keystone_authtoken_middleware > /etc/ceilometer/ceilometer.conf ceilometer > ++ lib/keystone:configure_keystone_authtoken_middleware:447 : local > conf_file=/etc/ceilometer/ceilometer.conf > ++ lib/keystone:configure_keystone_authtoken_middleware:448 : local > admin_user=ceilometer > ++ lib/keystone:configure_keystone_authtoken_middleware:449 : local > section=keystone_authtoken > ++ lib/keystone:configure_keystone_authtoken_middleware:451 : iniset > /etc/ceilometer/ceilometer.conf keystone_authtoken auth_type password > ++ lib/keystone:configure_keystone_authtoken_middleware:452 : iniset > /etc/ceilometer/ceilometer.conf keystone_authtoken interface public > ++ lib/keystone:configure_keystone_authtoken_middleware:453 : iniset > /etc/ceilometer/ceilometer.conf keystone_authtoken auth_url > http://192.168.0.100/identity > ++ lib/keystone:configure_keystone_authtoken_middleware:454 : iniset > /etc/ceilometer/ceilometer.conf keystone_authtoken username ceilometer > ++ lib/keystone:configure_keystone_authtoken_middleware:455 : iniset > /etc/ceilometer/ceilometer.conf keystone_authtoken password > ++ lib/keystone:configure_keystone_authtoken_middleware:456 : iniset > /etc/ceilometer/ceilometer.conf keystone_authtoken user_domain_name > Default > ++ lib/keystone:configure_keystone_authtoken_middleware:457 : iniset > /etc/ceilometer/ceilometer.conf keystone_authtoken project_name service > ++ lib/keystone:configure_keystone_authtoken_middleware:458 : iniset > /etc/ceilometer/ceilometer.conf keystone_authtoken project_domain_name > Default > ++ lib/keystone:configure_keystone_authtoken_middleware:460 : iniset > /etc/ceilometer/ceilometer.conf keystone_authtoken cafile > 
/opt/stack/data/ca-bundle.pem > ++ lib/keystone:configure_keystone_authtoken_middleware:461 : iniset > /etc/ceilometer/ceilometer.conf keystone_authtoken memcached_servers > localhost:11211 > ++ /opt/stack/ceilometer/devstack/plugin.sh:configure_ceilometer:270 > : [[ libvirt = \v\s\p\h\e\r\e ]] > ++ /opt/stack/ceilometer/devstack/plugin.sh:configure_ceilometer:277 > : _ceilometer_configure_storage_backend > ++ > /opt/stack/ceilometer/devstack/plugin.sh:_ceilometer_configure_storage_backend:208 > : '[' gnocchi = none ']' > ++ > /opt/stack/ceilometer/devstack/plugin.sh:_ceilometer_configure_storage_backend:210 > : '[' gnocchi = gnocchi ']' > ++ > /opt/stack/ceilometer/devstack/plugin.sh:_ceilometer_configure_storage_backend:211 > : sed -i > 's/gnocchi:\/\//gnocchi:\/\/?archive_policy=ceilometer-low\&filter_project=gnocchi_swift/' > /etc/ceilometer/event_pipeline.yaml /etc/ceilometer/pipeline.yaml > ++ > /opt/stack/ceilometer/devstack/plugin.sh:_ceilometer_configure_storage_backend:212 > : [[ ,watcher,watcher-dashboard,ceilometer,gnocchi =~ gnocchi ]] > + /opt/stack/ceilometer/devstack/plugin.sh:configure_ceilometer:1 : > exit_trap What am I missing? Any configuration parameter? Thanks in advance Christian From mrunge at matthias-runge.de Tue Apr 5 12:02:35 2022 From: mrunge at matthias-runge.de (Matthias Runge) Date: Tue, 5 Apr 2022 14:02:35 +0200 Subject: Problems running Watcher on DevStack In-Reply-To: <881258d5-fc52-1f1f-d57a-f897603d0363@inovex.de> References: <881258d5-fc52-1f1f-d57a-f897603d0363@inovex.de> Message-ID: <6e7a6ce9-1481-c617-b87d-6c07aa7b5412@matthias-runge.de> On 4/5/22 11:56, Christian Rohmann wrote: > Hello there, > > I am trying to setup devstack which is running Watcher. > > But I am running always into problems with setup not finishing and > throwing errors (see below). >> >> enable_plugin ceilometer https://opendev.org/openstack/ceilometer.git >> $DEVSTACK_RELEASE >> CEILOMETER_BACKEND=gnocchi >>
>> enable_service ceilometer-api >> enable_service ceilometer-acompute >> I'd switch setting CEILOMETER_BACKEND and enable_plugin ceilometer (swap these lines). Matthias From marios at redhat.com Tue Apr 5 12:14:53 2022 From: marios at redhat.com (Marios Andreou) Date: Tue, 5 Apr 2022 15:14:53 +0300 Subject: [tripleo][RDO] Tags for Yoga GA version in tripleo projects In-Reply-To: References: Message-ID: On Fri, Mar 25, 2022 at 1:03 PM Alfredo Moralejo Alonso wrote: > > > > On Thu, Mar 24, 2022 at 7:23 AM Marios Andreou wrote: >> >> On Wed, Mar 23, 2022 at 4:24 PM Alfredo Moralejo Alonso wrote: >>> >>> Hi, >>> >>> As we are doing builds for RDO Yoga release, I'd like to know what we should include in yoga release. In Xena, tags were created with the commits for the most recent promotion at around GA and shipped rpm packages for them. Additionally, we pinned RDO Trunk Xena to those versions. Will we get similar tags for Yoga? >>> >> >> Hi Alfredo - I think it's a fair request and I guess by replying here I am also volunteering to do that ;) >> >> I can prepare a release just after PTG (that is usually about the time we would cut the new branch and make the release) is that OK for you? I am guessing you'll be blocking the release for this? Or possibly can you release without tripleo then release those afterwards if you don't want to block? >> > > Thanks Marios. I think after PTG is fine for us, we can wait for it. I'd prefer to do the RDO Yoga announcement when we have tripleo packages even if we push builds for other packages to the repositories before.
> FYI started the reviews there https://review.opendev.org/c/openstack/puppet-tripleo/+/836617 & https://review.opendev.org/c/openstack/releases/+/836638 I used the versions.csv from [1] which is the latest current-tripleo for centos9 master [2] (7c3595fcdce0ec20189de8d5b99dec16) [1] https://trunk.rdoproject.org/centos9-master/current-tripleo/7c/35/7c3595fcdce0ec20189de8d5b99dec16/versions.csv [2] https://trunk.rdoproject.org/centos9-master/current-tripleo/delorean.repo.md5 regards > > Alfredo > > >> >> Assuming slagle or anyone else doesn't have any objections to that of course >> >> regards, marios >> >> >> >>> >>> Best regards, >>> >>> Alfredo >>> From gthiemonge at redhat.com Tue Apr 5 16:45:05 2022 From: gthiemonge at redhat.com (Gregory Thiemonge) Date: Tue, 5 Apr 2022 18:45:05 +0200 Subject: [Octavia] Weekly meeting cancelled Message-ID: Hi, As this is the PTG week, the Octavia weekly meeting is cancelled. Thanks, Gregory -------------- next part -------------- An HTML attachment was scrubbed... URL: From akekane at redhat.com Tue Apr 5 16:53:26 2022 From: akekane at redhat.com (Abhishek Kekane) Date: Tue, 5 Apr 2022 22:23:26 +0530 Subject: [glance] Zed PTG schedule In-Reply-To: References: Message-ID: Hi Everyone, An update on the Glance PTG etherpad[1]: on Friday 08 April, we will be having another session to discuss the Secure RBAC community goal, where we will be discussing what work we need to target in this cycle. This session will be held after the Open hour session happening on Thursday, so it will give us more clarity about the community goal and the Zed cycle target. [1] https://etherpad.opendev.org/p/zed-glance-ptg Thanks & Best Regards, Abhishek Kekane On Mon, Mar 28, 2022 at 7:26 PM Abhishek Kekane wrote: > Hello All, > Greetings!!! > > Zed PTG is going to start next week and if you haven't already registered, > please do so as soon as possible [1]. > > I have created an etherpad [2] and also added day wise topics along with > timings we are going to discuss.
Kindly let me know if you have any > concerns with allotted time slots. We also have one slot open on Wednesday > and Friday is kept reserved for any unplanned discussions. So please feel > free to add your topics if you still haven't added yet. > > As a reminder, these are the time slots for our discussion. > > Tuesday 5 April 2022 > 1400 UTC to 1700 UTC > > Wednesday 6 April 2022 > 1400 UTC to 1700 UTC > > Thursday 7 April 2022 > 1400 UTC to 1700 UTC > > Friday 8 April 2022 > 1400 UTC to 1700 UTC > > NOTE: > At the moment we don't have any sessions scheduled on Friday, if there are > any last moment request(s)/topic(s) we will discuss them on Friday else we > will conclude our PTG on Thursday 7th April. > > We will be using bluejeans for our discussion, kindly try to use it once > before the actual discussion. The meeting URL is mentioned in etherpad [2] > and will be the same throughout the PTG. > > [1] https://openinfra-ptg.eventbrite.com/ > [2] https://etherpad.opendev.org/p/zed-glance-ptg > > Thank you, > > Abhishek > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mbollo at redhat.com Tue Apr 5 08:15:11 2022 From: mbollo at redhat.com (Daniel Mats Niklas Bengtsson) Date: Tue, 5 Apr 2022 10:15:11 +0200 Subject: [Oslo] IRC meeting. Message-ID: Hi there, I would like to know if you agree that we have the meeting once a week? Instead of the first and third Monday of the month. It will be easier to manage and if sometimes it is canceled it does not matter Even if the meetings each week are short, this way we will have regular follow-up. From ignaziocassano at gmail.com Tue Apr 5 19:01:45 2022 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Tue, 5 Apr 2022 21:01:45 +0200 Subject: [Openstack][nova] server groups Message-ID: Hello, we noted that instances can be inserted in server groups only at instance creation step but we need to insert in a server group some old instances. 
We tried to modify the database nova_api server group tables but we noted that we must modify the spec in the request_specs table. For us it is not clear how to modify the spec value. We tried to investigate by looking at instances inserted in a server group at the creation step, and we got issues in instance live migration. Please, could anyone provide any utility or template to do it? Thanks Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From peljasz at yahoo.co.uk Tue Apr 5 19:42:18 2022 From: peljasz at yahoo.co.uk (lejeczek) Date: Tue, 5 Apr 2022 20:42:18 +0100 Subject: wireguard - ? - puzzle In-Reply-To: References: Message-ID: On 04/04/2022 13:53, Sean Mooney wrote: > On Mon, 2022-04-04 at 09:04 +0100, lejeczek wrote: >> Hi guys. >> >> Has anybody solved that puzzle? >> Or perhaps it's not a puzzle at all, I'd imagine might be >> trivial to experts. >> >> First I thought - and only thought so far thus asking here - >> 'allowed_address_pairs' I'd need but that obviously does not >> do anything as 'wireguard' creates its own ifaces. >> So.. how do you get your 'wireguard' in openstack to route >> (no NAT) to instances' local network(s)? > i have not done this but i suspect you would need to enable the subnet used by wireguard > in the allowed address pairs as you said on the instance that is hosting the wireguard endpoint. > then set a static route in the neutron router so other instances know how to access it. > openstack router set --route destination=,gateway= > you might also need to configure some security group rules but I'm not certain on the last point. But doesn't openstack's neutron do some mac "firewalling", which if it does, would "break" that wireguard iface always/anyways, right? I see that and more weird instance network behavior, when I set the wg iface to use an IP on the instance's local net, which IP otherwise works - with allowed_address_pairs - when set as a secondary IP to a "real" iface.
Also, is what you suggest on the "admin" end, or can it be done by a non-admin consumer? many thanks, L. > if you run wireguard in a vm it is basically becoming a router which is not something that we typically > expect vms to do but other services like octavia do this when they deploy loadbalancers and the vpn-as-a-service extension similarly > did this in the past so this should be possible with the existing api. >> many thanks, L. >> From fungi at yuggoth.org Tue Apr 5 19:46:13 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 5 Apr 2022 19:46:13 +0000 Subject: [dev][infra][qa][tact-sig] Zuul behavior change with Depends-On across queues Message-ID: <20220405194612.nt3eu2k5nwsufosx@yuggoth.org> For those who haven't seen the more detailed announcement[*] about it, just a quick note that if you get a sudden -2 back from Zuul when approving a change with a Depends-On to a change in a different project which hasn't merged yet, that's likely an indication those projects don't share a dependent queue. It's not a bug, but an intentional clarification of Zuul's enqueuing behavior. For most other Zuul deployments (and even our other Zuul tenants in OpenDev) this is purely cosmetic, but since OpenStack's Zuul tenant is configured to require a positive Verified vote before enqueuing into the gate pipeline, it means some changes may end up unexpectedly needing another pass through check first. It's worth re-evaluating whether or not this "clean check" rule remains a useful requirement for gating. It was added some years ago because a number of gate breaking bugs were traced back to unstable changes being rechecked enough times that eventually they got lucky and were able to merge, and then their instability contributed to destabilizing the integrated gate as a whole. Similarly, changes were being approved without reviewers confirming their jobs were passing first, and this led to additional resource waste.
There is a bit of discussion around "blind rechecks" at the PTG this week, and so this topic is related; it might be a good idea to consider it in conjunction with the greater recheck conversation. [*] http://lists.opendev.org/pipermail/service-announce/2022-April/000033.html -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From dsmigiel at redhat.com Tue Apr 5 19:58:12 2022 From: dsmigiel at redhat.com (Dariusz Smigiel) Date: Tue, 5 Apr 2022 12:58:12 -0700 Subject: [TripleO] Gate blockers - C8 Wallaby & C9 Master|Wallaby Message-ID: Hey! TripleO team noticed two separate issues which hit gates about 5h ago: * C8 Wallaby https://bugs.launchpad.net/tripleo/+bug/1967943 * C9 Master|Wallaby: https://bugs.launchpad.net/tripleo/+bug/1967945 Please withhold rechecking until further notice. Thanks, Dariusz From cboylan at sapwetik.org Tue Apr 5 20:28:43 2022 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 05 Apr 2022 13:28:43 -0700 Subject: [TripleO] Gate blockers - C8 Wallaby & C9 Master|Wallaby In-Reply-To: References: Message-ID: <12fb1f58-2ad8-4763-9761-cd6f9eabb729@www.fastmail.com> On Tue, Apr 5, 2022, at 12:58 PM, Dariusz Smigiel wrote: > Hey! > TripleO team noticed two separate issues which hit gates about 5h ago: > > * C8 Wallaby https://bugs.launchpad.net/tripleo/+bug/1967943 > * C9 Master|Wallaby: https://bugs.launchpad.net/tripleo/+bug/1967945 > > Please withhold rechecking until further notice. I think both of these issues are due to the problem where PyPI's CDN will fallback to the backup backend, and that backup backend is stale without newer package releases. OpenStack notices because constraints require specific versions that cannot be satisfied in these situations. Most other users of PyPI end up getting old versions. 
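In other words, exact pins fail loudly while unpinned installs silently regress. A toy sketch of that difference (illustrative only — this is not real pip/PyPI resolver logic, and the versions are made up):

```python
# Toy model of resolving a package against a stale index. OpenStack's
# upper-constraints pin exact versions, so a stale backup CDN backend that
# lacks the newest release makes installation fail outright instead of
# quietly handing out an older version.

def resolve(index_versions, pin=None):
    """Return the version that would be installed, or None if unsatisfiable."""
    if pin is not None:
        return pin if pin in index_versions else None
    # Unpinned consumers just take the newest version the index offers.
    return max(index_versions)

primary = ["1.2.2", "1.2.3"]   # up-to-date CDN backend
stale = ["1.2.2"]              # backup backend missing the latest release

print(resolve(primary, pin="1.2.3"))  # 1.2.3
print(resolve(stale, pin="1.2.3"))    # None -> the pinned gate breaks visibly
print(resolve(stale))                 # 1.2.2 -> unpinned users silently get the old version
```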
If you request the package index for the packages experiencing these problems through our mirrors (which are actually just caching proxies) you'll typically find that all the versions are there, as this problem is infrequent. Though infrequent, it only takes one of the many packages being installed having a problem to cause everything to break. Others should double-check this, though, as it is possible there is a requirement conflict or some other problem causing this. > > Thanks, > Dariusz From dsmigiel at redhat.com Tue Apr 5 21:19:41 2022 From: dsmigiel at redhat.com (Dariusz Smigiel) Date: Tue, 5 Apr 2022 14:19:41 -0700 Subject: [TripleO] Gate blockers - C8 Wallaby & C9 Master|Wallaby In-Reply-To: <12fb1f58-2ad8-4763-9761-cd6f9eabb729@www.fastmail.com> References: <12fb1f58-2ad8-4763-9761-cd6f9eabb729@www.fastmail.com> Message-ID: > > * C8 Wallaby https://bugs.launchpad.net/tripleo/+bug/1967943 > > * C9 Master|Wallaby: https://bugs.launchpad.net/tripleo/+bug/1967945 > > > > Please withhold rechecking until further notice. > > I think both of these issues are due to the problem where PyPI's CDN will fallback to the backup backend, and that backup backend is stale without newer package releases. OpenStack notices because constraints require specific versions that cannot be satisfied in these situations. Most other users of PyPI end up getting old versions. Clark, I think you hit the nail on the head here: https://status.python.org/incidents/mxgkk3xxr9v7?u=v8pzlr5n28h8 Thanks for that explanation. I learned a new thing about our infra. Dariusz From laurentfdumont at gmail.com Tue Apr 5 21:39:10 2022 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Tue, 5 Apr 2022 17:39:10 -0400 Subject: wireguard - ? - puzzle In-Reply-To: References: Message-ID: Allowed_address_pairs also lets you add MAC addresses. You can also disable port_security at the port level to remove any restrictions. 
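Putting that together with the static-route idea quoted earlier in the thread, a rough sketch with the unified client (untested; the port/router names and the WireGuard subnet 10.8.0.0/24 are placeholders to adapt):

```shell
# Allow the WireGuard tunnel subnet on the VM's existing Neutron port
openstack port set --allowed-address ip-address=10.8.0.0/24 wg-vm-port

# Static route so other instances can reach the tunnel subnet via the VM's fixed IP
openstack router set --route destination=10.8.0.0/24,gateway=192.168.0.10 router1

# Or, bluntly, turn off MAC/IP filtering on the port entirely
# (security groups must be detached before port security can be disabled)
openstack port set --no-security-group --disable-port-security wg-vm-port
```

The port-level changes can usually be made by the port's owner; the router route needs whoever owns the router, which depending on policy may be the project or an admin.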
# neutron port-create net1 --allowed-address-pairs type=dict list=true mac_address=,ip_address= On Tue, Apr 5, 2022 at 3:44 PM lejeczek wrote: > > > On 04/04/2022 13:53, Sean Mooney wrote: > > On Mon, 2022-04-04 at 09:04 +0100, lejeczek wrote: > >> Hi guys. > >> > >> Has anybody solved that puzzle? > >> Or perhaps it's not a puzzle at all, I'd imagine might be > >> trivial to experts. > >> > >> First I thought - and only thought so far thus asking here - > >> 'allowed_address_pairs' I'd need but that obviously does not > >> do anything as 'wireguard' creates its own ifaces. > >> So.. how do you get your 'wireguard' in openstack to route > >> (no NAT) to instances' local network(s)? > > i have not done this but i suspect you would need to enable the subnet > used by wireguard > > in the allowed adres pairs as you said on the instnace that is hosting > the wireguard endpoint. > > then set a staic route in the neutron router so other instance knew how > to acess it. > > openstack router set --route destination= subnet>,gateway= > > you might also need to confiure some sequirty group rules but im not > certin on the last point. > But doesn't openstack's neutron do some mac "firewalling", > which if it does, would "brake" that wireguard iface > always/anyways, right? > I see that and more weird instance network behavior, when I > set wg iface to use IP on instance's local net, which IP > otherwise work - with allowed_address_pairs - when set as a > secondary IP to a "real" iface. > > Also, is what you suggest "admin" end or can be done by > non-admin consumer? > > many thanks, L. > > > > if you run wireguard in a vm it is basicaly becomeing a router which is > not something that we typicaly > > expect vms to do but other service like octavia do this when they deploy > loadblancers and the vpn as a service exteion similar > > did this in the past so this should be possibel with the exising api. > >> many thanks, L. 
> >> > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tburke at nvidia.com Tue Apr 5 22:46:21 2022 From: tburke at nvidia.com (Timothy Burke) Date: Tue, 5 Apr 2022 22:46:21 +0000 Subject: [swift][ptg] Ops feedback session - Apr 7 at 13:00 UTC Message-ID: As in PTGs past, we're getting devs and ops together to talk about Swift: what's working, what isn't, and what would be most helpful to improve. We're meeting in Havana (https://www.openstack.org/ptg/rooms/havana) on Apr 7 at 13:00UTC -- if you run a Swift cluster, we hope to see you there! Even if you can't make it, I'd appreciate if you can offer some feedback on this PTG's etherpad (https://etherpad.opendev.org/p/swift-zed-ops-feedback). Tim -------------- next part -------------- An HTML attachment was scrubbed... URL: From laurentfdumont at gmail.com Tue Apr 5 23:03:06 2022 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Tue, 5 Apr 2022 19:03:06 -0400 Subject: [Openstack][nova] server groups In-Reply-To: References: Message-ID: I'm trying to find where else this was discussed, but afaik, this was never supported. I am not sure if someone was able to "hack" it's way to a working setup. It's a bit of a shame because it makes server-groups really not flexible :( On Tue, Apr 5, 2022 at 3:04 PM Ignazio Cassano wrote: > Hello, we noted that instances can be inserted in server groups only at > instance creation step but we need to insert in a server group some old > instances. > > We tried to modify database nova_api server group tables but we noted that > we must modify spec in request_specs table . For us is not clear how to > modify the spec value. > We tried to investigate looking at instances inserted in a server group at > creation step and we got issues in instance live migration. > Please, anyone could provide any utility to do it or any template ? > Thanks > Ignazio > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From laurentfdumont at gmail.com Tue Apr 5 23:29:13 2022 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Tue, 5 Apr 2022 19:29:13 -0400 Subject: Metadata service {devices list} In-Reply-To: References: Message-ID: Circling back on this, I see the same in my kolla-ansible + Ussuri. {"uuid": "a916a95d-3f9b-4e5e-9f64-894470742d8b", "hostname": "test-laurent.novalocal", "name": "test-laurent", "launch_index": 0, "availability_zone": "nova", "random_seed": "EEO9FjZLP51L4DDzQV09NRlwUeuKQa+XlGEWp3nH3XKXCnM5vEwhVgw/qG2kuqLN3HZ+oQIcLsnFPwoCsr5TuYodLTTTEhHgo0xJwZ3mlY/P6Br7QWCOyXCDEIKDxxfvxXhDOKtf9OFkFNROD9GDc4vWrOeCDGtcKshf5QZLsgiIv07fQus9axsqGYosNPdOKAejRa+gVtQfxlqV0kVcVIWWOAQOQdVh/TfoBGaxc8FjSCj/9MHLUMYP/zPSj+NRU4G1AwlKHzmxxiF4LQwHCBuy6dNrG+ImpUs6nLORjlDAgovoMhwIgDVhgihel4eFoT8f2izuq42yCen7yRU7FSfJcL40IlTmdHVTJTfChS2+yP5Y5SjeNHAmO6xCRJ9CCliRXIj8hsCjD0triQi2LCMC/gvZoaLSeczSzUgmL3zEFB+9IcalUuvf4yChK2OqpGfK94YlWR/U7fivdyUMChlaUC9BilPJUkUpCPL2wHiKKQMpPVFK5sRZoFe7nnegLRrMvYKuCPbtp99VqCCm2ts/6u6dgCHZdUD+NvOYUOMBGXYcQz0DVIpF/HyAI+AuW/5HAPCw66NZfwsCfMugzGf5+ljm8zr4UU//pf0vMCVX690dGyVraB/ozuXg1rdQYF8f7iDh9v2vkR+oanuC1sY6bHV+DRMhfc/Xp+KFDCc=", "project_id": "4db7dbf9961c4fc6a4589a5cb2ae3c9a", "devices": []}# Nothing in Devices But I see it in the network endpoint # curl http://169.254.169.254/openstack/2018-08-27/network_data.json {"links": [{"id": "tap1b4490ec-76", "vif_id": "1b4490ec-76b9-4223-9a31-837d60b13cc2", "type": "ovs", "mtu": 1450, "ethernet_mac_address": "fa:16:3e:58:a4:4a"}], "networks": [{"id": "network0", "type": "ipv4_dhcp", "link": "tap1b4490ec-76", "network_id": "bef77274-defd-463c-814a-5051ea2acae0"}], "services": []}# I'll look at the code, but I am not clear where that snippet is being generated. It could be expected that the devices array is empty. On Sun, Mar 20, 2022 at 11:11 AM Ahmed Abdelhamid < ahmedabdelhamid1221 at gmail.com> wrote: > Thanks. 
It's OpenStack ussuri, deployed via kolla-ansible > > On Fri, Mar 18, 2022 at 10:06 PM Laurent Dumont > wrote: > >> That is weird. >> >> - What version of Openstack are you running? >> - How was it deployed? >> >> >> On Fri, Mar 18, 2022 at 9:30 AM Ahmed Abdelhamid < >> ahmedabdelhamid1221 at gmail.com> wrote: >> >>> Thanks, Laurent. I tried it for both CEPH-backed VMs and ones with local >>> disk, still, the devices array is empty >>> >>> Network data curl shows alright >>> >>> {"links": [{"id": "tapa.....", "vif_id": "ae....", "type": "bridge", >>> "mtu": 1500, "ethernet_mac_address": "fa:......."}], "networks": [{"id": >>> "network0", "type": "ipv4_dhcp", "link": "tap..... >>> >>> On Wed, Mar 16, 2022 at 6:39 PM Laurent Dumont >>> wrote: >>> >>>> Are you getting anything from the neutron endpoint? >>>> >>>> http://169.254.169.254/openstack/2018-08-27/network_data.json >>>> >>>> Can you provide an "openstack server show $server_id" of the VM? I >>>> wonder if it's a case of boot from a volume VM missing that data. It would >>>> not explain why the port is not there though. >>>> >>>> On Tue, Mar 15, 2022 at 9:15 AM Ahmed Abdelhamid < >>>> ahmedabdelhamid1221 at gmail.com> wrote: >>>> >>>>> Hi All, >>>>> >>>>> I am running into a strange issue with the metadata service. Per metadata-service >>>>> manual , >>>>> the devices attached to a VM should be visible in >>>>> >>>>> $ curl http://169.254.169.254/openstack/2018-08-27/meta_data.json >>>>> >>>>> However, whenever i execute it from a VM , the devices array is empty >>>>> and looks like this, any idea why ? 
>>>>> >>>>> Thanks >>>>> { "random_seed": "yu5ZnkqF2CqnDZVAfZgarG...", "availability_zone": "nova", "keys": [ { "data": "ssh-rsa AAAAB3NzaC1y...== Generated by Nova\n", "type": "ssh", "name": "mykey" } ], "hostname": "test.novalocal", "launch_index": 0, "meta": { "priority": "low", "role": "webserver" }, "devices": [ ], "project_id": "f7ac731cc11f40efbc03a9f9e1d1d21f", "public_keys": { "mykey": "ssh-rsa AAAAB3NzaC1y...== Generated by Nova\n" }, "name": "test"} >>>>> >>>>> >>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From laurentfdumont at gmail.com Tue Apr 5 23:58:19 2022 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Tue, 5 Apr 2022 19:58:19 -0400 Subject: Metadata service {devices list} In-Reply-To: References: Message-ID: Trying to trace the code that I don't fully grasp :D We start in nova/base.py https://github.com/openstack/nova/blob/stable/ussuri/nova/api/metadata/base.py We seem to generate a device array with a bunch of stuff inside if self._check_os_version(NEWTON_ONE, version): metadata['devices'] = self._get_device_metadata(version) (but only if the version is higher than NEWTON_ONE? I don't think we generate the file every single time someone calls the API, so it's probably at VM creation only?) Looking at the code for _get_device_metadata https://github.com/openstack/nova/blob/184a3c976faed38907af148a533bc6e9faa410f5/nova/api/metadata/base.py#L367 It seems we try to build an array for metadata specifically. It's not expected to be a full list of all the actual devices (ports + disks). I'll see whether, if I add a tag to a port, it then shows up. On Tue, Apr 5, 2022 at 7:29 PM Laurent Dumont wrote: > Circling back on this, I see the same in my kolla-ansible + Ussuri. 
> > {"uuid": "a916a95d-3f9b-4e5e-9f64-894470742d8b", "hostname": > "test-laurent.novalocal", "name": "test-laurent", "launch_index": 0, > "availability_zone": "nova", "random_seed": > "EEO9FjZLP51L4DDzQV09NRlwUeuKQa+XlGEWp3nH3XKXCnM5vEwhVgw/qG2kuqLN3HZ+oQIcLsnFPwoCsr5TuYodLTTTEhHgo0xJwZ3mlY/P6Br7QWCOyXCDEIKDxxfvxXhDOKtf9OFkFNROD9GDc4vWrOeCDGtcKshf5QZLsgiIv07fQus9axsqGYosNPdOKAejRa+gVtQfxlqV0kVcVIWWOAQOQdVh/TfoBGaxc8FjSCj/9MHLUMYP/zPSj+NRU4G1AwlKHzmxxiF4LQwHCBuy6dNrG+ImpUs6nLORjlDAgovoMhwIgDVhgihel4eFoT8f2izuq42yCen7yRU7FSfJcL40IlTmdHVTJTfChS2+yP5Y5SjeNHAmO6xCRJ9CCliRXIj8hsCjD0triQi2LCMC/gvZoaLSeczSzUgmL3zEFB+9IcalUuvf4yChK2OqpGfK94YlWR/U7fivdyUMChlaUC9BilPJUkUpCPL2wHiKKQMpPVFK5sRZoFe7nnegLRrMvYKuCPbtp99VqCCm2ts/6u6dgCHZdUD+NvOYUOMBGXYcQz0DVIpF/HyAI+AuW/5HAPCw66NZfwsCfMugzGf5+ljm8zr4UU//pf0vMCVX690dGyVraB/ozuXg1rdQYF8f7iDh9v2vkR+oanuC1sY6bHV+DRMhfc/Xp+KFDCc=", > "project_id": "4db7dbf9961c4fc6a4589a5cb2ae3c9a", "devices": []}# > > Nothing in Devices > > But I see it in the network endpoint > > # curl http://169.254.169.254/openstack/2018-08-27/network_data.json > {"links": [{"id": "tap1b4490ec-76", "vif_id": > "1b4490ec-76b9-4223-9a31-837d60b13cc2", "type": "ovs", "mtu": 1450, > "ethernet_mac_address": "fa:16:3e:58:a4:4a"}], "networks": [{"id": > "network0", "type": "ipv4_dhcp", "link": "tap1b4490ec-76", "network_id": > "bef77274-defd-463c-814a-5051ea2acae0"}], "services": []}# > > I'll look at the code, but I am not clear where that snippet is > being generated. It could be expected that the devices array is empty. > > On Sun, Mar 20, 2022 at 11:11 AM Ahmed Abdelhamid < > ahmedabdelhamid1221 at gmail.com> wrote: > >> Thanks. It's OpenStack ussuri, deployed via kolla-ansible >> >> On Fri, Mar 18, 2022 at 10:06 PM Laurent Dumont >> wrote: >> >>> That is weird. >>> >>> - What version of Openstack are you running? >>> - How was it deployed? 
>>> >>> >>> On Fri, Mar 18, 2022 at 9:30 AM Ahmed Abdelhamid < >>> ahmedabdelhamid1221 at gmail.com> wrote: >>> >>>> Thanks, Laurent. I tried it for both CEPH-backed VMs and ones with >>>> local disk, still, the devices array is empty >>>> >>>> Network data curl shows alright >>>> >>>> {"links": [{"id": "tapa.....", "vif_id": "ae....", "type": "bridge", >>>> "mtu": 1500, "ethernet_mac_address": "fa:......."}], "networks": [{"id": >>>> "network0", "type": "ipv4_dhcp", "link": "tap..... >>>> >>>> On Wed, Mar 16, 2022 at 6:39 PM Laurent Dumont < >>>> laurentfdumont at gmail.com> wrote: >>>> >>>>> Are you getting anything from the neutron endpoint? >>>>> >>>>> http://169.254.169.254/openstack/2018-08-27/network_data.json >>>>> >>>>> Can you provide an "openstack server show $server_id" of the VM? I >>>>> wonder if it's a case of boot from a volume VM missing that data. It would >>>>> not explain why the port is not there though. >>>>> >>>>> On Tue, Mar 15, 2022 at 9:15 AM Ahmed Abdelhamid < >>>>> ahmedabdelhamid1221 at gmail.com> wrote: >>>>> >>>>>> Hi All, >>>>>> >>>>>> I am running into a strange issue with the metadata service. Per metadata-service >>>>>> manual , >>>>>> the devices attached to a VM should be visible in >>>>>> >>>>>> $ curl http://169.254.169.254/openstack/2018-08-27/meta_data.json >>>>>> >>>>>> However, whenever i execute it from a VM , the devices array is empty >>>>>> and looks like this, any idea why ? 
>>>>>> >>>>>> Thanks >>>>>> >>>>>> { "random_seed": "yu5ZnkqF2CqnDZVAfZgarG...", "availability_zone": "nova", "keys": [ { "data": "ssh-rsa AAAAB3NzaC1y...== Generated by Nova\n", "type": "ssh", "name": "mykey" } ], "hostname": "test.novalocal", "launch_index": 0, "meta": { "priority": "low", "role": "webserver" }, "devices": [ ], "project_id": "f7ac731cc11f40efbc03a9f9e1d1d21f", "public_keys": { "mykey": "ssh-rsa AAAAB3NzaC1y...== Generated by Nova\n" }, "name": "test"} >>>>>> >>>>>> >>>>>> >>>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From laurentfdumont at gmail.com Wed Apr 6 00:37:11 2022 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Tue, 5 Apr 2022 20:37:11 -0400 Subject: Metadata service {devices list} In-Reply-To: References: Message-ID: After adding some POTATO debugs, it seems that meta_data generation does not actually happen for the instances I have tested. vif_vfs_trusted_supported = self._check_os_version(ROCKY, version) *LOG.info('POTATO -3 CHECKING IF WE HAVE DEVICE_METADATA| '+str(self.instance.device_metadata))* if self.instance.device_metadata is not None: *LOG.info('device_metadata is not None| '+str(self.instance.device_metadata))* for device in self.instance.device_metadata.devices: device_metadata = {} bus = 'none' In my logs 2022-04-06 00:33:08.541 30 INFO nova.api.metadata.base [req-15612a39-649f-4598-b588-ebcceedd49b1 - - - - -] POTATO -3 CHECKING IF WE HAVE DEVICE_METADATA| None device_metadata = None so the condition is False so it skips all that code. It seems a bit strange, since None would mean there is nothing so let's add some data? I am not clear on how self.instance.device_metadata is expected to behave. I'll have to dig a little deeper. 
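The gate above can be reduced to a toy model (illustrative only — the real nova code operates on InstanceDeviceMetadata objects persisted at boot, not plain dicts):

```python
# Toy reduction of the check in nova/api/metadata/base.py: nova only fills
# "devices" when the instance carries device_metadata, which is stored at
# boot time for devices that were *tagged* -- otherwise it stays None and
# meta_data.json ends up with "devices": [].
def get_devices(device_metadata):
    if device_metadata is None:
        # Mirrors `if self.instance.device_metadata is not None:`
        return []
    return [{"type": d.get("type", "nic"), "tags": d.get("tags", [])}
            for d in device_metadata]

print(get_devices(None))                               # []
print(get_devices([{"type": "nic", "tags": ["nic1"]}]))
```

If that reading is right, attaching a device with a tag at boot should be what flips device_metadata from None to a populated list.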
On Tue, Apr 5, 2022 at 7:58 PM Laurent Dumont wrote: > Trying to trace the code that I dont fully grasp :D > > We start in nova/base.py > > https://github.com/openstack/nova/blob/stable/ussuri/nova/api/metadata/base.py > > We seem to generate a device array with a bunch of stuff inside > > if self._check_os_version(NEWTON_ONE, version): > metadata['devices'] = self._get_device_metadata(version) > > (but only if the version is higher than NEWTON_ONE? I dont think we > generate the file every single time someone calls the API so it's probably > at VM creation only?) > > Looking at the code for _get_device_metadata > > > https://github.com/openstack/nova/blob/184a3c976faed38907af148a533bc6e9faa410f5/nova/api/metadata/base.py#L367 > > It seems we try to build an array for Metadata specifically. It's not > expected to be a full list of all the actual devices (ports + disks). > > I'll see if I add a tag to a port, does it now show up? > > On Tue, Apr 5, 2022 at 7:29 PM Laurent Dumont > wrote: > >> Circling back on this, I see the same in my kolla-ansible + Ussuri. 
>> >> {"uuid": "a916a95d-3f9b-4e5e-9f64-894470742d8b", "hostname": >> "test-laurent.novalocal", "name": "test-laurent", "launch_index": 0, >> "availability_zone": "nova", "random_seed": >> "EEO9FjZLP51L4DDzQV09NRlwUeuKQa+XlGEWp3nH3XKXCnM5vEwhVgw/qG2kuqLN3HZ+oQIcLsnFPwoCsr5TuYodLTTTEhHgo0xJwZ3mlY/P6Br7QWCOyXCDEIKDxxfvxXhDOKtf9OFkFNROD9GDc4vWrOeCDGtcKshf5QZLsgiIv07fQus9axsqGYosNPdOKAejRa+gVtQfxlqV0kVcVIWWOAQOQdVh/TfoBGaxc8FjSCj/9MHLUMYP/zPSj+NRU4G1AwlKHzmxxiF4LQwHCBuy6dNrG+ImpUs6nLORjlDAgovoMhwIgDVhgihel4eFoT8f2izuq42yCen7yRU7FSfJcL40IlTmdHVTJTfChS2+yP5Y5SjeNHAmO6xCRJ9CCliRXIj8hsCjD0triQi2LCMC/gvZoaLSeczSzUgmL3zEFB+9IcalUuvf4yChK2OqpGfK94YlWR/U7fivdyUMChlaUC9BilPJUkUpCPL2wHiKKQMpPVFK5sRZoFe7nnegLRrMvYKuCPbtp99VqCCm2ts/6u6dgCHZdUD+NvOYUOMBGXYcQz0DVIpF/HyAI+AuW/5HAPCw66NZfwsCfMugzGf5+ljm8zr4UU//pf0vMCVX690dGyVraB/ozuXg1rdQYF8f7iDh9v2vkR+oanuC1sY6bHV+DRMhfc/Xp+KFDCc=", >> "project_id": "4db7dbf9961c4fc6a4589a5cb2ae3c9a", "devices": []}# >> >> Nothing in Devices >> >> But I see it in the network endpoint >> >> # curl http://169.254.169.254/openstack/2018-08-27/network_data.json >> {"links": [{"id": "tap1b4490ec-76", "vif_id": >> "1b4490ec-76b9-4223-9a31-837d60b13cc2", "type": "ovs", "mtu": 1450, >> "ethernet_mac_address": "fa:16:3e:58:a4:4a"}], "networks": [{"id": >> "network0", "type": "ipv4_dhcp", "link": "tap1b4490ec-76", "network_id": >> "bef77274-defd-463c-814a-5051ea2acae0"}], "services": []}# >> >> I'll look at the code, but I am not clear where that snippet is >> being generated. It could be expected that the devices array is empty. >> >> On Sun, Mar 20, 2022 at 11:11 AM Ahmed Abdelhamid < >> ahmedabdelhamid1221 at gmail.com> wrote: >> >>> Thanks. It's OpenStack ussuri, deployed via kolla-ansible >>> >>> On Fri, Mar 18, 2022 at 10:06 PM Laurent Dumont < >>> laurentfdumont at gmail.com> wrote: >>> >>>> That is weird. >>>> >>>> - What version of Openstack are you running? >>>> - How was it deployed? 
>>>> >>>> >>>> On Fri, Mar 18, 2022 at 9:30 AM Ahmed Abdelhamid < >>>> ahmedabdelhamid1221 at gmail.com> wrote: >>>> >>>>> Thanks, Laurent. I tried it for both CEPH-backed VMs and ones with >>>>> local disk, still, the devices array is empty >>>>> >>>>> Network data curl shows alright >>>>> >>>>> {"links": [{"id": "tapa.....", "vif_id": "ae....", "type": "bridge", >>>>> "mtu": 1500, "ethernet_mac_address": "fa:......."}], "networks": [{"id": >>>>> "network0", "type": "ipv4_dhcp", "link": "tap..... >>>>> >>>>> On Wed, Mar 16, 2022 at 6:39 PM Laurent Dumont < >>>>> laurentfdumont at gmail.com> wrote: >>>>> >>>>>> Are you getting anything from the neutron endpoint? >>>>>> >>>>>> http://169.254.169.254/openstack/2018-08-27/network_data.json >>>>>> >>>>>> Can you provide an "openstack server show $server_id" of the VM? I >>>>>> wonder if it's a case of boot from a volume VM missing that data. It would >>>>>> not explain why the port is not there though. >>>>>> >>>>>> On Tue, Mar 15, 2022 at 9:15 AM Ahmed Abdelhamid < >>>>>> ahmedabdelhamid1221 at gmail.com> wrote: >>>>>> >>>>>>> Hi All, >>>>>>> >>>>>>> I am running into a strange issue with the metadata service. Per metadata-service >>>>>>> manual , >>>>>>> the devices attached to a VM should be visible in >>>>>>> >>>>>>> $ curl http://169.254.169.254/openstack/2018-08-27/meta_data.json >>>>>>> >>>>>>> However, whenever i execute it from a VM , the devices array is >>>>>>> empty and looks like this, any idea why ? 
>>>>>>> >>>>>>> Thanks >>>>>>> >>>>>>> { "random_seed": "yu5ZnkqF2CqnDZVAfZgarG...", "availability_zone": "nova", "keys": [ { "data": "ssh-rsa AAAAB3NzaC1y...== Generated by Nova\n", "type": "ssh", "name": "mykey" } ], "hostname": "test.novalocal", "launch_index": 0, "meta": { "priority": "low", "role": "webserver" }, "devices": [ ], "project_id": "f7ac731cc11f40efbc03a9f9e1d1d21f", "public_keys": { "mykey": "ssh-rsa AAAAB3NzaC1y...== Generated by Nova\n" }, "name": "test"} >>>>>>> >>>>>>> >>>>>>> >>>>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From laurentfdumont at gmail.com Wed Apr 6 00:44:41 2022 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Tue, 5 Apr 2022 20:44:41 -0400 Subject: Metadata service {devices list} In-Reply-To: References: Message-ID: Last spam for tonight! This seems to be the spec from Mitaka - the code with None dates from 6 years ago. https://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/virt-device-role-tagging.html I think my comprehension is wrong, but the intent is around exposing non regular devices (VF, PCI drives). I don't have a way to try that in my lab, so I'll need to check elsewhere. On Tue, Apr 5, 2022 at 8:37 PM Laurent Dumont wrote: > After adding some POTATO debugs, it seems that meta_data generation does > not actually happen for the instances I have tested. 
> > vif_vfs_trusted_supported = self._check_os_version(ROCKY, version) > *LOG.info('POTATO -3 CHECKING IF WE HAVE DEVICE_METADATA| > '+str(self.instance.device_metadata))* > if self.instance.device_metadata is not None: > *LOG.info('device_metadata is not None| > '+str(self.instance.device_metadata))* > for device in self.instance.device_metadata.devices: > device_metadata = {} > bus = 'none' > > In my logs > 2022-04-06 00:33:08.541 30 INFO nova.api.metadata.base > [req-15612a39-649f-4598-b588-ebcceedd49b1 - - - - -] POTATO -3 CHECKING IF > WE HAVE DEVICE_METADATA| None > > device_metadata = None so the condition is False so it skips all that > code. It seems a bit strange, since None would mean there is nothing so > let's add some data? > > I am not clear on how self.instance.device_metadata is expected to behave. > I'll have to dig a little deeper. > > On Tue, Apr 5, 2022 at 7:58 PM Laurent Dumont > wrote: > >> Trying to trace the code that I dont fully grasp :D >> >> We start in nova/base.py >> >> https://github.com/openstack/nova/blob/stable/ussuri/nova/api/metadata/base.py >> >> We seem to generate a device array with a bunch of stuff inside >> >> if self._check_os_version(NEWTON_ONE, version): >> metadata['devices'] = self._get_device_metadata(version) >> >> (but only if the version is higher than NEWTON_ONE? I dont think we >> generate the file every single time someone calls the API so it's probably >> at VM creation only?) >> >> Looking at the code for _get_device_metadata >> >> >> https://github.com/openstack/nova/blob/184a3c976faed38907af148a533bc6e9faa410f5/nova/api/metadata/base.py#L367 >> >> It seems we try to build an array for Metadata specifically. It's not >> expected to be a full list of all the actual devices (ports + disks). >> >> I'll see if I add a tag to a port, does it now show up? >> >> On Tue, Apr 5, 2022 at 7:29 PM Laurent Dumont >> wrote: >> >>> Circling back on this, I see the same in my kolla-ansible + Ussuri. 
>>> >>> {"uuid": "a916a95d-3f9b-4e5e-9f64-894470742d8b", "hostname": >>> "test-laurent.novalocal", "name": "test-laurent", "launch_index": 0, >>> "availability_zone": "nova", "random_seed": >>> "EEO9FjZLP51L4DDzQV09NRlwUeuKQa+XlGEWp3nH3XKXCnM5vEwhVgw/qG2kuqLN3HZ+oQIcLsnFPwoCsr5TuYodLTTTEhHgo0xJwZ3mlY/P6Br7QWCOyXCDEIKDxxfvxXhDOKtf9OFkFNROD9GDc4vWrOeCDGtcKshf5QZLsgiIv07fQus9axsqGYosNPdOKAejRa+gVtQfxlqV0kVcVIWWOAQOQdVh/TfoBGaxc8FjSCj/9MHLUMYP/zPSj+NRU4G1AwlKHzmxxiF4LQwHCBuy6dNrG+ImpUs6nLORjlDAgovoMhwIgDVhgihel4eFoT8f2izuq42yCen7yRU7FSfJcL40IlTmdHVTJTfChS2+yP5Y5SjeNHAmO6xCRJ9CCliRXIj8hsCjD0triQi2LCMC/gvZoaLSeczSzUgmL3zEFB+9IcalUuvf4yChK2OqpGfK94YlWR/U7fivdyUMChlaUC9BilPJUkUpCPL2wHiKKQMpPVFK5sRZoFe7nnegLRrMvYKuCPbtp99VqCCm2ts/6u6dgCHZdUD+NvOYUOMBGXYcQz0DVIpF/HyAI+AuW/5HAPCw66NZfwsCfMugzGf5+ljm8zr4UU//pf0vMCVX690dGyVraB/ozuXg1rdQYF8f7iDh9v2vkR+oanuC1sY6bHV+DRMhfc/Xp+KFDCc=", >>> "project_id": "4db7dbf9961c4fc6a4589a5cb2ae3c9a", "devices": []}# >>> >>> Nothing in Devices >>> >>> But I see it in the network endpoint >>> >>> # curl http://169.254.169.254/openstack/2018-08-27/network_data.json >>> {"links": [{"id": "tap1b4490ec-76", "vif_id": >>> "1b4490ec-76b9-4223-9a31-837d60b13cc2", "type": "ovs", "mtu": 1450, >>> "ethernet_mac_address": "fa:16:3e:58:a4:4a"}], "networks": [{"id": >>> "network0", "type": "ipv4_dhcp", "link": "tap1b4490ec-76", "network_id": >>> "bef77274-defd-463c-814a-5051ea2acae0"}], "services": []}# >>> >>> I'll look at the code, but I am not clear where that snippet is >>> being generated. It could be expected that the devices array is empty. >>> >>> On Sun, Mar 20, 2022 at 11:11 AM Ahmed Abdelhamid < >>> ahmedabdelhamid1221 at gmail.com> wrote: >>> >>>> Thanks. It's OpenStack ussuri, deployed via kolla-ansible >>>> >>>> On Fri, Mar 18, 2022 at 10:06 PM Laurent Dumont < >>>> laurentfdumont at gmail.com> wrote: >>>> >>>>> That is weird. >>>>> >>>>> - What version of Openstack are you running? >>>>> - How was it deployed? 
>>>>> >>>>> >>>>> On Fri, Mar 18, 2022 at 9:30 AM Ahmed Abdelhamid < >>>>> ahmedabdelhamid1221 at gmail.com> wrote: >>>>> >>>>>> Thanks, Laurent. I tried it for both CEPH-backed VMs and ones with >>>>>> local disk, still, the devices array is empty >>>>>> >>>>>> Network data curl shows alright >>>>>> >>>>>> {"links": [{"id": "tapa.....", "vif_id": "ae....", "type": "bridge", >>>>>> "mtu": 1500, "ethernet_mac_address": "fa:......."}], "networks": [{"id": >>>>>> "network0", "type": "ipv4_dhcp", "link": "tap..... >>>>>> >>>>>> On Wed, Mar 16, 2022 at 6:39 PM Laurent Dumont < >>>>>> laurentfdumont at gmail.com> wrote: >>>>>> >>>>>>> Are you getting anything from the neutron endpoint? >>>>>>> >>>>>>> http://169.254.169.254/openstack/2018-08-27/network_data.json >>>>>>> >>>>>>> Can you provide an "openstack server show $server_id" of the VM? I >>>>>>> wonder if it's a case of boot from a volume VM missing that data. It would >>>>>>> not explain why the port is not there though. >>>>>>> >>>>>>> On Tue, Mar 15, 2022 at 9:15 AM Ahmed Abdelhamid < >>>>>>> ahmedabdelhamid1221 at gmail.com> wrote: >>>>>>> >>>>>>>> Hi All, >>>>>>>> >>>>>>>> I am running into a strange issue with the metadata service. Per metadata-service >>>>>>>> manual , >>>>>>>> the devices attached to a VM should be visible in >>>>>>>> >>>>>>>> $ curl http://169.254.169.254/openstack/2018-08-27/meta_data.json >>>>>>>> >>>>>>>> However, whenever i execute it from a VM , the devices array is >>>>>>>> empty and looks like this, any idea why ? 
>>>>>>>> >>>>>>>> Thanks >>>>>>>> { "random_seed": "yu5ZnkqF2CqnDZVAfZgarG...", "availability_zone": "nova", "keys": [ { "data": "ssh-rsa AAAAB3NzaC1y...== Generated by Nova\n", "type": "ssh", "name": "mykey" } ], "hostname": "test.novalocal", "launch_index": 0, "meta": { "priority": "low", "role": "webserver" }, "devices": [ ], "project_id": "f7ac731cc11f40efbc03a9f9e1d1d21f", "public_keys": { "mykey": "ssh-rsa AAAAB3NzaC1y...== Generated by Nova\n" }, "name": "test"} >>>>>>>> >>>>>>>> >>>>>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From ueha.ayumu at fujitsu.com Wed Apr 6 02:57:07 2022 From: ueha.ayumu at fujitsu.com (ueha.ayumu at fujitsu.com) Date: Wed, 6 Apr 2022 02:57:07 +0000 Subject: [tc][tacker][heat-translator] Discussion about heat-translator maintenance Message-ID: Hi Bob, I'm Ayumu Ueha; I work as a core of Tacker. Previously, some members of the Tacker team participated as cores of heat-translator. Since LiangLu has left the Tacker project, I would like to join the heat-translator core team from the Tacker team in his place and maintain it. Is that OK? This was agreed within the Tacker team at the Zed vPTG. >heat-translator >- yoshito-ito (yoshito.itou.dr at hco.ntt.co.jp) >- LiangLu (lu.liang at jp.fujitsu.com) *** change to ueha (ueha.ayumu at fujitsu.com) *** Best regards, Ueha -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Wed Apr 6 04:34:12 2022 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Wed, 6 Apr 2022 06:34:12 +0200 Subject: [Openstack][nova] server groups In-Reply-To: References: Message-ID: Thanks Laurent. Sometimes the trick works and instances can migrate. We do not understand what is wrong when instances fail to migrate. We are using the soft-anti-affinity policy. The spec field we inserted in both cases seems the same. Ignazio. 
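For reference, the only supported path is to attach an instance to a group at boot time via a scheduler hint — a rough sketch (image, flavor, and names are placeholders):

```shell
openstack server group create --policy soft-anti-affinity my-group
GROUP_ID=$(openstack server group show my-group -f value -c id)
openstack server create --image cirros --flavor m1.small \
    --hint group="$GROUP_ID" vm-in-group
```

There is no API to add an existing instance afterwards, which is why editing request_specs directly in the database is an unsupported hack that the scheduler may or may not honor.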
Il Mer 6 Apr 2022, 01:03 Laurent Dumont ha scritto: > I'm trying to find where else this was discussed, but afaik, this was > never supported. > > I am not sure if someone was able to "hack" it's way to a working setup. > It's a bit of a shame because it makes server-groups really not flexible :( > > On Tue, Apr 5, 2022 at 3:04 PM Ignazio Cassano > wrote: > >> Hello, we noted that instances can be inserted in server groups only at >> instance creation step but we need to insert in a server group some old >> instances. >> >> We tried to modify database nova_api server group tables but we noted >> that we must modify spec in request_specs table . For us is not clear how >> to modify the spec value. >> We tried to investigate looking at instances inserted in a server group >> at creation step and we got issues in instance live migration. >> Please, anyone could provide any utility to do it or any template ? >> Thanks >> Ignazio >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From laurentfdumont at gmail.com Wed Apr 6 04:53:46 2022 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Wed, 6 Apr 2022 00:53:46 -0400 Subject: [Openstack][nova] server groups In-Reply-To: References: Message-ID: I cannot easily reproduce, but what does Nova complain about with the live migration? Any chance you can run it with DEBUG? On Wed, Apr 6, 2022 at 12:34 AM Ignazio Cassano wrote: > Thanks Laurent. Sometimes the trick works and instances can migrate. We do > not understand what is wrong when instances fail to migrate. > We are usung soft-anti-affinity policy. > The spec field we inserted in both cases seeems the same. > Ignazio. > > Il Mer 6 Apr 2022, 01:03 Laurent Dumont ha > scritto: > >> I'm trying to find where else this was discussed, but afaik, this was >> never supported. >> >> I am not sure if someone was able to "hack" it's way to a working setup. 
>> It's a bit of a shame because it makes server-groups really not flexible :( >> >> On Tue, Apr 5, 2022 at 3:04 PM Ignazio Cassano >> wrote: >> >>> Hello, we noted that instances can be inserted in server groups only at >>> instance creation step but we need to insert in a server group some old >>> instances. >>> >>> We tried to modify database nova_api server group tables but we noted >>> that we must modify spec in request_specs table . For us is not clear how >>> to modify the spec value. >>> We tried to investigate looking at instances inserted in a server group >>> at creation step and we got issues in instance live migration. >>> Please, anyone could provide any utility to do it or any template ? >>> Thanks >>> Ignazio >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Wed Apr 6 07:05:10 2022 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Wed, 6 Apr 2022 09:05:10 +0200 Subject: [Openstack][nova] server groups In-Reply-To: References: Message-ID: We've 2 cases, those VMs are in the same group, one fails when migration is launched, the other working well, you can see output of nova live migration and spec on DB not working: https://paste.openstack.org/show/b4QfkVHkUpIC97E3aWAx/ working: https://paste.openstack.org/show/busPt39bkfUzQthk1Tcf/ Ignazio Il Mer 6 Apr 2022, 06:53 Laurent Dumont ha scritto: > I cannot easily reproduce, but what does Nova complain about with the live > migration? Any chance you can run it with DEBUG? > > On Wed, Apr 6, 2022 at 12:34 AM Ignazio Cassano > wrote: > >> Thanks Laurent. Sometimes the trick works and instances can migrate. We >> do not understand what is wrong when instances fail to migrate. >> We are usung soft-anti-affinity policy. >> The spec field we inserted in both cases seeems the same. >> Ignazio. 
>> >> Il Mer 6 Apr 2022, 01:03 Laurent Dumont ha >> scritto: >> >>> I'm trying to find where else this was discussed, but afaik, this was >>> never supported. >>> >>> I am not sure if someone was able to "hack" it's way to a working setup. >>> It's a bit of a shame because it makes server-groups really not flexible :( >>> >>> On Tue, Apr 5, 2022 at 3:04 PM Ignazio Cassano >>> wrote: >>> >>>> Hello, we noted that instances can be inserted in server groups only at >>>> instance creation step but we need to insert in a server group some old >>>> instances. >>>> >>>> We tried to modify database nova_api server group tables but we noted >>>> that we must modify spec in request_specs table . For us is not clear how >>>> to modify the spec value. >>>> We tried to investigate looking at instances inserted in a server group >>>> at creation step and we got issues in instance live migration. >>>> Please, anyone could provide any utility to do it or any template ? >>>> Thanks >>>> Ignazio >>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Wed Apr 6 08:12:08 2022 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Wed, 6 Apr 2022 10:12:08 +0200 Subject: [dev][infra][qa][tact-sig] Zuul behavior change with Depends-On across queues In-Reply-To: <20220405194612.nt3eu2k5nwsufosx@yuggoth.org> References: <20220405194612.nt3eu2k5nwsufosx@yuggoth.org> Message-ID: Hi fungi, a question - is it possible to configure Zuul so that this "clean check" requirement is per-project rather than per-tenant? -yoctozepto On Tue, 5 Apr 2022 at 21:46, Jeremy Stanley wrote: > > For those who haven't seen the more detailed announcement[*] about > it, just a quick note that if you get a sudden -2 back from Zuul > when approving a change with a Depends-On to a change in a different > project which hasn't merged yet, that's likely an indication those > projects don't share a dependent queue. 
It's not a bug, but an > intentional clarification of Zuul's enqueuing behavior. > > For most other Zuul deployments (and even our other Zuul tenants in > OpenDev) this is purely cosmetic, but since OpenStack's Zuul tenant > is configured to require a positive Verified vote before enqueuing > into the gate pipeline, it means some changes may end up > unexpectedly needing another pass through check first. It's worth > re-evaluating whether or not this "clean check" rule remains a > useful requirement for gating. It was added some years ago because a > number of gate breaking bugs were traced back to unstable changes > being rechecked enough times that eventually they got lucky and were > able to merge, and then their instability contributed to > destabilizing the integrated gate as a whole. Similarly, changes > were being approved without reviewers confirming their jobs were > passing first, and this led to additional resource waste. > > There is a bit of discussion around "blind rechecks" at the PTG this > week, and so this topic is related; it might be a good idea to > consider it in conjunction with the greater recheck conversation. > > [*] http://lists.opendev.org/pipermail/service-announce/2022-April/000033.html > -- > Jeremy Stanley From hberaud at redhat.com Wed Apr 6 09:16:22 2022 From: hberaud at redhat.com (Herve Beraud) Date: Wed, 6 Apr 2022 11:16:22 +0200 Subject: [Oslo] IRC meeting. In-Reply-To: References: Message-ID: Hey, During the previous series our activity significantly decreased so a meeting by week was a bit overkill, however I'm not against your proposition, so WFM. Cheers Le mar. 5 avr. 2022 ? 19:31, Daniel Mats Niklas Bengtsson a ?crit : > Hi there, > > I would like to know if you agree that we have the meeting once a > week? Instead of the first and third Monday of the month. 
It will be > easier to manage and if sometimes it is canceled it does not matter > > Even if the meetings each week are short, this way we will have > regular follow-up. > > > -- Herv? Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at ya.ru Wed Apr 6 11:19:15 2022 From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov) Date: Wed, 06 Apr 2022 14:19:15 +0300 Subject: =?utf-8?B?4oCLW29wZW5zdGFjay1hbnNpYmxlXVtQVEddIFNlc3Npb24g4oCLb24g4oCLNnRoIG9mIEFwcmlsIGlz?= =?utf-8?B?IGNhbmNlbGxlZA==?= Message-ID: <2187111649243800@mail.yandex.ru> An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Apr 6 11:57:43 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 6 Apr 2022 11:57:43 +0000 Subject: [dev][infra][qa][tact-sig] Zuul behavior change with Depends-On across queues In-Reply-To: References: <20220405194612.nt3eu2k5nwsufosx@yuggoth.org> Message-ID: <20220406115742.ps4r2wtll4h5diu5@yuggoth.org> On 2022-04-06 10:12:08 +0200 (+0200), Rados?aw Piliszek wrote: > a question - is it possible to configure Zuul so that this "clean > check" requirement is per-project rather than per-tenant? [...] It's part of the gate pipeline's approval requirements here: https://opendev.org/openstack/project-config/src/commit/551b915b8a17cec720ff5959e98456ddcab441a4/zuul.d/pipelines.yaml#L78-L79 I think it could be accomplished by having two gate pipelines, using one for projects which want it but another for projects which don't. Note that any projects sharing a queue would have to rely on the same gate pipeline, and a project could only use one gate or the other (not both). By extension, this also means bifurcation of any project-templates which include a gate pipeline set, so it might result in a lot of duplication for standard templates. 
For consistency's sake though, it would probably be best for the tenant to have only a single gate pipeline. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From marc-antoine.godde at viarezo.fr Wed Apr 6 12:11:27 2022 From: marc-antoine.godde at viarezo.fr (Marc-Antoine Godde) Date: Wed, 6 Apr 2022 14:11:27 +0200 Subject: Upgrading Openstack nodes In-Reply-To: <1663331649088955@mail.yandex.ru> References: <9683CBBC-AC0D-4FAF-BF29-30FEC3CD18D1@viarezo.fr> <1663331649088955@mail.yandex.ru> Message-ID: Hello, Thanks for these information. Indeed, your link gives us everything. I have to say that sometimes it?s a pain to find what you want in the documentation. Best, Marc-Antoine Godde > Le 4 avr. 2022 ? 18:19, Dmitriy Rabotyagov a ?crit : > > - ??? > > Hey there. > > We have pretty decent documentation on the topic [1]. Also please, never-ever upgrade computes before upgrading controllers, or at least one of them, as repo container is being used for wheels build and it has to be distro-specific. When there's no repo container of required operating system, roles rollback to behavour of not building wheels, which results in cloning repos from OpenDev for each compute independently. > At some scale, this leads to infrastructure "DDoS" which would be great to avoid:) > > Also feel free to join us in IRC #openstack-ansible channel on OFTC network for futher questions. > > > [1] https://docs.openstack.org/openstack-ansible/latest/admin/upgrades/distribution-upgrades.html > > 04.04.2022, 12:18, "Marc-Antoine Godde" : > Hello, > > We are running an Openstack cloud composed of 3 controller nodes and 4 compute nodes. Our deployment was realized with OpenStack-ansible and we are running OpenStack Ussuri on Ubuntu 18.04. Our plan is to upgrade nodes to Ubuntu 20.04, that way we would be able to update to OpenStack Victoria and further. 
> > We would like to withdraw each node from the cluster, reinstall a clean linux and redeploy the nodes. There is garbage remaining from previous upgrades. We figured out the way in the documentation to remove a compute node from the cluster with Openstack-ansible but we can't find any related documentation for controller nodes. > > Any help would be very much appreciated. By the way, if you'd have any other suggestions on how to perform that upgrade, feel free to help. > > Best, > Marc-Antoine Godde > > > > -- > Kind Regards, > Dmitriy Rabotyagov > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amonster369 at gmail.com Wed Apr 6 12:14:56 2022 From: amonster369 at gmail.com (A Monster) Date: Wed, 6 Apr 2022 13:14:56 +0100 Subject: [neutron] exposing ip address of external Network from within the virtual machine Message-ID: I have two networks: one internal to OpenStack, "internal_network" (10.10.10.0/24), and an external network "public" which is connected to an external network 192.168.100.0/24. In order to connect instances to the "public" network, I created a router whose gateway is the public network and which is also connected to internal_network, and I used floating IPs to access VM instances from the external (public) network. The problem I have encountered now is that I want to expose an IP from the external network inside the VM instance, but when I try to directly attach an interface from the external network to the instance, I don't get an IP address inside the instance, and even if I assign it manually it still doesn't work. I can't enable DHCP on the public network because it already has its own external DHCP server. How can I solve this problem? -------------- next part -------------- An HTML attachment was scrubbed...
URL: From gmann at ghanshyammann.com Wed Apr 6 13:06:29 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 06 Apr 2022 08:06:29 -0500 Subject: [tc][tacker][heat-translator] Discusssion about heat-translater maintenance In-Reply-To: References: Message-ID: <17ffefbcf2c.d17659c3342433.5807614666776334157@ghanshyammann.com> ---- On Tue, 05 Apr 2022 21:57:07 -0500 wrote ---- > > Hi Bob, > > I?m Ayumu Ueha, I work as a core of Tacker. > Previously, some member of Tacker team participated to the core of heat-translator. > Since LiangLu has left the Tacker project, I would like to participate the core of heat-translator from the Tacker team instead of him and maintain it. Is it OK? > This is agreed within the Tacker team at the Zed vPTG. +100, that will be a great help. Rico as the current PTL of Heat can check and do the needful, let's wait for him to reply. Thanks for the help. -gmann > > >heat-translator > >- yoshito-ito (yoshito.itou.dr at hco.ntt.co.jp) > >- LiangLu (lu.liang at jp.fujitsu.com) *** change to ueha (ueha.ayumu at fujitsu.com) *** > > Best regards, > Ueha > From zaitcev at redhat.com Wed Apr 6 13:27:00 2022 From: zaitcev at redhat.com (Pete Zaitcev) Date: Wed, 6 Apr 2022 08:27:00 -0500 Subject: [swift][ptg] Ops feedback session - Apr 7 at 13:00 UTC In-Reply-To: References: Message-ID: <20220406082700.5a40124a@niphredil.zaitcev.lan> On Tue, 5 Apr 2022 22:46:21 +0000 Timothy Burke wrote: > As in PTGs past, we're getting devs and ops together to talk about Swift: what's working, what isn't, and what would be most helpful to improve. We're meeting in Havana (https://www.openstack.org/ptg/rooms/havana) on Apr 7 at 13:00UTC -- if you run a Swift cluster, we hope to see you there! Even if you can't make it, I'd appreciate if you can offer some feedback on this PTG's etherpad (https://etherpad.opendev.org/p/swift-zed-ops-feedback). Swift was supposed to be in Mitaka on April 7. Did we get reassigned? 
-- Pete From thierry at openstack.org Wed Apr 6 13:31:11 2022 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 6 Apr 2022 15:31:11 +0200 Subject: [ptg][ptl][largescale-sig][all] PTG session: "The Scaling Journey" - Wednesday, April 6 15utc - kilo room In-Reply-To: References: Message-ID: <38890997-9392-a0d7-f217-9eef31ecc2e9@openstack.org> Session today at 15UTC -- join if you can! Belmiro Moreira wrote: > Hi, > > the Large Scale SIG is organizing a PTG session to discuss "The Scaling > Journey". > > > The SIG worked in a "scaling journey" to guide and help operators to > scale their OpenStack deployments. > > Definitely, there are different ways to scale OpenStack! and the > challenges to move from a few hundreds cores to thousands or now > millions cores are completely different. > > > Based on the experience of several operators, we tried to answer > different common questions and identify the pain points. > > https://wiki.openstack.org/wiki/Large_Scale_SIG > > > > If you are interested in scalability it would be great to have your > feedback. > > Also, it's important that PTLs join because they can give the project > vision for scalability and advise on how to overcome possible bottlenecks. > > > To discuss all of this we will have a "zoom" session on Wednesday, April > 6 15utc - "kilo room". > > > See you there! > > > cheers, > > Belmiro > > on behalf of the Large Scale SIG From fungi at yuggoth.org Wed Apr 6 13:35:44 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 6 Apr 2022 13:35:44 +0000 Subject: [swift][ptg] Ops feedback session - Apr 7 at 13:00 UTC In-Reply-To: <20220406082700.5a40124a@niphredil.zaitcev.lan> References: <20220406082700.5a40124a@niphredil.zaitcev.lan> Message-ID: <20220406133543.qz5ue4cz25s4fxgo@yuggoth.org> On 2022-04-06 08:27:00 -0500 (-0500), Pete Zaitcev wrote: [...] > Swift was supposed to be in Mitaka on April 7. Did we get reassigned? 
Looking at the current schedule[*] (always subject to change), Swift was/is in Mitaka on Tuesday and Wednesday but in Havana on Thursday. The OpenStack Technical Committee was/is in Mitaka on Monday, Thursday and Friday. It's possible one or the other got reshuffled at some point for consistency due to competition for the Thursday timeslot. [*] https://ptg.opendev.org/ptg.html -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From a.yeremko at connectria.com Wed Apr 6 14:50:25 2022 From: a.yeremko at connectria.com (Alexander Yeremko) Date: Wed, 6 Apr 2022 14:50:25 +0000 Subject: [openstack-ansible] Re: plain text config parameters encryption feature In-Reply-To: <1632421649083792@mail.yandex.ru> References: <1632421649083792@mail.yandex.ru> Message-ID: Hi Dmitry, Thank you for your feedback. It seems my first email was lost, but it's good that Kelsi's letter found you. To clarify a couple of things I shared in my initial email. After the first patch, we fixed comments that were provided to PatchSet #1. And after that, we shared the second patch with fixes. Just to confirm, according to your comments for the second patch, we will need to re-work the logic of the encryption mechanism according to the comment to 'files/encypt_secrets.py' script that arose at the second patchset (PatchSet #2) dated Nov/30/2021/ a comment is by Dmitry Rabotyagov: "We _really_ should make it as an ansible plugin and re-work logic". Is that correct? And one more question. Did I understand you correctly that if we re-work the logic of the encryption mechanism, you might have some options to make backports available for older versions that currently are closed for commits? Dmitry, thank you very much for your efforts. I am looking forward to these confirmations from your side to move forward. 
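The idempotency objection behind that review comment (spelled out in the reply forwarded below: the proposed script would store a fresh Vault secret on every role run) reduces to a write-only-when-absent guard. A minimal sketch of the idea, with hypothetical names unrelated to the actual role code or Vault API:

```python
def ensure_secret(store, path, value):
    """Write `value` at `path` only if it is not already there.

    Mirrors Ansible's changed/ok contract: running twice with the
    same inputs must be a no-op.  `store` stands in for any
    dict-like secret backend.
    """
    if store.get(path) == value:
        return False  # already in the desired state -> "ok"
    store[path] = value
    return True       # state changed -> "changed"

backend = {}
print(ensure_secret(backend, "glance/db_password", "s3cret"))  # True: first run writes
print(ensure_secret(backend, "glance/db_password", "s3cret"))  # False: second run is a no-op
```

Packaged as an Ansible module rather than a per-role script, the same check is also what lets a play report `changed` versus `ok` correctly.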
Best regards and wishes, Alex Yeremko ________________________________ From: Dmitriy Rabotyagov Sent: Monday, April 4, 2022 6:15 PM To: Kelsi Parenteau ; openstack-discuss at lists.openstack.org Cc: Tina Wisbiski ; Yuliia Romanova ; Alexander Yeremko Subject: Re: plain text config parameters encryption feature [EXTERNAL] This email came from an external sender Hi there. Sorry, I totally missed that email, since we usually use tags to address specific teams, so please, use "[${PROJECT}]" in topic if you address a ML to specific group in future:) 1. There bunch of issues with code proposed, actually, which have been commented: [1] and neither of them were reflected in any way since 10 December. Gerrit Code-Review [2] system is a point where proposed code is being reviewed by Core Reviewers. Which it has been done in quite timely manner if you reffer to timestaps in patch of topic. Why I said about ansible module, because current proposed solution is not idempotent and is hard to maintain. As if you want to fix or change smth in script that manages vault tokens, you will need to edit it in every role that uses it, which is really hard to manage.On the contrary ansible module is being managed from single place, so you just call it from role and don't need to do duplicate code for each role. Also, current solution would create a new vault secret each time role runs even when secret already has been stored which is not idempotent way. Not saying about other 8 comments and that patches were never passing CI. So from my perspective solution requires some effort before it can be considered as ready one. And are we quite picky when it comes to code quality that we merge. 2. According to OpenStack Releases guidelines [3], new features are not eligible for being backported. Also branches you;re mentioning are under Extended Maintenance which means only security patching is generally provided for them. However, OpenStack-Ansible is flexible enough. 
So you should be able to deploy older OpenStack code with recent roles. We define SHA for services that are being deployed by OSA using SHAs [4], so technically it should be possible to use Yoga version of OpenStack-Ansible and override OpenStack version to Stein to get stein version of OpenStack services deployed. It could be quite tricky in practice though, since we could drop some required variables that are now deprecated, but in most cases it can be fixed trivially. So what I'm saying that technically there's a way to use your code from master for older versions. As Jonathan mentioned, we're quite open for communication in #opnestack-ansible channel on IRC. [1] https://review.opendev.org/c/openstack/openstack-ansible-os_glance/+/814865 [2] https://review.opendev.org/ [3] https://docs.openstack.org/project-team-guide/stable-branches.html#maintained [4] https://opendev.org/openstack/openstack-ansible/src/branch/master/playbooks/defaults/repo_packages/openstack_services.yml 04.04.2022, 17:33, "Kelsi Parenteau" : Good morning Openstack, I hope this message finds you well. I wanted to follow up from Alex's last email below to help to clarify our questions here. We're reaching out to ask your reviewers for their feedback on what had changed on your side during our course of work. https://review.opendev.org/c/openstack/openstack-ansible-os_glance/+/814865 We had been working with your team over many months, and had been tracking to commit the code upstream. We were not sure why the Openstack reviewers had not brought up this potential concern for us earlier on in our discussions to be addressed. Can you please advise us why that particular comment regarding the requirement for this to be an ansible plugin stops us from being able to commit the code? We look forward to your feedback here, and would be happy to schedule a call as well to talk this through. Please let us know if you have any questions. 
Thank you, Kelsi Parenteau, PMP, PMI-ACP, CSM Senior Project Manager d: 586.473.1230 I m: 313.404.3214 ________________________________ From: Alexander Yeremko > Sent: Tuesday, March 29, 2022 4:10 PM To: openstack-discuss at lists.openstack.org > Cc: Tina Wisbiski >; Kelsi Parenteau >; Yuliia Romanova > Subject: plain text config parameters encryption feature Dear OpenStack community, we are developing plain text config secrets encryption feature according to the next specification: https://specs.openstack.org/openstack/openstack-ansible-specs/specs/xena/protecting-plaintext-configs.html We started from Glance OS service and submitted two patchsets already: https://review.opendev.org/c/openstack/openstack-ansible-os_glance/+/814865 Now we have two questions that we need to clarify to proceed our work on that feature and finish our development: 1. Is it correct that we need to develop more patchsets to rework some logic of encryption mechanism according to comment to 'files/encypt_secrets.py' script that arised at the second patchset (PatchSet 2) dated Nov/30/2021 ? Comment is by Dmitry Rabotyagov: "We _really_ should make it as an ansible plugin and re-work logic" 2. We wish to have such feature in previous releases also, not just in upcoming Yoga or Zed. Stein, Train and Victoria - it would be excellent to have plain text secrets encryption with these releases also. So question is how is it possible to use our feature in those releases also? Can we push some backports to those releases openstack-ansible repo? Could someone be so kind and give us answers? Best regards and wishes, Alex Yeremko This E-Mail (including any attachments) may contain privileged or confidential information. It is intended only for the addressee(s) indicated above. The sender does not waive any of its rights, privileges or other protections respecting this information. 
Any distribution, copying or other use of this E-Mail or the information it contains, by other than an intended recipient, is not sanctioned and is prohibited. If you received this E-Mail in error, please delete it and advise the sender (by return E-Mail or otherwise) immediately. Any calls held by you with Connectria may be recorded by an automated note taking system to ensure prompt follow up and for information collection purposes, and your attendance on any calls with Connectria confirms your consent to this. Any E-mail received by or sent from Connectria is subject to review by Connectria supervisory personnel. -- Kind Regards, Dmitriy Rabotyagov -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at ya.ru Wed Apr 6 15:07:11 2022 From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov) Date: Wed, 06 Apr 2022 18:07:11 +0300 Subject: [openstack-ansible] Re: plain text config parameters encryption feature In-Reply-To: References: <1632421649083792@mail.yandex.ru> Message-ID: <1978721649256898@mail.yandex.ru> An HTML attachment was scrubbed... URL: From dsmigiel at redhat.com Wed Apr 6 15:55:30 2022 From: dsmigiel at redhat.com (Dariusz Smigiel) Date: Wed, 6 Apr 2022 08:55:30 -0700 Subject: [TripleO] Gate blockers - C8 Wallaby & C9 Master|Wallaby In-Reply-To: References: <12fb1f58-2ad8-4763-9761-cd6f9eabb729@www.fastmail.com> Message-ID: The issue seems to be resolved by now. We were dealing with outdated pypi cache in rdoproject. After flushing it, we're finally picking up goot content. Thanks, Dariusz On Tue, Apr 5, 2022 at 2:19 PM Dariusz Smigiel wrote: > > > > * C8 Wallaby https://bugs.launchpad.net/tripleo/+bug/1967943 > > > * C9 Master|Wallaby: https://bugs.launchpad.net/tripleo/+bug/1967945 > > > > > > Please withhold rechecking until further notice. 
> > I think both of these issues are due to the problem where PyPI's CDN will fallback to the backup backend, and that backup backend is stale without newer package releases. OpenStack notices because constraints require specific versions that cannot be satisfied in these situations. Most other users of PyPI end up getting old versions. > > Clark, I think you hit the nail on the head here: > https://status.python.org/incidents/mxgkk3xxr9v7?u=v8pzlr5n28h8 > > Thanks for that explanation. > I learned a new thing with our infra. > > Dariusz From ignaziocassano at gmail.com Wed Apr 6 16:14:56 2022 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Wed, 6 Apr 2022 18:14:56 +0200 Subject: [Openstack][nova] server groups In-Reply-To: References: Message-ID: We solved our issue: the spec field in request_specs table was wrong. Now live migration works fine. Ignazio Il Mer 6 Apr 2022, 09:05 Ignazio Cassano ha scritto: > We've 2 cases, those VMs are in the same group, one fails when migration > is launched, the other working well, you can see output of nova live > migration and spec on DB > > not working: > https://paste.openstack.org/show/b4QfkVHkUpIC97E3aWAx/ > working: > https://paste.openstack.org/show/busPt39bkfUzQthk1Tcf/ > > Ignazio > > Il Mer 6 Apr 2022, 06:53 Laurent Dumont ha > scritto: > >> I cannot easily reproduce, but what does Nova complain about with the >> live migration? Any chance you can run it with DEBUG? >> >> On Wed, Apr 6, 2022 at 12:34 AM Ignazio Cassano >> wrote: >> >>> Thanks Laurent. Sometimes the trick works and instances can migrate. We >>> do not understand what is wrong when instances fail to migrate. >>> We are using soft-anti-affinity policy. >>> The spec field we inserted in both cases seems the same. >>> Ignazio. >>> >>> Il Mer 6 Apr 2022, 01:03 Laurent Dumont ha >>> scritto: >>> >>>> I'm trying to find where else this was discussed, but afaik, this was >>>> never supported.
>>>> >>>> I am not sure if someone was able to "hack" it's way to a working >>>> setup. It's a bit of a shame because it makes server-groups really not >>>> flexible :( >>>> >>>> On Tue, Apr 5, 2022 at 3:04 PM Ignazio Cassano < >>>> ignaziocassano at gmail.com> wrote: >>>> >>>>> Hello, we noted that instances can be inserted in server groups only >>>>> at instance creation step but we need to insert in a server group some old >>>>> instances. >>>>> >>>>> We tried to modify database nova_api server group tables but we noted >>>>> that we must modify spec in request_specs table . For us is not clear how >>>>> to modify the spec value. >>>>> We tried to investigate looking at instances inserted in a server >>>>> group at creation step and we got issues in instance live migration. >>>>> Please, anyone could provide any utility to do it or any template ? >>>>> Thanks >>>>> Ignazio >>>>> >>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From adivya1.singh at gmail.com Wed Apr 6 16:27:00 2022 From: adivya1.singh at gmail.com (Adivya Singh) Date: Wed, 6 Apr 2022 21:57:00 +0530 Subject: Wireguard setup between a server and a aio Message-ID: Hi Team, I'm trying to set up a WireGuard tunnel between a web server and an AIO server with a public IP. I have created a tunnel interface between the two servers, with the Endpoint set to the AIO's public IP range, but I am still unable to reach the public IP of the AIO. Note: please note that IT has not allowed the public IP range for the AIO. Regards Adivya Singh -------------- next part -------------- An HTML attachment was scrubbed...
URL: From immaculateatim56 at gmail.com Wed Apr 6 21:01:18 2022 From: immaculateatim56 at gmail.com (ATIM IMMACULATE) Date: Thu, 7 Apr 2022 00:01:18 +0300 Subject: Requests to solve Unhelpful error message when neutron server is unavailable Message-ID: I am Immaculate Atim, an Outreachy applicant interested in working on OpenStack project #1, "Add missing CLI support for some Glance API in OpenStack Client". I would like to work on the issue "Unhelpful error message when neutron server is unavailable". I would like to submit a patch for the stable/ocata branch that fixes this issue, if it is decided that this is the right way to fix it. I commented asking to work on it but did not get any feedback, so I would be glad to receive feedback on this. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bob.haddleton at nokia.com Wed Apr 6 21:20:43 2022 From: bob.haddleton at nokia.com (HADDLETON, Robert W (Bob)) Date: Wed, 6 Apr 2022 16:20:43 -0500 Subject: [tc][tacker][heat-translator] Discusssion about heat-translater maintenance In-Reply-To: References: Message-ID: This is fine with me - Rico can make the necessary changes. Bob On 4/5/2022 9:57 PM, ueha.ayumu at fujitsu.com wrote: > > Hi Bob, > > I'm Ayumu Ueha, I work as a core of Tacker. > > Previously, some member of Tacker team participated to the core of > heat-translator. > > Since LiangLu has left the Tacker project, I would like to participate > the core of heat-translator from the Tacker team instead of him and > maintain it. Is it OK? > > This is agreed within the Tacker team at the Zed vPTG.
> > >heat-translator > > >- yoshito-ito (yoshito.itou.dr at hco.ntt.co.jp) > > >- LiangLu (lu.liang at jp.fujitsu.com) *** change to ueha > (ueha.ayumu at fujitsu.com) *** > > Best regards, > > Ueha > -- Bob Haddleton Director of R&D Innovation, Advanced Technology Design Studio Cloud and Network Services Nokia Contact number: +1 630 805 2990 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: bob_haddleton.vcf Type: text/vcard Size: 263 bytes Desc: not available URL: From laurentfdumont at gmail.com Wed Apr 6 21:25:36 2022 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Wed, 6 Apr 2022 17:25:36 -0400 Subject: [Openstack][nova] server groups In-Reply-To: References: Message-ID: Nice! Can you detail the fields you changed? I had a look in the DB for the request_specs and it's a big old JSON blurb. What fields did you change? On Wed, Apr 6, 2022, 12:15 PM Ignazio Cassano wrote: > We solved our issue: the spec field in request_specs table was wrong. Non > live migration works fine > Ignazio > > > Il Mer 6 Apr 2022, 09:05 Ignazio Cassano ha > scritto: > >> We've 2 cases, those VMs are in the same group, one fails when migration >> is launched, the other working well, you can see output of nova live >> migration and spec on DB >> >> not working: >> https://paste.openstack.org/show/b4QfkVHkUpIC97E3aWAx/ >> working: >> https://paste.openstack.org/show/busPt39bkfUzQthk1Tcf/ >> >> Ignazio >> >> Il Mer 6 Apr 2022, 06:53 Laurent Dumont ha >> scritto: >> >>> I cannot easily reproduce, but what does Nova complain about with the >>> live migration? Any chance you can run it with DEBUG? >>> >>> On Wed, Apr 6, 2022 at 12:34 AM Ignazio Cassano < >>> ignaziocassano at gmail.com> wrote: >>> >>>> Thanks Laurent. Sometimes the trick works and instances can migrate. We >>>> do not understand what is wrong when instances fail to migrate. 
>>>> We are usung soft-anti-affinity policy. >>>> The spec field we inserted in both cases seeems the same. >>>> Ignazio. >>>> >>>> Il Mer 6 Apr 2022, 01:03 Laurent Dumont ha >>>> scritto: >>>> >>>>> I'm trying to find where else this was discussed, but afaik, this was >>>>> never supported. >>>>> >>>>> I am not sure if someone was able to "hack" it's way to a working >>>>> setup. It's a bit of a shame because it makes server-groups really not >>>>> flexible :( >>>>> >>>>> On Tue, Apr 5, 2022 at 3:04 PM Ignazio Cassano < >>>>> ignaziocassano at gmail.com> wrote: >>>>> >>>>>> Hello, we noted that instances can be inserted in server groups only >>>>>> at instance creation step but we need to insert in a server group some old >>>>>> instances. >>>>>> >>>>>> We tried to modify database nova_api server group tables but we noted >>>>>> that we must modify spec in request_specs table . For us is not clear how >>>>>> to modify the spec value. >>>>>> We tried to investigate looking at instances inserted in a server >>>>>> group at creation step and we got issues in instance live migration. >>>>>> Please, anyone could provide any utility to do it or any template ? >>>>>> Thanks >>>>>> Ignazio >>>>>> >>>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From laurentfdumont at gmail.com Wed Apr 6 22:40:50 2022 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Wed, 6 Apr 2022 18:40:50 -0400 Subject: Wireguard setup between a server and a aio In-Reply-To: References: Message-ID: I am not quite clear on your setup, but you could consider disabling port-security on the VM in AIO. That might be the cause as Wireguard might generate packets with a fake mac or IP address. 
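The reasoning behind that suggestion: with port security enabled, Neutron's anti-spoofing rules only pass traffic whose source MAC/IP matches the port's own addresses (or an explicit allowed-address-pair), so tunnel software that emits other source addresses gets silently dropped. A toy model of the check — the real enforcement lives in iptables/OVS flow rules and covers more cases than this sketch:

```python
def passes_port_security(port, src_mac, src_ip):
    """Toy model of Neutron's anti-spoof filtering for one packet."""
    if not port["port_security_enabled"]:
        return True  # no filtering at all when port security is off
    allowed = {(port["mac_address"], port["ip_address"])}
    allowed |= set(port.get("allowed_address_pairs", []))
    return (src_mac, src_ip) in allowed

port = {"port_security_enabled": True,
        "mac_address": "fa:16:3e:00:00:01",
        "ip_address": "10.10.10.5",
        "allowed_address_pairs": []}

print(passes_port_security(port, "fa:16:3e:00:00:01", "10.10.10.5"))    # True: port's own address
print(passes_port_security(port, "fa:16:3e:00:00:01", "192.168.50.1"))  # False: spoofed source IP
port["port_security_enabled"] = False
print(passes_port_security(port, "fa:16:3e:00:00:01", "192.168.50.1"))  # True: filtering disabled
```

Depending on the deployment, the practical options are disabling port security on the port (which typically also requires removing its security groups first) or, less drastically, whitelisting the extra addresses as allowed-address-pairs.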
On Wed, Apr 6, 2022 at 12:29 PM Adivya Singh wrote: > Hi Team, > > I'm trying to set up Wireguard between a web server and an AIO server > with a public IP. > > I have created a tunnel interface between the 2 servers with the endpoint in the AIO's > public IP range, but I am still unable to reach the public IP of the AIO. > > Note: please note that IT has not allowed a public IP range for the AIO. > > Regards > Adivya Singh > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Thu Apr 7 05:23:41 2022 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Thu, 7 Apr 2022 07:23:41 +0200 Subject: [Openstack][nova] server groups In-Reply-To: References: Message-ID: Hello Laurent, today I am out of the office. Tomorrow I will send the spec value of an instance with server groups and the spec value of an instance without server groups. Ignazio On Wed 6 Apr 2022, 23:25 Laurent Dumont wrote: > Nice! > > Can you detail the fields you changed? I had a look in the DB at the > request_specs and it's a big old JSON blob. > > What fields did you change? > > On Wed, Apr 6, 2022, 12:15 PM Ignazio Cassano > wrote: > >> We solved our issue: the spec field in the request_specs table was wrong. Now >> live migration works fine >> Ignazio >> >> >> On Wed 6 Apr 2022, 09:05 Ignazio Cassano >> wrote: >> >>> We've 2 cases; those VMs are in the same group: one fails when migration >>> is launched, the other works well. You can see the output of the nova live >>> migration and the spec in the DB: >>> >>> not working: >>> https://paste.openstack.org/show/b4QfkVHkUpIC97E3aWAx/ >>> working: >>> https://paste.openstack.org/show/busPt39bkfUzQthk1Tcf/ >>> >>> Ignazio >>> >>> On Wed 6 Apr 2022, 06:53 Laurent Dumont >>> wrote: >>> >>>> I cannot easily reproduce, but what does Nova complain about with the >>>> live migration? Any chance you can run it with DEBUG?
>>>> >>>> On Wed, Apr 6, 2022 at 12:34 AM Ignazio Cassano < >>>> ignaziocassano at gmail.com> wrote: >>>> >>>>> Thanks Laurent. Sometimes the trick works and instances can migrate. >>>>> We do not understand what is wrong when instances fail to migrate. >>>>> We are using soft-anti-affinity policy. >>>>> The spec field we inserted in both cases seems the same. >>>>> Ignazio. >>>>> >>>>> On Wed 6 Apr 2022, 01:03 Laurent Dumont >>>>> wrote: >>>>> >>>>>> I'm trying to find where else this was discussed, but afaik, this was >>>>>> never supported. >>>>>> >>>>>> I am not sure if someone was able to "hack" its way to a working >>>>>> setup. It's a bit of a shame because it makes server-groups really not >>>>>> flexible :( >>>>>> >>>>>> On Tue, Apr 5, 2022 at 3:04 PM Ignazio Cassano < >>>>>> ignaziocassano at gmail.com> wrote: >>>>>> >>>>>>> Hello, we noted that instances can be inserted in server groups only >>>>>>> at the instance creation step, but we need to insert some old instances >>>>>>> in a server group. >>>>>>> >>>>>>> We tried to modify the nova_api database server group tables, but we >>>>>>> noted that we must also modify spec in the request_specs table. It is not >>>>>>> clear to us how to modify the spec value. >>>>>>> We tried to investigate by looking at instances inserted in a server >>>>>>> group at the creation step, and we got issues in instance live migration. >>>>>>> Please, could anyone provide a utility or a template to do this? >>>>>>> Thanks >>>>>>> Ignazio >>>>>>> >>>>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From amotoki at gmail.com Thu Apr 7 05:53:36 2022 From: amotoki at gmail.com (Akihiro Motoki) Date: Thu, 7 Apr 2022 14:53:36 +0900 Subject: [stable][horizon] add vishalmanchanda and tmazur to horizon-stable-core Message-ID: Hi, To horizon team, During the horizon PTG, we revisited our stable branch reviewers. We (all existing core reviewers) agree to add vishalmanchanda and tmazur to the horizon
We believe they are familiar with the stable policy. Welcome! We also agreed to drop David Lyle from the horizon stable core as he is no longer active in horizon for long. Thanks David for your contributions so far! To stable-maint-core team, Could you apply the following changes to horizon-stable-main group in the Gerrit? Additions: manchandavishal143 at gmail.com, t.v.ovtchinnikova at gmail.com Removal: dklyle0 at gmail.com Thanks, Akihiro Motoki (irc: amotoki) From eblock at nde.ag Thu Apr 7 08:16:52 2022 From: eblock at nde.ag (Eugen Block) Date: Thu, 07 Apr 2022 08:16:52 +0000 Subject: [neutron] exposing ip address of external Network from within the virtual machine In-Reply-To: Message-ID: <20220407081652.Horde.9ev6LHAeGvbWnTFGgeW_hzh@webmail.nde.ag> Hi, > I want to expose the and Ip from the external network inside the vm > instance , but when I try to directly attach an interface from the > external network to the instance, I don't get and Ip address inside the > instance, and even if I assign it manually it still doesn't work, I can't > use enable DHCP with public network because it already has its own external > dhcp server. when you assign the IP manually, does it match what neutron shows for that port? To get an IP from an external network without DHCP you can use config drive during instance creation, cloud-init is required for that. Or if the instance already exists you can create a port with the fixed IP and assign that port to the instance. Then you still need to configure the IP within the instance, but then it should work. 
Regards, Eugen Quoting A Monster: > I have two networks, one internal to openstack "internal_network" > 10.10.10.0/24 and an external network "public" which is connected to an > external network 192.168.100.0/24 > in order to connect instances to the "public" network, I created a router whose > gateway is the public network and which is also connected to > internal_network, and used floating IPs to access VM instances from the > external network (public). But the problem I have encountered now is that > I want to expose an IP from the external network inside the VM > instance, but when I try to directly attach an interface from the > external network to the instance, I don't get an IP address inside the > instance, and even if I assign it manually it still doesn't work. I can't > enable DHCP on the public network because it already has its own external > DHCP server. > How can I solve this problem? From ts-takahashi at nec.com Thu Apr 7 08:41:41 2022 From: ts-takahashi at nec.com (=?iso-2022-jp?B?VEFLQUhBU0hJIFRPU0hJQUtJKBskQjliNjYhIUlSTEAbKEIp?=) Date: Thu, 7 Apr 2022 08:41:41 +0000 Subject: [tacker] Cancel PTG Day4 on 8th Apr. Message-ID: Hi, We will cancel Tacker PTG Day4 on 8th April because all topics have been discussed and completed. Regards, Toshiaki -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5764 bytes Desc: not available URL: From wodel.youchi at gmail.com Thu Apr 7 09:15:34 2022 From: wodel.youchi at gmail.com (wodel youchi) Date: Thu, 7 Apr 2022 10:15:34 +0100 Subject: [Kolla-ansible][Xena] Test Trove module Message-ID: Hi, I am testing the deployment of Xena, and part of my testing is to test the different modules.
I am trying to test the Trove module, and for now there is no dev image for Xena here: https://tarballs.opendev.org/openstack/trove/images/ So I decided to build the image; my base system is Rocky Linux. I followed the documentation here: https://docs.openstack.org/trove/latest/admin/building_guest_images.html#build-images-using-trovestack But when executing the build command [deployer at rscdeployer scripts]$ ./trovestack build-image ubuntu bionic true ubuntu $HOME/images/trove-guest-ubuntu-bionic-dev.qcow2 ******************************************************************************* Params for cmd_build_image function: ubuntu bionic true ubuntu /home/deployer/images/trove-guest-ubuntu-bionic-dev.qcow2 ******************************************************************************* Ensuring we have all packages needed to build image. sudo: apt-get: command not found I got this: sudo: apt-get: command not found Do I have to build the image on an Ubuntu system? I thought that it is similar to building the Octavia image, i.e. that it is not tied to the Linux distro you are using. Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ricolin at ricolky.com Thu Apr 7 09:40:10 2022 From: ricolin at ricolky.com (Rico Lin) Date: Thu, 7 Apr 2022 17:40:10 +0800 Subject: [tc][tacker][heat-translator] Discussion about heat-translator maintenance In-Reply-To: References: Message-ID: Hi Ayumu Ueha I just added you to heat-translator-core. You should be able to perform core duties and see heat-translator-core in your Gerrit groups now. Rico Lin On Thu, Apr 7, 2022 at 5:29 AM HADDLETON, Robert W (Bob) < bob.haddleton at nokia.com> wrote: > This is fine with me - Rico can make the necessary changes. > > Bob > > On 4/5/2022 9:57 PM, ueha.ayumu at fujitsu.com wrote: > > Hi Bob, > > I'm Ayumu Ueha, I work as a core of Tacker.
> > Previously, some members of the Tacker team participated in the > heat-translator core team. > > Since LiangLu has left the Tacker project, I would like to join the > heat-translator core team from the Tacker team in his place and maintain > it. Is that OK? > > This was agreed within the Tacker team at the Zed vPTG. > > > > >heat-translator > > >- yoshito-ito (yoshito.itou.dr at hco.ntt.co.jp) > > >- LiangLu (lu.liang at jp.fujitsu.com) *** change to ueha ( > ueha.ayumu at fujitsu.com) *** > > > > Best regards, > > Ueha > > > > -- > Bob Haddleton > Director of R&D Innovation, Advanced Technology Design Studio > Cloud and Network Services > Nokia > Contact number: +1 630 805 2990 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ueha.ayumu at fujitsu.com Thu Apr 7 09:51:54 2022 From: ueha.ayumu at fujitsu.com (ueha.ayumu at fujitsu.com) Date: Thu, 7 Apr 2022 09:51:54 +0000 Subject: [tc][tacker][heat-translator] Discussion about heat-translator maintenance In-Reply-To: References: Message-ID: Hi Rico, Bob Thank you for adding me to heat-translator-core! I will start my activities as a core member. Many thanks! Best regards, Ueha From: Rico Lin Sent: Thursday, April 7, 2022 6:40 PM To: HADDLETON, Robert W (Bob) Cc: OpenStack Discuss Subject: Re: [tc][tacker][heat-translator] Discussion about heat-translator maintenance Hi Ayumu Ueha I just added you to heat-translator-core. You should be able to perform core duties and see heat-translator-core in your Gerrit groups now. Rico Lin On Thu, Apr 7, 2022 at 5:29 AM HADDLETON, Robert W (Bob) > wrote: This is fine with me - Rico can make the necessary changes. Bob On 4/5/2022 9:57 PM, ueha.ayumu at fujitsu.com wrote: Hi Bob, I'm Ayumu Ueha, I work as a core of Tacker.
Since LiangLu has left the Tacker project, I would like to join the heat-translator core team from the Tacker team in his place and maintain it. Is that OK? This was agreed within the Tacker team at the Zed vPTG. >heat-translator >- yoshito-ito (yoshito.itou.dr at hco.ntt.co.jp) >- LiangLu (lu.liang at jp.fujitsu.com) *** change to ueha (ueha.ayumu at fujitsu.com) *** Best regards, Ueha -- Bob Haddleton Director of R&D Innovation, Advanced Technology Design Studio Cloud and Network Services Nokia Contact number: +1 630 805 2990 -------------- next part -------------- An HTML attachment was scrubbed... URL: From elod.illes at est.tech Thu Apr 7 12:18:32 2022 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Thu, 7 Apr 2022 14:18:32 +0200 Subject: [stable][horizon] add vishalmanchanda and tmazur to horizon-stable-core In-Reply-To: References: Message-ID: Hi horizon team, Added & removed. Welcome to the new stable cores! \o/ The stable policy can be found here: https://docs.openstack.org/project-team-guide/stable-branches.html If you have any questions or are unsure about a patch, feel free to ping me on #openstack-stable Cheers, Előd (irc: elodilles) On 2022. 04. 07. 7:53, Akihiro Motoki wrote: > Hi, > > To horizon team, > During the horizon PTG, we revisited our stable branch reviewers. > We (all existing core reviewers) agree to add vishalmanchanda and > tmazur to the horizon
> > Additions: manchandavishal143 at gmail.com, t.v.ovtchinnikova at gmail.com > Removal: dklyle0 at gmail.com > > Thanks, > Akihiro Motoki (irc: amotoki) > From katonalala at gmail.com Thu Apr 7 12:25:22 2022 From: katonalala at gmail.com (Lajos Katona) Date: Thu, 7 Apr 2022 14:25:22 +0200 Subject: [neutron] PTG - Thursday Message-ID: Hi, As we have no topics we have no meeting today. We have the Nova-Neutron cross project meeting on Friday 14:00 UTC: https://etherpad.opendev.org/p/neutron-zed-ptg#L308 Cheers Lajos Katona (lajoskatona) -------------- next part -------------- An HTML attachment was scrubbed... URL: From katonalala at gmail.com Thu Apr 7 12:36:20 2022 From: katonalala at gmail.com (Lajos Katona) Date: Thu, 7 Apr 2022 14:36:20 +0200 Subject: Requests to solve Unhelpful error message when neutron server is unavailable In-Reply-To: References: Message-ID: Hi Atim, First of all: welcome! The bu you linked (see [1]) describes an openstackclient issue for Neutron, I checked it with current master, and the error msg seems ok: $ openstack extension list --network Failed to retrieve extensions list from Network API So the issue seems to be fixed, but anyway by openstack policies we first fix bugs on master and backport them to older branches. >From your comment on the bug, you are an outreachy applicant, perhaps try to ask for your mentor who can help you find things to help your learning. Lajos Katona(lajoskatona [1]: https://bugs.launchpad.net/ubuntu/+source/python-openstackclient/+bug/1675394 ATIM IMMACULATE ezt ?rta (id?pont: 2022. ?pr. 6., Sze, 23:09): > I am Immaculate Atim, an outreachy applicant, interested to work on open > stack project #1 Add missing CLI support for some Glance API in Openstack > Client . I request to work on the issue of Unhelpful error message when > neutron server is unavailable > . > I would like to submit a patch for the stable/ocata branch that fixes this > issue if it is decided that this is the right way to fix this issue. 
I > commented to work on it but did not get any feedback, so I would be glad to > get feedback on this. > > Thank you. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amonster369 at gmail.com Thu Apr 7 12:46:54 2022 From: amonster369 at gmail.com (A Monster) Date: Thu, 7 Apr 2022 13:46:54 +0100 Subject: [neutron] exposing ip address of external Network from Message-ID: I tried attaching that public interface directly to the instance, and used that same IP address to configure the port manually inside the instance, but it doesn't work. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Thu Apr 7 12:55:55 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 07 Apr 2022 07:55:55 -0500 Subject: [stable][horizon] add vishalmanchanda and tmazur to horizon-stable-core In-Reply-To: References: Message-ID: <18004187d92.11ccbbcf7412477.4917568678189936416@ghanshyammann.com> ---- On Thu, 07 Apr 2022 07:18:32 -0500 Előd Illés wrote ---- > Hi horizon team, > > Added & removed. Welcome to the new stable cores! \o/ > Stable policy can be found here: > https://docs.openstack.org/project-team-guide/stable-branches.html > If you have any questions or are unsure about a patch then feel free to ping > me on #openstack-stable Thanks Elod for taking care of it. Also, one thing to note: we have changed the project-specific stable team maintenance process, and now project teams can do that in consultation with the stable-maint-core team. - https://docs.openstack.org/project-team-guide/stable-branches.html#project-specific-teams -gmann > > Cheers, > > Előd > (irc: elodilles) > > > On 2022. 04. 07. 7:53, Akihiro Motoki wrote: > > Hi, > > > > To horizon team, > > During the horizon PTG, we revisited our stable branch reviewers. > > We (all existing core reviewers) agree to add vishalmanchanda and > > tmazur to the horizon > > stable core team. We believe they are familiar with the stable policy.
Welcome! > > We also agreed to drop David Lyle from the horizon stable core as he > > is no longer active > > in horizon for long. Thanks David for your contributions so far! > > > > To stable-maint-core team, > > Could you apply the following changes to horizon-stable-main group in > > the Gerrit? > > > > Additions: manchandavishal143 at gmail.com, t.v.ovtchinnikova at gmail.com > > Removal: dklyle0 at gmail.com > > > > Thanks, > > Akihiro Motoki (irc: amotoki) > > > From eblock at nde.ag Thu Apr 7 13:19:05 2022 From: eblock at nde.ag (Eugen Block) Date: Thu, 07 Apr 2022 13:19:05 +0000 Subject: [neutron] exposing ip address of external Network from In-Reply-To: Message-ID: <20220407131905.Horde.aLr6AnPGB565CwmN5Aj20XX@webmail.nde.ag> What exactly doesn't work? If you configure the IP manually, does it show in 'ip a' output? Is the interface up? You need to share more details, otherwise it's difficult to help; it would be mostly guessing. Quoting A Monster: > I tried attaching that public interface directly to the instance, and used > that same IP address to configure the port manually inside the instance, > but it doesn't work. From wodel.youchi at gmail.com Thu Apr 7 13:38:02 2022 From: wodel.youchi at gmail.com (wodel youchi) Date: Thu, 7 Apr 2022 14:38:02 +0100 Subject: [Kolla-ansible][Xena] Test Trove module In-Reply-To: References: Message-ID: Hi, I found the error: Rocky Linux is not supported, so I switched to a CentOS machine.
The script starts, but I had two problems: The trovestack script searches for a package named qemu and doesn't find it, so I modified the script to use qemu* instead of qemu The second problem is related to the download itself; I get this error: 2022-04-07 13:18:02.677 | Caching guest-agent from https://opendev.org/openstack/trove in /home/deployer/.cache/ image-create/source-repositories/guest_agent_842a440b9b12731c50f3b4042bf842ea7e58467d 2022-04-07 13:22:31.299 | error: RPC failed; curl 18 transfer closed with outstanding read data remaining 2022-04-07 13:22:31.299 | error: 6149 bytes of body are still expected 2022-04-07 13:22:31.300 | fetch-pack: unexpected disconnect while reading sideband packet 2022-04-07 13:22:31.300 | fatal: early EOF 2022-04-07 13:22:31.301 | fatal: fetch-pack: invalid index-pack output Any ideas? Regards. On Thu, 7 Apr 2022 at 10:15, wodel youchi wrote: > Hi, > > I am testing the deployment of Xena, and part of my testing is to test the > different modules. I am trying to test the Trove module and for now there
I am trying to test the Trove module and for now there > is no dev image for Xena here : > https://tarballs.opendev.org/openstack/trove/images/ > > So I decided to build the image, my base system is Rocky Linux, I followed > the documentation here : > https://docs.openstack.org/trove/latest/admin/building_guest_images.html#build-images-using-trovestack > > But when executing the build command > [deployer at rscdeployer scripts]$ ./trovestack build-image ubuntu bionic > true ubuntu $HOME/images/trove-guest-ubuntu-bionic-dev.qcow2 > > ******************************************************************************* > Params for cmd_build_image function: ubuntu bionic true ubuntu > /home/deployer/images/trove-guest-ubuntu-bionic-dev.qcow2 > > ******************************************************************************* > > *Ensuring we have all packages needed to build image.sudo: apt-get: > command not found* > > *I got this : sudo: apt-get: command not found * > > Do I have to build the image on an ubuntu system? I thought that it is > similar to building the octavia image, that it is not tied to the Linux > distro you are using. > > Regards. > > > Virus-free. > www.avast.com > > <#m_8737481173232019115_DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wodel.youchi at gmail.com Thu Apr 7 14:07:09 2022 From: wodel.youchi at gmail.com (wodel youchi) Date: Thu, 7 Apr 2022 15:07:09 +0100 Subject: [kolla-ansible][Xena][Magnum] dns configured in template is not pushed into cluster instances Message-ID: Hi, I deployed Xena using kolla-ansible. 
Then I deployed Magnum, then created a kubernetes template with --dns-nameserver ip_of_my_dns_server That dns server does not get pushed into the master instance, and I get : Apr 07 13:57:32 k8simplekubf34-bnczkwda7z5d-master-0 podman[2153]: Authorization failed: *Unable to establish connection to https://dash.cloud.example.local:5000/v3/auth/tokens * Apr 07 13:57:32 k8simplekubf34-bnczkwda7z5d-master-0 podman[2153]: Source [heat] Unavailable. Apr 07 13:57:32 k8simplekubf34-bnczkwda7z5d-master-0 podman[2153]: /var/lib/os-collect-config/local-data not found. Skipping In the master's /etc/resolv.conf I have google's dns 8.8.8.8 I am using fedora-core 34 for my cluster. Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Thu Apr 7 14:28:53 2022 From: zigo at debian.org (Thomas Goirand) Date: Thu, 7 Apr 2022 16:28:53 +0200 Subject: Eventlet fails under Python 3.10 Message-ID: Hi, This was sent to the Debian BTS: https://bugs.debian.org/1009112 This issue was reported upstream: https://github.com/eventlet/eventlet/issues/730 https://github.com/eventlet/eventlet/issues/739 For #730, I tried manually, indeed: # python3.9 tests/isolated/patcher_existing_locks_locked.py pass # python3.10 tests/isolated/patcher_existing_locks_locked.py Traceback (most recent call last): File "/root/eventlet/python-eventlet-0.30.2/tests/isolated/patcher_existing_locks_locked.py", line 19, in lock.release() RuntimeError: cannot release un-acquired lock I don't understand what's going on. Can anyone help? Cheers, Thomas Goirand (zigo) From immaculateatim56 at gmail.com Thu Apr 7 14:53:38 2022 From: immaculateatim56 at gmail.com (ATIM IMMACULATE) Date: Thu, 7 Apr 2022 17:53:38 +0300 Subject: Requests to solve Unhelpful error message when neutron server is unavailable In-Reply-To: References: Message-ID: Thank you for your help. On Thu, Apr 7, 2022, 3:36 PM Lajos Katona wrote: > Hi Atim, > First of all: welcome! 
> The bug you linked (see [1]) describes an openstackclient issue for > Neutron. I checked it with current master, and the error msg seems ok: > $ openstack extension list --network > Failed to retrieve extensions list from Network API > > So the issue seems to be fixed, but in any case, by OpenStack policies we first > fix bugs on master and backport them to older branches. > From your comment on the bug, you are an Outreachy applicant; perhaps ask > your mentor, who can help you find things that support your learning. > > Lajos Katona (lajoskatona) > > [1]: > https://bugs.launchpad.net/ubuntu/+source/python-openstackclient/+bug/1675394 > > ATIM IMMACULATE wrote (on Wed, 6 Apr > 2022 at 23:09): > >> I am Immaculate Atim, an Outreachy applicant, interested to work on OpenStack >> project #1 Add missing CLI support for some Glance API in OpenStack >> Client. I request to work on the issue of Unhelpful error message when >> neutron server is unavailable. >> I would like to submit a patch for the stable/ocata branch that fixes this >> issue if it is decided that this is the right way to fix this issue. I >> commented to work on it but did not get any feedback, so I would be glad to >> get feedback on this. >> >> Thank you. >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Thu Apr 7 14:58:09 2022 From: cboylan at sapwetik.org (Clark Boylan) Date: Thu, 07 Apr 2022 07:58:09 -0700 Subject: [Kolla-ansible][Xena] Test Trove module In-Reply-To: References: Message-ID: On Thu, Apr 7, 2022, at 6:38 AM, wodel youchi wrote: > Hi, > I found the error: Rocky Linux is not supported, so I switched to a CentOS
The script starts but I had two problems : > The trovestack script searches for a package named qemu and don't find > it, so I modified the script to use qemu* instead of qemu > > The second problem is related to the download itself, I have this error > : > 2022-04-07 13:18:02.677 | Caching guest-agent from > https://opendev.org/openstack/trove in /home/deployer/.cache/ > image-create/source-repositories/guest_agent_842a440b9b12731c50f3b4042bf842ea7e58467d > 2022-04-07 13:22:31.299 | error: RPC failed; curl 18 transfer closed > with outstanding read data remaining > 2022-04-07 13:22:31.299 | error: 6149 bytes of body are still expected > 2022-04-07 13:22:31.300 | fetch-pack: unexpected disconnect while > reading sideband packet > 2022-04-07 13:22:31.300 | fatal: early EOF > 2022-04-07 13:22:31.301 | fatal: fetch-pack: invalid index-pack output > > Any ideas? This is related to cloning and caching the https://opendev.org/openstack/trove git repo during the image build for the trove database image. This looks like a network error of some sort with the connection ending before it was completed. You might want to double check any proxies or firewalls between you and https://opendev.org. It may have also been the Internet acting up and trying again would be fine. I would try again and if it persists start looking at network connectivity between you and https://opendev.org and take it from there. > > Regards. 
From amonster369 at gmail.com Thu Apr 7 15:04:26 2022 From: amonster369 at gmail.com (A Monster) Date: Thu, 7 Apr 2022 16:04:26 +0100 Subject: [neutron] exposing ip address of external Network from Message-ID: When I attach an interface to an external network, OpenStack gives an IP address of 192.168.10.13 to the instance. Inside the instance, I can see a new network interface, but it doesn't get an IP address automatically, and when I configure it with 192.168.10.13 manually, I still cannot ping the other IPs on this network, and I cannot ping 192.168.10.13 from the outside. I think it's not feasible to attach instances to external networks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Thu Apr 7 15:38:07 2022 From: amy at demarco.com (Amy Marrich) Date: Thu, 7 Apr 2022 10:38:07 -0500 Subject: OPS Meetup Call for Topics Message-ID: During our PTG session, we started the process of gathering potential topics for the OPS Meetup to be held in Berlin on June 10. If you are interested in attending the event, please add any topics you wish to discuss to the etherpad[0]. The deadline for adding new topics is April 30, so we can open session voting in early May. Registration will be available shortly. Thanks, Amy 0 - https://etherpad.opendev.org/p/april2022-ptg-openstack-ops -------------- next part -------------- An HTML attachment was scrubbed... URL: From pdeore at redhat.com Thu Apr 7 16:00:52 2022 From: pdeore at redhat.com (Pranali Deore) Date: Thu, 7 Apr 2022 21:30:52 +0530 Subject: [rhos-storage] [glance] Zed PTG schedule In-Reply-To: References: Message-ID: Hello Everyone, We will not be having the Secure RBAC session tomorrow (Friday), as due to the time crunch we couldn't discuss the Glance-related RBAC details in today's Open hour secure RBAC session. So, we will discuss the details in our weekly meeting once we have some information from the policy popup meeting after PTG.
Thanks, ~Pranali On Tue, Apr 5, 2022 at 10:24 PM Abhishek Kekane wrote: > Hi Everyone, > > An update on the Glance PTG etherpad[1] , on Friday 08 April, we will be > having another session to discuss Secure RBAC community goal where we will > be discussion what work we need to target in this cycle, > This session will be held after Open hour session happening on Thursday so > it will give us more clarity about community goal and Zed cycle target. > > [1] https://etherpad.opendev.org/p/zed-glance-ptg > > > > Thanks & Best Regards, > > Abhishek Kekane > > > On Mon, Mar 28, 2022 at 7:26 PM Abhishek Kekane > wrote: > >> Hello All, >> Greetings!!! >> >> Zed PTG is going to start next week and if you haven't already >> registered, please do so as soon as possible [1]. >> >> I have created an etherpad [2] and also added day wise topics along with >> timings we are going to discuss. Kindly let me know if you have any >> concerns with allotted time slots. We also have one slot open on Wednesday >> and Friday is kept reserved for any unplanned discussions. So please feel >> free to add your topics if you still haven't added yet. >> >> As a reminder, these are the time slots for our discussion. >> >> Tuesday 5 April 2022 >> 1400 UTC to 1700 UTC >> >> Wednesday 6 April 2022 >> 1400 UTC to 1700 UTC >> >> Thursday 7 April 2022 >> 1400 UTC to 1700 UTC >> >> Friday 8 April 2022 >> 1400 UTC to 1700 UTC >> >> NOTE: >> At the moment we don't have any sessions scheduled on Friday, if there >> are any last moment request(s)/topic(s) we will discuss them on Friday else >> we will conclude our PTG on Thursday 7th April. >> >> We will be using bluejeans for our discussion, kindly try to use it once >> before the actual discussion. The meeting URL is mentioned in etherpad [2] >> and will be the same throughout the PTG. 
>> >> [1] https://openinfra-ptg.eventbrite.com/ >> [2] https://etherpad.opendev.org/p/zed-glance-ptg >> >> Thank you, >> >> Abhishek >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anuragsinghrajawat22 at gmail.com Thu Apr 7 14:57:58 2022 From: anuragsinghrajawat22 at gmail.com (Anurag Singh Rajawat) Date: Thu, 7 Apr 2022 20:27:58 +0530 Subject: [glance] Outreachy 2022 Message-ID: Dear glance team, I've set up glance, glance-store and glance-client locally, but some tests for glance were failing. Also, are there any good first issues so that I can understand the project more clearly? I also asked about it on IRC but didn't get a response. Thanks Sincerely Anurag -------------- next part -------------- An HTML attachment was scrubbed... URL: From akekane at redhat.com Thu Apr 7 18:33:26 2022 From: akekane at redhat.com (Abhishek Kekane) Date: Fri, 8 Apr 2022 00:03:26 +0530 Subject: [glance] Outreachy 2022 In-Reply-To: References: Message-ID: Hi Anurag, Sorry that we were not able to address you on IRC. Currently the glance team is busy with the PTG, which will end tomorrow evening, and that is why we might have missed your ping on IRC. I would suggest sharing your failures so that we can guide you. I think from Monday onwards everyone will be back to their daily routine, so the Outreachy glance team will help you resolve your queries. Meanwhile, if it is urgent, you can share your doubts and I will try my best to resolve them. Thanks and Regards, Abhishek On Thu, 7 Apr, 2022, 22:30 Anurag Singh Rajawat, < anuragsinghrajawat22 at gmail.com> wrote: > Dear glance team, I've set up glance, glance-store and glance-client locally, > but some tests for glance were failing. Also, are there any good > first issues so that I can understand the project more clearly? > I also asked about it on IRC but didn't get a response. > > Thanks > > Sincerely > Anurag > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From yasufum.o at gmail.com Fri Apr 8 01:48:20 2022 From: yasufum.o at gmail.com (Yasufumi Ogawa) Date: Fri, 8 Apr 2022 10:48:20 +0900 Subject: [tacker] Cancel PTG Day4 on 8th Apr. In-Reply-To: References: Message-ID: <7de65195-e653-46d4-6c6f-c618892284e3@gmail.com> Toshiaki and team, Thank you for driving the PTG meeting yesterday. I've unbooked Austin room for canceled sessions. Yasufumi On 2022/04/07 17:41, TAKAHASHI TOSHIAKI(?? ??) wrote: > Hi, > > We will cancel Tacker PTG Day4 on 8th April because all topics have been > discussed and completed. > > Regards, > > Toshiaki > From park0kyung0won at dgist.ac.kr Fri Apr 8 04:59:50 2022 From: park0kyung0won at dgist.ac.kr (=?UTF-8?B?67CV6rK97JuQ?=) Date: Fri, 8 Apr 2022 13:59:50 +0900 (KST) Subject: [Question] Do I must separate management network and overlay network? Message-ID: <1126862712.105646.1649393990764.JavaMail.root@mailwas2> An HTML attachment was scrubbed... URL: From park0kyung0won at dgist.ac.kr Fri Apr 8 05:19:53 2022 From: park0kyung0won at dgist.ac.kr (=?UTF-8?B?67CV6rK97JuQ?=) Date: Fri, 8 Apr 2022 14:19:53 +0900 (KST) Subject: I sent e-mail to this mailing list but got reply from support@paribet.ru! Why!?!??? Message-ID: <226576923.105955.1649395193421.JavaMail.root@mailwas2> An HTML attachment was scrubbed... 
URL: From skaplons at redhat.com Fri Apr 8 05:52:29 2022 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 08 Apr 2022 07:52:29 +0200 Subject: [neutron] exposing ip address of external Network from In-Reply-To: References: Message-ID: <3485593.R56niFO833@p1> Hi, On Thursday, 7 April 2022 17:04:26 CEST A Monster wrote: > When I attach an interface to an external network, openstack gives an ip > address of 192.168.10.13 to the instance, > Inside the instance, I can see a new network interface, but it doesn't get > an ip address automatically, and when I configure it to have 192.169.10.13 > manually, I still cannot ping the other ips on this network, and I cannot > ping 192.168.10.13 from the outside > I think it's not feasible to attach instances to external networks. > An external network in Neutron is the same L2 network as any other. The only difference is that such a network is marked as "external", so neutron knows it can allocate floating IPs from it and use it as a router's gateway. Plugging an instance into such a network is totally valid and should work. Maybe your external network has DHCP disabled, and that's why the VM doesn't get an IP address automatically. Please first check whether you configured the proper netmask manually. Then check step by step where your packets are dropped. Maybe http://kaplonski.pl/blog/neutron-where-is-my-packet-2/[1] will be helpful for you. That post is mostly about vxlan (tenant) networks, but it also covers the flat network used for the router gateway, and traffic from your VM should take the same path as traffic from the router's gateway does in that post. -- Slawek Kaplonski Principal Software Engineer Red Hat -------- [1] http://kaplonski.pl/blog/neutron-where-is-my-packet-2/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From manchandavishal143 at gmail.com Fri Apr 8 06:23:41 2022 From: manchandavishal143 at gmail.com (vishal manchanda) Date: Fri, 8 Apr 2022 11:53:41 +0530 Subject: [stable][horizon] add vishalmanchanda and tmazur to horizon-stable-core In-Reply-To: <18004187d92.11ccbbcf7412477.4917568678189936416@ghanshyammann.com> References: <18004187d92.11ccbbcf7412477.4917568678189936416@ghanshyammann.com> Message-ID: Thanks, amotoki for proposing me to the stable core team and everyone else for the support! Regards, Vishal Manchanda On Thu, Apr 7, 2022 at 6:26 PM Ghanshyam Mann wrote: > ---- On Thu, 07 Apr 2022 07:18:32 -0500 El?d Ill?s > wrote ---- > > Hi horizon team, > > > > Added & removed. Welcome to the new stable cores! \o/ > > Stable policy can be found here: > > https://docs.openstack.org/project-team-guide/stable-branches.html > > If you have any question or unsure about a patch then feel free to ping > > me on #openstack-stable > > Thanks Elod for taking care of it. > > Also, one thing to note. We have changed the project specific stable team > maintenance process > and now project team cam do that with consultation of stable-maint-core > team. > > - > https://docs.openstack.org/project-team-guide/stable-branches.html#project-specific-teams > > -gmann > > > > > Cheers, > > > > El?d > > (irc: elodilles) > > > > > > On 2022. 04. 07. 7:53, Akihiro Motoki wrote: > > > Hi, > > > > > > To horizon team, > > > During the horizon PTG, we visit our stable branch reviewers. > > > We (all existing core reviewers) agree to add vishalmanchanda and > > > tmazur to the horizon > > > stable core team. We believe they are familiar with the stable > policy. Welcome! > > > We also agreed to drop David Lyle from the horizon stable core as he > > > is no longer active > > > in horizon for long. Thanks David for your contributions so far! 
> > > > > > To stable-maint-core team, > > > Could you apply the following changes to horizon-stable-main group in > > > the Gerrit? > > > > > > Additions: manchandavishal143 at gmail.com, t.v.ovtchinnikova at gmail.com > > > Removal: dklyle0 at gmail.com > > > > > > Thanks, > > > Akihiro Motoki (irc: amotoki) > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From katonalala at gmail.com Fri Apr 8 07:07:38 2022 From: katonalala at gmail.com (Lajos Katona) Date: Fri, 8 Apr 2022 09:07:38 +0200 Subject: [Question] Do I must separate management network and overlay network? In-Reply-To: <1126862712.105646.1649393990764.JavaMail.root@mailwas2> References: <1126862712.105646.1649393990764.JavaMail.root@mailwas2> Message-ID: Hi, That diagram is only an example: if you don't need a provider network and would like to use only overlay networks like Geneve, you can use just the suggested 2 interfaces, one for management and one for traffic. Lajos Katona (lajoskatona) 박경원 wrote (on Fri, 8 Apr 2022, 7:07): > Hello everyone > > I'm trying to set up an openstack cluster with openvswitch, following the > guide in the link below > > https://docs.openstack.org/neutron/yoga/admin/deploy-ovs-selfservice.html > > > The diagram in the link above states that compute nodes should have three > interfaces (management, overlay and provider) > > My question is, do I really need separate management and overlay > networks? (I only have two switches) > > It seems like overlay traffic between VMs in a virtual network is > encapsulated with GENEVE and will not escape to the management network. > > Is there any possible security risk in using the same network for both > overlay and management? (not performance concerns but security) > > > Thank you in advance! > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zigo at debian.org Fri Apr 8 07:12:06 2022 From: zigo at debian.org (Thomas Goirand) Date: Fri, 8 Apr 2022 09:12:06 +0200 Subject: [Question] Do I must separate management network and overlay network? In-Reply-To: <1126862712.105646.1649393990764.JavaMail.root@mailwas2> References: <1126862712.105646.1649393990764.JavaMail.root@mailwas2> Message-ID: <5b5c5062-b965-f0ee-f78b-212b8ef9cb12@debian.org> On 4/8/22 06:59, ??? wrote: > Hello everyone > > I'm trying to setup?openstack cluster with openvswitch, following the > guide in link below > > https://docs.openstack.org/neutron/yoga/admin/deploy-ovs-selfservice.html > > > Diagram in the link above states that compute nodes should have three > interfaces(management, overlay and provider) > > My question is, do I really need separated management network and > overlay network? (I only have two switches) You don't *have* to, but it's possible. The only difference in the setup is if the ml2 config file list a different IP address than the management IP, but it's ok if both are the same (it will continue to work). > It seems like overlay traffic between VMs in virtual network are > encapsulated with GENEVE, will not escape to management network. The traffic wont escape. It's just that if one VM floods the management network, your operations may become difficult. Alternatively, you can use the same wire, but with different subnets, and setup QoS in your switch, if you identified this may be a problem. That being said, with modern networking (like 2x 25 Gbits/s becoming very common), this isn't much of a problem anymore. I hope this helps, Cheers, Thomas Goirand (zigo) From oliver.weinmann at me.com Fri Apr 8 07:19:12 2022 From: oliver.weinmann at me.com (Oliver Weinmann) Date: Fri, 8 Apr 2022 09:19:12 +0200 Subject: Magnum not working / kube minions not spawning Message-ID: <931C5465-F7E8-430F-97E0-1FB335509E00@me.com> Hi, I recently deployed Openstack wallaby using kolla-ansible and also deployed magnum. 
I know it was working fine a while ago and I was able to spin up K8s clusters without a problem. But I think This was on Ussuri back then. I went through the magnum troubleshooting guide but couldn?t solve my problem. Magnum spins up the master node and I can log in via SSH using its floating IP. I checked the logs and saw this after waiting for a few minutes: role.kubernetes.io/master="" + echo 'Trying to label master node with node-role.kubernetes.io/master=""' + sleep 5s ++ kubectl get --raw=/healthz Error from server (InternalError): an error on the server ("[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[-]poststarthook/crd-informer-synced failed: reason withheld\n[-]poststarthook/bootstrap-controller failed: reason withheld\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[+]poststarthook/apiserver/bootstrap-system-flowcontrol-configuration ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[-]autoregister-completion failed: reason withheld\n[+]poststarthook/apiservice-openapi-controller ok\nhealthz check failed") has prevented the request from succeeding + '[' ok = '' ']' + echo 'Trying to label master node with node-role.kubernetes.io/master=""' + sleep 5s Trying to label master node with node-role.kubernetes.io/master="" ++ kubectl get --raw=/healthz + '[' ok = ok ']' + kubectl patch node k8s-test-small-cal-zwe5xmigugwj-master-0 --patch '{"metadata": {"labels": {"node-role.kubernetes.io/master": ""}}}' Error from 
server (NotFound): nodes "k8s-test-small-cal-zwe5xmigugwj-master-0" not found + echo 'Trying to label master node with node-role.kubernetes.io/master=""' Running kubectl get nodes is just empty even when appending all-namespaces. I pretty much used the documentation that I created when I was using Ussuri. I wonder what has changed since then that would make this fail. I googled for hours but was not able to find similar issues and if then it was about having different version of k8s server and client. Which is definitely not the case. I also tried this on Xena but it also fails. I do have the feeling that the issue is network related but I do not see any issues at all spinning up instances and also the communication between instances works fine. Here are my current configs: Globals.yml [vagrant at seed ~]$ grep ^[^#] /etc/kolla/globals.yml --- kolla_base_distro: "centos" kolla_install_type: "source" openstack_release: "wallaby" kolla_internal_vip_address: "192.168.45.222" kolla_external_vip_address: "192.168.2.222" network_interface: "eth2" neutron_external_interface: "eth1" keepalived_virtual_router_id: "222" enable_haproxy: "yes" enable_magnum: ?yes? 
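An aside on the escaped /healthz output quoted earlier in this message: it is hard to read in log form. Below is a small illustrative sketch (plain Python, not part of Magnum or kubectl; the sample text is an abbreviated copy of the response above) that pulls out just the failing checks:

```python
import re

# Abbreviated sample of the verbose kube-apiserver /healthz response
# quoted in the log above ('[+]' marks a passing check, '[-]' a failing one).
healthz_response = """[+]ping ok
[+]log ok
[+]etcd ok
[-]poststarthook/crd-informer-synced failed: reason withheld
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/apiservice-registration-controller ok
healthz check failed"""

def failing_checks(text):
    """Return the names of all checks reported as '[-]<name> failed'."""
    return re.findall(r"^\[-\](\S+) failed", text, flags=re.MULTILINE)

# Prints the three failing poststarthooks from the sample above.
print(failing_checks(healthz_response))
```

In the log above, the failing checks are all apiserver poststarthooks (crd-informer-synced, bootstrap-controller, rbac/bootstrap-roles), which is consistent with the apiserver still starting up rather than a kubectl/client version mismatch.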
multinode hosts file control[01:03] ansible_user=vagrant ansible_password=vagrant ansible_become=true api_interface=eth3 compute[01:02] ansible_user=vagrant ansible_password=vagrant ansible_become=true api_interface=eth3 [control] # These hostname must be resolvable from your deployment host control[01:03] # The above can also be specified as follows: #control[01:03] ansible_user=vagrant ansible_password=vagrant ansible_become=true #compute[01:02] ansible_user=vagrant ansible_password=vagrant ansible_become=true # The network nodes are where your l3-agent and loadbalancers will run # This can be the same as a host in the control group [network] control[01:03] #network01 #network02 [compute] compute[01:02] [monitoring] control[01:03] #monitoring01 # When compute nodes and control nodes use different interfaces, # you need to comment out "api_interface" and other interfaces from the globals.yml # and specify like below: #compute01 neutron_external_interface=eth0 api_interface=em1 storage_interface=em1 tunnel_interface=em1 [storage] control[01:03] #storage01 [deployment] localhost ansible_connection=local cat /etc/kolla/config/magnum.conf [trust] cluster_user_trust = True Sorry for the formatting. Sending this on a smartphone with plenty of copy and paste. Best Regards, Oliver -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ignaziocassano at gmail.com Fri Apr 8 07:56:55 2022 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Fri, 8 Apr 2022 09:56:55 +0200 Subject: [Openstack][nova] server groups In-Reply-To: References: Message-ID: Hello, int he nova_api dn table request specs, the following is spec value for an instance in server groups: {"nova_object.version": "1.8", "nova_object.changes": ["requested_destination", "instance_uuid", "retry", "num_instances", "pci_requests", "limits", "availability_zone", "force_nodes", "image", "instance_group", "force_hosts", "numa_topology", "ignore_hosts", "flavor", "project_id", "security_groups", "scheduler_hints"], "nova_object.name": "RequestSpec", "nova_object.data": {"requested_destination": null, "instance_uuid": "0cc6b190-73c7-4f97-8ec9-86c58ffadf2d", "retry": null, "num_instances": 2, "pci_requests": {"nova_object.version": "1.1", "nova_object.changes": ["requests"], "nova_object.name": "InstancePCIRequests", "nova_object.data": {"requests": []}, "nova_object.namespace": "nova"}, "limits": {"nova_object.version": "1.0", "nova_object.changes": ["vcpu", "memory_mb", "disk_gb", "numa_topology"], " nova_object.name": "SchedulerLimits", "nova_object.data": {"vcpu": null, "memory_mb": null, "disk_gb": null, "numa_topology": null}, "nova_object.namespace": "nova"}, "availability_zone": "nova", "force_nodes": null, "image": {"nova_object.version": "1.8", "nova_object.changes": ["status", "name", "container_format", "created_at", "disk_format", "updated_at", "id", "min_disk", "min_ram", "checksum", "owner", "properties", "size"], "nova_object.name": "ImageMeta", "nova_object.data": {"status": "active", "created_at": "2021-10-05T13:53:07Z", "name": "centos7-05102021", "container_format": "bare", "min_ram": 0, "disk_format": "qcow2", "updated_at": "2021-10-05T13:53:25Z", "id": "a04131be-7b16-4a0b-bc38-b77428259ec8", "min_disk": 0, "checksum": "db51c6c23e77ac4cde8617070e640e1d", "owner": "0e760ccde5d24af5a571de40220fbf80", 
"properties": {"nova_object.version": "1.19", "nova_object.changes": ["hw_qemu_guest_agent", "os_type", "os_require_quiesce", "hw_disk_bus"], "nova_object.name": "ImageMetaProps", "nova_object.data": {"hw_qemu_guest_agent": true, "os_type": "linux", "hw_disk_bus": "virtio", "os_require_quiesce": true}, "nova_object.namespace": "nova"}, "size": 4033413120}, "nova_object.namespace": "nova"}, "instance_group": {"nova_object.version": "1.10", "nova_object.changes": ["hosts", "members"], "nova_object.name": "InstanceGroup", "nova_object.data": {"policies": ["soft-anti-affinity"], "project_id": "0e760ccde5d24af5a571de40220fbf80", "user_id": "5134ed3b93284af5ba2f05d7361edf53", "uuid": "2938ef43-9fbc-4f04-aa3b-72144e343558", "deleted": false, "created_at": "2022-03-31T08:52:19Z", "updated_at": null, "hosts": null, "members": null, "deleted_at": null, "id": 67, "name": "topolino"}, "nova_object.namespace": "nova"}, "force_hosts": null, "numa_topology": null, "ignore_hosts": null, "flavor": {"nova_object.version": "1.2", "nova_object.name": "Flavor", "nova_object.data": {"disabled": false, "root_gb": 40, "description": null, "flavorid": "3", "deleted": false, "created_at": "2017-12-20T15:58:13Z", "ephemeral_gb": 0, "updated_at": null, "memory_mb": 4096, "vcpus": 2, "extra_specs": {}, "swap": 0, "rxtx_factor": 1.0, "is_public": true, "deleted_at": null, "vcpu_weight": 0, "name": "m1.medium"}, "nova_object.namespace": "nova"}, "project_id": "0e760ccde5d24af5a571de40220fbf80", "security_groups": {"nova_object.version": "1.1", "nova_object.changes": ["objects"], " nova_object.name": "SecurityGroupList", "nova_object.data": {"objects": [{"nova_object.version": "1.2", "nova_object.changes": ["uuid"], " nova_object.name": "SecurityGroup", "nova_object.data": {"uuid": "4d2fdd79-0f6c-4c26-a87b-a76b5d12901e"}, "nova_object.namespace": "nova"}]}, "nova_object.namespace": "nova"}, "scheduler_hints": {"group": ["2938ef43-9fbc-4f04-aa3b-72144e343558"]}}, "nova_object.namespace": "nova"} 
The following is the spec value of an instance not in server group: {"nova_object.version": "1.8", "nova_object.changes": ["requested_destination", "instance_uuid", "retry", "num_instances", "pci_requests", "limits", "availability_zone", "force_nodes", "image", "instance_group", "force_hosts", "numa_topology", "ignore_hosts", "flavor", "project_id", "security_groups", "scheduler_hints"], "nova_object.name": "RequestSpec", "nova_object.data": {"requested_destination": null, "instance_uuid": "8c0c44c3-c00a-41ec-bf23-a427d99de2b3", "retry": null, "num_instances": 1, "pci_requests": {"nova_object.version": "1.1", "nova_object.changes": ["requests"], "nova_object.name": "InstancePCIRequests", "nova_object.data": {"requests": []}, "nova_object.namespace": "nova"}, "limits": {"nova_object.version": "1.0", "nova_object.changes": ["vcpu", "memory_mb", "disk_gb", "numa_topology"], " nova_object.name": "SchedulerLimits", "nova_object.data": {"vcpu": null, "memory_mb": null, "disk_gb": null, "numa_topology": null}, "nova_object.namespace": "nova"}, "availability_zone": "nova", "force_nodes": null, "image": {"nova_object.version": "1.8", "nova_object.changes": ["min_disk", "status", "min_ram", "properties", "size"], "nova_object.name": "ImageMeta", "nova_object.data": {"status": "active", "min_disk": 0, "min_ram": 0, "properties": {"nova_object.version": "1.19", "nova_object.changes": ["hw_qemu_guest_agent", "os_type", "os_require_quiesce", "hw_disk_bus"], " nova_object.name": "ImageMetaProps", "nova_object.data": {"hw_qemu_guest_agent": true, "os_type": "linux", "hw_disk_bus": "virtio", "os_require_quiesce": true}, "nova_object.namespace": "nova"}, "size": 42949672960}, "nova_object.namespace": "nova"}, "instance_group": null, "force_hosts": null, "numa_topology": null, "ignore_hosts": null, "flavor": {"nova_object.version": "1.2", "nova_object.name": "Flavor", "nova_object.data": {"disabled": false, "root_gb": 40, "description": null, "flavorid": "3", "deleted": false, 
"created_at": "2017-12-20T15:58:13Z", "ephemeral_gb": 0, "updated_at": null, "memory_mb": 4096, "vcpus": 2, "extra_specs": {}, "swap": 0, "rxtx_factor": 1.0, "is_public": true, "deleted_at": null, "vcpu_weight": 0, "id": 12, "name": "m1.medium"}, "nova_object.namespace": "nova"}, "project_id": "0e760ccde5d24af5a571de40220fbf80", "security_groups": {"nova_object.version": "1.1", "nova_object.changes": ["objects"], " nova_object.name": "SecurityGroupList", "nova_object.data": {"objects": [{"nova_object.version": "1.2", "nova_object.changes": ["uuid"], " nova_object.name": "SecurityGroup", "nova_object.data": {"uuid": "4d2fdd79-0f6c-4c26-a87b-a76b5d12901e"}, "nova_object.namespace": "nova"}]}, "nova_object.namespace": "nova"}, "scheduler_hints": {}}, "nova_object.namespace": "nova"} As you can note some keys have not values in second case, for example scheduler_hints. Ignazio Il giorno mer 6 apr 2022 alle ore 23:25 Laurent Dumont < laurentfdumont at gmail.com> ha scritto: > Nice! > > Can you detail the fields you changed? I had a look in the DB for the > request_specs and it's a big old JSON blurb. > > What fields did you change? > > On Wed, Apr 6, 2022, 12:15 PM Ignazio Cassano > wrote: > >> We solved our issue: the spec field in request_specs table was wrong. Non >> live migration works fine >> Ignazio >> >> >> Il Mer 6 Apr 2022, 09:05 Ignazio Cassano ha >> scritto: >> >>> We've 2 cases, those VMs are in the same group, one fails when migration >>> is launched, the other working well, you can see output of nova live >>> migration and spec on DB >>> >>> not working: >>> https://paste.openstack.org/show/b4QfkVHkUpIC97E3aWAx/ >>> working: >>> https://paste.openstack.org/show/busPt39bkfUzQthk1Tcf/ >>> >>> Ignazio >>> >>> Il Mer 6 Apr 2022, 06:53 Laurent Dumont ha >>> scritto: >>> >>>> I cannot easily reproduce, but what does Nova complain about with the >>>> live migration? Any chance you can run it with DEBUG? 
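To make the difference between the two dumps concrete, here is an illustrative Python sketch (not a Nova utility; the field layout is copied from the dumps above, with most fields omitted) of the kind of edit the thread describes: filling in the empty scheduler_hints and instance_group keys of a "no group" spec with the values seen in the "in a group" spec. Editing nova_api.request_specs directly is unsupported and, as this thread shows, getting the spec wrong breaks live migration, so treat this purely as a sketch.

```python
import json

# Abbreviated 'not in a server group' spec, modeled on the second dump
# above (most fields omitted for brevity).
spec_without_group = json.dumps({
    "nova_object.name": "RequestSpec",
    "nova_object.version": "1.8",
    "nova_object.namespace": "nova",
    "nova_object.data": {
        "instance_uuid": "8c0c44c3-c00a-41ec-bf23-a427d99de2b3",
        "instance_group": None,
        "scheduler_hints": {},
    },
})

def add_to_server_group(spec_json, group):
    """Return a new spec JSON whose scheduler_hints and instance_group
    match the shape of the 'in a server group' dump above."""
    spec = json.loads(spec_json)
    data = spec["nova_object.data"]
    data["scheduler_hints"] = {"group": [group["uuid"]]}
    data["instance_group"] = {
        "nova_object.name": "InstanceGroup",
        "nova_object.version": "1.10",
        "nova_object.namespace": "nova",
        "nova_object.changes": ["hosts", "members"],
        # hosts/members stay null, as in the dump above.
        "nova_object.data": dict(group, hosts=None, members=None,
                                 deleted=False, deleted_at=None,
                                 updated_at=None),
    }
    return json.dumps(spec)

# Values taken from the 'topolino' group in the first dump above.
group = {
    "uuid": "2938ef43-9fbc-4f04-aa3b-72144e343558",
    "id": 67,
    "name": "topolino",
    "policies": ["soft-anti-affinity"],
    "project_id": "0e760ccde5d24af5a571de40220fbf80",
    "user_id": "5134ed3b93284af5ba2f05d7361edf53",
    "created_at": "2022-03-31T08:52:19Z",
}
new_spec = add_to_server_group(spec_without_group, group)
```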
>>>> >>>> On Wed, Apr 6, 2022 at 12:34 AM Ignazio Cassano < >>>> ignaziocassano at gmail.com> wrote: >>>> >>>>> Thanks Laurent. Sometimes the trick works and instances can migrate. >>>>> We do not understand what is wrong when instances fail to migrate. >>>>> We are usung soft-anti-affinity policy. >>>>> The spec field we inserted in both cases seeems the same. >>>>> Ignazio. >>>>> >>>>> Il Mer 6 Apr 2022, 01:03 Laurent Dumont ha >>>>> scritto: >>>>> >>>>>> I'm trying to find where else this was discussed, but afaik, this was >>>>>> never supported. >>>>>> >>>>>> I am not sure if someone was able to "hack" it's way to a working >>>>>> setup. It's a bit of a shame because it makes server-groups really not >>>>>> flexible :( >>>>>> >>>>>> On Tue, Apr 5, 2022 at 3:04 PM Ignazio Cassano < >>>>>> ignaziocassano at gmail.com> wrote: >>>>>> >>>>>>> Hello, we noted that instances can be inserted in server groups only >>>>>>> at instance creation step but we need to insert in a server group some old >>>>>>> instances. >>>>>>> >>>>>>> We tried to modify database nova_api server group tables but we >>>>>>> noted that we must modify spec in request_specs table . For us is not >>>>>>> clear how to modify the spec value. >>>>>>> We tried to investigate looking at instances inserted in a server >>>>>>> group at creation step and we got issues in instance live migration. >>>>>>> Please, anyone could provide any utility to do it or any template ? >>>>>>> Thanks >>>>>>> Ignazio >>>>>>> >>>>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Fri Apr 8 11:33:28 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 8 Apr 2022 11:33:28 +0000 Subject: I sent e-mail to this mailing list but got reply from support@paribet.ru! Why!?!??? 
In-Reply-To: <226576923.105955.1649395193421.JavaMail.root@mailwas2> References: <226576923.105955.1649395193421.JavaMail.root@mailwas2> Message-ID: <20220408113328.qgxvtbrdihxziutb@yuggoth.org> On 2022-04-08 14:19:53 +0900 (+0900), 박경원 wrote: > I've just sent my question email to openstack-discuss at lists.openstack.org > (title: [Question] Do I must separate management network and overlay network?) > > Then I immediately got this reply below, from paribet.ru > > It seems like paribet.ru is a Russian sports-related website > > Why did I get this reply from support at paribet.ru ???? Someone has been maliciously subscribing that address to many of our mailing lists, and it auto-responds to posts people are making. I've cleaned it up again just now, but will look into ways to block future subscriptions for it. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From elod.illes at est.tech Fri Apr 8 18:36:29 2022 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Fri, 8 Apr 2022 20:36:29 +0200 Subject: [release] Release countdown for week R-25, Apr 11 - 15 Message-ID: <9e92b9f3-98ff-f63a-7f5c-8151018df803@est.tech> Hi, Welcome back to the release countdown emails! These will be sent at major points in the Zed development cycle, which should conclude with a final release on October 5th, 2022. Development Focus ----------------- At this stage in the release cycle, focus should be on planning the Zed development cycle, assessing Zed community goals and approving Zed specs. General Information ------------------- Zed is a 27-week-long development cycle. In case you haven't seen it yet, please take a look over the schedule for this release: https://releases.openstack.org/zed/schedule.html By default, the team PTL is responsible for handling the release cycle and approving release requests. 
This task can (and probably should) be delegated to release liaisons. Now is a good time to review release liaison information for your team and make sure it is up to date: https://opendev.org/openstack/releases/src/branch/master/data/release_liaisons.yaml By default, all your team deliverables from the Yoga release are continued in Zed with a similar release model. Upcoming Deadlines & Dates -------------------------- Zed-1 milestone: May 19th, 2022 OpenInfra Summit: June 7-9, 2022 (Berlin) Előd Illés irc: elodilles -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Sat Apr 9 04:36:28 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 08 Apr 2022 23:36:28 -0500 Subject: [tc][all][ Zed Virtual PTG RBAC discussions Summary Message-ID: <1800c9bf145.d19b5ce2503791.7304546509368773732@ghanshyammann.com> Hello Everyone, I tried to attend the RBAC-related sessions of various projects[i], but I am sure I missed a few of them. I am summarizing the RBAC discussions here: the open questions from the project side and what we discussed in the TC PTG. Feel free to append the discussion you had in your project or any query you want the TC to solve. Current status: ------------------ * I have started this etherpad[ii] to track the status of this goal; please keep it up to date as you progress the work in your project. Open question: ------------------ 1. The heat create_stack API calls APIs of mixed scopes (for example, create flavor and create server). What is the best scope for the heat API so that we do not have any security leak? We have not concluded on a solution yet, as we need the heat team to also join the discussion and agree on it. 
But we have a few possible solutions listed below: ** Heat accepts stack API with system scope *** This means a stack with system resources would require a system admin role => Need to check with services relying on Heat ** Heat assigns a project-scope role to the requester during a processing stack operation and uses this project scope credential to manage project resources ** Heat starts accepting the new header accepting the extra token (say SYSTEM_TOKEN) and uses that to create/interact the system-level resource like create flavor. 2. How to isolate the host level attribute in GET APIs? (cinder and manila have the same issue). Cinder GET volume API response has the host information. One possible solution we discussed is to have a separate API to show the host information to the system user and the rest of the volume response to the project users only. This is similar to what we have in nova. Then we have a few questions from the Tacker side, where tacker create_vnf API internally call heat create_stack and they are planning to make create_vnf API for non-admin users. Direction on enabling the enforce scope by default ------------------------------------------------------------ As keystone, nova, and neutron are ready with the new RBAC, we wanted to enable the scope checks by default. But after seeing the lack of integration testing and the above mentioned open question (especially heat and any deployment project breaking) we decided to hold it. As the first step, we will migrate the tempest tests to the new RBAC and will enable the scope for these services in devstack. And based on the testing results we will decide on it. But after seeing the amount of work needed in Tempest and on the open question, I do not think we will be able to do it in the Zed cycle. Instead, we will target to enable the 'new defaults' by default. We ran out of time in TC and will continue the discuss these in policy popup meetings. I will push the schedule to Ml. 
[i] https://etherpad.opendev.org/p/rbac-zed-ptg [ii] https://etherpad.opendev.org/p/rbac-goal-tracking -gmann From gmann at ghanshyammann.com Sat Apr 9 04:54:08 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 08 Apr 2022 23:54:08 -0500 Subject: [tc][all][ Technical Committee Zed Virtual PTG discussions details Message-ID: <1800cac2007.de06c145503857.689734927006589137@ghanshyammann.com> Hello Everyone, I am writing the Technical Committee discussion details from the Zed cycle PTG this week. We had a lot of discussions and that is why it is a long email but I tried to categorize the details per topic so that you can easily spot the discussion you are interested in. TC + Community leader's interaction ============================ We continue the interaction session in this PTG also. The main idea here is to interact with community leaders and ask for their feedback on TC. I am happy to see more attendance this time. Below are the topics we discussed in this feedback session. * Updates from TC: ** Status on previous PTG interaction sessions' Action items/feedback: *** TC to do more technical guides for the project: DONE We have a unified limit as the first technical guide to start in this[1]. *** TC to define a concrete checklist to check goal readiness before we select a goal: DONE We decoupled the community-wide goal from the release cycle[2] and also added the goal readiness checklist[3]. *** Spread the word: DONE We have updated a new page in project-team-guide where projects can spread the word/story[4] ** Do not do ignorant Recheck: Recheck without seeing the failure even those are not related to the proposed change is very costly and end up consuming a lot of infra resources. Checking the failure and adding comments in why you are doing a recheck will give related projects a chance to know the frequent failures happening in upstream CI and so does we can spend more time fixing them. 
Dan Smith has added detailed documentation on "How to Handle Test Failures"[5], this can be used to educate the contributors on recheck and how to debug the failure reason. *** Action Item: **** PTL to start spreading the word and monitor ignorant recheck in their weekly meeting etc. **** On ignorant recheck, Zuul to add a comment with the link to recheck practice documentation. ** Spread the word/project outreach: This was brought up again in this PTG but I forget to update that we did work on this in the Yoga cycle and updated the project-team-guide to list the places where the project can spread the word/story[4]. It is ready for projects to refer it and know where they can post their stories/updates. * Recognition to the new contributors and encourage/help them to continue contributing to OpenStack: When there is a new contributor who contributes a significant amount of work to any project then we should have some way to appreciate them. It can be an appreciation certificate/badge from the foundation or TC, appreciation on social media, especially on linked-in. In TC, we will check what is the best possible and less costly way to do so. We checked in cross-community session with k8s and they seem to have coupon/badge distribution from CNCF foundation. 2021 user survey analysis =================== Jay has prepared the summary of the 2021 TC survey[6] and we covered it at a high level, especially the big changes from the previous survey. As the next step, we will review the proposed summary and merge it. Also, we talked about updating the upgrade question when we start the tick/tock release. Prepare to migrate for SQLAlchemy 2.0 ============================== sqlalchemy 2.0 is under development and might have many incompatible changes. To prepare OpenStack in advance, Stephen sent the mail on what are changes and project needs to do[7], also gave the project status. oslo.db is all good to use the sqlalchemy 2.0 and the required neutron change is also merged. 
Thanks to Stephen for driving it and taking care of various project work. Elections Analysis: ============== Project elections are not going well and we end up with a lot of missing nominations. This cycle, we had 17 projects on the leaderless list and out of it, 16 missed the nomination. There are various factors that lead to missing nomination, short notice from the election (1-2 weeks), PTLs are not active on ML, Language, Time Zone etc. By seeing the number of leaderless projects end up having the new PTLs (this cycle it was just 1 project) and having such projects repeating the nomination miss, we thought election in those projects are not actually required. TC agrees to change the leader assignment for such a project and instead of finding PTL, we will propose to move these projects to the DPL model. WIth the DPL model, we do not need to repeat the PTL assignment in every cycle. Please note, moving them to DPL model will not be automatically done instead that will be done when there are required liaisons to take care of DPL responsibility. Community-wide goals Check In & discussion: =================================== * FIPS goal ade_lee gave the current status of this FIPS work progress in various projects[8]. Current FIPS jobs are based on the centos9 stream which is not so stable but we will keep monitoring it. Canonical has suggested a way to set up FIPS jobs on ubuntu and will give the foundation keys to do FIPS testing and periodically rotate them. One challenge is the horizon which has the dependencies on Django fix and that will be present in Django higher version (4.0 or later) and it is difficult for the horizon to upgrade to that, especially the current bandwidth team has. He also proposed the different milestones to finish the FIPS work[9]. The Technical Committee agreed to have this as a 'selected goal' and as the next step ade_lee will propose it in governance with defined milestone and we will review it. 
Thanks, ade_lee, for all your work and energy on this.

* RBAC goal
I have summarised it in a separate email, please refer to it - http://lists.openstack.org/pipermail/openstack-discuss/2022-April/028103.html

Release Cadence Adjustment:
=======================
In the TC and leaders' interaction sessions, Dan presented an overview of the new release cadence and explained the things which will change. After that, projects followed the discussion in their project PTG sessions and then brought their questions to the Thursday TC session. There was a question about whether we can remove things (which were deprecated in a tick release) in a tock release, and the answer is yes. We discussed the automation of deprecation/upgrade release notes from tock release to tick release; it is fine, but we need to make sure we do not make tick release notes so big that people start ignoring them. We will see how it goes in a few of the initial tick release notes. Below are the various other things which need changes:

* ACTION:
** Deprecation document in project-team-guide[10]:
*** Add the master deployment case as a recommendation
*** Intermediate release upgrade case
*** Update the 4-a paragraph to clarify the 12 months case and that things deprecated in a tick release can be removed in the next tock release.
** Testing runtime
*** In a tock release, it is OK to add/test a new Python version, but do not upgrade the distro version.
*** In a tick release: no change; keep doing what we do currently.
** Stable branch
*** Update the release and stable teams with what is proposed in https://governance.openstack.org/tc/resolutions/20220210-release-cadence-adjustment.html#example-sequence
** tick-tock name:
*** gmann to check with the foundation about a trademark check on the "tick", "tock" names

Remove release naming, instead just use the number
=======================================
In the Yoga cycle, we changed the release naming convention to also add the number, in the <year>.<release count> format[11].
But release names are less fun nowadays and more a source of conflict or objection from the community. Starting from Ussuri, I have seen objections to many release names, the most recent being 'Zed'. Being one of the release name process coordinators for the last couple of releases, I feel that even while doing a good job I have to accept the blame/objections. Also, the process of selecting a release name has its own cost in terms of TC and community time, as well as the cost of the legal check by the foundation. Having only the release number will help people understand how old a release is, and also helps in understanding the tick-tock releases. The drawback of removing the release name is that it requires changes in release tooling and will be less helpful for marketing. The former is a one-time change, and the latter can be adjusted by having some tag lines. For example, "'OpenStack 2022.2': The integration of your effort". Considering all the facts, the TC agreed to start the release-name drop process in Gerrit, and the community can add their feedback on that.

Gate Health and Checks
===================
This is something the TC started monitoring to help with gate health, and it is going in a good direction. The TC will continue doing it in every weekly meeting. We encourage every project to keep monitoring their rechecks, and the TC will also keep an eye on those when we notice a large number of rechecks without comment.

* Infra resource checks:
The next thing we checked was whether we have enough infra resources for smooth development of OpenStack in the Zed cycle. Clark and fungi gave updates on those, and we are in a good position, at least not in a panic situation. But to monitor the required infra resources correctly, the TC will work on listing the required and good-to-have services we need for OpenStack development. That way we will be able to judge the infra resource situation in a better way.
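The "large number of rechecks without comment" check mentioned above could be approximated with a trivial filter. This is only an illustrative toy (real monitoring would query Gerrit/Zuul, and `bare_rechecks` is a hypothetical helper name, not existing tooling):

```python
import re

def bare_rechecks(comments):
    """Return comments that say only 'recheck' with no failure analysis.

    Toy heuristic: a 'bare' recheck is a comment whose entire text is the
    word 'recheck' (case-insensitive), optionally followed by punctuation.
    """
    pattern = re.compile(r"^\s*recheck\W*\s*$", re.IGNORECASE)
    return [c for c in comments if pattern.match(c)]

comments = [
    "recheck",
    "recheck: tempest timeout, see bug 1967430",
    "RECHECK",
]
flagged = bare_rechecks(comments)  # only the first and last qualify
```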
* ELK service:
The new dashboard for the ELK service is ready to use; you can find the dashboard and credential information in this review[12]. We encourage community members to start using it and provide feedback. Accordingly, Clark plans to stop the older ELK servers in a month or so. Thanks dpawlik for working on it.

* nested-virt capable instances for testing
We discussed it; they are available and can be requested from the job definition, but there is always a risk of slow jobs and timing failures, so it is preferred to use those in the project gate only and not in the integrated/cross-project gate.

Testing strategy:
=============
* Direction on the lower constraints
As lower constraints are not well tested and keep causing issues in upstream CI, we discussed them again to find a permanent solution. We discussed both cases: 1. keeping lower bounds in requirements.txt files, and 2. the lower-constraints.txt file and its testing. We agreed to keep the 1st one but drop the 2nd one completely. Below are the agreement and action items:

** AGREE/ACTION ITEM:
*** Write up how downstream distros can test their constraint files against OpenStack upstream.
*** Keep the lower bounds in requirements.txt but add a line saying that they are best effort and not tested. If they are wrong, then file a bug or fix them.
*** Drop the lower-constraints.txt file, the l-c tox env, and its testing on master as well as on all stable branches. The TC will add the resolution and communicate it on the ML too.

* Release-specific job template for testing runtime:
As brought up on the ML[13], the release-specific template does not actually work for repos on the independent release model, where the OpenStack bot does not update the template. Stephen added a new generic template to use in such repos. As we have seen, these release-specific templates add an extra step to update in every repo on a new release. Having a generic template and handling the Python version jobs with a branch variant[14] will work fine.
But the generic template will move all the projects to a new testing runtime at the same time from a central place. We agreed to do it, but with proper communication to all the projects and some window to test/fix them before we switch to new testing versions.

* Performance testing/monitoring
Performance testing is good to do, but given our current CI resources it is very difficult. We discussed a few ideas for keeping an eye on the performance aspect:
** At the start, we can monitor the memory footprint via performance stats in CI jobs.
** Before-and-after DB query counting (how does this patch affect the number of DB queries?).
** For API performance, we can route all the API requests via a single behind-TLS proxy gateway and collect the stats.

* When to remove the deprecated devstack-gate?
We agreed to remove it once stable/wallaby is EM, and will note the same in the README file along with the expected date for stable/wallaby to reach EM (2022-10-14).

Improvement in project governance (continued from the Yoga PTG....)
================================================
This is about how we can better keep an eye on less active or inactive projects and detect them earlier in the cycle, to save the release and infra resources. In the Yoga release, we faced the issue of a few projects being broken at the last date of the final release. We discussed it in the last PTG and agreed to start the 'tech-preview' idea. To define the entry and exit criteria for any project to be in 'tech-preview', we need input from the release team on stopping the auto release of projects and so on. The release team was busy in their own PTG at the same time we had this discussion, so I will continue the discussion in the TC weekly meetings.
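The "before and after DB query counting" idea from the performance list above can be sketched with a toy counter standing in for a real DB session hook (`QueryCounter` and `count_queries` are made-up names for illustration, not OpenStack APIs):

```python
from contextlib import contextmanager

class QueryCounter:
    """Toy stand-in for a DB session hook that counts executed queries."""
    def __init__(self):
        self.count = 0

    def execute(self, statement):
        self.count += 1          # a real session would run the query here
        return statement

@contextmanager
def count_queries(session):
    """Report how many queries run inside the block (before/after delta)."""
    start = session.count
    stats = {}
    try:
        yield stats
    finally:
        stats["queries"] = session.count - start

session = QueryCounter()
with count_queries(session) as stats:
    session.execute("SELECT * FROM instances")
    session.execute("SELECT * FROM flavors")
```

A CI job could assert that a patch does not increase the per-request delta, which is cheaper than full performance testing.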
Current state of translations in OpenStack ================================ Ianychoi and Seongsoocho attended this session and gave us a detailed picture of the translation work, what is the current state and what all work is required to do including migration from Zanata to weblate. The main issue here is we need more people helping in this work. We also asked who uses those translations and they should come up to help. As the next step, we agreed to add a new question to the user survey ("Do you use i18n translations in your deployments? If yes please tell us which languages you use") and get some stats about translation usage. Meanwhile, Ian and seongsoocho team will continue maintaining. Thanks to Ian and Seongsoocho for their best effort to maintain it. Cross community sessions with the k8s steering committee team: ================================================= k8s steering committee members (Paris, Dims, Bob, and Tim) joined the OpenStack Technical Committee in PTG. We discussed the various topics about Legal support/coverage for contributors, especially in the security disclosure process and export control. We asked if k8s also have the same level of language translation support in code/doc as that we have in OpenStack and they have only for the documentation which is also if anyone proposes to do. Then we discussed the long-term sustainability efforts, especially for the experience contributors who can take higher responsibility as well as train the new contributors. This is an issue in both communities and none of us have any solution to this. In the last, we discussed how k8s recognize the contributor's good work. In k8s along with appreciating on slack, ML, they issue the coupon and badges from the CNCF foundation. TC Zed Activities checks and planning ============================= This is the last hour of the PTG and we started with the yoga cycle retrospective. 
* Yoga Retrospective We are doing good in gate health monitoring as well doing more technical work. On the improvement side, we need to be faster at making the decision on things that are in discussion for long period instead of keeping them open. * Pop Up Team Check In After checking with the status and need of both active popup teams, we decided to continue with both in Zed cycle. * TC weekly Meeting time check We will keep the current time until daylight saving time changes again. Also, we will continue the video call once a month. * TC liaison continue or drop? As we have improved the interaction with the project with the weekly meetings as well as in PTG sessions (TC+Leaders interaction session), we agreed to drop the TC liaisons completely. * I will prepare the Zed cycle TC Tracker (activities we will do in Zed cycle) * Next week's TC meeting is cancelled and we will resume meeting from 21st April onwards. That is all from PTG, thanks for reading it and stay safe! [1] https://docs.openstack.org/project-team-guide/technical-guides/unified-limits.html [2] https://review.opendev.org/c/openstack/governance/+/816387 [3] https://review.opendev.org/c/openstack/governance/+/835102 [4] https://docs.openstack.org/project-team-guide/spread-the-word.html [5] https://docs.openstack.org/project-team-guide/testing.html#how-to-handle-test-failures [6] https://review.opendev.org/c/openstack/governance/+/836888 [7] http://lists.openstack.org/pipermail/openstack-discuss/2021-August/024122.html [8] https://etherpad.opendev.org/p/qa-zed-ptg-fips [9] https://etherpad.opendev.org/p/zed-ptg-fips-goal [10] https://docs.openstack.org/project-team-guide/deprecation.html [11] https://review.opendev.org/c/openstack/governance/+/829563 [12] https://review.opendev.org/c/openstack/governance-sigs/+/835838 [13] http://lists.openstack.org/pipermail/openstack-discuss/2022-March/027676.html [14] https://review.opendev.org/c/openstack/openstack-zuul-jobs/+/833892 From oliver.weinmann at me.com 
Sat Apr 9 07:30:42 2022 From: oliver.weinmann at me.com (Oliver Weinmann) Date: Sat, 9 Apr 2022 09:30:42 +0200 Subject: Magnum not working / kube minions not spawning In-Reply-To: <931C5465-F7E8-430F-97E0-1FB335509E00@me.com> References: <931C5465-F7E8-430F-97E0-1FB335509E00@me.com> Message-ID: <3AA82BB6-0D43-4BC0-92DF-30C1BA43611E@me.com> Hi, Problem / Mystery solved. I checked my documentation once again and noticed that I used Fedora CoreOS 34.x the last time. Once I used the very same version, it worked fine. So it seems that something has changed in Fedora CoreOS 35. I couldn't find any hints on which version is supported with Magnum, and the latest Magnum doc just refers to "latest". Still, it doesn't explicitly say do not use version 35. Is there anything known about this? Best regards, Oliver Sent from my iPhone > On 08.04.2022 at 09:21, Oliver Weinmann wrote: > > Hi, > > I recently deployed OpenStack Wallaby using kolla-ansible and also deployed Magnum. I know it was working fine a while ago and I was able to spin up K8s clusters without a problem. But I think this was on Ussuri back then. I went through the Magnum troubleshooting guide but couldn't solve my problem. Magnum spins up the master node and I can log in via SSH using its floating IP.
I checked the logs and saw this after waiting for a few minutes: > > role.kubernetes.io/master="" > + echo 'Trying to label master node with node-role.kubernetes.io/master=""' > + sleep 5s > ++ kubectl get --raw=/healthz > Error from server (InternalError): an error on the server ("[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[-]poststarthook/crd-informer-synced failed: reason withheld\n[-]poststarthook/bootstrap-controller failed: reason withheld\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[+]poststarthook/apiserver/bootstrap-system-flowcontrol-configuration ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[-]autoregister-completion failed: reason withheld\n[+]poststarthook/apiservice-openapi-controller ok\nhealthz check failed") has prevented the request from succeeding > + '[' ok = '' ']' > + echo 'Trying to label master node with node-role.kubernetes.io/master=""' > + sleep 5s > Trying to label master node with node-role.kubernetes.io/master="" > ++ kubectl get --raw=/healthz > + '[' ok = ok ']' > + kubectl patch node k8s-test-small-cal-zwe5xmigugwj-master-0 --patch '{"metadata": {"labels": {"node-role.kubernetes.io/master": ""}}}' > Error from server (NotFound): nodes "k8s-test-small-cal-zwe5xmigugwj-master-0" not found > + echo 'Trying to label master node with node-role.kubernetes.io/master=""' > > Running kubectl get nodes is just empty even when appending all-namespaces. 
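The loop in that log is the heat agent polling the API server's /healthz endpoint until it returns "ok" before labelling the master node. A generic sketch of that retry pattern, with a placeholder health command standing in for `kubectl get --raw=/healthz`:

```shell
#!/bin/sh
# Retry a health check until it prints "ok" (placeholder for
# `kubectl get --raw=/healthz`), then let the caller proceed.
wait_until_healthy() {
    health_cmd="$1"   # command that prints "ok" when healthy
    retries="$2"
    i=0
    while [ "$i" -lt "$retries" ]; do
        status=$($health_cmd 2>/dev/null)
        if [ "$status" = "ok" ]; then
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    return 1
}

# Demo with a stub health command that is immediately healthy.
wait_until_healthy "echo ok" 5 && result="healthy" || result="timed out"
```

In the failing cluster above, the healthz poll eventually passes but the node object never registers, which is why `kubectl patch node` then fails with NotFound.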
I pretty much used the documentation that I created when I was using Ussuri. I wonder what has changed since then that would make this fail. > > I googled for hours but was not able to find similar issues and if then it was about having different version of k8s server and client. Which is definitely not the case. I also tried this on Xena but it also fails. > > I do have the feeling that the issue is network related but I do not see any issues at all spinning up instances and also the communication between instances works fine. > > Here are my current configs: > > Globals.yml > > [vagrant at seed ~]$ grep ^[^#] /etc/kolla/globals.yml > --- > kolla_base_distro: "centos" > kolla_install_type: "source" > openstack_release: "wallaby" > kolla_internal_vip_address: "192.168.45.222" > kolla_external_vip_address: "192.168.2.222" > network_interface: "eth2" > neutron_external_interface: "eth1" > keepalived_virtual_router_id: "222" > enable_haproxy: "yes" > enable_magnum: ?yes? > > > > multinode hosts file > > control[01:03] ansible_user=vagrant ansible_password=vagrant ansible_become=true api_interface=eth3 > compute[01:02] ansible_user=vagrant ansible_password=vagrant ansible_become=true api_interface=eth3 > > [control] > # These hostname must be resolvable from your deployment host > control[01:03] > > # The above can also be specified as follows: > #control[01:03] ansible_user=vagrant ansible_password=vagrant ansible_become=true > #compute[01:02] ansible_user=vagrant ansible_password=vagrant ansible_become=true > > # The network nodes are where your l3-agent and loadbalancers will run > # This can be the same as a host in the control group > [network] > control[01:03] > #network01 > #network02 > > [compute] > compute[01:02] > > [monitoring] > control[01:03] > #monitoring01 > > # When compute nodes and control nodes use different interfaces, > # you need to comment out "api_interface" and other interfaces from the globals.yml > # and specify like below: > #compute01 
neutron_external_interface=eth0 api_interface=em1 storage_interface=em1 tunnel_interface=em1 > > [storage] > control[01:03] > #storage01 > > [deployment] > localhost ansible_connection=local > > cat /etc/kolla/config/magnum.conf > > [trust] > cluster_user_trust = True > > Sorry for the formatting. Sending this on a smartphone with plenty of copy and paste. > > Best Regards, > Oliver -------------- next part -------------- An HTML attachment was scrubbed... URL: From JOliveira at itcservicios.com Fri Apr 8 21:27:57 2022 From: JOliveira at itcservicios.com (Joao Oliveira) Date: Fri, 8 Apr 2022 21:27:57 +0000 Subject: Resize instance vm Linux error Message-ID: How I can Resize a Instance in the dashboard ?? [cid:image004.png at 01D84B6B.A5B5F960] I try this and it is not working. Any sugestions ?? Thanks Jo?o de Deus Oliveira Ingeniero de Infraestructura [Descripci?n: Descripci?n: Descripci?n: Logo ITC Servicios] Inform?tica, Tecnolog?a & Comunicaciones Edificio ITC Tower Av. Las Ramblas #100, Torre B Barrio Equipetrol Norte Santa Cruz - Bolivia Tel: +(591) 3 344-4424 Ext. 4368 Fax: +(591) 3-344-4433 M?vil: +(591) 67011038 E-mail: joliveira at itcservicios.com Web: www.itc-e.com AVISO DE CONFIDENCIALIDAD Y PRIVACIDAD: El uso de la informaci?n transmitida en este correo electr?nico est? limitado a la persona a la cual va dirigido. El correo puede contener informaci?n privada, privilegiada, confidencial o exenta de revelaci?n bajo las leyes aplicables. Si usted no es el destinatario pretendido o sospecha que el mensaje le hubiera sido enviado sin la debida autorizaci?n, queda avisado que est? estrictamente prohibido cualquier uso, diseminaci?n o copia de esta informaci?n. Si usted ha recibido este mensaje por equivocaci?n le pedimos notificarnos a vuelta de correo y borrar el mensaje. 
CONFIDENTIALITY AND PRIVACY NOTICE: This email is intended solely for the use of the individual to whom it is addressed and may contain information that is privileged, confidential or otherwise exempt from disclosure under applicable law. If you are not the intended recipient or it appears that this mail has been forwarded to you without proper authority, you are hereby notified that any dissemination, distribution, or copying of this communication is strictly prohibited. If you have received this communication in error, please notify us via reply email and delete this message. De: Abhishek Kekane Enviado el: jueves, 7 de abril de 2022 14:33 Para: Anurag Singh Rajawat CC: OpenStack Discuss Asunto: Re: [glance] Outreachy 2022 Hi Anurag, Sorry that we were not able to address you on IRC. Currently glance team is busy in PTG which will end tomorrow evening and that is why we might have missed your ping on IRC. I would suggest to share your failures so that we can guide you. I think from Monday onwards everyone will be back to their daily routine so outreachy glance team will help you to resolve your queries. Meanwhile if it is urgent then you can share your doubts and I will try my best to resolve them. Thanks and Regards, Abhishek On Thu, 7 Apr, 2022, 22:30 Anurag Singh Rajawat, > wrote: Dear glance team, I'd setup glance, glance-store and glance-client on my local setup, but some tests for glance were failing, also is there any good first issues so that I can understand the project more clearly? I also asked about it in IRC but doesn't got response. Thanks Sincerely Anurag -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 4650 bytes Desc: image002.png URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image003.png Type: image/png Size: 132 bytes Desc: image003.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.png Type: image/png Size: 97732 bytes Desc: image004.png URL: From rezabojnordi2012 at gmail.com Sat Apr 9 20:08:42 2022 From: rezabojnordi2012 at gmail.com (Reza Bojnordi) Date: Sun, 10 Apr 2022 00:38:42 +0430 Subject: error Message-ID: Hi, sorry I have problem on Openstack tcd3gw': No module named 'etcd3gw': ModuleNotFoundError: No module named 'etcd3gw' Apr 09 20:09:34 infra1-nova-api-container-483c43d7 nova-metadata-wsgi[2912]: 2022-04-09 20:09:34.836 2912 ERROR stevedore.extension [req-089ca62f-02ee-49f9-a305-75f426102ebc - - - - -] Could not load 'oslo_cache.e tcd3gw': No module named 'etcd3gw': ModuleNotFoundError: No module named 'etcd3gw' -- reza bojnordi about.me/rbojnordi -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonyliu0592 at hotmail.com Sun Apr 10 02:28:00 2022 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Sun, 10 Apr 2022 02:28:00 +0000 Subject: Xena and CEPH RBD backend (show_image_direct_url status ) In-Reply-To: References: Message-ID: Hi Erno, I have a Xena setup with Ceph. When create a snapshot of an image, it is a full copy. When create a volume from an image, it is an incremental copy. show_multiple_locations is true. show_image_direct_url doesn't seem having effect, true or false, the same result. With Ussuri, both of above 2 creations are incremental copy. Is there any way we can do incremental snapshot for image? Thanks! Tony ________________________________________ From: Erno Kuvaja Sent: March 16, 2022 06:33 AM To: west, andrew Cc: openstack-discuss at lists.openstack.org Subject: Re: Xena and CEPH RBD backend (show_image_direct_url status ) On Thu, Feb 24, 2022 at 2:37 PM west, andrew > wrote: Hello experts Currently using openstack Xena and Ceph backend (Pacific 16.2.7) It seems there is a bug (since Wallaby?) 
where the efficient use of a CEPH Pacific RBD backend (i.e with copy-on-write-cloning) is not working . Show_image_direct_url needs to be False to create volumes (or ephemeral volumes for nova) This can of course be tremendously slow (Nova , ephemeral root disk) without copy-on-write cloning feature of Ceph. As Ceph RBD is THE most favourite backend for block storage in openstack I am wondering how others are coping (or workarounds found ?) Which combinations of Openstack and Ceph are known to work well with copy-on-write-cloning? How is the noted GRAVE Security RISK of enabling Show_image_direct_url mitigated ? (i.e I think , for CEPH RBD, it needs to be True to get cloning to work efficiently) See another report of this issue here: Re: Ceph Pacifif and Openstack Wallaby - ERROR cinder.scheduler.flows.create_volume ? CEPH Filesystem Users (spinics.net) Thanks for any help or pointers, Andrew West Openstack consulting CGG France ________________________________ ?This e-mail and any accompanying attachments are confidential. The information is intended solely for the use of the individual to whom it is addressed. Any review, disclosure, copying, distribution, or use of the email by others is strictly prohibited. If you are not the intended recipient, you must not review, disclose, copy, distribute or use this e-mail; please delete it from your system and notify the sender immediately.? Hi Andrew, Sorry for the delayed reply. I got distracted and forgot after the first time I noticed this. So far I see you only mentioning 'show_image_direct_url' setting but AFAIK also the 'show_multiple_locations' is required for these features to work, is that set true and the issue still persists? 
- jokke From tonyliu0592 at hotmail.com Sun Apr 10 02:39:42 2022 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Sun, 10 Apr 2022 02:39:42 +0000 Subject: Xena and CEPH RBD backend (show_image_direct_url status ) In-Reply-To: References: Message-ID: To clarify, what I did was to create a snapshot of VM based on image. Is it because Nova doesn't get the image location from Glance? Thanks! Tony ________________________________________ From: Tony Liu Sent: April 9, 2022 07:28 PM To: Erno Kuvaja; west, andrew Cc: openstack-discuss at lists.openstack.org Subject: Re: Xena and CEPH RBD backend (show_image_direct_url status ) Hi Erno, I have a Xena setup with Ceph. When create a snapshot of an image, it is a full copy. When create a volume from an image, it is an incremental copy. show_multiple_locations is true. show_image_direct_url doesn't seem having effect, true or false, the same result. With Ussuri, both of above 2 creations are incremental copy. Is there any way we can do incremental snapshot for image? Thanks! Tony ________________________________________ From: Erno Kuvaja Sent: March 16, 2022 06:33 AM To: west, andrew Cc: openstack-discuss at lists.openstack.org Subject: Re: Xena and CEPH RBD backend (show_image_direct_url status ) On Thu, Feb 24, 2022 at 2:37 PM west, andrew > wrote: Hello experts Currently using openstack Xena and Ceph backend (Pacific 16.2.7) It seems there is a bug (since Wallaby?) where the efficient use of a CEPH Pacific RBD backend (i.e with copy-on-write-cloning) is not working . Show_image_direct_url needs to be False to create volumes (or ephemeral volumes for nova) This can of course be tremendously slow (Nova , ephemeral root disk) without copy-on-write cloning feature of Ceph. As Ceph RBD is THE most favourite backend for block storage in openstack I am wondering how others are coping (or workarounds found ?) Which combinations of Openstack and Ceph are known to work well with copy-on-write-cloning? 
How is the noted GRAVE Security RISK of enabling Show_image_direct_url mitigated ? (i.e I think , for CEPH RBD, it needs to be True to get cloning to work efficiently) See another report of this issue here: Re: Ceph Pacifif and Openstack Wallaby - ERROR cinder.scheduler.flows.create_volume ? CEPH Filesystem Users (spinics.net) Thanks for any help or pointers, Andrew West Openstack consulting CGG France ________________________________ ?This e-mail and any accompanying attachments are confidential. The information is intended solely for the use of the individual to whom it is addressed. Any review, disclosure, copying, distribution, or use of the email by others is strictly prohibited. If you are not the intended recipient, you must not review, disclose, copy, distribute or use this e-mail; please delete it from your system and notify the sender immediately.? Hi Andrew, Sorry for the delayed reply. I got distracted and forgot after the first time I noticed this. So far I see you only mentioning 'show_image_direct_url' setting but AFAIK also the 'show_multiple_locations' is required for these features to work, is that set true and the issue still persists? - jokke From massimo.sgaravatto at gmail.com Sun Apr 10 07:08:41 2022 From: massimo.sgaravatto at gmail.com (Massimo Sgaravatto) Date: Sun, 10 Apr 2022 09:08:41 +0200 Subject: Resize instance vm Linux error In-Reply-To: References: Message-ID: Which version of OpenStack ? You might be affected by this bug: https://bugs.launchpad.net/horizon/+bug/1940834 On Sat, Apr 9, 2022 at 11:45 PM Joao Oliveira wrote: > How I can Resize a Instance in the dashboard ?? > > > > I try this and it is not working. > > > > Any sugestions ?? > > > > Thanks > > > > *Jo?o de Deus Oliveira* > Ingeniero de Infraestructura > > *[image: Descripci?n: Descripci?n: Descripci?n: Logo ITC Servicios]* > > *Inform?tica, Tecnolog?a & Comunicaciones* > > > > Edificio ITC Tower > > Av. 
Las Ramblas #100, Torre B > > Barrio Equipetrol Norte > Santa Cruz - Bolivia > > > > Tel: +(591) 3 344-4424 Ext. 4368 > Fax: +(591) 3-344-4433 > > M?vil: +(591) 67011038 > > > E-mail: joliveira at itcservicios.com > Web: www.itc-e.com > > *AVISO DE CONFIDENCIALIDAD Y PRIVACIDAD: *El uso de la informaci?n > transmitida en este correo electr?nico est? limitado a la persona a la cual > va dirigido. El correo puede contener informaci?n privada, privilegiada, > confidencial o exenta de revelaci?n bajo las leyes aplicables. Si usted no > es el destinatario pretendido o sospecha que el mensaje le hubiera sido > enviado sin la debida autorizaci?n, queda avisado que est? estrictamente > prohibido cualquier uso, diseminaci?n o copia de esta informaci?n. Si usted > ha recibido este mensaje por equivocaci?n le pedimos notificarnos a vuelta > de correo y borrar el mensaje. > > *CONFIDENTIALITY AND PRIVACY NOTICE:* This email is intended solely for > the use of the individual to whom it is addressed and may contain > information that is privileged, confidential or otherwise exempt from > disclosure under applicable law. If you are not the intended recipient or > it appears that this mail has been forwarded to you without proper > authority, you are hereby notified that any dissemination, distribution, or > copying of this communication is strictly prohibited. If you have received > this communication in error, please notify us via reply email and delete > this message. > > > > *De:* Abhishek Kekane > *Enviado el:* jueves, 7 de abril de 2022 14:33 > *Para:* Anurag Singh Rajawat > *CC:* OpenStack Discuss > *Asunto:* Re: [glance] Outreachy 2022 > > > > Hi Anurag, > > > > Sorry that we were not able to address you on IRC. > > > > Currently glance team is busy in PTG which will end tomorrow evening and > that is why we might have missed your ping on IRC. > > > > I would suggest to share your failures so that we can guide you. 
I think > from Monday onwards everyone will be back to their daily routine so > outreachy glance team will help you to resolve your queries. > > > > Meanwhile if it is urgent then you can share your doubts and I will try my > best to resolve them. > > > > Thanks and Regards, > > > > Abhishek > > > > On Thu, 7 Apr, 2022, 22:30 Anurag Singh Rajawat, < > anuragsinghrajawat22 at gmail.com> wrote: > > Dear glance team, I'd setup glance, glance-store and glance-client on my > local setup, but some tests for glance were failing, also is there any good > first issues so that I can understand the project more clearly? > > I also asked about it in IRC but doesn't got response. > > > > Thanks > > > > Sincerely > > Anurag > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 4650 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 132 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image004.png Type: image/png Size: 97732 bytes Desc: not available URL: From wodel.youchi at gmail.com Sun Apr 10 09:40:13 2022 From: wodel.youchi at gmail.com (wodel youchi) Date: Sun, 10 Apr 2022 10:40:13 +0100 Subject: [Kolla-ansible][Xena] Test Trove module In-Reply-To: References: Message-ID: Hi, I used a VM machine on the cloud with a good connexion to build the image but, it didn't work for me so far : I get this error while constructing the image : 2022-04-10 08:55:25.497 | *+ install_deb_packages install iscsi-initiator-utils* 2022-04-10 08:55:25.497 | + DEBIAN_FRONTEND=noninteractive 2022-04-10 08:55:25.497 | + http_proxy= 2022-04-10 08:55:25.497 | + https_proxy= 2022-04-10 08:55:25.497 | + no_proxy= 2022-04-10 08:55:25.497 | + apt-get --option Dpkg::Options::=--force-confold --option Dpkg::Options::=--force-confdef --assume-yes install iscsi-initiator-utils 2022-04-10 08:55:25.541 | Reading package lists... 2022-04-10 08:55:25.788 | Building dependency tree... 2022-04-10 08:55:25.788 | Reading state information... 2022-04-10 08:55:25.825 | *E: Unable to locate package iscsi-initiator-utils* 2022-04-10 08:55:25.838 | ++ diskimage_builder/lib/img-functions:run_in_target:59 : check_break after-error run_in_target bash 2022-04-10 08:55:25.843 | ++ diskimage_builder/lib/common-functions:check_break:143 : echo '' 2022-04-10 08:55:25.844 | ++ diskimage_builder/lib/common-functions:check_break:143 : egrep -e '(,|^)after-error(,|$)' -q 2022-04-10 08:55:25.851 | + diskimage_builder/lib/img-functions:run_in_target:1 : trap_cleanup 2022-04-10 08:55:25.855 | + diskimage_builder/lib/img-functions:trap_cleanup:36 I am not an Ubuntu person but I think the package's name is open-iscsi. This is the command I used to build the image : ./trovestack build-image ubuntu bionic true ubuntu /home/stack/trove-xena-guest-ubuntu-bionic-dev.qcow2 My OS is a Centos 8 Stream. you can find the whole log of the operation attached. Thanks in advance. Regards. Le jeu. 7 avr. 2022 ? 
16:02, Clark Boylan wrote: > On Thu, Apr 7, 2022, at 6:38 AM, wodel youchi wrote: > > Hi, > > I found the error, Rocky is not supported for, so I switched to CentOS > > machine. The script starts but I had two problems : > > The trovestack script searches for a package named qemu and don't find > > it, so I modified the script to use qemu* instead of qemu > > > > The second problem is related to the download itself, I have this error > > : > > 2022-04-07 13:18:02.677 | Caching guest-agent from > > https://opendev.org/openstack/trove in /home/deployer/.cache/ > > > image-create/source-repositories/guest_agent_842a440b9b12731c50f3b4042bf842ea7e58467d > > 2022-04-07 13:22:31.299 | error: RPC failed; curl 18 transfer closed > > with outstanding read data remaining > > 2022-04-07 13:22:31.299 | error: 6149 bytes of body are still expected > > 2022-04-07 13:22:31.300 | fetch-pack: unexpected disconnect while > > reading sideband packet > > 2022-04-07 13:22:31.300 | fatal: early EOF > > 2022-04-07 13:22:31.301 | fatal: fetch-pack: invalid index-pack output > > > > Any ideas? > > This is related to cloning and caching the > https://opendev.org/openstack/trove git repo during the image build for > the trove database image. This looks like a network error of some sort with > the connection ending before it was completed. You might want to double > check any proxies or firewalls between you and https://opendev.org. It > may have also been the Internet acting up and trying again would be fine. I > would try again and if it persists start looking at network connectivity > between you and https://opendev.org and take it from there. > > > > > Regards. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: trove-build2.log Type: application/octet-stream Size: 302893 bytes Desc: not available URL: From laurentfdumont at gmail.com Sun Apr 10 14:00:18 2022 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Sun, 10 Apr 2022 10:00:18 -0400 Subject: error In-Reply-To: References: Message-ID: We are going to need more context - What are you trying to do? - What version of Openstack are you running? I found this which is quite similar : https://bugs.launchpad.net/devstack/+bug/1820892 On Sat, Apr 9, 2022 at 5:45 PM Reza Bojnordi wrote: > Hi, sorry I have problem on Openstack > > tcd3gw': No module named 'etcd3gw': ModuleNotFoundError: No module named > 'etcd3gw' > > > Apr 09 20:09:34 infra1-nova-api-container-483c43d7 > nova-metadata-wsgi[2912]: 2022-04-09 20:09:34.836 2912 ERROR > stevedore.extension [req-089ca62f-02ee-49f9-a305-75f426102ebc - - - - -] > Could not load 'oslo_cache.e > tcd3gw': No module named 'etcd3gw': ModuleNotFoundError: No module named > 'etcd3gw' > > -- > > > reza bojnordi > about.me/rbojnordi > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjoen at dds.nl Sun Apr 10 16:17:49 2022 From: tjoen at dds.nl (tjoen) Date: Sun, 10 Apr 2022 18:17:49 +0200 Subject: error In-Reply-To: References: Message-ID: <223175a1-cfa4-7fbc-c7f6-dee8a3b93b15@dds.nl> On 4/10/22 16:00, Laurent Dumont wrote: > We are going to need more context > > - What are you trying to do? > - What version of Openstack are you running? 
> > I found this which is quite similar : > https://bugs.launchpad.net/devstack/+bug/1820892 > On Sat, Apr 9, 2022 at 5:45 PM Reza Bojnordi > wrote: > >> Hi, sorry I have problem on Openstack >> >> tcd3gw': No module named 'etcd3gw': ModuleNotFoundError: No module named >> 'etcd3gw' It is a missing Python dependency; pbr, futurist, requests and six need it From laurentfdumont at gmail.com Sun Apr 10 20:40:02 2022 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Sun, 10 Apr 2022 16:40:02 -0400 Subject: error In-Reply-To: <223175a1-cfa4-7fbc-c7f6-dee8a3b93b15@dds.nl> References: <223175a1-cfa4-7fbc-c7f6-dee8a3b93b15@dds.nl> Message-ID: What OS are the controllers/computes running? I have no first hand experience with openstack-ansible but I do believe it runs lxc containers for the control plane. Missing dependencies inside those is a bit strange. What deployment scenario did you use? Source or distribution packages? https://docs.openstack.org/project-deploy-guide/openstack-ansible/latest/configure.html On Sun, Apr 10, 2022 at 12:21 PM tjoen wrote: > On 4/10/22 16:00, Laurent Dumont wrote: > > We are going to need more context > > > > - What are you trying to do? > > - What version of Openstack are you running? > > > > I found this which is quite similar : > > https://bugs.launchpad.net/devstack/+bug/1820892 > > > On Sat, Apr 9, 2022 at 5:45 PM Reza Bojnordi > > > wrote: > > > >> Hi, sorry I have problem on Openstack > >> > >> tcd3gw': No module named 'etcd3gw': ModuleNotFoundError: No module named > >> 'etcd3gw' > > It is a missing Python dependency; > pbr, futurist, requests and six need it > -------------- next part -------------- An HTML attachment was scrubbed... URL: From akekane at redhat.com Mon Apr 11 06:15:01 2022 From: akekane at redhat.com (Abhishek Kekane) Date: Mon, 11 Apr 2022 11:45:01 +0530 Subject: [Glance] Zed PTG Summary Message-ID: Hi All, We had our fifth virtual PTG from 4th to 8th April 2022.
Thanks to everyone who joined the virtual PTG sessions. Using the BlueJeans app we had lots of discussions around different topics for glance, glance + cinder, fips and Secure RBAC. I have created an etherpad [1] with notes from the sessions, which also includes the recordings of each discussion. Here is a short summary of the discussions. Tuesday, April 5th 2022 # Yoga Retrospective On a positive note, we managed to complete all the work we targeted for the yoga cycle. In addition to that we organized a first-ever glance review party where we managed to perform group reviews which helped us to cover our review load in the final milestone. On the other side we need to reorganize our bi-weekly bugs meeting and also improve our documentation and API references. Recording: https://bluejeans.com/s/kVeDPbu6kX2 - Chapter 1 # Cache API - New API to trigger periodic job In Yoga we have managed to put together new endpoints for cache, this cycle we should add a new API to trigger the periodic job to cache the images. Here we have decided to get rid of the periodic job and add a new API to cache (pre-cache) the specified image(s) instantly rather than waiting for the next periodic run to pre-cache those images. Recordings: https://bluejeans.com/s/kVeDPbu6kX2 - Chapter 2 # Glance Cache improvements, restrict duplicate downloads How we can avoid multiple downloads of the same image into the cache on first download. Final design - https://review.opendev.org/c/openstack/glance-specs/+/734683 Recordings: https://bluejeans.com/s/kVeDPbu6kX2 - Chapter 3 # Distributed responsibilities among cores/team From this cycle we have decided to follow a distributed leadership model internally which will help us to train internal members to take the PTL responsibilities in the upcoming cycle. We have decided to distribute the below responsibilities among ourselves in this cycle.
Release management: pranali/abhishek Bug management: cyril/abhishek Meetings: pranali Stable branch management: jokke Cross project communication: abhishekk Mailing lists: pranali/abhishekk PTG/summit preparation: pranali/abhishekk Vulnerability management: jokke Infra management: abhishekk Recordings: https://bluejeans.com/s/kVeDPbu6kX2 - Chapter 4 # Secure RBAC - System Scope Unfortunately, due to the time crunch we were not able to find answers to a few of our queries in this PTG, so we have decided to attend the Open office hours and Policy popup meetings to get them sorted. As per the community goal we should be enforcing the new RBAC policies from this cycle and support system-admin. Once we get our doubts sorted then we will share more information about the same. Recordings: https://bluejeans.com/s/kVeDPbu6kX2 - Chapter 5 # Policy refactoring - Part 2 In Xena we managed to move all policy checks to the API layer. This cycle we need to work on removing the dead code of the policy and authorization layers. So we are going to ensure that the policy and authorization layers are not used anywhere before removing them from the code base. Recordings: https://bluejeans.com/s/kVeDPbu6kX2 - Chapter 6 Wednesday, April 06th 2022 # Proposal for moving away from onion architecture This is a long term goal and from this cycle we should start doing homework on how we can squash two or more layers together and move away from the onion architecture and obtain a popular and simple MVC architecture in upcoming cycles. This cycle we will be mostly working on finalizing the detailed spec about this work.
It is currently complicated to implement as customers need to save the image locally, then upload it to the new region. We propose to rely mostly on the web-download code of glance to directly download an image from a remote glance, calling this method "glance-download". Note that this first version will require a federated Keystone between all the glance deployments in order to avoid all authentication problems (we will rely on the context token of the target glance to authenticate to the remote glance). Recordings: https://bluejeans.com/s/ZmURX2kOeJy - Chapter 2 # Expanding stores-info detail for other stores In Yoga we added a new API ``stores-detail`` to expose the properties of stores but currently it is only exposing details of the rbd store; we are planning to extend its support to expose properties of other stores as well. Recordings: https://bluejeans.com/s/ZmURX2kOeJy - Chapter 3 # Discussion of property injection coherency between image import and possible implementation in upload Glance does support injecting certain properties for images created by non-admin users by using the inject metadata import plugin, but the same is not possible when we do not use the import workflow and use the traditional way to create the image. This cycle we will be working on supporting injecting the metadata while creating images using the upload workflow.
https://bluejeans.com/s/oiS49m8Pj_o - Chapter 1 # Native Image Encryption Barbican updates: - Microversion is done - Secret consumer will be implemented by milestone 1 - Luzi is interested to work on image encryption - Glance will keep watch and review the work if posted this cycle # Default Glance to configure multiple stores Glance has deprecated the single stores configuration since the Stein cycle and will now start putting effort into deploying glance using multistore by default and then removing single store support from glance. This is likely to take a couple of cycles, so this cycle we are going to migrate internal unit and functional tests to use the multistore config and also going to modify devstack to deploy glance using the multistore configuration for swift and Ceph (for file and cinder it's already supported). We need to notify the respective deployment teams (ansible/tripleo/ceph-admin) about our work and that we are moving away from the single store configuration. Recordings: https://bluejeans.com/s/oiS49m8Pj_o - Chapter 2 # Fips overview Path forward: - add experimental/periodic fips job on centos 8 (it will run on master) - centos 9 dependency fixes (tempest and swift changes) - Once dependencies merge, the experimental/periodic job will be run for centos 9 (enough time to verify that it is stable now) - Once it is stable, move it to check/gate queue - Then backport swift/tempest dependencies to stable branches - Run centos 9 fips job as periodic on stable branches - Once stable move those to gate/check queue for stable branches Recordings: https://bluejeans.com/s/oiS49m8Pj_o - Chapter 3 # Cross project meet with Cinder Discussion 1: New API to expose location information We have OSSN-0065 describing the security risk of enabling the ``show_multiple_locations`` option but this is required for cinder to perform certain optimizations when creating a volume from an image (in case of cinder and RBD store).
The proposal is to create a new admin-only API to provide the location of the image and avoid dependency on the config options. Decided to write a spec describing the current API design for the new locations API (alternative: nova's approach of using an alternative endpoint and service role/token as well) Discussion 2: Clone v2: RBD deferred deletion Recently cinder has utilized Ceph clone v2 support for its RBD backend; since then, if you attempt to delete an image from glance that has a dependent volume, all future uses of that image will fail in an error state, despite the fact that the image itself is still inside of Ceph/Glance. This issue is reproducible if you are using a ceph client version greater than 'luminous'. Decided to fix things on the cinder side and see how we can fix glance using the same techniques (also document it since customers face these issues all the time) Recordings: https://bluejeans.com/s/efAqf0e5RDQ - Chapter 2 Friday, April 08th 2022 # Image Export with metadata Especially if we implement the glance-download discussed on Wednesday it might be worth exploring my old idea of image export, which would bundle the image metadata together with the image payload itself for easier import into the glance environment. This will need bits of work on both sides of the process. The source will need to be able to embed the metadata into the end of the datastream sent to the client and the receiving end will need to understand how to pick up and parse that data. To make transfer of the image easier (especially RAW images from all of our Ceph deployments) the original image payload should be compressed on the fly when sent to the client and the metadata can be added after the compression is closed. This way the image can be brought into older glance deployments too: a deployment that supports the decompression but doesn't know to look at the metadata will simply ignore that part.
Recordings: https://bluejeans.com/s/C7F5pDPR_v4 - Chapter 1 You will find the detailed information about the same in the PTG etherpad [1] along with the recordings of the sessions and milestone-wise priorities at the bottom of the etherpad. Kindly let me know if you have any questions about the same. [1] https://etherpad.opendev.org/p/zed-glance-ptg Thank you, Abhishek -------------- next part -------------- An HTML attachment was scrubbed... URL: From marcin.juszkiewicz at linaro.org Mon Apr 11 07:08:37 2022 From: marcin.juszkiewicz at linaro.org (Marcin Juszkiewicz) Date: Mon, 11 Apr 2022 09:08:37 +0200 Subject: [requirements][kolla][infra] Python versions in u-c Message-ID: OpenStack Zed lists Python 3.8 as the minimal version. Debian 'bullseye' (current stable) has Python 3.9, Ubuntu 20.04 'focal' has 3.8. Ubuntu 22.04 'jammy' has 3.10, CentOS Stream 9 (so RHEL 9, Rocky Linux 9) has 3.9. We ignore anything RHEL 8 based as they have Python 3.6 only and there is no work on getting OpenStack Zed supported there. So there will be 3.8, 3.9, 3.10 used with Zed. Two questions for the requirements team: 1. Are there plans to get rid of 3.6 entries from openstack/requirements/upper-constraints.txt in the near future? 2. Can we get the "python_version=='3.8'" lines changed to ">=" ones? Or, once the 3.6 entries drop, just assume that we have 3.8+ and drop the 'python_version' entries? websocket-client===1.3.1;python_version=='3.6' websocket-client===1.3.2;python_version=='3.8' Asking due to my recent Kolla/infra work on getting CentOS Stream 9 supported in both places. The infra team has a CI job which builds a Python wheel cache to make sure that other CI jobs do not have to do it. Times are nicely cut due to this stuff. There are 55 entries with "python_version=='3.8'". Probably none of them are time critical anymore as upstream projects provide aarch64 wheels most of the time too.
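[Editor's note] The question about collapsing the `python_version=='3.8'` markers comes down to how pip evaluates PEP 508-style environment markers against the running interpreter: an `==` marker matches exactly one Python version, so a 3.9 or 3.10 interpreter matches neither of the two websocket-client lines quoted above and ends up unconstrained. The sketch below is NOT pip's implementation; `select_constraint` is a hypothetical helper covering only the two marker shapes seen in upper-constraints.txt, just to illustrate why dropping the 3.6 entries lets the marker disappear entirely:

```python
# Minimal sketch of marker-based selection for upper-constraints entries.
# Handles only two shapes: an unmarked line, or "pin;python_version=='X.Y'".

def select_constraint(lines, python_version):
    """Return the pin whose marker matches python_version, if any."""
    fallback = None
    for line in lines:
        if ";" not in line:
            fallback = line.strip()  # unmarked line applies to every version
            continue
        pin, marker = (part.strip() for part in line.split(";", 1))
        # marker looks like: python_version=='3.8'
        wanted = marker.split("==", 1)[1].strip().strip("'\"")
        if wanted == python_version:
            return pin
    return fallback

uc = [
    "websocket-client===1.3.1;python_version=='3.6'",
    "websocket-client===1.3.2;python_version=='3.8'",
]

print(select_constraint(uc, "3.6"))   # -> websocket-client===1.3.1
print(select_constraint(uc, "3.8"))   # -> websocket-client===1.3.2
print(select_constraint(uc, "3.9"))   # -> None: 3.9/3.10 get no pin at all
# Once the 3.6 entries are dropped, one unmarked line covers everything:
print(select_constraint(["websocket-client===1.3.2"], "3.10"))
```

The `None` result for 3.9 is the practical problem with the `==` markers: newer interpreters silently fall outside both constraint lines, which is why either `>=` markers or a single unmarked line is the cleaner end state.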
From arne.wiebalck at cern.ch Mon Apr 11 07:11:55 2022 From: arne.wiebalck at cern.ch (Arne Wiebalck) Date: Mon, 11 Apr 2022 09:11:55 +0200 Subject: [baremetal-sig][ironic] Tue Apr 12, 2022, 2pm UTC: "Using OpenStack Ironic for HPC at Berlin Institute of Health" Message-ID: <587b0630-d8f7-e38c-fc14-002e81a31b05@cern.ch> Dear all, The Bare Metal SIG will meet tomorrow Tue Apr 12, 2022, 2pm UTC featuring a topic-of-the-day presentation by Dr. Manuel Holtgrewe: "Bare Metal for Health - Using OpenStack Ironic for HPC at Berlin Institute of Health" Everyone is welcome, all details on how to join can be found on the SIG's etherpad: https://etherpad.opendev.org/p/bare-metal-sig Hope to see you there! Arne -- Arne Wiebalck CERN IT From jonathan.rosser at rd.bbc.co.uk Mon Apr 11 07:30:04 2022 From: jonathan.rosser at rd.bbc.co.uk (Jonathan Rosser) Date: Mon, 11 Apr 2022 08:30:04 +0100 Subject: [openstack-ansible] Re: error In-Reply-To: References: <223175a1-cfa4-7fbc-c7f6-dee8a3b93b15@dds.nl> Message-ID: <4aeaef5e-a9fb-6fb3-0ce0-76990d2e868d@rd.bbc.co.uk> This really is a bug in oslo. To be clear, there is no actual error here, just a spurious log message. The code underneath nova tries to speculatively load the etcd driver and produces an error when it is not available. The etcd driver is not needed if etcd is not being used, which in this case it is not. See the discussion in the comments here https://review.opendev.org/c/openstack/openstack-ansible-os_nova/+/765336 Jonathan. On 10/04/2022 21:40, Laurent Dumont wrote: > What OS are the controllers/computes running? I have no first hand > experience with openstack-ansible but I do believe it runs lxc > containers for the control plane. > > Missing dependencies inside those is a bit strange. > > What deployment scenario did you use? Source or distribution packages?
> https://docs.openstack.org/project-deploy-guide/openstack-ansible/latest/configure.html > > On Sun, Apr 10, 2022 at 12:21 PM tjoen wrote: > > On 4/10/22 16:00, Laurent Dumont wrote: > > We are going to need more context > > > > - What are you trying to do? > > - What version of Openstack are you running? > > > > I found this which is quite similar : > > https://bugs.launchpad.net/devstack/+bug/1820892 > > > On Sat, Apr 9, 2022 at 5:45 PM Reza Bojnordi > > > wrote: > > > >> Hi, sorry I have problem on Openstack > >> > >> tcd3gw': No module named 'etcd3gw': ModuleNotFoundError: No > module named > >> 'etcd3gw' > > It is a missing Python dependency; > pbr, futurist, requests and six need it > -------------- next part -------------- An HTML attachment was scrubbed... URL: From christian.rohmann at inovex.de Mon Apr 11 09:00:53 2022 From: christian.rohmann at inovex.de (Christian Rohmann) Date: Mon, 11 Apr 2022 11:00:53 +0200 Subject: [neutron] Bug Deputy Report January 03 - 10 In-Reply-To: References: <02b79e74-7d84-a6ba-537f-7db1a7ee6532@inovex.de> <68250a50-fc2e-4e8e-883d-5a214edc1f1a@inovex.de> Message-ID: <940662fe-24df-6bba-2c3a-30015b70fcff@inovex.de> Hey Lajos, On 21/03/2022 11:12, Lajos Katona wrote: > Hi Christian, > Thanks for your efforts for reproduction. > I will bring this topic to the team meeting tomorrow > (https://meetings.opendev.org/#Neutron_Team_Meeting ): > https://wiki.openstack.org/wiki/Network/Meetings#On_Demand_Agenda > > Regarding your frustration, I totally understand it. It is, I think > can be a topic for the coming PTG. 1) First, thanks again for raising the issue of a lack of maintainers for VPNaaS at https://meetings.opendev.org/meetings/networking/2022/networking.2022-03-22-14.06.log.html#l-62 . Were there any more take-aways from your PTG discussion if I may ask? Is there any chance anybody might look at our reported and DevStack-reproduced issue about the duplicate iptables rules (https://bugs.launchpad.net/neutron/+bug/1943449) ?
Do you need more info of any kind? 2) If there is a way forward for keeping VPNaaS ... * Will OVN receive support at some point? https://review.opendev.org/c/openstack/neutron-vpnaas/+/765353 3) More protocols? Are there any plans to extend the supported types, as currently only IPSEC is supported (https://opendev.org/openstack/neutron-vpnaas/src/branch/master/neutron_vpnaas/services/vpn/device_drivers). I was thinking about WireGuard, which is also built into the Linux kernel and has seen amazing pick-up in recent times. Even consumer devices use it now as it's MUCH simpler than IPSEC. Regards Christian -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Mon Apr 11 09:51:46 2022 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 11 Apr 2022 11:51:46 +0200 Subject: [Zuul] [neutron] Errors in the jobs definitions and EOL of some old networking-midonet branches Message-ID: <2750729.Y6S9NjorxK@p1> Hi, Today I got back to checking my old patches which are going to fix some zuul jobs' definitions. And I found that CI for networking-midonet in some stable branches is totally broken: * stable/ussuri: https://review.opendev.org/c/openstack/networking-midonet/+/823273[1] - everything is red here * stable/train: https://review.opendev.org/c/openstack/networking-midonet/+/823275[2] - most of the jobs are red * stable/stein: https://review.opendev.org/c/openstack/networking-midonet/+/823276[3] - here there were errors in the definition of the jobs, I changed it now, maybe it will run. So my question is if we want, and have resources, to fix it somehow or should we maybe set those branches as EOL now? Also, another question to the Zuul team - will it be ok if such a branch is made EOL to get rid of the Zuul configuration error warnings? Or should we somehow force to merge those patches first to get rid of the errors in https://zuul.opendev.org/t/openstack/config-errors[4] ?
-- Slawek Kaplonski Principal Software Engineer Red Hat -------- [1] https://review.opendev.org/c/openstack/networking-midonet/+/823273 [2] https://review.opendev.org/c/openstack/networking-midonet/+/823275 [3] https://review.opendev.org/c/openstack/networking-midonet/+/823276 [4] https://zuul.opendev.org/t/openstack/config-errors -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From skaplons at redhat.com Mon Apr 11 10:00:52 2022 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 11 Apr 2022 12:00:52 +0200 Subject: [Zaqar][Zuul][TC] Errors in the jobs definitions in stable branches Message-ID: <17744253.sWSEgdgrri@p1> Hi, Some time ago I pushed patches to fix definitions of the Zaqar jobs in stable branches: * stable/stein: https://review.opendev.org/c/openstack/zaqar/+/823289[1] * stable/rocky: https://review.opendev.org/c/openstack/zaqar/+/823295[2] * stable/queens: https://review.opendev.org/c/openstack/zaqar/+/823296[3] * stable/pike: https://review.opendev.org/c/openstack/zaqar/+/823297[4] It seems that CI for all those branches is broken. Do you still have plans to fix them so I can rebase my patches and we can get rid of those zuul configuration errors? Or should those branches maybe be made EOL? I'm tagging the TC in the topic of this email too as there is still no PTL of the Zaqar project appointed. -- Slawek Kaplonski Principal Software Engineer Red Hat -------- [1] https://review.opendev.org/c/openstack/zaqar/+/823289 [2] https://review.opendev.org/c/openstack/zaqar/+/823295 [3] https://review.opendev.org/c/openstack/zaqar/+/823296 [4] https://review.opendev.org/c/openstack/zaqar/+/823297 -------------- next part -------------- An HTML attachment was scrubbed...
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From skaplons at redhat.com Mon Apr 11 10:01:01 2022 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 11 Apr 2022 12:01:01 +0200 Subject: [Senlin][Zuul] Errors in the jobs definitions in stable branches Message-ID: <3190546.VqM8IeB0Os@p1> Hi, Some time ago I pushed patches to fix definitions of the Senlin jobs in stable branches: * stable/stein: https://review.opendev.org/c/openstack/senlin/+/823287[1] * stable/rocky: https://review.opendev.org/c/openstack/senlin/+/823293[2] * stable/queens: https://review.opendev.org/c/openstack/senlin/+/823294[3] It seems that CI for all those branches is broken. Do you still have plans to fix them so I can rebase my patches and we can get rid of those zuul configuration errors? Or should those branches maybe be made EOL? -- Slawek Kaplonski Principal Software Engineer Red Hat -------- [1] https://review.opendev.org/c/openstack/senlin/+/823287 [2] https://review.opendev.org/c/openstack/senlin/+/823293 [3] https://review.opendev.org/c/openstack/senlin/+/823294 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From katonalala at gmail.com Mon Apr 11 10:01:45 2022 From: katonalala at gmail.com (Lajos Katona) Date: Mon, 11 Apr 2022 12:01:45 +0200 Subject: [all][neutron][neutron-vpnaas] Maintainers needed Message-ID: Hi, In the last few cycles neutron-vpnaas has had no serious maintainers, and most patches merged were from the Neutron core team or from the Release team. Recently even the neutron-vpnaas gate jobs started to fail.
During the Zed PTG we discussed this topic (see [1]). For the maintenance we need someone to be the contact person for the project, who takes care of the project's CI, reviews patches and answers bugs. Of course that's only a minimal requirement. If the new maintainer works on new features for the project, it's even better :) If we don't have any new maintainer(s) before milestone Zed-2, which is the July 11 - July 15 week according to [2], we will start marking neutron-vpnaas as deprecated and in the next cycle (AA, or perhaps 2023.1) we will propose to retire the project. So if you are using this project now, or if you have customers who are using it, please consider the possibility of maintaining it. Otherwise, please be aware that it is highly possible that the project will be deprecated and moved out from the official OpenStack projects. [1]: https://etherpad.opendev.org/p/neutron-zed-ptg#L201 [2]: https://releases.openstack.org/zed/schedule.html Lajos Katona (lajoskatona) -------------- next part -------------- An HTML attachment was scrubbed... URL: From lucasagomes at gmail.com Mon Apr 11 10:03:53 2022 From: lucasagomes at gmail.com (Lucas Alvares Gomes) Date: Mon, 11 Apr 2022 11:03:53 +0100 Subject: [neutron] Bug Deputy Report April 04 - 11 Message-ID: Hi, This is the Neutron bug report from April 4th to 11th.
Critical: * https://bugs.launchpad.net/neutron/+bug/1967893 - "[stable/yoga] tempest.scenario.test_network_qos_placement.MinBwAllocationPlacementTest fails in neutron-ovs-tempest-multinode-full job" - Assigned to: Lajos Katona High: * https://bugs.launchpad.net/neutron/+bug/1967839 - "[L3] NDP extension not handing "ha_state_change" correctly" - Assigned to: Rodolfo Alonso * https://bugs.launchpad.net/neutron/+bug/1967996 - "[OVN] External subscribers to feed generated in Openstack fails if no internal VM is subscribed to that feed first" - Unassigned Medium: * https://bugs.launchpad.net/neutron/+bug/1967742 - "ML2 - Network Context, not possible to see original/current segments" - Unassigned * https://bugs.launchpad.net/neutron/+bug/1968343 - "Security Group Rule create with forged integer security_group_id causes exceptions" - Assigned to: Andrew Karpow Needs further triage: * https://bugs.launchpad.net/neutron/+bug/1968057 - "Doc needed for configuring neutron AZs for edge" - Unassigned * https://bugs.launchpad.net/neutron/+bug/1968269 - "[router qos] qos binding is not clear after remove gateway" - Unassigned Cheers, Lucas -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Mon Apr 11 10:04:36 2022 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 11 Apr 2022 12:04:36 +0200 Subject: [vmware-nsx] EOL old branches Message-ID: <1912987.jZfb76A358@p1> Hi Salvatore, According to your comments in my patches to the x/vmware-nsx project: * stable/stein: https://review.opendev.org/c/x/vmware-nsx/+/823277[1] * stable/rocky: https://review.opendev.org/c/x/vmware-nsx/+/823291[2] Can you maybe make them EOL finally? -- Slawek Kaplonski Principal Software Engineer Red Hat -------- [1] https://review.opendev.org/c/x/vmware-nsx/+/823277 [2] https://review.opendev.org/c/x/vmware-nsx/+/823291 -------------- next part -------------- An HTML attachment was scrubbed...
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From katonalala at gmail.com Mon Apr 11 12:17:11 2022 From: katonalala at gmail.com (Lajos Katona) Date: Mon, 11 Apr 2022 14:17:11 +0200 Subject: [all][neutron] Pike - End of Life Message-ID: Hi, As time and cycles are passing it's time to bring up the topic of EOLing an old branch again. In Neutron (and its satellite projects, aka stadium) stable/pike has had low activity in the last cycle and recently even the jobs for it started to fail, and this causes some zuul config errors which are hard to fix as the gate is broken. Based on the Ocata EOL mail (see [1]), the Neutron team decided to EOL Pike; see the discussions during the Zed PTG (see [2]) [1]: http://lists.openstack.org/pipermail/openstack-discuss/2021-April/021949.html [2]: https://etherpad.opendev.org/p/neutron-zed-ptg#L254 Lajos Katona (lajoskatona) -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon Apr 11 12:18:12 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 11 Apr 2022 12:18:12 +0000 Subject: [Zuul] [neutron] Errors in the jobs definitions and EOL of some old networking-midonet branches In-Reply-To: <2750729.Y6S9NjorxK@p1> References: <2750729.Y6S9NjorxK@p1> Message-ID: <20220411121811.z3xw6jb5n3fw2bzm@yuggoth.org> On 2022-04-11 11:51:46 +0200 (+0200), Slawek Kaplonski wrote: [...] > will it be ok if such branch will be EOL to get rid of the Zuul > configuration errors warnings? Or should we somehow force to merge > those patches first to get rid of the errors [...] Simply tagging the branch and removing it (standard EOL practice for the release managers) should suffice. There's no need to correct the configuration on branches if they're being deleted.
-- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From romain.chanu at univ-lyon1.fr Mon Apr 11 12:29:58 2022 From: romain.chanu at univ-lyon1.fr (CHANU ROMAIN) Date: Mon, 11 Apr 2022 12:29:58 +0000 Subject: [manila][ussuri] Manila return 404 resource not found after simple install Message-ID: <884d03497953b7000c025f2832abe5ed72316e8a.camel@univ-lyon1.fr> Hello, I tried to install manila on Ubuntu servers and I'm stuck during the first steps. I followed this documentation: https://docs.openstack.org/manila/ussuri/install/install-controller-ubuntu.html When I tried to verify the operation I always get the same message: # manila service-list ERROR: Not Found (HTTP 404) In manila-api.log 2022-04-11 12:12:56.533 125 INFO eventlet.wsgi.server [req-c03111de- 46bb-40e1-9440-2f62a94a61f0 ee3ae5e43b4c6246c4d95264b2978bf05dce57920b6b852d62b450ad6e7fb392 59bc7ad25c364c809a97c1a55caec161 - - -] IP_CLIENT,IP_LB "GET /v2/services HTTP/1.1" status: 404 len: 228 time: 0.1233838 If I use -d #manila -d service-list DEBUG (connectionpool:208) Starting new HTTP connection (1): FQDN DEBUG (connectionpool:396) http://FQDN:8786 "GET /v2/services HTTP/1.1" 404 112 RESP: [404] {'Content-Length': '112', 'Content-Type': 'application/json', 'Date': 'Mon, 11 Apr 2022 12:14:51 GMT'} RESP BODY: {"message": "The resource could not be found.

\n\n\n", "code": "404 Not Found", "title": "Not Found"} Services are running, database is populated, rabbitmq has some exchanges... I currently run out of ideas... Did someone already face this issue? Best regards, Romain -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4513 bytes Desc: not available URL: From skaplons at redhat.com Mon Apr 11 12:30:52 2022 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 11 Apr 2022 14:30:52 +0200 Subject: [Zuul] [neutron] Errors in the jobs definitions and EOL of some old networking-midonet branches In-Reply-To: <20220411121811.z3xw6jb5n3fw2bzm@yuggoth.org> References: <2750729.Y6S9NjorxK@p1> <20220411121811.z3xw6jb5n3fw2bzm@yuggoth.org> Message-ID: <5081792.6fTUFtlzNn@p1> Hi, On poniedzia?ek, 11 kwietnia 2022 14:18:12 CEST Jeremy Stanley wrote: > On 2022-04-11 11:51:46 +0200 (+0200), Slawek Kaplonski wrote: > [...] > > will it be ok if such branch will be EOL to get rid of the Zuul > > configuration errors warnings? Or should we somehow force to merge > > those patches first to get rid of the errors > [...] > > Simply tagging the branch and removing it (standard EOL practice for > the release managers) should suffice. There's no need to correct the > configuration on branches if they're being deleted. Thanks for confirmation. That's what I thought but I wanted to be sure :) > -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. 
URL: From mthode at mthode.org Mon Apr 11 13:39:24 2022 From: mthode at mthode.org (Matthew Thode) Date: Mon, 11 Apr 2022 08:39:24 -0500 Subject: [requirements][kolla][infra] Python versions in u-c In-Reply-To: References: Message-ID: <20220411133924.k3uaksvqtxqzsclg@mthode.org> On 22-04-11 09:08:37, Marcin Juszkiewicz wrote: > OpenStack Zed lists Python 3.8 as minimal version. > > Debian 'bullseye' (current stable) as Python 3.9, Ubuntu 20.04 'focal' has > 3.8. Ubuntu 22.04 'jammy' has 3.10, CentOS Stream 9 (so RHEL 9, Rocky Linux > 9) has 3.9 version. > > We ignore anything RHEL 8 based as they have Python 3.6 only and there is no > work on getting OpenStack Zed supported there. > > So there will be 3.8, 3.9, 3.10 used with Zed. > > Two question to requirements team: > > 1. Are there plans to get rid of 3.6 entries from > openstack/requirements/upper-constraints.txt in near future? > > 2. Can we get "python_version=='3.8'" lines be changed to ">=" ones? Or once > 3.6 entries drop just assume that we have 3.8+ and drop 'python_version' > entries? > > websocket-client===1.3.1;python_version=='3.6' > websocket-client===1.3.2;python_version=='3.8' > > > Asking due to my recent Kolla/infra work on getting CentOS Stream 9 > supported in both places. > > Infra team has CI job which builds Python wheel cache to make sure that > other CI jobs do not have to do it. Times are nicely cut due to this stuff. > > There are 55 entries with "python_version=='3.8'". Probably none of them are > time critical anymore as upstream projects provide aarch64 wheels most of > times too. > Yes, once 3.6 is dropped ('soon') the websocket line will revert to websocket-client===1.3.2 I'll try to get that done this week (add py39 at the same time). -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... 
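The `python_version` suffixes in upper-constraints are standard PEP 508 environment markers, which pip evaluates per interpreter; switching a pin from `==` to `>=` is what lets one entry cover 3.8, 3.9 and 3.10 at once. The semantics can be checked with the third-party `packaging` library (the one pip itself uses), here applied to markers shaped like the quoted `websocket-client` entries:

```python
# Evaluate PEP 508 environment markers the way pip does for u-c entries.
# Requires the third-party "packaging" library.
from packaging.markers import Marker

pinned = Marker("python_version == '3.8'")
ranged = Marker("python_version >= '3.8'")

for py in ("3.8", "3.9", "3.10"):
    env = {"python_version": py}
    print(py, pinned.evaluate(env), ranged.evaluate(env))

# The '==' marker matches only 3.8, so every new interpreter needs its own
# duplicated line; '>=' covers them all with one entry. Note that 3.10 is
# compared as a version, not a string, so it correctly satisfies >= 3.8.
```

This is why dropping the 3.6 lines and relaxing the remaining markers collapses the duplicated entries Marcin lists.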
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From katonalala at gmail.com Mon Apr 11 13:42:44 2022 From: katonalala at gmail.com (Lajos Katona) Date: Mon, 11 Apr 2022 15:42:44 +0200 Subject: [zuul][neutron][neutron-fwaas][tap-as-a-service] Errors in the jobs definitions and EOL of some old neutron-fwaas and tap-as-a-service branches Message-ID: Hi, During the PTG we got questions about projects which cause zuul config errors (see [1]). Slawek started a thread regarding networking-midonet (see [2]), and I join that thread with neutron-fwaas and tap-as-a-service:
- neutron-fwaas' stable/rocky and stable/queens branches
- tap-as-a-service stable/ocata, stable/pike, stable/queens and stable/rocky branches are affected
As tap-as-a-service was under the x/ namespace previously, I am not sure what can happen to those branches. If there is nobody to fix the config and gate errors on these old branches, then the best would be to EOL and later delete these branches. [1]: https://zuul.opendev.org/t/openstack/config-errors [2]: http://lists.openstack.org/pipermail/openstack-discuss/2022-April/028120.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From katonalala at gmail.com Mon Apr 11 14:38:32 2022 From: katonalala at gmail.com (Lajos Katona) Date: Mon, 11 Apr 2022 16:38:32 +0200 Subject: [neutron] Bug Deputy Report January 03 - 10 In-Reply-To: <940662fe-24df-6bba-2c3a-30015b70fcff@inovex.de> References: <02b79e74-7d84-a6ba-537f-7db1a7ee6532@inovex.de> <68250a50-fc2e-4e8e-883d-5a214edc1f1a@inovex.de> <940662fe-24df-6bba-2c3a-30015b70fcff@inovex.de> Message-ID: Hi, We touched this topic during the PTG, but it seems that there was nobody among the participants who would like to maintain neutron-vpnaas, so I sent out a mail asking for help: http://lists.openstack.org/pipermail/openstack-discuss/2022-April/028123.html Regards Lajos Katona (lajoskatona) Christian Rohmann ezt írta (időpont: 2022. ápr.
11., H, 11:00): > Hey Lajos, > > On 21/03/2022 11:12, Lajos Katona wrote: > > Hi Christian, > Thanks for your efforts for reproduction. > I will bring this topic to the team meeting tomorrow ( > https://meetings.opendev.org/#Neutron_Team_Meeting ): > https://wiki.openstack.org/wiki/Network/Meetings#On_Demand_Agenda > > Regarding your frustration, I totally understand it. It is, I think can be > a topic for the coming PTG. > > > 1) First thanks again for raising issue of a lack of maintainers for > VPNaaS at > https://meetings.opendev.org/meetings/networking/2022/networking.2022-03-22-14.06.log.html#l-62 > . > Were there any more take-aways from your PTG discussion if I may ask? > > Is there any chance anybody might look at our reported and > DevStack-reproduced issue about the duplicate IPtable rules ( > https://bugs.launchpad.net/neutron/+bug/1943449) ? > Do you need more info of any kind? > > > 2) If there is a way forward for keeping VPNaaS ... > > * Will OVN receive support at some point? > https://review.opendev.org/c/openstack/neutron-vpnaas/+/765353 > > 3) More protocols? > > Are there any plans on extending on the types as currently only IPSEC is > supported ( > https://opendev.org/openstack/neutron-vpnaas/src/branch/master/neutron_vpnaas/services/vpn/device_drivers > ). > I was thinking about Wireguard which is also built into the linux kernel > and saw amazing pick-up in recent times. Even consumer devices use it now > as it's MUCH simpler than IPSEC. > > > > > Regards > > > Christian > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From amonster369 at gmail.com Mon Apr 11 14:50:18 2022 From: amonster369 at gmail.com (A Monster) Date: Mon, 11 Apr 2022 15:50:18 +0100 Subject: [neutron] exposing ip address of external Network from Message-ID: Hi, indeed I have dhcp disabled for this network, and that's because I have an external dhcp server which attributes ip addresses to hosts on this network, so I cannot check the dhcp option when creating subnets for this public network. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gouthampravi at gmail.com Mon Apr 11 18:03:02 2022 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Mon, 11 Apr 2022 23:33:02 +0530 Subject: [manila][ussuri] Manila return 404 resource not found after simple install In-Reply-To: <884d03497953b7000c025f2832abe5ed72316e8a.camel@univ-lyon1.fr> References: <884d03497953b7000c025f2832abe5ed72316e8a.camel@univ-lyon1.fr> Message-ID: On Mon, Apr 11, 2022 at 6:10 PM CHANU ROMAIN wrote: > > Hello, > > I tried to install manila on Ubuntu servers and I'm stuck during the > first steps. 
I followed this documentation: > > https://docs.openstack.org/manila/ussuri/install/install-controller-ubuntu.html > > When I tried to verify the operation I always get the same message: > > # manila service-list > ERROR: Not Found (HTTP 404) > > In manila-api.log > 2022-04-11 12:12:56.533 125 INFO eventlet.wsgi.server [req-c03111de- > 46bb-40e1-9440-2f62a94a61f0 > ee3ae5e43b4c6246c4d95264b2978bf05dce57920b6b852d62b450ad6e7fb392 > 59bc7ad25c364c809a97c1a55caec161 - - -] IP_CLIENT,IP_LB "GET > /v2/services HTTP/1.1" status: 404 len: 228 time: 0.1233838 > > If I use -d > > #manila -d service-list > DEBUG (connectionpool:208) Starting new HTTP connection (1): FQDN > DEBUG (connectionpool:396) http://FQDN:8786 "GET /v2/services HTTP/1.1" > 404 112 > RESP: [404] {'Content-Length': '112', 'Content-Type': > 'application/json', 'Date': 'Mon, 11 Apr 2022 12:14:51 GMT'} > RESP BODY: {"message": "The resource could not be found.

/>\n\n\n", "code": "404 Not Found", "title": "Not Found"} Hi Romain, What version of python-manilaclient was used? Was the "os-share-api-microversion" overridden? Can you please share the full logs from the api service as well as the debug log from the client? You can obfuscate the IPs and any other sensitive information Reporting an issue on https://bugs.launchpad.net/manila might be better than debugging this via the mailing list. You can upload your log files there and post links from a pastebin service such as http://paste.openstack.org/ > > > Services are running, database is populated, rabbitmq has some > exchanges... I currently run out of ideas... Did someone already face > this issue? > > Best regards, > Romain -------------- next part -------------- An HTML attachment was scrubbed... URL: From sorrison at gmail.com Tue Apr 12 02:00:02 2022 From: sorrison at gmail.com (Sam Morrison) Date: Tue, 12 Apr 2022 12:00:02 +1000 Subject: [neutron][ops] OVN scale issues and reviving midonet plugin Message-ID: Hi, We recently tried to migrate our install from ML2 midonet -> OVN driver on our Victoria Openstack install with ~1000 hypervisors. Victoria was the last release where midonet plugin was supported so was a good motivation to move. Unfortunately when we changed the neutron-server config over to use OVN in our production install everything went very bad. We got lots of things like: ovsdbapp.exceptions.TimeoutException: Commands [, , ] exceeded timeout 180 seconds, cause: TXN queue is ful ovsdbapp.exceptions.TimeoutException: Commands [, , ] exceeded timeout 180 seconds, cause: Result queue is empty ovsdbapp.exceptions.TimeoutException: Commands [] exceeded timeout 180 seconds, cause: Result queue is empt among other things, which forced us to roll back. 
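As an aside on the tracebacks above: the 180-second figure matches the default OVSDB transaction timeout in neutron's OVN driver, and raising it is a common first mitigation at this scale, though it only buys headroom and does not reduce the underlying load. A sketch of the relevant config follows; verify the option names against your release's OVN configuration reference, as they are quoted from memory here:

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini (sketch; addresses are examples)
[ovn]
ovn_nb_connection = tcp:192.0.2.10:6641
ovn_sb_connection = tcp:192.0.2.10:6642
# Default is 180s, matching the "exceeded timeout 180 seconds" errors;
# raising it delays the failures rather than fixing the queue pressure.
ovsdb_connection_timeout = 360
```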
Our next approach is to get everything up to yoga and try again, (with some better live testing before we make the switch somehow) In the mean time we have revived the networking-midonet plugin in our own branch but just wanted to check to see if anyone else is in this situation and are or have looked into running midonet on wallaby and beyond? Cheers, Sam From gmann at ghanshyammann.com Tue Apr 12 02:33:13 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 11 Apr 2022 21:33:13 -0500 Subject: [all][tc] Canceling TC this week meetings Message-ID: <1801b9e31f0.dbe28af5624997.2608886375828372104@ghanshyammann.com> Hello Everyone, TC will not have this week meeting and will continue the weekly meeting from April 21st onwards. -gmann From kkchn.in at gmail.com Tue Apr 12 04:22:30 2022 From: kkchn.in at gmail.com (KK CHN) Date: Tue, 12 Apr 2022 09:52:30 +0530 Subject: Infrastructure and Virtual Resources utilisation tool for openstack cloud Message-ID: List, Can anyone point out the best metering and monitoring tool/tool combination for the complete infrastructure statistics and ( for report generation) as well as virtual resources utilization (including health check of virtual machines / virtual resources ) report generation tools which can be integrated with OpenStack middleware. ? any suggestions most welcome. Thanks in advance, Krish -------------- next part -------------- An HTML attachment was scrubbed... URL: From gagehugo at gmail.com Tue Apr 12 04:44:43 2022 From: gagehugo at gmail.com (Gage Hugo) Date: Mon, 11 Apr 2022 23:44:43 -0500 Subject: [openstack-helm] No meeting this week Message-ID: Hey team, Since there is nothing on the agenda the meeting for this week is cancelled. We will meet next week at the usual time. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sbauza at redhat.com Tue Apr 12 09:48:06 2022 From: sbauza at redhat.com (Sylvain Bauza) Date: Tue, 12 Apr 2022 11:48:06 +0200 Subject: [nova][placement] Zed PTG summary Message-ID: Yet again, let me try to provide you a summary about our previous PTG, this time for the Zed release. You can see all the notes thanks to this read-only etherpad : https://etherpad.opendev.org/p/r.75a986ce3ac43bb74b93c5de63d84fe9 ### Cross-project discussions ### # Neutron cross-project discussion with Nova - The nova community asked how to require Neutron backends to support a list of Neutron API extensions (for example multiple port bindings). We discussed whether it was rather a documentation issue or a code issue, but eventually it was rather a supporting concern as we don't test all of the backends for Nova. We then agreed on discussing this during the next OpenInfra Summit in Berlin with operators then, if possible. - We agreed on providing a new job for verifying whether heal_instance_info_cache_interval is no longer needed. # Cinder cross-project discussion with Nova - We discussed how we could remove the os-brick rootwrap config from nova and rather moving the config into directly os-brick. We eventually agreed on trying to rather implement the rootwrap->privsep transition, hopefully, thanks to some Stephen's help ;) # Cyborg cross-project discussion with Nova - We agreed on continuing to accept to provide traits for Resource Providers for owners (OWNER_NOVA at least). Context is https://review.opendev.org/c/openstack/nova-specs/+/836583 and we discussed about some implementation nits. ### Nova specific topics ### # Procedural discussions - We discussed on Yoga retrospective. We were happy to not have a RC2, to have new contributors and to have provided features to Placement for the last two cycles. We agreed to stopping to accept blind rechecks and to use the review priority label. Eventually we agreed on maybe adding new stable cores. 
- We need to fix Placement tests that verify the number of traits as for the moment it creates some issues. We agreed on relaxing those tests and to add a PTL doc saying we would bump os-traits and os-resource-classes usage in placement before RC1. - We agreed on having the same deadlines that for the last cycle, which are, Spec approval freeze for Zed-2 milestone (July 14th), and FeatureFreeze for Zed-3 (Sep 1st). We will have two spec review days - We agreed on keeping the same release model for os-vif - We agreed on continuing to use the review priority label in Gerrit and encourage contributors to use it (once we land the necessary changes in Gerrit) - Follow-up discussion on the novaclient CLI deprecation we agreed on Yoga. We'll modify the spec template to ask for rather updating OSC instead of the shell, and how to return some warnings if you use the shell. - We agreed on having some bug triage rotation between our contributors. I'll explain it in our next team meeting today :-) - We agreed on saying 'Unsupported' instead of 'Deprecated" for some features that need help if we don't want to remove them :-) - Again, we loudly said ***NO FOR BLIND RECHECKS*** https://docs.openstack.org/project-team-guide/testing.html#how-to-handle-test-failures # New Release cadence (ie. the tick-tock release balance) - We agreed on writing some guidelines in our Nova documentation explaining the differences between tick and tock releases from a nova perspective and what people could do. We agreed on *not* bumping the compute minimum supported version in a tick release. That would also be super cool if we could have some sphinx directive in reno that contributors would use if they add/modify things in a tock release but want operators to know in the next tick. # Planned API modifications - We agreed on testing new policy defaults in the gate and not yet enforce new scopes until we are sure everything works correctly. 
Devstack will need modifications for those enablements and proper communication has to be made to deployment projects to make sure they're on the same page before we change anything.
- We agreed on deprecating the possibility to generate new keypairs through the Nova API. Only keypair imports will be accepted in the future.
- We agreed on a use case for adding the network domain name in our metadata API. Things have to be further discussed in a spec but the direction sounds reasonable.
- We agreed on letting userdata be editable. We discussed the limits of this new mutable field (configdrive and when to apply it) but the spec will be updated accordingly.
- Being able to unshelve an instance by passing a destination seems a valid use case; the spec has to be reviewed besides the implementation.
- We'll try to somehow ensure that renaming tenant_id into project_id in our API isn't yet again deferred to the next cycle, even though the series is large and the merge conflicts are significant.
- Manila shares could be attached or detached to instances through a new API and notifications would be emitted

# Other
- CentOS 9 will be tested on a weekly basis in our gate at first
- Nova healthcheck is definitely a thing we want, but we're missing hands on deck.
- Having PCI devices tracked in Placement is also something we'd love to see arriving this cycle
- We'll continue further testing emulated hardware. Maybe some blueprints will come up, but we also said we'll try to give visibility on the periodic gate runs by looking at them in our weekly meeting.
- We discussed a very interesting case with the Scaphandre project that needs to use virtio-fs for passing system usage to the guests. We eventually agreed on two possible directions, with one preferred, which is to reuse the Manila shares support we're going to add in Zed.
- We can be smarter on placing guest CPUs on physical cores or dies, but this requires a blueprint at least as we agreed on providing some flavor extraspec or image property for letting operators to define the CPU placement. - Windows guests would benefit from new enlightenments in Zed if we merge some small change - Having CPU cores be externally managed by a daemon that would turn them on/off based on consumption seems a valid usecase. We discussed about some details but spec has to be reproposed. That's it. If you reached this line, kudos, you're brave. I tried to be short, but it looks to me I failed. Anyway, the source of truth remains the etherpad, just please avoid to amend it now. -Sylvain -------------- next part -------------- An HTML attachment was scrubbed... URL: From wodel.youchi at gmail.com Tue Apr 12 10:42:14 2022 From: wodel.youchi at gmail.com (wodel youchi) Date: Tue, 12 Apr 2022 11:42:14 +0100 Subject: [Kolla-ansible][Xena] How to undeploy (remove) a service Message-ID: Hi, How can I remove a deployed service with kolla-ansible? Regards. Virus-free. www.avast.com <#DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2> -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Tue Apr 12 11:49:08 2022 From: satish.txt at gmail.com (Satish Patel) Date: Tue, 12 Apr 2022 07:49:08 -0400 Subject: [Kolla-ansible][Xena] How to undeploy (remove) a service In-Reply-To: References: Message-ID: <3B5C6F44-E039-4D8B-9A08-57A69B455FF7@gmail.com> Remove it from inventory file and run deploy command. Also you may need to remove container by hand. Sent from my iPhone > On Apr 12, 2022, at 6:44 AM, wodel youchi wrote: > > ? > Hi, > > How can I remove a deployed service with kolla-ansible? > > Regards. > > Virus-free. www.avast.com -------------- next part -------------- An HTML attachment was scrubbed... 
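Satish's two manual steps (drop the service from the inventory, then clean up its containers) can be sketched as below. Kolla-ansible has no single "undeploy one service" command, so the inventory path, the `collectd` group and the container name are all made-up examples:

```shell
# Illustrative sketch of removing one service from a kolla-ansible
# deployment: (1) edit inventory, (2) redeploy, (3) remove containers.
cat > /tmp/multinode <<'EOF'
[control]
ctl01

[collectd:children]
control
EOF

# 1. Drop the service's group from the inventory (crude; edit by hand
#    in practice).
sed -i '/^\[collectd/,/^$/d' /tmp/multinode

# 2. Re-run deploy so the service is no longer managed (needs a real
#    deployment, so it is commented out here):
# kolla-ansible -i /tmp/multinode deploy

# 3. Remove the leftover containers by hand on each host:
# docker ps --filter name=collectd -q | xargs -r docker rm -f
```

Note that redeploying does not delete the already-running containers, which is why the manual `docker rm` step at the end is needed.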
URL: From niujie at chinamobile.com Tue Apr 12 12:36:38 2022 From: niujie at chinamobile.com (niujie) Date: Tue, 12 Apr 2022 20:36:38 +0800 Subject: [all] New CFN(Computing Force Network) SIG Proposal Message-ID: <192b01d84e69$f4e80a90$deb81fb0$@com> Hi all I'm from China Mobile, China Mobile is recently working on build a new information infrastructure focusing on connectivity, computing power, and capabilities, this new information infrastructure is called Computing Force Network, we think OpenStack community which gathers global wisdom together is a perfect platform to discuss topics like CFN, so we are proposing to create a new SIG for CFN(Computing Force Network). Below is CFN brief introduction and initial SIG scope. With the flourish of new business scenarios such as hybrid cloud, multi-cloud, AI, big data processing, edge computing, building a new information infrastructure based on multiple key technologies that converged cloud and network, will better support global digital transformation. This new infrastructure is not only relates to cloud, it is getting more and more connected with network, and at the same time, we also need to consider how to converge multiple technologies like AI, Blockchain, big data, security to provide this all-in-one service. Computing Force Network(CFN) is a new information infrastructure that based on network, focused on computing, deeply converged Artificial intelligence, Block chain, Cloud, Data, Network, Edge computing, End application, Security(ABCDNETS), providing all-in-one services. Xiaodong Duan, Vice president of China Mobile Research Institute, introduced the vision and architecture of Computing Force Network in 2021 November OpenInfra Live Keynotes by his presentation Connection + Computing + Capability Opens a New Era of Digital Infrastructure, he proposed the new era of CFN. 
We are expecting to work with OpenStack on how to build this new information infrastructure, how to promote the development and implementation of next-generation infrastructure, and how to achieve ubiquitous computing force, computing & network convergence, intelligent orchestration, and all-in-one service. Then computing force will, step by step, become a common utility like water and electricity: ready for access upon use and connected through a single entry point.

The above vision of CFN, from a technical perspective, will mainly focus on unified management and orchestration of the integrated computing + network system, with computing and network deeply converged in architecture, form and protocol aspects, bringing potential changes to OpenStack components. CFN is aiming to achieve seamless migration of any application between any heterogeneous platforms; this is currently a challenge for the industry, and we feel that the pursuit of CFN could potentially contribute to the development and evolution of OpenStack.

In this CFN SIG, we will mainly focus on discussing how to build the new information infrastructure of CFN, the related key technologies, and the impact on OpenStack brought by the network & cloud convergence trend. The topics include but are not limited to:
1, A computing base for unified management of containers, VMs and bare metal
2, Computing infrastructure which eliminates the differences between heterogeneous hardware
3, Measurement criteria and scheduling schemes based on unified computing infrastructure
4, Network solutions for SDN integrating smart NICs for the data center
5, Unified orchestration & management for "network + cloud", and a "cloud + edge + end" integrated scheduling solution

We will have regular meetings to investigate and discuss business scenarios, development trends and technical schemes, release technical documents, technical proposals and requirements for OpenStack projects, and propose new projects when necessary.
We will also collaborate with other open source projects like LFN, CNCF, LFE, to have a consistent plan across communities, and align with global standardization organizations like ETSI, 3GPP, IETF, to promote CFN-related technical schemes to become industry standards. If you have any thoughts, interests, questions or requirements, we can discuss them on this mailing list. Any suggestions are welcome, and we are really hoping to hear from anyone, and work with you. Jie Niu China Mobile -------------- next part -------------- An HTML attachment was scrubbed... URL: From niujie at chinamobile.com Tue Apr 12 12:38:49 2022 From: niujie at chinamobile.com (niujie) Date: Tue, 12 Apr 2022 20:38:49 +0800 Subject: Recall: [all] New CFN(Computing Force Network) SIG Proposal Message-ID: niujie would like to recall the message "[all] New CFN(Computing Force Network) SIG Proposal". -------------- next part -------------- A non-text attachment was scrubbed... Name: winmail.dat Type: application/ms-tnef Size: 1161 bytes Desc: not available URL: From noonedeadpunk at ya.ru Tue Apr 12 12:45:54 2022 From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov) Date: Tue, 12 Apr 2022 14:45:54 +0200 Subject: [openstack-ansible][PTG] Zed PTG results Message-ID: Hi everyone! Thanks everybody for taking the time to attend the session. Although we didn't have many topics on our agenda, I believe we had a great discussion about further steps in the project's development. We have defined topics that we aim to implement before releasing Yoga:
* Move the os_octavia role to use the PKI role
* Implement keystone RBAC support and switch the services role to `service`. Be ready for final migration in Zed.
* Start implementing support for encrypting the connection between haproxy and its backends
* We will also try to get Ubuntu 22.04 support before release, although in general we add Ubuntu LTS support in autumn.
The following plans for the Zed release were set:
* Add unit testing of collections and common roles with molecule; at the same time we should use dependencies from the integrated repo for that.
* We will support the concept of tick/tock releases and add testing for jumping through releases.
* Implement shared queues for our jobs and see the effect of that
* Fix the repo server to _really_ support multiple distros in the same environment
* We should revise in AA PTL if we should keep support for non-SSL deployments, and keep the upgrade path in Zed.

There was an agreement to EOL the Pike and Queens releases in the near future. We will also move Victoria to EM soon.

Given the lack of contributions/interest in some features, we are reaching out to the community and interested parties to help out with support for:
* The installation path from distro packages
* Support for CentOS distributions

If you're interested in these remaining part of OSA, please reach out to us via the ML or IRC to get more information about how you can help out. -- Kind regards, Dmitriy Rabotyagov -------------- next part -------------- An HTML attachment was scrubbed... URL: From wodel.youchi at gmail.com Tue Apr 12 13:07:43 2022 From: wodel.youchi at gmail.com (wodel youchi) Date: Tue, 12 Apr 2022 14:07:43 +0100 Subject: [Kolla-ansible][Xena] Test Trove module In-Reply-To: References: Message-ID: Hi, Any suggestions??? Regards. Le dim. 10 avr. 2022 ?
10:40, wodel youchi a ?crit : > Hi, > > I used a VM machine on the cloud with a good connexion to build the image > but, it didn't work for me so far : > I get this error while constructing the image : > > 2022-04-10 08:55:25.497 | *+ install_deb_packages install > iscsi-initiator-utils* > 2022-04-10 08:55:25.497 | + DEBIAN_FRONTEND=noninteractive > 2022-04-10 08:55:25.497 | + http_proxy= > 2022-04-10 08:55:25.497 | + https_proxy= > 2022-04-10 08:55:25.497 | + no_proxy= > 2022-04-10 08:55:25.497 | + apt-get --option > Dpkg::Options::=--force-confold --option Dpkg::Options::=--force-confdef > --assume-yes install iscsi-initiator-utils > 2022-04-10 08:55:25.541 | Reading package lists... > 2022-04-10 08:55:25.788 | Building dependency tree... > 2022-04-10 08:55:25.788 | Reading state information... > 2022-04-10 08:55:25.825 | *E: Unable to locate package > iscsi-initiator-utils* > 2022-04-10 08:55:25.838 | ++ > diskimage_builder/lib/img-functions:run_in_target:59 > : check_break after-error run_in_target bash > 2022-04-10 08:55:25.843 | ++ > diskimage_builder/lib/common-functions:check_break:143 > : echo '' > 2022-04-10 08:55:25.844 | ++ > diskimage_builder/lib/common-functions:check_break:143 > : egrep -e '(,|^)after-error(,|$)' -q > 2022-04-10 08:55:25.851 | + > diskimage_builder/lib/img-functions:run_in_target:1 > : trap_cleanup > 2022-04-10 08:55:25.855 | + > diskimage_builder/lib/img-functions:trap_cleanup:36 > > I am not an Ubuntu person but I think the package's name is open-iscsi. > > This is the command I used to build the image : ./trovestack build-image > ubuntu bionic true ubuntu > /home/stack/trove-xena-guest-ubuntu-bionic-dev.qcow2 > My OS is a Centos 8 Stream. you can find the whole log of the operation > attached. > > Thanks in advance. > > Regards. > > Le jeu. 7 avr. 2022 ? 
16:02, Clark Boylan a ?crit : > >> On Thu, Apr 7, 2022, at 6:38 AM, wodel youchi wrote: >> > Hi, >> > I found the error, Rocky is not supported for, so I switched to CentOS >> > machine. The script starts but I had two problems : >> > The trovestack script searches for a package named qemu and don't find >> > it, so I modified the script to use qemu* instead of qemu >> > >> > The second problem is related to the download itself, I have this error >> > : >> > 2022-04-07 13:18:02.677 | Caching guest-agent from >> > https://opendev.org/openstack/trove in /home/deployer/.cache/ >> > >> image-create/source-repositories/guest_agent_842a440b9b12731c50f3b4042bf842ea7e58467d >> > 2022-04-07 13:22:31.299 | error: RPC failed; curl 18 transfer closed >> > with outstanding read data remaining >> > 2022-04-07 13:22:31.299 | error: 6149 bytes of body are still expected >> > 2022-04-07 13:22:31.300 | fetch-pack: unexpected disconnect while >> > reading sideband packet >> > 2022-04-07 13:22:31.300 | fatal: early EOF >> > 2022-04-07 13:22:31.301 | fatal: fetch-pack: invalid index-pack output >> > >> > Any ideas? >> >> This is related to cloning and caching the >> https://opendev.org/openstack/trove git repo during the image build for >> the trove database image. This looks like a network error of some sort with >> the connection ending before it was completed. You might want to double >> check any proxies or firewalls between you and https://opendev.org. It >> may have also been the Internet acting up and trying again would be fine. I >> would try again and if it persists start looking at network connectivity >> between you and https://opendev.org and take it from there. >> >> > >> > Regards. >> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mkopec at redhat.com Tue Apr 12 13:56:44 2022 From: mkopec at redhat.com (Martin Kopec) Date: Tue, 12 Apr 2022 15:56:44 +0200 Subject: [qa][ptg] PTG Summary Message-ID: Hello everyone, thank you to all who participated in the PTG discussions and shared their thoughts and opinions. It's very much appreciated! Here is the summary of the discussions [1]: * Yoga Retrospective ** more details in [1] * Secure RBAC ** Devstack - mostly done, it has already options to enable enforce scope ** Tempest - tests will need to be migrated *** will be started after Keystone and Nova are migrated * FIPS current status and next plans ** Tempest will use ecdsa keys by default (instead of rsa) ** QA team will help reviewing related patches *** https://etherpad.opendev.org/p/qa-zed-ptg-fips * Tempest tests in mixed-architecture environments ** there is no support in Zuul for a testing like this ** still in a brainstorming phase looking for a potential solution, if you have any ideas, feel free to reach out * Direction on replacing the scenario manager ** in progress, the plan is to finish it (merge related patches) sooner in the cycle * Retirement of QA projects ** openstack-health ** tempest-lib ** os-testr *** ostestr command was removed from os-testr repo * Future of Centos Stream support ** we'd like to create at least an experimental job on Rocky linux ** during the Zed cycle we would like to deprecated Centos 8 Stream and influence projects to start using Centos 9 Stream jobs The discussed topics are transformed into priority items [2] we will be focusing on this cycle. [1] https://etherpad.opendev.org/p/qa-zed-ptg [2] https://etherpad.opendev.org/p/qa-zed-priority Regards, -- Martin Kopec Senior Software Quality Engineer Red Hat EMEA IM: kopecmartin -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From JOliveira at itcservicios.com Tue Apr 12 14:02:26 2022 From: JOliveira at itcservicios.com (Joao Oliveira) Date: Tue, 12 Apr 2022 14:02:26 +0000 Subject: Resize instance vm Linux error In-Reply-To: References: Message-ID: <749dc10a4fbb44ffb1e95ff3fc63822b@itcservicios.com> Hello Thanks for the answer, we solved it with a colleague, we had to open the ssh traffic between the nodes before doing the resizing João de Deus Oliveira Ingeniero de Infraestructura Y PRIVACIDAD: El uso de la información transmitida en este correo electrónico está limitado a la persona a la cual va dirigido. El correo puede contener información privada, privilegiada, confidencial o exenta de revelación bajo las leyes aplicables. Si usted no es el destinatario pretendido o sospecha que el mensaje le hubiera sido enviado sin la debida autorización, queda avisado que está estrictamente prohibido cualquier uso, diseminación o copia de esta información. Si usted ha recibido este mensaje por equivocación le pedimos notificarnos a vuelta de correo y borrar el mensaje. CONFIDENTIALITY AND PRIVACY NOTICE: This email is intended solely for the use of the individual to whom it is addressed and may contain information that is privileged, confidential or otherwise exempt from disclosure under applicable law. If you are not the intended recipient or it appears that this mail has been forwarded to you without proper authority, you are hereby notified that any dissemination, distribution, or copying of this communication is strictly prohibited. If you have received this communication in error, please notify us via reply email and delete this message. De: Massimo Sgaravatto Enviado el: domingo, 10 de abril de 2022 03:09 Para: Joao Oliveira CC: OpenStack Discuss Asunto: Re: Resize instance vm Linux error Which version of OpenStack ? 
You might be affected by this bug: https://bugs.launchpad.net/horizon/+bug/1940834 On Sat, Apr 9, 2022 at 11:45 PM Joao Oliveira > wrote: How can I resize an instance in the dashboard? [cid:image001.png at 01D84E54.5E82CF00] I try this and it is not working. Any suggestions? Thanks João de Deus Oliveira Ingeniero de Infraestructura [Descripción: Descripción: Descripción: Logo ITC Servicios] Informática, Tecnología & Comunicaciones Edificio ITC Tower Av. Las Ramblas #100, Torre B Barrio Equipetrol Norte Santa Cruz - Bolivia Tel: +(591) 3 344-4424 Ext. 4368 Fax: +(591) 3-344-4433 Móvil: +(591) 67011038 E-mail: joliveira at itcservicios.com Web: www.itc-e.com AVISO DE CONFIDENCIALIDAD Y PRIVACIDAD: El uso de la información transmitida en este correo electrónico está limitado a la persona a la cual va dirigido. El correo puede contener información privada, privilegiada, confidencial o exenta de revelación bajo las leyes aplicables. Si usted no es el destinatario pretendido o sospecha que el mensaje le hubiera sido enviado sin la debida autorización, queda avisado que está estrictamente prohibido cualquier uso, diseminación o copia de esta información. Si usted ha recibido este mensaje por equivocación le pedimos notificarnos a vuelta de correo y borrar el mensaje. CONFIDENTIALITY AND PRIVACY NOTICE: This email is intended solely for the use of the individual to whom it is addressed and may contain information that is privileged, confidential or otherwise exempt from disclosure under applicable law. If you are not the intended recipient or it appears that this mail has been forwarded to you without proper authority, you are hereby notified that any dissemination, distribution, or copying of this communication is strictly prohibited. If you have received this communication in error, please notify us via reply email and delete this message. 
De: Abhishek Kekane > Enviado el: jueves, 7 de abril de 2022 14:33 Para: Anurag Singh Rajawat > CC: OpenStack Discuss > Asunto: Re: [glance] Outreachy 2022 Hi Anurag, Sorry that we were not able to address you on IRC. Currently the glance team is busy with the PTG, which will end tomorrow evening, and that is why we might have missed your ping on IRC. I would suggest sharing your failures so that we can guide you. I think from Monday onwards everyone will be back to their daily routine, so the Outreachy glance team will help you to resolve your queries. Meanwhile, if it is urgent, then you can share your doubts and I will try my best to resolve them. Thanks and Regards, Abhishek On Thu, 7 Apr, 2022, 22:30 Anurag Singh Rajawat, > wrote: Dear glance team, I'd set up glance, glance-store and glance-client on my local setup, but some tests for glance were failing; also, are there any good first issues so that I can understand the project more clearly? I also asked about it on IRC but didn't get a response. Thanks Sincerely Anurag -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 97732 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 4650 bytes Desc: image002.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 132 bytes Desc: image003.png URL: From borges.ds at gmail.com Tue Apr 12 14:08:30 2022 From: borges.ds at gmail.com (borges.ds at gmail.com) Date: Tue, 12 Apr 2022 11:08:30 -0300 Subject: (Ansible Galaxy) Assign static IP to build a server Message-ID: Hi folks, I am using the server module to build/decomm virtual machines. However, I am only able to build VMs using dynamic IP assignment. Whenever I try to assign a static IP using floating IPs, the instruction is ignored. 
Could you help me to figure this out? The collection documentation is not clear about that... -- Aguardo sua resposta, David Borges de Sousa (3374-9157 / 87846638) -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Tue Apr 12 14:24:45 2022 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Tue, 12 Apr 2022 16:24:45 +0200 Subject: [Kolla-ansible][Xena] Test Trove module In-Reply-To: References: Message-ID: Just in case it's not obvious - it is not in Kolla Ansible scope to fix. Reach out to the Trove team. Maybe they are only filtering on the [trove] tag - you might want to try that. -yoctozepto On Tue, 12 Apr 2022 at 15:09, wodel youchi wrote: > > Hi, > > Any suggestions??? > > Regards. > > Le dim. 10 avr. 2022 à 10:40, wodel youchi a écrit : >> >> Hi, >> >> I used a VM machine on the cloud with a good connection to build the image but, it didn't work for me so far : >> I get this error while constructing the image : >> >> 2022-04-10 08:55:25.497 | + install_deb_packages install iscsi-initiator-utils >> 2022-04-10 08:55:25.497 | + DEBIAN_FRONTEND=noninteractive >> 2022-04-10 08:55:25.497 | + http_proxy= >> 2022-04-10 08:55:25.497 | + https_proxy= >> 2022-04-10 08:55:25.497 | + no_proxy= >> 2022-04-10 08:55:25.497 | + apt-get --option Dpkg::Options::=--force-confold --option Dpkg::Options::=--force-confdef --assume-yes install iscsi-initiator-utils >> 2022-04-10 08:55:25.541 | Reading package lists... >> 2022-04-10 08:55:25.788 | Building dependency tree... >> 2022-04-10 08:55:25.788 | Reading state information... 
>> 2022-04-10 08:55:25.825 | E: Unable to locate package iscsi-initiator-utils >> 2022-04-10 08:55:25.838 | ++ diskimage_builder/lib/img-functions:run_in_target:59 : check_break after-error run_in_target bash >> 2022-04-10 08:55:25.843 | ++ diskimage_builder/lib/common-functions:check_break:143 : echo '' >> 2022-04-10 08:55:25.844 | ++ diskimage_builder/lib/common-functions:check_break:143 : egrep -e '(,|^)after-error(,|$)' -q >> 2022-04-10 08:55:25.851 | + diskimage_builder/lib/img-functions:run_in_target:1 : trap_cleanup >> 2022-04-10 08:55:25.855 | + diskimage_builder/lib/img-functions:trap_cleanup:36 >> >> I am not an Ubuntu person but I think the package's name is open-iscsi. >> >> This is the command I used to build the image : ./trovestack build-image ubuntu bionic true ubuntu /home/stack/trove-xena-guest-ubuntu-bionic-dev.qcow2 >> My OS is CentOS 8 Stream. You can find the whole log of the operation attached. >> >> Thanks in advance. >> >> Regards. >> >> Le jeu. 7 avr. 2022 à 16:02, Clark Boylan a écrit : >>> >>> On Thu, Apr 7, 2022, at 6:38 AM, wodel youchi wrote: >>> > Hi, >>> > I found the error, Rocky is not supported for, so I switched to CentOS 
The script starts but I had two problems : >>> > The trovestack script searches for a package named qemu and don't find >>> > it, so I modified the script to use qemu* instead of qemu >>> > >>> > The second problem is related to the download itself, I have this error >>> > : >>> > 2022-04-07 13:18:02.677 | Caching guest-agent from >>> > https://opendev.org/openstack/trove in /home/deployer/.cache/ >>> > image-create/source-repositories/guest_agent_842a440b9b12731c50f3b4042bf842ea7e58467d >>> > 2022-04-07 13:22:31.299 | error: RPC failed; curl 18 transfer closed >>> > with outstanding read data remaining >>> > 2022-04-07 13:22:31.299 | error: 6149 bytes of body are still expected >>> > 2022-04-07 13:22:31.300 | fetch-pack: unexpected disconnect while >>> > reading sideband packet >>> > 2022-04-07 13:22:31.300 | fatal: early EOF >>> > 2022-04-07 13:22:31.301 | fatal: fetch-pack: invalid index-pack output >>> > >>> > Any ideas? >>> >>> This is related to cloning and caching the https://opendev.org/openstack/trove git repo during the image build for the trove database image. This looks like a network error of some sort with the connection ending before it was completed. You might want to double check any proxies or firewalls between you and https://opendev.org. It may have also been the Internet acting up and trying again would be fine. I would try again and if it persists start looking at network connectivity between you and https://opendev.org and take it from there. >>> >>> > >>> > Regards. >>> From christian.rohmann at inovex.de Tue Apr 12 14:35:58 2022 From: christian.rohmann at inovex.de (Christian Rohmann) Date: Tue, 12 Apr 2022 16:35:58 +0200 Subject: [puppet] How to disable installation of mysql::server and only create databases? 
Message-ID: <6ab28227-52d9-f93e-089d-3f9e099e3776@inovex.de> Hey openstack-discuss, is there any intended way of using e.g. cinder::db::mysql (which is using puppet-openstacklib) to create databases, but not also run / include mysql::server which apparently happens at https://github.com/openstack/puppet-openstacklib/blob/33fb90326fadd59759d4a65dae0ac873e34ee95b/manifests/db/mysql.pp#L80 ? In short, I already have a running database server and only want the databases to be created. Thanks, Christian From adivya1.singh at gmail.com Tue Apr 12 14:48:49 2022 From: adivya1.singh at gmail.com (Adivya Singh) Date: Tue, 12 Apr 2022 20:18:49 +0530 Subject: Network Node Scaling Message-ID: Hi Team, Is there any specified link available which I can go through and do "Network Node Scaling" in my environment? We are running OpenStack Xena in my environment. Regards Adivya Singh -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Tue Apr 12 15:20:30 2022 From: marios at redhat.com (Marios Andreou) Date: Tue, 12 Apr 2022 18:20:30 +0300 Subject: [TripleO] please stop posting patches for tripleo* stable/ussuri (going EOL) In-Reply-To: References: Message-ID: On Tue, Apr 5, 2022 at 12:25 PM Marios Andreou wrote: > > Hello TripleO > > As proposed at [1] and also discussed in yesterday's TripleO Z PTG meet [2] we are going to move stable/ussuri for all tripleo repos to EOL. > > In order to move ahead we need to have no open patches against stable/ussuri. > > ** please stop posting patches to stable/ussuri tripleo repos ** > > If you have open patches can you please either get them merged, or abandon them by next Tuesday 12th. After this we will have to abandon any open patches (e.g. folks moved on/not even looking there) and then I can update the EOL proposal at [3] with the latest commit hashes so we can proceed. 
> As promised ;) and having heard no feedback or objections I abandoned all the open stable/ussuri tripleo things (for reference the list of abandoned reviews is at [1]). I'll respin the EOL proposal at https://review.opendev.org/c/openstack/releases/+/834049 so we can close it out. regards, marios [1] (list of abandoned stable/ussuri tripleo reviews): * https://review.opendev.org/c/openstack/os-net-config/+/834026 * https://review.opendev.org/c/openstack/os-net-config/+/836226 * https://review.opendev.org/c/openstack/os-net-config/+/836739 * https://review.opendev.org/c/openstack/paunch/+/825249 * https://review.opendev.org/c/openstack/paunch/+/730896 * https://review.opendev.org/c/openstack/puppet-tripleo/+/828880 * https://review.opendev.org/c/openstack/puppet-tripleo/+/836757 * https://review.opendev.org/c/openstack/puppet-tripleo/+/828446 * https://review.opendev.org/c/openstack/puppet-tripleo/+/829184 * https://review.opendev.org/c/openstack/python-tripleoclient/+/823642 * https://review.opendev.org/c/openstack/python-tripleoclient/+/836758 * https://review.opendev.org/c/openstack/python-tripleoclient/+/823643 * https://review.opendev.org/c/openstack/python-tripleoclient/+/825536 * https://review.opendev.org/c/openstack/python-tripleoclient/+/831760 * https://review.opendev.org/c/openstack/tripleo-ansible/+/833975 * https://review.opendev.org/c/openstack/tripleo-common/+/836790 * https://review.opendev.org/c/openstack/tripleo-common/+/804797 * https://review.opendev.org/c/openstack/tripleo-common/+/828849 * https://review.opendev.org/c/openstack/tripleo-heat-templates/+/836783 * https://review.opendev.org/c/openstack/tripleo-heat-templates/+/836781 * https://review.opendev.org/c/openstack/tripleo-heat-templates/+/836766 * https://review.opendev.org/c/openstack/tripleo-heat-templates/+/805896 * https://review.opendev.org/c/openstack/tripleo-heat-templates/+/822493 * https://review.opendev.org/c/openstack/tripleo-heat-templates/+/827150 * 
https://review.opendev.org/c/openstack/tripleo-heat-templates/+/829251 * https://review.opendev.org/c/openstack/tripleo-heat-templates/+/829746 * https://review.opendev.org/c/openstack/tripleo-heat-templates/+/830282 * https://review.opendev.org/c/openstack/tripleo-heat-templates/+/830576 * https://review.opendev.org/c/openstack/tripleo-heat-templates/+/831751 * https://review.opendev.org/c/openstack/tripleo-heat-templates/+/833967 * https://review.opendev.org/c/openstack/tripleo-heat-templates/+/834619 * https://review.opendev.org/c/openstack/tripleo-heat-templates/+/834948 * https://review.opendev.org/c/openstack/tripleo-heat-templates/+/836262 * https://review.opendev.org/c/openstack/tripleo-puppet-elements/+/835794 * https://review.opendev.org/c/openstack/tripleo-validations/+/831759 > For reference the repos with open reviews at time of writing are at [4] below. > > Please speak up if you need more time or with any other comments > > regards, marios > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2022-April/028025.html > [2] https://etherpad.opendev.org/p/tripleo-zed-ci-load > [3] https://review.opendev.org/c/openstack/releases/+/834049 > > [4] ( list of repos with open reviews for stable/ussuri): > * https://review.opendev.org/q/project:openstack%252Fos-net-config+status:open+branch:stable/ussuri > * https://review.opendev.org/q/project:openstack%252Fpaunch+status:open+branch:stable/ussuri > * https://review.opendev.org/q/project:openstack%252Fpuppet-tripleo+status:open+branch:stable/ussuri > * https://review.opendev.org/q/project:openstack%252Fpython-tripleoclient+status:open+branch:stable/ussuri > * https://review.opendev.org/q/project:openstack%252Ftripleo-ansible+status:open+branch:stable/ussuri > * https://review.opendev.org/q/project:openstack%252Ftripleo-common+status:open+branch:stable/ussuri > * https://review.opendev.org/q/project:openstack%252Ftripleo-heat-templates+status:open+branch:stable/ussuri > * 
https://review.opendev.org/q/project:openstack%252Ftripleo-puppet-elements+status:open+branch:stable/ussuri > * https://review.opendev.org/q/project:openstack%252Ftripleo-validations+status:open+branch:stable/ussuri From satish.txt at gmail.com Tue Apr 12 15:49:44 2022 From: satish.txt at gmail.com (Satish Patel) Date: Tue, 12 Apr 2022 11:49:44 -0400 Subject: Network Node Scaling In-Reply-To: References: Message-ID: It's hard to find a single document for all your questions. There are many technologies to scale network nodes, like DVR, OVN etc. If you have 3 network nodes and you want to add more nodes that will help, but again you need to keep in mind the HA tenant router, because it won't do load balancing between multiple network nodes. In our case we deployed using a VLAN-based provider to mitigate networking issues. It all depends on what you are trying to do. On Tue, Apr 12, 2022 at 10:50 AM Adivya Singh wrote: > Hi Team, > > Is there any specified link available which i can go through and do > "Network Node Scaling" in my Environment. > > We are running Openstack Xena in my environment > > Regards > Adivya Singh > -------------- next part -------------- An HTML attachment was scrubbed... URL: From adivya1.singh at gmail.com Tue Apr 12 15:52:45 2022 From: adivya1.singh at gmail.com (Adivya Singh) Date: Tue, 12 Apr 2022 21:22:45 +0530 Subject: Network Node Scaling In-Reply-To: References: Message-ID: I don't think you understood my question; I am asking about using the Ansible playbooks in the Xena release for network node scaling. On Tue, Apr 12, 2022 at 9:19 PM Satish Patel wrote: > > It's hard to find a single document for all your questions. There are many > technologies to scale network nodes, like DVR, OVN etc. If you have 3 > network nodes and you want to add more nodes that will help but again you > need to keep in mind about the HA tenant router because it won't do load > balancing between multiple network nodes. 
> > In our case we deployed using a vlan base provider to mitigate networking > issues. its all depends on what you are trying to do. > > > On Tue, Apr 12, 2022 at 10:50 AM Adivya Singh > wrote: > >> Hi Team, >> >> Is there any specified link available which i can go through and do >> "Network Node Scaling" in my Environment. >> >> We are running Openstack Xena in my environment >> >> Regards >> Adivya Singh >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at gmail.com Tue Apr 12 16:00:02 2022 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Tue, 12 Apr 2022 18:00:02 +0200 Subject: Network Node Scaling In-Reply-To: References: Message-ID: I think it depends a lot on the tooling that you rely on while deploying OpenStack. As basically these instructions would be project specific if we're talking about openstack-ansible, kolla-ansible or anything else. As each project has their own way of scaling environments. So it would be great if you specified how your environment was configured in the first place and what tool was used for that. On Tue, 12 Apr 2022 at 17:55, Adivya Singh wrote: > i don't think you understand my question, i am saying about using Ansible > playbook in Xena Release for Network Node Scaling. > > > > > On Tue, Apr 12, 2022 at 9:19 PM Satish Patel wrote: > >> >> It's hard to find a single document for all your questions. There are >> many technologies to scale network nodes, like DVR, OVN etc. If you have 3 >> network nodes and you want to add more nodes that will help but again you >> need to keep in mind about the HA tenant router because it won't do load >> balancing between multiple network nodes. 
>> >> >> On Tue, Apr 12, 2022 at 10:50 AM Adivya Singh >> wrote: >> >>> Hi Team, >>> >>> Is there any specified link available which i can go through and do >>> "Network Node Scaling" in my Environment. >>> >>> We are running Openstack Xena in my environment >>> >>> Regards >>> Adivya Singh >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From elod.illes at est.tech Tue Apr 12 16:31:38 2022 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Tue, 12 Apr 2022 18:31:38 +0200 Subject: [nova] final release before Victoria Extended Maintenance transition? Message-ID: <75db86c1-0189-dab8-be9d-4f1e2730109f@est.tech> Hi Nova folks! We have released from stable/victoria quite recently, but there are still many open bug fixes on it [1]. If anyone from vendors / operators / developers would like to have any bugfix to be released in Victoria before the Extended Maintenance transition, please feel free to add it / mark them at our nova-stable-victoria-em etherpad [1]. Please consider that the transition deadline is in 2 weeks, so this needs to be ready as soon as possible. Nova stable cores are also welcome to review any patches. If there will be merged patches that we could release before the deadline, then I'll prepare the release patch. [1] https://etherpad.opendev.org/p/nova-stable-victoria-em Thanks, El?d From gagehugo at gmail.com Tue Apr 12 20:44:55 2022 From: gagehugo at gmail.com (Gage Hugo) Date: Tue, 12 Apr 2022 15:44:55 -0500 Subject: [openstack-helm] openstack-helm-addons and docs repository retirement Message-ID: Hey everyone, We recently had a PTG session and one of the topics that was discussed was the retirement of the openstack-helm-addons and openstack-helm-docs repositories. 
This topic has been mentioned now for several PTGs and from our discussion at the latest one, it was determined that due to the lack of updates to the repos as well as the charts themselves receiving no real maintenance, we would look to retire both these repositories this cycle. If anyone is still using the 3 charts that still exist in the repos and would like to maintain them, please let us know. Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From gagehugo at gmail.com Tue Apr 12 20:50:05 2022 From: gagehugo at gmail.com (Gage Hugo) Date: Tue, 12 Apr 2022 15:50:05 -0500 Subject: [openstack-helm] Dropping train support Message-ID: Hi all, One of the topics from the latest PTG discussions[0] was looking to drop support of the Train release from our charts due to a lack of interest in maintaining them and several issues with broken images due to dependencies. As a result of those discussions, we'd like to make dropping Train support a goal for this cycle. If anyone is still using Train and has a pressing issue where they cannot upgrade, please reach out and we can discuss a path forward. [0] https://etherpad.opendev.org/p/openstack-helm-zed-ptg Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From wodel.youchi at gmail.com Tue Apr 12 23:28:37 2022 From: wodel.youchi at gmail.com (wodel youchi) Date: Wed, 13 Apr 2022 00:28:37 +0100 Subject: [Trove][Xena] Errors when creating postgresql insances Message-ID: Hi, I defined a postgresql data store with two versions 10 and 12. 
When creating db instances I am getting different errors : *For Postgresql 10, I am getting : * Traceback (most recent call last): File "/var/lib/kolla/venv/lib/python3.6/site-packages/trove/taskmanager/models.py", line 436, in wait_for_instance time_out=timeout) File "/var/lib/kolla/venv/lib/python3.6/site-packages/trove/common/utils.py", line 223, in poll_until return wait_for_task(task) File "/var/lib/kolla/venv/lib/python3.6/site-packages/trove/common/utils.py", line 207, in wait_for_task return polling_task.wait() File "/var/lib/kolla/venv/lib/python3.6/site-packages/eventlet/event.py", line 125, in wait result = hub.switch() File "/var/lib/kolla/venv/lib/python3.6/site-packages/eventlet/hubs/hub.py", line 313, in switch return self.greenlet.switch() File "/var/lib/kolla/venv/lib/python3.6/site-packages/oslo_service/loopingcall.py", line 150, in _run_loop result = func(*self.args, **self.kw) File "/var/lib/kolla/venv/lib/python3.6/site-packages/trove/common/utils.py", line 194, in poll_and_check obj = retriever() File "/var/lib/kolla/venv/lib/python3.6/site-packages/trove/taskmanager/models.py", line 791, in _service_is_active raise TroveError(_("Service not active, status: %s") % status) trove.common.exception.TroveError: Service not active, status: failed to spawn ==> trove-conductor.log <== 2022-04-13 00:26:52.334 124 ERROR trove.conductor.manager [-] Guest exception on request req-b5bd49e8-e147-41b2-8292-2900bb4022a1: ['Traceback (most recent call last):\n', ' File "/opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/manager.py", line 218, in prepare\n ds_version=ds_version)\n', ' File "/opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/manager.py", line 234, in _prepare\n cluster_config, snapshot, ds_version=ds_version)\n', ' File "/opt/guest-agent-venv/lib/python3.6/site-packages/osprofiler/profiler.py", line 160, in wrapper\n result = f(*args, **kwargs)\n', ' File 
"/opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/postgres/manager.py", line 161, in do_prepare\n self.app.start_db(ds_version=ds_version, command=command)\n', ' File "/opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/postgres/service.py", line 229, in start_db\n raise exception.TroveError("Failed to start database service")\n', *'trove.common.exception.TroveError: Failed to start database service\n']* *And for Postgresql 12, I am getting :* Server type: guest Traceback (most recent call last): File "/opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/manager.py", line 845, in create_user self.adm.create_users(users) File "/opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/postgres/service.py", line 516, in create_users self.create_user(models.PostgreSQLUser.deserialize(user), None) File "/opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/postgres/service.py", line 549, in create_user [models.PostgreSQLSchema.deserialize(db) for db in user.databases]) File "/opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/postgres/service.py", line 558, in _grant_access [db.name for db in databases], File "/opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/postgres/service.py", line 432, in grant_access database=database, File "/opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/postgres/service.py", line 723, in psql return self.connection.execute(statement) File "/opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/postgres/service.py", line 763, in execute autocommit=True) File "/opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/postgres/service.py", line 776, in _execute_stmt cursor.execute(cmd, data_values) psycopg2.errors.InvalidCatalogName: database "dbweb01" does not exist And this error from > ==> trove-conductor.log 
<== > 2022-04-12 23:58:11.689 46 ERROR trove.conductor.manager [-] Guest > exception on request req-7c0e005a-2b02-4acf-bdc5-e9d7a964cd7e: > ['Traceback (most recent call last):\n', ' File > "/opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/manager.py", > line 808, in create_database\n return > self.adm.create_databases(databases)\n', ' File > "/opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/postgres/service.py", > line 464, in create_databases\n > self.create_database(models.PostgreSQLSchema.deserialize(database))\n', ' > File > "/opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/postgres/service.py", > line 477, in create_database\n collation=database.collate,\n', ' File > "/opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/postgres/service.py", > line 723, in psql\n return self.connection.execute(statement)\n', ' > File > "/opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/postgres/service.py", > line 763, in execute\n autocommit=True)\n', ' File > "/opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/postgres/service.py", > line 776, in _execute_stmt\n cursor.execute(cmd, data_values)\n', > 'psycopg2.errors.ActiveSqlTransaction: *CREATE DATABASE cannot run inside > a transaction block\n\n']* > Regards. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From johnsomor at gmail.com Wed Apr 13 00:05:22 2022 From: johnsomor at gmail.com (Michael Johnson) Date: Tue, 12 Apr 2022 17:05:22 -0700 Subject: [all] Devstack jobs are failing due to a git security fix Message-ID: tldr: All devstack based jobs are going to fail with newer versions of git - don't bother rechecking git has released a security fix [1] that is starting to roll out in distributions (Ubuntu focal for example) that will cause pbr to be unable to access the package metadata for packages checked out locally due to the directory ownership used in devstack. We have opened a devstack bug to track this issue: https://bugs.launchpad.net/devstack/+bug/1968798 Please see the bug for the details and example error. Michael P.S. Thanks to clarkb, fungi, and ianw for helping track down the root cause! [1] https://github.com/git/git/commit/8959555cee7ec045958f9b6dd62e541affb7e7d9 From manchandavishal143 at gmail.com Wed Apr 13 05:10:57 2022 From: manchandavishal143 at gmail.com (vishal manchanda) Date: Wed, 13 Apr 2022 10:40:57 +0530 Subject: [horizon] Cancelling Today's Weekly meeting Message-ID: Hello Team, I won't be able to chair today's horizon weekly meeting due to some travel. So either someone from the team can host it on my behalf, or we can skip this week's meeting. Note: I will be back in the office from Thursday, so if anything is urgent please reach out to the horizon core team. Thanks & regards, Vishal Manchanda -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From iwienand at redhat.com Wed Apr 13 07:11:27 2022 From: iwienand at redhat.com (Ian Wienand) Date: Wed, 13 Apr 2022 17:11:27 +1000 Subject: [all] Devstack jobs are failing due to a git security fix In-Reply-To: References: Message-ID: On Tue, Apr 12, 2022 at 05:05:22PM -0700, Michael Johnson wrote: > tldr: All devstack based jobs are going to fail with newer versions of > git - don't bother rechecking > > git has released a security fix [1] that is starting to roll out in > distributions (Ubuntu focal for example) that will cause pbr to be > unable to access the package metadata for packages checked out locally > due to the directory ownership used in devstack. This turns out to be annoyingly complicated. Since devstack checks out all code as "stack" and then installs globally with "sudo pip install -e ...", pbr will be running in a directory owned by "stack" as root and its git calls will hit this failure. If we make the code directories owned by root, we now have additional problems. Several places do things in the code repositories -- e.g. setup virtualenvs, run ./tools/*.sh scripts to generate sample config files and run tox as "stack" (tox then tries to install the source tree in its virtualenv -- if it's owned by root -- again -- failure). I explored a bunch of these options in https://review.opendev.org/c/openstack/devstack/+/837636 and anyone feel free to take over that and keep trying. The other option is to use the new config flag to mark our checkouts as safe. This is obviously simpler, but it seems like a very ugly thing for a nominally generic tool like devstack to do to your global git config. This is done with https://review.opendev.org/c/openstack/devstack/+/837659 and appears to work; but will need backporting for grenade if we want to take this path. 
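For anyone who wants to try the flag-based approach locally, it amounts to adding each checkout to git's safe.directory list. A minimal sketch follows; the /tmp/demo-repo path and the throwaway HOME are illustrative stand-ins, not what the devstack change does verbatim:

```shell
# Use a throwaway HOME so this demo's "global" git config does not touch
# the real one; devstack would instead add an entry per /opt/stack/<repo>.
export HOME=$(mktemp -d)
mkdir -p /tmp/demo-repo

# Tell git (>= 2.35.2) that this directory is safe to operate on even
# when the current user does not own it.
git config --global --add safe.directory /tmp/demo-repo

# Show what ended up in the config.
git config --global --get-all safe.directory
```

With such an entry in place, git commands run inside the checkout (including the ones pbr issues) no longer refuse to run when the invoking user does not own the directory.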
When this kicked off I sent in a link to HN thinking that thanks to our very upstream focused CI we were likely some of the first to hit this; it's currently the top post so I think that is accurate that this is having wide impact: https://news.ycombinator.com/item?id=31009675 It is probably worth keeping one eye on upstream for any developments that might change our options. -i From noonedeadpunk at gmail.com Wed Apr 13 09:17:06 2022 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Wed, 13 Apr 2022 11:17:06 +0200 Subject: [all] Devstack jobs are failing due to a git security fix In-Reply-To: References: Message-ID: Hey! I actually wonder if the approach with the config flag to mark checkouts as safe should be applied more generally, when zuul preps repos for usage, instead of hooking it in devstack specifically. As it's a more general issue, since zuul repos can't be used as is now for other projects as well (limited to devstack). On Wed, 13 Apr 2022 at 09:14, Ian Wienand wrote: > On Tue, Apr 12, 2022 at 05:05:22PM -0700, Michael Johnson wrote: > > tldr: All devstack based jobs are going to fail with newer > versions of > > git - don't bother rechecking > > > > git has released a security fix [1] that is starting to roll out in > > distributions (Ubuntu focal for example) that will cause pbr to be > > unable to access the package metadata for packages checked out locally > > due to the directory ownership used in devstack. > > This turns out to be annoyingly complicated. > > Since devstack checks out all code as "stack" and then installs > globally with "sudo pip install -e ...", pbr will be running in a > directory owned by "stack" as root and its git calls will hit this > failure. > > If we make the code directories owned by root, we now have additional > problems. Several places do things in the code repositories -- > e.g. 
setup virtualenvs, run ./tools/*.sh scripts to generate sample > config files and run tox as "stack" (tox then tries to install the > source tree in it's virtualenv -- if it's owned by root -- again -- > failure). > > I explored a bunch of these options in > > https://review.opendev.org/c/openstack/devstack/+/837636 > > and anyone feel free to take over that and keep trying. > > The other option is to use the new config flag to mark our checkouts > as safe. This is obviously simpler, but it seems like a very ugly > thing for a nominally generic tool like devstack to do to your global > git config. This is done with > > https://review.opendev.org/c/openstack/devstack/+/837659 > > and appears to work; but will need backporting for grenade if we want > to take this path. > > When this kicked off I sent in a link to HN thinking that thanks to > our very upstream focused CI we were likely some of the first to hit > this; it's currently the top post so I think that is accurate that > this is having wide impact: > > https://news.ycombinator.com/item?id=31009675 > > It is probably worth keeping one eye on upstream for any developments > that might change our options. > > -i > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From katonalala at gmail.com Wed Apr 13 09:19:31 2022 From: katonalala at gmail.com (Lajos Katona) Date: Wed, 13 Apr 2022 11:19:31 +0200 Subject: [neutron] Zed PTG Summary Message-ID: Hi, I will try to summarize the Neutron PTG sessions during the last week. The etherpad which we used: https://etherpad.opendev.org/p/neutron-zed-ptg # Day 1 (Monday April 4.) 
## Yoga retrospective (I just list here the topics, not every discussion under them): * Good things ** New active people around the team ** Revived project: neutron-fwaas ** video CI meetings every second week are very good IMO ** OVN backend is getting more and more attention and there are fewer gaps between OVN and OVS backends * Bad / Not so good: ** fewer and fewer active people ** too much depends on Red Hat - ~66% of reviews, ~62% of commits (in Neutron official projects) ** lack of maintainers for some stadium projects (neutron-vpnaas) and some backends/drivers (linuxbridge) * As an action we previously decided to have a forum session during the Berlin Summit to discuss the lack of activity/maintainers for some backends and stadium projects, hopefully with some operators also. ## Short update on bgp related blueprints https://etherpad.opendev.org/p/neutron-zed-ptg#L111 The development of these blueprints is stopped; we will revert the already merged code. # Day 2 (Tuesday April 5.) ## Have nova / os-vif delete the trunk bridges to avoid race conditions The creation of trunk bridges is now done by os-vif and the deletion by Neutron; to avoid the race condition we agreed to have both operations done by os-vif. ## skip-level upgrade (tick-tick) We discussed together how this affects Neutron, what is working currently and what is in front of us. We have periodic jobs for the ovs and ovn backends. Actions: * Make ovn grenade upgrade jobs green ## When do we say something is not supported? * Recently the neutron-vpnaas project from the Neutron stadium lost all its maintainers. The Neutron core team keeps an eye on the gate of networking projects and keeps them green, but if there is nobody with specific knowledge for the given area we can't solve the bugs. * We have similar problems with "in-tree" drivers also; a good example is the linuxbridge driver. 
The agreement was to keep the current approach for stadium projects: send out mail to openstack-discuss asking for help, and if nobody appears, retire the project and delete the code but keep its git history. For in-tree code (drivers, extensions, ...): * We keep the existing jobs, for the linuxbridge driver for example, but when the tests start to fail we skip them and eventually stop the job as well. To make it clear for operators we add warning logs highlighting that the given feature/driver is experimental, and introduce a cfg option to enable such features explicitly. We plan to discuss these questions during the Berlin Summit with operators. ## Prefix delegation for OpenStack Neutron (the dibbler tool for DHCPv6) is concluded The original bug: https://bugs.launchpad.net/neutron/+bug/1916428 * The tool behind PD, Dibbler, is no longer maintained, and the suggested replacement, ISC Kea, currently has no DHCP client. * This feature is not tested in upstream CI. Due to these we decided to mark prefix delegation as an experimental feature. # Day 3 (Wednesday April 6.) ## neutron-dynamic-routing + OVN Red Hat is working on a BGP solution for OVN, ovn-bgp-agent, and currently it doesn't need the existing API in neutron-dynamic-routing. The current approach to make neutron-dynamic-routing work with OVN will be checked. ## Option for OVN to disable DHCP/DNS and use dhcp-agent instead Technically it is possible; a new RFE will be proposed and the drivers team will discuss the use case behind it. ## CI status / zuul job config errors In the list of zuul cfg errors ( https://zuul.opendev.org/t/openstack/config-errors ) old Neutron stadium branches are listed. * The decision was to EOL these old branches where fixing is not possible. Another topic, from Slawek, was whether we want to have the neutron-tempest-plugin-api-ovs job, and the agreement was to merge the OVS scenario and API jobs. 
## Inconsistencies in OVS firewall on an agent restart We will go for a cfg option to select between OVS flow installation in batches and per port, and operators can select which is best for their environment. ## Pain Points The TC started a discussion to cover the collected operator pain points; last week we checked again the list for Neutron ( https://etherpad.opendev.org/p/pain-point-elimination#L268): * neutron (via python calls) and OVN (via C calls) can have different ideas about what the hostname is, particularly if a deployer (rightly or wrongly) sets their hostname to be an FQDN ** it was fixed: https://review.opendev.org/q/Iea2533f4c52935b4ecda9ec22fb619c131febfa1 * Useful error msg when network deletion fails due to existing resources on it ** done: https://review.opendev.org/c/openstack/neutron/+/821935 * Open vSwitch Agent excess logging at INFO level ** we will check if we can change some to debug; Slawek will create a DNM job with only info logs to discuss during one of the coming meetings. * OVN: Spoofing of DNS responses seems like very wrong behavior ** We agreed that for this we need an RFE (see above for disabling OVN DNS/DHCP) ## Edge topics Together with the Designate team we agreed that this is more a documentation problem, based on our current understanding. Based on Red Hat downstream bugs for edge deployments, we will try to cover these in this cycle. # Day 4 (Thursday April 7.) We had no sessions. # Day 5 (Friday April 8.) ## Nova - Neutron xproject ### How to require ml2 plugins to implement multiple port bindings The problem is that there are old drivers (hyperv perhaps, and others) that do not implement it, and this causes a lot of extra code paths in Nova. Sean suggested a Forum topic for Berlin to discuss it with operators also. ### Is heal_instance_info_cache_interval still needed? Let's create a job which sets heal_instance_info_cache_interval to 0, and decide the next steps based on the results. 
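The proposed test job is essentially a one-line nova.conf change. A minimal sketch with an illustrative path (a real job would render this into nova.conf via devstack's local.conf):

```shell
# Disable nova's periodic instance info-cache healing task by setting its
# interval to 0 (the default is 60 seconds); the path here is illustrative,
# a real deployment edits /etc/nova/nova.conf.
NOVA_CONF=$(mktemp)
printf '[DEFAULT]\nheal_instance_info_cache_interval = 0\n' > "$NOVA_CONF"
grep heal_instance_info_cache_interval "$NOVA_CONF"
```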
The happy moment with the team screenshot: https://photos.app.goo.gl/P6YuYgxzqEiv2gHt8 I hope next time we can meet in person :-) Thanks to everybody for the great discussions. Lajos Katona (lajoskatona) -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Wed Apr 13 11:06:22 2022 From: smooney at redhat.com (Sean Mooney) Date: Wed, 13 Apr 2022 12:06:22 +0100 Subject: [all] Devstack jobs are failing due to a git security fix In-Reply-To: References: Message-ID: On Tue, 2022-04-12 at 17:05 -0700, Michael Johnson wrote: > tldr: All devstack based jobs are going to fail with newer versions of > git - don't bother rechecking > > git has released a security fix [1] that is starting to roll out in > distributions (Ubuntu focal for example) that will cause pbr to be > unable to access the package metadata for packages checked out locally > due to the directory ownership used in devstack. > > We have opened a devstack bug to track this issue: > https://bugs.launchpad.net/devstack/+bug/1968798 ok, i hit this a few months ago but thought it was related to arm. i spoke to clark and fungi about it when i was trying to get devstack running on a vm on a macbook air. this broke the ability to do "pip install -e", which broke my normal workflow of keeping my working repos in /opt/repos and the devstack managed ones in /opt/stack. i was getting the same pbr issue when i tried to sudo pip install -e /opt/repos/nova. that is also when i encountered the issues with uninstalling, so this has been out in the wild since at least january > > Please see the bug for the details and example error. > > Michael > > P.S. Thanks to clarkb, fungi, and ianw for helping track down the root cause! 
> > [1] https://github.com/git/git/commit/8959555cee7ec045958f9b6dd62e541affb7e7d9 > From fungi at yuggoth.org Wed Apr 13 11:59:20 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 13 Apr 2022 11:59:20 +0000 Subject: [all] Devstack jobs are failing due to a git security fix In-Reply-To: References: Message-ID: <20220413115919.de7odplf7yk6uuf7@yuggoth.org> On 2022-04-13 17:11:27 +1000 (+1000), Ian Wienand wrote: [...] > Since devstack checks out all code as "stack" and then installs > globally with "sudo pip install -e ...", pbr will be running in a > directory owned by "stack" as root and its git calls will hit this > failure. > > If we make the code directories owned by root, we now have additional > problems. Several places do things in the code repositories -- > e.g. setup virtualenvs, run ./tools/*.sh scripts to generate sample > config files and run tox as "stack" (tox then tries to install the > source tree in it's virtualenv -- if it's owned by root -- again -- > failure). [...] Forgive me as caffeine is still finding its way into my veins, but it has occurred to me that the error is occurring because we're calling PBR (and thus Git) while installing the software, when that's not strictly necessary. It happens because we're taking advantage of pip's ability to call out to a build process before installing, but we can always separate building and installing. The former doesn't need root privs, and the latter doesn't need to call PBR/Git. Update the install-from-source routine to build a wheel as stack and then only sudo pip install the resulting wheel. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Wed Apr 13 12:02:05 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 13 Apr 2022 12:02:05 +0000 Subject: [all] Devstack jobs are failing due to a git security fix In-Reply-To: References: Message-ID: <20220413120205.d6y3mv5mq57h4oxy@yuggoth.org> On 2022-04-13 11:17:06 +0200 (+0200), Dmitriy Rabotyagov wrote: > I actually wonder if the approach with config flag to mark checkouts as > safe should be applied more generally, when zuul preps repos for usage, > instead of hook in devstack specifically. As it's a more general issue, > since zuul repos can't be used as is now for other projects as well > (limited to devstack). [...] I don't follow the logic here. Zuul checkouts are owned by the zuul user which is also the user under which the job payload is executed. This problem only arises if you try to run Git as a different user than zuul. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Wed Apr 13 12:05:28 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 13 Apr 2022 12:05:28 +0000 Subject: [all] Devstack jobs are failing due to a git security fix In-Reply-To: References: Message-ID: <20220413120528.nnix63uui3myvata@yuggoth.org> On 2022-04-13 12:06:22 +0100 (+0100), Sean Mooney wrote: [...] > i hit this a few months ago [...] > this has been out in the wild since at least january [...] It's easy to get confused here. You saw the same symptoms, because the error returned by PBR is the same any time there's a problem using Git to query revisions and tags, but as I'm sure you can imagine, there are limitless ways for that to happen. You didn't see this bug though, since it's the result of a security fix for a vulnerability disclosed yesterday. 
-- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From smooney at redhat.com Wed Apr 13 12:16:14 2022 From: smooney at redhat.com (Sean Mooney) Date: Wed, 13 Apr 2022 13:16:14 +0100 Subject: [all] Devstack jobs are failing due to a git security fix In-Reply-To: <20220413120528.nnix63uui3myvata@yuggoth.org> References: <20220413120528.nnix63uui3myvata@yuggoth.org> Message-ID: On Wed, 2022-04-13 at 12:05 +0000, Jeremy Stanley wrote: > On 2022-04-13 12:06:22 +0100 (+0100), Sean Mooney wrote: > [...] > > i hit this a few months ago > [...] > > this has been out in the wild since at least january > [...] > > It's easy to get confused here. You saw the same symptoms, because > the error returned by PBR is the same any time there's a problem > using Git to query revisions and tags, but as I'm sure you can > imagine, there are limitless ways for that to happen. You didn't see > this bug though, since it's the result of a security fix for a > vulnerability disclosed yesterday. ah thanks for context. 
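Jeremy's build-then-install split from earlier in the thread could look roughly like this; the /opt/stack commands are shown only in comments, and the runnable part demonstrates the same two steps on a throwaway setuptools package:

```shell
# For a devstack checkout the split would be (illustrative paths):
#   cd /opt/stack/keystone && pip wheel --no-deps -w ./dist .   # as "stack"
#   sudo pip install ./dist/*.whl                               # as root
# Only the first step runs pbr (and therefore git), so it never executes as
# root; the privileged install only unpacks an already-built wheel.
workdir=$(mktemp -d)
mkdir "$workdir/demo_pkg" && cd "$workdir/demo_pkg"
printf 'from setuptools import setup\nsetup(name="demo-pkg", version="0.1")\n' > setup.py
python3 -m pip wheel --no-deps --no-build-isolation -w ./dist . >/dev/null
ls ./dist    # the built wheel is all the install step needs
```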
From amonster369 at gmail.com Wed Apr 13 12:24:52 2022 From: amonster369 at gmail.com (A Monster) Date: Wed, 13 Apr 2022 13:24:52 +0100 Subject: Error while creating an instance attached to an external interface Message-ID: I'm trying to launch an instance attached to an external network, but I get an "all hosts exhausted" error message, and the neutron log file shows the following error : *ERROR neutron.plugins.ml2.managers [req-6979d5fd-8e6a-4802-a758-e4882b7ac5b6 397fa0267ae340e1b24a3a96ae302f32 05b9f31fca434f7a96b63a1d17e8b14c - default default] Failed to bind port d57ac8f5-7904-495f-a4b9-9b23df681634 on host compute1.localdomain for vnic_type normal using segments [{'id': '475c70c2-6a35-4721-a726-cbf8c0b7e778', 'network_type': 'flat', 'physical_network': 'physnet2', 'segmentation_id': None, 'network_id': 'a1c2cc8f-6a04-48ce-9d87-a351da1d3f3e'}]* But when I try to launch an instance using another external network, it works just fine. -------------- next part -------------- An HTML attachment was scrubbed... URL: From senrique at redhat.com Wed Apr 13 12:53:55 2022 From: senrique at redhat.com (Sofia Enriquez) Date: Wed, 13 Apr 2022 09:53:55 -0300 Subject: [cinder] Bug deputy report for week of 04-13-2022 Message-ID: This is a bug report from 03-30-2022 to 04-13-2022. Agenda: https://etherpad.opendev.org/p/cinder-bug-squad-meeting ----------------------------------------------------------------------------------------- High - https://bugs.launchpad.net/os-brick/+bug/1967790 "Encryptor connect_volume not changing the symlink." Fix proposed to master. Medium - https://bugs.launchpad.net/cinder/+bug/1968746 "cinder-manage db sync fails due to row size too large." Unassigned. - https://bugs.launchpad.net/cinder/+bug/1968645 "Concurrent migration of vms with the same multiattach volume fails." Unassigned. Low - https://bugs.launchpad.net/cinder/+bug/1968170 "reimage_volume failure message action does not exist." Unassigned. 
Cheers, -- Sofía Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Wed Apr 13 12:57:02 2022 From: marios at redhat.com (Marios Andreou) Date: Wed, 13 Apr 2022 15:57:02 +0300 Subject: [TripleO] Final TripleO repos release for stable/victoria - any requests? Message-ID: Hello The stable/victoria branch for all tripleo repos will transition to Extended Maintenance in 2 weeks [1]. To prevent delays I have prepared a final victoria release at [2]. That [2] will be updated after its depends-on merges (puppet metadata bump) to pick up the latest victoria commits at that point. If there are any patches you want included then please speak up and I'll wait for and include those commits before updating releases/+/836921 [2]. This is the last ever release to be made from stable/victoria. Once it goes to Extended Maintenance we can no longer release. I'll hold it for a few days. Unless I hear otherwise I will update [2] on Monday and try to get it merged next week. 
thanks, marios [1] http://lists.openstack.org/pipermail/openstack-discuss/2022-April/028013.html [2] https://review.opendev.org/c/openstack/releases/+/836921 From wodel.youchi at gmail.com Tue Apr 12 23:16:59 2022 From: wodel.youchi at gmail.com (wodel youchi) Date: Wed, 13 Apr 2022 00:16:59 +0100 Subject: [Trove][Xena] Error building Trove image Message-ID: Hi, When trying to build Trove I am getting this error message: 2022-04-10 08:55:25.497 | *+ install_deb_packages install iscsi-initiator-utils* 2022-04-10 08:55:25.497 | + DEBIAN_FRONTEND=noninteractive 2022-04-10 08:55:25.497 | + http_proxy= 2022-04-10 08:55:25.497 | + https_proxy= 2022-04-10 08:55:25.497 | + no_proxy= 2022-04-10 08:55:25.497 | + apt-get --option Dpkg::Options::=--force-confold --option Dpkg::Options::=--force-confdef --assume-yes install iscsi-initiator-utils 2022-04-10 08:55:25.541 | Reading package lists... 2022-04-10 08:55:25.788 | Building dependency tree... 2022-04-10 08:55:25.788 | Reading state information... 2022-04-10 08:55:25.825 | *E: Unable to locate package iscsi-initiator-utils* 2022-04-10 08:55:25.838 | ++ diskimage_builder/lib/img-functions:run_in_target:59 : check_break after-error run_in_target bash 2022-04-10 08:55:25.843 | ++ diskimage_builder/lib/common-functions:check_break:143 : echo '' 2022-04-10 08:55:25.844 | ++ diskimage_builder/lib/common-functions:check_break:143 : egrep -e '(,|^)after-error(,|$)' -q 2022-04-10 08:55:25.851 | + diskimage_builder/lib/img-functions:run_in_target:1 : trap_cleanup 2022-04-10 08:55:25.855 | + diskimage_builder/lib/img-functions:trap_cleanup:36 I am not an Ubuntu person but I think the package's name is open-iscsi. This is the command I used to build the image: ./trovestack build-image ubuntu bionic true ubuntu /home/stack/trove-xena-guest-ubuntu-bionic-dev.qcow2 My OS is CentOS 8 Stream. You can find the whole log of the operation attached. Thanks in advance. Regards. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: trove-build2.log Type: application/octet-stream Size: 302893 bytes Desc: not available URL: From niujie at chinamobile.com Wed Apr 13 05:34:30 2022 From: niujie at chinamobile.com (niujie) Date: Wed, 13 Apr 2022 13:34:30 +0800 Subject: [all] Resend New CFN(Computing Force Network) SIG Proposal Message-ID: Hi all, I sent an email yesterday about the New CFN (Computing Force Network) SIG Proposal. I tried to recall it because there was a typo in an email address, then I got a "recall failed" message, so I assumed the email was sent out successfully and planned to keep it as it was. But I found that the "recall" action was logged in pipermail, which might cause misunderstanding. We are sure about proposing a new SIG, so I'm sending this again; sorry for the email flood :) I'm from China Mobile. China Mobile is currently working on building a new information infrastructure focused on connectivity, computing power, and capabilities; this new information infrastructure is called Computing Force Network. We think the OpenStack community, which gathers global wisdom together, is a perfect platform to discuss topics like CFN, so we are proposing to create a new SIG for CFN (Computing Force Network). Below is a brief introduction to CFN and the initial SIG scope. With the flourishing of new business scenarios such as hybrid cloud, multi-cloud, AI, big data processing, and edge computing, building a new information infrastructure based on multiple key technologies that converge cloud and network will better support global digital transformation. This new infrastructure not only relates to cloud; it is getting more and more connected with the network, and at the same time we also need to consider how to converge multiple technologies like AI, blockchain, big data, and security to provide this all-in-one service. 
Computing Force Network (CFN) is a new information infrastructure that is based on the network, focused on computing, and deeply converges Artificial intelligence, Blockchain, Cloud, Data, Network, Edge computing, End application, and Security (ABCDNETS), providing all-in-one services. Xiaodong Duan, Vice President of the China Mobile Research Institute, introduced the vision and architecture of Computing Force Network in the November 2021 OpenInfra Live Keynotes with his presentation "Connection + Computing + Capability Opens a New Era of Digital Infrastructure", in which he proposed the new era of CFN. We are expecting to work with OpenStack on how to build this new information infrastructure and how to promote the development and implementation of next-generation infrastructure, achieving ubiquitous computing force, computing and network convergence, intelligent orchestration, and all-in-one service. Computing force will then step by step become a common utility like water and electricity, ready for access upon use and connected through a single entry point. The above vision of CFN, from a technical perspective, will mainly focus on unified management and orchestration of the computing + network integrated system, with computing and network deeply converged in architecture, form, and protocol aspects, bringing potential changes to OpenStack components. CFN is aiming to achieve seamless migration of any application between any heterogeneous platforms; this is currently a challenge for the industry, and we feel that the pursuit of CFN could potentially contribute to the development and evolution of OpenStack. 
In this CFN SIG, we will mainly focus on discussing how to build the new information infrastructure of CFN and its related key technologies, and on the impact on OpenStack brought by the network & cloud convergence trend. The topics include, but are not limited to: 1, A computing basement for unified management of containers, VMs and bare metal 2, Computing infrastructure which eliminates the differences between heterogeneous hardware 3, Measurement criteria and scheduling schemes based on a unified computing infrastructure 4, Network solutions for SDN integrating smart NICs for the data center 5, Unified orchestration & management for "network + cloud", and "cloud + edge + end" integrated scheduling solutions We will have regular meetings to investigate and discuss business scenarios, development trends and technical schemes, release technical documents, technical proposals and requirements for OpenStack projects, and propose new projects when necessary. We will also collaborate with other open source projects like LFN, CNCF and LFE to have a consistent plan across communities, and align with global standardization organizations like ETSI, 3GPP and IETF to promote CFN-related technical schemes becoming standards in the industry. If you have any thoughts, interests, questions or requirements, we can discuss them on this mailing list. Any suggestions are welcome; we really hope to hear from anyone and to work with you. Jie Niu China Mobile -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkajinam at redhat.com Wed Apr 13 14:01:44 2022 From: tkajinam at redhat.com (Takashi Kajinami) Date: Wed, 13 Apr 2022 23:01:44 +0900 Subject: [puppet] How to disable installation of mysql::server and only create databases? In-Reply-To: <6ab28227-52d9-f93e-089d-3f9e099e3776@inovex.de> References: <6ab28227-52d9-f93e-089d-3f9e099e3776@inovex.de> Message-ID: I'm afraid no. As you mentioned, the openstacklib::db::mysql defined type includes the mysql::server class, which triggers installation and configuration of mysql. This is basically because the implementation uses some resources like mysql_user imported from puppetlabs-mysql, and these resources are designed to work on the node where mysql is deployed by mysql::server (especially because of the ~/.my.cnf file it creates). Technically speaking you can remove that include as long as you can prepare everything these resources require yourself, but that'd be quite tricky. On Tue, Apr 12, 2022 at 11:42 PM Christian Rohmann < christian.rohmann at inovex.de> wrote: > Hey openstack-discuss, > > is there any intended way of using e.g. cinder::db::mysql (which is using > puppet-openstacklib) to create databases, but not also run / include > mysql::server, which apparently happens at > > https://github.com/openstack/puppet-openstacklib/blob/33fb90326fadd59759d4a65dae0ac873e34ee95b/manifests/db/mysql.pp#L80 > ? > > In short I already have a running database server and only want the > databases to be created. > > > Thanks, > > > Christian 
> Hi Marios, I've got one: https://review.opendev.org/c/openstack/tripleo-heat-templates/+/837648 It just passed CI, so if you'd like to approve it... (wink, wink) Alan > This is the last ever release to be made from stable/victoria. Once it > goes to Extended Maintenance we can no longer release. > > I'll hold it for a few days.. Unless I hear otherwise I will update > [2] on Monday and try to get it merged next week. > > thanks, marios > > [1] > http://lists.openstack.org/pipermail/openstack-discuss/2022-April/028013.html > [2] https://review.opendev.org/c/openstack/releases/+/836921 > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Wed Apr 13 14:48:12 2022 From: pierre at stackhpc.com (Pierre Riteau) Date: Wed, 13 Apr 2022 16:48:12 +0200 Subject: [blazar][requirements] setuptools and python_version in upper constraints Message-ID: Hello, In the blazar project, we have been seeing a job timeout failure in openstack-tox-py39 affecting master and stable/yoga. tox starts the "lockutils-wrapper python setup.py testr --slowest --testr-args=" process which doesn't show progress until job timeout. It started happening sometime between 2022-03-10 16:01:39 (last success on master) and 2022-03-24 16:35:15 (first timeout occurrence) [1], with no change in blazar itself and few changes in requirements. I resumed debugging today and managed to reproduce it using Ubuntu 20.04 (it doesn't happen on macOS). Here is the traceback after interrupting it if anyone wants to take a look [2]. The python process is using 100% of the CPU until interrupted. I tracked down the regression to the upper constraint on setuptools. For example, stable/yoga has: setuptools===59.6.0;python_version=='3.6' setuptools===60.9.3;python_version=='3.8' It appears this is ignored in the py39 job so the job runs with the latest setuptools. Indeed, there were some releases between March 10 and March 24. 
I still have to figure out what changed in setuptools to cause this behaviour. Question for requirements maintainers: is this expected behaviour, or should upper constraints also include lines for python_version=='3.9' on yoga? Thanks, Pierre Riteau (priteau) [1] https://zuul.openstack.org/builds?job_name=openstack-tox-py39&project=openstack%2Fblazar&skip=0 [2] https://paste.opendev.org/show/bZO7ELmTvfMUGJPdlQ4k/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mthode at mthode.org Wed Apr 13 15:02:49 2022 From: mthode at mthode.org (Matthew Thode) Date: Wed, 13 Apr 2022 10:02:49 -0500 Subject: [blazar][requirements] setuptools and python_version in upper constraints In-Reply-To: References: Message-ID: <20220413150249.afi3gppvbqn725ar@mthode.org> On 22-04-13 16:48:12, Pierre Riteau wrote: > Hello, > > In the blazar project, we have been seeing a job timeout failure in > openstack-tox-py39 affecting master and stable/yoga. tox starts the > "lockutils-wrapper python setup.py testr --slowest --testr-args=" process > which doesn't show progress until job timeout. > > It started happening sometime between 2022-03-10 16:01:39 (last success on > master) and 2022-03-24 16:35:15 (first timeout occurrence) [1], with no > change in blazar itself and few changes in requirements. > > I resumed debugging today and managed to reproduce it using Ubuntu 20.04 > (it doesn't happen on macOS). Here is the traceback after interrupting it > if anyone wants to take a look [2]. The python process is using 100% of the > CPU until interrupted. > > I tracked down the regression to the upper constraint on setuptools. For > example, stable/yoga has: > > setuptools===59.6.0;python_version=='3.6' > setuptools===60.9.3;python_version=='3.8' > > It appears this is ignored in the py39 job so the job runs with the latest > setuptools. Indeed, there were some releases between March 10 and March 24. 
> I still have to figure out what changed in setuptools to cause this > behaviour. > > Question for requirements maintainers: is this expected behaviour, or > should upper constraints also include lines for python_version=='3.9' on > yoga? > > Thanks, > Pierre Riteau (priteau) > > [1] > https://zuul.openstack.org/builds?job_name=openstack-tox-py39&project=openstack%2Fblazar&skip=0 > [2] https://paste.opendev.org/show/bZO7ELmTvfMUGJPdlQ4k/ I plan on removing py36 and adding py39 constraints today or tomorrow. -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From cdilorenzo at gmail.com Wed Apr 13 15:56:53 2022 From: cdilorenzo at gmail.com (Chris DiLorenzo) Date: Wed, 13 Apr 2022 11:56:53 -0400 Subject: [neutron][ovn] Need VM accessible on Internet, and able to access DC resources Message-ID: To Internet To Data Center Resources (10.x) - -/ - -/ / -/ / -/ / -/ -/ -/ / -/ / Public Provider Network / Private Provider Network / +------------------------+ +-------------------------+ | | | | | | | | | Router #1 |--------------| Router #2 | | SNAT Enabled | .2| SNAT Enabled | | | | | | | | | +------------------------+ +-------------------------+ | 192.168.1.1 | | | | | | | | | 192.168.1.10 (FIP: Public IP) +---------|-------------+ | | | | | | | VM | | | | | +-----------------------+ I am running Openstack Xena with OVN and distributed FIP enabled. We are trying to come up with a way to make a VM accessible from the Internet and still have it able to access internal Data Center services. Our thought is to setup a router between the tenant network and an internet accessible provider network. We'll assign a FIP to the VM. Then, we create an additional router that connects to the same tenant network but routes to a provider network that has access to everything inside the DC. 
We would then add a static route on Router #1 like 10.0.0.0/8 nexthop 192.168.1.2.

I've tried setting this up in our lab, but it's not working. I can't ping anything inside the DC.

Should this work? Any best practice here we should look at?

Thanks
Chris

From Molka.Gharbaoui at santannapisa.it  Wed Apr 13 16:41:29 2022
From: Molka.Gharbaoui at santannapisa.it (Molka Gharbaoui)
Date: Wed, 13 Apr 2022 16:41:29 +0000
Subject: [devstack] Not able to install devstack on ubuntu 20.04
Message-ID:

Hi all,

I am trying to install devstack by downloading the code through the following link: git clone https://opendev.org/openstack/devstack and then executing the ./stack.sh command, which I have used several times without any error. However, today I am getting the following error (I only changed the VM):

Obtaining file:///opt/stack/keystone
  Preparing metadata (setup.py) ... error
  error: subprocess-exited-with-error

  × python setup.py egg_info did not run successfully.
  │ exit code: 1
  ╰─> [16 lines of output]
      Error parsing
      Traceback (most recent call last):
        File "/usr/local/lib/python3.8/dist-packages/pbr/core.py", line 111, in pbr
          attrs = util.cfg_to_args(path, dist.script_args)
        File "/usr/local/lib/python3.8/dist-packages/pbr/util.py", line 272, in cfg_to_args
          pbr.hooks.setup_hook(config)
        File "/usr/local/lib/python3.8/dist-packages/pbr/hooks/__init__.py", line 25, in setup_hook
          metadata_config.run()
        File "/usr/local/lib/python3.8/dist-packages/pbr/hooks/base.py", line 27, in run
          self.hook()
        File "/usr/local/lib/python3.8/dist-packages/pbr/hooks/metadata.py", line 25, in hook
          self.config['version'] = packaging.get_version(
        File "/usr/local/lib/python3.8/dist-packages/pbr/packaging.py", line 872, in get_version
          raise Exception("Versioning for this project requires either an sdist"
      Exception: Versioning for this project requires either an sdist tarball, or access to an upstream git repository.
      It's also possible that there is a mismatch between the package name in setup.cfg and the argument given to pbr.version.VersionInfo. Project name keystone was given, but was not able to be found.
      error in setup command: Error parsing /opt/stack/keystone/setup.cfg: Exception: Versioning for this project requires either an sdist tarball, or access to an upstream git repository. It's also possible that there is a mismatch between the package name in setup.cfg and the argument given to pbr.version.VersionInfo. Project name keystone was given, but was not able to be found.
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
+inc/python:pip_install:1                  exit_trap
+./stack.sh:exit_trap:523                  local r=1
++./stack.sh:exit_trap:524                  jobs -p
+./stack.sh:exit_trap:524                  jobs=
+./stack.sh:exit_trap:527                  [[ -n '' ]]
+./stack.sh:exit_trap:533                  '[' -f '' ']'
+./stack.sh:exit_trap:538                  kill_spinner
+./stack.sh:kill_spinner:433               '[' '!' -z '' ']'
+./stack.sh:exit_trap:540                  [[ 1 -ne 0 ]]
+./stack.sh:exit_trap:541                  echo 'Error on exit'
Error on exit
+./stack.sh:exit_trap:543                  type -p generate-subunit
+./stack.sh:exit_trap:544                  generate-subunit 1649867885 63 fail
+./stack.sh:exit_trap:546                  [[ -z /opt/stack/logs ]]
+./stack.sh:exit_trap:549                  /usr/bin/python3.8 /opt/stack/devstack/tools/worlddump.py -d /opt/stack/logs
World dumping... see /opt/stack/logs/worlddump-2022-04-13-163909.txt for details
+./stack.sh:exit_trap:558                  exit 1

Changing the owner of the directories using the following commands allows to go further in the installation process:

sudo chown -R root:root /opt/stack/horizon/
sudo chown -R root:root /opt/stack/nova/
sudo chown -R root:root /opt/stack/cinder/
etc.
However, I then get other errors related to permission issues, so I guess changing the directory ownership is not the solution.

Could you please help me resolve this? Thank you in advance!
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From fungi at yuggoth.org  Wed Apr 13 16:53:43 2022
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Wed, 13 Apr 2022 16:53:43 +0000
Subject: [devstack] Not able to install devstack on ubuntu 20.04
In-Reply-To:
References:
Message-ID: <20220413165342.x5p3j2s7iyw23t4n@yuggoth.org>

On 2022-04-13 16:41:29 +0000 (+0000), Molka Gharbaoui wrote:
> I am trying to install devstack by downloading the code through
> the following link: git clone
> https://opendev.org/openstack/devstack and then executing ./stack
> command which I used to use several times without any error.
[...]

A security fix for Git broke some assumptions DevStack makes when installing services from source repositories, and workarounds are in the process of being reviewed now. See this other ML thread for details:

http://lists.openstack.org/pipermail/openstack-discuss/2022-April/028160.html
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL:

From Molka.Gharbaoui at santannapisa.it  Wed Apr 13 18:07:36 2022
From: Molka.Gharbaoui at santannapisa.it (Molka Gharbaoui)
Date: Wed, 13 Apr 2022 18:07:36 +0000
Subject: R: [devstack] Not able to install devstack on ubuntu 20.04
Message-ID:

Thank you Jeremy for your prompt reply!

Changing the directories' ownership definitely does not work for me. Many other errors are generated. I will follow the threads to check if a better solution is suggested.
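For readers hitting the same failure: the Git security fix referenced above (CVE-2022-24765) makes Git refuse to operate in a repository directory owned by a different user than the one invoking it, which is why pbr's `git` calls fail when `sudo pip install` runs as root against the stack-owned checkout in /opt/stack. A rough sketch of the ownership test Git now performs (an illustrative simplification, not Git's actual code):

```python
import os
import tempfile

def repo_ownership_dubious(repo_path):
    """Mimic Git's CVE-2022-24765 check: a repository is 'dubious' when
    its directory is owned by a different uid than the current process."""
    return os.stat(repo_path).st_uid != os.geteuid()

# A directory we created ourselves is owned by our own uid, so it passes:
with tempfile.TemporaryDirectory() as own_dir:
    print(repo_ownership_dubious(own_dir))  # False

# But root inspecting /opt/stack/keystone (owned by 'stack') would get
# True, and real Git then aborts with an "unsafe repository" error unless
# the path is listed under safe.directory in the git configuration.
```

This is why the `sudo chown -R root:root` workaround moves the error around rather than fixing it: it just changes which user the ownership check fails for.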
________________________________
From: Jeremy Stanley
Sent: Wednesday, 13 April 2022 18:53
To: openstack-discuss at lists.openstack.org
Subject: Re: [devstack] Not able to install devstack on ubuntu 20.04
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From marcin.juszkiewicz at linaro.org  Wed Apr 13 18:29:27 2022
From: marcin.juszkiewicz at linaro.org (Marcin Juszkiewicz)
Date: Wed, 13 Apr 2022 20:29:27 +0200
Subject: [blazar][requirements] setuptools and python_version in upper constraints
In-Reply-To: <20220413150249.afi3gppvbqn725ar@mthode.org>
References: <20220413150249.afi3gppvbqn725ar@mthode.org>
Message-ID:

On 13.04.2022 at 17:02, Matthew Thode wrote:
>> I tracked down the regression to the upper constraint on setuptools. For
>> example, stable/yoga has:
>>
>> setuptools===59.6.0;python_version=='3.6'
>> setuptools===60.9.3;python_version=='3.8'
>>
>> It appears this is ignored in the py39 job so the job runs with the latest
>> setuptools. Indeed, there were some releases between March 10 and March 24.
>> I still have to figure out what changed in setuptools to cause this
>> behaviour.
>>
>> Question for requirements maintainers: is this expected behaviour, or
>> should upper constraints also include lines for python_version=='3.9' on
>> yoga?

> I plan on removing py36 and adding py39 constraints today or tomorrow.

So we will have ones for 3.8, other ones for 3.9 and then for 3.10 too?

Can we just do one set with "3.8 is minimal, if someone runs older then it is their problem"?

3.8 - Ubuntu 'focal' 20.04
3.9 - Debian 'bullseye' 11, CentOS Stream 9/RHEL 9 (and rebuilds)
3.10 - Ubuntu 'jammy' 22.04

Those are "main" distributions OpenStack Zed runs on. We should have one set with "python_version" limits used only when it is REALLY needed.
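For readers puzzled by why the py39 job ignored the pins quoted above: an environment marker such as python_version=='3.8' makes the whole constraint line inapplicable on any other interpreter, so with only 3.6 and 3.8 entries a 3.9 job is left unconstrained and installs the latest setuptools. A toy sketch of that marker filtering (a simplification for illustration only; pip implements the full PEP 508 marker grammar via the `packaging` library):

```python
# Toy evaluation of python_version markers in an upper-constraints file.
# Only handles the "name===version;python_version=='X.Y'" form used above.

def pinned_version(constraints, python_version):
    """Return the setuptools pin applying to this interpreter, or None."""
    for line in constraints:
        requirement, _, marker = line.partition(";")
        name, _, version = requirement.partition("===")
        if marker:
            wanted = marker.split("==")[1].strip().strip("'\"")
            if wanted != python_version:
                continue  # marker evaluates false: this pin is ignored
        if name.strip() == "setuptools":
            return version.strip()
    return None

yoga = [
    "setuptools===59.6.0;python_version=='3.6'",
    "setuptools===60.9.3;python_version=='3.8'",
]

print(pinned_version(yoga, "3.8"))  # 60.9.3
print(pinned_version(yoga, "3.9"))  # None -> latest setuptools gets installed
```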
From cboylan at sapwetik.org  Wed Apr 13 18:33:55 2022
From: cboylan at sapwetik.org (Clark Boylan)
Date: Wed, 13 Apr 2022 11:33:55 -0700
Subject: [blazar][requirements] setuptools and python_version in upper constraints
In-Reply-To:
References: <20220413150249.afi3gppvbqn725ar@mthode.org>
Message-ID:

On Wed, Apr 13, 2022, at 11:29 AM, Marcin Juszkiewicz wrote:
> On 13.04.2022 at 17:02, Matthew Thode wrote:
>>> I tracked down the regression to the upper constraint on setuptools. For
>>> example, stable/yoga has:
>>>
>>> setuptools===59.6.0;python_version=='3.6'
>>> setuptools===60.9.3;python_version=='3.8'
>>>
>>> It appears this is ignored in the py39 job so the job runs with the latest
>>> setuptools. Indeed, there were some releases between March 10 and March 24.
>>> I still have to figure out what changed in setuptools to cause this
>>> behaviour.
>>>
>>> Question for requirements maintainers: is this expected behaviour, or
>>> should upper constraints also include lines for python_version=='3.9' on
>>> yoga?
>
>> I plan on removing py36 and adding py39 constraints today or tomorrow.
>
> So we will have ones for 3.8, other ones for 3.9 and then for 3.10 too?
>
> Can we just do one set with "3.8 is minimal, if someone runs older then
> it is their problem"?

I don't think doing that would be a good idea (or possible in all cases). The idea here is that we're always trying to use the newest possible package versions. If a dependency drops support for 3.8 then you get a 3.8 specific entry for that python version and another for 3.9/3.10 for the newer stuff. It is possible (and likely) that a dependency could drop 3.8 and have newer versions for 3.9. It is also possible that a dependency could have no versions that satisfy all of 3.8, 3.9, and 3.10. Basically you have to accept that there may be entries for any version of python that you support due to the way dependencies handle python support, and the desire to have up to date dependencies.
> > 3.8 - Ubuntu 'focal' 20.04 > 3.9 - Debian 'bullseye' 11, CentOS Stream 9/RHEL 9 (and rebuilds) > 3.10 - Ubuntu 'jammy' 22.04 > > Those are "main" distributions OpenStack Zed runs on. We should have one > set with "python_version" limits used only when it is REALLY needed. From marios at redhat.com Wed Apr 13 19:42:36 2022 From: marios at redhat.com (Marios Andreou) Date: Wed, 13 Apr 2022 22:42:36 +0300 Subject: [TripleO] Final TripleO repos release for stable/victoria - any requests? In-Reply-To: References: Message-ID: On Wednesday, April 13, 2022, Alan Bishop wrote: > > > On Wed, Apr 13, 2022 at 5:59 AM Marios Andreou wrote: > >> Hello >> >> The stable/victoria branch for all tripleo repos will transition to >> Extended Maintenance in 2 weeks [1]. >> >> To prevent delays I have prepared a final victoria release at [2]. >> >> That [2] will be updated after its depends-on merges (puppet metadata >> bump) to pickup the latest victoria commits at that point. >> >> If there are any patches you want included then please speak up and >> I'll wait for and include those commits before updating >> releases/+/836921 [2]. >> > > Hi Marios, > > I've got one: https://review.opendev.org/c/openstack/tripleo-heat- > templates/+/837648 > > It just passed CI, so if you'd like to approve it... (wink, wink) > > ack thanks for raising it.. looks like Fulton got that merged o/ > Alan > > >> This is the last ever release to be made from stable/victoria. Once it >> goes to Extended Maintenance we can no longer release. >> >> I'll hold it for a few days.. Unless I hear otherwise I will update >> [2] on Monday and try to get it merged next week. >> >> thanks, marios >> >> [1] http://lists.openstack.org/pipermail/openstack-discuss/ >> 2022-April/028013.html >> [2] https://review.opendev.org/c/openstack/releases/+/836921 >> >> >> -- _sent from my mobile - sorry for spacing spelling etc_ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fungi at yuggoth.org Wed Apr 13 22:05:31 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 13 Apr 2022 22:05:31 +0000 Subject: [all] Devstack jobs are failing due to a git security fix In-Reply-To: <20220413115919.de7odplf7yk6uuf7@yuggoth.org> References: <20220413115919.de7odplf7yk6uuf7@yuggoth.org> Message-ID: <20220413220530.77yjsgcsh7wpsd4q@yuggoth.org> On 2022-04-13 11:59:20 +0000 (+0000), Jeremy Stanley wrote: [...] > Forgive me as caffeine is still finding its way into my veins, but > it has occurred to me that the error is occurring because we're > calling PBR (and thus Git) while installing the software, when > that's not strictly necessary. It happens because we're taking > advantage of pip's ability to call out to a build process before > installing, but we can always separate building and installing. The > former doesn't need root privs, and the latter doesn't need to call > PBR/Git. > > Update the install-from-source routine to build a wheel as stack and > then only sudo pip install the resulting wheel. I was able to make a successful go of this in https://review.opendev.org/837731 so if there's interest we have evidence it's possible to continue down that path. Unfortunately, it comes at the expense of losing editable mode installation (pip install -e, setup.py develop) as that doesn't use pip's normal package-then-install codepath and instead tightly couples the build and install steps. I've heard from a couple of people so far that editable mode support in DevStack is critical to keep, so it's probably better to resurrect the venv install solution in https://review.opendev.org/558930 since that would allow us to drop the `sudo pip install` nastiness once and for all. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: not available
URL:

From cboylan at sapwetik.org  Wed Apr 13 22:06:52 2022
From: cboylan at sapwetik.org (Clark Boylan)
Date: Wed, 13 Apr 2022 15:06:52 -0700
Subject: [all] Devstack jobs are failing due to a git security fix
In-Reply-To:
References:
Message-ID:

On Wed, Apr 13, 2022, at 12:11 AM, Ian Wienand wrote:
> On Tue, Apr 12, 2022 at 05:05:22PM -0700, Michael Johnson wrote:
>> tldr: All devstack based jobs are going to fail with newer versions of
>> git - don't bother rechecking
>>
>> git has released a security fix [1] that is starting to roll out in
>> distributions (Ubuntu focal for example) that will cause pbr to be
>> unable to access the package metadata for packages checked out locally
>> due to the directory ownership used in devstack.
>
> This turns out to be annoyingly complicated.
>
> Since devstack checks out all code as "stack" and then installs
> globally with "sudo pip install -e ...", pbr will be running in a
> directory owned by "stack" as root and its git calls will hit this
> failure.
>
> If we make the code directories owned by root, we now have additional
> problems. Several places do things in the code repositories --
> e.g. setup virtualenvs, run ./tools/*.sh scripts to generate sample
> config files and run tox as "stack" (tox then tries to install the
> source tree in its virtualenv -- if it's owned by root -- again --
> failure).
>
> I explored a bunch of these options in
>
> https://review.opendev.org/c/openstack/devstack/+/837636
>
> and anyone feel free to take over that and keep trying.
>
> The other option is to use the new config flag to mark our checkouts
> as safe. This is obviously simpler, but it seems like a very ugly
> thing for a nominally generic tool like devstack to do to your global
> git config.
This is done with > > https://review.opendev.org/c/openstack/devstack/+/837659 > > and appears to work; but will need backporting for grenade if we want > to take this path. This ended up being the quickest option to unblocking things so we backported it all the way through to Victoria then landed the changes from Victoria up to master in that order. This means that devstack testing should work again and you can recheck/approve/push changes once again. However, we noticed that these changes don't quite work on Ubuntu Bionic just on Ubuntu Focal. Dan pushed up https://review.opendev.org/c/openstack/devstack/+/837759 to address the Bionic problem and make unstack clean up after ourselves. Once this lands to master we can backport it using our typical backporting process. Finally fungi has been working on https://review.opendev.org/c/openstack/devstack/+/837731 to separate the package creation step from the package installation step. This allows us to build the python package as the stack user and do the install as root avoiding any git concerns about different ownership of repositories. As the commit message in that change notes this effectively means that we cannot have editable installs anymore. If we decide that is a necessary feature of devstack then I think we should look into resurrecting https://review.opendev.org/c/openstack/devstack/+/558930 to have devstack install into a global virtualenv. Then stack can own the virtualenv, and there is no git concern about file ownership. In the past this change sort of died out as it is quite a large change to how devstack operates and will potentially have significant fallout of its own if we land it and there just didn't seem to be a will to go through that. Maybe this situation has changed our opinion on that. Others should feel free to push updates to that change as I'm not sure I'll have time to dedicate to it again. 
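For anyone applying the "mark our checkouts as safe" approach by hand while the fixes propagate: it boils down to per-repository entries in the global git configuration. Conceptually the config ends up looking like the following (the paths here are illustrative examples, not the exact list DevStack writes):

```ini
# Entries in the global git config (e.g. ~/.gitconfig for root) so that
# git, invoked via pbr during "sudo pip install", accepts repositories
# owned by the unprivileged stack user.
[safe]
    directory = /opt/stack/devstack
    directory = /opt/stack/keystone
    directory = /opt/stack/nova
```

Each entry is equivalent to running `git config --global --add safe.directory <path>` for that checkout.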
> > When this kicked off I sent in a link to HN thinking that thanks to > our very upstream focused CI we were likely some of the first to hit > this; it's currently the top post so I think that is accurate that > this is having wide impact: > > https://news.ycombinator.com/item?id=31009675 > > It is probably worth keeping one eye on upstream for any developments > that might change our options. > > -i From ramishra at redhat.com Thu Apr 14 03:32:30 2022 From: ramishra at redhat.com (Rabi Mishra) Date: Thu, 14 Apr 2022 09:02:30 +0530 Subject: [blazar][requirements] setuptools and python_version in upper constraints In-Reply-To: References: Message-ID: On Wed, Apr 13, 2022 at 8:21 PM Pierre Riteau wrote: > Hello, > > In the blazar project, we have been seeing a job timeout failure in > openstack-tox-py39 affecting master and stable/yoga. tox starts the > "lockutils-wrapper python setup.py testr --slowest --testr-args=" process > which doesn't show progress until job timeout. > I think you should ideally be migrating to stestr which does not have the issue. I've revived https://review.opendev.org/c/openstack/blazar/+/581547. > It started happening sometime between 2022-03-10 16:01:39 (last success on > master) and 2022-03-24 16:35:15 (first timeout occurrence) [1], with no > change in blazar itself and few changes in requirements. > > I resumed debugging today and managed to reproduce it using Ubuntu 20.04 > (it doesn't happen on macOS). Here is the traceback after interrupting it > if anyone wants to take a look [2]. The python process is using 100% of the > CPU until interrupted. > > I tracked down the regression to the upper constraint on setuptools. For > example, stable/yoga has: > > setuptools===59.6.0;python_version=='3.6' > setuptools===60.9.3;python_version=='3.8' > > It appears this is ignored in the py39 job so the job runs with the latest > setuptools. Indeed, there were some releases between March 10 and March 24. 
> I still have to figure out what changed in setuptools to cause this > behaviour. > > Question for requirements maintainers: is this expected behaviour, or > should upper constraints also include lines for python_version=='3.9' on > yoga? > > Thanks, > Pierre Riteau (priteau) > > [1] > https://zuul.openstack.org/builds?job_name=openstack-tox-py39&project=openstack%2Fblazar&skip=0 > [2] https://paste.opendev.org/show/bZO7ELmTvfMUGJPdlQ4k/ > -- Regards, Rabi Mishra -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdhasman at redhat.com Thu Apr 14 07:26:31 2022 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Thu, 14 Apr 2022 12:56:31 +0530 Subject: [cinder][PTG] Summary of Zed PTG Message-ID: Hi All, Here is a summary of Cinder Zed PTG[1] conducted from 05th April - 8th April, 2022 from 1300 to 1700 UTC each day. Please look at the topic summary and the action items if you were the author for any topic. [1] https://wiki.openstack.org/wiki/CinderZedPTGSummary Thanks and regards Rajat Dhasmana -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Thu Apr 14 08:44:59 2022 From: pierre at stackhpc.com (Pierre Riteau) Date: Thu, 14 Apr 2022 10:44:59 +0200 Subject: [blazar][requirements] setuptools and python_version in upper constraints In-Reply-To: References: Message-ID: On Thu, 14 Apr 2022 at 05:32, Rabi Mishra wrote: > > > On Wed, Apr 13, 2022 at 8:21 PM Pierre Riteau wrote: > >> Hello, >> >> In the blazar project, we have been seeing a job timeout failure in >> openstack-tox-py39 affecting master and stable/yoga. tox starts the >> "lockutils-wrapper python setup.py testr --slowest --testr-args=" process >> which doesn't show progress until job timeout. >> > > I think you should ideally be migrating to stestr which does not have the > issue. I've revived > https://review.opendev.org/c/openstack/blazar/+/581547. > Many thanks Rabi! Switching to stestr fixes the issue. 
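For reference, the migration Rabi revived replaces the testr/lockutils wrapper invocation with stestr. The core of such a change is typically a `.stestr.conf` plus switching the tox command to `stestr run {posargs}`; a sketch of the config (the test path below is an assumption, check the actual review for the exact values):

```ini
# .stestr.conf -- replaces the old .testr.conf
[DEFAULT]
test_path=./blazar/tests
top_dir=./
```

With this in place, `tox` no longer goes through `python setup.py testr`, which sidesteps the setuptools behaviour that caused the hang.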
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ces.eduardo98 at gmail.com  Thu Apr 14 10:51:57 2022
From: ces.eduardo98 at gmail.com (Carlos Silva)
Date: Thu, 14 Apr 2022 07:51:57 -0300
Subject: [manila] Zed PTG summary
Message-ID:

Hello, zorillas and interested stackers!

Thank you for attending the PTG over the last week. We had a good audience and a lot of good discussions over the proposed topics. On the official etherpad [9] you will find the notes we took during the discussions, as well as links to additional etherpads and other references. The YouTube playlist [14] holds all the recordings we made over the week. Every video corresponds to one PTG day, and their descriptions contain the exact time frames of the discussions.

*== Xena cycle retrospective ==*
* We agreed and were happy with the events we have been doing in the community (collaborative review sessions, bugsquashes and some others). They help us to understand the changes that are coming in every cycle, and we also get to share knowledge with the contributors. For this cycle we will define different themes for our bugsquashes. Some dates are already defined.
* We agreed on the importance of reviews from different affiliations and we decided on a few practices to involve more community members in the review process.
* We brought up the enhancements we need to make in the Manila UI, and mentioned some features that are lacking for feature parity compared to the Manila core code.
* We decided to host a hackathon at the beginning of the cycle, as we had a very positive output from the last one we had. This one can focus on the functional testing of the OSC for Manila, which we are lacking.
* During the Z cycle we will host the first code walkthrough for Manila (kudos to gouthamr for the idea).
The intention of this event is to help new contributors better understand the code and allow them to ask questions, as well as to explain the structure of our repositories and go over some key parts of Manila.
* We discussed the difficulties we are facing with ethercalc and we got to a point where we saw there was no alternative way to replace it.

More details about the retrospective are available in [1].

*== Thin Provisioning and Oversubscription improvements ==*
* haixin and gouthamr brought up this topic after a few discussions on how the oversubscription issue should be solved in Manila.
* Storage pools that support thin provisioning are open to "oversubscription".
* We have a ratio for allowing oversubscription and it also influences the way the manila scheduler filters act. We currently have issues with that calculation.
* If the backends do not report their allocated_capacity_gb, it will be inaccurate.
* We can have the share manager perform a more precise calculation.
* Action items:
** haixin will write a spec with more details on how this issue is going to be addressed.

*== NetApp ONTAP - migration from ZAPI to REST API ==*
* nahimsouza and fabioaurelio talked about the NetApp ONTAP drivers that are currently using ZAPI for communication with NetApp hardware.
* NetApp is now planning to deprecate ZAPI in favor of REST. This means that if they find issues within the ZAPI calls, NetApp won't fix them anymore.
* The side effect of this is that NetApp wants to migrate their driver calls from ZAPI to REST, but it will impact pretty much all operations the driver performs.
* These are going to be huge changes and they intend to backport them to the oldest maintained release.
* They have lots of customers using the latest versions of ONTAP, but those customers do not often upgrade their OpenStack deployments.
* They are looking for approaches on how to backport these changes even further.
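Returning to the thin-provisioning topic above: the scheduler's oversubscription decision hinges on comparing the already provisioned (subscribed) capacity against the physical capacity scaled by the ratio, which is why a backend not reporting allocated capacity makes the check inaccurate. A simplified sketch of that filter logic (not Manila's actual code; haixin's spec will define the real calculation):

```python
def pool_can_host(requested_gb, total_gb, provisioned_gb,
                  max_over_subscription_ratio, thin=True):
    """Decide whether a pool can take a new share of requested_gb.

    For thin provisioning, the sum of provisioned sizes may exceed the
    physical capacity up to the oversubscription ratio. If the backend
    does not report provisioned/allocated capacity, provisioned_gb is
    wrong and this check degrades -- the problem discussed at the PTG.
    """
    if not thin:
        # Thick provisioning: every GiB must be physically available.
        return provisioned_gb + requested_gb <= total_gb
    virtual_capacity = total_gb * max_over_subscription_ratio
    return provisioned_gb + requested_gb <= virtual_capacity

# A 100 GiB pool with ratio 2.0 behaves as 200 GiB of subscribable space:
print(pool_can_host(50, total_gb=100, provisioned_gb=120,
                    max_over_subscription_ratio=2.0))  # True
print(pool_can_host(90, total_gb=100, provisioned_gb=120,
                    max_over_subscription_ratio=2.0))  # False
```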
*== Add support for automatic snapshot creation and removal ==*
* Kiran brought up the solution proposed in a spec [2], which intends to add an automatic snapshot creation policy and an automatic snapshot deletion policy for a share, considering an interval defined by an administrator.
* We asked for possible alternatives, such as automating the snapshot creation via manilaclient/REST API, and also raised a few questions that would help us confirm the need for enhancing the API.
* Action items:
** Kiran will come back with updates on the discussion points.

*== Manila UI updates ==*
* vkmc discussed the updates for Manila UI alongside the NDSU students.
* They talked about the plans we have for the UI in this cycle, as well as bringing up the feature-parity issue we currently have.
* We started by filing blueprints and identifying the scope of necessary changes to get feature parity.
* A taiga board [3] was created to track the progress.
* We also have a few blueprints [4] for that.

*== Consistent and Secure RBAC changes ==*
* gouthamr, gmann and vhari brought up the plans for the Z cycle, which would be continuing the plans highlighted in [5].

For Z, the idea is to:
* Harden Phase 1 by re-evaluating the use of "admin" at system scope, disallowing system scope users from creating/manipulating project resources
* Continue working on tempest protection tests. Liron has a patch pending reviews that starts this work [6]
* Test existing tempest API/scenario tests with the new RBAC defaults and adjust any credential use or API expectations by the end of the cycle
* Start working on phase 2 per the community goal in [5]

Beyond Z:
* A release - Phase 3: System Reader and System Member in the default RBAC
* B release: Remove deprecated policies

*== OpenStack Client updates for Manila ==*
* maaritamm presented the progress we have made so far on the OSC integration.
* We are quite close to feature parity.
The idea is to reach feature parity in the Z cycle and add a deprecation warning to the native client shell in python-manilaclient.
* We are lacking functional tests and we came up with a plan of organizing a hackathon to be hosted close to the z-3 milestone.
* This hackathon will be focused on addressing the functional tests for the client and wrapping it up.
* As enforced in the past cycle, maintainers implementing new functionalities in python-manilaclient must _only_ implement them for OSC.
* New features in the python-manilaclient shell will actively be discouraged.
* A more detailed version is available in [7].

*== Support for highly available NFS Ganesha in the CephFS driver ==*
* vkmc, fmount and gouthamr presented the Ceph orchestrator tooling that now natively supports deploying nfs-ganesha gateway servers as "ceph-nfs" cluster daemons in active/active configurations.
* The Ceph community has also added support for nfs APIs to create and manipulate exports on such ceph-nfs daemons.
* We discussed the changes needed to support these in Manila and how users can migrate their workloads. To provide adequate control to deployers and end users, we agreed to introduce a new protocol helper layer that will live alongside the current DBUS API helper.
* There are few issues with testing and there is more information about them in [9]. As next steps we have: * Testing with CentOS as the devstack host and Manila's client, so we have different approaches of testing and can ensure that ubuntu testing isn't compromising the VMs and causing the possible lack of resources we are experiencing. * Splitting CI changes into different test sets focusing on different services * Turning on cephadm based deployment in the multi-node job *== Topic #Bug updates ==** vhari has shown some bug analytics and the progress we have been making over the past cycles. * We have a bunch of bugs that are old and no one has been touching them in a while (at least 2 years). * Maintainers should check launchpad and evaluate those bugs we created and/or are assigned to us and see if they are still relevant. If they are, we should keep active on them, otherwise we can just let it fall into the auto-expiry policy. * Bugsquashes have been efficient for us, we should keep doing them. ** fabioaurelio suggested a bugsquash theme that would double up as a review jam. We'd be focusing on looking at the lingering changes that are closing bugs and getting closure for them. * When we don't have the time to do the bug triaging during the Manila meeting, we will do the triaging in #openstack-manila after. * We will be hosting our bugsquashes in Z-1 and the other one between Z-2 and 3. * Action items: ** carloss will update the change with manila specific milestones to reflect our scheduled events *== Metadata API status update ==** This is an effort that started some cycles ago. We intend to allow metadata on more user facing resources in Manila [10], and do that in a generic way. * Over the Yoga cycle that was accomplished for shares, and there is a change to also permit that for share snapshots. 
What's next: * Getting the share snapshots changes merged over the Z cycle (M-1) * Getting the new metadata mechanism also implemented for * Share Export Locations - Target: M3 (at least API + CLI) * Share Access Rules - Target: M3 (at least API + CLI) * Share Groups - Target: M3 (at least API + CLI) *== NetApp CI Zuul v3 Migration - Challenges and lessons learned ==** Building and maintaining a stable CI system has been a huge pain point to Manila's "third party" vendor storage developers for the past few releases. * NetApp contributors have talked about their Zuul v3 setup using Software Factory. They brought up some information on their Zuul v3 CI setup and talked about some difficulties they faced during the setup. * The recording is on Manila's YouTube channel [11]. There were questions asked and raised and the discussion documented in [9]. * Action items: * Through the release cycle, we hope to offer more venues for more knowledge sharing regarding third party CI maintenance. *== FIPS compliance ==** carloss ashrod98 and ade_lee talked the attendance through the challenges and testing for FIPS * We've been through the next steps and what has been done so far in Manila ** Few changes were proposed and merged ** Our CI job is working fine with CentOS 8 but we decided we should already have it on CentOS 9 * We needed to monkey patch paramiko for some drivers ** gouthamr is concerned that if this patching paramiko change is merged, this could break older branches in the future ** A discussion in openstack-discuss will be started to ensure if this is the best approach for this issue. * At the moment, we should stick with testing using CentOS but testing with Ubuntu is under way. 
* Compliance steps will start in the Z release
* ade_lee started an audit of OpenStack services to identify non-FIPS-compliant libraries
* The goal should be completed in the AA cycle
* Action items:
** Start a discussion on patching paramiko or switching to another library
** Submit the jobs for python-manilaclient, manila-ui and manila-tempest-plugin

*== Tech Debt ==*
These are a few items we would like to get moving for Manila in the next cycles. They usually involve a community goal, a request for enhancement or recommendations from the community:
* Migrating from rootwrap to privsep: in progress, a few changes were merged over the Y cycle and we intend to make even more progress over Z.
* Dropping python-keystoneclient from python-manilaclient: this is also in progress and we got a few related changes merged. We will bring this up again in the upcoming manila meetings.
* Cinder/Nova online extensions support with the generic driver: this hasn't started yet and we are looking for an owner. This is more of a request for enhancement that would benefit those using the generic driver.
* 26 volumes limit in the generic driver: this is something that was called out in the generic driver documentation. We agreed to try an approach where we would update the manila image and check the outcome.
* Publish the container driver image on tarballs.openstack.org (or quay.io): we had a few changes to the container driver in the past cycles. One piece of functionality was implemented on it (add/update security services in share networks that are in use), and for that change to be tested, the container must have OpenLDAP installed. We have a way to publish new artifacts and we could benefit from reusing that.
* Action items:
** carloss will work on having the new image proposed
** Bring the keystoneclient change up in the upcoming manila meetings.
** Update the manila image, setting hw_scsi_model=virtio-scsi, and investigate whether the 26 volumes limit will be solved
** Work on finding people interested in contributing to the Cinder/Nova online extensions support with the generic driver

*== Manila CSI updates ==*
* gman0, gouthamr and vkmc talked us through some of the updates in the Manila CSI since the last PTG [12].
* We don't currently test using RWO volumes with manila-csi in cloud-provider-openstack, so we could add an "external-attacher" sidecar to ensure RWO-ness
* Doing backups with manila-csi and velero/restic
* Action items:
** Open an issue against the CPO repo and work on testing with RWO
** Open a redmine tracker to call out in the docs the limitation of giving huge names to snapshots

*== Community Hour ==*
* gouthamr brought up some details about the release cadence adjustment [13]
* He also mentioned a few implications of this and how deprecations, testing and upgrades will work
* A few details on the deprecation windows and how we should be working and targeting changes in the next few cycles.
* A job was recently merged in Manila to test the upgrade from tick to tick releases
* Action items:
** Update the tick release job to the Xena branch

[1] https://etherpad.opendev.org/p/manila-yoga-retrospective
[2] https://review.opendev.org/c/openstack/manila-specs/+/823165
[3] https://review.opendev.org/c/openstack/manila-specs/+/823165
[4] https://blueprints.launchpad.net/manila-ui/+spec/api-version-features
[5] https://governance.openstack.org/tc/goals/selected/consistent-and-secure-rbac.html
[6] https://review.opendev.org/c/openstack/manila-tempest-plugin/+/805938
[7] https://etherpad.opendev.org/p/zorilla-ptg-manila-osc
[8] https://review.opendev.org/c/openstack/devstack-plugin-ceph/+/826484
[9] https://etherpad.opendev.org/p/zorilla-ptg-manila
[10] https://specs.openstack.org/openstack/manila-specs/specs/yoga/metadata-for-share-resources.html
[11] https://www.youtube.com/watch?v=Pn1ZEnlHE7A
[12] https://etherpad.opendev.org/p/zorilla-ptg-manila#L413
[13] https://governance.openstack.org/tc/resolutions/20220210-release-cadence-adjustment.html
[14] https://www.youtube.com/watch?v=AIqrLdprkaE&list=PLnpzT0InFrqCjifjP1OzFgnfiB1j0j6F2
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From christian.rohmann at inovex.de  Thu Apr 14 13:24:37 2022
From: christian.rohmann at inovex.de (Christian Rohmann)
Date: Thu, 14 Apr 2022 15:24:37 +0200
Subject: [puppet] How to disable installation of mysql::server and only create databases?
In-Reply-To: 
References: <6ab28227-52d9-f93e-089d-3f9e099e3776@inovex.de>
Message-ID: 

Hey Takashi,

On 13/04/2022 16:01, Takashi Kajinami wrote:
> This is basically because the implementation uses some resources like
> mysql_user
> > Technically speaking you can remove that include as long as you can > prepare all of > the required resources with these resources but that'd be quite tricky. Thanks for the quick and thorough response. I worked around the issue by just using the mysql::db class directly. It's actually clearly documented on how to use it on an existing server and without installing or managing it: ?* https://forge.puppet.com/modules/puppetlabs/mysql#work-with-an-existing-server Regards and thanks again! Christian From dtantsur at redhat.com Thu Apr 14 14:27:32 2022 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Thu, 14 Apr 2022 16:27:32 +0200 Subject: [all] Devstack jobs are failing due to a git security fix In-Reply-To: References: Message-ID: On Thu, Apr 14, 2022 at 12:12 AM Clark Boylan wrote: > On Wed, Apr 13, 2022, at 12:11 AM, Ian Wienand wrote: > > On Tue, Apr 12, 2022 at 05:05:22PM -0700, Michael Johnson wrote: > > 65;6602;1c> tldr: All devstack based jobs are going to fail with newer > > versions of > >> git - don't bother rechecking > >> > >> git has released a security fix [1] that is starting to roll out in > >> distributions (Ubuntu focal for example) that will cause pbr to be > >> unable to access the package metadata for packages checked out locally > >> due to the directory ownership used in devstack. > > > > This turns out to be annoyingly complicated. > > > > Since devstack checks out all code as "stack" and then installs > > globally with "sudo pip install -e ...", pbr will be running in a > > directory owned by "stack" as root and its git calls will hit this > > failure. > > > > If we make the code directories owned by root, we now have additional > > problems. Several places do things in the code repositories -- > > e.g. setup virtualenvs, run ./tools/*.sh scripts to generate sample > > config files and run tox as "stack" (tox then tries to install the > > source tree in it's virtualenv -- if it's owned by root -- again -- > > failure). 
> > > > I explored a bunch of these options in > > > > https://review.opendev.org/c/openstack/devstack/+/837636 > > > > and anyone feel free to take over that and keep trying. > > > > The other option is to use the new config flag to mark our checkouts > > as safe. This is obviously simpler, but it seems like a very ugly > > thing for a nominally generic tool like devstack to do to your global > > git config. This is done with > > > > https://review.opendev.org/c/openstack/devstack/+/837659 > > > > and appears to work; but will need backporting for grenade if we want > > to take this path. > > This ended up being the quickest option to unblocking things so we > backported it all the way through to Victoria then landed the changes from > Victoria up to master in that order. This means that devstack testing > should work again and you can recheck/approve/push changes once again. > > However, we noticed that these changes don't quite work on Ubuntu Bionic > just on Ubuntu Focal. Dan pushed up > https://review.opendev.org/c/openstack/devstack/+/837759 to address the > Bionic problem and make unstack clean up after ourselves. Once this lands > to master we can backport it using our typical backporting process. > > Finally fungi has been working on > https://review.opendev.org/c/openstack/devstack/+/837731 to separate the > package creation step from the package installation step. This allows us to > build the python package as the stack user and do the install as root > avoiding any git concerns about different ownership of repositories. As the > commit message in that change notes this effectively means that we cannot > have editable installs anymore. > > If we decide that is a necessary feature of devstack then I think we > should look into resurrecting > https://review.opendev.org/c/openstack/devstack/+/558930 to have devstack > install into a global virtualenv. Then stack can own the virtualenv, and > there is no git concern about file ownership. 
In the past this change sort > of died out as it is quite a large change to how devstack operates and will > potentially have significant fallout of its own if we land it and there > just didn't seem to be a will to go through that. Maybe this situation has > changed our opinion on that. Others should feel free to push updates to > that change as I'm not sure I'll have time to dedicate to it again. > As a data point: maintaining bifrost has become much easier once we did a similar thing and started using a virtualenv. Dmitry > > > > > When this kicked off I sent in a link to HN thinking that thanks to > > our very upstream focused CI we were likely some of the first to hit > > this; it's currently the top post so I think that is accurate that > > this is having wide impact: > > > > https://news.ycombinator.com/item?id=31009675 > > > > It is probably worth keeping one eye on upstream for any developments > > that might change our options. > > > > -i > > -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From smooney at redhat.com Thu Apr 14 14:47:56 2022 From: smooney at redhat.com (Sean Mooney) Date: Thu, 14 Apr 2022 15:47:56 +0100 Subject: [all] Devstack jobs are failing due to a git security fix In-Reply-To: References: Message-ID: <0b4e8913dedab3514ee11510aae015a3c59825dd.camel@redhat.com> On Thu, 2022-04-14 at 16:27 +0200, Dmitry Tantsur wrote: > On Thu, Apr 14, 2022 at 12:12 AM Clark Boylan wrote: > > > On Wed, Apr 13, 2022, at 12:11 AM, Ian Wienand wrote: > > > On Tue, Apr 12, 2022 at 05:05:22PM -0700, Michael Johnson wrote: > > > 65;6602;1c> tldr: All devstack based jobs are going to fail with newer > > > versions of > > > > git - don't bother rechecking > > > > > > > > git has released a security fix [1] that is starting to roll out in > > > > distributions (Ubuntu focal for example) that will cause pbr to be > > > > unable to access the package metadata for packages checked out locally > > > > due to the directory ownership used in devstack. > > > > > > This turns out to be annoyingly complicated. > > > > > > Since devstack checks out all code as "stack" and then installs > > > globally with "sudo pip install -e ...", pbr will be running in a > > > directory owned by "stack" as root and its git calls will hit this > > > failure. > > > > > > If we make the code directories owned by root, we now have additional > > > problems. Several places do things in the code repositories -- > > > e.g. setup virtualenvs, run ./tools/*.sh scripts to generate sample > > > config files and run tox as "stack" (tox then tries to install the > > > source tree in it's virtualenv -- if it's owned by root -- again -- > > > failure). > > > > > > I explored a bunch of these options in > > > > > > https://review.opendev.org/c/openstack/devstack/+/837636 > > > > > > and anyone feel free to take over that and keep trying. > > > > > > The other option is to use the new config flag to mark our checkouts > > > as safe. 
This is obviously simpler, but it seems like a very ugly > > > thing for a nominally generic tool like devstack to do to your global > > > git config. This is done with > > > > > > https://review.opendev.org/c/openstack/devstack/+/837659 > > > > > > and appears to work; but will need backporting for grenade if we want > > > to take this path. > > > > This ended up being the quickest option to unblocking things so we > > backported it all the way through to Victoria then landed the changes from > > Victoria up to master in that order. This means that devstack testing > > should work again and you can recheck/approve/push changes once again. > > > > However, we noticed that these changes don't quite work on Ubuntu Bionic > > just on Ubuntu Focal. Dan pushed up > > https://review.opendev.org/c/openstack/devstack/+/837759 to address the > > Bionic problem and make unstack clean up after ourselves. Once this lands > > to master we can backport it using our typical backporting process. > > > > Finally fungi has been working on > > https://review.opendev.org/c/openstack/devstack/+/837731 to separate the > > package creation step from the package installation step. This allows us to > > build the python package as the stack user and do the install as root > > avoiding any git concerns about different ownership of repositories. As the > > commit message in that change notes this effectively means that we cannot > > have editable installs anymore. > > > > If we decide that is a necessary feature of devstack then I think we > > should look into resurrecting > > https://review.opendev.org/c/openstack/devstack/+/558930 to have devstack > > install into a global virtualenv. Then stack can own the virtualenv, and > > there is no git concern about file ownership. 
> > In the past this change sort of died out as it is quite a large change
> > to how devstack operates and will potentially have significant fallout
> > of its own if we land it and there just didn't seem to be a will to go
> > through that. Maybe this situation has changed our opinion on that.
> > Others should feel free to push updates to that change as I'm not sure
> > I'll have time to dedicate to it again.
> >
> 
> As a data point: maintaining bifrost has become much easier once we did a
> similar thing and started using a virtualenv.
Apparently issues with ironic were one of the things that stopped this
progressing in the past, so perhaps your experience with virtualenvs and
bifrost could help unblock this if we review that effort.
Using a virtualenv will isolate us from the distro packages, which should
allow us to remove a lot of the fixups in
https://github.com/openstack/devstack/blob/master/tools/fixup_stuff.sh
so there are definitely advantages to be had.
Developers will just have to learn to activate the venv on the CLI or in
their IDE, but it would allow much of the existing workflow to be kept
without needing to use sudo for python package installs.
> 
> Dmitry
> 
> > >
> > > When this kicked off I sent in a link to HN thinking that thanks to
> > > our very upstream focused CI we were likely some of the first to hit
> > > this; it's currently the top post so I think that is accurate that
> > > this is having wide impact:
> > >
> > > https://news.ycombinator.com/item?id=31009675
> > >
> > > It is probably worth keeping one eye on upstream for any developments
> > > that might change our options.
> > >
> > > -i
> 

From peljasz at yahoo.co.uk  Thu Apr 14 16:52:52 2022
From: peljasz at yahoo.co.uk (lejeczek)
Date: Thu, 14 Apr 2022 17:52:52 +0100
Subject: client authentication - ? gui okey but cli fails
References: 
Message-ID: 

Hi guys.

An end user here thus please tailor your advice accordingly -
meaning that I have no admin access.
With GUI in a web browser I do log in but with CLI I get "usual": The request you have made requires authentication. (HTTP 401) (Request-ID:.... (rc file downloaded off the GUI) What can be the issue? many thanks, L. From katonalala at gmail.com Thu Apr 14 17:42:45 2022 From: katonalala at gmail.com (Lajos Katona) Date: Thu, 14 Apr 2022 19:42:45 +0200 Subject: [neutron] Drivers meeting - Friday 15.4.2022 - cancelled Message-ID: Hi Neutron Drivers! We have Good Friday this week, so let's cancel the drivers meeting for this week. See you at the meeting next week. Lajos Katona (lajoskatona) -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephenfin at redhat.com Thu Apr 14 18:47:10 2022 From: stephenfin at redhat.com (Stephen Finucane) Date: Thu, 14 Apr 2022 19:47:10 +0100 Subject: [murano][octavia][sahara][zaqar][zun][oslo] Pending removal of 'oslo_db.sqlalchemy.test_base' Message-ID: o/ This is a heads up to the maintainers of the aforementioned projects that the oslo team are planning to remove the 'oslo_db.sqlalchemy.test_base' module this cycle. This module has been deprecated since 2015 and we want to get rid of it to reduce load on the overburdened oslo maintainers. I have already fixed the issue in a couple of projects. These can be used as blueprints for fixing the remaining affected projects: * masakari (https://review.opendev.org/c/openstack/masakari/+/802761) * glance (https://review.opendev.org/c/openstack/glance/+/802762) * manila (https://review.opendev.org/c/openstack/manila/+/802763) I would love to fix the remaining projects but my limited time is currently focused elsewhere. The oslo.db change is available at [1]. We'd like this to be merged in the next month but we can push that out to later in the cycle if teams need more time. Just shout. Please let us know if you have any concerns or if the above changes are not sufficient as a guide for how to address these issues. 
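For teams picking this up: the reference patches above swap the deprecated base classes in 'oslo_db.sqlalchemy.test_base' for the fixtures in 'oslo_db.sqlalchemy.test_fixtures'. The underlying idea -- each test provisions its own throwaway schema instead of inheriting shared state from a base class -- can be sketched with nothing but the standard library (all names below are invented for illustration and are not oslo.db API):

```python
import sqlite3
import unittest


class FreshSchemaTestCase(unittest.TestCase):
    """Illustrative only: each test provisions (and tears down) its own
    database, mirroring the per-test-fixture style that replaces the
    shared DbTestCase-style base class."""

    def setUp(self):
        # A brand-new in-memory database per test method.
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE shares (id INTEGER PRIMARY KEY, name TEXT)")
        self.addCleanup(self.db.close)

    def test_insert_is_isolated(self):
        self.db.execute("INSERT INTO shares (name) VALUES ('share-1')")
        count, = self.db.execute("SELECT COUNT(*) FROM shares").fetchone()
        self.assertEqual(1, count)

    def test_schema_is_fresh(self):
        # A second test sees an empty table, not leftovers from the first.
        count, = self.db.execute("SELECT COUNT(*) FROM shares").fetchone()
        self.assertEqual(0, count)
```

The real fixtures additionally handle opportunistic backends (MySQL/PostgreSQL when available), but the isolation shape is the same.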
Cheers,
Stephen

[1] https://review.opendev.org/c/openstack/oslo.db/+/798136

From stephenfin at redhat.com  Thu Apr 14 18:56:47 2022
From: stephenfin at redhat.com (Stephen Finucane)
Date: Thu, 14 Apr 2022 19:56:47 +0100
Subject: [cyborg][zun][masakari][freezer-api][heat][tacker][oslo] Pending removal of 'oslo_db.sqlalchemy.enginefacade.LegacyEngineFacade'
Message-ID: 

o/

This is a heads up to the maintainers of the aforementioned projects that the
oslo team are planning to remove the
'oslo_db.sqlalchemy.enginefacade.LegacyEngineFacade' class and related
'get_legacy_facade' helpers this cycle. The legacy engine facade pattern has
been deprecated since 2015 and we want to get rid of it to reduce load on the
overburdened oslo maintainers. The preferred alternative is to rely on
oslo.context RequestContext-based session management. The work required to
migrate can be quite significant depending on how large your DB API is, but it
is pretty mechanical. This issue has already been fixed in a couple of
projects. These can be used as blueprints for fixing the remaining affected
projects:

* cinder (https://review.opendev.org/q/topic:remove-legacyfacade)
* nova (https://review.opendev.org/q/topic:bp%252Fnew-oslodb-enginefacade)

We expect this one to take affected projects a while to resolve, so we won't
remove this module until near the end of the cycle. However, this thing really
needs to be removed sooner rather than later.

Please let us know if you have any concerns or if the above changes are not
sufficient as a guide for how to address these issues.
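As a rough illustration of the target pattern -- transactions scoped to a per-request context object rather than obtained from a module-global facade -- here is a stdlib-only sketch. The writer_using helper, table, and class are invented stand-ins; the real API is enginefacade.writer.using(context) and friends from oslo.db, with oslo.context's RequestContext carrying the transaction state:

```python
import contextlib
import sqlite3


class RequestContext:
    """Stand-in for oslo.context's RequestContext, which carries the
    transaction state in the real oslo.db API."""


# One process-wide engine, configured once at startup.
engine = sqlite3.connect(":memory:")
engine.execute("CREATE TABLE instances (uuid TEXT PRIMARY KEY)")


@contextlib.contextmanager
def writer_using(context):
    """Invented helper mimicking enginefacade.writer.using(context):
    commit on success, roll back on error."""
    try:
        yield engine
        engine.commit()
    except Exception:
        engine.rollback()
        raise


def create_instance(context, uuid):
    # DB API methods receive the request context and scope their
    # transaction to it, instead of calling get_legacy_facade().
    with writer_using(context) as session:
        session.execute("INSERT INTO instances (uuid) VALUES (?)", (uuid,))


ctx = RequestContext()
create_instance(ctx, "c0ffee")
```

The migration is mostly rewriting each DB API method to take a context and use the reader/writer context managers, which is why it is mechanical even when the API surface is large.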
Cheers,
Stephen

From gmann at ghanshyammann.com  Thu Apr 14 19:19:23 2022
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Thu, 14 Apr 2022 14:19:23 -0500
Subject: [all][tc] Dropping the lower constraints maintenance
Message-ID: <180298412b2.f53997df67944.7250649137990774360@ghanshyammann.com>

Hello Everyone,

In the Zed cycle PTG, the TC discussed the lower constraints maintenance again
and, after considering all the existing and current issues (discussed many
times in the ML thread[1]), we agreed to drop the lower-constraints.txt file,
the tox env, and its testing on master as well as on stable branches. But for
reference to anyone using them (Debian, as we know), we will keep them as-is
in the requirements.txt file with our best effort to keep them up to date.

I have proposed the TC resolution for this; if you have any feedback please
add it on the Gerrit patch:

- https://review.opendev.org/c/openstack/governance/+/838004

[1] http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019659.html
http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019390.html
http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019672.html

-gmann

From laurentfdumont at gmail.com  Thu Apr 14 21:54:51 2022
From: laurentfdumont at gmail.com (Laurent Dumont)
Date: Thu, 14 Apr 2022 17:54:51 -0400
Subject: client authentication - ? gui okey but cli fails
In-Reply-To: 
References: 
Message-ID: 

Hello!

- What CLI command are you using?
- Can you paste the output of the command with --debug?

On Thu, Apr 14, 2022 at 12:56 PM lejeczek wrote:

> Hi guys.
>
> An end user here thus please tailor your advice accordingly -
> meaning that I have no admin access.
>
> With GUI in a web browser I do log in but with CLI I get
> "usual":
>
> The request you have made requires authentication. (HTTP
> 401) (Request-ID:....
>
> (rc file downloaded off the GUI)
> What can be the issue?
> many thanks, L.
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From gthiemonge at redhat.com Fri Apr 15 06:06:30 2022 From: gthiemonge at redhat.com (Gregory Thiemonge) Date: Fri, 15 Apr 2022 08:06:30 +0200 Subject: [murano][octavia][sahara][zaqar][zun][oslo] Pending removal of 'oslo_db.sqlalchemy.test_base' In-Reply-To: References: Message-ID: Thanks for the heads up Stephen, we will take a look in Octavia On Thu, Apr 14, 2022 at 8:52 PM Stephen Finucane wrote: > o/ > > This is a heads up to the maintainers of the aforementioned projects that > the > oslo team are planning to remove the 'oslo_db.sqlalchemy.test_base' module > this > cycle. This module has been deprecated since 2015 and we want to get rid > of it > to reduce load on the overburdened oslo maintainers. I have already fixed > the > issue in a couple of projects. These can be used as blueprints for fixing > the > remaining affected projects: > > * masakari (https://review.opendev.org/c/openstack/masakari/+/802761) > * glance (https://review.opendev.org/c/openstack/glance/+/802762) > * manila (https://review.opendev.org/c/openstack/manila/+/802763) > > I would love to fix the remaining projects but my limited time is currently > focused elsewhere. The oslo.db change is available at [1]. We'd like this > to be > merged in the next month but we can push that out to later in the cycle if > teams > need more time. Just shout. > > Please let us know if you have any concerns or if the above changes are not > sufficient as a guide for how to address these issues. > > Cheers, > Stephen > > [1] https://review.opendev.org/c/openstack/oslo.db/+/798136 > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Fri Apr 15 09:24:39 2022 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 15 Apr 2022 11:24:39 +0200 Subject: [largescale-sig] Large Scale SIG Forum session in Berlin? 
Message-ID: <4dee1ef9-b938-92f7-cf21-85425ed37988@openstack.org>

Hi everyone,

The CFP for Forum sessions is open until Wednesday next week at:
https://cfp.openstack.org/app/berlin-2022/18/presentations

Should we file a session for the Large Scale SIG? It could be a generic
session ("Meet the Large Scale SIG") or a more precise topic.

-- 
Thierry Carrez (ttx)

From yasufum.o at gmail.com  Fri Apr 15 12:40:36 2022
From: yasufum.o at gmail.com (Yasufumi Ogawa)
Date: Fri, 15 Apr 2022 21:40:36 +0900
Subject: [tacker] Zed PTG summary
Message-ID: 

Hi,

Thank you for joining the PTG. Here is a summary of the Zed PTG of Tacker. We
discussed 17 proposals over three days and reached agreement on all of them.
The details of the proposals are on the PTG's etherpad [1].

- heat-translator maintenance
  - Ayumu Ueha proposed himself as a core of heat-translator from the Tacker
    team and was approved by the team.
- Enhance security of APIs from Tacker by validating the certificate of a
  destination server.
- Enhancement of the Tacker-horizon dashboard to enable some Day2 operations
  (possibly including a part of Day0/1 operations), such as manual heal,
  modify VNF, changing the configuration of VNFM, etc., and monitoring
  through the GUI. This blueprint proposes to extend the tacker-horizon
  dashboard with some new interfaces and functionalities.
- Support VNFs using hardware acceleration such as GPU, FPGA, ASIC.
- Remove restrictions on using Helm charts.
- v1 API refactoring: in order to maintain and extend Tacker continuously,
  it's important to improve the maintainability of the v1 API.
- CLI support for paging query results, to improve usability of the CLI tool.
  - We still need to discuss how we provide the pagination of the results.
- Support multiple artifacts in the ansible driver. It is already under
  review: https://review.opendev.org/c/openstack/tacker/+/836107
- Enhance multi-tenant policy in LCM: enhancement of the multi-tenant policy
  in the VNF lifecycle management.
It enables a non-admin role user to instantiate VNF, and defines tenancy isolation between admin and the non-admin role users. - Enhancement of CNF operations for k8s usecases. - https://blueprints.launchpad.net/tacker/+spec/database synchronization - https://blueprints.launchpad.net/tacker/+spec/enhance-cnf-lcm - https://blueprints.launchpad.net/tacker/+spec/enhancement-container-update - https://blueprints.launchpad.net/tacker/+spec/suport-openid-k8s-vim - https://blueprints.launchpad.net/tacker/+spec/support-instantiationlevel-cnf - Support of automatic operations - https://blueprints.launchpad.net/tacker/+spec/support-auto-lcm - https://blueprints.launchpad.net/tacker/+spec/support-autoheal-queue - Improve V2 code's UT Coverage and refactoring Tacker from caishuwen. - Enhance VNF Update functionality. - https://blueprints.launchpad.net/tacker/+spec/enhance-change-package - https://blueprints.launchpad.net/tacker/+spec/individual-vnfc-management - Support HA-cluster for high availability is an important requirement to apply Tacker to commercial systems. HA cluster such as ACT-SBY or N-ACT will be supported. 
- Enhance vnflcm API (hirofumi-noguchi) - https://blueprints.launchpad.net/tacker/+spec/enhance-placement - https://blueprints.launchpad.net/tacker/+spec/enhance-utils-extmanagedvl - https://blueprints.launchpad.net/tacker/+spec/multi-version-vnfd - https://blueprints.launchpad.net/tacker/+spec/sol004-package-management - https://blueprints.launchpad.net/tacker/+spec/support-subscription-cli - https://blueprints.launchpad.net/tacker/+spec/vim-management-reorganization - https://blueprints.launchpad.net/tacker/+spec/abolish-default-vim - Tacker's new requirements for commercial systems (hirofumi-noguchi) - https://blueprints.launchpad.net/tacker/+spec/enhance-system-management - https://blueprints.launchpad.net/tacker/+spec/command-conflict-handling - https://blueprints.launchpad.net/tacker/+spec/system-performance-management [1] https://etherpad.opendev.org/p/tacker-zed-ptg Thanks, Yasufumi From fpantano at redhat.com Fri Apr 15 14:27:37 2022 From: fpantano at redhat.com (Francesco Pantano) Date: Fri, 15 Apr 2022 16:27:37 +0200 Subject: [TripleO][Ceph] Zed PTG Summary Message-ID: Hello everyone, Here are a few highlights on the TripleO Ceph integration status and the plans for the next cycle. *1. Deployed ceph as a separate step* TripleO now provides a different stage (with a new cli set of commands) to bootstrap the Ceph cluster before reaching the overcloud deployment phase. This is the new default approach since Wallaby, and the short term plan is to work on the upstream CI consolidation to make sure we run this stage on the existing TripleO standalone scenarios, extending the coverage to both phases (before the overcloud deployment, when the Ceph cluster is created, and during the overcloud deployment, when the cluster is finalized according to the enabled services). 
It's worth mentioning that great progress in this direction has been made, and
the collaboration with the tripleo-ci team is one of the key points here, as
they're helping on the automation side to test pending upstream bits with
daily jobs. The next step will be working together on the automation of the
promotion mechanism, which should make this process less error-prone.

*2. Decouple Ceph Upgrades*

The Nautilus to Pacific upgrade is still managed by ceph-ansible, but the
stage of upgrading the cluster has been moved before the overcloud upgrade,
resulting in a different maintenance window. Once the cluster is moved to
Pacific, cephadm is enabled, and from this moment onwards the upgrade process,
as well as minor updates, will be managed by cephadm and can be seen as a day2
operation. The operator can now perform these kinds of tasks without any
interaction with TripleO, which is still used to pull the new containers
(unless another registry reachable from the overcloud is used), but its scope
has been limited.

*3. Ganesha transitioning to Ceph orchestrator and Ingress migration*

This has been the main topic of this first PTG session: the feature is tracked
by two already-approved upstream specs, and the goal is to support a Ganesha
service managed by cephadm instead of a tripleo-managed one.
The TripleO conversation impacted many areas: *a.* the networkv2 flow has been improved and it's now possible to reserve more than 1 VIP per network, but it applies only to the ceph services; *b.* a new TripleO resource, the CephIngress daemon, has been added, and it's a key component (provided by Ceph) that is supposed to provide HA for the ceph-nfs managed daemon *c.* The tripleo cli is extended and the ceph-nfs daemon can be deployed during the bootstrap of the ceph cluster *d.* This feature depends on the manila driver development [1], which represents an effort to implement a driver that can interact with the Ceph orch cli (and the layer it provides for nfs) instead of using dbus. Further information about this conversation can be found here [1]. Part of this conversation (and really good input here actually) was about the migration plan for already existing environments where operators would like to move from a TripleO managed Ganesha to a highly available ceph-nfs managed by cephadm. The outcome here is: *1.* It's possible to move to the cephadm managed ingress daemon during the upgrade under certain constraints, but we should provide tools to finalize the migration because there's an impact not only on the server-side (and the manila service itself) but also on the clients where the shares are mounted; *2.* We might want to have options to keep the PCS managed VIP for Ganesha and avoid forcing people to migrate, and this flow should be consistent at tripleo heat templates level; For those who are interested, here's the etherpad [2] and the recording of the session [3]. Thanks, Francesco [1] https://etherpad.opendev.org/p/zorilla-ptg-manila-planning [2] https://etherpad.opendev.org/p/tripleo-zed-ceph [3] https://slagle.fedorapeople.org/tripleo-zed-ptg/tripleo-zed-ptg-ceph.mp4 -- Francesco Pantano GPG KEY: F41BD75C -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rodrigo.lima at o2sistemas.com Fri Apr 15 22:14:39 2022 From: rodrigo.lima at o2sistemas.com (Rodrigo Lima) Date: Fri, 15 Apr 2022 19:14:39 -0300 Subject: [Kolla-ansible][Glance] Glance HA deployment with shared file backend Message-ID: Hi Guys, hope all is well! According with the kolla-ansible documentation, " By default when using file backend only one glance-api container can be running." So my questions is quite simple: Can I deploy multiple glance_api containers in HA mode (like if I used ceph or swift backend) if I use a shared filesystem (eg. NFS) in all glance nodes? If I can do so, what I need to change in deployment files/scripts/yaml? -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Sat Apr 16 01:35:06 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 15 Apr 2022 20:35:06 -0500 Subject: [all][tc] Technical Committee Zed cycle Kick-Off Message-ID: <18030026b29.10173784d114017.62850585421299472@ghanshyammann.com> Hello Everyone, In PTG, we decided to continue the TC per-cycle-tracker which help us to finish the things within the timeframe of one cycle. During PTG week (TC+leader interaction and TC-related sessions), we had a lot of productive discussions[1] and collected many working items for TC. The Technical Committee's main goal is to drive more technical work to solve the OpenStack existing technical challenges. This is the main feedback from the community to TC working and I am happy to have that feedback, as well as TC, started working on it in a more productive way. Of course, we will continue doing the governance part too which does not take much bandwidth of TC members and we will also try to minimize such process-related tasks as per the current situation and needs of the community. In the Zed cycle also, we will continue to focus on the Technical works. 
I have created the Zed cycle TC tracker and we will be targeting 7 Technical items + 4 process/governance-focused items. Also, we will be open to taking more items if anything important comes up in between the cycle. - https://etherpad.opendev.org/p/tc-zed-tracker With that, let's continue to work together and have another successful cycle. Feel free to reach out to TC at any time (does not need to wait for the weekly meeting) for your queries or help. Also, you can find all the cycle wise trackers on this wiki page[2] [1] http://lists.openstack.org/pipermail/openstack-discuss/2022-April/028104.html [2] https://wiki.openstack.org/wiki/Technical_Committee_Tracker -gmann From gmann at ghanshyammann.com Sat Apr 16 01:37:47 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 15 Apr 2022 20:37:47 -0500 Subject: [all][tc] What's happening in Technical Committee: summary April 15th, 21: Reading: 10 min Message-ID: <1803004e077.cab23310114057.8062773357691226772@ghanshyammann.com> Hello Everyone, Here is this week's summary of the Technical Committee activities. 1. TC Meetings: ============ * We cancelled this week's meeting. * Next TC weekly meeting will be on April 21st Thursday 15:00 UTC, feel free to add the topic on the agenda[1] by April 20th. 2. What we completed this week: ========================= * Added goal readiness checklist[2] * Retired openstack-health[3] * Retired tempest-lib[4] * Add Ganesha based Ceph NFS Charm[5] * PTG summary sent to ML[6] 3. Activities In progress: ================== TC Tracker for Zed cycle ------------------------------ * With the Zed cycle kickoff[7], I have started the Zed cycle TC tracker by listing all the items we discussed in PTG but we will add more during the cycle also if something important comes up[8] Open Reviews ----------------- * Seven open reviews for ongoing activities[9]. 
Drop the lower constraints maintenance
------------------------------------------------
This has been going on for many months; if I remember correctly, it was first brought up in Dec 2021. During those discussions, we figured out that maintaining and testing the lower constraints has many challenges. TC discussed all those challenges in the Zed PTG and agreed to drop the lower-constraints.txt file and its testing but keep the lower bounds in requirements.txt on a best-effort basis. I proposed the TC resolution on this[10]; feel free to leave feedback there if there is any strong objection to this direction. Also sent it on the ML[11]

Consistent and Secure Default RBAC
--------------------------------------------
I summarized the RBAC PTG discussion[12]; we did not finish the discussion over the open questions and decided to continue it in RBAC meetings later. I will schedule the next meeting soon. If you have any other questions than the current ones, please add them to the etherpad[13]

Removing the TC tag framework
----------------------------------
In the TC tag framework, every TC member used to be assigned a few projects to check their health, needs from TC, etc. It was meant to improve the interaction between TC and PTLs and to provide contact points for projects when TC needs them. But it did not go that way, and many of us do not even know whether it still exists. With TC weekly meetings and PTG interaction sessions we are in closer contact with leaders anyway, so we discussed it in the PTG and agreed to remove this framework. I have proposed the removal[14]

2021 User Survey TC Question Analysis
-----------------------------------------------
Jay has summarized the TC's user survey in the PTG and it is up in gerrit for review[15]. Feel free to check and provide feedback.

Zed cycle Leaderless projects
----------------------------------
The Zaqar PTL assignment is merged[16]; with that, only the Adjutant project is leaderless/maintainer-less.
We will check Adjutant's situation again on ML and hope Braden will be ready with their company side permission[17]. Fixing Zuul config error ---------------------------- Requesting projects with zuul config error to look into those and fix them which should not take much time[18]. Project updates ------------------- * Add the cinder-three-par charm to Openstack charms[19] 4. How to contact the TC: ==================== If you would like to discuss or give feedback to TC, you can reach out to us in multiple ways: 1. Email: you can send the email with tag [tc] on openstack-discuss ML[20]. 2. Weekly meeting: The Technical Committee conduct a weekly meeting every Thursday 15 UTC [21] 3. Ping us using 'tc-members' nickname on #openstack-tc IRC channel. [1] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting [2] https://review.opendev.org/c/openstack/governance/+/835102 [3] https://review.opendev.org/c/openstack/governance/+/836706 [4] https://review.opendev.org/c/openstack/governance/+/836704 [5] https://review.opendev.org/c/openstack/governance/+/835429 [6] http://lists.openstack.org/pipermail/openstack-discuss/2022-April/028104.html [7] http://lists.openstack.org/pipermail/openstack-discuss/2022-April/028206.html [8] https://etherpad.opendev.org/p/tc-zed-tracker [9] https://review.opendev.org/q/projects:openstack/governance+status:open [10] https://review.opendev.org/c/openstack/governance/+/838004 [11] http://lists.openstack.org/pipermail/openstack-discuss/2022-April/028199.html [12] http://lists.openstack.org/pipermail/openstack-discuss/2022-April/028103.html [13] https://etherpad.opendev.org/p/rbac-zed-ptg [14] https://review.opendev.org/c/openstack/governance/+/837891 [15] https://review.opendev.org/c/openstack/governance/+/836888 [16] https://review.opendev.org/c/openstack/governance/+/831123 [17] http://lists.openstack.org/pipermail/openstack-discuss/2022-March/027626.html [18] https://etherpad.opendev.org/p/zuul-config-error-openstack [19] 
https://review.opendev.org/c/openstack/governance/+/837781 [20] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [21] http://eavesdrop.openstack.org/#Technical_Committee_Meeting -gmann From radoslaw.piliszek at gmail.com Sat Apr 16 07:54:25 2022 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Sat, 16 Apr 2022 09:54:25 +0200 Subject: [Kolla-ansible][Glance] Glance HA deployment with shared file backend In-Reply-To: References: Message-ID: Hi Rodrigo, the wording could certainly be improved but following the example given below the content you quote gives you what you want. [1] I.e., setting the ``glance_file_datadir_volume`` to a path will lift the imposed single-node limitation. [1] https://docs.openstack.org/kolla-ansible/latest/reference/shared-services/glance-guide.html#file-backend -yoctozepto On Sat, 16 Apr 2022 at 00:21, Rodrigo Lima wrote: > Hi Guys, hope all is well! > > According with the kolla-ansible documentation, " By default when using > file backend only one glance-api container can be running." So my questions > is quite simple: > Can I deploy multiple glance_api containers in HA mode (like if I used > ceph or swift backend) if I use a shared filesystem (eg. NFS) in all glance > nodes? > If I can do so, what I need to change in deployment files/scripts/yaml? > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Sat Apr 16 08:40:30 2022 From: eblock at nde.ag (Eugen Block) Date: Sat, 16 Apr 2022 08:40:30 +0000 Subject: [neutron][nova] port binding fails for existing networks Message-ID: <20220416084030.Horde.Q8KAlVdykKFDqwEI3r0riDm@webmail.nde.ag> Hi *, I have a kind of strange case which I'm trying to solve for hours, I could use some fresh ideas. It's a HA cloud (Victoria) deployed by Salt and the 2 control nodes are managed by pacemaker, the third controller will join soon. There are around 16 compute nodes at the moment. 
This two-node-control plane works well, except if there are unplanned outages. Since the last outage of one control node we struggle to revive neutron (I believe neutron is the issue here). I'll try to focus on the main issue here, let me know if more details are required. After the failed node was back online all openstack agents show as "up" (openstack compute service list, openstack network agent list). Running VMs don't seem to be impacted (as far as I can tell). But we can't create new instances in existing networks, and since we use Octavia we also can't (re)build any LBs at the moment. When I create a new test network the instance spawns successfully and is active within a few seconds. For existing networks we get the famous "port binding failed" from nova-compute.log. But I see the port being created, it just can't be attached to the instance. One more strange thing: I don't see any entries in the nova-scheduler.log or nova-conductor.log for the successfully built instance, except for the recently mentioned etcd3gw message from nova-conductor, but this didn't impact the instance creation yet. We have investigated this for hours, we have rebooted both control nodes multiple times in order to kill any remaining processes. The galera DB seems fine, rabbitmq also behaves normally (I think), we tried multiple times to put one node in standby to only have one node to look at which also didn't help. So basically we restarted everything multiple times on the control nodes and also nova-compute and openvswitch-agent on all compute nodes, the issue is still not resolved. Does anyone have further ideas to resolve this? I'd be happy to provide more details, just let me know what you need. Happy Easter! 
Eugen

From hiwkby at yahoo.com Sat Apr 16 08:56:29 2022
From: hiwkby at yahoo.com (Hirotaka Wakabayashi)
Date: Sat, 16 Apr 2022 08:56:29 +0000 (UTC)
Subject: [Trove][Xena] Error building Trove image
References: <242906694.289768.1650099389146.ref@mail.yahoo.com>
Message-ID: <242906694.289768.1650099389146@mail.yahoo.com>

Hi Wodel,

I think you need to fix the DIB element files to build a guest image as CentOS Stream. The officially supported operating system in Trove is currently Ubuntu[1]. I am preparing installation instructions for Fedora/CentOS Stream users[2], but that is not completed yet.

You may not really need to build a guest image as CentOS Stream if you are a mysql or postgresql user, because Trove supports Docker since Victoria. This means the database service can run as a docker container inside the Trove instance. You can define the datastore image by using the "docker_image" config option[4].

---
[1]: https://docs.openstack.org/trove/latest/admin/building_guest_images.html
[2]: https://storyboard.openstack.org/#!/story/2009918
[3]: https://opendev.org/openstack/trove/src/branch/master/integration/scripts/files/elements
[4]: https://docs.openstack.org/trove/latest/admin/run_trove_in_production.html#configure-trove-guest-agent

Regards,
Hirotaka

On Wednesday, April 13, 2022, 10:04:53 PM GMT+9, wrote:

Send openstack-discuss mailing list submissions to
	openstack-discuss at lists.openstack.org

To subscribe or unsubscribe via the World Wide Web, visit
	http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss
or, via email, send a message with subject or body 'help' to
	openstack-discuss-request at lists.openstack.org

You can reach the person managing the list at
	openstack-discuss-owner at lists.openstack.org

When replying, please edit your Subject line so it is more specific than "Re: Contents of openstack-discuss digest..."

Today's Topics:

  1. [Trove][Xena] Error building Trove image (wodel youchi)

----------------------------------------------------------------------

Message: 1
Date: Wed, 13 Apr 2022 00:16:59 +0100
From: wodel youchi
To: OpenStack Discuss
Subject: [Trove][Xena] Error building Trove image
Message-ID: 
Content-Type: text/plain; charset="utf-8"

Hi,

When trying to build Trove I am getting this error message:

2022-04-10 08:55:25.497 | *+ install_deb_packages install iscsi-initiator-utils*
2022-04-10 08:55:25.497 | + DEBIAN_FRONTEND=noninteractive
2022-04-10 08:55:25.497 | + http_proxy=
2022-04-10 08:55:25.497 | + https_proxy=
2022-04-10 08:55:25.497 | + no_proxy=
2022-04-10 08:55:25.497 | + apt-get --option Dpkg::Options::=--force-confold --option Dpkg::Options::=--force-confdef --assume-yes install iscsi-initiator-utils
2022-04-10 08:55:25.541 | Reading package lists...
2022-04-10 08:55:25.788 | Building dependency tree...
2022-04-10 08:55:25.788 | Reading state information...
2022-04-10 08:55:25.825 | *E: Unable to locate package iscsi-initiator-utils*
2022-04-10 08:55:25.838 | ++ diskimage_builder/lib/img-functions:run_in_target:59    :  check_break after-error run_in_target bash
2022-04-10 08:55:25.843 | ++ diskimage_builder/lib/common-functions:check_break:143    :  echo ''
2022-04-10 08:55:25.844 | ++ diskimage_builder/lib/common-functions:check_break:143    :  egrep -e '(,|^)after-error(,|$)' -q
2022-04-10 08:55:25.851 | + diskimage_builder/lib/img-functions:run_in_target:1    :  trap_cleanup
2022-04-10 08:55:25.855 | + diskimage_builder/lib/img-functions:trap_cleanup:36

I am not an Ubuntu person, but I think the package's name is open-iscsi.

This is the command I used to build the image:

./trovestack build-image ubuntu bionic true ubuntu /home/stack/trove-xena-guest-ubuntu-bionic-dev.qcow2

My OS is CentOS 8 Stream. You can find the whole log of the operation attached.

Thanks in advance.
Regards.
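Hirotaka's advice about fixing the DIB element files usually comes down to the distro package-name mapping: diskimage-builder elements can carry a pkg-map file that translates a generic package name into the right one per distro family. A minimal sketch of such a mapping (which Trove element should actually carry it is left open here; the file below is illustrative only, not the current Trove element layout):

```json
{
    "family": {
        "debian": {
            "iscsi-initiator-utils": "open-iscsi"
        }
    },
    "default": {
        "iscsi-initiator-utils": "iscsi-initiator-utils"
    }
}
```

With a mapping like this in place, an element that asks for iscsi-initiator-utils would install open-iscsi on Ubuntu/Debian-family images and fall back to the literal name elsewhere.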
-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: trove-build2.log Type: application/octet-stream Size: 302893 bytes Desc: not available URL: ------------------------------ Subject: Digest Footer _______________________________________________ openstack-discuss mailing list openstack-discuss at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss ------------------------------ End of openstack-discuss Digest, Vol 42, Issue 62 ************************************************* -------------- next part -------------- An HTML attachment was scrubbed... URL: From andy at andybotting.com Sat Apr 16 10:55:50 2022 From: andy at andybotting.com (Andy Botting) Date: Sat, 16 Apr 2022 20:55:50 +1000 Subject: [murano][octavia][sahara][zaqar][zun][oslo] Pending removal of 'oslo_db.sqlalchemy.test_base' In-Reply-To: References: Message-ID: > This is a heads up to the maintainers of the aforementioned projects that the > oslo team are planning to remove the 'oslo_db.sqlalchemy.test_base' module this > cycle. This module has been deprecated since 2015 and we want to get rid of it > to reduce load on the overburdened oslo maintainers. I have already fixed the > issue in a couple of projects. These can be used as blueprints for fixing the > remaining affected projects: > > * masakari (https://review.opendev.org/c/openstack/masakari/+/802761) > * glance (https://review.opendev.org/c/openstack/glance/+/802762) > * manila (https://review.opendev.org/c/openstack/manila/+/802763) > > I would love to fix the remaining projects but my limited time is currently > focused elsewhere. The oslo.db change is available at [1]. We'd like this to be > merged in the next month but we can push that out to later in the cycle if teams > need more time. Just shout. 
> > Please let us know if you have any concerns or if the above changes are not > sufficient as a guide for how to address these issues. Thanks Stephen, I'll look into Murano over the next few days. cheers, Andy From ltoscano at redhat.com Sat Apr 16 12:50:34 2022 From: ltoscano at redhat.com (Luigi Toscano) Date: Sat, 16 Apr 2022 14:50:34 +0200 Subject: [murano][octavia][sahara][zaqar][zun][oslo] Pending removal of 'oslo_db.sqlalchemy.test_base' In-Reply-To: References: Message-ID: <22398351.4csPzL39Zc@whitebase.usersys.redhat.com> On Thursday, 14 April 2022 20:47:10 CEST Stephen Finucane wrote: > o/ > > This is a heads up to the maintainers of the aforementioned projects that > the oslo team are planning to remove the 'oslo_db.sqlalchemy.test_base' > module this cycle. This module has been deprecated since 2015 and we want > to get rid of it to reduce load on the overburdened oslo maintainers. I > have already fixed the issue in a couple of projects. These can be used as > blueprints for fixing the remaining affected projects: > > * masakari (https://review.opendev.org/c/openstack/masakari/+/802761) > * glance (https://review.opendev.org/c/openstack/glance/+/802762) > * manila (https://review.opendev.org/c/openstack/manila/+/802763) > > I would love to fix the remaining projects but my limited time is currently > focused elsewhere. The oslo.db change is available at [1]. We'd like this to > be merged in the next month but we can push that out to later in the cycle > if teams need more time. Just shout. Thanks for the notice and the example. I've tried to draft a patch but I'm puzzled because it works locally with all the 3 python versions (py36, py38, py39) on Fedora 35, but it fails on the gates. What am I missing? 
https://review.opendev.org/c/openstack/sahara/+/838046 Ciao -- Luigi From laurentfdumont at gmail.com Sat Apr 16 13:30:40 2022 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Sat, 16 Apr 2022 09:30:40 -0400 Subject: [neutron][nova] port binding fails for existing networks In-Reply-To: <20220416084030.Horde.Q8KAlVdykKFDqwEI3r0riDm@webmail.nde.ag> References: <20220416084030.Horde.Q8KAlVdykKFDqwEI3r0riDm@webmail.nde.ag> Message-ID: I've seen failures with port bindings when rabbitmq was not in a good state. Messages between services transit through Rabbit so Nova/Neutron might not be able to follow the flow correctly. Can you double check that rabbit is good to go? - rabbitmqctl cluster_status - rabbitmqctl list_queues I would also recommend turning the logs to DEBUG for all the services and trying to follow a server create request-id. On Sat, Apr 16, 2022 at 4:44 AM Eugen Block wrote: > Hi *, > > I have a kind of strange case which I'm trying to solve for hours, I > could use some fresh ideas. > It's a HA cloud (Victoria) deployed by Salt and the 2 control nodes > are managed by pacemaker, the third controller will join soon. There > are around 16 compute nodes at the moment. > This two-node-control plane works well, except if there are unplanned > outages. Since the last outage of one control node we struggle to > revive neutron (I believe neutron is the issue here). I'll try to > focus on the main issue here, let me know if more details are required. > After the failed node was back online all openstack agents show as > "up" (openstack compute service list, openstack network agent list). > Running VMs don't seem to be impacted (as far as I can tell). But we > can't create new instances in existing networks, and since we use > Octavia we also can't (re)build any LBs at the moment. When I create a > new test network the instance spawns successfully and is active within > a few seconds. 
> For existing networks we get the famous "port binding
> failed" from nova-compute.log. But I see the port being created, it
> just can't be attached to the instance. One more strange thing: I
> don't see any entries in the nova-scheduler.log or nova-conductor.log
> for the successfully built instance, except for the recently mentioned
> etcd3gw message from nova-conductor, but this didn't impact the
> instance creation yet.
> We have investigated this for hours, we have rebooted both control
> nodes multiple times in order to kill any remaining processes. The
> galera DB seems fine, rabbitmq also behaves normally (I think), we
> tried multiple times to put one node in standby to only have one node
> to look at which also didn't help.
> So basically we restarted everything multiple times on the control
> nodes and also nova-compute and openvswitch-agent on all compute
> nodes, the issue is still not resolved.
> Does anyone have further ideas to resolve this? I'd be happy to
> provide more details, just let me know what you need.
>
> Happy Easter!
> Eugen
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From smooney at redhat.com Sat Apr 16 19:36:25 2022
From: smooney at redhat.com (Sean Mooney)
Date: Sat, 16 Apr 2022 20:36:25 +0100
Subject: [neutron][nova] port binding fails for existing networks
In-Reply-To: 
References: <20220416084030.Horde.Q8KAlVdykKFDqwEI3r0riDm@webmail.nde.ag>
Message-ID: 

On Sat, 2022-04-16 at 09:30 -0400, Laurent Dumont wrote:
> I've seen failures with port bindings when rabbitmq was not in a good
> state. Messages between services transit through Rabbit so Nova/Neutron
> might not be able to follow the flow correctly.
That is not quite right.

Inter-service communication happens via HTTP REST APIs; intra-service communication happens via rabbit. Nova never calls neutron over rabbit, nor does neutron call nova over rabbit.

However, it is true that rabbit issues can sometimes cause port binding issues.
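When a binding does fail, the neutron-server debug log is where each mechanism driver's attempt gets recorded. A hypothetical sketch of pulling one port's binding trail out of such a log (the log path, port id, request id and exact message texts below are made up for illustration; substitute real values from your deployment):

```shell
# Illustrative only: a stand-in log file with the kind of ml2 binding
# messages neutron-server emits at DEBUG/ERROR level (formats approximate).
LOG=/tmp/neutron-server.log
PORT=8d3fdc41

cat > "$LOG" <<'EOF'
DEBUG neutron.plugins.ml2.managers [req-1] Attempting to bind port 8d3fdc41 on host compute-1
ERROR neutron.plugins.ml2.managers [req-1] Mechanism driver openvswitch failed in bind_port
ERROR neutron.plugins.ml2.managers [req-1] Failed to bind port 8d3fdc41 on host compute-1 for vnic_type normal
EOF

# Pull every binding attempt for that port plus any mechanism-driver failures:
grep -E "bind port ${PORT}|failed in bind_port" "$LOG"
```

Against a real deployment you would grep the actual neutron-server log (commonly under /var/log/neutron/, path depends on the installer) with the port id from `openstack port list`, after enabling debug logging.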
If you are using ml2/ovs, the agent report/heartbeat can be lost from the perspective of the neutron server, and it can then consider the agent down. If the agent is "down", the ml2/ovs mech driver will refuse to bind the port.

Assuming the agent is up in the db, the request to bind the port never actually transits rabbitmq. The compute node makes an HTTP request to the neutron-server, which hosts the api endpoint and executes the ml2 drivers. The ml2/ovs driver only uses info from the neutron db, which it accesses directly.

The neutron-server debug logs should have records for binding requests which should detail why the port binding failed. They should show each loaded ml2 driver being tried in sequence to bind the port and, if it can't, log the reason why.

I would start by checking that the ovs l2 agents show as up in the db/api, then find a port id for one of the failed port bindings and trace the debug logs for that port binding in the neutron-server logs, and if you find an error post it here.

>
> Can you double check that rabbit is good to go?
>
>    - rabbitmqctl cluster_status
>    - rabbitmqctl list_queues
>
> I would also recommend turning the logs to DEBUG for all the services and
> trying to follow a server create request-id.
>
> On Sat, Apr 16, 2022 at 4:44 AM Eugen Block wrote:
>
> > Hi *,
> >
> > I have a kind of strange case which I'm trying to solve for hours, I
> > could use some fresh ideas.
> > It's a HA cloud (Victoria) deployed by Salt and the 2 control nodes
> > are managed by pacemaker, the third controller will join soon. There
> > are around 16 compute nodes at the moment.
> > This two-node-control plane works well, except if there are unplanned
> > outages. Since the last outage of one control node we struggle to
> > revive neutron (I believe neutron is the issue here). I'll try to
> > focus on the main issue here, let me know if more details are required.
> > After the failed node was back online all openstack agents show as > > "up" (openstack compute service list, openstack network agent list). > > Running VMs don't seem to be impacted (as far as I can tell). But we > > can't create new instances in existing networks, and since we use > > Octavia we also can't (re)build any LBs at the moment. When I create a > > new test network the instance spawns successfully and is active within > > a few seconds. For existing networks we get the famous "port binding > > failed" from nova-compute.log. But I see the port being created, it > > just can't be attached to the instance. One more strange thing: I > > don't see any entries in the nova-scheduler.log or nova-conductor.log > > for the successfully built instance, except for the recently mentioned > > etcd3gw message from nova-conductor, but this didn't impact the > > instance creation yet. > > We have investigated this for hours, we have rebooted both control > > nodes multiple times in order to kill any remaining processes. The > > galera DB seems fine, rabbitmq also behaves normally (I think), we > > tried multiple times to put one node in standby to only have one node > > to look at which also didn't help. > > So basically we restarted everything multiple times on the control > > nodes and also nova-compute and openvswitch-agent on all compute > > nodes, the issue is still not resolved. > > Does anyone have further ideas to resolve this? I'd be happy to > > provide more details, just let me know what you need. > > > > Happy Easter! > > Eugen > > > > > > From eblock at nde.ag Sat Apr 16 22:23:11 2022 From: eblock at nde.ag (Eugen Block) Date: Sat, 16 Apr 2022 22:23:11 +0000 Subject: [neutron][nova] port binding fails for existing networks In-Reply-To: References: <20220416084030.Horde.Q8KAlVdykKFDqwEI3r0riDm@webmail.nde.ag> Message-ID: <20220416222311.Horde.joCC7hAiYQEJJn_jZq8cDrI@webmail.nde.ag> Thank you both for your comments, I appreciate it! 
Before digging into the logs I tried again with one of the two control nodes disabled. But I didn't disable all services, only apache, memcached, neutron, nova and octavia, so all my requests would go to the active control node but rabbit and galera would be in sync. This already seemed to clean things up somehow; now I was able to launch instances and LBs into an active state. Awesome! Then I started the mentioned services on the other control node again and things stopped working. Note that this setup worked for months and we have another cloud with two control nodes which has worked like a charm for years now.

The only significant thing I noticed while switching back to one active neutron/nova/octavia node was this message from the neutron-dhcp-agent.log:

2022-04-16 23:59:29.180 36882 ERROR neutron_lib.rpc [req-905aecd6-ff22-4549-a0cb-ef5259692f5d - - - - -] Timeout in RPC method get_active_networks_info. Waiting for 510 seconds before next attempt. If the server is not down, consider increasing the rpc_response_timeout option as Neutron server(s) may be overloaded and unable to respond quickly enough.: oslo_messaging.exceptions.MessagingTimeout: Timed out waiting for a reply to message ID 6676c45f5b0c42af8e34f8fb4aba3aca

I'll need to take a closer look for more of these messages after the weekend, but more importantly I need to understand why we can't seem to re-enable the second node. I'll enable debug logs then and hopefully find a trace to the root cause.
If you have other comments please don't hesitate, I'm thankful for any ideas.

Thanks!
Eugen


Zitat von Sean Mooney :

> On Sat, 2022-04-16 at 09:30 -0400, Laurent Dumont wrote:
>> I've seen failures with port bindings when rabbitmq was not in a good
>> state. Messages between services transit through Rabbit so Nova/Neutron
>> might not be able to follow the flow correctly.
> that is not quite right.
>
> inter service message happen via http rest apis.
> intra service comunication happens via rabbit.
> nova never calls neutron over rabbit nor does neutron call nova over rabbit > > however it is ture that rabit issue can somethime cause prort bingin issues. > if you are using ml2/ovs the agent report/heatbeat can be lost form > the perspective of the neutron server > and it can consider the service down. if the agent is "down" then > the ml2/ovs mech driver will refuse to > bind the prot. > > assuming the agent is up in the db the requst to bidn the port never > actully transits rabbitmq. > > the comptue node makes a http request to the neturon-server which > host the api endpoing and executes the ml2 drivers. > the ml2/ovs dirver only uses info form the neutron db which it > access directly. > > the neutron server debug logs shoudl have records for bidning > request which shoudl detail why the port binding failed. > it shoudl show each loaded ml2 driver beign tried in sequence ot > bind the port and if it cant log the reason why. > > i would start by checking that the ovs l2 agents show as up in the db/api > then find a port id for one of the failed port bidngins and trace > the debug logs for the port bdining in the neutorn server > logs for the error and if you find one post it here. > >> >> Can you double check that rabbit is good to go? >> >> - rabbitmqctl cluster_status >> - rabbitmqctl list_queues >> >> I would also recommend turning the logs to DEBUG for all the services and >> trying to follow a server create request-id. >> >> On Sat, Apr 16, 2022 at 4:44 AM Eugen Block wrote: >> >> > Hi *, >> > >> > I have a kind of strange case which I'm trying to solve for hours, I >> > could use some fresh ideas. >> > It's a HA cloud (Victoria) deployed by Salt and the 2 control nodes >> > are managed by pacemaker, the third controller will join soon. There >> > are around 16 compute nodes at the moment. >> > This two-node-control plane works well, except if there are unplanned >> > outages. 
Since the last outage of one control node we struggle to >> > revive neutron (I believe neutron is the issue here). I'll try to >> > focus on the main issue here, let me know if more details are required. >> > After the failed node was back online all openstack agents show as >> > "up" (openstack compute service list, openstack network agent list). >> > Running VMs don't seem to be impacted (as far as I can tell). But we >> > can't create new instances in existing networks, and since we use >> > Octavia we also can't (re)build any LBs at the moment. When I create a >> > new test network the instance spawns successfully and is active within >> > a few seconds. For existing networks we get the famous "port binding >> > failed" from nova-compute.log. But I see the port being created, it >> > just can't be attached to the instance. One more strange thing: I >> > don't see any entries in the nova-scheduler.log or nova-conductor.log >> > for the successfully built instance, except for the recently mentioned >> > etcd3gw message from nova-conductor, but this didn't impact the >> > instance creation yet. >> > We have investigated this for hours, we have rebooted both control >> > nodes multiple times in order to kill any remaining processes. The >> > galera DB seems fine, rabbitmq also behaves normally (I think), we >> > tried multiple times to put one node in standby to only have one node >> > to look at which also didn't help. >> > So basically we restarted everything multiple times on the control >> > nodes and also nova-compute and openvswitch-agent on all compute >> > nodes, the issue is still not resolved. >> > Does anyone have further ideas to resolve this? I'd be happy to >> > provide more details, just let me know what you need. >> > >> > Happy Easter! 
>> > Eugen >> > >> > >> > From laurentfdumont at gmail.com Sat Apr 16 23:33:15 2022 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Sat, 16 Apr 2022 19:33:15 -0400 Subject: [neutron][nova] port binding fails for existing networks In-Reply-To: <20220416222311.Horde.joCC7hAiYQEJJn_jZq8cDrI@webmail.nde.ag> References: <20220416084030.Horde.Q8KAlVdykKFDqwEI3r0riDm@webmail.nde.ag> <20220416222311.Horde.joCC7hAiYQEJJn_jZq8cDrI@webmail.nde.ag> Message-ID: You can probably try each one in turn. Might be an issue with one of the two. On Sat, Apr 16, 2022 at 6:23 PM Eugen Block wrote: > Thank you both for your comments, I appreciate it! > Before digging into the logs I tried again with one of the two control > nodes disabled. But I didn't disable all services, only apache, > memcached, neutron, nova and octavia so all my requests would go to > the active control node but rabbit and galera would be in sync. This > already seemed to clean things up somehow, now I was able to launch > instances and LBs into an active state. Awesome! Then I started the > mentioned services on the other control node again and things stopped > working. Note that this setup worked for months and we have another > cloud with two control nodes which works like a charm for years now. > The only significant thing I noticed while switching back to one > active neutron/nova/octavia node was this message from the > neutron-dhcp-agent.log: > > 2022-04-16 23:59:29.180 36882 ERROR neutron_lib.rpc > [req-905aecd6-ff22-4549-a0cb-ef5259692f5d - - - - -] Timeout in RPC > method get_active_networks_info. Waiting for 510 seconds before next > attempt. 
If the server is not down, consider increasing the
> rpc_response_timeout option as Neutron server(s) may be overloaded and
> unable to respond quickly enough.:
> oslo_messaging.exceptions.MessagingTimeout: Timed out waiting for a
> reply to message ID 6676c45f5b0c42af8e34f8fb4aba3aca
>
> I'll need to take a closer look for more of these messages after the
> weekend, but more importantly why we can't seem to reenable the second
> node. I'll enable debug logs then and hopefully find a trace to the
> root cause.
> If you have other comments please don't hesitate, I'm thankful for any ideas.
>
> Thanks!
> Eugen
>
>
> Zitat von Sean Mooney :
>
> > On Sat, 2022-04-16 at 09:30 -0400, Laurent Dumont wrote:
> >> I've seen failures with port bindings when rabbitmq was not in a good
> >> state. Messages between services transit through Rabbit so Nova/Neutron
> >> might not be able to follow the flow correctly.
> > that is not quite right.
> >
> > inter-service messages happen via HTTP REST APIs.
> > intra-service communication happens via rabbit.
> > nova never calls neutron over rabbit, nor does neutron call nova over
> > rabbit.
> >
> > however, it is true that rabbit issues can sometimes cause port binding
> > issues.
> > if you are using ml2/ovs, the agent report/heartbeat can be lost from
> > the perspective of the neutron server
> > and it can consider the service down. if the agent is "down" then
> > the ml2/ovs mech driver will refuse to
> > bind the port.
> >
> > assuming the agent is up in the db, the request to bind the port never
> > actually transits rabbitmq.
> >
> > the compute node makes an HTTP request to the neutron-server, which
> > hosts the API endpoint and executes the ml2 drivers.
> > the ml2/ovs driver only uses info from the neutron db, which it
> > accesses directly.
> >
> > the neutron server debug logs should have records for binding
> > requests, which should detail why the port binding failed.
> > it should show each loaded ml2 driver being tried in sequence to
> > bind the port, and if it can't, log the reason why.
> >
> > i would start by checking that the ovs l2 agents show as up in the db/api,
> > then find a port id for one of the failed port bindings and trace
> > the debug logs for the port binding in the neutron server
> > logs for the error, and if you find one post it here.
> >
> >>
> >> Can you double check that rabbit is good to go?
> >>
> >> - rabbitmqctl cluster_status
> >> - rabbitmqctl list_queues
> >>
> >> I would also recommend turning the logs to DEBUG for all the services and
> >> trying to follow a server create request-id.
> >>
> >> On Sat, Apr 16, 2022 at 4:44 AM Eugen Block wrote:
> >>
> >> > Hi *,
> >> >
> >> > I have a kind of strange case which I'm trying to solve for hours, I
> >> > could use some fresh ideas.
> >> > It's a HA cloud (Victoria) deployed by Salt and the 2 control nodes
> >> > are managed by pacemaker, the third controller will join soon. There
> >> > are around 16 compute nodes at the moment.
> >> > This two-node control plane works well, except if there are unplanned
> >> > outages. Since the last outage of one control node we struggle to
> >> > revive neutron (I believe neutron is the issue here). I'll try to
> >> > focus on the main issue here, let me know if more details are required.
> >> > After the failed node was back online all openstack agents show as
> >> > "up" (openstack compute service list, openstack network agent list).
> >> > Running VMs don't seem to be impacted (as far as I can tell). But we
> >> > can't create new instances in existing networks, and since we use
> >> > Octavia we also can't (re)build any LBs at the moment. When I create a
> >> > new test network the instance spawns successfully and is active within
> >> > a few seconds. For existing networks we get the famous "port binding
> >> > failed" from nova-compute.log.
But I see the port being created, it
> >> > just can't be attached to the instance. One more strange thing: I
> >> > don't see any entries in the nova-scheduler.log or nova-conductor.log
> >> > for the successfully built instance, except for the recently mentioned
> >> > etcd3gw message from nova-conductor, but this didn't impact the
> >> > instance creation yet.
> >> > We have investigated this for hours, we have rebooted both control
> >> > nodes multiple times in order to kill any remaining processes. The
> >> > galera DB seems fine, rabbitmq also behaves normally (I think), we
> >> > tried multiple times to put one node in standby to only have one node
> >> > to look at which also didn't help.
> >> > So basically we restarted everything multiple times on the control
> >> > nodes and also nova-compute and openvswitch-agent on all compute
> >> > nodes, the issue is still not resolved.
> >> > Does anyone have further ideas to resolve this? I'd be happy to
> >> > provide more details, just let me know what you need.
> >> >
> >> > Happy Easter!
> >> > Eugen
> >> >
> >> >
> >> >
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From wchy1001 at gmail.com  Sun Apr 17 10:00:40 2022
From: wchy1001 at gmail.com (W Ch)
Date: Sun, 17 Apr 2022 18:00:40 +0800
Subject: [Trove][Xena] Error building Trove image
In-Reply-To: 
References: 
Message-ID: 

Hi:

I tested it in patch [0], and it passed on CentOS 8 Stream.

*install_deb_packages install iscsi-initiator-utils*

This is because of a wrong environment variable; you might need to fix
this manually, refer to [1]. And I think you also need this fix [2].

[0] https://review.opendev.org/c/openstack/trove/+/838175
[1]
https://review.opendev.org/c/openstack/trove/+/838175/24/integration/scripts/functions_qemu
[2]
https://review.opendev.org/c/openstack/trove/+/838175/24/integration/scripts/functions#241

thanks
Best regards

wodel youchi wrote on Wed, 13 Apr 2022 at 21:14:
> Hi, > When trying to build Trove I am getting this error message : > > 2022-04-10 08:55:25.497 | *+ install_deb_packages install > iscsi-initiator-utils* > 2022-04-10 08:55:25.497 | + DEBIAN_FRONTEND=noninteractive > 2022-04-10 08:55:25.497 | + http_proxy= > 2022-04-10 08:55:25.497 | + https_proxy= > 2022-04-10 08:55:25.497 | + no_proxy= > 2022-04-10 08:55:25.497 | + apt-get --option > Dpkg::Options::=--force-confold --option Dpkg::Options::=--force-confdef > --assume-yes install iscsi-initiator-utils > 2022-04-10 08:55:25.541 | Reading package lists... > 2022-04-10 08:55:25.788 | Building dependency tree... > 2022-04-10 08:55:25.788 | Reading state information... > 2022-04-10 08:55:25.825 | *E: Unable to locate package > iscsi-initiator-utils* > 2022-04-10 08:55:25.838 | ++ > diskimage_builder/lib/img-functions:run_in_target:59 > : check_break after-error run_in_target bash > 2022-04-10 08:55:25.843 | ++ > diskimage_builder/lib/common-functions:check_break:143 > : echo '' > 2022-04-10 08:55:25.844 | ++ > diskimage_builder/lib/common-functions:check_break:143 > : egrep -e '(,|^)after-error(,|$)' -q > 2022-04-10 08:55:25.851 | + > diskimage_builder/lib/img-functions:run_in_target:1 > : trap_cleanup > 2022-04-10 08:55:25.855 | + > diskimage_builder/lib/img-functions:trap_cleanup:36 > > I am not an Ubuntu person but I think the package's name is open-iscsi. > > This is the command I used to build the image : ./trovestack build-image > ubuntu bionic true ubuntu > /home/stack/trove-xena-guest-ubuntu-bionic-dev.qcow2 > My OS is a Centos 8 Stream. you can find the whole log of the operation > attached. > > Thanks in advance. > > Regards. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From syedammad83 at gmail.com Mon Apr 18 08:18:07 2022 From: syedammad83 at gmail.com (Ammad Syed) Date: Mon, 18 Apr 2022 13:18:07 +0500 Subject: [neutron] Mysql FK Errors Message-ID: Hi, I am using neutron 19.1 with OVN backend. 
I am receiving the errors below in my Percona XtraDB cluster logs for the
neutron database.

2022-04-04T15:39:17.342175+05:00 187579 [ERROR] [MY-011825] [InnoDB] WSREP:
referenced FK check fail: Lock wait index `PRIMARY` table
`neutron_ml2`.`portsecuritybindings`

2022-04-05T14:18:19.801183+05:00 275503 [ERROR] [MY-011825] [InnoDB] WSREP:
referenced FK check fail: Lock wait index `PRIMARY` table
`neutron_ml2`.`ml2_port_binding_levels`

Can you advise on these errors? Is there anything to worry about, or should
I just ignore them?

Ammad
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From iurygregory at gmail.com  Mon Apr 18 14:37:04 2022
From: iurygregory at gmail.com (Iury Gregory)
Date: Mon, 18 Apr 2022 11:37:04 -0300
Subject: [ironic] Skipping today's upstream meeting
Message-ID: 

Hi Ironicers,

We will skip the weekly meeting this week, since we have Easter Holidays
and other holidays in some countries =)

-- 
*Att[]'s*
*Iury Gregory Melo Ferreira*
*MSc in Computer Science at UFCG*
*Part of the ironic-core and puppet-manager-core team in OpenStack*
*Senior Software Engineer at Red Hat Brazil*
*Social*: https://www.linkedin.com/in/iurygregory
*E-mail: iurygregory at gmail.com*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jlibosva at redhat.com  Mon Apr 18 15:11:56 2022
From: jlibosva at redhat.com (Jakub Libosvar)
Date: Mon, 18 Apr 2022 11:11:56 -0400
Subject: [Neutron] Bug deputy report April 11 - April 18
Message-ID: <544af7e9-4acc-ccc7-4d1b-0bd1b38d9de9@redhat.com>

Hi all, I was the bug deputy for the previous week. There is one critical
bug from Slawek with a proposed revert.
Here is the report: Critical -------- - Test tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_update_instance_port_admin_state is failing constantly since 7.04.2022 https://bugs.launchpad.net/neutron/+bug/1968896 Revert proposed: https://review.opendev.org/c/openstack/neutron/+/837685 Medium ------ - neutron-dhcp-agent memory leak on network sync failure https://bugs.launchpad.net/neutron/+bug/1969270 Needs an assignee - [L3-DVR]l3-agent arp table will not update https://bugs.launchpad.net/neutron/+bug/1968860 Needs an assignee - network DHCP agent ports status is DOWN in multi network creation simultaneously https://bugs.launchpad.net/neutron/+bug/1968859 Needs an assignee - Importing neutron.common.config module registers config options https://bugs.launchpad.net/neutron/+bug/1968606 Proposed fix: https://review.opendev.org/c/openstack/neutron/+/837392 Incomplete ---------- - unable to attach an interface to an external network https://bugs.launchpad.net/neutron/+bug/1968893 Looks like configuration issue, more logs are needed - too many l3 dvr agents got notifications after a server got deleted https://bugs.launchpad.net/neutron/+bug/1968837 I'm not sure I understand what the issue is Invalid ------- - ovn-controller don't update new flows https://bugs.launchpad.net/neutron/+bug/1969354 It looks like misconfigured environment, provided some tips From marios at redhat.com Mon Apr 18 15:31:09 2022 From: marios at redhat.com (Marios Andreou) Date: Mon, 18 Apr 2022 18:31:09 +0300 Subject: [TripleO] Final TripleO repos release for stable/victoria - any requests? In-Reply-To: References: Message-ID: On Wed, Apr 13, 2022 at 3:57 PM Marios Andreou wrote: > > Hello > > The stable/victoria branch for all tripleo repos will transition to > Extended Maintenance in 2 weeks [1]. > > To prevent delays I have prepared a final victoria release at [2]. 
>
> That [2] will be updated after its depends-on merges (puppet metadata
> bump) to pick up the latest victoria commits at that point.
>
> If there are any patches you want included then please speak up and
> I'll wait for and include those commits before updating
> releases/+/836921 [2].
>
> This is the last ever release to be made from stable/victoria. Once it
> goes to Extended Maintenance we can no longer release.
>
> I'll hold it for a few days. Unless I hear otherwise I will update
> [2] on Monday and try to get it merged next week.
>

This is less than ideal as many folks are out, especially at the start of
this week or even until next week. However, we are on a time limit as we
don't want to delay the move to Extended Maintenance.

The victoria release patch at
https://review.opendev.org/c/openstack/releases/+/836921 should be updated
and merged by the end of this week, or the start of next week at the latest.

Once
https://review.opendev.org/c/openstack/puppet-tripleo/+/836920/3#message-9f2a6b922a3bef96a5a7c15f78c3c5d192fe969e
merges I'll update releases/+/836921 with the latest commits sometime in
the next couple of days.

If you see this and want a particular thing included that is still in
review please speak up - last chance ;)

regards, marios

> thanks, marios
>
> [1] http://lists.openstack.org/pipermail/openstack-discuss/2022-April/028013.html
> [2] https://review.opendev.org/c/openstack/releases/+/836921

From wodel.youchi at gmail.com  Mon Apr 18 11:32:15 2022
From: wodel.youchi at gmail.com (wodel youchi)
Date: Mon, 18 Apr 2022 12:32:15 +0100
Subject: [Kolla-ansible][Xena][Ceph-RGW] need help configuring Ceph RGW for
 Swift and S3 access
Message-ID: 

Hi,

I am having trouble configuring OpenStack to use Ceph RGW as the object
store backend for Swift and S3. My setup is HCI: I have 3 controllers,
which are also my ceph mgrs, mons and rgws, and 9 compute/storage servers
(osds). Xena is deployed with Ceph Pacific.
Ceph public network is a private network on vlan10 with 10.10.1.0/24 as a subnet. Here is a snippet from my globals.yml : > --- > kolla_base_distro: "centos" > kolla_install_type: "source" > openstack_release: "xena" > kolla_internal_vip_address: "10.10.3.1" > kolla_internal_fqdn: "dashint.cloud.example.com" > kolla_external_vip_address: "x.x.x.x" > kolla_external_fqdn: "dash.cloud.example.com " > docker_registry: 192.168.1.16:4000 > network_interface: "bond0" > kolla_external_vip_interface: "bond1" > api_interface: "bond1.30" > *storage_interface: "bond1.10" <---------------- VLAN10 (public ceph > network)* > tunnel_interface: "bond1.40" > dns_interface: "bond1" > octavia_network_interface: "bond1.301" > neutron_external_interface: "bond2" > neutron_plugin_agent: "openvswitch" > keepalived_virtual_router_id: "51" > kolla_enable_tls_internal: "yes" > kolla_enable_tls_external: "yes" > kolla_certificates_dir: "{{ node_config }}/certificates" > kolla_external_fqdn_cert: "{{ kolla_certificates_dir }}/haproxy.pem" > kolla_internal_fqdn_cert: "{{ kolla_certificates_dir > }}/haproxy-internal.pem" > kolla_admin_openrc_cacert: "{{ kolla_certificates_dir }}/ca.pem" > kolla_copy_ca_into_containers: "yes" > kolla_enable_tls_backend: "yes" > kolla_verify_tls_backend: "no" > kolla_tls_backend_cert: "{{ kolla_certificates_dir }}/backend-cert.pem" > kolla_tls_backend_key: "{{ kolla_certificates_dir }}/backend-key.pem" > enable_openstack_core: "yes" > enable_hacluster: "yes" > enable_haproxy: "yes" > enable_aodh: "yes" > enable_barbican: "yes" > enable_ceilometer: "yes" > enable_central_logging: "yes" > > *enable_ceph_rgw: "yes"enable_ceph_rgw_loadbalancer: "{{ enable_ceph_rgw | > bool }}"* > enable_cinder: "yes" > enable_cinder_backup: "yes" > enable_collectd: "yes" > enable_designate: "yes" > enable_elasticsearch_curator: "yes" > enable_freezer: "no" > enable_gnocchi: "yes" > enable_gnocchi_statsd: "yes" > enable_magnum: "yes" > enable_manila: "yes" > 
enable_manila_backend_cephfs_native: "yes" > enable_mariabackup: "yes" > enable_masakari: "yes" > enable_neutron_vpnaas: "yes" > enable_neutron_qos: "yes" > enable_neutron_agent_ha: "yes" > enable_neutron_provider_networks: "yes" > enable_neutron_segments: "yes" > enable_octavia: "yes" > enable_trove: "yes" > external_ceph_cephx_enabled: "yes" > ceph_glance_keyring: "ceph.client.glance.keyring" > ceph_glance_user: "glance" > ceph_glance_pool_name: "images" > ceph_cinder_keyring: "ceph.client.cinder.keyring" > ceph_cinder_user: "cinder" > ceph_cinder_pool_name: "volumes" > ceph_cinder_backup_keyring: "ceph.client.cinder-backup.keyring" > ceph_cinder_backup_user: "cinder-backup" > ceph_cinder_backup_pool_name: "backups" > ceph_nova_keyring: "{{ ceph_cinder_keyring }}" > ceph_nova_user: "cinder" > ceph_nova_pool_name: "vms" > ceph_gnocchi_keyring: "ceph.client.gnocchi.keyring" > ceph_gnocchi_user: "gnocchi" > ceph_gnocchi_pool_name: "metrics" > ceph_manila_keyring: "ceph.client.manila.keyring" > ceph_manila_user: "manila" > glance_backend_ceph: "yes" > glance_backend_file: "no" > gnocchi_backend_storage: "ceph" > cinder_backend_ceph: "yes" > cinder_backup_driver: "ceph" > cloudkitty_collector_backend: "gnocchi" > designate_ns_record: "cloud.example.com " > nova_backend_ceph: "yes" > nova_compute_virt_type: "kvm" > octavia_auto_configure: yes > octavia_amp_flavor: > name: "amphora" > is_public: no > vcpus: 1 > ram: 1024 > disk: 5 > octavia_amp_network: > name: lb-mgmt-net > provider_network_type: vlan > provider_segmentation_id: 301 > provider_physical_network: physnet1 > external: false > shared: false > subnet: > name: lb-mgmt-subnet > cidr: "10.7.0.0/16" > allocation_pool_start: "10.7.0.50" > allocation_pool_end: "10.7.255.200" > no_gateway_ip: yes > enable_dhcp: yes > mtu: 9000 > octavia_amp_network_cidr: 10.10.7.0/24 > octavia_amp_image_tag: "amphora" > octavia_certs_country: XZ > octavia_certs_state: Gotham > octavia_certs_organization: WAYNE > 
octavia_certs_organizational_unit: IT > horizon_keystone_multidomain: true > elasticsearch_curator_dry_run: "no" > enable_cluster_user_trust: true > > > > > > > > > > > > *ceph_rgw_hosts: - host: controllera ip: 10.10.1.5 > port: 8080 - host: controllerb ip: 10.10.1.9 > port: 8080 - host: controllerc ip: 10.10.1.13 > port: 8080ceph_rgw_swift_account_in_url: trueceph_rgw_swift_compatibility: > true* And Here is my ceph all.yml file > --- > dummy: > ceph_release_num: 16 > cluster: ceph > configure_firewall: False > *monitor_interface: bond1.10* > monitor_address_block: 10.10.1.0/24 > is_hci: true > hci_safety_factor: 0.2 > osd_memory_target: 4294967296 > *public_network: 10.10.1.0/24 * > cluster_network: 10.10.2.0/24 > *radosgw_interface: "{{ monitor_interface }}"* > *radosgw_address_block: 10.10.1.0/24 * > nfs_file_gw: true > nfs_obj_gw: true > ceph_docker_image: "ceph/daemon" > ceph_docker_image_tag: latest-pacific > ceph_docker_registry: 192.168.1.16:4000 > containerized_deployment: True > openstack_config: true > openstack_glance_pool: > name: "images" > pg_autoscale_mode: False > application: "rbd" > pg_num: 128 > pgp_num: 128 > target_size_ratio: 5.00 > rule_name: "SSD" > openstack_cinder_pool: > name: "volumes" > pg_autoscale_mode: False > application: "rbd" > pg_num: 1024 > pgp_num: 1024 > target_size_ratio: 42.80 > rule_name: "SSD" > openstack_nova_pool: > name: "vms" > pg_autoscale_mode: False > application: "rbd" > pg_num: 256 > pgp_num: 256 > target_size_ratio: 10.00 > rule_name: "SSD" > openstack_cinder_backup_pool: > name: "backups" > pg_autoscale_mode: False > application: "rbd" > pg_num: 512 > pgp_num: 512 > target_size_ratio: 18.00 > rule_name: "SSD" > openstack_gnocchi_pool: > name: "metrics" > pg_autoscale_mode: False > application: "rbd" > pg_num: 32 > pgp_num: 32 > target_size_ratio: 0.10 > rule_name: "SSD" > openstack_cephfs_data_pool: > name: "cephfs_data" > pg_autoscale_mode: False > application: "cephfs" > pg_num: 256 > pgp_num: 256 > 
target_size_ratio: 10.00 > rule_name: "SSD" > openstack_cephfs_metadata_pool: > name: "cephfs_metadata" > pg_autoscale_mode: False > application: "cephfs" > pg_num: 32 > pgp_num: 32 > target_size_ratio: 0.10 > rule_name: "SSD" > openstack_pools: > - "{{ openstack_glance_pool }}" > - "{{ openstack_cinder_pool }}" > - "{{ openstack_nova_pool }}" > - "{{ openstack_cinder_backup_pool }}" > - "{{ openstack_gnocchi_pool }}" > - "{{ openstack_cephfs_data_pool }}" > - "{{ openstack_cephfs_metadata_pool }}" > openstack_keys: > - { name: client.glance, caps: { mon: "profile rbd", osd: "profile rbd > pool={{ openstack_cinder_pool.name }}, profile rbd pool={{ > openstack_glance_pool.name }}"}, mode: "0600" } > - { name: client.cinder, caps: { mon: "profile rbd", osd: "profile rbd > pool={{ openstack_cinder_pool.name }}, profile rbd pool={{ > openstack_nova_pool.name }}, profile rbd pool={{ > openstack_glance_pool.name }}"}, mode: "0600" } > - { name: client.cinder-backup, caps: { mon: "profile rbd", osd: > "profile rbd pool={{ openstack_cinder_backup_pool.name }}"}, mode: "0600" > } > - { name: client.gnocchi, caps: { mon: "profile rbd", osd: "profile rbd > pool={{ openstack_gnocchi_pool.name }}"}, mode: "0600", } > - { name: client.openstack, caps: { mon: "profile rbd", osd: "profile > rbd pool={{ openstack_glance_pool.name }}, profile rbd pool={{ > openstack_nova_pool.name }}, profile rbd pool={{ > openstack_cinder_pool.name }}, profile rbd pool={{ > openstack_cinder_backup_pool.name }}"}, mode: "0600" } > dashboard_enabled: True > dashboard_protocol: https > dashboard_port: 8443 > dashboard_network: "192.168.1.0/24" > dashboard_admin_user: admin > dashboard_admin_user_ro: true > dashboard_admin_password: *********** > dashboard_crt: '/home/deployer/work/site-central/chaininv.crt' > dashboard_key: '/home/deployer/work/site-central/cloud_example.com.priv' > dashboard_grafana_api_no_ssl_verify: true > dashboard_rgw_api_user_id: admin > dashboard_rgw_api_no_ssl_verify: true > 
dashboard_frontend_vip: '192.168.1.5' > node_exporter_container_image: " > 192.168.1.16:4000/prom/node-exporter:v0.17.0" > grafana_admin_user: admin > grafana_admin_password: ********* > grafana_crt: '/home/deployer/work/site-central/chaininv.crt' > grafana_key: '/home/deployer/work/site-central/cloud_example.com.priv' > grafana_server_fqdn: 'grafanasrv.cloud.example.com' > grafana_container_image: "192.168.1.16:4000/grafana/grafana:6.7.4" > grafana_dashboard_version: pacific > prometheus_container_image: "192.168.1.16:4000/prom/prometheus:v2.7.2" > alertmanager_container_image: "192.168.1.16:4000/prom/alertmanager:v0.16.2 > " > And my rgws.yml > --- > dummy: > copy_admin_key: true > rgw_create_pools: > "{{ rgw_zone }}.rgw.buckets.data": > pg_num: 256 > pgp_num: 256 > size: 3 > type: replicated > pg_autoscale_mode: False > rule_id: 1 > "{{ rgw_zone }}.rgw.buckets.index": > pg_num: 64 > pgp_num: 64 > size: 3 > type: replicated > pg_autoscale_mode: False > rule_id: 1 > "{{ rgw_zone }}.rgw.meta": > pg_num: 32 > pgp_num: 32 > size: 3 > type: replicated > pg_autoscale_mode: False > rule_id: 1 > "{{ rgw_zone }}.rgw.log": > pg_num: 32 > pgp_num: 32 > size: 3 > type: replicated > pg_autoscale_mode: False > rule_id: 1 > "{{ rgw_zone }}.rgw.control": > pg_num: 32 > pgp_num: 32 > size: 3 > type: replicated > pg_autoscale_mode: False > rule_id: 1 > The ceph_rgw user was created by kolla (xenavenv) [deployer at rscdeployer ~]$ openstack user list | grep ceph | 3262aa7e03ab49c8a5710dfe3b16a136 | ceph_rgw This is my ceph.conf from one of my controllers : > [root at controllera ~]# cat /etc/ceph/ceph.conf > [client.rgw.controllera.rgw0] > host = controllera > rgw_keystone_url = https://dash.cloud.example.com:5000 > ##Authentication using username, password and tenant. Preferred. 
> rgw_keystone_verify_ssl = false
> rgw_keystone_api_version = 3
> rgw_keystone_admin_user = ceph_rgw
> rgw_keystone_admin_password = cos2Jcnpnw9BhGwvPm**************************
> rgw_keystone_admin_domain = Default
> rgw_keystone_admin_project = service
> rgw_s3_auth_use_keystone = true
> rgw_keystone_accepted_roles = admin
> rgw_keystone_implicit_tenants = true
> rgw_swift_account_in_url = true
> keyring = /var/lib/ceph/radosgw/ceph-rgw.controllera.rgw0/keyring
> log file = /var/log/ceph/ceph-rgw-controllera.rgw0.log
> rgw frontends = beast endpoint=10.10.1.5:8080
> rgw thread pool size = 512
> #For Debug
> debug ms = 1
> debug rgw = 20
>
>
> # Please do not change this file directly since it is managed by Ansible
> and will be overwritten
> [global]
> cluster network = 10.10.2.0/24
> fsid = da094354-6ade-415a-a424-************
> mon host = [v2:10.10.1.5:3300,v1:10.10.1.5:6789],[v2:10.10.1.9:3300,v1:
> 10.10.1.9:6789],[v2:10.10.1.13:3300,v1:10.10.1.13:6789]
> mon initial members = controllera,controllerb,controllerc
> osd pool default crush rule = 1
> *public network = 10.10.1.0/24 *
>

Here are my swift endpoints

(xenavenv) [deployer at rscdeployer ~]$ openstack endpoint list | grep swift
| 4082b4acf8bc4e4c9efc6e2d0e293724 | RegionOne | swift | object-store |
True | admin |
https://dashint.cloud.example.com:6780/v1/AUTH_%(project_id)s |
| b13a2f53e13e4650b4efdb8184eb0211 | RegionOne | swift | object-store |
True | internal |
https://dashint.cloud.example.com:6780/v1/AUTH_%(project_id)s |
| f85b36ff9a2b49bc9eaadf1aafdee28c | RegionOne | swift | object-store |
True | public |
https://dash.cloud.example.com:6780/v1/AUTH_%(project_id)s |

When I connect to Horizon -> Project -> Object Store -> Containers I get
these errors:

- Unable to get the swift container listing
- Unable to fetch the policy details.

I cannot create a new container from the WebUI; the Storage policy
parameter is empty.
If I try to create a new container from the CLI, I get this : (xenavenv) [deployer at rscdeployer ~]$ source cephrgw-openrc.sh (xenavenv) [deployer at rscdeployer ~]$ openstack container create demo -v START with options: container create demo -v command: container create -> openstackclient.object.v1.container.CreateContainer (auth=True) Using auth plugin: password Not Found (HTTP 404) END return value: 1 This is the log from RGW service when I execute the above command : > 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 CONTENT_LENGTH=0 > 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 HTTP_ACCEPT=*/* > 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 HTTP_ACCEPT_ENCODING=gzip, > deflate > 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 HTTP_HOST= > dashint.cloud.example.com:6780 > 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 > HTTP_USER_AGENT=openstacksdk/0.59.0 keystoneauth1/4.4.0 > python-requests/2.26.0 CPython/3.8.8 > 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 HTTP_VERSION=1.1 > 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 > HTTP_X_AUTH_TOKEN=gAAAAABiXUrjDFNzXx03mt1lbpUiCqNND1HACspSfg6h_TMxKYND5Hb9BO3FxH0a7CYoBXgRJywGszlK8cl-7zbUNRjHmxgIzmyh-CrWyGv793ZLOAmT_XShcrIKThjIIH3gTxYoX1TXwOKbsvMuZnI5EKKsol2y2MhcqPLeLGc28_AwoOr_b80 > 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 > HTTP_X_FORWARDED_FOR=10.10.3.16 > 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 HTTP_X_FORWARDED_PROTO=https > 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 REMOTE_ADDR=10.10.1.13 > 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 REQUEST_METHOD=PUT > 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 > REQUEST_URI=/v1/AUTH_971efa4cb18f42f7a405342072c39c9d/demo > 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 > SCRIPT_URI=/v1/AUTH_971efa4cb18f42f7a405342072c39c9d/demo > 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 SERVER_PORT=8080 > 2022-04-18T12:26:27.995+0100 7f22e07a9700 1 ====== starting new request > req=0x7f23221aa620 ===== > 2022-04-18T12:26:27.995+0100 7f22e07a9700 2 req 728157015944164764 > 0.000000000s 
initializing for trans_id = > tx000000a1aeef2b40f759c-00625d4ae3-4b389-default > 2022-04-18T12:26:27.995+0100 7f22e07a9700 10 req 728157015944164764 > 0.000000000s rgw api priority: s3=8 s3website=7 > 2022-04-18T12:26:27.995+0100 7f22e07a9700 10 req 728157015944164764 > 0.000000000s host=dashint.cloud.example.com > 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 req 728157015944164764 > 0.000000000s subdomain= domain= in_hosted_domain=0 > in_hosted_domain_s3website=0 > 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 req 728157015944164764 > 0.000000000s final domain/bucket subdomain= domain= in_hosted_domain=0 > in_hosted_domain_s3website=0 s->info.domain= > s->info.request_uri=/v1/AUTH_971efa4cb18f42f7a405342072c39c9d/demo > 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 req 728157015944164764 > 0.000000000s get_handler handler=22RGWHandler_REST_Obj_S3 > 2022-04-18T12:26:27.995+0100 7f22e07a9700 10 req 728157015944164764 > 0.000000000s handler=22RGWHandler_REST_Obj_S3 > 2022-04-18T12:26:27.995+0100 7f22e07a9700 2 req 728157015944164764 > 0.000000000s getting op 1 > 2022-04-18T12:26:27.995+0100 7f22e07a9700 1 -- 10.10.1.13:0/2715436964 > --> [v2:10.10.1.7:6801/4815,v1:10.10.1.7:6803/4815] -- > osd_op(unknown.0.0:1516 12.3 12:c14cb721:::script.prerequest.:head [call > version.read in=11b,getxattrs,stat] snapc 0=[] > ondisk+read+known_if_redirected e1182) v8 -- 0x56055eb2c400 con > 0x56055e53b000 > 2022-04-18T12:26:27.996+0100 7f230d002700 1 -- 10.10.1.13:0/2715436964 > <== osd.23 v2:10.10.1.7:6801/4815 22 ==== osd_op_reply(1516 > script.prerequest. 
[call,getxattrs,stat] v0'0 uv0 ondisk = -2 ((2) No such > file or directory)) v8 ==== 246+0+0 (crc 0 0 0) 0x56055ea18b40 con > 0x56055e53b000 > 2022-04-18T12:26:27.996+0100 7f22ddfa4700 10 req 728157015944164764 > 0.001000002s s3:put_obj scheduling with throttler client=2 cost=1 > 2022-04-18T12:26:27.996+0100 7f22ddfa4700 10 req 728157015944164764 > 0.001000002s s3:put_obj op=21RGWPutObj_ObjStore_S3 > 2022-04-18T12:26:27.996+0100 7f22ddfa4700 2 req 728157015944164764 > 0.001000002s s3:put_obj verifying requester > 2022-04-18T12:26:27.996+0100 7f22ddfa4700 20 req 728157015944164764 > 0.001000002s s3:put_obj rgw::auth::StrategyRegistry::s3_main_strategy_t: > trying rgw::auth::s3::AWSAuthStrategy > 2022-04-18T12:26:27.996+0100 7f22ddfa4700 20 req 728157015944164764 > 0.001000002s s3:put_obj rgw::auth::s3::AWSAuthStrategy: trying > rgw::auth::s3::S3AnonymousEngine > 2022-04-18T12:26:27.996+0100 7f22ddfa4700 20 req 728157015944164764 > 0.001000002s s3:put_obj rgw::auth::s3::S3AnonymousEngine granted access > 2022-04-18T12:26:27.996+0100 7f22ddfa4700 20 req 728157015944164764 > 0.001000002s s3:put_obj rgw::auth::s3::AWSAuthStrategy granted access > 2022-04-18T12:26:27.996+0100 7f22ddfa4700 2 req 728157015944164764 > 0.001000002s s3:put_obj normalizing buckets and tenants > 2022-04-18T12:26:27.996+0100 7f22ddfa4700 10 req 728157015944164764 > 0.001000002s s->object=AUTH_971efa4cb18f42f7a405342072c39c9d/demo > s->bucket=v1 > 2022-04-18T12:26:27.996+0100 7f22ddfa4700 2 req 728157015944164764 > 0.001000002s s3:put_obj init permissions > 2022-04-18T12:26:27.996+0100 7f22ddfa4700 20 req 728157015944164764 > 0.001000002s s3:put_obj get_system_obj_state: rctx=0x7f23221a9000 > obj=default.rgw.meta:root:v1 state=0x56055ea8c520 s->prefetch_data=0 > 2022-04-18T12:26:27.996+0100 7f22ddfa4700 10 req 728157015944164764 > 0.001000002s s3:put_obj cache get: name=default.rgw.meta+root+v1 : miss > 2022-04-18T12:26:27.996+0100 7f22ddfa4700 1 -- 10.10.1.13:0/2715436964 > --> 
[v2:10.10.1.3:6802/4933,v1:10.10.1.3:6806/4933] -- > osd_op(unknown.0.0:1517 11.b 11:d05f7b30:root::v1:head [call version.read > in=11b,getxattrs,stat] snapc 0=[] ondisk+read+known_if_redirected e1182) v8 > -- 0x56055eb2cc00 con 0x56055e585000 > 2022-04-18T12:26:27.997+0100 7f230c801700 1 -- 10.10.1.13:0/2715436964 > <== osd.3 v2:10.10.1.3:6802/4933 9 ==== osd_op_reply(1517 v1 > [call,getxattrs,stat] v0'0 uv0 ondisk = -2 ((2) No such file or directory)) > v8 ==== 230+0+0 (crc 0 0 0) 0x56055e39db00 con 0x56055e585000 > 2022-04-18T12:26:27.997+0100 7f22dd7a3700 10 req 728157015944164764 > 0.002000004s s3:put_obj cache put: name=default.rgw.meta+root+v1 > info.flags=0x0 > 2022-04-18T12:26:27.997+0100 7f22dd7a3700 10 req 728157015944164764 > 0.002000004s s3:put_obj adding default.rgw.meta+root+v1 to cache LRU end > 2022-04-18T12:26:27.997+0100 7f22dd7a3700 10 req 728157015944164764 > 0.002000004s s3:put_obj init_permissions on failed, ret=-2002 > 2022-04-18T12:26:27.997+0100 7f22dd7a3700 1 req 728157015944164764 > 0.002000004s op->ERRORHANDLER: err_no=-2002 new_err_no=-2002 > 2022-04-18T12:26:27.997+0100 7f22dbfa0700 1 -- 10.10.1.13:0/2715436964 > --> [v2:10.10.1.8:6804/4817,v1:10.10.1.8:6805/4817] -- > osd_op(unknown.0.0:1518 12.1f 12:fb11263f:::script.postrequest.:head [call > version.read in=11b,getxattrs,stat] snapc 0=[] > ondisk+read+known_if_redirected e1182) v8 -- 0x56055eb2d000 con > 0x56055e94c800 > 2022-04-18T12:26:27.998+0100 7f230d002700 1 -- 10.10.1.13:0/2715436964 > <== osd.9 v2:10.10.1.8:6804/4817 10 ==== osd_op_reply(1518 > script.postrequest. 
[call,getxattrs,stat] v0'0 uv0 ondisk = -2 ((2) No such > file or directory)) v8 ==== 247+0+0 (crc 0 0 0) 0x56055ea18b40 con > 0x56055e94c800 > 2022-04-18T12:26:27.998+0100 7f22d8f9a700 2 req 728157015944164764 > 0.003000006s s3:put_obj op status=0 > 2022-04-18T12:26:27.998+0100 7f22d8f9a700 2 req 728157015944164764 > 0.003000006s s3:put_obj http status=404 > 2022-04-18T12:26:27.998+0100 7f22d8f9a700 1 ====== req done > req=0x7f23221aa620 op status=0 http_status=404 latency=0.003000006s ====== > 2022-04-18T12:26:27.998+0100 7f22d8f9a700 1 beast: 0x7f23221aa620: > 10.10.1.13 - anonymous [18/Apr/2022:12:26:27.995 +0100] "PUT > /v1/AUTH_971efa4cb18f42f7a405342072c39c9d/demo HTTP/1.1" 404 214 - > "openstacksdk/0.59.0 keystoneauth1/4.4.0 python-requests/2.26.0 > CPython/3.8.8" - latency=0.003000006s > Could you help please. Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Mon Apr 18 22:38:13 2022 From: amy at demarco.com (Amy Marrich) Date: Mon, 18 Apr 2022 17:38:13 -0500 Subject: OPS Meetup Meeting tomorrow Message-ID: Just a reminder that the OPS Meetup team will meet tomorrow morning at 13:00 UTC in the #openstack-operators room on OFTC. See you there! -------------- next part -------------- An HTML attachment was scrubbed... URL: From wodel.youchi at gmail.com Mon Apr 18 23:19:48 2022 From: wodel.youchi at gmail.com (wodel youchi) Date: Tue, 19 Apr 2022 00:19:48 +0100 Subject: [Kolla-ansible][Xena] Error deploying Cloudkitty Message-ID: Hi, I am trying to deploy Cloudkitty, but I get this error message : TASK [cloudkitty : Creating Cloudkitty influxdb database] > ****************************************************** > task path: > /home/deployer/kollavenv/xenavenv/share/kolla-ansible/ansible/roles/cloudkitty/tasks/bootstrap.yml:36 fatal: [192.168.1.5 -> 192.168.1.5]: FAILED! 
=> { > "action": "influxdb_database", > "changed": false, > "invocation": { > "module_args": { > "database_name": "cloudkitty", > "hostname": "dashint.cloud.cerist.dz", > "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", > "path": "", > "port": 8086, > "proxies": {}, > "retries": 3, > "ssl": false, > "state": "present", > "timeout": null, > "udp_port": 4444, > "use_udp": false, > "username": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", > "validate_certs": true > } > }, > "msg": "('Connection aborted.', RemoteDisconnected('Remote end closed > connection without response',))" > } On the influxdb container I did this : > [root at controllerb ~]# docker ps | grep inf > 68b3ebfefbec > 192.168.1.16:4000/openstack.kolla/centos-source-influxdb:xena > "dumb-init --single-?" 22 minutes ago Up 22 minutes > influxdb > [root at controllerb ~]# docker exec -it influxdb /bin/bash > (influxdb)[influxdb at controllerb /]$ influx > Failed to connect to http://localhost:8086: Get http://localhost:8086/ping: > dial tcp [::1]:8086: connect: connection refused > Please check your connection settings and ensure 'influxd' is running. > (influxdb)[influxdb at controllerb /]$ ps -ef > UID PID PPID C STIME TTY TIME CMD > influxdb 1 0 0 Apr18 ? 00:00:00 dumb-init > --single-child -- kolla_start > influxdb 7 1 0 Apr18 ? 00:00:01 /usr/bin/influxd > -config /etc/influxdb/influxdb.conf > influxdb 45 0 0 00:12 pts/0 00:00:00 /bin/bash > influxdb 78 45 0 00:12 pts/0 00:00:00 ps -ef > (influxdb)[influxdb at controllerb /]$ I have no log file for influxdb, the directory is empty. Any ideas? Regards. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gmann at ghanshyammann.com Tue Apr 19 01:20:27 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 18 Apr 2022 20:20:27 -0500 Subject: [all][tc] Technical Committee next weekly meeting on April 21 at 1500 UTC Message-ID: <1803f6816f5.1079eb8e9187164.1531746983034997208@ghanshyammann.com> Hello Everyone, Technical Committee's next weekly meeting is scheduled for April 21 at 1500 UTC. If you would like to add topics for discussion, please add them to the below wiki page by Wednesday, April 20, at 2100 UTC. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting -gmann From akekane at redhat.com Tue Apr 19 05:09:43 2022 From: akekane at redhat.com (Abhishek Kekane) Date: Tue, 19 Apr 2022 10:39:43 +0530 Subject: [glance][devstack][tripleo][ansible][ceph_admin] Glance moving away from single store Configuration Message-ID: Hello Everyone, Glance has added support to configure multiple stores as a store backend in Stein cycle, and it is very stable now. So in upcoming cycles we are going to remove single store support and use multiple stores support only (PS. you can configure a single store using multiple stores configuration options). As a first step, we have started adding support in devstack [1][2][3] for configuring glance as multiple stores for each of the glance store backend. This cycle we are going to default multistore configuration in devstack so that our gate/check (CI) jobs should test using the same. Following cycles we will start removing single store support from glance code base. If you have any questions related to this work kindly revert back to this mail or you can join us in our weekly meeting, every Thursday at 1400 UTC #openstack-meeting IRC channel as well. 
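For readers who have not used the multiple stores options yet, a single backend can still be expressed through them — a minimal, illustrative glance-api.conf sketch (the backend name "fast" and the datadir path are invented for the example, not taken from this thread):

```ini
# Hedged sketch: one file-backed store configured the multistore way.
# "fast" is an arbitrary backend label; the path is illustrative.
[DEFAULT]
enabled_backends = fast:file

[glance_store]
default_backend = fast

[fast]
filesystem_store_datadir = /var/lib/glance/images/
```

Additional backends are added by extending the comma-separated `enabled_backends` list and giving each label its own section.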
[1] https://review.opendev.org/c/openstack/devstack/+/741654 [2] https://review.opendev.org/c/openstack/devstack-plugin-ceph/+/741801/ [3] https://review.opendev.org/c/openstack/devstack/+/741802 Thank you, Abhishek Kekane -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Tue Apr 19 06:29:42 2022 From: pierre at stackhpc.com (Pierre Riteau) Date: Tue, 19 Apr 2022 08:29:42 +0200 Subject: [Kolla-ansible][Xena] Error deploying Cloudkitty In-Reply-To: References: Message-ID: Hello, InfluxDB is configured to only listen on the internal API interface. Can you check the hostname you are using resolves correctly from the cloudkitty host? Inside the influxdb container, you should use `influxdb -host ` with the internal IP of the influxdb host. Also check if the output of `docker logs influxdb` has any logs. Best wishes, Pierre Riteau (priteau) On Tue, 19 Apr 2022 at 01:24, wodel youchi wrote: > Hi, > > I am trying to deploy Cloudkitty, but I get this error message : > > TASK [cloudkitty : Creating Cloudkitty influxdb database] >> ****************************************************** >> task path: >> /home/deployer/kollavenv/xenavenv/share/kolla-ansible/ansible/roles/cloudkitty/tasks/bootstrap.yml:36 > > > fatal: [192.168.1.5 -> 192.168.1.5]: FAILED! 
=> { >> "action": "influxdb_database", >> "changed": false, >> "invocation": { >> "module_args": { >> "database_name": "cloudkitty", >> "hostname": "dashint.cloud.cerist.dz", >> "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", >> "path": "", >> "port": 8086, >> "proxies": {}, >> "retries": 3, >> "ssl": false, >> "state": "present", >> "timeout": null, >> "udp_port": 4444, >> "use_udp": false, >> "username": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", >> "validate_certs": true >> } >> }, >> "msg": "('Connection aborted.', RemoteDisconnected('Remote end closed >> connection without response',))" >> } > > > > On the influxdb container I did this : > >> [root at controllerb ~]# docker ps | grep inf >> 68b3ebfefbec >> 192.168.1.16:4000/openstack.kolla/centos-source-influxdb:xena >> "dumb-init --single-?" 22 minutes ago Up 22 minutes >> influxdb >> [root at controllerb ~]# docker exec -it influxdb /bin/bash >> (influxdb)[influxdb at controllerb /]$ influx >> Failed to connect to http://localhost:8086: Get >> http://localhost:8086/ping: dial tcp [::1]:8086: connect: connection >> refused >> Please check your connection settings and ensure 'influxd' is running. >> (influxdb)[influxdb at controllerb /]$ ps -ef >> UID PID PPID C STIME TTY TIME CMD >> influxdb 1 0 0 Apr18 ? 00:00:00 dumb-init >> --single-child -- kolla_start >> influxdb 7 1 0 Apr18 ? 00:00:01 /usr/bin/influxd >> -config /etc/influxdb/influxdb.conf >> influxdb 45 0 0 00:12 pts/0 00:00:00 /bin/bash >> influxdb 78 45 0 00:12 pts/0 00:00:00 ps -ef >> (influxdb)[influxdb at controllerb /]$ > > > I have no log file for influxdb, the directory is empty. > > Any ideas? > > Regards. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From swogatpradhan22 at gmail.com Tue Apr 19 07:05:29 2022 From: swogatpradhan22 at gmail.com (Swogat Pradhan) Date: Tue, 19 Apr 2022 12:35:29 +0530 Subject: ERROR openstack [-] Resource OS::TripleO::Network::Ports::ControlPlaneVipPort maps to type OS::Neutron::Port and the Neutron service is not available when using ephemeral Heat.| Openstack tripleo wallaby version Message-ID: Hi, I am currently trying to deploy openstack wallaby using tripleo arch. I created the network jinja templates, ran the following commands also: #openstack overcloud network provision --stack overcloud --output networks-deployed-environment.yaml custom_network_data.yaml # openstack overcloud network vip provision --stack overcloud --output vip-deployed-environment.yaml custom_vip_data.yaml # openstack overcloud node provision --stack overcloud --overcloud-ssh-key /home/stack/sshkey/id_rsa overcloud-baremetal-deploy.yaml and used the environment files in the openstack overcloud deploy command: (undercloud) [stack at hkg2director ~]$ cat deploy.sh #!/bin/bash THT=/usr/share/openstack-tripleo-heat-templates/ CNF=/home/stack/ openstack overcloud deploy --templates $THT \ -r $CNF/templates/roles_data.yaml \ -n $CNF/workplace/custom_network_data.yaml \ -e ~/containers-prepare-parameter.yaml \ -e $CNF/templates/node-info.yaml \ -e $CNF/templates/scheduler-hints.yaml \ -e $CNF/workplace/networks-deployed-environment.yaml \ -e $CNF/workplace/vip-deployed-environment.yaml \ -e $CNF/workplace/overcloud-baremetal-deployed.yaml \ -e $CNF/workplace/custom-net-bond-with-vlans.yaml Now when i run the ./deploy.sh script i encounter an error stating: ERROR openstack [-] Resource OS::TripleO::Network::Ports::ControlPlaneVipPort maps to type OS::Neutron::Port and the Neutron service is not available when using ephemeral Heat. 
The generated environments from 'openstack overcloud baremetal provision' and 'openstack overcloud network provision' must be included with the deployment command.: tripleoclient.exceptions.InvalidConfiguration: Resource OS::TripleO::Network::Ports::ControlPlaneVipPort maps to type OS::Neutron::Port and the Neutron service is not available when using ephemeral Heat. The generated environments from 'openstack overcloud baremetal provision' and 'openstack overcloud network provision' must be included with the deployment command. 2022-04-19 13:47:16.582 735924 INFO osc_lib.shell [-] END return value: 1 Can someone tell me where the mistake is? With regards, Swogat Pradhan -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: custom_vip_data.yml Type: application/octet-stream Size: 1457 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: vip-deployed-environment.yml Type: application/octet-stream Size: 1429 bytes Desc: not available URL: From fpantano at redhat.com Tue Apr 19 07:07:15 2022 From: fpantano at redhat.com (Francesco Pantano) Date: Tue, 19 Apr 2022 09:07:15 +0200 Subject: [glance][devstack][tripleo][ansible][ceph_admin] Glance moving away from single store Configuration In-Reply-To: References: Message-ID: Hi Abhishek, Thanks for sharing the changes you made in devstack. As you might know, in devstack-plugin-ceph there's an in progress effort to migrate to cephadm [1] and we're trying to align the existing code to make all the OpenStack components to work with this new approach. For this reason I updated the patch [2] to include the updates you submitted in [3]. Feel free to go through the topic [1] and see if we need extra changes for glance. 
Thanks, [1] https://review.opendev.org/q/topic:bp%252Fcephadm_deploy [2] https://review.opendev.org/c/openstack/devstack-plugin-ceph/+/826484/59..60/devstack/lib/cephadm [3] https://review.opendev.org/c/openstack/devstack-plugin-ceph/+/741801/ On Tue, Apr 19, 2022 at 7:20 AM Abhishek Kekane wrote: > Hello Everyone, > > Glance has added support to configure multiple stores as a store backend > in Stein cycle, and it is very stable now. So in upcoming cycles we are > going to remove single store support and use multiple stores support only > (PS. you can configure a single store using multiple stores configuration > options). As a first step, we have started adding support in devstack > [1][2][3] for configuring glance as multiple stores for each of the glance > store backend. This cycle we are going to default multistore configuration > in devstack so that our gate/check (CI) jobs should test using the same. > Following cycles we will start removing single store support from glance > code base. > > If you have any questions related to this work kindly revert back to this > mail or you can join us in our weekly meeting, every Thursday at 1400 UTC > #openstack-meeting IRC channel as well. > > [1] https://review.opendev.org/c/openstack/devstack/+/741654 > [2] https://review.opendev.org/c/openstack/devstack-plugin-ceph/+/741801/ > [3] https://review.opendev.org/c/openstack/devstack/+/741802 > > > Thank you, > > Abhishek Kekane > -- Francesco Pantano GPG KEY: F41BD75C -------------- next part -------------- An HTML attachment was scrubbed... URL: From akekane at redhat.com Tue Apr 19 07:28:03 2022 From: akekane at redhat.com (Abhishek Kekane) Date: Tue, 19 Apr 2022 12:58:03 +0530 Subject: [glance][devstack][tripleo][ansible][ceph_admin] Glance moving away from single store Configuration In-Reply-To: References: Message-ID: Hi Francesco, Thank you for sharing the updates and your efforts, I will go through the changes proposed by you. 
Thanks & Best Regards, Abhishek Kekane On Tue, Apr 19, 2022 at 12:37 PM Francesco Pantano wrote: > Hi Abhishek, > Thanks for sharing the changes you made in devstack. > As you might know, in devstack-plugin-ceph there's an in progress effort > to migrate to cephadm [1] and we're trying to > align the existing code to make all the OpenStack components to work with > this new approach. > For this reason I updated the patch [2] to include the updates you > submitted in [3]. > > Feel free to go through the topic [1] and see if we need extra changes for > glance. > > Thanks, > > [1] https://review.opendev.org/q/topic:bp%252Fcephadm_deploy > [2] > https://review.opendev.org/c/openstack/devstack-plugin-ceph/+/826484/59..60/devstack/lib/cephadm > [3] https://review.opendev.org/c/openstack/devstack-plugin-ceph/+/741801/ > > > On Tue, Apr 19, 2022 at 7:20 AM Abhishek Kekane > wrote: > >> Hello Everyone, >> >> Glance has added support to configure multiple stores as a store backend >> in Stein cycle, and it is very stable now. So in upcoming cycles we are >> going to remove single store support and use multiple stores support only >> (PS. you can configure a single store using multiple stores configuration >> options). As a first step, we have started adding support in devstack >> [1][2][3] for configuring glance as multiple stores for each of the glance >> store backend. This cycle we are going to default multistore configuration >> in devstack so that our gate/check (CI) jobs should test using the same. >> Following cycles we will start removing single store support from glance >> code base. >> >> If you have any questions related to this work kindly revert back to this >> mail or you can join us in our weekly meeting, every Thursday at 1400 UTC >> #openstack-meeting IRC channel as well. 
>> >> [1] https://review.opendev.org/c/openstack/devstack/+/741654 >> [2] https://review.opendev.org/c/openstack/devstack-plugin-ceph/+/741801/ >> [3] https://review.opendev.org/c/openstack/devstack/+/741802 >> >> >> Thank you, >> >> Abhishek Kekane >> > > > -- > Francesco Pantano > GPG KEY: F41BD75C > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Tue Apr 19 08:08:33 2022 From: eblock at nde.ag (Eugen Block) Date: Tue, 19 Apr 2022 08:08:33 +0000 Subject: [neutron][nova] port binding fails for existing networks In-Reply-To: References: <20220416084030.Horde.Q8KAlVdykKFDqwEI3r0riDm@webmail.nde.ag> <20220416222311.Horde.joCC7hAiYQEJJn_jZq8cDrI@webmail.nde.ag> Message-ID: <20220419080833.Horde.6nZwkIdJ3ybxpX6FOhkGBpx@webmail.nde.ag> I have an interesting update on this. For the last two days I let the cloud work in a degraded (and unmanaged) state wrt pacemaker, meaning I stopped apache, memcached, neutron, openvswitch, nova and octavia services on one control node. Today I wanted to start more services one by one, hoping to find the responsible one. But everything works fine, after each new service I tried to launch an instance and all attempts were successful. So I disabled the pacemaker maintenance mode and retried, and still everything worked. I assume that some of the services still had some cached references to the disabled control node and couldn't recover, does that make sense? On the other hand, we rebooted both control nodes a few times, I expected that to clean up anything like that. So while the issue seems to be resolved I still have no idea what went wrong. :-( Anyway, I hope to bring the third node online this week so we'll hopefully be more resilient against control node failure. Thanks again for your comments! Eugen Zitat von Laurent Dumont : > You can probably try each one in turn. Might be an issue with one of the > two. 
> > On Sat, Apr 16, 2022 at 6:23 PM Eugen Block wrote:
>> Thank you both for your comments, I appreciate it!
>> Before digging into the logs I tried again with one of the two control
>> nodes disabled. But I didn't disable all services, only apache,
>> memcached, neutron, nova and octavia so all my requests would go to
>> the active control node but rabbit and galera would be in sync. This
>> already seemed to clean things up somehow; now I was able to launch
>> instances and LBs into an active state. Awesome! Then I started the
>> mentioned services on the other control node again and things stopped
>> working. Note that this setup worked for months and we have another
>> cloud with two control nodes which works like a charm for years now.
>> The only significant thing I noticed while switching back to one
>> active neutron/nova/octavia node was this message from the
>> neutron-dhcp-agent.log:
>>
>> 2022-04-16 23:59:29.180 36882 ERROR neutron_lib.rpc
>> [req-905aecd6-ff22-4549-a0cb-ef5259692f5d - - - - -] Timeout in RPC
>> method get_active_networks_info. Waiting for 510 seconds before next
>> attempt. If the server is not down, consider increasing the
>> rpc_response_timeout option as Neutron server(s) may be overloaded and
>> unable to respond quickly enough.:
>> oslo_messaging.exceptions.MessagingTimeout: Timed out waiting for a
>> reply to message ID 6676c45f5b0c42af8e34f8fb4aba3aca
>>
>> I'll need to take a closer look for more of these messages after the
>> weekend, but more importantly find out why we can't seem to re-enable
>> the second node. I'll enable debug logs then and hopefully find a trace
>> to the root cause.
>> If you have other comments please don't hesitate, I'm thankful for any ideas.
>>
>> Thanks!
>> Eugen
>>
>>
>> Zitat von Sean Mooney :
>>
>> > On Sat, 2022-04-16 at 09:30 -0400, Laurent Dumont wrote:
>> >> I've seen failures with port bindings when rabbitmq was not in a good
>> >> state. 
Messages between services transit through Rabbit so Nova/Neutron
>> >> might not be able to follow the flow correctly.
>> > that is not quite right.
>> >
>> > inter-service communication happens via http rest apis.
>> > intra-service communication happens via rabbit.
>> > nova never calls neutron over rabbit nor does neutron call nova over
>> > rabbit
>> >
>> > however it is true that rabbit issues can sometimes cause port binding
>> > issues.
>> > if you are using ml2/ovs the agent report/heartbeat can be lost from
>> > the perspective of the neutron server
>> > and it can consider the service down. if the agent is "down" then
>> > the ml2/ovs mech driver will refuse to
>> > bind the port.
>> >
>> > assuming the agent is up in the db, the request to bind the port never
>> > actually transits rabbitmq.
>> >
>> > the compute node makes an http request to the neutron-server which
>> > hosts the api endpoint and executes the ml2 drivers.
>> > the ml2/ovs driver only uses info from the neutron db which it
>> > accesses directly.
>> >
>> > the neutron server debug logs should have records for binding
>> > requests which should detail why the port binding failed.
>> > it should show each loaded ml2 driver being tried in sequence to
>> > bind the port and, if it can't, log the reason why.
>> >
>> > i would start by checking that the ovs l2 agents show as up in the db/api,
>> > then find a port id for one of the failed port bindings and trace
>> > the debug logs for the port binding in the neutron server
>> > logs for the error and if you find one post it here.
>> >
>> >>
>> >> Can you double check that rabbit is good to go?
>> >>
>> >> - rabbitmqctl cluster_status
>> >> - rabbitmqctl list_queues
>> >>
>> >> I would also recommend turning the logs to DEBUG for all the services
>> >> and trying to follow a server create request-id. 
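A rough, hedged sketch of what "following a server create request-id" can look like once DEBUG logs are collected — the log lines below are invented for illustration; only the oslo.log-style `[req-...]` token matters:

```python
# Hedged sketch, not part of the thread: group oslo.log-style lines by
# their request id so one boot request can be followed across services.
# The sample lines are invented; real ones would come from the collected
# nova/neutron DEBUG logs.
import re
from collections import defaultdict

LOG_LINES = [
    "2022-04-16 10:01:02.123 100 DEBUG nova.compute.manager "
    "[req-905aecd6-ff22-4549-a0cb-ef5259692f5d - - - - -] building instance",
    "2022-04-16 10:01:02.456 200 DEBUG neutron.plugins.ml2.managers "
    "[req-905aecd6-ff22-4549-a0cb-ef5259692f5d - - - - -] attempting to bind port",
    "2022-04-16 10:01:03.000 300 DEBUG nova.scheduler.manager "
    "[req-11111111-2222-3333-4444-555555555555 - - - - -] unrelated request",
]

REQ_ID = re.compile(r"\[(req-[0-9a-f-]+)")

def group_by_request(lines):
    """Map request id -> list of log lines carrying that id."""
    grouped = defaultdict(list)
    for line in lines:
        match = REQ_ID.search(line)
        if match:
            grouped[match.group(1)].append(line)
    return grouped

grouped = group_by_request(LOG_LINES)
for line in grouped["req-905aecd6-ff22-4549-a0cb-ef5259692f5d"]:
    print(line)
```

In practice the same effect comes from grepping the request id across the service logs; the point is that one id ties the nova and neutron sides of a single boot together.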
>> >> >> >> On Sat, Apr 16, 2022 at 4:44 AM Eugen Block wrote: >> >> >> >> > Hi *, >> >> > >> >> > I have a kind of strange case which I'm trying to solve for hours, I >> >> > could use some fresh ideas. >> >> > It's a HA cloud (Victoria) deployed by Salt and the 2 control nodes >> >> > are managed by pacemaker, the third controller will join soon. There >> >> > are around 16 compute nodes at the moment. >> >> > This two-node-control plane works well, except if there are unplanned >> >> > outages. Since the last outage of one control node we struggle to >> >> > revive neutron (I believe neutron is the issue here). I'll try to >> >> > focus on the main issue here, let me know if more details are >> required. >> >> > After the failed node was back online all openstack agents show as >> >> > "up" (openstack compute service list, openstack network agent list). >> >> > Running VMs don't seem to be impacted (as far as I can tell). But we >> >> > can't create new instances in existing networks, and since we use >> >> > Octavia we also can't (re)build any LBs at the moment. When I create a >> >> > new test network the instance spawns successfully and is active within >> >> > a few seconds. For existing networks we get the famous "port binding >> >> > failed" from nova-compute.log. But I see the port being created, it >> >> > just can't be attached to the instance. One more strange thing: I >> >> > don't see any entries in the nova-scheduler.log or nova-conductor.log >> >> > for the successfully built instance, except for the recently mentioned >> >> > etcd3gw message from nova-conductor, but this didn't impact the >> >> > instance creation yet. >> >> > We have investigated this for hours, we have rebooted both control >> >> > nodes multiple times in order to kill any remaining processes. 
The
>> >> > galera DB seems fine, rabbitmq also behaves normally (I think), we
>> >> > tried multiple times to put one node in standby to only have one node
>> >> > to look at which also didn't help.
>> >> > So basically we restarted everything multiple times on the control
>> >> > nodes and also nova-compute and openvswitch-agent on all compute
>> >> > nodes, the issue is still not resolved.
>> >> > Does anyone have further ideas to resolve this? I'd be happy to
>> >> > provide more details, just let me know what you need.
>> >> >
>> >> > Happy Easter!
>> >> > Eugen
>> >> >
>> >> >
>> >> >
>> >>
>> >>

From skaplons at redhat.com  Tue Apr 19 08:48:45 2022
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Tue, 19 Apr 2022 10:48:45 +0200
Subject: [neutron] CI meeting agenda for 19.04
Message-ID: <3163790.aeNJFYEL58@p1>

Hi,

Just a quick reminder that today at 15:00 UTC we will have the Neutron CI
weekly meeting.
It will be the video meeting this week: *https://meetpad.opendev.org/neutron-ci-meetings[1]*
Agenda for the meeting is available on
https://etherpad.opendev.org/p/neutron-ci-meetings#L10[2]

--
Slawek Kaplonski
Principal Software Engineer
Red Hat
--------
[1] https://meetpad.opendev.org/neutron-ci-meetings
[2] https://etherpad.opendev.org/p/neutron-ci-meetings#L10
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: This is a digitally signed message part.
URL: 

From noonedeadpunk at gmail.com  Tue Apr 19 09:39:13 2022
From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov)
Date: Tue, 19 Apr 2022 11:39:13 +0200
Subject: [openstack-ansible] Nominate Damian Dąbrowski for
 openstack-ansible core team
Message-ID: 

Hi OSA Cores!

I'm happy to nominate Damian Dąbrowski (damiandabrowski) to the core
reviewers team. 
He has been doing a good job lately in reviewing incoming patches, helping
out in IRC and participating in community activities, so I think he will be
a good match for the Core Reviewers group.

So I call for current Core Reviewers to support this nomination or raise
objections to it until the 22nd of April 2022. If no objections are raised,
we will add Damian to the team next week.

--
Kind regards,
Dmitriy Rabotyagov
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From hjensas at redhat.com  Tue Apr 19 10:53:39 2022
From: hjensas at redhat.com (Harald Jensas)
Date: Tue, 19 Apr 2022 12:53:39 +0200
Subject: ERROR openstack [-] Resource
 OS::TripleO::Network::Ports::ControlPlaneVipPort maps to type
 OS::Neutron::Port and the Neutron service is not available when using
 ephemeral Heat.| Openstack tripleo wallaby version
In-Reply-To: 
References: 
Message-ID: 

On 4/19/22 09:05, Swogat Pradhan wrote:
> Hi,
> I am currently trying to deploy openstack wallaby using tripleo arch.
> I created the network jinja templates, ran the following commands also:
> 
> #openstack overcloud network provision --stack overcloud --output
> networks-deployed-environment.yaml custom_network_data.yaml
> # openstack overcloud network vip provision --stack overcloud --output
> vip-deployed-environment.yaml custom_vip_data.yaml
> # openstack overcloud node provision 
--stack overcloud
> --overcloud-ssh-key /home/stack/sshkey/id_rsa
> overcloud-baremetal-deploy.yaml
> 
> and used the environment files in the openstack overcloud deploy command:
> 
> (undercloud) [stack at hkg2director ~]$ cat deploy.sh
> #!/bin/bash
> THT=/usr/share/openstack-tripleo-heat-templates/
> CNF=/home/stack/
> openstack overcloud deploy --templates $THT \
> -r $CNF/templates/roles_data.yaml \
> -n $CNF/workplace/custom_network_data.yaml \
> -e ~/containers-prepare-parameter.yaml \
> -e $CNF/templates/node-info.yaml \
> -e $CNF/templates/scheduler-hints.yaml \
> -e $CNF/workplace/networks-deployed-environment.yaml \
> -e $CNF/workplace/vip-deployed-environment.yaml \
> -e $CNF/workplace/overcloud-baremetal-deployed.yaml \
> -e $CNF/workplace/custom-net-bond-with-vlans.yaml
> 

Does $CNF/workplace/custom-net-bond-with-vlans.yaml set
OS::TripleO::Network::Ports::ControlPlaneVipPort?

> Now when I run the ./deploy.sh script I encounter an error stating:
> 
> ERROR openstack [-] Resource
> OS::TripleO::Network::Ports::ControlPlaneVipPort maps to type
> OS::Neutron::Port and the Neutron service is not available when using
> ephemeral Heat. 
> > With regards, > Swogat Pradhan From wodel.youchi at gmail.com Tue Apr 19 11:15:14 2022 From: wodel.youchi at gmail.com (wodel youchi) Date: Tue, 19 Apr 2022 12:15:14 +0100 Subject: [Kolla-ansible][Xena] Error deploying Cloudkitty In-Reply-To: References: Message-ID: Hi, I tested with influx -host First I tested with the internal api IP address of the host itself, and it did work : influx -host 10.10.3.9 Then I tested with VIP of the internal api, which is held by haproxy : influx -host 10.10.3.1, it didn't work, looking in the haproxy configuration file of influxdb, I noticed that haproxy uses https in the front end, so I tested with : influx -ssl -host 10.10.3.1 and it did work. And if you see the error message from TASK [cloudkitty : Creating Cloudkitty influxdb database], ssl is false fatal: [192.168.1.5 -> 192.168.1.5]: FAILED! => { "action": "influxdb_database", "changed": false, "invocation": { "module_args": { "database_name": "cloudkitty", "hostname": "dashint.cloud.cerist.dz", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "path": "", "port": 8086, "proxies": {}, "retries": 3, *"ssl": false,* "state": "present", "timeout": null, "udp_port": 4444, "use_udp": false, "username": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "validate_certs": true } }, "msg": "('Connection aborted.', RemoteDisconnected('Remote end closed connection without response',))" } Could that be the problem? if yes how to force Cloudkitty to enable ssl? Regards. Virus-free. www.avast.com <#DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2> Le mar. 19 avr. 2022 ? 07:30, Pierre Riteau a ?crit : > Hello, > > InfluxDB is configured to only listen on the internal API interface. Can > you check the hostname you are using resolves correctly from the cloudkitty > host? > Inside the influxdb container, you should use `influxdb -host > ` with the internal IP of the influxdb host. > > Also check if the output of `docker logs influxdb` has any logs. 
> > Best wishes, > Pierre Riteau (priteau) > > On Tue, 19 Apr 2022 at 01:24, wodel youchi wrote: > >> Hi, >> >> I am trying to deploy Cloudkitty, but I get this error message : >> >> TASK [cloudkitty : Creating Cloudkitty influxdb database] >>> ****************************************************** >>> task path: >>> /home/deployer/kollavenv/xenavenv/share/kolla-ansible/ansible/roles/cloudkitty/tasks/bootstrap.yml:36 >> >> >> fatal: [192.168.1.5 -> 192.168.1.5]: FAILED! => { >>> "action": "influxdb_database", >>> "changed": false, >>> "invocation": { >>> "module_args": { >>> "database_name": "cloudkitty", >>> "hostname": "dashint.cloud.cerist.dz", >>> "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", >>> "path": "", >>> "port": 8086, >>> "proxies": {}, >>> "retries": 3, >>> "ssl": false, >>> "state": "present", >>> "timeout": null, >>> "udp_port": 4444, >>> "use_udp": false, >>> "username": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", >>> "validate_certs": true >>> } >>> }, >>> "msg": "('Connection aborted.', RemoteDisconnected('Remote end >>> closed connection without response',))" >>> } >> >> >> >> On the influxdb container I did this : >> >>> [root at controllerb ~]# docker ps | grep inf >>> 68b3ebfefbec >>> 192.168.1.16:4000/openstack.kolla/centos-source-influxdb:xena >>> "dumb-init --single-?" 22 minutes ago Up 22 minutes >>> influxdb >>> [root at controllerb ~]# docker exec -it influxdb /bin/bash >>> (influxdb)[influxdb at controllerb /]$ influx >>> Failed to connect to http://localhost:8086: Get >>> http://localhost:8086/ping: dial tcp [::1]:8086: connect: connection >>> refused >>> Please check your connection settings and ensure 'influxd' is running. >>> (influxdb)[influxdb at controllerb /]$ ps -ef >>> UID PID PPID C STIME TTY TIME CMD >>> influxdb 1 0 0 Apr18 ? 00:00:00 dumb-init >>> --single-child -- kolla_start >>> influxdb 7 1 0 Apr18 ? 
00:00:01 /usr/bin/influxd >>> -config /etc/influxdb/influxdb.conf >>> influxdb 45 0 0 00:12 pts/0 00:00:00 /bin/bash >>> influxdb 78 45 0 00:12 pts/0 00:00:00 ps -ef >>> (influxdb)[influxdb at controllerb /]$ >> >> >> I have no log file for influxdb, the directory is empty. >> >> Any ideas? >> >> Regards. >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephenfin at redhat.com Tue Apr 19 11:19:45 2022 From: stephenfin at redhat.com (Stephen Finucane) Date: Tue, 19 Apr 2022 12:19:45 +0100 Subject: [murano][octavia][sahara][zaqar][zun][oslo] Pending removal of 'oslo_db.sqlalchemy.test_base' In-Reply-To: <22398351.4csPzL39Zc@whitebase.usersys.redhat.com> References: <22398351.4csPzL39Zc@whitebase.usersys.redhat.com> Message-ID: <16646eb8db7d5cb756260f03244ead878b2d6fdc.camel@redhat.com> On Sat, 2022-04-16 at 14:50 +0200, Luigi Toscano wrote: > On Thursday, 14 April 2022 20:47:10 CEST Stephen Finucane wrote: > > o/ > > > > This is a heads up to the maintainers of the aforementioned projects that > > the oslo team are planning to remove the 'oslo_db.sqlalchemy.test_base' > > module this cycle. This module has been deprecated since 2015 and we want > > to get rid of it to reduce load on the overburdened oslo maintainers. I > > have already fixed the issue in a couple of projects. These can be used as > > blueprints for fixing the remaining affected projects: > > > > * masakari (https://review.opendev.org/c/openstack/masakari/+/802761) > > * glance (https://review.opendev.org/c/openstack/glance/+/802762) > > * manila (https://review.opendev.org/c/openstack/manila/+/802763) > > > > I would love to fix the remaining projects but my limited time is currently > > focused elsewhere. The oslo.db change is available at [1]. We'd like this to > > be merged in the next month but we can push that out to later in the cycle > > if teams need more time. Just shout. > > Thanks for the notice and the example. 
I've tried to draft a patch but I'm > puzzled because it works locally with all three Python versions (py36, py38, > py39) on Fedora 35, but it fails on the gates. What am I missing? > > https://review.opendev.org/c/openstack/sahara/+/838046 I replied on the patch and posted a modification to get things passing. Stephen > > Ciao From rafaelweingartner at gmail.com Tue Apr 19 11:36:45 2022 From: rafaelweingartner at gmail.com (Rafael Weingärtner) Date: Tue, 19 Apr 2022 08:36:45 -0300 Subject: [Kolla-ansible][Xena] Error deploying Cloudkitty In-Reply-To: References: Message-ID: It seems that it was always assumed to be HTTP and not HTTPS: https://github.com/openstack/kolla-ansible/blob/a52cf61b2234d2f078dd2893dd37de63e20ea1aa/ansible/roles/cloudkitty/tasks/bootstrap.yml#L36 . Maybe we will need to change it to use SSL whenever needed. On Tue, Apr 19, 2022 at 8:19 AM wodel youchi wrote: > Hi, > > I tested with influx -host > First I tested with the internal api IP address of the host itself, and it > did work : influx -host 10.10.3.9 > Then I tested with VIP of the internal api, which is held by haproxy : > influx -host 10.10.3.1, it didn't work, looking in the haproxy > configuration file of influxdb, I noticed that haproxy uses https in the > front end, so I tested with : influx -ssl -host 10.10.3.1 and it did work. > > And if you see the error message from TASK [cloudkitty : Creating > Cloudkitty influxdb database], ssl is false > > fatal: [192.168.1.5 -> 192.168.1.5]: FAILED!
=> { > "action": "influxdb_database", > "changed": false, > "invocation": { > "module_args": { > "database_name": "cloudkitty", > "hostname": "dashint.cloud.cerist.dz", > "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", > "path": "", > "port": 8086, > "proxies": {}, > "retries": 3, > *"ssl": false,* > "state": "present", > "timeout": null, > "udp_port": 4444, > "use_udp": false, > "username": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", > "validate_certs": true > } > }, > "msg": "('Connection aborted.', RemoteDisconnected('Remote end closed > connection without response',))" > } > > Could that be the problem? if yes how to force Cloudkitty to enable ssl? > > Regards. > > > Virus-free. > www.avast.com > > <#m_-2160537011768264727_DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2> > > Le mar. 19 avr. 2022 ? 07:30, Pierre Riteau a > ?crit : > >> Hello, >> >> InfluxDB is configured to only listen on the internal API interface. Can >> you check the hostname you are using resolves correctly from the cloudkitty >> host? >> Inside the influxdb container, you should use `influxdb -host >> ` with the internal IP of the influxdb host. >> >> Also check if the output of `docker logs influxdb` has any logs. >> >> Best wishes, >> Pierre Riteau (priteau) >> >> On Tue, 19 Apr 2022 at 01:24, wodel youchi >> wrote: >> >>> Hi, >>> >>> I am trying to deploy Cloudkitty, but I get this error message : >>> >>> TASK [cloudkitty : Creating Cloudkitty influxdb database] >>>> ****************************************************** >>>> task path: >>>> /home/deployer/kollavenv/xenavenv/share/kolla-ansible/ansible/roles/cloudkitty/tasks/bootstrap.yml:36 >>> >>> >>> fatal: [192.168.1.5 -> 192.168.1.5]: FAILED! 
=> { >>>> "action": "influxdb_database", >>>> "changed": false, >>>> "invocation": { >>>> "module_args": { >>>> "database_name": "cloudkitty", >>>> "hostname": "dashint.cloud.cerist.dz", >>>> "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", >>>> "path": "", >>>> "port": 8086, >>>> "proxies": {}, >>>> "retries": 3, >>>> "ssl": false, >>>> "state": "present", >>>> "timeout": null, >>>> "udp_port": 4444, >>>> "use_udp": false, >>>> "username": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", >>>> "validate_certs": true >>>> } >>>> }, >>>> "msg": "('Connection aborted.', RemoteDisconnected('Remote end >>>> closed connection without response',))" >>>> } >>> >>> >>> >>> On the influxdb container I did this : >>> >>>> [root at controllerb ~]# docker ps | grep inf >>>> 68b3ebfefbec >>>> 192.168.1.16:4000/openstack.kolla/centos-source-influxdb:xena >>>> "dumb-init --single-?" 22 minutes ago Up 22 minutes >>>> influxdb >>>> [root at controllerb ~]# docker exec -it influxdb /bin/bash >>>> (influxdb)[influxdb at controllerb /]$ influx >>>> Failed to connect to http://localhost:8086: Get >>>> http://localhost:8086/ping: dial tcp [::1]:8086: connect: connection >>>> refused >>>> Please check your connection settings and ensure 'influxd' is running. >>>> (influxdb)[influxdb at controllerb /]$ ps -ef >>>> UID PID PPID C STIME TTY TIME CMD >>>> influxdb 1 0 0 Apr18 ? 00:00:00 dumb-init >>>> --single-child -- kolla_start >>>> influxdb 7 1 0 Apr18 ? 00:00:01 /usr/bin/influxd >>>> -config /etc/influxdb/influxdb.conf >>>> influxdb 45 0 0 00:12 pts/0 00:00:00 /bin/bash >>>> influxdb 78 45 0 00:12 pts/0 00:00:00 ps -ef >>>> (influxdb)[influxdb at controllerb /]$ >>> >>> >>> I have no log file for influxdb, the directory is empty. >>> >>> Any ideas? >>> >>> Regards. >>> >> -- Rafael Weing?rtner -------------- next part -------------- An HTML attachment was scrubbed... 
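The mismatch described above — a client speaking plain HTTP to an HTTPS-terminating haproxy frontend — can be confirmed from any host that reaches the internal VIP by probing InfluxDB's /ping endpoint both ways. This is a diagnostic sketch only; the VIP 10.10.3.1 and port 8086 are the values quoted in this thread and will differ per deployment.

```shell
# Plain HTTP against the TLS-only frontend: haproxy closes the connection
# without sending an HTTP response, which is exactly what the Ansible
# influxdb_database module reports as
# "Remote end closed connection without response".
curl -v http://10.10.3.1:8086/ping

# The same probe over TLS should return HTTP 204 from InfluxDB
# (-k skips certificate verification, in case the internal VIP
# certificate is self-signed).
curl -vk https://10.10.3.1:8086/ping
```

The `influx -ssl -host 10.10.3.1` test earlier in the thread checks the same thing through the InfluxDB CLI client.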
URL: From skaplons at redhat.com Tue Apr 19 13:26:12 2022 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 19 Apr 2022 15:26:12 +0200 Subject: [all][tc][Release Management] Improvements in project governance Message-ID: <1858624.taCxCBeP46@p1> Hi, During the Zed PTG sessions in the TC room we discussed some ideas for improving project governance. One of the topics was related to projects which don't really have any changes in a cycle. Currently we force a new release of basically the same code when the end of the cycle comes. Can/should we maybe change that and, instead of forcing a new release, reuse the last released version of the repo for the new release too? If yes, should we then automatically propose changing the release model to "independent"? What would be the best way for the Release Management team to notify the TC about such less active projects which don't need any new release in the cycle? That could be one of the potential conditions the TC team uses to check a project's health. Another question is related to projects which aren't really active and are broken at final release time. We had such a problem in the last cycle, see [1] for details. Should we still force pushing fixes for them so they can be released, or should we consider deprecating such projects and not releasing them at all? [1] http://lists.openstack.org/pipermail/openstack-discuss/2022-March/027864.html -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part.
URL: From marc.gariepy at calculquebec.ca Tue Apr 19 13:48:09 2022 From: marc.gariepy at calculquebec.ca (Marc Gariepy) Date: Tue, 19 Apr 2022 09:48:09 -0400 Subject: Re: [openstack-ansible] Nominate Damian Dąbrowski for openstack-ansible core team In-Reply-To: References: Message-ID: Hello, On Tue, Apr 19, 2022 at 5:39 AM Dmitriy Rabotyagov wrote: > Hi OSA Cores! > > I'm happy to nominate Damian Dąbrowski (damiandabrowski) to the core > reviewers team. > +2 Welcome Damian :) Marc Gariepy > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wodel.youchi at gmail.com Tue Apr 19 14:00:41 2022 From: wodel.youchi at gmail.com (wodel youchi) Date: Tue, 19 Apr 2022 15:00:41 +0100 Subject: [Kolla-ansible][Xena] Error deploying Cloudkitty In-Reply-To: References: Message-ID: Hi, I tried this: I edited /home/deployer/kollavenv/xenavenv/share/kolla-ansible/ansible/roles/cloudkitty/defaults/main.yml and added *cloudkitty_influxdb_use_ssl: "true"* But it didn't work, then I added the same variable to globals.yml, but it didn't work either.
So finally I edited /home/deployer/kollavenv/xenavenv/share/kolla-ansible/ansible/roles/cloudkitty/tasks/bootstrap.yml and added the ssl variable as a workaround: > - name: Creating Cloudkitty influxdb database > become: true > kolla_toolbox: > module_name: influxdb_database > module_args: > hostname: "{{ influxdb_address }}" > port: "{{ influxdb_http_port }}" > * ssl: True* > database_name: "{{ cloudkitty_influxdb_name }}" > run_once: True > delegate_to: "{{ groups['cloudkitty-api'][0] }}" > when: cloudkitty_storage_backend == 'influxdb' > I don't know if this would have worked; I just wanted to give the idea: - name: Creating Cloudkitty influxdb database > become: true > kolla_toolbox: > module_name: influxdb_database > module_args: > hostname: "{{ influxdb_address }}" > port: "{{ influxdb_http_port }}" > * ssl: "{{ cloudkitty_influxdb_use_ssl }}"* > database_name: "{{ cloudkitty_influxdb_name }}" > run_once: True > delegate_to: "{{ groups['cloudkitty-api'][0] }}" > when: cloudkitty_storage_backend == 'influxdb' > Regards. On Tue, 19 Apr 2022 at 12:37, Rafael Weingärtner < rafaelweingartner at gmail.com> wrote: > It seems that it was always assumed to be HTTP and not HTTPs: > https://github.com/openstack/kolla-ansible/blob/a52cf61b2234d2f078dd2893dd37de63e20ea1aa/ansible/roles/cloudkitty/tasks/bootstrap.yml#L36 > . > > Maybe, we will need to change that to use SSL whenever needed. > > On Tue, Apr 19, 2022 at 8:19 AM wodel youchi > wrote: > >> Hi, >> >> I tested with influx -host >> First I tested with the internal api IP address of the host itself, and >> it did work : influx -host 10.10.3.9 >> Then I tested with VIP of the internal api, which is held by haproxy : >> influx -host 10.10.3.1, it didn't work, looking in the haproxy >> configuration file of influxdb, I noticed that haproxy uses https in the >> front end, so I tested with : influx -ssl -host 10.10.3.1 and it did work.
>> >> And if you see the error message from TASK [cloudkitty : Creating >> Cloudkitty influxdb database], ssl is false >> >> fatal: [192.168.1.5 -> 192.168.1.5]: FAILED! => { >> "action": "influxdb_database", >> "changed": false, >> "invocation": { >> "module_args": { >> "database_name": "cloudkitty", >> "hostname": "dashint.cloud.cerist.dz", >> "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", >> "path": "", >> "port": 8086, >> "proxies": {}, >> "retries": 3, >> *"ssl": false,* >> "state": "present", >> "timeout": null, >> "udp_port": 4444, >> "use_udp": false, >> "username": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", >> "validate_certs": true >> } >> }, >> "msg": "('Connection aborted.', RemoteDisconnected('Remote end closed >> connection without response',))" >> } >> >> Could that be the problem? if yes how to force Cloudkitty to enable ssl? >> >> Regards. >> >> >> Virus-free. >> www.avast.com >> >> <#m_2114711239033937821_m_-2160537011768264727_DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2> >> >> Le mar. 19 avr. 2022 ? 07:30, Pierre Riteau a >> ?crit : >> >>> Hello, >>> >>> InfluxDB is configured to only listen on the internal API interface. Can >>> you check the hostname you are using resolves correctly from the cloudkitty >>> host? >>> Inside the influxdb container, you should use `influxdb -host >>> ` with the internal IP of the influxdb host. >>> >>> Also check if the output of `docker logs influxdb` has any logs. >>> >>> Best wishes, >>> Pierre Riteau (priteau) >>> >>> On Tue, 19 Apr 2022 at 01:24, wodel youchi >>> wrote: >>> >>>> Hi, >>>> >>>> I am trying to deploy Cloudkitty, but I get this error message : >>>> >>>> TASK [cloudkitty : Creating Cloudkitty influxdb database] >>>>> ****************************************************** >>>>> task path: >>>>> /home/deployer/kollavenv/xenavenv/share/kolla-ansible/ansible/roles/cloudkitty/tasks/bootstrap.yml:36 >>>> >>>> >>>> fatal: [192.168.1.5 -> 192.168.1.5]: FAILED! 
=> { >>>>> "action": "influxdb_database", >>>>> "changed": false, >>>>> "invocation": { >>>>> "module_args": { >>>>> "database_name": "cloudkitty", >>>>> "hostname": "dashint.cloud.cerist.dz", >>>>> "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", >>>>> "path": "", >>>>> "port": 8086, >>>>> "proxies": {}, >>>>> "retries": 3, >>>>> "ssl": false, >>>>> "state": "present", >>>>> "timeout": null, >>>>> "udp_port": 4444, >>>>> "use_udp": false, >>>>> "username": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", >>>>> "validate_certs": true >>>>> } >>>>> }, >>>>> "msg": "('Connection aborted.', RemoteDisconnected('Remote end >>>>> closed connection without response',))" >>>>> } >>>> >>>> >>>> >>>> On the influxdb container I did this : >>>> >>>>> [root at controllerb ~]# docker ps | grep inf >>>>> 68b3ebfefbec >>>>> 192.168.1.16:4000/openstack.kolla/centos-source-influxdb:xena >>>>> "dumb-init --single-?" 22 minutes ago Up 22 minutes >>>>> influxdb >>>>> [root at controllerb ~]# docker exec -it influxdb /bin/bash >>>>> (influxdb)[influxdb at controllerb /]$ influx >>>>> Failed to connect to http://localhost:8086: Get >>>>> http://localhost:8086/ping: dial tcp [::1]:8086: connect: connection >>>>> refused >>>>> Please check your connection settings and ensure 'influxd' is running. >>>>> (influxdb)[influxdb at controllerb /]$ ps -ef >>>>> UID PID PPID C STIME TTY TIME CMD >>>>> influxdb 1 0 0 Apr18 ? 00:00:00 dumb-init >>>>> --single-child -- kolla_start >>>>> influxdb 7 1 0 Apr18 ? 00:00:01 /usr/bin/influxd >>>>> -config /etc/influxdb/influxdb.conf >>>>> influxdb 45 0 0 00:12 pts/0 00:00:00 /bin/bash >>>>> influxdb 78 45 0 00:12 pts/0 00:00:00 ps -ef >>>>> (influxdb)[influxdb at controllerb /]$ >>>> >>>> >>>> I have no log file for influxdb, the directory is empty. >>>> >>>> Any ideas? >>>> >>>> Regards. >>>> >>> > > -- > Rafael Weing?rtner > -------------- next part -------------- An HTML attachment was scrubbed... 
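A more permanent version of the workaround described earlier in this thread — one possible sketch, not a merged kolla-ansible change — would thread a `cloudkitty_influxdb_use_ssl` variable (a hypothetical name, defaulting to the current plain-HTTP behaviour) through the role instead of hard-coding `ssl: True`. Note that the Jinja expression must be quoted to remain valid YAML:

```yaml
# roles/cloudkitty/defaults/main.yml (hypothetical new variable):
# cloudkitty_influxdb_use_ssl: "no"

# roles/cloudkitty/tasks/bootstrap.yml, mirroring the task quoted above:
- name: Creating Cloudkitty influxdb database
  become: true
  kolla_toolbox:
    module_name: influxdb_database
    module_args:
      hostname: "{{ influxdb_address }}"
      port: "{{ influxdb_http_port }}"
      # Quoted so the Jinja template parses as a YAML string, then cast
      # to a boolean for the influxdb_database module's ssl parameter.
      ssl: "{{ cloudkitty_influxdb_use_ssl | bool }}"
      database_name: "{{ cloudkitty_influxdb_name }}"
  run_once: True
  delegate_to: "{{ groups['cloudkitty-api'][0] }}"
  when: cloudkitty_storage_backend == 'influxdb'
```

Operators could then set the variable in globals.yml whenever haproxy terminates TLS on the InfluxDB frontend, as in this deployment.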
URL: From jonathan.rosser at rd.bbc.co.uk Tue Apr 19 14:23:32 2022 From: jonathan.rosser at rd.bbc.co.uk (Jonathan Rosser) Date: Tue, 19 Apr 2022 15:23:32 +0100 Subject: Re: [openstack-ansible] Nominate Damian Dąbrowski for openstack-ansible core team In-Reply-To: References: Message-ID: <20d3c639-775b-82c0-5222-3381d282f432@rd.bbc.co.uk> +2 Welcome Damian! On 19/04/2022 10:39, Dmitriy Rabotyagov wrote: > Hi OSA Cores! > > I'm happy to nominate Damian Dąbrowski (damiandabrowski) to the core > reviewers team. > > He has been doing a good job lately in reviewing incoming patches, > helping out in IRC and participating in community activities, so I > think he will be a good match for the Core Reviewers group. > > So I call for current Core Reviewers to support this nomination or > raise objections to it until 22nd of April 2022. If no objections are > raised we will add Damian to the team next week. > > -- > Kind regards, > Dmitriy Rabotyagov From Andrew.Bonney at bbc.co.uk Tue Apr 19 14:47:01 2022 From: Andrew.Bonney at bbc.co.uk (Andrew Bonney) Date: Tue, 19 Apr 2022 14:47:01 +0000 Subject: RE: [openstack-ansible] Nominate Damian Dąbrowski for openstack-ansible core team In-Reply-To: <20d3c639-775b-82c0-5222-3381d282f432@rd.bbc.co.uk> References: <20d3c639-775b-82c0-5222-3381d282f432@rd.bbc.co.uk> Message-ID: Sounds good to me! -----Original Message----- From: Jonathan Rosser Sent: 19 April 2022 15:24 To: openstack-discuss at lists.openstack.org Subject: Re: [openstack-ansible] Nominate Damian Dąbrowski for openstack-ansible core team +2 Welcome Damian! On 19/04/2022 10:39, Dmitriy Rabotyagov wrote: > Hi OSA Cores! > > I'm happy to nominate Damian Dąbrowski (damiandabrowski) to the core > reviewers team.
> > He has been doing a good job lately in reviewing incoming patches, > helping out in IRC and participating in community activities, so I > think he will be a good match for the Core Reviewers group. > > So I call for current Core Reviewers to support this nomination or > raise objections to it until 22nd of April 2022. If no objections are > raised we will add Damian to the team next week. > > -- > Kind regards, > Dmitriy Rabotyagov From bsanjeewa at kln.ac.lk Tue Apr 19 07:11:21 2022 From: bsanjeewa at kln.ac.lk (Buddhika S. Godakuru - University of Kelaniya) Date: Tue, 19 Apr 2022 12:41:21 +0530 Subject: [Kolla-ansible][Xena][Ceph-RGW] need help configuring Ceph RGW for Swift and S3 access In-Reply-To: References: Message-ID: Dear Wodel, I think the default endpoint for Swift when using Ceph RGW is /swift/v1 (unless you have changed it in Ceph), so your endpoints should be | 4082b4acf8bc4e4c9efc6e2d0e293724 | RegionOne | swift | object-store | True | admin | https://dashint.cloud.example.com:6780/swift/v1/AUTH_%(project_id)s | | b13a2f53e13e4650b4efdb8184eb0211 | RegionOne | swift | object-store | True | internal | https://dashint.cloud.example.com:6780/swift/v1/AUTH_%(project_id)s | | f85b36ff9a2b49bc9eaadf1aafdee28c | RegionOne | swift | object-store | True | public | https://dash.cloud.example.com:6780/swift/v1/AUTH_%(project_id)s | See https://docs.ceph.com/en/latest/radosgw/keystone/#cross-project-tenant-access On Mon, 18 Apr 2022 at 23:52, wodel youchi wrote: > Hi, > I am having trouble configuring OpenStack to use Ceph RGW as the object > store backend for Swift and S3. > > My setup is an HCI one; I have 3 controllers which are also my Ceph mgrs, mons > and rgws, and 9 compute/storage servers (osds). > Xena is deployed with Ceph Pacific. > > The Ceph public network is a private network on VLAN 10 with 10.10.1.0/24 as a > subnet.
> > Here is a snippet from my globals.yml : > >> --- >> kolla_base_distro: "centos" >> kolla_install_type: "source" >> openstack_release: "xena" >> kolla_internal_vip_address: "10.10.3.1" >> kolla_internal_fqdn: "dashint.cloud.example.com" >> kolla_external_vip_address: "x.x.x.x" >> kolla_external_fqdn: "dash.cloud.example.com " >> docker_registry: 192.168.1.16:4000 >> network_interface: "bond0" >> kolla_external_vip_interface: "bond1" >> api_interface: "bond1.30" >> *storage_interface: "bond1.10" <---------------- VLAN10 (public >> ceph network)* >> tunnel_interface: "bond1.40" >> dns_interface: "bond1" >> octavia_network_interface: "bond1.301" >> neutron_external_interface: "bond2" >> neutron_plugin_agent: "openvswitch" >> keepalived_virtual_router_id: "51" >> kolla_enable_tls_internal: "yes" >> kolla_enable_tls_external: "yes" >> kolla_certificates_dir: "{{ node_config }}/certificates" >> kolla_external_fqdn_cert: "{{ kolla_certificates_dir }}/haproxy.pem" >> kolla_internal_fqdn_cert: "{{ kolla_certificates_dir >> }}/haproxy-internal.pem" >> kolla_admin_openrc_cacert: "{{ kolla_certificates_dir }}/ca.pem" >> kolla_copy_ca_into_containers: "yes" >> kolla_enable_tls_backend: "yes" >> kolla_verify_tls_backend: "no" >> kolla_tls_backend_cert: "{{ kolla_certificates_dir }}/backend-cert.pem" >> kolla_tls_backend_key: "{{ kolla_certificates_dir }}/backend-key.pem" >> enable_openstack_core: "yes" >> enable_hacluster: "yes" >> enable_haproxy: "yes" >> enable_aodh: "yes" >> enable_barbican: "yes" >> enable_ceilometer: "yes" >> enable_central_logging: "yes" >> >> *enable_ceph_rgw: "yes"enable_ceph_rgw_loadbalancer: "{{ enable_ceph_rgw >> | bool }}"* >> enable_cinder: "yes" >> enable_cinder_backup: "yes" >> enable_collectd: "yes" >> enable_designate: "yes" >> enable_elasticsearch_curator: "yes" >> enable_freezer: "no" >> enable_gnocchi: "yes" >> enable_gnocchi_statsd: "yes" >> enable_magnum: "yes" >> enable_manila: "yes" >> enable_manila_backend_cephfs_native: "yes" >> 
enable_mariabackup: "yes" >> enable_masakari: "yes" >> enable_neutron_vpnaas: "yes" >> enable_neutron_qos: "yes" >> enable_neutron_agent_ha: "yes" >> enable_neutron_provider_networks: "yes" >> enable_neutron_segments: "yes" >> enable_octavia: "yes" >> enable_trove: "yes" >> external_ceph_cephx_enabled: "yes" >> ceph_glance_keyring: "ceph.client.glance.keyring" >> ceph_glance_user: "glance" >> ceph_glance_pool_name: "images" >> ceph_cinder_keyring: "ceph.client.cinder.keyring" >> ceph_cinder_user: "cinder" >> ceph_cinder_pool_name: "volumes" >> ceph_cinder_backup_keyring: "ceph.client.cinder-backup.keyring" >> ceph_cinder_backup_user: "cinder-backup" >> ceph_cinder_backup_pool_name: "backups" >> ceph_nova_keyring: "{{ ceph_cinder_keyring }}" >> ceph_nova_user: "cinder" >> ceph_nova_pool_name: "vms" >> ceph_gnocchi_keyring: "ceph.client.gnocchi.keyring" >> ceph_gnocchi_user: "gnocchi" >> ceph_gnocchi_pool_name: "metrics" >> ceph_manila_keyring: "ceph.client.manila.keyring" >> ceph_manila_user: "manila" >> glance_backend_ceph: "yes" >> glance_backend_file: "no" >> gnocchi_backend_storage: "ceph" >> cinder_backend_ceph: "yes" >> cinder_backup_driver: "ceph" >> cloudkitty_collector_backend: "gnocchi" >> designate_ns_record: "cloud.example.com " >> nova_backend_ceph: "yes" >> nova_compute_virt_type: "kvm" >> octavia_auto_configure: yes >> octavia_amp_flavor: >> name: "amphora" >> is_public: no >> vcpus: 1 >> ram: 1024 >> disk: 5 >> octavia_amp_network: >> name: lb-mgmt-net >> provider_network_type: vlan >> provider_segmentation_id: 301 >> provider_physical_network: physnet1 >> external: false >> shared: false >> subnet: >> name: lb-mgmt-subnet >> cidr: "10.7.0.0/16" >> allocation_pool_start: "10.7.0.50" >> allocation_pool_end: "10.7.255.200" >> no_gateway_ip: yes >> enable_dhcp: yes >> mtu: 9000 >> octavia_amp_network_cidr: 10.10.7.0/24 >> octavia_amp_image_tag: "amphora" >> octavia_certs_country: XZ >> octavia_certs_state: Gotham >> octavia_certs_organization: WAYNE >> 
octavia_certs_organizational_unit: IT >> horizon_keystone_multidomain: true >> elasticsearch_curator_dry_run: "no" >> enable_cluster_user_trust: true >> >> >> >> >> >> >> >> >> >> >> >> *ceph_rgw_hosts: - host: controllera ip: 10.10.1.5 >> port: 8080 - host: controllerb ip: 10.10.1.9 >> port: 8080 - host: controllerc ip: 10.10.1.13 >> port: 8080 >> ceph_rgw_swift_account_in_url: true >> ceph_rgw_swift_compatibility: >> true* > > > And here is my ceph all.yml file > >> --- >> dummy: >> ceph_release_num: 16 >> cluster: ceph >> configure_firewall: False >> *monitor_interface: bond1.10* >> monitor_address_block: 10.10.1.0/24 >> is_hci: true >> hci_safety_factor: 0.2 >> osd_memory_target: 4294967296 >> *public_network: 10.10.1.0/24 * >> cluster_network: 10.10.2.0/24 >> *radosgw_interface: "{{ monitor_interface }}"* >> *radosgw_address_block: 10.10.1.0/24 * >> nfs_file_gw: true >> nfs_obj_gw: true >> ceph_docker_image: "ceph/daemon" >> ceph_docker_image_tag: latest-pacific >> ceph_docker_registry: 192.168.1.16:4000 >> containerized_deployment: True >> openstack_config: true >> openstack_glance_pool: >> name: "images" >> pg_autoscale_mode: False >> application: "rbd" >> pg_num: 128 >> pgp_num: 128 >> target_size_ratio: 5.00 >> rule_name: "SSD" >> openstack_cinder_pool: >> name: "volumes" >> pg_autoscale_mode: False >> application: "rbd" >> pg_num: 1024 >> pgp_num: 1024 >> target_size_ratio: 42.80 >> rule_name: "SSD" >> openstack_nova_pool: >> name: "vms" >> pg_autoscale_mode: False >> application: "rbd" >> pg_num: 256 >> pgp_num: 256 >> target_size_ratio: 10.00 >> rule_name: "SSD" >> openstack_cinder_backup_pool: >> name: "backups" >> pg_autoscale_mode: False >> application: "rbd" >> pg_num: 512 >> pgp_num: 512 >> target_size_ratio: 18.00 >> rule_name: "SSD" >> openstack_gnocchi_pool: >> name: "metrics" >> pg_autoscale_mode: False >> application: "rbd" >> pg_num: 32 >> pgp_num: 32 >> target_size_ratio: 0.10 >> rule_name: "SSD" >> openstack_cephfs_data_pool: >> name:
"cephfs_data" >> pg_autoscale_mode: False >> application: "cephfs" >> pg_num: 256 >> pgp_num: 256 >> target_size_ratio: 10.00 >> rule_name: "SSD" >> openstack_cephfs_metadata_pool: >> name: "cephfs_metadata" >> pg_autoscale_mode: False >> application: "cephfs" >> pg_num: 32 >> pgp_num: 32 >> target_size_ratio: 0.10 >> rule_name: "SSD" >> openstack_pools: >> - "{{ openstack_glance_pool }}" >> - "{{ openstack_cinder_pool }}" >> - "{{ openstack_nova_pool }}" >> - "{{ openstack_cinder_backup_pool }}" >> - "{{ openstack_gnocchi_pool }}" >> - "{{ openstack_cephfs_data_pool }}" >> - "{{ openstack_cephfs_metadata_pool }}" >> openstack_keys: >> - { name: client.glance, caps: { mon: "profile rbd", osd: "profile rbd >> pool={{ openstack_cinder_pool.name }}, profile rbd pool={{ >> openstack_glance_pool.name }}"}, mode: "0600" } >> - { name: client.cinder, caps: { mon: "profile rbd", osd: "profile rbd >> pool={{ openstack_cinder_pool.name }}, profile rbd pool={{ >> openstack_nova_pool.name }}, profile rbd pool={{ >> openstack_glance_pool.name }}"}, mode: "0600" } >> - { name: client.cinder-backup, caps: { mon: "profile rbd", osd: >> "profile rbd pool={{ openstack_cinder_backup_pool.name }}"}, mode: >> "0600" } >> - { name: client.gnocchi, caps: { mon: "profile rbd", osd: "profile rbd >> pool={{ openstack_gnocchi_pool.name }}"}, mode: "0600", } >> - { name: client.openstack, caps: { mon: "profile rbd", osd: "profile >> rbd pool={{ openstack_glance_pool.name }}, profile rbd pool={{ >> openstack_nova_pool.name }}, profile rbd pool={{ >> openstack_cinder_pool.name }}, profile rbd pool={{ >> openstack_cinder_backup_pool.name }}"}, mode: "0600" } >> dashboard_enabled: True >> dashboard_protocol: https >> dashboard_port: 8443 >> dashboard_network: "192.168.1.0/24" >> dashboard_admin_user: admin >> dashboard_admin_user_ro: true >> dashboard_admin_password: *********** >> dashboard_crt: '/home/deployer/work/site-central/chaininv.crt' >> dashboard_key: 
'/home/deployer/work/site-central/cloud_example.com.priv' >> dashboard_grafana_api_no_ssl_verify: true >> dashboard_rgw_api_user_id: admin >> dashboard_rgw_api_no_ssl_verify: true >> dashboard_frontend_vip: '192.168.1.5' >> node_exporter_container_image: " >> 192.168.1.16:4000/prom/node-exporter:v0.17.0" >> grafana_admin_user: admin >> grafana_admin_password: ********* >> grafana_crt: '/home/deployer/work/site-central/chaininv.crt' >> grafana_key: '/home/deployer/work/site-central/cloud_example.com.priv' >> grafana_server_fqdn: 'grafanasrv.cloud.example.com' >> grafana_container_image: "192.168.1.16:4000/grafana/grafana:6.7.4" >> grafana_dashboard_version: pacific >> prometheus_container_image: "192.168.1.16:4000/prom/prometheus:v2.7.2" >> alertmanager_container_image: " >> 192.168.1.16:4000/prom/alertmanager:v0.16.2" >> > > And my rgws.yml > >> --- >> dummy: >> copy_admin_key: true >> rgw_create_pools: >> "{{ rgw_zone }}.rgw.buckets.data": >> pg_num: 256 >> pgp_num: 256 >> size: 3 >> type: replicated >> pg_autoscale_mode: False >> rule_id: 1 >> "{{ rgw_zone }}.rgw.buckets.index": >> pg_num: 64 >> pgp_num: 64 >> size: 3 >> type: replicated >> pg_autoscale_mode: False >> rule_id: 1 >> "{{ rgw_zone }}.rgw.meta": >> pg_num: 32 >> pgp_num: 32 >> size: 3 >> type: replicated >> pg_autoscale_mode: False >> rule_id: 1 >> "{{ rgw_zone }}.rgw.log": >> pg_num: 32 >> pgp_num: 32 >> size: 3 >> type: replicated >> pg_autoscale_mode: False >> rule_id: 1 >> "{{ rgw_zone }}.rgw.control": >> pg_num: 32 >> pgp_num: 32 >> size: 3 >> type: replicated >> pg_autoscale_mode: False >> rule_id: 1 >> > > The ceph_rgw user was created by kolla > (xenavenv) [deployer at rscdeployer ~]$ openstack user list | grep ceph > | 3262aa7e03ab49c8a5710dfe3b16a136 | ceph_rgw > > This is my ceph.conf from one of my controllers : > >> [root at controllera ~]# cat /etc/ceph/ceph.conf >> [client.rgw.controllera.rgw0] >> host = controllera >> rgw_keystone_url = https://dash.cloud.example.com:5000 >> 
##Authentication using username, password and tenant. Preferred. >> rgw_keystone_verify_ssl = false >> rgw_keystone_api_version = 3 >> rgw_keystone_admin_user = ceph_rgw >> rgw_keystone_admin_password = cos2Jcnpnw9BhGwvPm************************** >> rgw_keystone_admin_domain = Default >> rgw_keystone_admin_project = service >> rgw_s3_auth_use_keystone = true >> rgw_keystone_accepted_roles = admin >> rgw_keystone_implicit_tenants = true >> rgw_swift_account_in_url = true >> keyring = /var/lib/ceph/radosgw/ceph-rgw.controllera.rgw0/keyring >> log file = /var/log/ceph/ceph-rgw-controllera.rgw0.log >> rgw frontends = beast endpoint=10.10.1.5:8080 >> rgw thread pool size = 512 >> #For Debug >> debug ms = 1 >> debug rgw = 20 >> >> >> # Please do not change this file directly since it is managed by Ansible >> and will be overwritten >> [global] >> cluster network = 10.10.2.0/24 >> fsid = da094354-6ade-415a-a424-************ >> mon host = [v2:10.10.1.5:3300,v1:10.10.1.5:6789],[v2:10.10.1.9:3300,v1: >> 10.10.1.9:6789],[v2:10.10.1.13:3300,v1:10.10.1.13:6789] >> mon initial members = controllera,controllerb,controllerc >> osd pool default crush rule = 1 >> *public network = 10.10.1.0/24 * >> > > > Here are my swift endpoints > (xenavenv) [deployer at rscdeployer ~]$ openstack endpoint list | grep swift > | 4082b4acf8bc4e4c9efc6e2d0e293724 | RegionOne | swift | > object-store | True | admin | > https://dashint.cloud.example.com:6780/v1/AUTH_%(project_id)s | > | b13a2f53e13e4650b4efdb8184eb0211 | RegionOne | swift | > object-store | True | internal | > https://dashint.cloud.example.com:6780/v1/AUTH_%(project_id)s | > | f85b36ff9a2b49bc9eaadf1aafdee28c | RegionOne | swift | > object-store | True | public | > https://dash.cloud.example.com:6780/v1/AUTH_%(project_id)s | > > When I connect to Horizon -> Project -> Object Store -> Containers I get > theses errors : > > - Unable to get the swift container listing > - Unable to fetch the policy details. 
> > I cannot create a new container from the WebUI, the Storage policy > parameter is empty. > If I try to create a new container from the CLI, I get this : > (xenavenv) [deployer at rscdeployer ~]$ source cephrgw-openrc.sh > (xenavenv) [deployer at rscdeployer ~]$ openstack container create demo -v > START with options: container create demo -v > command: container create -> > openstackclient.object.v1.container.CreateContainer (auth=True) > Using auth plugin: password > Not Found (HTTP 404) > END return value: 1 > > > This is the log from RGW service when I execute the above command : > >> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 CONTENT_LENGTH=0 >> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 HTTP_ACCEPT=*/* >> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 HTTP_ACCEPT_ENCODING=gzip, >> deflate >> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 HTTP_HOST= >> dashint.cloud.example.com:6780 >> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 >> HTTP_USER_AGENT=openstacksdk/0.59.0 keystoneauth1/4.4.0 >> python-requests/2.26.0 CPython/3.8.8 >> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 HTTP_VERSION=1.1 >> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 >> HTTP_X_AUTH_TOKEN=gAAAAABiXUrjDFNzXx03mt1lbpUiCqNND1HACspSfg6h_TMxKYND5Hb9BO3FxH0a7CYoBXgRJywGszlK8cl-7zbUNRjHmxgIzmyh-CrWyGv793ZLOAmT_XShcrIKThjIIH3gTxYoX1TXwOKbsvMuZnI5EKKsol2y2MhcqPLeLGc28_AwoOr_b80 >> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 >> HTTP_X_FORWARDED_FOR=10.10.3.16 >> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 HTTP_X_FORWARDED_PROTO=https >> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 REMOTE_ADDR=10.10.1.13 >> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 REQUEST_METHOD=PUT >> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 >> REQUEST_URI=/v1/AUTH_971efa4cb18f42f7a405342072c39c9d/demo >> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 >> SCRIPT_URI=/v1/AUTH_971efa4cb18f42f7a405342072c39c9d/demo >> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 SERVER_PORT=8080 >> 2022-04-18T12:26:27.995+0100 7f22e07a9700 1 
====== starting new request >> req=0x7f23221aa620 ===== >> 2022-04-18T12:26:27.995+0100 7f22e07a9700 2 req 728157015944164764 >> 0.000000000s initializing for trans_id = >> tx000000a1aeef2b40f759c-00625d4ae3-4b389-default >> 2022-04-18T12:26:27.995+0100 7f22e07a9700 10 req 728157015944164764 >> 0.000000000s rgw api priority: s3=8 s3website=7 >> 2022-04-18T12:26:27.995+0100 7f22e07a9700 10 req 728157015944164764 >> 0.000000000s host=dashint.cloud.example.com >> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 req 728157015944164764 >> 0.000000000s subdomain= domain= in_hosted_domain=0 >> in_hosted_domain_s3website=0 >> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 req 728157015944164764 >> 0.000000000s final domain/bucket subdomain= domain= in_hosted_domain=0 >> in_hosted_domain_s3website=0 s->info.domain= >> s->info.request_uri=/v1/AUTH_971efa4cb18f42f7a405342072c39c9d/demo >> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 req 728157015944164764 >> 0.000000000s get_handler handler=22RGWHandler_REST_Obj_S3 >> 2022-04-18T12:26:27.995+0100 7f22e07a9700 10 req 728157015944164764 >> 0.000000000s handler=22RGWHandler_REST_Obj_S3 >> 2022-04-18T12:26:27.995+0100 7f22e07a9700 2 req 728157015944164764 >> 0.000000000s getting op 1 >> 2022-04-18T12:26:27.995+0100 7f22e07a9700 1 -- 10.10.1.13:0/2715436964 >> --> [v2:10.10.1.7:6801/4815,v1:10.10.1.7:6803/4815] -- >> osd_op(unknown.0.0:1516 12.3 12:c14cb721:::script.prerequest.:head [call >> version.read in=11b,getxattrs,stat] snapc 0=[] >> ondisk+read+known_if_redirected e1182) v8 -- 0x56055eb2c400 con >> 0x56055e53b000 >> 2022-04-18T12:26:27.996+0100 7f230d002700 1 -- 10.10.1.13:0/2715436964 >> <== osd.23 v2:10.10.1.7:6801/4815 22 ==== osd_op_reply(1516 >> script.prerequest. 
[call,getxattrs,stat] v0'0 uv0 ondisk = -2 ((2) No such >> file or directory)) v8 ==== 246+0+0 (crc 0 0 0) 0x56055ea18b40 con >> 0x56055e53b000 >> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 10 req 728157015944164764 >> 0.001000002s s3:put_obj scheduling with throttler client=2 cost=1 >> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 10 req 728157015944164764 >> 0.001000002s s3:put_obj op=21RGWPutObj_ObjStore_S3 >> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 2 req 728157015944164764 >> 0.001000002s s3:put_obj verifying requester >> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 20 req 728157015944164764 >> 0.001000002s s3:put_obj rgw::auth::StrategyRegistry::s3_main_strategy_t: >> trying rgw::auth::s3::AWSAuthStrategy >> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 20 req 728157015944164764 >> 0.001000002s s3:put_obj rgw::auth::s3::AWSAuthStrategy: trying >> rgw::auth::s3::S3AnonymousEngine >> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 20 req 728157015944164764 >> 0.001000002s s3:put_obj rgw::auth::s3::S3AnonymousEngine granted access >> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 20 req 728157015944164764 >> 0.001000002s s3:put_obj rgw::auth::s3::AWSAuthStrategy granted access >> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 2 req 728157015944164764 >> 0.001000002s s3:put_obj normalizing buckets and tenants >> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 10 req 728157015944164764 >> 0.001000002s s->object=AUTH_971efa4cb18f42f7a405342072c39c9d/demo >> s->bucket=v1 >> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 2 req 728157015944164764 >> 0.001000002s s3:put_obj init permissions >> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 20 req 728157015944164764 >> 0.001000002s s3:put_obj get_system_obj_state: rctx=0x7f23221a9000 >> obj=default.rgw.meta:root:v1 state=0x56055ea8c520 s->prefetch_data=0 >> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 10 req 728157015944164764 >> 0.001000002s s3:put_obj cache get: name=default.rgw.meta+root+v1 : miss >> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 1 -- 
10.10.1.13:0/2715436964 >> --> [v2:10.10.1.3:6802/4933,v1:10.10.1.3:6806/4933] -- >> osd_op(unknown.0.0:1517 11.b 11:d05f7b30:root::v1:head [call version.read >> in=11b,getxattrs,stat] snapc 0=[] ondisk+read+known_if_redirected e1182) v8 >> -- 0x56055eb2cc00 con 0x56055e585000 >> 2022-04-18T12:26:27.997+0100 7f230c801700 1 -- 10.10.1.13:0/2715436964 >> <== osd.3 v2:10.10.1.3:6802/4933 9 ==== osd_op_reply(1517 v1 >> [call,getxattrs,stat] v0'0 uv0 ondisk = -2 ((2) No such file or directory)) >> v8 ==== 230+0+0 (crc 0 0 0) 0x56055e39db00 con 0x56055e585000 >> 2022-04-18T12:26:27.997+0100 7f22dd7a3700 10 req 728157015944164764 >> 0.002000004s s3:put_obj cache put: name=default.rgw.meta+root+v1 >> info.flags=0x0 >> 2022-04-18T12:26:27.997+0100 7f22dd7a3700 10 req 728157015944164764 >> 0.002000004s s3:put_obj adding default.rgw.meta+root+v1 to cache LRU end >> 2022-04-18T12:26:27.997+0100 7f22dd7a3700 10 req 728157015944164764 >> 0.002000004s s3:put_obj init_permissions on failed, ret=-2002 >> 2022-04-18T12:26:27.997+0100 7f22dd7a3700 1 req 728157015944164764 >> 0.002000004s op->ERRORHANDLER: err_no=-2002 new_err_no=-2002 >> 2022-04-18T12:26:27.997+0100 7f22dbfa0700 1 -- 10.10.1.13:0/2715436964 >> --> [v2:10.10.1.8:6804/4817,v1:10.10.1.8:6805/4817] -- >> osd_op(unknown.0.0:1518 12.1f 12:fb11263f:::script.postrequest.:head [call >> version.read in=11b,getxattrs,stat] snapc 0=[] >> ondisk+read+known_if_redirected e1182) v8 -- 0x56055eb2d000 con >> 0x56055e94c800 >> 2022-04-18T12:26:27.998+0100 7f230d002700 1 -- 10.10.1.13:0/2715436964 >> <== osd.9 v2:10.10.1.8:6804/4817 10 ==== osd_op_reply(1518 >> script.postrequest. 
[call,getxattrs,stat] v0'0 uv0 ondisk = -2 ((2) No such
>> file or directory)) v8 ==== 247+0+0 (crc 0 0 0) 0x56055ea18b40 con
>> 0x56055e94c800
>> 2022-04-18T12:26:27.998+0100 7f22d8f9a700 2 req 728157015944164764
>> 0.003000006s s3:put_obj op status=0
>> 2022-04-18T12:26:27.998+0100 7f22d8f9a700 2 req 728157015944164764
>> 0.003000006s s3:put_obj http status=404
>> 2022-04-18T12:26:27.998+0100 7f22d8f9a700 1 ====== req done
>> req=0x7f23221aa620 op status=0 http_status=404 latency=0.003000006s ======
>> 2022-04-18T12:26:27.998+0100 7f22d8f9a700 1 beast: 0x7f23221aa620:
>> 10.10.1.13 - anonymous [18/Apr/2022:12:26:27.995 +0100] "PUT
>> /v1/AUTH_971efa4cb18f42f7a405342072c39c9d/demo HTTP/1.1" 404 214 -
>> "openstacksdk/0.59.0 keystoneauth1/4.4.0 python-requests/2.26.0
>> CPython/3.8.8" - latency=0.003000006s
>>
>
> Could you help please.
>
> Regards.
>
-- 
Buddhika Sanjeewa Godakuru

Systems Analyst/Programmer
Deputy Webmaster / University of Kelaniya

Information and Communication Technology Centre (ICTC)
University of Kelaniya, Sri Lanka,
Kelaniya,
Sri Lanka.

Mobile : (+94) 071 5696981
Office : (+94) 011 2903420 / 2903424

-- 
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
University of Kelaniya Sri Lanka, accepts no liability for the content of
this email, or for the consequences of any actions taken on the basis of
the information provided, unless that information is subsequently confirmed
in writing. If you are not the intended recipient, this email and/or any
information it contains should not be copied, disclosed, retained or used
by you or any other party and the email and all its contents should be
promptly deleted fully from our system and the sender informed.

E-mail transmission cannot be guaranteed to be secure or error-free as
information could be intercepted, corrupted, lost, destroyed, arrive late
or incomplete.
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From wodel.youchi at gmail.com  Tue Apr 19 10:09:03 2022
From: wodel.youchi at gmail.com (wodel youchi)
Date: Tue, 19 Apr 2022 11:09:03 +0100
Subject: [Kolla-ansible][Xena][Ceph-RGW] need help configuring Ceph RGW
 for Swift and S3 access
In-Reply-To: 
References: 
Message-ID: 

Hi,
Thanks.

The endpoints were created by Kolla-ansible upon deployment.

I did configure kolla-ansible to enable cross project tenant access by
using :
*ceph_rgw_swift_account_in_url: true*

And I did add the *rgw_swift_account_in_url = true* in ceph.conf in the
Rados servers. But the endpoints were created by kolla.

I will modify them and try again.

Regards.

On Tue, Apr 19, 2022 at 08:12, Buddhika S. Godakuru - University of
Kelaniya wrote:

> Dear Wodel,
> I think that default endpoint for swift when using cephrgw is /swift/v1
> (unless you have changed it in ceph),
> so your endpoints should be
> | 4082b4acf8bc4e4c9efc6e2d0e293724 | RegionOne | swift |
> object-store | True | admin |
> https://dashint.cloud.example.com:6780/swift/v1/AUTH_%(project_id)s |
> | b13a2f53e13e4650b4efdb8184eb0211 | RegionOne | swift |
> object-store | True | internal |
> https://dashint.cloud.example.com:6780/swift/v1/AUTH_%(project_id)s |
> | f85b36ff9a2b49bc9eaadf1aafdee28c | RegionOne | swift |
> object-store | True | public |
> https://dash.cloud.example.com:6780/swift/v1/AUTH_%(project_id)s |
>
>
> See
> https://docs.ceph.com/en/latest/radosgw/keystone/#cross-project-tenant-access
>
> On Mon, 18 Apr 2022 at 23:52, wodel youchi wrote:
>
>> Hi,
>> I am having trouble configuring Openstack to use Ceph RGW as the Object
>> store backend for Swift and S3.
>>
>> My setup is an HCI, I have 3 controllers which are also my ceph mgrs,
>> mons and rgws and 9 compute/storage servers (osds).
>> Xena is deployed with Ceph Pacific.
>> >> Ceph public network is a private network on vlan10 with 10.10.1.0/24 as >> a subnet. >> >> Here is a snippet from my globals.yml : >> >>> --- >>> kolla_base_distro: "centos" >>> kolla_install_type: "source" >>> openstack_release: "xena" >>> kolla_internal_vip_address: "10.10.3.1" >>> kolla_internal_fqdn: "dashint.cloud.example.com" >>> kolla_external_vip_address: "x.x.x.x" >>> kolla_external_fqdn: "dash.cloud.example.com " >>> docker_registry: 192.168.1.16:4000 >>> network_interface: "bond0" >>> kolla_external_vip_interface: "bond1" >>> api_interface: "bond1.30" >>> *storage_interface: "bond1.10" <---------------- VLAN10 (public >>> ceph network)* >>> tunnel_interface: "bond1.40" >>> dns_interface: "bond1" >>> octavia_network_interface: "bond1.301" >>> neutron_external_interface: "bond2" >>> neutron_plugin_agent: "openvswitch" >>> keepalived_virtual_router_id: "51" >>> kolla_enable_tls_internal: "yes" >>> kolla_enable_tls_external: "yes" >>> kolla_certificates_dir: "{{ node_config }}/certificates" >>> kolla_external_fqdn_cert: "{{ kolla_certificates_dir }}/haproxy.pem" >>> kolla_internal_fqdn_cert: "{{ kolla_certificates_dir >>> }}/haproxy-internal.pem" >>> kolla_admin_openrc_cacert: "{{ kolla_certificates_dir }}/ca.pem" >>> kolla_copy_ca_into_containers: "yes" >>> kolla_enable_tls_backend: "yes" >>> kolla_verify_tls_backend: "no" >>> kolla_tls_backend_cert: "{{ kolla_certificates_dir }}/backend-cert.pem" >>> kolla_tls_backend_key: "{{ kolla_certificates_dir }}/backend-key.pem" >>> enable_openstack_core: "yes" >>> enable_hacluster: "yes" >>> enable_haproxy: "yes" >>> enable_aodh: "yes" >>> enable_barbican: "yes" >>> enable_ceilometer: "yes" >>> enable_central_logging: "yes" >>> >>> *enable_ceph_rgw: "yes"enable_ceph_rgw_loadbalancer: "{{ enable_ceph_rgw >>> | bool }}"* >>> enable_cinder: "yes" >>> enable_cinder_backup: "yes" >>> enable_collectd: "yes" >>> enable_designate: "yes" >>> enable_elasticsearch_curator: "yes" >>> enable_freezer: "no" >>> 
enable_gnocchi: "yes" >>> enable_gnocchi_statsd: "yes" >>> enable_magnum: "yes" >>> enable_manila: "yes" >>> enable_manila_backend_cephfs_native: "yes" >>> enable_mariabackup: "yes" >>> enable_masakari: "yes" >>> enable_neutron_vpnaas: "yes" >>> enable_neutron_qos: "yes" >>> enable_neutron_agent_ha: "yes" >>> enable_neutron_provider_networks: "yes" >>> enable_neutron_segments: "yes" >>> enable_octavia: "yes" >>> enable_trove: "yes" >>> external_ceph_cephx_enabled: "yes" >>> ceph_glance_keyring: "ceph.client.glance.keyring" >>> ceph_glance_user: "glance" >>> ceph_glance_pool_name: "images" >>> ceph_cinder_keyring: "ceph.client.cinder.keyring" >>> ceph_cinder_user: "cinder" >>> ceph_cinder_pool_name: "volumes" >>> ceph_cinder_backup_keyring: "ceph.client.cinder-backup.keyring" >>> ceph_cinder_backup_user: "cinder-backup" >>> ceph_cinder_backup_pool_name: "backups" >>> ceph_nova_keyring: "{{ ceph_cinder_keyring }}" >>> ceph_nova_user: "cinder" >>> ceph_nova_pool_name: "vms" >>> ceph_gnocchi_keyring: "ceph.client.gnocchi.keyring" >>> ceph_gnocchi_user: "gnocchi" >>> ceph_gnocchi_pool_name: "metrics" >>> ceph_manila_keyring: "ceph.client.manila.keyring" >>> ceph_manila_user: "manila" >>> glance_backend_ceph: "yes" >>> glance_backend_file: "no" >>> gnocchi_backend_storage: "ceph" >>> cinder_backend_ceph: "yes" >>> cinder_backup_driver: "ceph" >>> cloudkitty_collector_backend: "gnocchi" >>> designate_ns_record: "cloud.example.com " >>> nova_backend_ceph: "yes" >>> nova_compute_virt_type: "kvm" >>> octavia_auto_configure: yes >>> octavia_amp_flavor: >>> name: "amphora" >>> is_public: no >>> vcpus: 1 >>> ram: 1024 >>> disk: 5 >>> octavia_amp_network: >>> name: lb-mgmt-net >>> provider_network_type: vlan >>> provider_segmentation_id: 301 >>> provider_physical_network: physnet1 >>> external: false >>> shared: false >>> subnet: >>> name: lb-mgmt-subnet >>> cidr: "10.7.0.0/16" >>> allocation_pool_start: "10.7.0.50" >>> allocation_pool_end: "10.7.255.200" >>> no_gateway_ip: yes 
>>> enable_dhcp: yes >>> mtu: 9000 >>> octavia_amp_network_cidr: 10.10.7.0/24 >>> octavia_amp_image_tag: "amphora" >>> octavia_certs_country: XZ >>> octavia_certs_state: Gotham >>> octavia_certs_organization: WAYNE >>> octavia_certs_organizational_unit: IT >>> horizon_keystone_multidomain: true >>> elasticsearch_curator_dry_run: "no" >>> enable_cluster_user_trust: true >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> *ceph_rgw_hosts: - host: controllera ip: 10.10.1.5 >>> port: 8080 - host: controllerb ip: 10.10.1.9 >>> port: 8080 - host: controllerc ip: 10.10.1.13 >>> port: 8080ceph_rgw_swift_account_in_url: trueceph_rgw_swift_compatibility: >>> true* >> >> >> >> And Here is my ceph all.yml file >> >>> --- >>> dummy: >>> ceph_release_num: 16 >>> cluster: ceph >>> configure_firewall: False >>> *monitor_interface: bond1.10* >>> monitor_address_block: 10.10.1.0/24 >>> is_hci: true >>> hci_safety_factor: 0.2 >>> osd_memory_target: 4294967296 >>> *public_network: 10.10.1.0/24 * >>> cluster_network: 10.10.2.0/24 >>> *radosgw_interface: "{{ monitor_interface }}"* >>> *radosgw_address_block: 10.10.1.0/24 * >>> nfs_file_gw: true >>> nfs_obj_gw: true >>> ceph_docker_image: "ceph/daemon" >>> ceph_docker_image_tag: latest-pacific >>> ceph_docker_registry: 192.168.1.16:4000 >>> containerized_deployment: True >>> openstack_config: true >>> openstack_glance_pool: >>> name: "images" >>> pg_autoscale_mode: False >>> application: "rbd" >>> pg_num: 128 >>> pgp_num: 128 >>> target_size_ratio: 5.00 >>> rule_name: "SSD" >>> openstack_cinder_pool: >>> name: "volumes" >>> pg_autoscale_mode: False >>> application: "rbd" >>> pg_num: 1024 >>> pgp_num: 1024 >>> target_size_ratio: 42.80 >>> rule_name: "SSD" >>> openstack_nova_pool: >>> name: "vms" >>> pg_autoscale_mode: False >>> application: "rbd" >>> pg_num: 256 >>> pgp_num: 256 >>> target_size_ratio: 10.00 >>> rule_name: "SSD" >>> openstack_cinder_backup_pool: >>> name: "backups" >>> pg_autoscale_mode: False >>> application: "rbd" >>> 
pg_num: 512 >>> pgp_num: 512 >>> target_size_ratio: 18.00 >>> rule_name: "SSD" >>> openstack_gnocchi_pool: >>> name: "metrics" >>> pg_autoscale_mode: False >>> application: "rbd" >>> pg_num: 32 >>> pgp_num: 32 >>> target_size_ratio: 0.10 >>> rule_name: "SSD" >>> openstack_cephfs_data_pool: >>> name: "cephfs_data" >>> pg_autoscale_mode: False >>> application: "cephfs" >>> pg_num: 256 >>> pgp_num: 256 >>> target_size_ratio: 10.00 >>> rule_name: "SSD" >>> openstack_cephfs_metadata_pool: >>> name: "cephfs_metadata" >>> pg_autoscale_mode: False >>> application: "cephfs" >>> pg_num: 32 >>> pgp_num: 32 >>> target_size_ratio: 0.10 >>> rule_name: "SSD" >>> openstack_pools: >>> - "{{ openstack_glance_pool }}" >>> - "{{ openstack_cinder_pool }}" >>> - "{{ openstack_nova_pool }}" >>> - "{{ openstack_cinder_backup_pool }}" >>> - "{{ openstack_gnocchi_pool }}" >>> - "{{ openstack_cephfs_data_pool }}" >>> - "{{ openstack_cephfs_metadata_pool }}" >>> openstack_keys: >>> - { name: client.glance, caps: { mon: "profile rbd", osd: "profile rbd >>> pool={{ openstack_cinder_pool.name }}, profile rbd pool={{ >>> openstack_glance_pool.name }}"}, mode: "0600" } >>> - { name: client.cinder, caps: { mon: "profile rbd", osd: "profile rbd >>> pool={{ openstack_cinder_pool.name }}, profile rbd pool={{ >>> openstack_nova_pool.name }}, profile rbd pool={{ >>> openstack_glance_pool.name }}"}, mode: "0600" } >>> - { name: client.cinder-backup, caps: { mon: "profile rbd", osd: >>> "profile rbd pool={{ openstack_cinder_backup_pool.name }}"}, mode: >>> "0600" } >>> - { name: client.gnocchi, caps: { mon: "profile rbd", osd: "profile >>> rbd pool={{ openstack_gnocchi_pool.name }}"}, mode: "0600", } >>> - { name: client.openstack, caps: { mon: "profile rbd", osd: "profile >>> rbd pool={{ openstack_glance_pool.name }}, profile rbd pool={{ >>> openstack_nova_pool.name }}, profile rbd pool={{ >>> openstack_cinder_pool.name }}, profile rbd pool={{ >>> openstack_cinder_backup_pool.name }}"}, mode: "0600" } 
>>> dashboard_enabled: True >>> dashboard_protocol: https >>> dashboard_port: 8443 >>> dashboard_network: "192.168.1.0/24" >>> dashboard_admin_user: admin >>> dashboard_admin_user_ro: true >>> dashboard_admin_password: *********** >>> dashboard_crt: '/home/deployer/work/site-central/chaininv.crt' >>> dashboard_key: '/home/deployer/work/site-central/cloud_example.com.priv' >>> dashboard_grafana_api_no_ssl_verify: true >>> dashboard_rgw_api_user_id: admin >>> dashboard_rgw_api_no_ssl_verify: true >>> dashboard_frontend_vip: '192.168.1.5' >>> node_exporter_container_image: " >>> 192.168.1.16:4000/prom/node-exporter:v0.17.0" >>> grafana_admin_user: admin >>> grafana_admin_password: ********* >>> grafana_crt: '/home/deployer/work/site-central/chaininv.crt' >>> grafana_key: '/home/deployer/work/site-central/cloud_example.com.priv' >>> grafana_server_fqdn: 'grafanasrv.cloud.example.com' >>> grafana_container_image: "192.168.1.16:4000/grafana/grafana:6.7.4" >>> grafana_dashboard_version: pacific >>> prometheus_container_image: "192.168.1.16:4000/prom/prometheus:v2.7.2" >>> alertmanager_container_image: " >>> 192.168.1.16:4000/prom/alertmanager:v0.16.2" >>> >> >> And my rgws.yml >> >>> --- >>> dummy: >>> copy_admin_key: true >>> rgw_create_pools: >>> "{{ rgw_zone }}.rgw.buckets.data": >>> pg_num: 256 >>> pgp_num: 256 >>> size: 3 >>> type: replicated >>> pg_autoscale_mode: False >>> rule_id: 1 >>> "{{ rgw_zone }}.rgw.buckets.index": >>> pg_num: 64 >>> pgp_num: 64 >>> size: 3 >>> type: replicated >>> pg_autoscale_mode: False >>> rule_id: 1 >>> "{{ rgw_zone }}.rgw.meta": >>> pg_num: 32 >>> pgp_num: 32 >>> size: 3 >>> type: replicated >>> pg_autoscale_mode: False >>> rule_id: 1 >>> "{{ rgw_zone }}.rgw.log": >>> pg_num: 32 >>> pgp_num: 32 >>> size: 3 >>> type: replicated >>> pg_autoscale_mode: False >>> rule_id: 1 >>> "{{ rgw_zone }}.rgw.control": >>> pg_num: 32 >>> pgp_num: 32 >>> size: 3 >>> type: replicated >>> pg_autoscale_mode: False >>> rule_id: 1 >>> >> >> The ceph_rgw 
user was created by kolla >> (xenavenv) [deployer at rscdeployer ~]$ openstack user list | grep ceph >> | 3262aa7e03ab49c8a5710dfe3b16a136 | ceph_rgw >> >> This is my ceph.conf from one of my controllers : >> >>> [root at controllera ~]# cat /etc/ceph/ceph.conf >>> [client.rgw.controllera.rgw0] >>> host = controllera >>> rgw_keystone_url = https://dash.cloud.example.com:5000 >>> ##Authentication using username, password and tenant. Preferred. >>> rgw_keystone_verify_ssl = false >>> rgw_keystone_api_version = 3 >>> rgw_keystone_admin_user = ceph_rgw >>> rgw_keystone_admin_password = >>> cos2Jcnpnw9BhGwvPm************************** >>> rgw_keystone_admin_domain = Default >>> rgw_keystone_admin_project = service >>> rgw_s3_auth_use_keystone = true >>> rgw_keystone_accepted_roles = admin >>> rgw_keystone_implicit_tenants = true >>> rgw_swift_account_in_url = true >>> keyring = /var/lib/ceph/radosgw/ceph-rgw.controllera.rgw0/keyring >>> log file = /var/log/ceph/ceph-rgw-controllera.rgw0.log >>> rgw frontends = beast endpoint=10.10.1.5:8080 >>> rgw thread pool size = 512 >>> #For Debug >>> debug ms = 1 >>> debug rgw = 20 >>> >>> >>> # Please do not change this file directly since it is managed by Ansible >>> and will be overwritten >>> [global] >>> cluster network = 10.10.2.0/24 >>> fsid = da094354-6ade-415a-a424-************ >>> mon host = [v2:10.10.1.5:3300,v1:10.10.1.5:6789],[v2:10.10.1.9:3300,v1: >>> 10.10.1.9:6789],[v2:10.10.1.13:3300,v1:10.10.1.13:6789] >>> mon initial members = controllera,controllerb,controllerc >>> osd pool default crush rule = 1 >>> *public network = 10.10.1.0/24 * >>> >> >> >> Here are my swift endpoints >> (xenavenv) [deployer at rscdeployer ~]$ openstack endpoint list | grep swift >> | 4082b4acf8bc4e4c9efc6e2d0e293724 | RegionOne | swift | >> object-store | True | admin | >> https://dashint.cloud.example.com:6780/v1/AUTH_%(project_id)s | >> | b13a2f53e13e4650b4efdb8184eb0211 | RegionOne | swift | >> object-store | True | internal | >> 
https://dashint.cloud.example.com:6780/v1/AUTH_%(project_id)s | >> | f85b36ff9a2b49bc9eaadf1aafdee28c | RegionOne | swift | >> object-store | True | public | >> https://dash.cloud.example.com:6780/v1/AUTH_%(project_id)s | >> >> When I connect to Horizon -> Project -> Object Store -> Containers I get >> theses errors : >> >> - Unable to get the swift container listing >> - Unable to fetch the policy details. >> >> I cannot create a new container from the WebUI, the Storage policy >> parameter is empty. >> If I try to create a new container from the CLI, I get this : >> (xenavenv) [deployer at rscdeployer ~]$ source cephrgw-openrc.sh >> (xenavenv) [deployer at rscdeployer ~]$ openstack container create demo -v >> START with options: container create demo -v >> command: container create -> >> openstackclient.object.v1.container.CreateContainer (auth=True) >> Using auth plugin: password >> Not Found (HTTP 404) >> END return value: 1 >> >> >> This is the log from RGW service when I execute the above command : >> >>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 CONTENT_LENGTH=0 >>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 HTTP_ACCEPT=*/* >>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 HTTP_ACCEPT_ENCODING=gzip, >>> deflate >>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 HTTP_HOST= >>> dashint.cloud.example.com:6780 >>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 >>> HTTP_USER_AGENT=openstacksdk/0.59.0 keystoneauth1/4.4.0 >>> python-requests/2.26.0 CPython/3.8.8 >>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 HTTP_VERSION=1.1 >>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 >>> HTTP_X_AUTH_TOKEN=gAAAAABiXUrjDFNzXx03mt1lbpUiCqNND1HACspSfg6h_TMxKYND5Hb9BO3FxH0a7CYoBXgRJywGszlK8cl-7zbUNRjHmxgIzmyh-CrWyGv793ZLOAmT_XShcrIKThjIIH3gTxYoX1TXwOKbsvMuZnI5EKKsol2y2MhcqPLeLGc28_AwoOr_b80 >>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 >>> HTTP_X_FORWARDED_FOR=10.10.3.16 >>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 HTTP_X_FORWARDED_PROTO=https >>> 
2022-04-18T12:26:27.995+0100 7f22e07a9700 20 REMOTE_ADDR=10.10.1.13 >>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 REQUEST_METHOD=PUT >>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 >>> REQUEST_URI=/v1/AUTH_971efa4cb18f42f7a405342072c39c9d/demo >>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 >>> SCRIPT_URI=/v1/AUTH_971efa4cb18f42f7a405342072c39c9d/demo >>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 SERVER_PORT=8080 >>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 1 ====== starting new request >>> req=0x7f23221aa620 ===== >>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 2 req 728157015944164764 >>> 0.000000000s initializing for trans_id = >>> tx000000a1aeef2b40f759c-00625d4ae3-4b389-default >>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 10 req 728157015944164764 >>> 0.000000000s rgw api priority: s3=8 s3website=7 >>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 10 req 728157015944164764 >>> 0.000000000s host=dashint.cloud.example.com >>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 req 728157015944164764 >>> 0.000000000s subdomain= domain= in_hosted_domain=0 >>> in_hosted_domain_s3website=0 >>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 req 728157015944164764 >>> 0.000000000s final domain/bucket subdomain= domain= in_hosted_domain=0 >>> in_hosted_domain_s3website=0 s->info.domain= >>> s->info.request_uri=/v1/AUTH_971efa4cb18f42f7a405342072c39c9d/demo >>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 req 728157015944164764 >>> 0.000000000s get_handler handler=22RGWHandler_REST_Obj_S3 >>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 10 req 728157015944164764 >>> 0.000000000s handler=22RGWHandler_REST_Obj_S3 >>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 2 req 728157015944164764 >>> 0.000000000s getting op 1 >>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 1 -- 10.10.1.13:0/2715436964 >>> --> [v2:10.10.1.7:6801/4815,v1:10.10.1.7:6803/4815] -- >>> osd_op(unknown.0.0:1516 12.3 12:c14cb721:::script.prerequest.:head [call >>> version.read in=11b,getxattrs,stat] snapc 0=[] >>> 
ondisk+read+known_if_redirected e1182) v8 -- 0x56055eb2c400 con >>> 0x56055e53b000 >>> 2022-04-18T12:26:27.996+0100 7f230d002700 1 -- 10.10.1.13:0/2715436964 >>> <== osd.23 v2:10.10.1.7:6801/4815 22 ==== osd_op_reply(1516 >>> script.prerequest. [call,getxattrs,stat] v0'0 uv0 ondisk = -2 ((2) No such >>> file or directory)) v8 ==== 246+0+0 (crc 0 0 0) 0x56055ea18b40 con >>> 0x56055e53b000 >>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 10 req 728157015944164764 >>> 0.001000002s s3:put_obj scheduling with throttler client=2 cost=1 >>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 10 req 728157015944164764 >>> 0.001000002s s3:put_obj op=21RGWPutObj_ObjStore_S3 >>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 2 req 728157015944164764 >>> 0.001000002s s3:put_obj verifying requester >>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 20 req 728157015944164764 >>> 0.001000002s s3:put_obj rgw::auth::StrategyRegistry::s3_main_strategy_t: >>> trying rgw::auth::s3::AWSAuthStrategy >>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 20 req 728157015944164764 >>> 0.001000002s s3:put_obj rgw::auth::s3::AWSAuthStrategy: trying >>> rgw::auth::s3::S3AnonymousEngine >>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 20 req 728157015944164764 >>> 0.001000002s s3:put_obj rgw::auth::s3::S3AnonymousEngine granted access >>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 20 req 728157015944164764 >>> 0.001000002s s3:put_obj rgw::auth::s3::AWSAuthStrategy granted access >>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 2 req 728157015944164764 >>> 0.001000002s s3:put_obj normalizing buckets and tenants >>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 10 req 728157015944164764 >>> 0.001000002s s->object=AUTH_971efa4cb18f42f7a405342072c39c9d/demo >>> s->bucket=v1 >>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 2 req 728157015944164764 >>> 0.001000002s s3:put_obj init permissions >>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 20 req 728157015944164764 >>> 0.001000002s s3:put_obj get_system_obj_state: rctx=0x7f23221a9000 >>> 
obj=default.rgw.meta:root:v1 state=0x56055ea8c520 s->prefetch_data=0 >>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 10 req 728157015944164764 >>> 0.001000002s s3:put_obj cache get: name=default.rgw.meta+root+v1 : miss >>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 1 -- 10.10.1.13:0/2715436964 >>> --> [v2:10.10.1.3:6802/4933,v1:10.10.1.3:6806/4933] -- >>> osd_op(unknown.0.0:1517 11.b 11:d05f7b30:root::v1:head [call version.read >>> in=11b,getxattrs,stat] snapc 0=[] ondisk+read+known_if_redirected e1182) v8 >>> -- 0x56055eb2cc00 con 0x56055e585000 >>> 2022-04-18T12:26:27.997+0100 7f230c801700 1 -- 10.10.1.13:0/2715436964 >>> <== osd.3 v2:10.10.1.3:6802/4933 9 ==== osd_op_reply(1517 v1 >>> [call,getxattrs,stat] v0'0 uv0 ondisk = -2 ((2) No such file or directory)) >>> v8 ==== 230+0+0 (crc 0 0 0) 0x56055e39db00 con 0x56055e585000 >>> 2022-04-18T12:26:27.997+0100 7f22dd7a3700 10 req 728157015944164764 >>> 0.002000004s s3:put_obj cache put: name=default.rgw.meta+root+v1 >>> info.flags=0x0 >>> 2022-04-18T12:26:27.997+0100 7f22dd7a3700 10 req 728157015944164764 >>> 0.002000004s s3:put_obj adding default.rgw.meta+root+v1 to cache LRU end >>> 2022-04-18T12:26:27.997+0100 7f22dd7a3700 10 req 728157015944164764 >>> 0.002000004s s3:put_obj init_permissions on failed, ret=-2002 >>> 2022-04-18T12:26:27.997+0100 7f22dd7a3700 1 req 728157015944164764 >>> 0.002000004s op->ERRORHANDLER: err_no=-2002 new_err_no=-2002 >>> 2022-04-18T12:26:27.997+0100 7f22dbfa0700 1 -- 10.10.1.13:0/2715436964 >>> --> [v2:10.10.1.8:6804/4817,v1:10.10.1.8:6805/4817] -- >>> osd_op(unknown.0.0:1518 12.1f 12:fb11263f:::script.postrequest.:head [call >>> version.read in=11b,getxattrs,stat] snapc 0=[] >>> ondisk+read+known_if_redirected e1182) v8 -- 0x56055eb2d000 con >>> 0x56055e94c800 >>> 2022-04-18T12:26:27.998+0100 7f230d002700 1 -- 10.10.1.13:0/2715436964 >>> <== osd.9 v2:10.10.1.8:6804/4817 10 ==== osd_op_reply(1518 >>> script.postrequest. 
[call,getxattrs,stat] v0'0 uv0 ondisk = -2 ((2) No such
>>> file or directory)) v8 ==== 247+0+0 (crc 0 0 0) 0x56055ea18b40 con
>>> 0x56055e94c800
>>> 2022-04-18T12:26:27.998+0100 7f22d8f9a700 2 req 728157015944164764
>>> 0.003000006s s3:put_obj op status=0
>>> 2022-04-18T12:26:27.998+0100 7f22d8f9a700 2 req 728157015944164764
>>> 0.003000006s s3:put_obj http status=404
>>> 2022-04-18T12:26:27.998+0100 7f22d8f9a700 1 ====== req done
>>> req=0x7f23221aa620 op status=0 http_status=404 latency=0.003000006s ======
>>> 2022-04-18T12:26:27.998+0100 7f22d8f9a700 1 beast: 0x7f23221aa620:
>>> 10.10.1.13 - anonymous [18/Apr/2022:12:26:27.995 +0100] "PUT
>>> /v1/AUTH_971efa4cb18f42f7a405342072c39c9d/demo HTTP/1.1" 404 214 -
>>> "openstacksdk/0.59.0 keystoneauth1/4.4.0 python-requests/2.26.0
>>> CPython/3.8.8" - latency=0.003000006s
>>>
>>
>> Could you help please.
>>
>> Regards.
>>
>
>
> -- 
>
> Buddhika Sanjeewa Godakuru
>
> Systems Analyst/Programmer
> Deputy Webmaster / University of Kelaniya
>
> Information and Communication Technology Centre (ICTC)
> University of Kelaniya, Sri Lanka,
> Kelaniya,
> Sri Lanka.
>
> Mobile : (+94) 071 5696981
> Office : (+94) 011 2903420 / 2903424
>
> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> University of Kelaniya Sri Lanka, accepts no liability for the content of
> this email, or for the consequences of any actions taken on the basis of
> the information provided, unless that information is subsequently confirmed
> in writing. If you are not the intended recipient, this email and/or any
> information it contains should not be copied, disclosed, retained or used
> by you or any other party and the email and all its contents should be
> promptly deleted fully from our system and the sender informed.
>
> E-mail transmission cannot be guaranteed to be secure or error-free as
> information could be intercepted, corrupted, lost, destroyed, arrive late
> or incomplete.
>
> +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gricha.1888 at gmail.com  Tue Apr 19 12:00:21 2022
From: gricha.1888 at gmail.com (Richa Gupta)
Date: Tue, 19 Apr 2022 17:30:21 +0530
Subject: [Kolla][Kolla-Ansible] Unable to Deploy tacker victoria in centos
 8 Stream
Message-ID: 

We tried installing the tacker victoria release using Kolla Ansible
all-in-one deployment on CentOS Stream 8, by using the following links:
https://docs.openstack.org/tacker/victoria/install/kolla.html
and running prerequisites mentioned in the link below:
https://docs.openstack.org/kolla-ansible/latest/user/quickstart.html#install-dependencies
(We changed ussuri to victoria at all necessary places. The globals.yml
exactly matches the one given in the OpenStack link.)
While running the deployment we encounter the following error:

RUNNING HANDLER [Waiting for rabbitmq to start]
**********************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["docker", "exec",
"rabbitmq", "rabbitmqctl", "wait",
"/var/lib/rabbitmq/mnesia/rabbitmq.pid"], "delta": "0:00:10.704454", "end":
"2022-04-19 11:43:12.259809", "msg": "non-zero return code", "rc": 75,
"start": "2022-04-19 11:43:01.555355", "stderr": "Error: operation wait on
node rabbit at tacker-victoria timed out. Timeout value used: 10000",
"stderr_lines": ["Error: operation wait on node rabbit at tacker-victoria
timed out.
Timeout value used: 10000"], "stdout": "Waiting for pid file '/var/lib/rabbitmq/mnesia/rabbitmq.pid' to appear", "stdout_lines": ["Waiting for pid file '/var/lib/rabbitmq/mnesia/rabbitmq.pid' to appear"]} RUNNING HANDLER [Restart remaining rabbitmq containers] ************************************************************************************************** NO MORE HOSTS LEFT *************************************************************************************************************************************** PLAY RECAP *********************************************************************************************************************************************** localhost : ok=77 changed=43 unreachable=0 failed=1 skipped=19 rescued=0 ignored=1 After this we run deploy again and encounter the following error: TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *************************************************************************************************************************** FAILED - RETRYING: nova | Ensure RabbitMQ users exist (5 retries left). FAILED - RETRYING: nova | Ensure RabbitMQ users exist (4 retries left). FAILED - RETRYING: nova | Ensure RabbitMQ users exist (3 retries left). FAILED - RETRYING: nova | Ensure RabbitMQ users exist (2 retries left). FAILED - RETRYING: nova | Ensure RabbitMQ users exist (1 retries left). failed: [localhost -> localhost] (item=None) => {"attempts": 5, "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} fatal: [localhost -> {{ service_rabbitmq_delegate_host }}]: FAILED! 
=> {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} PLAY RECAP ***************************************************************************************************************************************************************************** localhost : ok=116 changed=42 unreachable=0 failed=1 skipped=40 rescued=0 ignored=0 Command failed ansible-playbook -i /usr/local/share/kolla-ansible/ansible/inventory/all-in-one -e @/etc/kolla/globals.yml -e @/etc/kolla/passwords.yml -e CONFIG_DIR=/etc/kolla -e kolla_action=deploy /usr/local/share/kolla-ansible/ansible/site.yml [centos at tacker-victoria ~]$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From wodel.youchi at gmail.com Tue Apr 19 15:10:00 2022 From: wodel.youchi at gmail.com (wodel youchi) Date: Tue, 19 Apr 2022 16:10:00 +0100 Subject: [Solved][Kolla-ansible][Xena][Ceph-RGW] need help configuring Ceph RGW for Swift and S3 access In-Reply-To: References: Message-ID: Hi, Many thanks, after changing the endpoints it worked. So the question is why kolla-ansible did not create the correct urls? Did I miss something? Regards. On Tue, Apr 19, 2022 at 11:09 AM, wodel youchi wrote: > Hi, > Thanks. > > The endpoints were created by Kolla-ansible upon deployment. > > I did configure kolla-ansible to enable cross project tenant access by > using : > *ceph_rgw_swift_account_in_url: true* > > And I did add the *rgw_swift_account_in_url = true* in ceph.conf in the > Rados servers. But the endpoints were created by kolla. > > I will modify them and try again. > > Regards. > > On Tue, Apr 19, 2022 at 8:12 AM, Buddhika S.
Godakuru - University of > Kelaniya wrote: > >> Dear Wodel, >> I think that default endpoint for swift when using cephrgw is /swift/v1 >> (unless you have changed it in ceph), >> so your endpoints should be >> | 4082b4acf8bc4e4c9efc6e2d0e293724 | RegionOne | swift | >> object-store | True | admin | >> https://dashint.cloud.example.com:6780/swift/v1/AUTH_%(project_id)s | >> | b13a2f53e13e4650b4efdb8184eb0211 | RegionOne | swift | >> object-store | True | internal | >> https://dashint.cloud.example.com:6780/swift/v1/AUTH_%(project_id)s | >> | f85b36ff9a2b49bc9eaadf1aafdee28c | RegionOne | swift | >> object-store | True | public | >> https://dash.cloud.example.com:6780/swift/v1/AUTH_%(project_id)s | >> >> >> See >> https://docs.ceph.com/en/latest/radosgw/keystone/#cross-project-tenant-access >> >> On Mon, 18 Apr 2022 at 23:52, wodel youchi >> wrote: >> >>> Hi, >>> I am having trouble configuring Openstack to use Ceph RGW as the Object >>> store backend for Swift and S3. >>> >>> My setup is an HCI, I have 3 controllers which are also my ceph mgrs, >>> mons and rgws and 9 compute/storage servers (osds). >>> Xena is deployed with Ceph Pacific. >>> >>> Ceph public network is a private network on vlan10 with 10.10.1.0/24 as >>> a subnet.
>>> >>> Here is a snippet from my globals.yml : >>> >>>> --- >>>> kolla_base_distro: "centos" >>>> kolla_install_type: "source" >>>> openstack_release: "xena" >>>> kolla_internal_vip_address: "10.10.3.1" >>>> kolla_internal_fqdn: "dashint.cloud.example.com" >>>> kolla_external_vip_address: "x.x.x.x" >>>> kolla_external_fqdn: "dash.cloud.example.com " >>>> docker_registry: 192.168.1.16:4000 >>>> network_interface: "bond0" >>>> kolla_external_vip_interface: "bond1" >>>> api_interface: "bond1.30" >>>> *storage_interface: "bond1.10" <---------------- VLAN10 (public >>>> ceph network)* >>>> tunnel_interface: "bond1.40" >>>> dns_interface: "bond1" >>>> octavia_network_interface: "bond1.301" >>>> neutron_external_interface: "bond2" >>>> neutron_plugin_agent: "openvswitch" >>>> keepalived_virtual_router_id: "51" >>>> kolla_enable_tls_internal: "yes" >>>> kolla_enable_tls_external: "yes" >>>> kolla_certificates_dir: "{{ node_config }}/certificates" >>>> kolla_external_fqdn_cert: "{{ kolla_certificates_dir }}/haproxy.pem" >>>> kolla_internal_fqdn_cert: "{{ kolla_certificates_dir >>>> }}/haproxy-internal.pem" >>>> kolla_admin_openrc_cacert: "{{ kolla_certificates_dir }}/ca.pem" >>>> kolla_copy_ca_into_containers: "yes" >>>> kolla_enable_tls_backend: "yes" >>>> kolla_verify_tls_backend: "no" >>>> kolla_tls_backend_cert: "{{ kolla_certificates_dir }}/backend-cert.pem" >>>> kolla_tls_backend_key: "{{ kolla_certificates_dir }}/backend-key.pem" >>>> enable_openstack_core: "yes" >>>> enable_hacluster: "yes" >>>> enable_haproxy: "yes" >>>> enable_aodh: "yes" >>>> enable_barbican: "yes" >>>> enable_ceilometer: "yes" >>>> enable_central_logging: "yes" >>>> >>>> *enable_ceph_rgw: "yes"enable_ceph_rgw_loadbalancer: "{{ >>>> enable_ceph_rgw | bool }}"* >>>> enable_cinder: "yes" >>>> enable_cinder_backup: "yes" >>>> enable_collectd: "yes" >>>> enable_designate: "yes" >>>> enable_elasticsearch_curator: "yes" >>>> enable_freezer: "no" >>>> enable_gnocchi: "yes" >>>> enable_gnocchi_statsd: 
"yes" >>>> enable_magnum: "yes" >>>> enable_manila: "yes" >>>> enable_manila_backend_cephfs_native: "yes" >>>> enable_mariabackup: "yes" >>>> enable_masakari: "yes" >>>> enable_neutron_vpnaas: "yes" >>>> enable_neutron_qos: "yes" >>>> enable_neutron_agent_ha: "yes" >>>> enable_neutron_provider_networks: "yes" >>>> enable_neutron_segments: "yes" >>>> enable_octavia: "yes" >>>> enable_trove: "yes" >>>> external_ceph_cephx_enabled: "yes" >>>> ceph_glance_keyring: "ceph.client.glance.keyring" >>>> ceph_glance_user: "glance" >>>> ceph_glance_pool_name: "images" >>>> ceph_cinder_keyring: "ceph.client.cinder.keyring" >>>> ceph_cinder_user: "cinder" >>>> ceph_cinder_pool_name: "volumes" >>>> ceph_cinder_backup_keyring: "ceph.client.cinder-backup.keyring" >>>> ceph_cinder_backup_user: "cinder-backup" >>>> ceph_cinder_backup_pool_name: "backups" >>>> ceph_nova_keyring: "{{ ceph_cinder_keyring }}" >>>> ceph_nova_user: "cinder" >>>> ceph_nova_pool_name: "vms" >>>> ceph_gnocchi_keyring: "ceph.client.gnocchi.keyring" >>>> ceph_gnocchi_user: "gnocchi" >>>> ceph_gnocchi_pool_name: "metrics" >>>> ceph_manila_keyring: "ceph.client.manila.keyring" >>>> ceph_manila_user: "manila" >>>> glance_backend_ceph: "yes" >>>> glance_backend_file: "no" >>>> gnocchi_backend_storage: "ceph" >>>> cinder_backend_ceph: "yes" >>>> cinder_backup_driver: "ceph" >>>> cloudkitty_collector_backend: "gnocchi" >>>> designate_ns_record: "cloud.example.com " >>>> nova_backend_ceph: "yes" >>>> nova_compute_virt_type: "kvm" >>>> octavia_auto_configure: yes >>>> octavia_amp_flavor: >>>> name: "amphora" >>>> is_public: no >>>> vcpus: 1 >>>> ram: 1024 >>>> disk: 5 >>>> octavia_amp_network: >>>> name: lb-mgmt-net >>>> provider_network_type: vlan >>>> provider_segmentation_id: 301 >>>> provider_physical_network: physnet1 >>>> external: false >>>> shared: false >>>> subnet: >>>> name: lb-mgmt-subnet >>>> cidr: "10.7.0.0/16" >>>> allocation_pool_start: "10.7.0.50" >>>> allocation_pool_end: "10.7.255.200" >>>> 
no_gateway_ip: yes >>>> enable_dhcp: yes >>>> mtu: 9000 >>>> octavia_amp_network_cidr: 10.10.7.0/24 >>>> octavia_amp_image_tag: "amphora" >>>> octavia_certs_country: XZ >>>> octavia_certs_state: Gotham >>>> octavia_certs_organization: WAYNE >>>> octavia_certs_organizational_unit: IT >>>> horizon_keystone_multidomain: true >>>> elasticsearch_curator_dry_run: "no" >>>> enable_cluster_user_trust: true >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> *ceph_rgw_hosts: - host: controllera ip: 10.10.1.5 >>>> port: 8080 - host: controllerb ip: 10.10.1.9 >>>> port: 8080 - host: controllerc ip: 10.10.1.13 >>>> port: 8080ceph_rgw_swift_account_in_url: trueceph_rgw_swift_compatibility: >>>> true* >>> >>> >>> >>> And Here is my ceph all.yml file >>> >>>> --- >>>> dummy: >>>> ceph_release_num: 16 >>>> cluster: ceph >>>> configure_firewall: False >>>> *monitor_interface: bond1.10* >>>> monitor_address_block: 10.10.1.0/24 >>>> is_hci: true >>>> hci_safety_factor: 0.2 >>>> osd_memory_target: 4294967296 >>>> *public_network: 10.10.1.0/24 * >>>> cluster_network: 10.10.2.0/24 >>>> *radosgw_interface: "{{ monitor_interface }}"* >>>> *radosgw_address_block: 10.10.1.0/24 * >>>> nfs_file_gw: true >>>> nfs_obj_gw: true >>>> ceph_docker_image: "ceph/daemon" >>>> ceph_docker_image_tag: latest-pacific >>>> ceph_docker_registry: 192.168.1.16:4000 >>>> containerized_deployment: True >>>> openstack_config: true >>>> openstack_glance_pool: >>>> name: "images" >>>> pg_autoscale_mode: False >>>> application: "rbd" >>>> pg_num: 128 >>>> pgp_num: 128 >>>> target_size_ratio: 5.00 >>>> rule_name: "SSD" >>>> openstack_cinder_pool: >>>> name: "volumes" >>>> pg_autoscale_mode: False >>>> application: "rbd" >>>> pg_num: 1024 >>>> pgp_num: 1024 >>>> target_size_ratio: 42.80 >>>> rule_name: "SSD" >>>> openstack_nova_pool: >>>> name: "vms" >>>> pg_autoscale_mode: False >>>> application: "rbd" >>>> pg_num: 256 >>>> pgp_num: 256 >>>> target_size_ratio: 10.00 >>>> rule_name: "SSD" >>>> 
openstack_cinder_backup_pool: >>>> name: "backups" >>>> pg_autoscale_mode: False >>>> application: "rbd" >>>> pg_num: 512 >>>> pgp_num: 512 >>>> target_size_ratio: 18.00 >>>> rule_name: "SSD" >>>> openstack_gnocchi_pool: >>>> name: "metrics" >>>> pg_autoscale_mode: False >>>> application: "rbd" >>>> pg_num: 32 >>>> pgp_num: 32 >>>> target_size_ratio: 0.10 >>>> rule_name: "SSD" >>>> openstack_cephfs_data_pool: >>>> name: "cephfs_data" >>>> pg_autoscale_mode: False >>>> application: "cephfs" >>>> pg_num: 256 >>>> pgp_num: 256 >>>> target_size_ratio: 10.00 >>>> rule_name: "SSD" >>>> openstack_cephfs_metadata_pool: >>>> name: "cephfs_metadata" >>>> pg_autoscale_mode: False >>>> application: "cephfs" >>>> pg_num: 32 >>>> pgp_num: 32 >>>> target_size_ratio: 0.10 >>>> rule_name: "SSD" >>>> openstack_pools: >>>> - "{{ openstack_glance_pool }}" >>>> - "{{ openstack_cinder_pool }}" >>>> - "{{ openstack_nova_pool }}" >>>> - "{{ openstack_cinder_backup_pool }}" >>>> - "{{ openstack_gnocchi_pool }}" >>>> - "{{ openstack_cephfs_data_pool }}" >>>> - "{{ openstack_cephfs_metadata_pool }}" >>>> openstack_keys: >>>> - { name: client.glance, caps: { mon: "profile rbd", osd: "profile >>>> rbd pool={{ openstack_cinder_pool.name }}, profile rbd pool={{ >>>> openstack_glance_pool.name }}"}, mode: "0600" } >>>> - { name: client.cinder, caps: { mon: "profile rbd", osd: "profile >>>> rbd pool={{ openstack_cinder_pool.name }}, profile rbd pool={{ >>>> openstack_nova_pool.name }}, profile rbd pool={{ >>>> openstack_glance_pool.name }}"}, mode: "0600" } >>>> - { name: client.cinder-backup, caps: { mon: "profile rbd", osd: >>>> "profile rbd pool={{ openstack_cinder_backup_pool.name }}"}, mode: >>>> "0600" } >>>> - { name: client.gnocchi, caps: { mon: "profile rbd", osd: "profile >>>> rbd pool={{ openstack_gnocchi_pool.name }}"}, mode: "0600", } >>>> - { name: client.openstack, caps: { mon: "profile rbd", osd: "profile >>>> rbd pool={{ openstack_glance_pool.name }}, profile rbd pool={{ >>>> 
openstack_nova_pool.name }}, profile rbd pool={{ >>>> openstack_cinder_pool.name }}, profile rbd pool={{ >>>> openstack_cinder_backup_pool.name }}"}, mode: "0600" } >>>> dashboard_enabled: True >>>> dashboard_protocol: https >>>> dashboard_port: 8443 >>>> dashboard_network: "192.168.1.0/24" >>>> dashboard_admin_user: admin >>>> dashboard_admin_user_ro: true >>>> dashboard_admin_password: *********** >>>> dashboard_crt: '/home/deployer/work/site-central/chaininv.crt' >>>> dashboard_key: '/home/deployer/work/site-central/cloud_example.com.priv' >>>> dashboard_grafana_api_no_ssl_verify: true >>>> dashboard_rgw_api_user_id: admin >>>> dashboard_rgw_api_no_ssl_verify: true >>>> dashboard_frontend_vip: '192.168.1.5' >>>> node_exporter_container_image: " >>>> 192.168.1.16:4000/prom/node-exporter:v0.17.0" >>>> grafana_admin_user: admin >>>> grafana_admin_password: ********* >>>> grafana_crt: '/home/deployer/work/site-central/chaininv.crt' >>>> grafana_key: '/home/deployer/work/site-central/cloud_example.com.priv' >>>> grafana_server_fqdn: 'grafanasrv.cloud.example.com' >>>> grafana_container_image: "192.168.1.16:4000/grafana/grafana:6.7.4" >>>> grafana_dashboard_version: pacific >>>> prometheus_container_image: "192.168.1.16:4000/prom/prometheus:v2.7.2" >>>> alertmanager_container_image: " >>>> 192.168.1.16:4000/prom/alertmanager:v0.16.2" >>>> >>> >>> And my rgws.yml >>> >>>> --- >>>> dummy: >>>> copy_admin_key: true >>>> rgw_create_pools: >>>> "{{ rgw_zone }}.rgw.buckets.data": >>>> pg_num: 256 >>>> pgp_num: 256 >>>> size: 3 >>>> type: replicated >>>> pg_autoscale_mode: False >>>> rule_id: 1 >>>> "{{ rgw_zone }}.rgw.buckets.index": >>>> pg_num: 64 >>>> pgp_num: 64 >>>> size: 3 >>>> type: replicated >>>> pg_autoscale_mode: False >>>> rule_id: 1 >>>> "{{ rgw_zone }}.rgw.meta": >>>> pg_num: 32 >>>> pgp_num: 32 >>>> size: 3 >>>> type: replicated >>>> pg_autoscale_mode: False >>>> rule_id: 1 >>>> "{{ rgw_zone }}.rgw.log": >>>> pg_num: 32 >>>> pgp_num: 32 >>>> size: 3 >>>> 
type: replicated >>>> pg_autoscale_mode: False >>>> rule_id: 1 >>>> "{{ rgw_zone }}.rgw.control": >>>> pg_num: 32 >>>> pgp_num: 32 >>>> size: 3 >>>> type: replicated >>>> pg_autoscale_mode: False >>>> rule_id: 1 >>>> >>> >>> The ceph_rgw user was created by kolla >>> (xenavenv) [deployer at rscdeployer ~]$ openstack user list | grep ceph >>> | 3262aa7e03ab49c8a5710dfe3b16a136 | ceph_rgw >>> >>> This is my ceph.conf from one of my controllers : >>> >>>> [root at controllera ~]# cat /etc/ceph/ceph.conf >>>> [client.rgw.controllera.rgw0] >>>> host = controllera >>>> rgw_keystone_url = https://dash.cloud.example.com:5000 >>>> ##Authentication using username, password and tenant. Preferred. >>>> rgw_keystone_verify_ssl = false >>>> rgw_keystone_api_version = 3 >>>> rgw_keystone_admin_user = ceph_rgw >>>> rgw_keystone_admin_password = >>>> cos2Jcnpnw9BhGwvPm************************** >>>> rgw_keystone_admin_domain = Default >>>> rgw_keystone_admin_project = service >>>> rgw_s3_auth_use_keystone = true >>>> rgw_keystone_accepted_roles = admin >>>> rgw_keystone_implicit_tenants = true >>>> rgw_swift_account_in_url = true >>>> keyring = /var/lib/ceph/radosgw/ceph-rgw.controllera.rgw0/keyring >>>> log file = /var/log/ceph/ceph-rgw-controllera.rgw0.log >>>> rgw frontends = beast endpoint=10.10.1.5:8080 >>>> rgw thread pool size = 512 >>>> #For Debug >>>> debug ms = 1 >>>> debug rgw = 20 >>>> >>>> >>>> # Please do not change this file directly since it is managed by >>>> Ansible and will be overwritten >>>> [global] >>>> cluster network = 10.10.2.0/24 >>>> fsid = da094354-6ade-415a-a424-************ >>>> mon host = [v2:10.10.1.5:3300,v1:10.10.1.5:6789],[v2:10.10.1.9:3300,v1: >>>> 10.10.1.9:6789],[v2:10.10.1.13:3300,v1:10.10.1.13:6789] >>>> mon initial members = controllera,controllerb,controllerc >>>> osd pool default crush rule = 1 >>>> *public network = 10.10.1.0/24 * >>>> >>> >>> >>> Here are my swift endpoints >>> (xenavenv) [deployer at rscdeployer ~]$ openstack endpoint 
list | grep >>> swift >>> | 4082b4acf8bc4e4c9efc6e2d0e293724 | RegionOne | swift | >>> object-store | True | admin | >>> https://dashint.cloud.example.com:6780/v1/AUTH_%(project_id)s | >>> | b13a2f53e13e4650b4efdb8184eb0211 | RegionOne | swift | >>> object-store | True | internal | >>> https://dashint.cloud.example.com:6780/v1/AUTH_%(project_id)s | >>> | f85b36ff9a2b49bc9eaadf1aafdee28c | RegionOne | swift | >>> object-store | True | public | >>> https://dash.cloud.example.com:6780/v1/AUTH_%(project_id)s | >>> >>> When I connect to Horizon -> Project -> Object Store -> Containers I get >>> theses errors : >>> >>> - Unable to get the swift container listing >>> - Unable to fetch the policy details. >>> >>> I cannot create a new container from the WebUI, the Storage policy >>> parameter is empty. >>> If I try to create a new container from the CLI, I get this : >>> (xenavenv) [deployer at rscdeployer ~]$ source cephrgw-openrc.sh >>> (xenavenv) [deployer at rscdeployer ~]$ openstack container create demo -v >>> START with options: container create demo -v >>> command: container create -> >>> openstackclient.object.v1.container.CreateContainer (auth=True) >>> Using auth plugin: password >>> Not Found (HTTP 404) >>> END return value: 1 >>> >>> >>> This is the log from RGW service when I execute the above command : >>> >>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 CONTENT_LENGTH=0 >>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 HTTP_ACCEPT=*/* >>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 HTTP_ACCEPT_ENCODING=gzip, >>>> deflate >>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 HTTP_HOST= >>>> dashint.cloud.example.com:6780 >>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 >>>> HTTP_USER_AGENT=openstacksdk/0.59.0 keystoneauth1/4.4.0 >>>> python-requests/2.26.0 CPython/3.8.8 >>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 HTTP_VERSION=1.1 >>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 >>>> 
HTTP_X_AUTH_TOKEN=gAAAAABiXUrjDFNzXx03mt1lbpUiCqNND1HACspSfg6h_TMxKYND5Hb9BO3FxH0a7CYoBXgRJywGszlK8cl-7zbUNRjHmxgIzmyh-CrWyGv793ZLOAmT_XShcrIKThjIIH3gTxYoX1TXwOKbsvMuZnI5EKKsol2y2MhcqPLeLGc28_AwoOr_b80 >>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 >>>> HTTP_X_FORWARDED_FOR=10.10.3.16 >>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 >>>> HTTP_X_FORWARDED_PROTO=https >>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 REMOTE_ADDR=10.10.1.13 >>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 REQUEST_METHOD=PUT >>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 >>>> REQUEST_URI=/v1/AUTH_971efa4cb18f42f7a405342072c39c9d/demo >>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 >>>> SCRIPT_URI=/v1/AUTH_971efa4cb18f42f7a405342072c39c9d/demo >>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 SERVER_PORT=8080 >>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 1 ====== starting new >>>> request req=0x7f23221aa620 ===== >>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 2 req 728157015944164764 >>>> 0.000000000s initializing for trans_id = >>>> tx000000a1aeef2b40f759c-00625d4ae3-4b389-default >>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 10 req 728157015944164764 >>>> 0.000000000s rgw api priority: s3=8 s3website=7 >>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 10 req 728157015944164764 >>>> 0.000000000s host=dashint.cloud.example.com >>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 req 728157015944164764 >>>> 0.000000000s subdomain= domain= in_hosted_domain=0 >>>> in_hosted_domain_s3website=0 >>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 req 728157015944164764 >>>> 0.000000000s final domain/bucket subdomain= domain= in_hosted_domain=0 >>>> in_hosted_domain_s3website=0 s->info.domain= >>>> s->info.request_uri=/v1/AUTH_971efa4cb18f42f7a405342072c39c9d/demo >>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 req 728157015944164764 >>>> 0.000000000s get_handler handler=22RGWHandler_REST_Obj_S3 >>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 10 req 728157015944164764 >>>> 0.000000000s 
handler=22RGWHandler_REST_Obj_S3 >>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 2 req 728157015944164764 >>>> 0.000000000s getting op 1 >>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 1 -- 10.10.1.13:0/2715436964 >>>> --> [v2:10.10.1.7:6801/4815,v1:10.10.1.7:6803/4815] -- >>>> osd_op(unknown.0.0:1516 12.3 12:c14cb721:::script.prerequest.:head [call >>>> version.read in=11b,getxattrs,stat] snapc 0=[] >>>> ondisk+read+known_if_redirected e1182) v8 -- 0x56055eb2c400 con >>>> 0x56055e53b000 >>>> 2022-04-18T12:26:27.996+0100 7f230d002700 1 -- 10.10.1.13:0/2715436964 >>>> <== osd.23 v2:10.10.1.7:6801/4815 22 ==== osd_op_reply(1516 >>>> script.prerequest. [call,getxattrs,stat] v0'0 uv0 ondisk = -2 ((2) No such >>>> file or directory)) v8 ==== 246+0+0 (crc 0 0 0) 0x56055ea18b40 con >>>> 0x56055e53b000 >>>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 10 req 728157015944164764 >>>> 0.001000002s s3:put_obj scheduling with throttler client=2 cost=1 >>>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 10 req 728157015944164764 >>>> 0.001000002s s3:put_obj op=21RGWPutObj_ObjStore_S3 >>>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 2 req 728157015944164764 >>>> 0.001000002s s3:put_obj verifying requester >>>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 20 req 728157015944164764 >>>> 0.001000002s s3:put_obj rgw::auth::StrategyRegistry::s3_main_strategy_t: >>>> trying rgw::auth::s3::AWSAuthStrategy >>>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 20 req 728157015944164764 >>>> 0.001000002s s3:put_obj rgw::auth::s3::AWSAuthStrategy: trying >>>> rgw::auth::s3::S3AnonymousEngine >>>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 20 req 728157015944164764 >>>> 0.001000002s s3:put_obj rgw::auth::s3::S3AnonymousEngine granted access >>>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 20 req 728157015944164764 >>>> 0.001000002s s3:put_obj rgw::auth::s3::AWSAuthStrategy granted access >>>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 2 req 728157015944164764 >>>> 0.001000002s s3:put_obj normalizing buckets and 
tenants >>>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 10 req 728157015944164764 >>>> 0.001000002s s->object=AUTH_971efa4cb18f42f7a405342072c39c9d/demo >>>> s->bucket=v1 >>>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 2 req 728157015944164764 >>>> 0.001000002s s3:put_obj init permissions >>>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 20 req 728157015944164764 >>>> 0.001000002s s3:put_obj get_system_obj_state: rctx=0x7f23221a9000 >>>> obj=default.rgw.meta:root:v1 state=0x56055ea8c520 s->prefetch_data=0 >>>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 10 req 728157015944164764 >>>> 0.001000002s s3:put_obj cache get: name=default.rgw.meta+root+v1 : miss >>>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 1 -- 10.10.1.13:0/2715436964 >>>> --> [v2:10.10.1.3:6802/4933,v1:10.10.1.3:6806/4933] -- >>>> osd_op(unknown.0.0:1517 11.b 11:d05f7b30:root::v1:head [call version.read >>>> in=11b,getxattrs,stat] snapc 0=[] ondisk+read+known_if_redirected e1182) v8 >>>> -- 0x56055eb2cc00 con 0x56055e585000 >>>> 2022-04-18T12:26:27.997+0100 7f230c801700 1 -- 10.10.1.13:0/2715436964 >>>> <== osd.3 v2:10.10.1.3:6802/4933 9 ==== osd_op_reply(1517 v1 >>>> [call,getxattrs,stat] v0'0 uv0 ondisk = -2 ((2) No such file or directory)) >>>> v8 ==== 230+0+0 (crc 0 0 0) 0x56055e39db00 con 0x56055e585000 >>>> 2022-04-18T12:26:27.997+0100 7f22dd7a3700 10 req 728157015944164764 >>>> 0.002000004s s3:put_obj cache put: name=default.rgw.meta+root+v1 >>>> info.flags=0x0 >>>> 2022-04-18T12:26:27.997+0100 7f22dd7a3700 10 req 728157015944164764 >>>> 0.002000004s s3:put_obj adding default.rgw.meta+root+v1 to cache LRU end >>>> 2022-04-18T12:26:27.997+0100 7f22dd7a3700 10 req 728157015944164764 >>>> 0.002000004s s3:put_obj init_permissions on failed, ret=-2002 >>>> 2022-04-18T12:26:27.997+0100 7f22dd7a3700 1 req 728157015944164764 >>>> 0.002000004s op->ERRORHANDLER: err_no=-2002 new_err_no=-2002 >>>> 2022-04-18T12:26:27.997+0100 7f22dbfa0700 1 -- 10.10.1.13:0/2715436964 >>>> --> 
[v2:10.10.1.8:6804/4817,v1:10.10.1.8:6805/4817] -- >>>> osd_op(unknown.0.0:1518 12.1f 12:fb11263f:::script.postrequest.:head [call >>>> version.read in=11b,getxattrs,stat] snapc 0=[] >>>> ondisk+read+known_if_redirected e1182) v8 -- 0x56055eb2d000 con >>>> 0x56055e94c800 >>>> 2022-04-18T12:26:27.998+0100 7f230d002700 1 -- 10.10.1.13:0/2715436964 >>>> <== osd.9 v2:10.10.1.8:6804/4817 10 ==== osd_op_reply(1518 >>>> script.postrequest. [call,getxattrs,stat] v0'0 uv0 ondisk = -2 ((2) No such >>>> file or directory)) v8 ==== 247+0+0 (crc 0 0 0) 0x56055ea18b40 con >>>> 0x56055e94c800 >>>> 2022-04-18T12:26:27.998+0100 7f22d8f9a700 2 req 728157015944164764 >>>> 0.003000006s s3:put_obj op status=0 >>>> 2022-04-18T12:26:27.998+0100 7f22d8f9a700 2 req 728157015944164764 >>>> 0.003000006s s3:put_obj http status=404 >>>> 2022-04-18T12:26:27.998+0100 7f22d8f9a700 1 ====== req done >>>> req=0x7f23221aa620 op status=0 http_status=404 latency=0.003000006s ====== >>>> 2022-04-18T12:26:27.998+0100 7f22d8f9a700 1 beast: 0x7f23221aa620: >>>> 10.10.1.13 - anonymous [18/Apr/2022:12:26:27.995 +0100] "PUT >>>> /v1/AUTH_971efa4cb18f42f7a405342072c39c9d/demo HTTP/1.1" 404 214 - >>>> "openstacksdk/0.59.0 keystoneauth1/4.4.0 python-requests/2.26.0 >>>> CPython/3.8.8" - latency=0.003000006s >>>> >>> >>> Could you help please. >>> >>> Regards. >>> >> >> >> -- >> >> ??????? ????? ???????? >> Buddhika Sanjeewa Godakuru >> >> Systems Analyst/Programmer >> Deputy Webmaster / University of Kelaniya >> >> Information and Communication Technology Centre (ICTC) >> University of Kelaniya, Sri Lanka, >> Kelaniya, >> Sri Lanka. 
>> >> Mobile : (+94) 071 5696981 >> Office : (+94) 011 2903420 / 2903424 >> >> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ >> University of Kelaniya Sri Lanka, accepts no liability for the content of >> this email, or for the consequences of any actions taken on the basis of >> the information provided, unless that information is subsequently confirmed >> in writing. If you are not the intended recipient, this email and/or any >> information it contains should not be copied, disclosed, retained or used >> by you or any other party and the email and all its contents should be >> promptly deleted fully from our system and the sender informed. >> >> E-mail transmission cannot be guaranteed to be secure or error-free as >> information could be intercepted, corrupted, lost, destroyed, arrive late >> or incomplete. >> >> +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Tue Apr 19 15:28:45 2022 From: amy at demarco.com (Amy) Date: Tue, 19 Apr 2022 10:28:45 -0500 Subject: Re: [openstack-ansible] Nominate Damian Dąbrowski for openstack-ansible core team In-Reply-To: References: Message-ID: <7A74639F-F049-4840-A5BD-F2538995E182@demarco.com> +2 Welcome Amy (spotz) > On Apr 19, 2022, at 9:52 AM, Andrew Bonney wrote: > > Sounds good to me! > > -----Original Message----- > From: Jonathan Rosser > Sent: 19 April 2022 15:24 > To: openstack-discuss at lists.openstack.org > Subject: Re: [openstack-ansible] Nominate Damian Dąbrowski for openstack-ansible core team > > +2 Welcome Damian! > >> On 19/04/2022 10:39, Dmitriy Rabotyagov wrote: >> Hi OSA Cores! >> >> I'm happy to nominate Damian Dąbrowski (damiandabrowski) to the core >> reviewers team.
>> >> He has been doing a good job lately in reviewing incoming patches, >> helping out in IRC and participating in community activities, so I >> think he will be a good match for the Core Reviewers group. >> >> So I call for current Core Reviewers to support this nomination or >> raise objections to it until 22nd of April 2022. If no objections are >> raised we will add Damian to the team next week. >> >> -- >> Kind regards, >> Dmitriy Rabotyagov > From johnsomor at gmail.com Tue Apr 19 16:01:48 2022 From: johnsomor at gmail.com (Michael Johnson) Date: Tue, 19 Apr 2022 09:01:48 -0700 Subject: [all][tc][Release Management] Improvements in project governance In-Reply-To: <1858624.taCxCBeP46@p1> References: <1858624.taCxCBeP46@p1> Message-ID: Comments inline. Michael On Tue, Apr 19, 2022 at 6:34 AM Slawek Kaplonski wrote: > > Hi, > > > During the Zed PTG sessions in the TC room we were discussing some ideas how we can improve project governance. > > One of the topics was related to the projects which don't really have any changes in the cycle. Currently we are forcing to do new release of basically the same code when it comes to the end of the cycle. > > Can/Should we maybe change that and e.g. instead of forcing new release use last released version of the of the repo for new release too? In the past this has created confusion in the community about if a project has been dropped/removed from OpenStack. That said, I think this is the point of the "independent" release classification. > If yes, should we then automatically propose change of the release model to the "independent" maybe? Personally, I would prefer to send an email to the discuss list proposing the switch to independent. Patches can sometimes get merged before everyone gets to give input. Especially since the patch would be proposed in the "releases" project and may not be on the team's dashboards. 
> What would be the best way how Release Management team can maybe notify TC about such less active projects which don't need any new release in the cycle? That could be one of the potential conditions to check project's health by the TC team. It seems like this would be a straightforward script to write given we already have tools to capture the list of changes included in a given release. > Another question is related to the projects which aren't really active and are broken during the final release time. We had such problem in the last cycle, see [1] for details. Should we still force pushing fixes for them to be able to release or maybe should we consider deprecation of such projects and not to release it at all? In the past we have simply not released projects that are broken and don't have people actively working on fixing them. It has been a signal to the community that if they value the project they need to contribute to it. > [1] http://lists.openstack.org/pipermail/openstack-discuss/2022-March/027864.html > > > -- > > Slawek Kaplonski > > Principal Software Engineer > > Red Hat From bsanjeewa at kln.ac.lk Tue Apr 19 15:56:39 2022 From: bsanjeewa at kln.ac.lk (Buddhika S. Godakuru - University of Kelaniya) Date: Tue, 19 Apr 2022 21:26:39 +0530 Subject: [Solved][Kolla-ansible][Xena][Ceph-RGW] need help configuring Ceph RGW for Swift and S3 access In-Reply-To: References: Message-ID: Dear Wodel, It seems you need to set ceph_rgw_swift_compatibility: false Seems when you enable compatibility, it assumes ceph is working just like swift itself. (without /swift/ part) Hope this helps. On Tue, 19 Apr 2022 at 20:40, wodel youchi wrote: > Hi, > > Many thanks, after changing the endpoints it worked. > So the question is why kolla-ansible did not create the correct urls? Did > I miss something? > > Regards. > > On Tue, Apr 19, 2022 at 11:09 AM, wodel youchi > wrote: > >> Hi, >> Thanks. >> >> The endpoints were created by Kolla-ansible upon deployment.
>> >> I did configure kolla-ansible to enable cross project tenant access by >> using : >> *ceph_rgw_swift_account_in_url: true* >> >> And I did add the *rgw_swift_account_in_url = true* in ceph.conf in the >> Rados servers. But the endpoints were created by kolla. >> >> I will modify them and try again. >> >> Regards. >> >> On Tue, Apr 19, 2022 at 8:12 AM, Buddhika S. Godakuru - University of >> Kelaniya wrote: >> >>> Dear Wodel, >>> I think that default endpoint for swift when using cephrgw is /swift/v1 >>> (unless you have changed it in ceph), >>> so your endpoints should be >>> | 4082b4acf8bc4e4c9efc6e2d0e293724 | RegionOne | swift | >>> object-store | True | admin | >>> https://dashint.cloud.example.com:6780/swift/v1/AUTH_%(project_id)s | >>> | b13a2f53e13e4650b4efdb8184eb0211 | RegionOne | swift | >>> object-store | True | internal | >>> https://dashint.cloud.example.com:6780/swift/v1/AUTH_%(project_id)s | >>> | f85b36ff9a2b49bc9eaadf1aafdee28c | RegionOne | swift | >>> object-store | True | public | >>> https://dash.cloud.example.com:6780/swift/v1/AUTH_%(project_id)s | >>> >>> >>> See >>> https://docs.ceph.com/en/latest/radosgw/keystone/#cross-project-tenant-access >>> >>> On Mon, 18 Apr 2022 at 23:52, wodel youchi >>> wrote: >>> >>>> Hi, >>>> I am having trouble configuring Openstack to use Ceph RGW as the Object >>>> store backend for Swift and S3. >>>> >>>> My setup is an HCI, I have 3 controllers which are also my ceph mgrs, >>>> mons and rgws and 9 compute/storage servers (osds). >>>> Xena is deployed with Ceph Pacific. >>>> >>>> Ceph public network is a private network on vlan10 with 10.10.1.0/24 >>>> as a subnet.
>>>> >>>> Here is a snippet from my globals.yml : >>>> >>>>> --- >>>>> kolla_base_distro: "centos" >>>>> kolla_install_type: "source" >>>>> openstack_release: "xena" >>>>> kolla_internal_vip_address: "10.10.3.1" >>>>> kolla_internal_fqdn: "dashint.cloud.example.com" >>>>> kolla_external_vip_address: "x.x.x.x" >>>>> kolla_external_fqdn: "dash.cloud.example.com " >>>>> docker_registry: 192.168.1.16:4000 >>>>> network_interface: "bond0" >>>>> kolla_external_vip_interface: "bond1" >>>>> api_interface: "bond1.30" >>>>> *storage_interface: "bond1.10" <---------------- VLAN10 (public >>>>> ceph network)* >>>>> tunnel_interface: "bond1.40" >>>>> dns_interface: "bond1" >>>>> octavia_network_interface: "bond1.301" >>>>> neutron_external_interface: "bond2" >>>>> neutron_plugin_agent: "openvswitch" >>>>> keepalived_virtual_router_id: "51" >>>>> kolla_enable_tls_internal: "yes" >>>>> kolla_enable_tls_external: "yes" >>>>> kolla_certificates_dir: "{{ node_config }}/certificates" >>>>> kolla_external_fqdn_cert: "{{ kolla_certificates_dir }}/haproxy.pem" >>>>> kolla_internal_fqdn_cert: "{{ kolla_certificates_dir >>>>> }}/haproxy-internal.pem" >>>>> kolla_admin_openrc_cacert: "{{ kolla_certificates_dir }}/ca.pem" >>>>> kolla_copy_ca_into_containers: "yes" >>>>> kolla_enable_tls_backend: "yes" >>>>> kolla_verify_tls_backend: "no" >>>>> kolla_tls_backend_cert: "{{ kolla_certificates_dir }}/backend-cert.pem" >>>>> kolla_tls_backend_key: "{{ kolla_certificates_dir }}/backend-key.pem" >>>>> enable_openstack_core: "yes" >>>>> enable_hacluster: "yes" >>>>> enable_haproxy: "yes" >>>>> enable_aodh: "yes" >>>>> enable_barbican: "yes" >>>>> enable_ceilometer: "yes" >>>>> enable_central_logging: "yes" >>>>> >>>>> *enable_ceph_rgw: "yes"enable_ceph_rgw_loadbalancer: "{{ >>>>> enable_ceph_rgw | bool }}"* >>>>> enable_cinder: "yes" >>>>> enable_cinder_backup: "yes" >>>>> enable_collectd: "yes" >>>>> enable_designate: "yes" >>>>> enable_elasticsearch_curator: "yes" >>>>> enable_freezer: "no" >>>>> 
enable_gnocchi: "yes" >>>>> enable_gnocchi_statsd: "yes" >>>>> enable_magnum: "yes" >>>>> enable_manila: "yes" >>>>> enable_manila_backend_cephfs_native: "yes" >>>>> enable_mariabackup: "yes" >>>>> enable_masakari: "yes" >>>>> enable_neutron_vpnaas: "yes" >>>>> enable_neutron_qos: "yes" >>>>> enable_neutron_agent_ha: "yes" >>>>> enable_neutron_provider_networks: "yes" >>>>> enable_neutron_segments: "yes" >>>>> enable_octavia: "yes" >>>>> enable_trove: "yes" >>>>> external_ceph_cephx_enabled: "yes" >>>>> ceph_glance_keyring: "ceph.client.glance.keyring" >>>>> ceph_glance_user: "glance" >>>>> ceph_glance_pool_name: "images" >>>>> ceph_cinder_keyring: "ceph.client.cinder.keyring" >>>>> ceph_cinder_user: "cinder" >>>>> ceph_cinder_pool_name: "volumes" >>>>> ceph_cinder_backup_keyring: "ceph.client.cinder-backup.keyring" >>>>> ceph_cinder_backup_user: "cinder-backup" >>>>> ceph_cinder_backup_pool_name: "backups" >>>>> ceph_nova_keyring: "{{ ceph_cinder_keyring }}" >>>>> ceph_nova_user: "cinder" >>>>> ceph_nova_pool_name: "vms" >>>>> ceph_gnocchi_keyring: "ceph.client.gnocchi.keyring" >>>>> ceph_gnocchi_user: "gnocchi" >>>>> ceph_gnocchi_pool_name: "metrics" >>>>> ceph_manila_keyring: "ceph.client.manila.keyring" >>>>> ceph_manila_user: "manila" >>>>> glance_backend_ceph: "yes" >>>>> glance_backend_file: "no" >>>>> gnocchi_backend_storage: "ceph" >>>>> cinder_backend_ceph: "yes" >>>>> cinder_backup_driver: "ceph" >>>>> cloudkitty_collector_backend: "gnocchi" >>>>> designate_ns_record: "cloud.example.com " >>>>> nova_backend_ceph: "yes" >>>>> nova_compute_virt_type: "kvm" >>>>> octavia_auto_configure: yes >>>>> octavia_amp_flavor: >>>>> name: "amphora" >>>>> is_public: no >>>>> vcpus: 1 >>>>> ram: 1024 >>>>> disk: 5 >>>>> octavia_amp_network: >>>>> name: lb-mgmt-net >>>>> provider_network_type: vlan >>>>> provider_segmentation_id: 301 >>>>> provider_physical_network: physnet1 >>>>> external: false >>>>> shared: false >>>>> subnet: >>>>> name: lb-mgmt-subnet >>>>> cidr: 
"10.7.0.0/16" >>>>> allocation_pool_start: "10.7.0.50" >>>>> allocation_pool_end: "10.7.255.200" >>>>> no_gateway_ip: yes >>>>> enable_dhcp: yes >>>>> mtu: 9000 >>>>> octavia_amp_network_cidr: 10.10.7.0/24 >>>>> octavia_amp_image_tag: "amphora" >>>>> octavia_certs_country: XZ >>>>> octavia_certs_state: Gotham >>>>> octavia_certs_organization: WAYNE >>>>> octavia_certs_organizational_unit: IT >>>>> horizon_keystone_multidomain: true >>>>> elasticsearch_curator_dry_run: "no" >>>>> enable_cluster_user_trust: true >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> *ceph_rgw_hosts: - host: controllera ip: 10.10.1.5 >>>>> port: 8080 - host: controllerb ip: 10.10.1.9 >>>>> port: 8080 - host: controllerc ip: 10.10.1.13 >>>>> port: 8080ceph_rgw_swift_account_in_url: trueceph_rgw_swift_compatibility: >>>>> true* >>>> >>>> >>>> >>>> And Here is my ceph all.yml file >>>> >>>>> --- >>>>> dummy: >>>>> ceph_release_num: 16 >>>>> cluster: ceph >>>>> configure_firewall: False >>>>> *monitor_interface: bond1.10* >>>>> monitor_address_block: 10.10.1.0/24 >>>>> is_hci: true >>>>> hci_safety_factor: 0.2 >>>>> osd_memory_target: 4294967296 >>>>> *public_network: 10.10.1.0/24 * >>>>> cluster_network: 10.10.2.0/24 >>>>> *radosgw_interface: "{{ monitor_interface }}"* >>>>> *radosgw_address_block: 10.10.1.0/24 * >>>>> nfs_file_gw: true >>>>> nfs_obj_gw: true >>>>> ceph_docker_image: "ceph/daemon" >>>>> ceph_docker_image_tag: latest-pacific >>>>> ceph_docker_registry: 192.168.1.16:4000 >>>>> containerized_deployment: True >>>>> openstack_config: true >>>>> openstack_glance_pool: >>>>> name: "images" >>>>> pg_autoscale_mode: False >>>>> application: "rbd" >>>>> pg_num: 128 >>>>> pgp_num: 128 >>>>> target_size_ratio: 5.00 >>>>> rule_name: "SSD" >>>>> openstack_cinder_pool: >>>>> name: "volumes" >>>>> pg_autoscale_mode: False >>>>> application: "rbd" >>>>> pg_num: 1024 >>>>> pgp_num: 1024 >>>>> target_size_ratio: 42.80 >>>>> rule_name: "SSD" >>>>> openstack_nova_pool: >>>>> 
name: "vms" >>>>> pg_autoscale_mode: False >>>>> application: "rbd" >>>>> pg_num: 256 >>>>> pgp_num: 256 >>>>> target_size_ratio: 10.00 >>>>> rule_name: "SSD" >>>>> openstack_cinder_backup_pool: >>>>> name: "backups" >>>>> pg_autoscale_mode: False >>>>> application: "rbd" >>>>> pg_num: 512 >>>>> pgp_num: 512 >>>>> target_size_ratio: 18.00 >>>>> rule_name: "SSD" >>>>> openstack_gnocchi_pool: >>>>> name: "metrics" >>>>> pg_autoscale_mode: False >>>>> application: "rbd" >>>>> pg_num: 32 >>>>> pgp_num: 32 >>>>> target_size_ratio: 0.10 >>>>> rule_name: "SSD" >>>>> openstack_cephfs_data_pool: >>>>> name: "cephfs_data" >>>>> pg_autoscale_mode: False >>>>> application: "cephfs" >>>>> pg_num: 256 >>>>> pgp_num: 256 >>>>> target_size_ratio: 10.00 >>>>> rule_name: "SSD" >>>>> openstack_cephfs_metadata_pool: >>>>> name: "cephfs_metadata" >>>>> pg_autoscale_mode: False >>>>> application: "cephfs" >>>>> pg_num: 32 >>>>> pgp_num: 32 >>>>> target_size_ratio: 0.10 >>>>> rule_name: "SSD" >>>>> openstack_pools: >>>>> - "{{ openstack_glance_pool }}" >>>>> - "{{ openstack_cinder_pool }}" >>>>> - "{{ openstack_nova_pool }}" >>>>> - "{{ openstack_cinder_backup_pool }}" >>>>> - "{{ openstack_gnocchi_pool }}" >>>>> - "{{ openstack_cephfs_data_pool }}" >>>>> - "{{ openstack_cephfs_metadata_pool }}" >>>>> openstack_keys: >>>>> - { name: client.glance, caps: { mon: "profile rbd", osd: "profile >>>>> rbd pool={{ openstack_cinder_pool.name }}, profile rbd pool={{ >>>>> openstack_glance_pool.name }}"}, mode: "0600" } >>>>> - { name: client.cinder, caps: { mon: "profile rbd", osd: "profile >>>>> rbd pool={{ openstack_cinder_pool.name }}, profile rbd pool={{ >>>>> openstack_nova_pool.name }}, profile rbd pool={{ >>>>> openstack_glance_pool.name }}"}, mode: "0600" } >>>>> - { name: client.cinder-backup, caps: { mon: "profile rbd", osd: >>>>> "profile rbd pool={{ openstack_cinder_backup_pool.name }}"}, mode: >>>>> "0600" } >>>>> - { name: client.gnocchi, caps: { mon: "profile rbd", osd: "profile 
>>>>> rbd pool={{ openstack_gnocchi_pool.name }}"}, mode: "0600", } >>>>> - { name: client.openstack, caps: { mon: "profile rbd", osd: >>>>> "profile rbd pool={{ openstack_glance_pool.name }}, profile rbd >>>>> pool={{ openstack_nova_pool.name }}, profile rbd pool={{ >>>>> openstack_cinder_pool.name }}, profile rbd pool={{ >>>>> openstack_cinder_backup_pool.name }}"}, mode: "0600" } >>>>> dashboard_enabled: True >>>>> dashboard_protocol: https >>>>> dashboard_port: 8443 >>>>> dashboard_network: "192.168.1.0/24" >>>>> dashboard_admin_user: admin >>>>> dashboard_admin_user_ro: true >>>>> dashboard_admin_password: *********** >>>>> dashboard_crt: '/home/deployer/work/site-central/chaininv.crt' >>>>> dashboard_key: >>>>> '/home/deployer/work/site-central/cloud_example.com.priv' >>>>> dashboard_grafana_api_no_ssl_verify: true >>>>> dashboard_rgw_api_user_id: admin >>>>> dashboard_rgw_api_no_ssl_verify: true >>>>> dashboard_frontend_vip: '192.168.1.5' >>>>> node_exporter_container_image: " >>>>> 192.168.1.16:4000/prom/node-exporter:v0.17.0" >>>>> grafana_admin_user: admin >>>>> grafana_admin_password: ********* >>>>> grafana_crt: '/home/deployer/work/site-central/chaininv.crt' >>>>> grafana_key: '/home/deployer/work/site-central/cloud_example.com.priv' >>>>> grafana_server_fqdn: 'grafanasrv.cloud.example.com' >>>>> grafana_container_image: "192.168.1.16:4000/grafana/grafana:6.7.4" >>>>> grafana_dashboard_version: pacific >>>>> prometheus_container_image: "192.168.1.16:4000/prom/prometheus:v2.7.2" >>>>> alertmanager_container_image: " >>>>> 192.168.1.16:4000/prom/alertmanager:v0.16.2" >>>>> >>>> >>>> And my rgws.yml >>>> >>>>> --- >>>>> dummy: >>>>> copy_admin_key: true >>>>> rgw_create_pools: >>>>> "{{ rgw_zone }}.rgw.buckets.data": >>>>> pg_num: 256 >>>>> pgp_num: 256 >>>>> size: 3 >>>>> type: replicated >>>>> pg_autoscale_mode: False >>>>> rule_id: 1 >>>>> "{{ rgw_zone }}.rgw.buckets.index": >>>>> pg_num: 64 >>>>> pgp_num: 64 >>>>> size: 3 >>>>> type: replicated >>>>> 
pg_autoscale_mode: False >>>>> rule_id: 1 >>>>> "{{ rgw_zone }}.rgw.meta": >>>>> pg_num: 32 >>>>> pgp_num: 32 >>>>> size: 3 >>>>> type: replicated >>>>> pg_autoscale_mode: False >>>>> rule_id: 1 >>>>> "{{ rgw_zone }}.rgw.log": >>>>> pg_num: 32 >>>>> pgp_num: 32 >>>>> size: 3 >>>>> type: replicated >>>>> pg_autoscale_mode: False >>>>> rule_id: 1 >>>>> "{{ rgw_zone }}.rgw.control": >>>>> pg_num: 32 >>>>> pgp_num: 32 >>>>> size: 3 >>>>> type: replicated >>>>> pg_autoscale_mode: False >>>>> rule_id: 1 >>>>> >>>> >>>> The ceph_rgw user was created by kolla >>>> (xenavenv) [deployer at rscdeployer ~]$ openstack user list | grep ceph >>>> | 3262aa7e03ab49c8a5710dfe3b16a136 | ceph_rgw >>>> >>>> This is my ceph.conf from one of my controllers : >>>> >>>>> [root at controllera ~]# cat /etc/ceph/ceph.conf >>>>> [client.rgw.controllera.rgw0] >>>>> host = controllera >>>>> rgw_keystone_url = https://dash.cloud.example.com:5000 >>>>> ##Authentication using username, password and tenant. Preferred. >>>>> rgw_keystone_verify_ssl = false >>>>> rgw_keystone_api_version = 3 >>>>> rgw_keystone_admin_user = ceph_rgw >>>>> rgw_keystone_admin_password = >>>>> cos2Jcnpnw9BhGwvPm************************** >>>>> rgw_keystone_admin_domain = Default >>>>> rgw_keystone_admin_project = service >>>>> rgw_s3_auth_use_keystone = true >>>>> rgw_keystone_accepted_roles = admin >>>>> rgw_keystone_implicit_tenants = true >>>>> rgw_swift_account_in_url = true >>>>> keyring = /var/lib/ceph/radosgw/ceph-rgw.controllera.rgw0/keyring >>>>> log file = /var/log/ceph/ceph-rgw-controllera.rgw0.log >>>>> rgw frontends = beast endpoint=10.10.1.5:8080 >>>>> rgw thread pool size = 512 >>>>> #For Debug >>>>> debug ms = 1 >>>>> debug rgw = 20 >>>>> >>>>> >>>>> # Please do not change this file directly since it is managed by >>>>> Ansible and will be overwritten >>>>> [global] >>>>> cluster network = 10.10.2.0/24 >>>>> fsid = da094354-6ade-415a-a424-************ >>>>> mon host = 
[v2:10.10.1.5:3300,v1:10.10.1.5:6789],[v2:10.10.1.9:3300 >>>>> ,v1:10.10.1.9:6789],[v2:10.10.1.13:3300,v1:10.10.1.13:6789] >>>>> mon initial members = controllera,controllerb,controllerc >>>>> osd pool default crush rule = 1 >>>>> *public network = 10.10.1.0/24 * >>>>> >>>> >>>> >>>> Here are my swift endpoints >>>> (xenavenv) [deployer at rscdeployer ~]$ openstack endpoint list | grep >>>> swift >>>> | 4082b4acf8bc4e4c9efc6e2d0e293724 | RegionOne | swift | >>>> object-store | True | admin | >>>> https://dashint.cloud.example.com:6780/v1/AUTH_%(project_id)s | >>>> | b13a2f53e13e4650b4efdb8184eb0211 | RegionOne | swift | >>>> object-store | True | internal | >>>> https://dashint.cloud.example.com:6780/v1/AUTH_%(project_id)s | >>>> | f85b36ff9a2b49bc9eaadf1aafdee28c | RegionOne | swift | >>>> object-store | True | public | >>>> https://dash.cloud.example.com:6780/v1/AUTH_%(project_id)s | >>>> >>>> When I connect to Horizon -> Project -> Object Store -> Containers I >>>> get theses errors : >>>> >>>> - Unable to get the swift container listing >>>> - Unable to fetch the policy details. >>>> >>>> I cannot create a new container from the WebUI, the Storage policy >>>> parameter is empty. 
>>>> If I try to create a new container from the CLI, I get this : >>>> (xenavenv) [deployer at rscdeployer ~]$ source cephrgw-openrc.sh >>>> (xenavenv) [deployer at rscdeployer ~]$ openstack container create demo -v >>>> START with options: container create demo -v >>>> command: container create -> >>>> openstackclient.object.v1.container.CreateContainer (auth=True) >>>> Using auth plugin: password >>>> Not Found (HTTP 404) >>>> END return value: 1 >>>> >>>> >>>> This is the log from RGW service when I execute the above command : >>>> >>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 CONTENT_LENGTH=0 >>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 HTTP_ACCEPT=*/* >>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 >>>>> HTTP_ACCEPT_ENCODING=gzip, deflate >>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 HTTP_HOST= >>>>> dashint.cloud.example.com:6780 >>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 >>>>> HTTP_USER_AGENT=openstacksdk/0.59.0 keystoneauth1/4.4.0 >>>>> python-requests/2.26.0 CPython/3.8.8 >>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 HTTP_VERSION=1.1 >>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 >>>>> HTTP_X_AUTH_TOKEN=gAAAAABiXUrjDFNzXx03mt1lbpUiCqNND1HACspSfg6h_TMxKYND5Hb9BO3FxH0a7CYoBXgRJywGszlK8cl-7zbUNRjHmxgIzmyh-CrWyGv793ZLOAmT_XShcrIKThjIIH3gTxYoX1TXwOKbsvMuZnI5EKKsol2y2MhcqPLeLGc28_AwoOr_b80 >>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 >>>>> HTTP_X_FORWARDED_FOR=10.10.3.16 >>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 >>>>> HTTP_X_FORWARDED_PROTO=https >>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 REMOTE_ADDR=10.10.1.13 >>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 REQUEST_METHOD=PUT >>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 >>>>> REQUEST_URI=/v1/AUTH_971efa4cb18f42f7a405342072c39c9d/demo >>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 >>>>> SCRIPT_URI=/v1/AUTH_971efa4cb18f42f7a405342072c39c9d/demo >>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 SERVER_PORT=8080 >>>>> 
2022-04-18T12:26:27.995+0100 7f22e07a9700 1 ====== starting new >>>>> request req=0x7f23221aa620 ===== >>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 2 req 728157015944164764 >>>>> 0.000000000s initializing for trans_id = >>>>> tx000000a1aeef2b40f759c-00625d4ae3-4b389-default >>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 10 req 728157015944164764 >>>>> 0.000000000s rgw api priority: s3=8 s3website=7 >>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 10 req 728157015944164764 >>>>> 0.000000000s host=dashint.cloud.example.com >>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 req 728157015944164764 >>>>> 0.000000000s subdomain= domain= in_hosted_domain=0 >>>>> in_hosted_domain_s3website=0 >>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 req 728157015944164764 >>>>> 0.000000000s final domain/bucket subdomain= domain= in_hosted_domain=0 >>>>> in_hosted_domain_s3website=0 s->info.domain= >>>>> s->info.request_uri=/v1/AUTH_971efa4cb18f42f7a405342072c39c9d/demo >>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 20 req 728157015944164764 >>>>> 0.000000000s get_handler handler=22RGWHandler_REST_Obj_S3 >>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 10 req 728157015944164764 >>>>> 0.000000000s handler=22RGWHandler_REST_Obj_S3 >>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 2 req 728157015944164764 >>>>> 0.000000000s getting op 1 >>>>> 2022-04-18T12:26:27.995+0100 7f22e07a9700 1 -- >>>>> 10.10.1.13:0/2715436964 --> [v2: >>>>> 10.10.1.7:6801/4815,v1:10.10.1.7:6803/4815] -- >>>>> osd_op(unknown.0.0:1516 12.3 12:c14cb721:::script.prerequest.:head [call >>>>> version.read in=11b,getxattrs,stat] snapc 0=[] >>>>> ondisk+read+known_if_redirected e1182) v8 -- 0x56055eb2c400 con >>>>> 0x56055e53b000 >>>>> 2022-04-18T12:26:27.996+0100 7f230d002700 1 -- >>>>> 10.10.1.13:0/2715436964 <== osd.23 v2:10.10.1.7:6801/4815 22 ==== >>>>> osd_op_reply(1516 script.prerequest. 
[call,getxattrs,stat] v0'0 uv0 ondisk >>>>> = -2 ((2) No such file or directory)) v8 ==== 246+0+0 (crc 0 0 0) >>>>> 0x56055ea18b40 con 0x56055e53b000 >>>>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 10 req 728157015944164764 >>>>> 0.001000002s s3:put_obj scheduling with throttler client=2 cost=1 >>>>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 10 req 728157015944164764 >>>>> 0.001000002s s3:put_obj op=21RGWPutObj_ObjStore_S3 >>>>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 2 req 728157015944164764 >>>>> 0.001000002s s3:put_obj verifying requester >>>>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 20 req 728157015944164764 >>>>> 0.001000002s s3:put_obj rgw::auth::StrategyRegistry::s3_main_strategy_t: >>>>> trying rgw::auth::s3::AWSAuthStrategy >>>>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 20 req 728157015944164764 >>>>> 0.001000002s s3:put_obj rgw::auth::s3::AWSAuthStrategy: trying >>>>> rgw::auth::s3::S3AnonymousEngine >>>>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 20 req 728157015944164764 >>>>> 0.001000002s s3:put_obj rgw::auth::s3::S3AnonymousEngine granted access >>>>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 20 req 728157015944164764 >>>>> 0.001000002s s3:put_obj rgw::auth::s3::AWSAuthStrategy granted access >>>>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 2 req 728157015944164764 >>>>> 0.001000002s s3:put_obj normalizing buckets and tenants >>>>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 10 req 728157015944164764 >>>>> 0.001000002s s->object=AUTH_971efa4cb18f42f7a405342072c39c9d/demo >>>>> s->bucket=v1 >>>>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 2 req 728157015944164764 >>>>> 0.001000002s s3:put_obj init permissions >>>>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 20 req 728157015944164764 >>>>> 0.001000002s s3:put_obj get_system_obj_state: rctx=0x7f23221a9000 >>>>> obj=default.rgw.meta:root:v1 state=0x56055ea8c520 s->prefetch_data=0 >>>>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 10 req 728157015944164764 >>>>> 0.001000002s s3:put_obj cache get: 
name=default.rgw.meta+root+v1 : miss >>>>> 2022-04-18T12:26:27.996+0100 7f22ddfa4700 1 -- >>>>> 10.10.1.13:0/2715436964 --> [v2: >>>>> 10.10.1.3:6802/4933,v1:10.10.1.3:6806/4933] -- >>>>> osd_op(unknown.0.0:1517 11.b 11:d05f7b30:root::v1:head [call version.read >>>>> in=11b,getxattrs,stat] snapc 0=[] ondisk+read+known_if_redirected e1182) v8 >>>>> -- 0x56055eb2cc00 con 0x56055e585000 >>>>> 2022-04-18T12:26:27.997+0100 7f230c801700 1 -- >>>>> 10.10.1.13:0/2715436964 <== osd.3 v2:10.10.1.3:6802/4933 9 ==== >>>>> osd_op_reply(1517 v1 [call,getxattrs,stat] v0'0 uv0 ondisk = -2 ((2) No >>>>> such file or directory)) v8 ==== 230+0+0 (crc 0 0 0) 0x56055e39db00 con >>>>> 0x56055e585000 >>>>> 2022-04-18T12:26:27.997+0100 7f22dd7a3700 10 req 728157015944164764 >>>>> 0.002000004s s3:put_obj cache put: name=default.rgw.meta+root+v1 >>>>> info.flags=0x0 >>>>> 2022-04-18T12:26:27.997+0100 7f22dd7a3700 10 req 728157015944164764 >>>>> 0.002000004s s3:put_obj adding default.rgw.meta+root+v1 to cache LRU end >>>>> 2022-04-18T12:26:27.997+0100 7f22dd7a3700 10 req 728157015944164764 >>>>> 0.002000004s s3:put_obj init_permissions on failed, ret=-2002 >>>>> 2022-04-18T12:26:27.997+0100 7f22dd7a3700 1 req 728157015944164764 >>>>> 0.002000004s op->ERRORHANDLER: err_no=-2002 new_err_no=-2002 >>>>> 2022-04-18T12:26:27.997+0100 7f22dbfa0700 1 -- >>>>> 10.10.1.13:0/2715436964 --> [v2: >>>>> 10.10.1.8:6804/4817,v1:10.10.1.8:6805/4817] -- >>>>> osd_op(unknown.0.0:1518 12.1f 12:fb11263f:::script.postrequest.:head [call >>>>> version.read in=11b,getxattrs,stat] snapc 0=[] >>>>> ondisk+read+known_if_redirected e1182) v8 -- 0x56055eb2d000 con >>>>> 0x56055e94c800 >>>>> 2022-04-18T12:26:27.998+0100 7f230d002700 1 -- >>>>> 10.10.1.13:0/2715436964 <== osd.9 v2:10.10.1.8:6804/4817 10 ==== >>>>> osd_op_reply(1518 script.postrequest. 
[call,getxattrs,stat] v0'0 uv0 ondisk >>>>> = -2 ((2) No such file or directory)) v8 ==== 247+0+0 (crc 0 0 0) >>>>> 0x56055ea18b40 con 0x56055e94c800 >>>>> 2022-04-18T12:26:27.998+0100 7f22d8f9a700 2 req 728157015944164764 >>>>> 0.003000006s s3:put_obj op status=0 >>>>> 2022-04-18T12:26:27.998+0100 7f22d8f9a700 2 req 728157015944164764 >>>>> 0.003000006s s3:put_obj http status=404 >>>>> 2022-04-18T12:26:27.998+0100 7f22d8f9a700 1 ====== req done >>>>> req=0x7f23221aa620 op status=0 http_status=404 latency=0.003000006s ====== >>>>> 2022-04-18T12:26:27.998+0100 7f22d8f9a700 1 beast: 0x7f23221aa620: >>>>> 10.10.1.13 - anonymous [18/Apr/2022:12:26:27.995 +0100] "PUT >>>>> /v1/AUTH_971efa4cb18f42f7a405342072c39c9d/demo HTTP/1.1" 404 214 - >>>>> "openstacksdk/0.59.0 keystoneauth1/4.4.0 python-requests/2.26.0 >>>>> CPython/3.8.8" - latency=0.003000006s >>>>> >>>> >>>> Could you help please. >>>> >>>> Regards. >>>> >>> >>> >>> -- >>> >>> ??????? ????? ???????? >>> Buddhika Sanjeewa Godakuru >>> >>> Systems Analyst/Programmer >>> Deputy Webmaster / University of Kelaniya >>> >>> Information and Communication Technology Centre (ICTC) >>> University of Kelaniya, Sri Lanka, >>> Kelaniya, >>> Sri Lanka. >>> >>> Mobile : (+94) 071 5696981 >>> Office : (+94) 011 2903420 / 2903424 >>> >>> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ >>> University of Kelaniya Sri Lanka, accepts no liability for the content >>> of this email, or for the consequences of any actions taken on the basis of >>> the information provided, unless that information is subsequently confirmed >>> in writing. If you are not the intended recipient, this email and/or any >>> information it contains should not be copied, disclosed, retained or used >>> by you or any other party and the email and all its contents should be >>> promptly deleted fully from our system and the sender informed. 
>>> >>> E-mail transmission cannot be guaranteed to be secure or error-free as >>> information could be intercepted, corrupted, lost, destroyed, arrive late >>> or incomplete. >>> >>> +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ >>> >> > -- Buddhika Sanjeewa Godakuru Systems Analyst/Programmer Deputy Webmaster / University of Kelaniya Information and Communication Technology Centre (ICTC) University of Kelaniya, Sri Lanka, Kelaniya, Sri Lanka. Mobile : (+94) 071 5696981 Office : (+94) 011 2903420 / 2903424 -------------- next part -------------- An HTML attachment was scrubbed... URL: From alee at redhat.com Tue Apr 19 20:59:03 2022 From: alee at redhat.com (Ade Lee) Date: Tue, 19 Apr 2022 16:59:03 -0400 Subject: Question on monkey-patching paramiko for FIPS Message-ID: Hi all, As many have already seen, a number of changes have been merged in OpenStack as part of the effort to allow OpenStack to run on FIPS enabled systems. This effort has been captured in a proposed community goal. [1].
One of the requirements for this effort is that md5sum() not be used in a security-related context. In fact, python 3.9 has been modified to raise an exception if hashlib.md5sum() is called on a FIPS enabled system, unless it is explicitly annotated with a usedforsecurity=False attribute [2]. We added a wrapper for md5sum in oslo.utils to take advantage of this attribute. [3,4,5] Where we have less control is in libraries used by OpenStack - and in particular, paramiko. Paramiko fails on FIPS enabled systems because of a call to md5sum() in get_fingerprint(). A patch has been submitted to fix this problem. [6]. Unfortunately, it takes a very long time for paramiko to fix issues. In order for us to make progress on FIPS testing, a small monkey-patch for paramiko was checked into tempest. [7]. Because this change was made to a test tool, this patch was relatively uncontroversial. A similar change has been found to be needed for manila [8]. I would expect that a similar change will be needed in other components that use paramiko to SSH to other systems (e.g. cinder, neutron?)
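[For readers who haven't seen it, the wrapper pattern referred to above looks roughly like this. This is a simplified sketch of the idea, not the actual oslo.utils code, and the function name is made up for illustration.]

```python
import hashlib

def fips_safe_md5(data=b"", usedforsecurity=True):
    """Forward usedforsecurity to hashlib.md5 where supported.

    On Python 3.9+ (and FIPS-patched distro interpreters),
    usedforsecurity=False tells OpenSSL the digest is not used in a
    security context, so FIPS mode does not reject the call.
    """
    try:
        return hashlib.md5(data, usedforsecurity=usedforsecurity)
    except TypeError:
        # Interpreter too old to know the keyword; plain call.
        return hashlib.md5(data)

# e.g. computing a non-cryptographic etag/checksum:
digest = fips_safe_md5(b"etag-data", usedforsecurity=False).hexdigest()
```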
Thanks, Ade Lee [1] https://opendev.org/openstack/governance/src/branch/master/goals/proposed/fips.rst [2] https://bugs.python.org/issue9216 [3] https://review.opendev.org/c/openstack/oslo.utils/+/750031 [4] Patches to various projects to use oslo.utils adapter for hashlib.md5 (as examples): glance: https://review.opendev.org/c/openstack/glance/+/756158 nova: https://review.opendev.org/c/openstack/nova/+/756434 nova: https://review.opendev.org/c/openstack/nova/+/777686 os-brick: https://review.opendev.org/c/openstack/os-brick/+/756151 oslo: https://review.opendev.org/c/openstack/oslo.versionedobjects/+/756153 tooz: https://review.opendev.org/c/openstack/tooz/+/756432 opensdk: https://review.opendev.org/c/openstack/openstacksdk/+/767411 octavia: https://review.opendev.org/c/openstack/octavia/+/798146 designate: https://review.opendev.org/c/openstack/designate/+/798157 glance_store: https://review.opendev.org/c/openstack/glance_store/+/756157 [5] Swift patch to handle hashlib.md5 https://review.opendev.org/c/openstack/swift/+/751966 [6] https://github.com/paramiko/paramiko/pull/1928 [7] https://review.opendev.org/c/openstack/tempest/+/822560 [8] https://review.opendev.org/c/openstack/manila/+/819375 -------------- next part -------------- An HTML attachment was scrubbed... URL: From gagehugo at gmail.com Tue Apr 19 21:08:25 2022 From: gagehugo at gmail.com (Gage Hugo) Date: Tue, 19 Apr 2022 16:08:25 -0500 Subject: [security][security sig] Stepping down as chair Message-ID: Hey folks, I've been serving as the Security SIG chair for over 4 years now and throughout my entire time as chair I've had the wonderful experience of learning and working alongside several excellent individuals. It also has given me the great opportunity to have multiple discussions on the topic of security with the community as a whole which helped me grow my knowledge base and career. 
However, with my focus shifting away from pure OpenStack work and the rather long time that 4 years is, I believe it's time for someone with a fresh perspective to have the opportunity to step into the role and help guide the Security SIG into the future. I am stepping down from the role of Security SIG chair as of today, but I plan on sticking around in the SIG for as long as I can. I hope to continue contributing when I can and I will be around to help anyone who is interested in the role of chair. I'd like to give a big thank you to Jeremy Stanley (fungi) for all the help he provided throughout my time as chair and continues to do so to this day whenever I have a question and need help. He helped me step into the chair role and I am sure he will also help anyone who is interested in the role as well. Thanks again for the opportunity and I hope to see you all around! - Gage Hugo -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Tue Apr 19 21:34:13 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 19 Apr 2022 16:34:13 -0500 Subject: [all][tc][Release Management] Improvements in project governance In-Reply-To: References: <1858624.taCxCBeP46@p1> Message-ID: <18043bf50ac.fa24a8bf261486.4680880006039416608@ghanshyammann.com> ---- On Tue, 19 Apr 2022 11:01:48 -0500 Michael Johnson wrote ---- > Comments inline. > > Michael > > On Tue, Apr 19, 2022 at 6:34 AM Slawek Kaplonski wrote: > > > > Hi, > > > > > > During the Zed PTG sessions in the TC room we were discussing some ideas how we can improve project governance. > > > > One of the topics was related to the projects which don't really have any changes in the cycle. Currently we are forcing to do new release of basically the same code when it comes to the end of the cycle. > > > > Can/Should we maybe change that and e.g. instead of forcing new release use last released version of the of the repo for new release too? 
> > In the past this has created confusion in the community about if a > project has been dropped/removed from OpenStack. That said, I think > this is the point of the "independent" release classification. > > > If yes, should we then automatically propose change of the release model to the "independent" maybe? > > Personally, I would prefer to send an email to the discuss list > proposing the switch to independent. Patches can sometimes get merged > before everyone gets to give input. Especially since the patch would > be proposed in the "releases" project and may not be on the team's > dashboards. Yeah, we can do that, and I agree on moving such projects to the 'independent' release model. > > > What would be the best way how Release Management team can maybe notify TC about such less active projects which don't needs any new release in the cycle? That could be one of the potential conditions to check project's health by the TC team. > > It seems like this would be a straight forward script to write given > we already have tools to capture the list of changes included in a > given release. We have even started a script to collect such stats per project, along with their gate job status; it's under review - https://review.opendev.org/c/openstack/governance/+/810037 > > > Another question is related to the projects which aren't really active and are broken during the final release time. We had such problem in the last cycle, see [1] for details. Should we still force pushing fixes for them to be able to release or maybe should we consider deprecation of such projects and not to release it at all? > > In the past we have simply not released projects that are broken and > don't have people actively working on fixing them. It has been a > signal to the community that if they value the project they need to > contribute to it. +1. Indeed, if no one is there to fix the code/test of such projects I am not sure who will care about its release too.
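[As a rough illustration of the script idea discussed above: real tooling would pull the change list from the releases repository or Gerrit, but the core check is just "which deliverables had zero merged changes this cycle". The function name and data shape here are hypothetical.]

```python
def inactive_deliverables(changes_by_project):
    """Return deliverables with no merged changes in the cycle --
    candidates to flag to the TC or to move to the 'independent'
    release model."""
    return sorted(
        project
        for project, changes in changes_by_project.items()
        if not changes
    )

# Toy input; real stats would come from release tooling.
stats = {"neutron": ["fix-ovn-bug", "add-feature"], "sandbox": []}
print(inactive_deliverables(stats))  # ['sandbox']
```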
So I 100% agree with not repeating the steps of fixing and releasing them as we did in Yoga. -gmann > > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2022-March/027864.html > > > > > > -- > > > > Slawek Kaplonski > > > > Principal Software Engineer > > > > Red Hat > > From skaplons at redhat.com Wed Apr 20 07:18:20 2022 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 20 Apr 2022 09:18:20 +0200 Subject: Question on monkey-patching paramiko for FIPS In-Reply-To: References: Message-ID: <5258360.Sb9uPGUboI@p1> Hi, On wtorek, 19 kwietnia 2022 22:59:03 CEST Ade Lee wrote: > Hi all, > > As many have already seen, a number of changes have been merged in > OpenStack as part of the effort to allow OpenStack to run on FIPS enabled > systems. This effort has been captured in a proposed community goal. [1]. > > One of the requirements for this effort is that md5sum() not be used in a > security related context. In fact, python 3.9 has been modified to raise an > exception of hashlib.md5sum() is called on a FIPS enabled system, unless it > is explicitly annotated with a usedforsecurity=False attribute [2]. We > added a wrapper for md5sum in oslo.config to take advantage of this > attribute. [3,4,5] > > Where we have less control is in libraries used by Openstack - and in > particular, paramiko. Paramiko fails on FIPS enabled systems because of a > call to md5sum() in get_fingerprint(). A patch has been submitted to fix > this problem. [6]. Unfortunately, it takes a very long time for paramiko > to fix issues. > > In order for us to make progress on FIPS testing, a small monkey-patch for > paramiko was checked into tempest. [7]. Because this change was made to a > test tool, this patch was relatively uncontroversial. > > A similar change has been found to be needed for manila [8]. I would > expect that a similar change will be needed in other components that use > paramiko to SSH to other systems (eg. cinder, neutron?) 
I suspect that the > only reason this has not been detected in FIPS testing more widely yet is > because the components that use paramiko for SSH are being tested in third > party tests that do not, as yet, test FIPS. I just checked in Neutron and it seems that we are not using paramiko almost anywhere. It is used in neutron-tempest-plugin and in rally tests in neutron-vpnaas. Except that it's used in os-ken: https://opendev.org/openstack/os-ken/src/branch/master/os_ken/services/protocols/bgp/operator/ssh.py[1] but that's something what we in neutron are not using at all so that's why we didn't found it during the FIPS testing. > > At the request of the manila team, I am bringing this monkey-patch to the > attention of the wider OpenStack community to get feedback on the pros and > cons of applying this monkey-patch. > > A couple things to note: > 1. This monkey patch is quite small in scope and only needed until paramiko > fixes the issue. > 2. paramiko is not FIPS compliant, and so we will ultimately need to fix > paramiko or replace it with a different library on FIPS enabled systems. > When we do this, we would remove the monkey patch. IMHO it's ok to temporary use it in testing to be sure that everything else is working fine with FIPS (or fix any other issues which will be there). But in longer term I don't think we can say that e.g. Manila is FIPS compliant if it is using paramiko which isn't FIPS compliant. So it will need to be fixed on the paramiko side or Manila will need to move to some other lib to be FIPS compliant. 
> > Thanks, > Ade Lee > > [1] > https://opendev.org/openstack/governance/src/branch/master/goals/proposed/fips.rst > [2] https://bugs.python.org/issue9216 > [3] https://review.opendev.org/c/openstack/oslo.utils/+/750031 > [4] Patches to various projects to use oslo.utils adapter for hashlib.md5 > (as examples): glance: > https://review.opendev.org/c/openstack/glance/+/756158 nova: > https://review.opendev.org/c/openstack/nova/+/756434 nova: > https://review.opendev.org/c/openstack/nova/+/777686 os-brick: > https://review.opendev.org/c/openstack/os-brick/+/756151 oslo: > https://review.opendev.org/c/openstack/oslo.versionedobjects/+/756153 tooz: > https://review.opendev.org/c/openstack/tooz/+/756432 opensdk: > https://review.opendev.org/c/openstack/openstacksdk/+/767411 octavia: > https://review.opendev.org/c/openstack/octavia/+/798146 designate: > https://review.opendev.org/c/openstack/designate/+/798157 glance_store: > https://review.opendev.org/c/openstack/glance_store/+/756157 > [5] Swift patch to handle hashlib.md5 > https://review.opendev.org/c/openstack/swift/+/751966 > [6] https://github.com/paramiko/paramiko/pull/1928 > [7] https://review.opendev.org/c/openstack/tempest/+/822560 > [8] https://review.opendev.org/c/openstack/manila/+/819375 > -- Slawek Kaplonski Principal Software Engineer Red Hat -------- [1] https://opendev.org/openstack/os-ken/src/branch/master/os_ken/services/protocols/bgp/operator/ssh.py -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From fdehech.7 at gmail.com Wed Apr 20 07:53:05 2022 From: fdehech.7 at gmail.com (Firas Dehech) Date: Wed, 20 Apr 2022 08:53:05 +0100 Subject: ERROR CREATING CLUSTER Message-ID: Hi all, I am working on a project Openstack in linux ubuntu 20.04. 
I want to create a cluster hadoop with one master-node and three worker-nodes and i have a problem with a cluster that doesn't work. Status ERROR: Creating cluster failed for the following reason(s): Failed to create trust Error ID: ef5e8b0a-8e6d-4878-bebb-f37f4fa50a88, Failed to create trust Error ID: 43157255-86af-4773-96c1-a07ca7ac66ed. links: https://docs.openstack.org/devstack/latest/ File local.conf : [[local|localrc]] ADMIN_PASSWORD=secret DATABASE_PASSWORD=secret RABBIT_PASSWORD=secret SERVICE_PASSWORD=$ADMIN_PASSWORD HOST_IP=10.0.2.15 LOGFILE=$DEST/logs/stack.sh.log SWIFT_REPLICAS=1SWIFT_HASH=66a3d6b56c1f479c8b4e70ab5c2000f5 SWIFT_DATA_DIR=$DEST/data enable_plugin sahara https://opendev.org/openstack/sahara enable_plugin sahara-dashboard https://opendev.org/openstack/sahara-dashboard enable_plugin heat https://opendev.org/openstack/heat Can you guys advise me about these errors. Is there anything to worry about? Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Wed Apr 20 07:57:39 2022 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 20 Apr 2022 08:57:39 +0100 Subject: [Kolla-ansible][Xena] Error deploying Cloudkitty In-Reply-To: References: Message-ID: Hi Wodel, Did it work when you added the ssl parameter? If so, could you propose a fix for this upstream? Thanks, Mark On Tue, 19 Apr 2022 at 15:07, wodel youchi wrote: > Hi, > I tried to do this > vim > /home/deployer/kollavenv/xenavenv/share/kolla-ansible/ansible/roles/cloudkitty/defaults/main.yml > *cloudkitty_influxdb_use_ssl: "true"* > But it didn't work,then I added the same variable to globals.yml but it > didn't work. 
> > So finally I edited vim > /home/deployer/kollavenv/xenavenv/share/kolla-ansible/ansible/roles/cloudkitty/tasks/bootstrap.yml > and added the ssl variable as a workaround > >> - name: Creating Cloudkitty influxdb database >> become: true >> kolla_toolbox: >> module_name: influxdb_database >> module_args: >> hostname: "{{ influxdb_address }}" >> port: "{{ influxdb_http_port }}" >> * ssl: True* >> database_name: "{{ cloudkitty_influxdb_name }}" >> run_once: True >> delegate_to: "{{ groups['cloudkitty-api'][0] }}" >> when: cloudkitty_storage_backend == 'influxdb' >> > > > I don't know if this would have worked I just get the idea > > - name: Creating Cloudkitty influxdb database >> become: true >> kolla_toolbox: >> module_name: influxdb_database >> module_args: >> hostname: "{{ influxdb_address }}" >> port: "{{ influxdb_http_port }}" >> * ssl: {{ cloudkitty_influxdb_use_ssl }}* >> database_name: "{{ cloudkitty_influxdb_name }}" >> run_once: True >> delegate_to: "{{ groups['cloudkitty-api'][0] }}" >> when: cloudkitty_storage_backend == 'influxdb' >> > > > > > Regards. > > Le mar. 19 avr. 2022 ? 12:37, Rafael Weing?rtner < > rafaelweingartner at gmail.com> a ?crit : > >> It seems that it was always assumed to be HTTP and not HTTPs: >> https://github.com/openstack/kolla-ansible/blob/a52cf61b2234d2f078dd2893dd37de63e20ea1aa/ansible/roles/cloudkitty/tasks/bootstrap.yml#L36 >> . >> >> Maybe, we will need to change that to use SSL whenever needed. >> >> On Tue, Apr 19, 2022 at 8:19 AM wodel youchi >> wrote: >> >>> Hi, >>> >>> I tested with influx -host >>> First I tested with the internal api IP address of the host itself, and >>> it did work : influx -host 10.10.3.9 >>> Then I tested with VIP of the internal api, which is held by haproxy : >>> influx -host 10.10.3.1, it didn't work, looking in the haproxy >>> configuration file of influxdb, I noticed that haproxy uses https in the >>> front end, so I tested with : influx -ssl -host 10.10.3.1 and it did work. 
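The scheme mismatch described here (haproxy terminating TLS on the VIP while the ansible module defaults to ssl: false) can be confirmed by probing InfluxDB's /ping health endpoint over both schemes. A rough diagnostic sketch — the host and port are just the example values from this thread, and probe() is a hypothetical helper, not part of kolla-ansible:

```python
import urllib.request
import urllib.error

def ping_url(host, port=8086, ssl=False):
    """Build the InfluxDB health-check URL for the given scheme."""
    scheme = "https" if ssl else "http"
    return f"{scheme}://{host}:{port}/ping"

def probe(host, port=8086, ssl=False, timeout=5):
    """Return True if the endpoint answers /ping, False otherwise.

    InfluxDB 1.x answers /ping with 204; a TLS-terminating haproxy
    frontend will typically drop plain-HTTP requests, which matches the
    'Remote end closed connection without response' error above.
    """
    try:
        with urllib.request.urlopen(ping_url(host, port, ssl),
                                    timeout=timeout) as resp:
            return resp.status in (200, 204)
    except (urllib.error.URLError, OSError):
        return False

# Print the two candidate URLs; call probe() against your own VIP to see
# which scheme actually answers.
for use_ssl in (False, True):
    print(ping_url("10.10.3.1", ssl=use_ssl))
```

If only the https URL answers, the influxdb_database task needs its ssl argument set to true — or templated from a variable such as the cloudkitty_influxdb_use_ssl value proposed above — to match the haproxy frontend.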
>>> >>> And if you see the error message from TASK [cloudkitty : Creating >>> Cloudkitty influxdb database], ssl is false >>> >>> fatal: [192.168.1.5 -> 192.168.1.5]: FAILED! => { >>> "action": "influxdb_database", >>> "changed": false, >>> "invocation": { >>> "module_args": { >>> "database_name": "cloudkitty", >>> "hostname": "dashint.cloud.cerist.dz", >>> "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", >>> "path": "", >>> "port": 8086, >>> "proxies": {}, >>> "retries": 3, >>> *"ssl": false,* >>> "state": "present", >>> "timeout": null, >>> "udp_port": 4444, >>> "use_udp": false, >>> "username": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", >>> "validate_certs": true >>> } >>> }, >>> "msg": "('Connection aborted.', RemoteDisconnected('Remote end >>> closed connection without response',))" >>> } >>> >>> Could that be the problem? if yes how to force Cloudkitty to enable ssl? >>> >>> Regards. >>> >>> >>> Virus-free. >>> www.avast.com >>> >>> <#m_-5979860831382871527_m_2114711239033937821_m_-2160537011768264727_DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2> >>> >>> Le mar. 19 avr. 2022 ? 07:30, Pierre Riteau a >>> ?crit : >>> >>>> Hello, >>>> >>>> InfluxDB is configured to only listen on the internal API interface. >>>> Can you check the hostname you are using resolves correctly from the >>>> cloudkitty host? >>>> Inside the influxdb container, you should use `influxdb -host >>>> ` with the internal IP of the influxdb host. >>>> >>>> Also check if the output of `docker logs influxdb` has any logs. 
>>>> >>>> Best wishes, >>>> Pierre Riteau (priteau) >>>> >>>> On Tue, 19 Apr 2022 at 01:24, wodel youchi >>>> wrote: >>>> >>>>> Hi, >>>>> >>>>> I am trying to deploy Cloudkitty, but I get this error message : >>>>> >>>>> TASK [cloudkitty : Creating Cloudkitty influxdb database] >>>>>> ****************************************************** >>>>>> task path: >>>>>> /home/deployer/kollavenv/xenavenv/share/kolla-ansible/ansible/roles/cloudkitty/tasks/bootstrap.yml:36 >>>>> >>>>> >>>>> fatal: [192.168.1.5 -> 192.168.1.5]: FAILED! => { >>>>>> "action": "influxdb_database", >>>>>> "changed": false, >>>>>> "invocation": { >>>>>> "module_args": { >>>>>> "database_name": "cloudkitty", >>>>>> "hostname": "dashint.cloud.cerist.dz", >>>>>> "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", >>>>>> "path": "", >>>>>> "port": 8086, >>>>>> "proxies": {}, >>>>>> "retries": 3, >>>>>> "ssl": false, >>>>>> "state": "present", >>>>>> "timeout": null, >>>>>> "udp_port": 4444, >>>>>> "use_udp": false, >>>>>> "username": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", >>>>>> "validate_certs": true >>>>>> } >>>>>> }, >>>>>> "msg": "('Connection aborted.', RemoteDisconnected('Remote end >>>>>> closed connection without response',))" >>>>>> } >>>>> >>>>> >>>>> >>>>> On the influxdb container I did this : >>>>> >>>>>> [root at controllerb ~]# docker ps | grep inf >>>>>> 68b3ebfefbec >>>>>> 192.168.1.16:4000/openstack.kolla/centos-source-influxdb:xena >>>>>> "dumb-init --single-?" 22 minutes ago Up 22 minutes >>>>>> influxdb >>>>>> [root at controllerb ~]# docker exec -it influxdb /bin/bash >>>>>> (influxdb)[influxdb at controllerb /]$ influx >>>>>> Failed to connect to http://localhost:8086: Get >>>>>> http://localhost:8086/ping: dial tcp [::1]:8086: connect: connection >>>>>> refused >>>>>> Please check your connection settings and ensure 'influxd' is running. >>>>>> (influxdb)[influxdb at controllerb /]$ ps -ef >>>>>> UID PID PPID C STIME TTY TIME CMD >>>>>> influxdb 1 0 0 Apr18 ? 
00:00:00 dumb-init >>>>>> --single-child -- kolla_start >>>>>> influxdb 7 1 0 Apr18 ? 00:00:01 /usr/bin/influxd >>>>>> -config /etc/influxdb/influxdb.conf >>>>>> influxdb 45 0 0 00:12 pts/0 00:00:00 /bin/bash >>>>>> influxdb 78 45 0 00:12 pts/0 00:00:00 ps -ef >>>>>> (influxdb)[influxdb at controllerb /]$ >>>>> >>>>> >>>>> I have no log file for influxdb, the directory is empty. >>>>> >>>>> Any ideas? >>>>> >>>>> Regards. >>>>> >>>> >> >> -- >> Rafael Weing?rtner >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From elod.illes at est.tech Wed Apr 20 11:19:10 2022 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Wed, 20 Apr 2022 13:19:10 +0200 Subject: [all][tc][Release Management] Improvements in project governance In-Reply-To: References: <1858624.taCxCBeP46@p1> Message-ID: Hi, At the very same time at the PTG we discussed this on the Release Management session [1] as well. To release deliverables without significant content is not ideal and this came up in previous discussions as well. On the other hand unfortunately this is the most feasible solution from release management team perspective especially because the team is quite small (new members are welcome! feel free to join the release management team! :)). To change to independent release model is an option for some cases, but not for every project. (It is less clear for consumers what version is/should be used for which series; Fixing problems that comes up in specific stable branches, is not possible; testing the deliverable against a specific stable branch constraints is not possiblel; etc.) See some other comments inline. [1] https://etherpad.opendev.org/p/april2022-ptg-rel-mgt#L44 El?d On 2022. 04. 19. 18:01, Michael Johnson wrote: > Comments inline. > > Michael > > On Tue, Apr 19, 2022 at 6:34 AM Slawek Kaplonski wrote: >> Hi, >> >> >> During the Zed PTG sessions in the TC room we were discussing some ideas how we can improve project governance. 
>> >> One of the topics was related to the projects which don't really have any changes in the cycle. Currently we are forcing to do new release of basically the same code when it comes to the end of the cycle. >> >> Can/Should we maybe change that and e.g. instead of forcing new release use last released version of the of the repo for new release too? > In the past this has created confusion in the community about if a > project has been dropped/removed from OpenStack. That said, I think > this is the point of the "independent" release classification. Yes, exactly as Michael says. >> If yes, should we then automatically propose change of the release model to the "independent" maybe? > Personally, I would prefer to send an email to the discuss list > proposing the switch to independent. Patches can sometimes get merged > before everyone gets to give input. Especially since the patch would > be proposed in the "releases" project and may not be on the team's > dashboards. The release process catches libraries only (that had no merged change), so the number is not that huge, sending a mail seems to be a fair option. (The process says: "Evaluate any libraries that did not have any change merged over the cycle to see if it is time to transition them to the independent release model . Note: client libraries (and other libraries strongly tied to another deliverable) should generally follow their parent deliverable release model, even if they did not have a lot of activity themselves).") >> What would be the best way how Release Management team can maybe notify TC about such less active projects which don't needs any new release in the cycle? That could be one of the potential conditions to check project's health by the TC team. > It seems like this would be a straight forward script to write given > we already have tools to capture the list of changes included in a > given release. 
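The kind of check described above — deciding whether a deliverable had any meaningful change in a cycle once the commit subjects between two tags have been collected (e.g. with git log) — could be sketched like this. The subject patterns treated as routine/bot-generated are illustrative examples only, not an official list:

```python
import re

# Commit subjects typically generated by bots or routine housekeeping
# rather than real development activity. Illustrative patterns only.
AUTOMATIC_PATTERNS = [
    r"^Update .* for stable/",
    r"^Imported Translations from Zanata",
    r"^Add .* release notes? page",
    r"^Bump .*requirements",
]

def meaningful_changes(commit_subjects):
    """Filter out routine/bot commits from a cycle's commit subjects."""
    auto = [re.compile(p) for p in AUTOMATIC_PATTERNS]
    return [s for s in commit_subjects if not any(rx.match(s) for rx in auto)]

def needs_release(commit_subjects):
    """A deliverable 'needs' a release if anything non-routine merged."""
    return bool(meaningful_changes(commit_subjects))

subjects = [
    "Update master for stable/yoga",
    "Imported Translations from Zanata",
    "Fix token scoping in trust creation",
]
print(needs_release(subjects))      # → True
print(needs_release(subjects[:2]))  # → False
```

A deliverable for which needs_release() stays False over a whole cycle would then be a candidate for the mailing-list discussion about switching it to the independent model, rather than an automatic patch.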
There are a couple of good signals already for TC to catch inactive projects, like the generated patches that are not merged, for example: https://review.opendev.org/q/topic:reno-yoga+is:open https://review.opendev.org/q/topic:create-yoga+is:open https://review.opendev.org/q/topic:add-xena-python-jobtemplates+is:open (Note that in the past not merged patches caused issues and discussing with the TC resulted a suggestion to force-merge them to avoid future issues) >> Another question is related to the projects which aren't really active and are broken during the final release time. We had such problem in the last cycle, see [1] for details. Should we still force pushing fixes for them to be able to release or maybe should we consider deprecation of such projects and not to release it at all? > In the past we have simply not released projects that are broken and > don't have people actively working on fixing them. It has been a > signal to the community that if they value the project they need to > contribute to it. Yes, that's a fair point, too, maybe those broken deliverables should not be released at all. I'm not sure, but that might cause another issues for release management tooling, though... Besides, during our PTG session we came to the conclusion that we need another step in our process: * "propose DNM changes on every repository by RequirementsFreeze (5 weeks before final release) to check that tests are still passing with the current set of dependencies" Hopefully this will catch broken things well in advance. >> [1]http://lists.openstack.org/pipermail/openstack-discuss/2022-March/027864.html >> >> >> -- >> >> Slawek Kaplonski >> >> Principal Software Engineer >> >> Red Hat -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From senrique at redhat.com Wed Apr 20 13:37:09 2022 From: senrique at redhat.com (Sofia Enriquez) Date: Wed, 20 Apr 2022 10:37:09 -0300 Subject: [cinder] Bug deputy report for week of 04-20-2022 Message-ID: This is a bug report from 04-13-2022 to 04-20-2022. Agenda: https://etherpad.opendev.org/p/cinder-bug-squad-meeting ----------------------------------------------------------------------------------------- Medium - https://bugs.launchpad.net/cinder/+bug/1969366 "Backend incorrectly reporting cacheable capability." Assigned to Gorka. - https://bugs.launchpad.net/cinder/+bug/1969408 "[RBD] Cinder fails to retype / migrate large volumes." Unassigned. - https://bugs.launchpad.net/cinder/+bug/1967481 "[Storwize_SVC]Retype operation failure for GMCV due to update_clean_rate." Unassigned. - https://bugs.launchpad.net/cinder/+bug/1968164 "Incorrectly shown volume size while using PowerFlex cinder driver." Fix proposed to master. Low - https://bugs.launchpad.net/cinder/+bug/1967686 "[DEFAULT] use_forwarded_for is a duplicate of the HTTPProxyToWSGI middleware." Assigned to Takashi Kajinami. - https://bugs.launchpad.net/cinder/+bug/1968048 "Pylint tox environment fails on Fedora 35." Fix proposed to master. - https://bugs.launchpad.net/cinder/+bug/1968159 "[IBM Storwize] Retype failure for replication volume-type." Fix proposed to master. Invalid - https://bugs.launchpad.net/cinder/+bug/1969213 "When image properties contain "signature_verified" field?create volume from image wil fail." Unassigned. Cheers -- Sof?a Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gmann at ghanshyammann.com Wed Apr 20 14:35:00 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 20 Apr 2022 09:35:00 -0500 Subject: [all][tc][policy][heat][cinder] Continuing the RBAC PTG discussion + policy pop-up meeting new time Message-ID: <1804765e168.c7627ca0318612.1596099052066040022@ghanshyammann.com> Hello Everyone, As we said in PTG about continuing the RBAC discussion on open questions (currently from Heat and Cinder), we are scheduling the call on April 26 Tuesday from 14:30-15:00 UTC. And we will use the same time for the policy-popup team meeting every alternate Tuesday to answer RBAC queries from any projects. Meeting Details: - https://wiki.openstack.org/wiki/Consistent_and_Secure_Default_Policies_Popup_Team#Meeting Agenda (you can add for this or coming meetings) - https://etherpad.opendev.org/p/rbac-zed-ptg#L97 -gmann From abishop at redhat.com Wed Apr 20 16:10:02 2022 From: abishop at redhat.com (Alan Bishop) Date: Wed, 20 Apr 2022 09:10:02 -0700 Subject: [glance][devstack][tripleo][ansible][ceph_admin] Glance moving away from single store Configuration In-Reply-To: References: Message-ID: On Mon, Apr 18, 2022 at 10:13 PM Abhishek Kekane wrote: > Hello Everyone, > > Glance has added support to configure multiple stores as a store backend > in Stein cycle, and it is very stable now. So in upcoming cycles we are > going to remove single store support and use multiple stores support only > (PS. you can configure a single store using multiple stores configuration > options). As a first step, we have started adding support in devstack > [1][2][3] for configuring glance as multiple stores for each of the glance > store backend. This cycle we are going to default multistore configuration > in devstack so that our gate/check (CI) jobs should test using the same. > Following cycles we will start removing single store support from glance > code base. 
> Hi, TripleO is already relying on glance's multistore feature, even when tripleo configures only a single store. Removing glance's legacy single store code shouldn't have any impact on tripleo. BTW, tripleo switched to using the multistore feature back in Train. Alan > If you have any questions related to this work kindly revert back to this > mail or you can join us in our weekly meeting, every Thursday at 1400 UTC > #openstack-meeting IRC channel as well. > > [1] https://review.opendev.org/c/openstack/devstack/+/741654 > [2] https://review.opendev.org/c/openstack/devstack-plugin-ceph/+/741801/ > [3] https://review.opendev.org/c/openstack/devstack/+/741802 > > > Thank you, > > Abhishek Kekane > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sigurd.k.brinch at uia.no Wed Apr 20 16:42:18 2022 From: sigurd.k.brinch at uia.no (Sigurd Kristian Brinch) Date: Wed, 20 Apr 2022 16:42:18 +0000 Subject: Nova support for multiple vGPUs? In-Reply-To: References: Message-ID: Hi, As far as I can tell, libvirt/KVM supports multiple vGPUs per VM (https://docs.nvidia.com/grid/14.0/grid-vgpu-release-notes-generic-linux-kvm/index.html#multiple-vgpu-support), but in OpenStack/Nova it is limited to one vGPU per VM (https://docs.openstack.org/nova/latest/admin/virtual-gpu.html#configure-a-flavor-controller) Is there a reason for this limit? What would be needed to enable multiple vGPUs in Nova? BR Sigurd -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Wed Apr 20 21:48:12 2022 From: amy at demarco.com (Amy Marrich) Date: Wed, 20 Apr 2022 16:48:12 -0500 Subject: OPS Meetup Registration Message-ID: Registration is now open for the OPS Meetup to be held Friday right after the Summit. 
https://www.eventbrite.com/e/openstack-ops-meetup-tickets-322813472787 And don't forget you can still add topics: https://etherpad.opendev.org/p/april2022-ptg-openstack-ops Thanks, Amy (spotz) -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Thu Apr 21 01:29:54 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 20 Apr 2022 20:29:54 -0500 Subject: [all][tc] Technical Committee next weekly meeting on April 21 at 1500 UTC In-Reply-To: <1803f6816f5.1079eb8e9187164.1531746983034997208@ghanshyammann.com> References: <1803f6816f5.1079eb8e9187164.1531746983034997208@ghanshyammann.com> Message-ID: <18049bd72e3.d6edf1c3342331.6210853266964778047@ghanshyammann.com> Hello Everyone, Below is the agenda for Tomorrow's TC IRC meeting schedule at 1500 UTC. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting == Agenda for tomorrow's TC meeting == * Roll call * Follow up on past action items * Zed cycle Tracker ** https://etherpad.opendev.org/p/tc-zed-tracker * Gate health check ** Fixing Zuul config error in OpenStack *** https://etherpad.opendev.org/p/zuul-config-error-openstack * Migration from old ELK service to new Dashboard ** Shutdown of old ELK service *** https://review.opendev.org/c/opendev/system-config/+/838324 ** Communicating the new ELK service dashboard and login information *** https://opensearch.logs.openstack.org/_dashboards/app/discover?security_tenant=global *** https://review.opendev.org/c/openstack/governance-sigs/+/835838 * Open Reviews ** https://review.opendev.org/q/projects:openstack/governance+is:open * Open Discussion ** tick-tock release notes feedback (rosmaita) -gmann ---- On Mon, 18 Apr 2022 20:20:27 -0500 Ghanshyam Mann wrote ---- > Hello Everyone, > > Technical Committee's next weekly meeting is scheduled for April 21 at 1500 UTC. 
> > If you would like to add topics for discussion, please add them to the below wiki page by > Wednesday, April 20, at 2100 UTC. > > https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting > > -gmann > > From loisctn1 at gmail.com Thu Apr 21 03:58:38 2022 From: loisctn1 at gmail.com (Duc Loi) Date: Thu, 21 Apr 2022 10:58:38 +0700 Subject: Deployment system includes: Ussuri, Brocade X7-8 Message-ID: Hi Everyone, In the near future, we will deploy a system including Ussuri and Brocade X7-8 according to customer requirements. But Brocade has made a notice not to support versions after Train. Has anyone implemented a system similar? What are the challenges you face? What solutions can fix it? Thanks and Best Regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdhasman at redhat.com Thu Apr 21 05:04:06 2022 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Thu, 21 Apr 2022 10:34:06 +0530 Subject: Question on monkey-patching paramiko for FIPS In-Reply-To: References: Message-ID: Hi Ade, We discussed this at our cinder meeting yesterday[1]. We do have occurrences of paramiko in our code[2] but they are mostly in our driver code where we are not testing compliance with FIPS. Currently, most of our third party CIs runs on Ubuntu and AFAIR, FIPS only works on CentOS so we can think about this when we start supporting FIPS for Ubuntu (as also discussed in Cinder PTG[3]). For the core cinder code, we don't have any usage of paramiko and the current jobs proposed[4] doesn't deal with third party drivers that are using paramiko. In conclusion, the FIPS compliance looks good from the Cinder side and we are planning to add documentation for new drivers about FIPS compliance so they're aware about it. 
[1] https://meetings.opendev.org/meetings/cinder/2022/cinder.2022-04-20-14.00.log.html#l-44 [2] https://github.com/openstack/cinder/search?q=paramiko [3] https://etherpad.opendev.org/p/zed-ptg-cinder#L547 [4] https://review.opendev.org/c/openstack/cinder/+/790535 Thanks and regards Rajat Dhasmana On Wed, Apr 20, 2022 at 2:36 AM Ade Lee wrote: > Hi all, > > As many have already seen, a number of changes have been merged in > OpenStack as part of the effort to allow OpenStack to run on FIPS enabled > systems. This effort has been captured in a proposed community goal. [1]. > > One of the requirements for this effort is that md5sum() not be used in a > security related context. In fact, python 3.9 has been modified to raise an > exception of hashlib.md5sum() is called on a FIPS enabled system, unless it > is explicitly annotated with a usedforsecurity=False attribute [2]. We > added a wrapper for md5sum in oslo.config to take advantage of this > attribute. [3,4,5] > > Where we have less control is in libraries used by Openstack - and in > particular, paramiko. Paramiko fails on FIPS enabled systems because of a > call to md5sum() in get_fingerprint(). A patch has been submitted to fix > this problem. [6]. Unfortunately, it takes a very long time for paramiko > to fix issues. > > In order for us to make progress on FIPS testing, a small monkey-patch for > paramiko was checked into tempest. [7]. Because this change was made to a > test tool, this patch was relatively uncontroversial. > > A similar change has been found to be needed for manila [8]. I would > expect that a similar change will be needed in other components that use > paramiko to SSH to other systems (eg. cinder, neutron?) I suspect that the > only reason this has not been detected in FIPS testing more widely yet is > because the components that use paramiko for SSH are being tested in third > party tests that do not, as yet, test FIPS. 
> > At the request of the manila team, I am bringing this monkey-patch to the > attention of the wider OpenStack community to get feedback on the pros and > cons of applying this monkey-patch. > > A couple things to note: > 1. This monkey patch is quite small in scope and only needed until > paramiko fixes the issue. > 2. paramiko is not FIPS compliant, and so we will ultimately need to fix > paramiko or replace it with a different library on FIPS enabled systems. > When we do this, we would remove the monkey patch. > > Thanks, > Ade Lee > > [1] > https://opendev.org/openstack/governance/src/branch/master/goals/proposed/fips.rst > [2] https://bugs.python.org/issue9216 > [3] https://review.opendev.org/c/openstack/oslo.utils/+/750031 > [4] Patches to various projects to use oslo.utils adapter for hashlib.md5 > (as examples): glance: > https://review.opendev.org/c/openstack/glance/+/756158 nova: > https://review.opendev.org/c/openstack/nova/+/756434 nova: > https://review.opendev.org/c/openstack/nova/+/777686 os-brick: > https://review.opendev.org/c/openstack/os-brick/+/756151 oslo: > https://review.opendev.org/c/openstack/oslo.versionedobjects/+/756153 > tooz: https://review.opendev.org/c/openstack/tooz/+/756432 opensdk: > https://review.opendev.org/c/openstack/openstacksdk/+/767411 octavia: > https://review.opendev.org/c/openstack/octavia/+/798146 designate: > https://review.opendev.org/c/openstack/designate/+/798157 glance_store: > https://review.opendev.org/c/openstack/glance_store/+/756157 > [5] Swift patch to handle hashlib.md5 > https://review.opendev.org/c/openstack/swift/+/751966 > [6] https://github.com/paramiko/paramiko/pull/1928 > [7] https://review.opendev.org/c/openstack/tempest/+/822560 > [8] https://review.opendev.org/c/openstack/manila/+/819375 > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lab.rep.2201 at gmail.com Thu Apr 21 05:56:05 2022 From: lab.rep.2201 at gmail.com (lab reporting) Date: Thu, 21 Apr 2022 07:56:05 +0200 Subject: [dev][horizon] Integration of the ec2 feature Message-ID: Hello, We will add EC2 token management within the Horizon project. As a reminder, EC2 Credentials service allows the creation of access/secret credentials used for the ec2 interop layer of OpenStack to authorize S3 requests. It is part of the package keystone.contrib Do you think this development is of interest to the community ? Do you think this development can be built into the core? And if so, should we insert into the panel group dedicated to object storage or into the identity panel ? Thank you in advance for your answers. Best Regards, Tony From akekane at redhat.com Thu Apr 21 06:23:13 2022 From: akekane at redhat.com (Abhishek Kekane) Date: Thu, 21 Apr 2022 11:53:13 +0530 Subject: [glance][devstack][tripleo][ansible][ceph_admin] Glance moving away from single store Configuration In-Reply-To: References: Message-ID: On Wed, Apr 20, 2022 at 9:40 PM Alan Bishop wrote: > > > On Mon, Apr 18, 2022 at 10:13 PM Abhishek Kekane > wrote: > >> Hello Everyone, >> >> Glance has added support to configure multiple stores as a store backend >> in Stein cycle, and it is very stable now. So in upcoming cycles we are >> going to remove single store support and use multiple stores support only >> (PS. you can configure a single store using multiple stores configuration >> options). As a first step, we have started adding support in devstack >> [1][2][3] for configuring glance as multiple stores for each of the glance >> store backend. This cycle we are going to default multistore configuration >> in devstack so that our gate/check (CI) jobs should test using the same. >> Following cycles we will start removing single store support from glance >> code base. 
>> > > Hi, > > TripleO is already relying on glance's multistore feature, even when > tripleo configures only a single store. Removing > glance's legacy single store code shouldn't have any impact on tripleo. > BTW, tripleo switched to using the multistore > feature back in Train. > > Alan > Hi Alan, Fantastic, thank you for the update. Abhishek > > >> If you have any questions related to this work kindly revert back to this >> mail or you can join us in our weekly meeting, every Thursday at 1400 UTC >> #openstack-meeting IRC channel as well. >> >> [1] https://review.opendev.org/c/openstack/devstack/+/741654 >> [2] https://review.opendev.org/c/openstack/devstack-plugin-ceph/+/741801/ >> [3] https://review.opendev.org/c/openstack/devstack/+/741802 >> >> >> Thank you, >> >> Abhishek Kekane >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ramishra at redhat.com Thu Apr 21 06:28:35 2022 From: ramishra at redhat.com (Rabi Mishra) Date: Thu, 21 Apr 2022 11:58:35 +0530 Subject: [tc][all][ Zed Virtual PTG RBAC discussions Summary In-Reply-To: <1800c9bf145.d19b5ce2503791.7304546509368773732@ghanshyammann.com> References: <1800c9bf145.d19b5ce2503791.7304546509368773732@ghanshyammann.com> Message-ID: On Sat, Apr 9, 2022 at 10:10 AM Ghanshyam Mann wrote: > Hello Everyone, > > I tried to attend the RBAC-related sessions on various projects[i] but I > am sure I might have missed a few of them. I am summarizing > the RBAC discussion on what open questions were from the project side and > what we discussed in TC PTG. Feel free to append > the discussion you had in your project or any query you want TC to solve. > > Current status: > ------------------ > * I have started this etherpad[ii] to track the status of this goal, > please keep it up to date as you progress the work in your project. > > Open question: > ------------------ > 1. heat create_stack API calling the mixed scope APIs (for example create > flavor and create server). 
what is best scope for heat API so that > we do not have any security leak. We have not concluded the solution yet > as we need the heat team also join the discussion and agree on that. > But we have a few possible solutions listed below: > > ** Heat accepts stack API with system scope > *** This means a stack with system resources would require a system admin > role => Need to check with services relying on Heat > ** Heat assigns a project-scope role to the requester during a processing > stack operation and uses this project scope credential to manage project > resources > ** Heat starts accepting the new header accepting the extra token (say > SYSTEM_TOKEN) and uses that to create/interact the system-level resource > like create flavor. > This is probably more complex than what we think:) I would expect keystone to provide full backward compatibility (i.e toggle off srbac), so that existing heat stacks in upgraded deployments work as before. As for the different options mentioned above, - IMO, heat assigning a project-scoped role to a user dynamically is probably out of consideration. - Introducing hacks in heat to switch tokens when creating/updating different resources of a stack (assuming we get multiple system/project scoped tokens with authentication) is also not a good idea either. Also the fact heat still relies on keystone trusts (used for long running tasks[1] and signaling) would make it complicated. Let's discuss in the next scheduled call. [1] https://github.com/openstack/heat/blob/master/heat/common/config.py#L130 > 2. How to isolate the host level attribute in GET APIs? (cinder and manila > have the same issue). Cinder GET volume API response has > the host information. One possible solution we discussed is to have a > separate API to show the host information to the system user and > the rest of the volume response to the project users only. This is similar > to what we have in nova. 
> > Then we have a few questions from the Tacker side, where the tacker create_vnf > API internally calls heat create_stack and they are planning to > make the create_vnf API available to non-admin users. > > Direction on enabling the enforce scope by default > ------------------------------------------------------------ > As keystone, nova, and neutron are ready with the new RBAC, we wanted to > enable the scope checks by default. But after seeing the > lack of integration testing and the above-mentioned open questions > (especially heat and any deployment project breaking) we decided to hold > it. As the first step, we will migrate the tempest tests to the new RBAC > and will enable the scope for these services in devstack. And based on the > testing results we will decide on it. But after seeing the amount of work > needed in Tempest and on the open questions, I do not think we will be able > to do it in the Zed cycle. Instead, we will target enabling the 'new > defaults' by default. > > We ran out of time in TC and will continue to discuss these in policy > popup meetings. I will push the schedule to the ML. > > [i] https://etherpad.opendev.org/p/rbac-zed-ptg > [ii] https://etherpad.opendev.org/p/rbac-goal-tracking > > -gmann > > -- Regards, Rabi Mishra -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Thu Apr 21 10:20:48 2022 From: smooney at redhat.com (Sean Mooney) Date: Thu, 21 Apr 2022 11:20:48 +0100 Subject: Nova support for multiple vGPUs? In-Reply-To: References: Message-ID: On Wed, 2022-04-20 at 16:42 +0000, Sigurd Kristian Brinch wrote: > Hi, > As far as I can tell, libvirt/KVM supports multiple vGPUs per VM > (https://docs.nvidia.com/grid/14.0/grid-vgpu-release-notes-generic-linux-kvm/index.html#multiple-vgpu-support), > but in OpenStack/Nova it is limited to one vGPU per VM > (https://docs.openstack.org/nova/latest/admin/virtual-gpu.html#configure-a-flavor-controller) > Is there a reason for this limit?
yes, nvidia > What would be needed to enable multiple vGPUs in Nova? so you can technically do it today if you have 2 vGPUs from separate physical gpu cards, but nvidia do not support multiple vGPUs from the same card. nova does not currently provide a way to force the gpu allocation to be from separate cards. well, that's not quite true: you could. you would have to use the named group syntax to request them, so instead of resources:vgpu=2 you would do resources_first_gpu_group:VGPU=1 resources_second_gpu_group:VGPU=1 group_policy=isolate the name after resources_ is an arbitrary group name provided it conforms to this regex '([a-zA-Z0-9_-]{1,64})?' we strongly dislike this approach. first of all, using group_policy=isolate is a global thing, meaning that no request groups can come from the same provider. that means you cannot have two sriov VFs from the same physical nic as a result of setting it. if you don't set group_policy, the default is none, which means you are no longer guaranteed that they will come from different providers. so what you would need to do is extend placement to support isolating only specific named groups and then expose that in nova via flavor extra specs, which is not particularly good ux as it is rather complicated and means you need to understand how placement works in depth. placement should really be an implementation detail, i.e. resources_first_gpu_group:VGPU=1 resources_second_gpu_group:VGPU=1 group_isolate=first_gpu_group,second_gpu_group;...
that fixes the conflict with sriov and all other usages of resource groups like bandwidth-based qos. the slightly better approach would be to make this simpler to use by doing something like this resources:vgpu=2 vgpu:gpu_selection_policy=isolate we would still need the placement feature to isolate by group, but we can hide the detail from the end user with a pre filter in nova https://github.com/openstack/nova/blob/eedbff38599addd4574084edac8b111c4e1f244a/nova/scheduler/request_filter.py which will transform the resource request and split it up into groups automatically. this is a long way to say that if it was not for limitations in the iommu on nvidia gpus and the fact that they cannot map two vgpus from one physical gpu to a single vm, this would already work out of the box with just resources:vgpu=2. perhaps when intel launch their discrete datacenter gpus their vGPU implementation will not have this limitation. we do not prevent you from requesting 2 vgpus today; it will just fail when qemu tries to use them. we also have not put the effort into working around the limitation in nvidia's hardware since their drivers also used to block this until the Ampere generation, and there has not been a large request from users to support multiple vgpus. occasionally some will ask about it, but in general people either do full gpu passthrough or use 1 vgpu instance. hopefully that will help. you can try the first approach today if you have more than one physical gpu per host e.g. resources_first_gpu_group:VGPU=1 resources_second_gpu_group:VGPU=1 group_policy=isolate just be aware of the limitation of group_policy=isolate regards sean > > BR > Sigurd From sbauza at redhat.com Thu Apr 21 10:25:18 2022 From: sbauza at redhat.com (Sylvain Bauza) Date: Thu, 21 Apr 2022 12:25:18 +0200 Subject: Nova support for multiple vGPUs? In-Reply-To: References: Message-ID: Le mer. 20 avr. 2022 à
18:47, Sigurd Kristian Brinch a écrit : > Hi, > > As far as I can tell, libvirt/KVM supports multiple vGPUs per VM > > ( https://docs.nvidia.com/grid/14.0/grid-vgpu-release-notes-generic-linux-kvm/index.html#multiple-vgpu-support), > > > but in OpenStack/Nova it is limited to one vGPU per VM > > ( https://docs.openstack.org/nova/latest/admin/virtual-gpu.html#configure-a-flavor-controller > ) > > Is there a reason for this limit? > > What would be needed to enable multiple vGPUs in Nova? > > > If you look at the vGPU types that are supported for multiple vGPUs per VM, those are only the ones that associate the whole GPU to one single vGPU (eg. A100-40C for the A100 40GB PCIe card)... You can try to ask for more vGPUs per instance if you want, but unless you use the above types (which are just a kind of passthrough), you'll get the libvirt exception that's provided in https://bugs.launchpad.net/nova/+bug/1758086 ) This is then not a Nova limitation, but we tried to document it in our upstream docs to let operators know about such a limitation. -Sylvain > BR > > Sigurd > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbauza at redhat.com Thu Apr 21 10:57:13 2022 From: sbauza at redhat.com (Sylvain Bauza) Date: Thu, 21 Apr 2022 12:57:13 +0200 Subject: Nova support for multiple vGPUs? In-Reply-To: References: Message-ID: Le jeu. 21 avr. 2022 à 12:26, Sean Mooney a écrit : > On Wed, 2022-04-20 at 16:42 +0000, Sigurd Kristian Brinch wrote: > > Hi, > > As far as I can tell, libvirt/KVM supports multiple vGPUs per VM > > ( https://docs.nvidia.com/grid/14.0/grid-vgpu-release-notes-generic-linux-kvm/index.html#multiple-vgpu-support > ), > > but in OpenStack/Nova it is limited to one vGPU per VM > > ( https://docs.openstack.org/nova/latest/admin/virtual-gpu.html#configure-a-flavor-controller > ) > > Is there a reason for this limit? > yes nvidia > > What would be needed to enable multiple vGPUs in Nova?
> so you can technically do it today if you have 2 vGPU for seperate > physical gpu cards > but nvidia do not support multiple vGPUs form the same card. > > nova does not currently provide a way to force the gpu allocation to be > from seperate cards. > > > well thats not quite true you could > > you would have to use the named group syntax to request them so instaed of > resources:vgpu=2 > > you woudl do > > resources_first_gpu_group:VGPU=1 > resources_second_gpu_group:VGPU=1 > group_policy=isolate > > the name after resouces_ is arbitray group name provided it conforms to > this regex '([a-zA-Z0-9_-]{1,64})?' > > we stongly dislike this approch. > first of all using group_policy=isolate is a gloabl thing meaning that no > request groups can come form the same provider > > that means you can not have to sriov VFs from the same physical nic as a > result of setting it. > if you dont set group_policy the default is none which means you no longer > are guarenteed that they will come form different providres > > so what you woudl need to do is extend placment to support isolating only > sepeicic named groups > and then expose that in nova via flavor extra specs which is not particaly > good ux as it rather complicated and means you need to > understand how placement works in depth. placement shoudl really be an > implemenation detail > i.e. > resources_first_gpu_group:VGPU=1 > resources_second_gpu_group:VGPU=1 > group_isolate=first_grpu_group,second_gpu_group;... 
> > that fixes the confilct with sriov and all other usages of resouce groups > like bandwith based qos > > the slightly better approch wouls be to make this simplere to use by doing > somtihng liek this > > resources:vgpu=2 > vgpu:gpu_selection_policy=isolate > > we would still need the placement feature to isolate by group > but we can hide the detail form the end user with a pre filter in nova > > https://github.com/openstack/nova/blob/eedbff38599addd4574084edac8b111c4e1f244a/nova/scheduler/request_filter.py > which will transfrom the resouce request and split it up into groups > automatically > > this is a long way to say that if it was not for limiations in the iommu > on nvidia gpus and the fact that they cannot map two vgpus > to from on phsyical gpu to a singel vm this would already work out of hte > box wiht just > resources:vgpu=2. perhaps when intel lauch there discret datacenter gpus > there vGPU implementaiotn will not have this limiation. > we do not prevent you from requestin 2 vgpus today it will just fail when > qemu tries to use them. > > we also have not put the effort into working around the limiation in > nvidias hardware since ther drivers also used to block this > until the ampear generation and there has nto been a large request to > support multipel vgpus form users. > > ocationally some will ask about it but in general peopel either do full > gpu passthough or use 1 vgpu instance. > > Correct, that's why we have this open bug report for a while, but we don't really want to fix for only one vendor. > hopefully that will help. > you can try the first approch today if you have more then one physical gpu > per host > e.g. > resources_first_gpu_group:VGPU=1 > resources_second_gpu_group:VGPU=1 > group_policy=isolate > > just be aware of the limiation fo group_policy=isolate > Thanks Sean for explaining how to use a workaround. 
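[To make the workaround discussed in this thread concrete, here is a small self-contained sketch of the flavor extra specs involved. The group names are arbitrary labels (validated against the regex Sean quotes), and this assumes two physical GPUs exposed as separate resource providers; it only parses the spec keys locally, it does not talk to nova or placement.]

```python
import re

# Flavor extra specs for the workaround: one VGPU from each of two named
# request groups, with group_policy=isolate forcing the groups to be
# satisfied by different resource providers (i.e. different physical GPUs).
extra_specs = {
    "resources_first_gpu_group:VGPU": "1",
    "resources_second_gpu_group:VGPU": "1",
    "group_policy": "isolate",
}

# Group names must conform to the regex quoted in the thread:
# '([a-zA-Z0-9_-]{1,64})?'
GROUP_KEY = re.compile(r"^resources_([a-zA-Z0-9_-]{1,64}):(?P<resource>.+)$")

def named_groups(specs):
    """Return {group_name: {resource_class: amount}} parsed from extra specs."""
    groups = {}
    for key, value in specs.items():
        m = GROUP_KEY.match(key)
        if m:
            groups.setdefault(m.group(1), {})[m.group("resource")] = int(value)
    return groups

print(named_groups(extra_specs))
# {'first_gpu_group': {'VGPU': 1}, 'second_gpu_group': {'VGPU': 1}}
```

In practice an operator would attach each of these keys to a flavor with `openstack flavor set --property KEY=VALUE`, subject to the group_policy=isolate caveats Sean raises above.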
> > regard > sean > > > > > > BR > > Sigurd > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From firashlel750 at gmail.com Thu Apr 21 08:02:22 2022 From: firashlel750 at gmail.com (Firas Hlel) Date: Thu, 21 Apr 2022 09:02:22 +0100 Subject: error openstack Message-ID: Hi all, I am working on an OpenStack project on Ubuntu Linux 20.04. I want to create a Hadoop cluster with one master node and three worker nodes, and I have a problem with a cluster that doesn't work. Status ERROR: Creating cluster failed for the following reason(s): Failed to create trust Error ID: ef5e8b0a-8e6d-4878-bebb-f37f4fa50a88, Failed to create trust Error ID: 43157255-86af-4773-96c1-a07ca7ac66ed. links: https://docs.openstack.org/devstack/latest/ File local.conf : [[local|localrc]] ADMIN_PASSWORD=secret DATABASE_PASSWORD=secret RABBIT_PASSWORD=secret SERVICE_PASSWORD=$ADMIN_PASSWORD HOST_IP=10.0.2.15 LOGFILE=$DEST/logs/stack.sh.log SWIFT_REPLICAS=1 SWIFT_HASH=66a3d6b56c1f479c8b4e70ab5c2000f5 SWIFT_DATA_DIR=$DEST/data enable_plugin sahara https://opendev.org/openstack/sahara enable_plugin sahara-dashboard https://opendev.org/openstack/sahara-dashboard enable_plugin heat https://opendev.org/openstack/heat Can you advise me about these errors? Is there anything to worry about? -------------- next part -------------- An HTML attachment was scrubbed... URL: From wodel.youchi at gmail.com Thu Apr 21 11:14:20 2022 From: wodel.youchi at gmail.com (wodel youchi) Date: Thu, 21 Apr 2022 12:14:20 +0100 Subject: [Xena][Gnocchi] gnocchi_statsd become unhealthy after some time Message-ID: Hi, Recently we deployed OpenStack Xena using Kolla-ansible, and we noticed that gnocchi_statsd becomes unhealthy after a period of time. We restarted it several times; it shows a healthy state, but hours later it becomes unhealthy again: [root at controllera ~]# docker ps | grep gnocch fcc31e240322 192.168.1.16:4000/openstack.kolla/centos-source-gnocchi-statsd:xena "dumb-init --single-…"
45 hours ago Up 24 hours (unhealthy) gnocchi_statsd The log file doesn't show anything special; you can find all gnocchi logs attached. How can we figure out what causes this? -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: gnocchi.tgz Type: application/x-compressed Size: 277468 bytes Desc: not available URL: From wodel.youchi at gmail.com Thu Apr 21 14:22:43 2022 From: wodel.youchi at gmail.com (wodel youchi) Date: Thu, 21 Apr 2022 15:22:43 +0100 Subject: [Xena][Cloudkitty] I have no rating in the dashboard Message-ID: Hi, I've deployed Cloudkitty using Kolla-ansible, then I followed this example to test if the service is working: https://docs.openstack.org/cloudkitty/xena/user/rating/hashmap.html#examples I don't fully understand the service. I waited a day, and until now I don't have any results. The only error I had is this one: 2022-04-20 11:57:37.834 31 ERROR cloudkitty.orchestrator [-] [scope: a50e58ff494441b3b78ab096bca747d9, worker: 0 ] Error while collecting metric image.size at timestamp 2022-04-01 00:00:00+01:00: {'cause': "Metrics can't being aggregated", 'reason': 'Granularities are missing', 'detail': [['image.size', 'mean', 3600.0]]} (HTTP 400).
Exiting.: gnocchiclient.exceptions.BadRequest: {'cause': "Metrics can't being aggregated", 'reason': 'Granularities are missing', 'detail': [['image.size', 'mean', 3600.0]]} (HTTP 400) 2022-04-20 11:57:37.834 31 ERROR cloudkitty.orchestrator Traceback (most recent call last): 2022-04-20 11:57:37.834 31 ERROR cloudkitty.orchestrator File "/var/lib/kolla/venv/lib/python3.6/site-packages/cloudkitty/orchestrator.py", line 308, in _get_result 2022-04-20 11:57:37.834 31 ERROR cloudkitty.orchestrator return self._collect(metric, timestamp) 2022-04-20 11:57:37.834 31 ERROR cloudkitty.orchestrator File "/var/lib/kolla/venv/lib/python3.6/site-packages/cloudkitty/orchestrator.py", line 297, in _collect 2022-04-20 11:57:37.834 31 ERROR cloudkitty.orchestrator self._tenant_id, 2022-04-20 11:57:37.834 31 ERROR cloudkitty.orchestrator File "/var/lib/kolla/venv/lib/python3.6/site-packages/cloudkitty/collector/__init__.py", line 240, in retrieve 2022-04-20 11:57:37.834 31 ERROR cloudkitty.orchestrator q_filter=q_filter, 2022-04-20 11:57:37.834 31 ERROR cloudkitty.orchestrator File "/var/lib/kolla/venv/lib/python3.6/site-packages/cloudkitty/collector/gnocchi.py", line 450, in fetch_all 2022-04-20 11:57:37.834 31 ERROR cloudkitty.orchestrator q_filter=q_filter, 2022-04-20 11:57:37.834 31 ERROR cloudkitty.orchestrator File "/var/lib/kolla/venv/lib/python3.6/site-packages/cloudkitty/collector/gnocchi.py", line 329, in _fetch_metric 2022-04-20 11:57:37.834 31 ERROR cloudkitty.orchestrator measurements = self._conn.aggregates.fetch(op, **agg_kwargs) 2022-04-20 11:57:37.834 31 ERROR cloudkitty.orchestrator File "/var/lib/kolla/venv/lib/python3.6/site-packages/gnocchiclient/v1/aggregates.py", line 72, in fetch 2022-04-20 11:57:37.834 31 ERROR cloudkitty.orchestrator data=ujson.dumps(data)).json() 2022-04-20 11:57:37.834 31 ERROR cloudkitty.orchestrator File "/var/lib/kolla/venv/lib/python3.6/site-packages/gnocchiclient/v1/base.py", line 41, in _post 2022-04-20 11:57:37.834 31 ERROR
cloudkitty.orchestrator return self.client.api.post(*args, **kwargs) 2022-04-20 11:57:37.834 31 ERROR cloudkitty.orchestrator File "/var/lib/kolla/venv/lib/python3.6/site-packages/keystoneauth1/adapter.py", line 401, in post 2022-04-20 11:57:37.834 31 ERROR cloudkitty.orchestrator return self.request(url, 'POST', **kwargs) 2022-04-20 11:57:37.834 31 ERROR cloudkitty.orchestrator File "/var/lib/kolla/venv/lib/python3.6/site-packages/gnocchiclient/client.py", line 52, in request 2022-04-20 11:57:37.834 31 ERROR cloudkitty.orchestrator raise exceptions.from_response(resp, method) 2022-04-20 11:57:37.834 31 ERROR cloudkitty.orchestrator gnocchiclient.exceptions.BadRequest: {'cause': "Metrics can't being aggregated", 'reason': 'Granularities are missing', 'detail': [['image.size', 'mean', 3600.0]]} (HTTP 400) 2022-04-20 11:57:37.834 31 ERROR cloudkitty.orchestrator I searched the web and I found this post : https://storyboard.openstack.org/#!/story/2008598 Could that be the problem? If so , how to correct it? 
I tried to create this file on my kolla-ansible deployer: vim /etc/kolla/config/cloudkitty.conf [DEFAULT] debug = True [collect] metrics_conf = /etc/*cloudkitty-api*/metrics.yml (I don't know if this path is correct, since there is no /etc/cloudkitty on the nodes) And I created a directory like this: mkdir /etc/kolla/config/cloudkitty/ wget https://raw.githubusercontent.com/openstack/cloudkitty/master/etc/cloudkitty/metrics.yml -O /etc/kolla/config/cloudkitty/metrics.yml I then executed kolla-ansible -i multinode reconfigure -t cloudkitty All I have in the logs is: 2022-04-21 15:13:32.691 8 INFO cotyledon._service_manager [-] Child 1304 exited with status 1 2022-04-21 15:13:32.693 8 INFO cotyledon._service_manager [-] Child 1310 exited with status 1 2022-04-21 15:13:32.695 8 INFO cotyledon._service_manager [-] Child 1312 exited with status 1 2022-04-21 15:13:32.697 8 INFO cotyledon._service_manager [-] Child 1314 exited with status 1 2022-04-21 15:13:33.097 8 INFO cotyledon._service_manager [-] Child 1325 exited with status 1 2022-04-21 15:13:33.101 8 INFO cotyledon._service_manager [-] Child 1327 exited with status 1 2022-04-21 15:13:33.102 8 INFO cotyledon._service_manager [-] Forking too fast, sleeping 2022-04-21 15:13:38.109 8 INFO cotyledon._service_manager [-] Child 1329 exited with status 1 2022-04-21 15:13:38.112 8 INFO cotyledon._service_manager [-] Child 1331 exited with status 1 2022-04-21 15:13:38.114 8 INFO cotyledon._service_manager [-] Child 1333 exited with status 1 Any ideas? Regards. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From corey.bryant at canonical.com Thu Apr 21 17:19:14 2022 From: corey.bryant at canonical.com (Corey Bryant) Date: Thu, 21 Apr 2022 13:19:14 -0400 Subject: OpenStack Yoga for Ubuntu 22.04 LTS and Ubuntu 20.04 LTS Message-ID: The Ubuntu OpenStack team at Canonical is pleased to announce the general availability of OpenStack Yoga on Ubuntu 22.04 LTS (Jammy Jellyfish) and Ubuntu 20.04 LTS (Focal Fossa) via the Ubuntu Cloud Archive. Details of the Yoga release can be found at: https://www.openstack.org/software/yoga To get access to the Ubuntu Yoga packages: == Ubuntu 22.04 LTS == OpenStack Yoga is available by default on Ubuntu 22.04. == Ubuntu 20.04 LTS == The Ubuntu Cloud Archive for OpenStack Yoga can be enabled on Ubuntu 20.04 by running the following command: sudo add-apt-repository cloud-archive:yoga The Ubuntu Cloud Archive for Yoga includes updates for: aodh, barbican, ceilometer, ceph (17.1.0), cinder, designate, designate-dashboard, dpdk (21.11), glance, gnocchi, heat, heat-dashboard, horizon, ironic, ironic-ui, keystone, libvirt (8.0.0), magnum, magnum-ui, manila, manila-ui, masakari, mistral, murano, murano-dashboard, networking-arista, networking-bagpipe, networking-baremetal, networking-bgpvpn, networking-hyperv, networking-l2gw, networking-mlnx, networking-odl, networking-sfc, neutron, neutron-dynamic-routing, neutron-fwaas, neutron-vpnaas, nova, octavia, octavia-dashboard, openstack-trove, openvswitch (2.17.0), ovn (22.03.0), ovn-octavia-provider, placement, sahara, sahara-dashboard, senlin, swift, trove-dashboard, vmware-nsx, vitrage, watcher, watcher-dashboard, zaqar, and zaqar-ui. For a full list of packages and versions, please refer to: https://openstack-ci-reports.ubuntu.com/reports/cloud-archive/yoga_versions.html == Reporting bugs == If you have any issues please report bugs using the "ubuntu-bug"
tool to ensure that bugs get logged in the right place in Launchpad: sudo ubuntu-bug nova-conductor Thank you to everyone who contributed to OpenStack Yoga! Corey (on behalf of the Ubuntu OpenStack Engineering team) -------------- next part -------------- An HTML attachment was scrubbed... URL: From katonalala at gmail.com Fri Apr 22 06:20:04 2022 From: katonalala at gmail.com (Lajos Katona) Date: Fri, 22 Apr 2022 08:20:04 +0200 Subject: [neutron] Drivers meeting - Friday 22.4.2022 - cancelled Message-ID: Hi Neutron Drivers! Due to the lack of agenda, let's cancel today's drivers meeting (sorry for the late mail...). See you at the meeting next week. Lajos Katona (lajoskatona) -------------- next part -------------- An HTML attachment was scrubbed... URL: From derekokeeffe85 at yahoo.ie Fri Apr 22 07:43:09 2022 From: derekokeeffe85 at yahoo.ie (Derek O keeffe) Date: Fri, 22 Apr 2022 07:43:09 +0000 (UTC) Subject: Openstack Ansible error References: <178087883.181758.1650613389293.ref@mail.yahoo.com> Message-ID: <178087883.181758.1650613389293@mail.yahoo.com> Hi all, I've been working through deploying OpenStack-Ansible; when I run the first playbook (setup hosts) it fails with the following error: An exception occurred during task execution. To see the full traceback, use -vvv. The error was: jinja2.exceptions.TemplateRuntimeError: No filter named 'ipaddr' found. failed: [infra1_keystone_container-fb7ae0f1] (item={'key': 'container_address', 'value': {'address': 'xx.xx.xx.xxx', 'bridge': 'br-mgmt', 'interface': 'eth1', 'netmask': '255.255.255.0', 'type': 'veth'}}) => {"ansible_loop_var": "item", "changed": false, "item": {"key": "container_address", "value": {"address": "xx.xx.xx.xxx", "bridge": "br-mgmt", "interface": "eth1", "netmask": "255.255.255.0", "type": "veth"}}, "msg": "TemplateRuntimeError: No filter named 'ipaddr' found."} I have followed this fix: https://bugs.launchpad.net/openstack-ansible/+bug/1963686 But, unfortunately, the error still occurs.
I am in the process of troubleshooting, and if I find a resolution I will post it for anyone in a similar position, but in the meantime, if anyone has a fix/workaround for this issue it would be greatly appreciated. Regards, Derek -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at gmail.com Fri Apr 22 08:10:15 2022 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Fri, 22 Apr 2022 10:10:15 +0200 Subject: Openstack Ansible error In-Reply-To: <178087883.181758.1650613389293@mail.yahoo.com> References: <178087883.181758.1650613389293.ref@mail.yahoo.com> <178087883.181758.1650613389293@mail.yahoo.com> Message-ID: Hi Derek, The issue is raised by the new version of the ansible.netcommon collection. Since version 2.6.0 they've dropped ipaddr and some more modules, at the same time moving them to the ansible.utils collection. So the easiest fix here is to ensure that ansible.netcommon<2.6.0. This should already be fixed in the latest versions of openstack-ansible as well. And I'm quite sure the proposed workaround in the bug report you mentioned will work as well. Can you kindly provide the OSA version you're trying to deploy as well as the output of the command: /opt/ansible-runtime/bin/ansible-galaxy collection list --collections-path /etc/ansible/ On Fri, 22 Apr 2022 at 09:50, Derek O keeffe wrote: > > Hi all, > > I've been working through deploying Openstack ansible when I run the first playbook (setup hosts) it fails with the following error: > > > An exception occurred during task execution. To see the full traceback, use -vvv. The error was: jinja2.exceptions.TemplateRuntimeError: No filter named 'ipaddr' found.
> failed: [infra1_keystone_container-fb7ae0f1] (item={'key': 'container_address', 'value': {'address': 'xx.xx.xx.xxx', 'bridge': 'br-mgmt', 'interface': 'eth1', 'netmask': '255.255.255.0', 'type': 'veth'}}) => {"ansible_loop_var": "item", "changed": false, "item": {"key": "container_address", "value": {"address": "xx.xx.xx.xxx", "bridge": "br-mgmt", "interface": "eth1", "netmask": "255.255.255.0", "type": "veth"}}, "msg": "TemplateRuntimeError: No filter named 'ipaddr' found."} > > I have followed this fix: https://bugs.launchpad.net/openstack-ansible/+bug/1963686 > > But, unfortunately the error still occurs. I am in the process of troubleshooting and if I find a resolution I will post for anyone in a similar position but in the mean time if anyone has a fix/workaround for this issue it would be greatly appreciated. > > Regards, > Derek From manishbhartigt at gmail.com Fri Apr 22 11:30:31 2022 From: manishbhartigt at gmail.com (Manish Bharti) Date: Fri, 22 Apr 2022 17:00:31 +0530 Subject: Openstack Trove - Polling request timed out Message-ID: Dear Team, We are trying to deploy OpenStack environment for our application and while deploying trove service we are facing the below error(attached screenshot)- Traceback (most recent call last): File "/usr/lib/python3/dist-packages/trove/common/utils.py", line 207, in wait_for_task return polling_task.wait() File "/usr/lib/python3/dist-packages/eventlet/event.py", line 125, in wait result = hub.switch() File "/usr/lib/python3/dist-packages/eventlet/hubs/hub.py", line 313, in switch return self.greenlet.switch() File "/usr/lib/python3/dist-packages/oslo_service/loopingcall.py", line 154, in _run_loop idle = idle_for_func(result, self._elapsed(watch)) File "/usr/lib/python3/dist-packages/oslo_service/loopingcall.py", line 349, in _idle_for raise LoopingCallTimeOut( oslo_service.loopingcall.LoopingCallTimeOut: Looping call timed out after 870.99 seconds During handling of the above exception, another exception occurred: 
Traceback (most recent call last): File "/usr/lib/python3/dist-packages/trove/taskmanager/models.py", line 434, in wait_for_instance utils.poll_until(self._service_is_active, File "/usr/lib/python3/dist-packages/trove/common/utils.py", line 223, in poll_until return wait_for_task(task) File "/usr/lib/python3/dist-packages/trove/common/utils.py", line 209, in wait_for_task raise exception.PollTimeOut trove.common.exception.PollTimeOut: Polling request timed out. Please help us on this issue. -- Thank you , Manish Bharti Jodhpur, Raj - 342001. Contact:8875033000 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: trove_error.jpeg Type: image/jpeg Size: 117432 bytes Desc: not available URL: From fungi at yuggoth.org Fri Apr 22 20:12:34 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 22 Apr 2022 20:12:34 +0000 Subject: [dev][infra][tact-sig] Retiring the status.openstack.org server Message-ID: <20220422201233.mcfhn2u4haceuaf2@yuggoth.org> With the recent retirement of the Elastic-Recheck and OpenStack-Health services, as well as Zuul long ago growing its own status dashboard, the only other thing still being served from the status.openstack.org site is a very broken and empty ReviewDay interface. I'm planning to take status.openstack.org offline at the end of this month (late next week), so wanted to give everyone a heads up that anything still served from that site right now (basically just a link to the Zuul status page for our openstack tenant) will be going away. If you want the Zuul status page for openstack, the proper URL is https://zuul.opendev.org/t/openstack/status or you can just go to the root of the site and click the "status" link in the openstack tenant row. (There's also a white-label zuul.openstack.org site but that's primarily maintained in order to avoid breaking old tools people may have hard-coded to use it.) 
-- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From gmann at ghanshyammann.com Fri Apr 22 20:53:37 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 22 Apr 2022 15:53:37 -0500 Subject: [adjutant][tc][all] Call for volunteers to be a PTL and maintainers In-Reply-To: <1915566590.650011.1646837917079@mail.yahoo.com> References: <4381995.LvFx2qVVIh@p1> <1915566590.650011.1646837917079@mail.yahoo.com> Message-ID: <180530d387f.12325e74512727.6650321884236044968@ghanshyammann.com> Hi Braden, Please let us know about the status of your company's permission to maintain the project. As we are in Zed cycle development and there is no one to maintain/lead this project we need to start thinking about the next steps mentioned in the leaderless project etherpad - https://etherpad.opendev.org/p/zed-leaderless -gmann ---- On Wed, 09 Mar 2022 08:58:37 -0600 Albert Braden wrote ---- > I'm still waiting for permission to work on Adjutant. My contract ends this month and I'm taking 2 months off before I start fulltime. I have hope that permission will be granted while I'm out. I expect that I will be able to start working on Adjutant in June. > On Saturday, March 5, 2022, 01:32:13 PM EST, Slawek Kaplonski wrote: > > Hi, > > After last PTL elections [1] Adjutant project don't have any PTL. It also didn't had PTL in the Yoga cycle already. > So this is call for maintainters for Adjutant. If You are using it or interested in it, and if You are willing to help maintaining this project, please contact TC members through this mailing list or directly on the #openstack-tc channel @OFTC. We can talk possibilities to make someone a PTL of the project or going with this project to the Distributed Project Leadership [2] model. 
> > [1] http://lists.openstack.org/pipermail/openstack-discuss/2022-February/027411.html > [2] https://governance.openstack.org/tc/resolutions/20200803-distributed-project-leadership.html > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat From gmann at ghanshyammann.com Fri Apr 22 21:05:23 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 22 Apr 2022 16:05:23 -0500 Subject: [all][tc] What's happening in Technical Committee: summary April 15th, 21: Reading: 10 min Message-ID: <1805317ff5e.126414ab512923.7618351795364859532@ghanshyammann.com> Hello Everyone, Here is this week's summary of the Technical Committee activities. 1. TC Meetings: ============ * We had this week's meeting on April 21. Most of the meeting discussions are summarized below (Completed or in-progress activities section). Meeting full logs are available @https://meetings.opendev.org/meetings/tc/2022/tc.2022-04-21-15.00.html * Next TC weekly meeting will be on April 28 Thursday at 15:00 UTC, feel free to add the topic on the agenda[1] by April 27. 2. What we completed this week: ========================= * Updated sushy-oem-idrac from x/ to /openstack/ namespace[2] 3. Activities In progress: ================== TC Tracker for Zed cycle ------------------------------ * Zed tracker etherpad includes the TC working items[3]. We completed the assignment of each item and will continue tracking the progress biweekly. Open Reviews ----------------- * Eight open reviews for ongoing activities[4]. Migration from old ELK service to new Dashboard ----------------------------------------------------------- You might know that Daniel was working to bring the log search dashboard to OpenSearch; he is ready with it and the new dashboard[5] is ready to use. He will send the details about it to the ML soon. He is investigating getting the elastic-recheck instance up. The new dashboard is maintained under the OpenStack tact SIG and you can reach out to Daniel on #openstack-infra channel.
Please use and provide feedback about this new dashboard to make the upstream failure debugging easy. We also discussed the OpenDev request to shut down the old ELK server and agree to do that. Drop the lower constraints maintenance ------------------------------------------------ The TC resolution to drop the lower constraints testing is under review[6], and open for the feedback[7] Consistent and Secure Default RBAC -------------------------------------------- I have scheduled the discussion for coming Tuesday 26th 14:30 UTC [8], We will target the heat discussion and if time permits then cinder queries. If you have any other questions specific to your project, please add them to the etherpad[9] FIPs community-wide goal ------------------------------- As discussed in PTG, we agreed to select this as per the new milestone. Ade has proposed the milestone update and ready for feedback[10]. Removing the TC Liaisons framework -------------------------------------------- As discussed in PTG, I have proposed the removal of TC liaisons[11] and it's under review. 2021 User Survey TC Question Analysis ----------------------------------------------- No update on this. The survey summary is up for review[12]. Feel free to check and provide feedback. Zed cycle Leaderless projects ---------------------------------- No updates on this. Only Adjutant project is leaderless/maintainer-less. We will check Adjutant's situation again on ML and hope Braden will be ready with their company side permission[13]. Fixing Zuul config error ---------------------------- Requesting projects with zuul config error to look into those and fix them which should not take much time[14]. Project updates ------------------- * Add the cinder-three-par charm to Openstack charms[15] 4. How to contact the TC: ==================== If you would like to discuss or give feedback to TC, you can reach out to us in multiple ways: 1. Email: you can send the email with tag [tc] on openstack-discuss ML[16]. 2. 
Weekly meeting: The Technical Committee conduct a weekly meeting every Thursday 15 UTC [17] 3. Ping us using 'tc-members' nickname on #openstack-tc IRC channel. [1] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting [2] https://review.opendev.org/c/openstack/governance/+/838486 [3] https://etherpad.opendev.org/p/tc-zed-tracker [4] https://review.opendev.org/q/projects:openstack/governance+status:open [5] https://docs.openstack.org/project-team-guide/testing.html#checking-status-of-other-job-results [6] https://review.opendev.org/c/openstack/governance/+/838004 [7] http://lists.openstack.org/pipermail/openstack-discuss/2022-April/028199.html [8] http://lists.openstack.org/pipermail/openstack-discuss/2022-April/028258.html [9] https://etherpad.opendev.org/p/rbac-zed-ptg [10] https://review.opendev.org/c/openstack/governance/+/838601 [11] https://review.opendev.org/c/openstack/governance/+/837891 [12] https://review.opendev.org/c/openstack/governance/+/836888 [13] http://lists.openstack.org/pipermail/openstack-discuss/2022-March/027626.html [14] https://etherpad.opendev.org/p/zuul-config-error-openstack [15] https://review.opendev.org/c/openstack/governance/+/837781 [16] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [17] http://eavesdrop.openstack.org/#Technical_Committee_Meeting -gmann From gmann at ghanshyammann.com Fri Apr 22 21:55:54 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 22 Apr 2022 16:55:54 -0500 Subject: [all][tc] What's happening in Technical Committee: summary April 22th, 22: Reading: 10 min In-Reply-To: <1805317ff5e.126414ab512923.7618351795364859532@ghanshyammann.com> References: <1805317ff5e.126414ab512923.7618351795364859532@ghanshyammann.com> Message-ID: <18053464139.d537f9eb13776.663157598265594291@ghanshyammann.com> Correcting the dates in the subject line. 
---- On Fri, 22 Apr 2022 16:05:23 -0500 Ghanshyam Mann wrote ---- > Hello Everyone, > > Here is this week's summary of the Technical Committee activities. > > 1. TC Meetings: > ============ > * We had this week's meeting on April 21. Most of the meeting discussions are > summarized below ( Completed or in-progress activities section). Meeting full logs > are available @https://meetings.opendev.org/meetings/tc/2022/tc.2022-04-21-15.00.html > > * Next TC weekly meeting will be on April 28 Thursday at 15:00 UTC, feel free to > add the topic on the agenda[1] by April 27. > > > 2. What we completed this week: > ========================= > * Updated sushy-oem-idrac from x/ to /openstack/ namespace[2] > > > 3. Activities In progress: > ================== > TC Tracker for Zed cycle > ------------------------------ > * Zed tracker etherpad includes the TC working items[3]. We completed the assigned > of each item and will continue tracking the progress biweekly. > > Open Reviews > ----------------- > * Eight open reviews for ongoing activities[4]. > > Migration from old ELK service to new Dashboard > ----------------------------------------------------------- > You might know that Daniel was working to bring the log search dashboard on OpenSearch > and he is ready with it and the new dashboard[5] is ready to use. He will send the details > about it to ML soon. He is investigating on elactic-recheck instance to be up. > > The new dashboard is maintained under the OpenStack tact SIG and you can reach out to Daniel > on #openstack-infra channel. Please use and provide feedback about this new dashboard to > make the upstream failure debugging easy. > > We also discussed the OpenDev request to shut down the old ELK server and agree to do that. 
> > Drop the lower constraints maintenance > ------------------------------------------------ > The TC resolution to drop the lower constraints testing is under review[6], and open > for the feedback[7] > > Consistent and Secure Default RBAC > -------------------------------------------- > I have scheduled the discussion for coming Tuesday 26th 14:30 UTC [8], We will target > the heat discussion and if time permits then cinder queries. If you have any other > questions specific to your project, please add them to the etherpad[9] > > FIPs community-wide goal > ------------------------------- > As discussed in PTG, we agreed to select this as per the new milestone. Ade has proposed > the milestone update and ready for feedback[10]. > > Removing the TC Liaisons framework > -------------------------------------------- > As discussed in PTG, I have proposed the removal of TC liaisons[11] and it's under review. > > 2021 User Survey TC Question Analysis > ----------------------------------------------- > No update on this. The survey summary is up for review[12]. Feel free to check and > provide feedback. > > Zed cycle Leaderless projects > ---------------------------------- > No updates on this. Only Adjutant project is leaderless/maintainer-less. We will check Adjutant's > situation again on ML and hope Braden will be ready with their company side permission[13]. > > Fixing Zuul config error > ---------------------------- > Requesting projects with zuul config error to look into those and fix them which should > not take much time[14]. > > Project updates > ------------------- > * Add the cinder-three-par charm to Openstack charms[15] > > > 4. How to contact the TC: > ==================== > If you would like to discuss or give feedback to TC, you can reach out to us in multiple ways: > > 1. Email: you can send the email with tag [tc] on openstack-discuss ML[16]. > 2. Weekly meeting: The Technical Committee conduct a weekly meeting every Thursday 15 UTC [17] > 3. 
Ping us using 'tc-members' nickname on #openstack-tc IRC channel. > > > [1] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting > [2] https://review.opendev.org/c/openstack/governance/+/838486 > [3] https://etherpad.opendev.org/p/tc-zed-tracker > [4] https://review.opendev.org/q/projects:openstack/governance+status:open > [5] https://docs.openstack.org/project-team-guide/testing.html#checking-status-of-other-job-results > [6] https://review.opendev.org/c/openstack/governance/+/838004 > [7] http://lists.openstack.org/pipermail/openstack-discuss/2022-April/028199.html > [8] http://lists.openstack.org/pipermail/openstack-discuss/2022-April/028258.html > [9] https://etherpad.opendev.org/p/rbac-zed-ptg > [10] https://review.opendev.org/c/openstack/governance/+/838601 > [11] https://review.opendev.org/c/openstack/governance/+/837891 > [12] https://review.opendev.org/c/openstack/governance/+/836888 > [13] http://lists.openstack.org/pipermail/openstack-discuss/2022-March/027626.html > [14] https://etherpad.opendev.org/p/zuul-config-error-openstack > [15] https://review.opendev.org/c/openstack/governance/+/837781 > [16] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss > [17] http://eavesdrop.openstack.org/#Technical_Committee_Meeting > > -gmann > From hiwkby at yahoo.com Sat Apr 23 21:32:17 2022 From: hiwkby at yahoo.com (Hirotaka Wakabayashi) Date: Sat, 23 Apr 2022 21:32:17 +0000 (UTC) Subject: Openstack Trove - Polling request timed out References: <2131751104.503313.1650749537337.ref@mail.yahoo.com> Message-ID: <2131751104.503313.1650749537337@mail.yahoo.com> Hello Manish! "openstack database instance show" command shows you the current status of an instance. If the status is not ACTIVE, guest-agent fails to start the database service for some reasons. Please see the following page for further debug. https://docs.openstack.org/trove/latest/admin/troubleshooting.html FYI: I think the error occurs here in your case. 
https://opendev.org/openstack/trove/src/branch/master/trove/taskmanager/models.py#L434 Best Regards, Hirotaka On Saturday, April 23, 2022, 12:05:26 AM GMT+9, openstack-discuss-request at lists.openstack.org wrote: Send openstack-discuss mailing list submissions to ??? openstack-discuss at lists.openstack.org To subscribe or unsubscribe via the World Wide Web, visit ??? http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss or, via email, send a message with subject or body 'help' to ??? openstack-discuss-request at lists.openstack.org You can reach the person managing the list at ??? openstack-discuss-owner at lists.openstack.org When replying, please edit your Subject line so it is more specific than "Re: Contents of openstack-discuss digest..." Today's Topics: ? 1. Openstack Trove - Polling request timed out (Manish Bharti) ---------------------------------------------------------------------- Message: 1 Date: Fri, 22 Apr 2022 17:00:31 +0530 From: Manish Bharti To: openstack-discuss at lists.openstack.org Subject: Openstack Trove - Polling request timed out Message-ID: ??? Content-Type: text/plain; charset="utf-8" Dear Team, We are trying to deploy OpenStack environment for our application and while deploying trove service we are facing the below error(attached screenshot)- Traceback (most recent call last): ? File "/usr/lib/python3/dist-packages/trove/common/utils.py", line 207, in wait_for_task ? ? return polling_task.wait() ? File "/usr/lib/python3/dist-packages/eventlet/event.py", line 125, in wait ? ? result = hub.switch() ? File "/usr/lib/python3/dist-packages/eventlet/hubs/hub.py", line 313, in switch ? ? return self.greenlet.switch() ? File "/usr/lib/python3/dist-packages/oslo_service/loopingcall.py", line 154, in _run_loop ? ? idle = idle_for_func(result, self._elapsed(watch)) ? File "/usr/lib/python3/dist-packages/oslo_service/loopingcall.py", line 349, in _idle_for ? ? 
raise LoopingCallTimeOut( oslo_service.loopingcall.LoopingCallTimeOut: ? ? Looping call timed out after 870.99 seconds During handling of the above exception, another exception occurred: Traceback (most recent call last): ? File "/usr/lib/python3/dist-packages/trove/taskmanager/models.py", line 434, in wait_for_instance ? ? utils.poll_until(self._service_is_active, ? File "/usr/lib/python3/dist-packages/trove/common/utils.py", line 223, in poll_until ? ? return wait_for_task(task) ? File "/usr/lib/python3/dist-packages/trove/common/utils.py", line 209, in wait_for_task ? ? raise exception.PollTimeOut trove.common.exception.PollTimeOut: Polling request timed out. Please help us on this issue. -- Thank you , Manish Bharti Jodhpur, Raj - 342001. Contact:8875033000 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: trove_error.jpeg Type: image/jpeg Size: 117432 bytes Desc: not available URL: ------------------------------ Subject: Digest Footer _______________________________________________ openstack-discuss mailing list openstack-discuss at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss ------------------------------ End of openstack-discuss Digest, Vol 42, Issue 97 ************************************************* From tonyliu0592 at hotmail.com Sun Apr 24 04:01:48 2022 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Sun, 24 Apr 2022 04:01:48 +0000 Subject: retention policy for deleted resource in database Message-ID: Hi, I see the record for deleted resource stay in database. What's the retention policy for those records? Is it configurable? Any manual cleanup is required? Thanks! Tony From tonyliu0592 at hotmail.com Sun Apr 24 04:06:52 2022 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Sun, 24 Apr 2022 04:06:52 +0000 Subject: [cinder] how cinder gets the volume name when restore from a backup? 
Message-ID: Hi, Here is what I do. * Create a volume. * Create a backup of this volume. * Delete the volume. * Restore the volume from backup. The restored volume has a different UUID, but the same name. I don't see the volume name stored in the backup metadata. How does Cinder know the volume name when restoring it from backup, given the volume is already deleted? Thanks! Tony From noonedeadpunk at gmail.com Sun Apr 24 05:17:27 2022 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Sun, 24 Apr 2022 07:17:27 +0200 Subject: retention policy for deleted resource in database In-Reply-To: References: Message-ID: Hi Tony, There is no default retention policy defined. To clean up such records for cinder you can run the command cinder-manage db purge. For nova it's a bit more complicated, as first you would archive these records and only then purge them from the shadow tables. This is done with the nova-manage command. You can check the docs regarding its usage: https://docs.openstack.org/nova/latest/cli/nova-manage.html#db-archive-deleted-rows On Sun, 24 Apr 2022 at 6:04, Tony Liu wrote: > Hi, > > I see the record for deleted resource stay in database. > What's the retention policy for those records? > Is it configurable? > Any manual cleanup is required? > > > Thanks! > Tony > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wodel.youchi at gmail.com Sun Apr 24 09:52:54 2022 From: wodel.youchi at gmail.com (wodel youchi) Date: Sun, 24 Apr 2022 10:52:54 +0100 Subject: [Xena][Cloudkitty] I have no rating in the dashboard In-Reply-To: References: Message-ID: Hi, Anyone? Regards. On Thu, 21 Apr 2022 at 15:22, wodel youchi wrote: > Hi, > > I've deployed Cloudkitty using Kolla-ansible, then I followed this example > to test if the service is working : > https://docs.openstack.org/cloudkitty/xena/user/rating/hashmap.html#examples > > I don't fully understand the service. > > I waited a day, and until now I don't have any results.
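Returning to the retention-policy thread above: the reason nova needs two steps is that soft-deleted rows are first moved out of the main tables into shadow tables (`nova-manage db archive_deleted_rows`) and only then permanently removed (`nova-manage db purge`). A purely schematic, stdlib-only sketch of that two-phase flow — the dict-based "tables" are illustrative, not nova's actual schema or code:

```python
from datetime import datetime, timedelta


def archive_deleted_rows(main_table, shadow_table):
    """Phase 1: move soft-deleted rows from the main table to the shadow table."""
    moved = [row for row in main_table if row["deleted"]]
    main_table[:] = [row for row in main_table if not row["deleted"]]
    shadow_table.extend(moved)
    return len(moved)  # number of rows archived in this pass


def purge_shadow(shadow_table, before):
    """Phase 2: permanently drop shadow rows deleted before the cutoff."""
    remaining = [row for row in shadow_table if row["deleted_at"] >= before]
    purged = len(shadow_table) - len(remaining)
    shadow_table[:] = remaining
    return purged
```

Running the archive step alone keeps the history around (in the shadow tables); only the purge step with a cutoff actually frees the records, which is why the ordering in the linked docs matters.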
> > The only error I had is this one: > 2022-04-20 11:57:37.834 31 ERROR cloudkitty.orchestrator [-] [scope: > a50e58ff494441b3b78ab096bca747d9, worker: 0 > ] Error while collecting metric image.size at timestamp 2022-04-01 > 00:00:00+01:00: {'cause': "Metrics can't being aggregated", 'reason': > 'Granularities are missing', 'detail': [['image.size', 'mean', 3600.0]]} > (HTTP 400). Exiting.: gnocchiclient.exceptions.BadRequest: {'cause': > "Metrics can't being aggregated", 'reason': 'Granularities are missing', > 'detail': [['image.size', 'mean', 3600.0]]} (HTTP 400) > 2022-04-20 11:57:37.834 31 ERROR cloudkitty.orchestrator Traceback (most > recent call last): > 2022-04-20 11:57:37.834 31 ERROR cloudkitty.orchestrator File > "/var/lib/kolla/venv/lib/python3.6/site-packages/cloudkitty/orchestrator.py", > line 308, in _get_result > 2022-04-20 11:57:37.834 31 ERROR cloudkitty.orchestrator return > self._collect(metric, timestamp) > 2022-04-20 11:57:37.834 31 ERROR cloudkitty.orchestrator File > "/var/lib/kolla/venv/lib/python3.6/site-packages/cloudkitty/orchestrator.py", > line 297, in _collect > 2022-04-20 11:57:37.834 31 ERROR cloudkitty.orchestrator > self._tenant_id, > 2022-04-20 11:57:37.834 31 ERROR cloudkitty.orchestrator File > "/var/lib/kolla/venv/lib/python3.6/site-packages/cloudkitty/collector/__init__.py", > line 240, in retrieve > 2022-04-20 11:57:37.834 31 ERROR cloudkitty.orchestrator > q_filter=q_filter, > 2022-04-20 11:57:37.834 31 ERROR cloudkitty.orchestrator File > "/var/lib/kolla/venv/lib/python3.6/site-packages/cloudkitty/collector/gnocchi.py", > line 450, in fetch_all > 2022-04-20 11:57:37.834 31 ERROR cloudkitty.orchestrator > q_filter=q_filter, > 2022-04-20 11:57:37.834 31 ERROR cloudkitty.orchestrator File > "/var/lib/kolla/venv/lib/python3.6/site-packages/cloudkitty/collector/gnocchi.py", > line 329, in _fetch_metric > 2022-04-20 11:57:37.834 31 ERROR cloudkitty.orchestrator measurements > = self._conn.aggregates.fetch(op,
**agg_kwargs) > 2022-04-20 11:57:37.834 31 ERROR cloudkitty.orchestrator File > "/var/lib/kolla/venv/lib/python3.6/site-packages/gnocchiclient/v1/aggregates.py", > line 72, in fetch > 2022-04-20 11:57:37.834 31 ERROR cloudkitty.orchestrator > data=ujson.dumps(data)).json() > 2022-04-20 11:57:37.834 31 ERROR cloudkitty.orchestrator File > "/var/lib/kolla/venv/lib/python3.6/site-packages/gnocchiclient/v1/base.py", > line 41, in _post > 2022-04-20 11:57:37.834 31 ERROR cloudkitty.orchestrator return > self.client.api.post(*args, **kwargs) > 2022-04-20 11:57:37.834 31 ERROR cloudkitty.orchestrator File > "/var/lib/kolla/venv/lib/python3.6/site-packages/keystoneauth1/adapter.py", > line 401, in post > 2022-04-20 11:57:37.834 31 ERROR cloudkitty.orchestrator return > self.request(url, 'POST', **kwargs) > 2022-04-20 11:57:37.834 31 ERROR cloudkitty.orchestrator File > "/var/lib/kolla/venv/lib/python3.6/site-packages/gnocchiclient/client.py", > line 52, in request > 2022-04-20 11:57:37.834 31 ERROR cloudkitty.orchestrator raise > exceptions.from_response(resp, method) > 2022-04-20 11:57:37.834 31 ERROR cloudkitty.orchestrator > gnocchiclient.exceptions.BadRequest: {'cause': "Metrics can't being > aggregated", 'reason': 'Granularities are missing', 'detail': > [['image.size', 'mean', 3600.0]]} (HTTP 400) > 2022-04-20 11:57:37.834 31 ERROR cloudkitty.orchestrator > > > I searched the web and I found this post : > https://storyboard.openstack.org/#!/story/2008598 > > Could that be the problem? If so , how to correct it? 
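A side note on the "Granularities are missing" error quoted above: gnocchi's aggregates API rejects a request when the (metric, aggregation, granularity) triple being asked for — here ('image.size', 'mean', 3600.0) — is not present in the metric's archive policy, so the usual fix is to make the granularity and aggregation in cloudkitty's metrics.yml match the archive policy used by Ceilometer/Gnocchi (or extend that policy). A toy sketch of that validation, not gnocchi's actual code:

```python
class BadRequest(Exception):
    """Illustrative stand-in for gnocchiclient.exceptions.BadRequest."""

    def __init__(self, detail):
        self.detail = detail
        # Message shape mirrors the log quoted above.
        super().__init__({"cause": "Metrics can't being aggregated",
                          "reason": "Granularities are missing",
                          "detail": detail})


def check_aggregates(requested, archive_policy):
    """Reject any (metric, aggregation, granularity) triple the policy lacks.

    archive_policy maps metric name -> set of (aggregation, granularity)
    pairs actually stored for that metric (illustrative structure only).
    """
    missing = [
        [metric, agg, gran]
        for metric, agg, gran in requested
        if (agg, gran) not in archive_policy.get(metric, set())
    ]
    if missing:
        raise BadRequest(missing)
```

In other words, if the deployment's archive policy only stores image.size at, say, 300s granularity, a cloudkitty collector configured for 3600s will always get this HTTP 400 until the two sides agree.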
> > i tried to create this file on my kolla-ansible deployer > vim /etc/kolla/config/cloudkitty.conf > [DEFAULT] > debug = True > > [collect] > metrics_conf = /etc/*cloudkitty-api*/metrics.yml (I don't know if this > path is correct, since there is no /etc/cloudkitty on the nodes) > > And I created a directory like this : > mkdir /etc/kolla/config/cloudkitty/ > wget > https://raw.githubusercontent.com/openstack/cloudkitty/master/etc/cloudkitty/metrics.yml > -O /etc/kolla/config/cloudkitty/metrics.yml > > I executed then kolla-ansible -i multinode reconfigure -t cloudkitty > > All I have in the logs is : > 2022-04-21 15:13:32.691 8 INFO cotyledon._service_manager [-] Child 1304 > exited with status 1 > 2022-04-21 15:13:32.693 8 INFO cotyledon._service_manager [-] Child 1310 > exited with status 1 > 2022-04-21 15:13:32.695 8 INFO cotyledon._service_manager [-] Child 1312 > exited with status 1 > 2022-04-21 15:13:32.697 8 INFO cotyledon._service_manager [-] Child 1314 > exited with status 1 > 2022-04-21 15:13:33.097 8 INFO cotyledon._service_manager [-] Child 1325 > exited with status 1 > 2022-04-21 15:13:33.101 8 INFO cotyledon._service_manager [-] Child 1327 > exited with status 1 > 2022-04-21 15:13:33.102 8 INFO cotyledon._service_manager [-] Forking too > fast, sleeping > 2022-04-21 15:13:38.109 8 INFO cotyledon._service_manager [-] Child 1329 > exited with status 1 > 2022-04-21 15:13:38.112 8 INFO cotyledon._service_manager [-] Child 1331 > exited with status 1 > 2022-04-21 15:13:38.114 8 INFO cotyledon._service_manager [-] Child 1333 > exited with status 1 > > Any ideas? > > > Regards. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From satish.txt at gmail.com Mon Apr 25 01:23:44 2022 From: satish.txt at gmail.com (Satish Patel) Date: Sun, 24 Apr 2022 21:23:44 -0400 Subject: [skyline] skyline-console whl package question Message-ID: Folks, I am compiling skyline-apiserver on my bare metal server and found the Makefile pulling the skyline-console tarball from the following location, but that package doesn't contain the latest merged patches. Is the CI job that compiles this tarball broken? https://tarballs.opendev.org/openstack/skyline-console/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From gagehugo at gmail.com Mon Apr 25 01:25:35 2022 From: gagehugo at gmail.com (Gage Hugo) Date: Sun, 24 Apr 2022 20:25:35 -0500 Subject: [openstack-helm] No Meeting This Week Message-ID: Hey team, Since I will be busy during our normal meeting time this week and there's nothing on the agenda, the meeting for this week has been cancelled. We will meet again next week at the usual time. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdhasman at redhat.com Mon Apr 25 05:01:10 2022 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Mon, 25 Apr 2022 10:31:10 +0530 Subject: [cinder] how cinder gets the volume name when restore from a backup? In-Reply-To: References: Message-ID: Hi Tony, On Sun, Apr 24, 2022 at 9:43 AM Tony Liu wrote: > Hi, > > Here is what I do. > * Create a volume. > * Create a backup of this volume. > * Delete the volume. > * Restore the volume from backup. > > There are different ways to do the above operations, 1) from cinderclient, 2) from openstackclient, 3) hitting the API manually or with a script, etc., so it's always good to be explicit about how the steps were done i.e. mentioning the way used to perform the above operations. Also it's helpful to provide the OpenStack release you're using as different releases have different behavior. I will use OpenStack Zed (master) as reference for the observations below.
Lastly providing the volume and backup backend is also useful information (however not very relevant in this case). > The restored volume has different UUID, but the same name. > AFAICS, if you're using cinderclient or hitting API directly, there's a "name" parameter[1][2] you can provide that will be used as the name of the restored volume. If you don't provide the name parameter, it will create the volume with the following name "restore_backup_"[3]. I don't see volume name is stored in backup metadata. > It's not taken from backup metadata. > How does Cinder know the volume name when restore it from backup, > given the volume is already deleted? > > > Thanks! > Tony > > > [1] https://opendev.org/openstack/python-cinderclient/src/branch/master/cinderclient/v3/shell.py#L227-L232 [2] https://opendev.org/openstack/cinder/src/branch/master/cinder/api/contrib/backups.py#L214 [3] https://opendev.org/openstack/cinder/src/branch/master/cinder/backup/api.py#L384-L386 Thanks and Regards Rajat Dhasmana -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Mon Apr 25 09:54:42 2022 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 25 Apr 2022 11:54:42 +0200 Subject: [largescale-sig] Skipping meeting this week Message-ID: <02cbc3a0-d90b-fc88-7dea-b73229e63786@openstack.org> Hi everyone, Due to a shortage of participants and meeting chairs, we will skip the meeting planned this week for the Large Scale SIG. Our next meeting will be May 11th at 15UTC in #openstack-operators. You can add topics to discuss to our agenda etherpad: https://etherpad.opendev.org/p/large-scale-sig-meeting Regards, -- Thierry Carrez (ttx) From firashlel750 at gmail.com Mon Apr 25 09:44:28 2022 From: firashlel750 at gmail.com (Firas Hlel) Date: Mon, 25 Apr 2022 10:44:28 +0100 Subject: error Message-ID: Firas Hlel jeu. 21 avr. 09:02 (il y a 4 jours) ? openstack-discuss Hi all, I am working on a project Openstack in linux ubuntu 20.04. 
I want to create a Hadoop cluster with one master node and three worker nodes, and I have a problem with a cluster that doesn't work. Status ERROR: Creating cluster failed for the following reason(s): Failed to create trust Error ID: ef5e8b0a-8e6d-4878-bebb-f37f4fa50a88, Failed to create trust Error ID: 43157255-86af-4773-96c1-a07ca7ac66ed. links: https://docs.openstack.org/devstack/latest/ File local.conf : [[local|localrc]] ADMIN_PASSWORD=secret DATABASE_PASSWORD=secret RABBIT_PASSWORD=secret SERVICE_PASSWORD=$ADMIN_PASSWORD HOST_IP=10.0.2.15 LOGFILE=$DEST/logs/stack.sh.log SWIFT_REPLICAS=1 SWIFT_HASH=66a3d6b56c1f479c8b4e70ab5c2000f5 SWIFT_DATA_DIR=$DEST/data enable_plugin sahara https://opendev.org/openstack/sahara enable_plugin sahara-dashboard https://opendev.org/openstack/sahara-dashboard enable_plugin heat https://opendev.org/openstack/heat Can you guys advise me about these errors? Is there anything to worry about? -------------- next part -------------- An HTML attachment was scrubbed... URL: From iurygregory at gmail.com Mon Apr 25 14:42:37 2022 From: iurygregory at gmail.com (Iury Gregory) Date: Mon, 25 Apr 2022 11:42:37 -0300 Subject: [ironic][ptg] Zed PTG Summary Message-ID: Hello Ironicers! Sorry for the delay in providing a summary of our PTG =) First of all, thank you to all contributors that took some time to join our sessions! During the sessions we had a peak of 13 attendees on the last three days. Day1 We only had two topics: Feedback about the Yoga Cycle and What should we do about specs. During the first topic we did a retrospective of the Yoga Cycle; we discussed the good and bad things that happened during the cycle and how we can make things better. In the discussion about what we should do about specs we came up with some ideas to improve our process for when we think a spec should be required. Day2: On this day most of the discussions were around topics related to managing machines.
We discussed how we can decrease the cost of data centers that need to keep all their machines on even when they are not active or are taking minimal load: we would provide a way to power-tune nodes that are already deployed (from the ironic perspective this is a way to reconfigure a node that is already active), and we also talked about turning down power usage if we are able to identify that IPA is idle. The custom deploy timeout topic had no strong objections; since we have a single configuration that handles the timeout for all steps, we think we can improve this, but we still need to define some details related to the implementation after we have the RFE. We discussed how we can regenerate the inspector.ipxe configuration if we notice some changes in the configuration of the deployment. Day3: We started this day looking at the survey results; this gave us some ideas on possible areas we should improve. The ironic safeguard topic focused on two main things: queuing (limit the max number of concurrent cleaning operations) and data disk protection (only clean the root disk, or give a list of skip-disks/disks-to-clean). The community decided that this seems like a good idea and we defined some of the possible paths forward related to how it could be implemented. The redfish gateway and related ideas topic brought an interesting idea to have an ironic driver that can execute a "script" that is required for their HW to start working properly; there is a recording from the first meeting that can provide more details =). The per-node clean steps topic had no objections; we think this will improve the operators' experience in scenarios where they need specific steps to run on a node that has a different disk configuration. Day4: Most of the topics on this day were focused on networking. We discussed the status of OVN DHCP support and how we will move forward on our side to make things work after Neutron and OVN have all the necessary bits in place.
During the next topic we talked about adding device configuration capabilities to networking-baremetal, since in multi-tenant BMaaS there is a need to configure the ToR network devices (access/edge switches) and many vendors have abandoned the ML2 mechanism plug-ins that supported this; now we are looking to add support for new mechanisms with more features that could improve the operators' experience. We focused on discussing the pros and cons of NETCONF and YANG, as well as other alternative solutions. The other topics we discussed were: - netboot deprecation: we discussed how we should move forward with some of our testing for partition images + UEFI. - Bluefield DPU: not much discussion since we didn't have many folks interested in the topic. - Anaconda driver: we talked about how we can get a CI for testing the driver. You can find more information about the topics and the discussions in our etherpad: https://etherpad.opendev.org/p/ironic-zed-ptg -- *Att[]'s Iury Gregory Melo Ferreira* *MSc in Computer Science at UFCG* *Part of the ironic-core and puppet-manager-core team in OpenStack* *Software Engineer at Red Hat Czech* *Social*: https://www.linkedin.com/in/iurygregory *E-mail: iurygregory at gmail.com* -------------- next part -------------- An HTML attachment was scrubbed... URL: From obondarev at mirantis.com Mon Apr 25 15:07:17 2022 From: obondarev at mirantis.com (Oleg Bondarev) Date: Mon, 25 Apr 2022 19:07:17 +0400 Subject: [neutron] Bug Deputy Report Apr 18 - Apr 24 Message-ID: Hello Team, Bug report for the week of Apr 18 is below. Two OVN bugs (including the Invalid one) need triage from the OVN team.
High
------
- https://bugs.launchpad.net/neutron/+bug/1969615 - OVS: flow loop is created with openvswitch version 2.16 - Opinion - seems more like an issue in OVS than a bug in Neutron - Unassigned

Undecided
--------------
- https://bugs.launchpad.net/neutron/+bug/1969592 - [OVN] Frequent DB leader changes causes 'VIF creation failed' on nova side - New - Unassigned

Invalid
---------
- https://bugs.launchpad.net/neutron/+bug/1969354 - ovn-controller don't update new flows - Invalid - but the reporter updated with additional info - Unassigned

Thanks, Oleg -------------- next part -------------- An HTML attachment was scrubbed... URL: From katonalala at gmail.com Mon Apr 25 15:18:19 2022 From: katonalala at gmail.com (Lajos Katona) Date: Mon, 25 Apr 2022 17:18:19 +0200 Subject: [Neutron][neutron-vpnaas] proposing Mohammed Naser for neutron-vpnaas core reviewer Message-ID: Hi, I would like to propose Mohammed Naser (mnaser) as a core reviewer to neutron-vpnaas. He and his company use neutron-vpnaas in production and volunteered to help in the maintenance of it. You can vote/feedback in this email thread. If there is no objection by 6th of May, we will add Mohammed to the core list. Thanks Lajos -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Mon Apr 25 15:20:58 2022 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 25 Apr 2022 11:20:58 -0400 Subject: [all][neutron][neutron-vpnaas] Maintainers needed In-Reply-To: References: Message-ID: Hi there, Just wanted to bring up that I've actually gone ahead and fixed the functional gates; I'd also like to volunteer to maintain it. We've got some use of it and we've also been contributing fixes for it for quite some time: https://review.opendev.org/q/project:openstack/neutron-vpnaas+owner:mnaser%2540vexxhost.com Thanks for keeping it moving during that period of time. I hope others chime in too.
Thanks Mohammed On Mon, Apr 11, 2022 at 6:10 AM Lajos Katona wrote: > > Hi, > > In the last few cycles neutron-vpnaas has had no serious maintainers, and most patches merged were from the Neutron core team or from the Release team. > Recently even neutron-vpnaas gate jobs started to fail. > During the Zed PTG we discussed this topic (see [1]). > > For the maintenance we need someone to be the contact person for the > project, who takes care of the project's CI, reviews patches, and answers bugs. > Of course that's only a minimal requirement. If the new maintainer works on > new features for the project, it's even better :) > > If we don't have any new maintainer(s) before milestone Zed-2, which is > the July 11 - July 15 week according to [2], we will start marking neutron-vpnaas > as deprecated, and in the next cycle (AA, or perhaps 2023.1) we will propose > to retire the project. > > So if You are using this project now, or if You have customers who are > using it, please consider the possibility of maintaining it. Otherwise, please be > aware that it is highly possible that the project will be deprecated and moved > out from the official OpenStack projects. > > [1]: https://etherpad.opendev.org/p/neutron-zed-ptg#L201 > [2]: https://releases.openstack.org/zed/schedule.html > > Lajos Katona (lajoskatona) -- Mohammed Naser VEXXHOST, Inc. From gsteinmuller at vexxhost.com Mon Apr 25 15:31:59 2022 From: gsteinmuller at vexxhost.com (=?UTF-8?Q?Guilherme_Steinm=C3=BCller?=) Date: Mon, 25 Apr 2022 12:31:59 -0300 Subject: [Neutron][neutron-vpnaas] proposing Mohammed Naser for neutron-vpnaas core reviewer In-Reply-To: References: Message-ID: +1 ! On Mon, Apr 25, 2022 at 12:27 PM Lajos Katona wrote: > Hi, > I would like to propose Mohammed Naser (mnaser) as a core reviewer to > neutron-vpnaas. > He and his company use neutron-vpnaas in production and volunteered to > help in the maintenance of it. > > You can vote/feedback in this email thread.
> If there is no objection by 6th of May, we will add Mohammed to the core > list. > > Thanks > Lajos > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon Apr 25 17:51:33 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 25 Apr 2022 12:51:33 -0500 Subject: [all] Resend New CFN(Computing Force Network) SIG Proposal In-Reply-To: References: Message-ID: <18061d9a035.cb6bda15115439.2847955073200685698@ghanshyammann.com> Thanks, Niu, for the proposal, and sorry for the delay in response. I have raised this proposal to the TC members and asked them to check it. Overall the proposal seems interesting to me, but a few initial queries are inline. ---- On Wed, 13 Apr 2022 00:34:30 -0500 niujie wrote ---- > > Hi all > I sent an email yesterday about the New CFN (Computing Force Network) SIG Proposal. I tried to recall it because there was a typo in the email address; then I got a recall-failed msg, so I assume the email was sent out successfully, and plan to keep it as it was. > But I found that the "recall" action was logged in pipermail, which might cause misunderstanding. We are sure about proposing a new SIG, so I'm sending this again; sorry for the email flood :) > > I'm from China Mobile. China Mobile is currently working on building a new information infrastructure focusing on connectivity, computing power, and capabilities; this new information infrastructure is called Computing Force Network. We think the OpenStack community, which gathers global wisdom together, is a perfect platform to discuss topics like CFN, so we are proposing to create a new SIG for CFN (Computing Force Network). Below is a brief introduction to CFN and the initial SIG scope. > With the flourishing of new business scenarios such as hybrid cloud, multi-cloud, AI, big data processing, and edge computing, building a new information infrastructure based on multiple key technologies that converge cloud and network will better support global digital transformation.
This new infrastructure not only relates to cloud; it is getting more and more connected with the network, and at the same time we also need to consider how to converge multiple technologies like AI, blockchain, big data, and security to provide this all-in-one service. > Computing Force Network (CFN) is a new information infrastructure that is based on the network, focused on computing, and deeply converges Artificial intelligence, Blockchain, Cloud, Data, Network, Edge computing, End application, and Security (ABCDNETS), providing all-in-one services. > Xiaodong Duan, Vice President of the China Mobile Research Institute, introduced the vision and architecture of Computing Force Network in the November 2021 OpenInfra Live keynotes with his presentation "Connection + Computing + Capability Opens a New Era of Digital Infrastructure", in which he proposed the new era of CFN. > We are expecting to work with OpenStack on how to build this new information infrastructure, and on how to promote the development and implementation of next-generation infrastructure: achieving ubiquitous computing force, computing & network convergence, intelligent orchestration, and all-in-one service. Then computing force will become a common utility like water and electricity step by step; computing force will be ready for access upon use and connected through a single entry point. > The above vision of CFN, from a technical perspective, will mainly focus on unified management and orchestration of a computing + network integrated system, with computing and network deeply converged in the architecture, form, and protocol aspects, bringing potential changes to OpenStack components. CFN is aiming to achieve seamless migration of any application between any heterogeneous platforms; this is a challenge for the industry currently, and we feel that the pursuit of CFN could potentially contribute to the development and evolution of OpenStack. Yes, it will require changes to OpenStack components, but we will see based on the exact use case and OpenStack component scope. Does this include the application migration tooling in OpenStack?
Is this include the application migration tooling in OpenStack? > In this CFN SIG, we will mainly focus on discussing how tobuild the new information infrastructure of CFN, related key technologies, andwhat's the impact on OpenStack brought by the network & could convergencetrend , the topics are including but not limited to: > 1, Acomputing basement for unified management of container, VM and Bare Metal > 2,Computing infrastructure which eliminated the difference between heterogeneoushardware > 3,Measurement criteria and scheduling scheme based on unified computinginfrastructure > 4,Network solutions for SDN integrating smart NIC for data center > 5,Unified orchestration & management for "network + cloud", and"cloud + edge + end" integrated scheduling solution > We will have regular meetings to investigate and discussbusiness scenarios, development trend, technical scheme, release technicaldocuments, technical proposal and requirements for OpenStack Projects, andpropose new project when necessary. > We will also collaborate with other open source projectslike LFN, CNCF, LFE, to have a consistent plan across communities, and alignwith global standardization organization like ETSI, 3GPP, IETF, to promote CFNrelated technical scheme become the standard in industry. > If you have any thoughts, interests, questions,requirements, we can discuss by this mailing list. Thanks for the detailed information about the SIG scope. From the above, I understood that it will not be just changed to the OpenStack existing component but also new source code components also, do you have such list/proposal for a new component or you would like to continue discussing it and based on that you will get to know. How you are thinking about their (new component if any) releases like a coordinated release with OpenStack or independent. If coordinated then it is more than SIG scope and might be good to add a new project. 
Seeing the scope of this proposal (which seems very wide), I think it is not required to answer all of these now. Overall I am OK to start it as a SIG, and based on discussion/progress evaluation we will get to know more about new components, requirements, etc., and then we can change it from a SIG to a new project under OpenStack or other governance (based on the core/requirements/use cases it produces). -gmann > Any suggestions are welcome, and we are really hoping to hear from anyone, and to work with you. > > Jie Niu > China Mobile > From skaplons at redhat.com Mon Apr 25 19:16:39 2022 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 25 Apr 2022 21:16:39 +0200 Subject: [Neutron][neutron-vpnaas] proposing Mohammed Naser for neutron-vpnaas core reviewer In-Reply-To: References: Message-ID: <2626416.mvXUDI8C0e@p1> Hi, On Monday, 25 April 2022 17:18:19 CEST Lajos Katona wrote: > Hi, > I would like to propose Mohammed Naser (mnaser) as a core reviewer to > neutron-vpnaas. > He and his company use neutron-vpnaas in production and volunteered to > help in the maintenance of it. > > You can vote/feedback in this email thread. > If there is no objection by 6th of May, we will add Mohammed to the core > list. > > Thanks > Lajos > +1 Great to see Mohammed stepping up to maintain neutron-vpnaas. Thanks Mohammed :) -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part.
URL: From gmann at ghanshyammann.com Mon Apr 25 19:47:39 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 25 Apr 2022 14:47:39 -0500 Subject: [all][tc] Technical Committee next weekly meeting on April 28, 2022 at 1500 UTC Message-ID: <1806243e9b4.dcfd01f3118929.1373688832929150163@ghanshyammann.com> Hello Everyone, The Technical Committee's next weekly meeting is scheduled for April 28, 2022, at 1500 UTC. If you would like to add topics for discussion, please add them to the below wiki page by Wednesday, April 27, at 2100 UTC. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting -gmann From tonyliu0592 at hotmail.com Tue Apr 26 02:56:49 2022 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Tue, 26 Apr 2022 02:56:49 +0000 Subject: retention policy for deleted resource in database In-Reply-To: References: Message-ID: Thank you Dmitriy! I am a bit surprised. I'd expect records for deleted resources to be cleaned up by some retention policy. Will need to make sure such clean-up is taken care of by daily operations. BTW, is it mentioned in any doc? I probably missed it. Tony ________________________________________ From: Dmitriy Rabotyagov Sent: April 23, 2022 10:17 PM Cc: openstack-dev at lists.openstack.org Subject: Re: retention policy for deleted resource in database Hi Tony, There is no default retention policy defined. To clean up such records for cinder you can run the command cinder-manage db purge. For nova it's a bit more complicated, as first you would archive these records and only then purge them from the shadow tables. This is done with the nova-manage command. You can check the docs regarding its usage: https://docs.openstack.org/nova/latest/cli/nova-manage.html#db-archive-deleted-rows On Sun, 24 Apr 2022 at 6:04, Tony Liu wrote: Hi, I see that records for deleted resources stay in the database. What's the retention policy for those records? Is it configurable? Any manual cleanup is required? Thanks!
Tony From tonyliu0592 at hotmail.com Tue Apr 26 03:00:30 2022 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Tue, 26 Apr 2022 03:00:30 +0000 Subject: [cinder] how cinder gets the volume name when restore from a backup? In-Reply-To: References: Message-ID: Hi Rajat, I did those steps on the dashboard with Xena. I don't think the dashboard saves the volume name; it must come from the API. But I can't figure out where the volume name is retrieved from when restoring the deleted volume from a backup. Thanks! Tony ________________________________________ From: Rajat Dhasmana Sent: April 24, 2022 10:01 PM To: Tony Liu Cc: openstack-dev at lists.openstack.org Subject: Re: [cinder] how cinder gets the volume name when restore from a backup? Hi Tony, On Sun, Apr 24, 2022 at 9:43 AM Tony Liu wrote: Hi, Here is what I do. * Create a volume. * Create a backup of this volume. * Delete the volume. * Restore the volume from backup. There are different ways to do the following operations: 1) from cinderclient, 2) from openstackclient, 3) hitting the API manually or with a script, etc., so it's always good to be explicit about how the steps were done, i.e. mentioning the way used to perform the above operations. Also, it's helpful to provide the OpenStack release you're using, as different releases have different behavior. I will use OpenStack Zed (master) as the reference for the observations below. Lastly, providing the volume and backup backend is also useful information (however not very relevant in this case). The restored volume has a different UUID, but the same name. AFAICS, if you're using cinderclient or hitting the API directly, there's a "name" parameter[1][2] you can provide that will be used as the name of the restored volume. If you don't provide the name parameter, it will create the volume with the following name "restore_backup_"[3]. I don't see that the volume name is stored in backup metadata. It's not taken from backup metadata.
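For reference, the fallback-naming behaviour Rajat describes can be sketched in a few lines. This is only an illustration, not Cinder's actual code; the helper name is made up, and the exact fallback pattern ("restore_backup_<backup_id>") is assumed from the code linked in [3].

```python
from typing import Optional

# Rough sketch (not Cinder's real implementation) of the restore naming
# behaviour described above: an explicit "name" parameter wins; otherwise
# a fallback name is derived from the backup ID. The pattern
# "restore_backup_<backup_id>" is assumed from the code linked in [3].
def restored_volume_name(backup_id: str, name: Optional[str] = None) -> str:
    if name is not None:
        return name
    # The deleted volume's original name is not stored in backup metadata,
    # so there is nothing else to fall back on.
    return "restore_backup_%s" % backup_id

print(restored_volume_name("1234-abcd"))                    # restore_backup_1234-abcd
print(restored_volume_name("1234-abcd", name="my-volume"))  # my-volume
```

If the dashboard passes the original name as the "name" parameter when it submits the restore request, that would explain why the restored volume keeps the same name even though the volume itself was already deleted.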
How does Cinder know the volume name when restoring it from a backup, given the volume is already deleted? Thanks! Tony [1] https://opendev.org/openstack/python-cinderclient/src/branch/master/cinderclient/v3/shell.py#L227-L232 [2] https://opendev.org/openstack/cinder/src/branch/master/cinder/api/contrib/backups.py#L214 [3] https://opendev.org/openstack/cinder/src/branch/master/cinder/backup/api.py#L384-L386 Thanks and Regards Rajat Dhasmana From katonalala at gmail.com Tue Apr 26 06:48:45 2022 From: katonalala at gmail.com (Lajos Katona) Date: Tue, 26 Apr 2022 08:48:45 +0200 Subject: [all][neutron][neutron-vpnaas] Maintainers needed In-Reply-To: References: Message-ID: Hi, Thanks Mohammed for jumping in to keep neutron-vpnaas alive. Of course we are open to helping any team, company, or individual with the maintenance of it, or with adding new features, like making neutron-vpnaas work with OVN. Lajos Mohammed Naser wrote (on Mon, 25 Apr 2022 at 17:21): > Hi there, > > Just wanted to bring up that I've actually gone ahead and fixed the > functional gates, I'd also like to volunteer to maintain it. We've > got some use of it and we've also been contributing fixes for it for > quite some time: > > > https://review.opendev.org/q/project:openstack/neutron-vpnaas+owner:mnaser%2540vexxhost.com > > Thanks for keeping it moving during that period of time. I hope others chime in > too. > > Thanks > Mohammed > > On Mon, Apr 11, 2022 at 6:10 AM Lajos Katona wrote: > > > > Hi, > > > > In the last few cycles neutron-vpnaas has had no serious maintainers, and > most patches merged were from the Neutron core team or from the Release team. > > Recently even neutron-vpnaas gate jobs started to fail. > > During the Zed PTG we discussed this topic (see [1]). > > > > For the maintenance we need someone to be the contact person for the > > project, who takes care of the project's CI, reviews patches, and answers > bugs. > > Of course that's only a minimal requirement.
If the new maintainer works > on > new features for the project, it's even better :) > > > > If we don't have any new maintainer(s) before milestone Zed-2, which is > > July 11 - July 15 week according to [2], we will start marking > neutron-vpnaas > > as deprecated and in the next cycle (AA, or perhaps 2023.1) we will > propose > > to retire the project. > > > > So if You are using this project now, or if You have customers who are > > using it, please consider the possibility of maintaining it. Otherwise, > please be > > aware that it is highly possible that the project will be deprecated and > moved > > out from the official OpenStack projects. > > > > [1]: https://etherpad.opendev.org/p/neutron-zed-ptg#L201 > > [2]: https://releases.openstack.org/zed/schedule.html > > > > Lajos Katona (lajoskatona) > > > > -- > Mohammed Naser > VEXXHOST, Inc. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at gmail.com Tue Apr 26 07:32:13 2022 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Tue, 26 Apr 2022 09:32:13 +0200 Subject: [Neutron][neutron-vpnaas] proposing Mohammed Naser for neutron-vpnaas core reviewer In-Reply-To: <2626416.mvXUDI8C0e@p1> References: <2626416.mvXUDI8C0e@p1> Message-ID: Hey! +1 :) That said, we are also interested in pulling some weight in maintaining vpnaas. So feel free to reach out if any help is needed, like developing/testing fixes, reviews, or designing/implementing new features :) On Mon, 25 Apr 2022 at 21:19, Slawek Kaplonski wrote: > Hi, > > On Monday, 25 April 2022 17:18:19 CEST Lajos Katona wrote: > > Hi, > > I would like to propose Mohammed Naser (mnaser) as a core reviewer to > > neutron-vpnaas. > > He and his company use neutron-vpnaas in production and volunteered to > > help in the maintenance of it. > > > > You can vote/feedback in this email thread.
> > > > Thanks > > Lajos > > > > +1 > Great to see Mohammed stepping up to maintain neutron-vpnaas. Thanks > Mohammed :) > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From bxzhu_5355 at 163.com Tue Apr 26 08:51:28 2022 From: bxzhu_5355 at 163.com (=?utf-8?B?5pyx5Y2a56Wl?=) Date: Tue, 26 Apr 2022 16:51:28 +0800 Subject: [skyline] skyline-console whl package question In-Reply-To: References: Message-ID: <55E70734-BFF6-4EB8-B3BD-38D80AC88DB2@163.com> Hi, satish We have fixed an issue of skyline-apiserver.[1] I think that it can fix your issue. If you still meet the issue, you can open a ticket here.[2] So, first of all, I think that you should fetch the latest codes of skyline-apiserver. BTW, you can reach me at IRC (#openstack-skyline). (nickname: boxiang) Thanks, Boxiang [1] https://review.opendev.org/c/openstack/skyline-apiserver/+/839115 [2] https://bugs.launchpad.net/skyline-apiserver/+bugs > 2022?4?25? ??9:23?Satish Patel ??? > > Folks, > > I am compiling skyline-apiserver on my bare metal server and found Makefile pulling skyline-console tarball from the following location but that package doesn't contain the latest merged patches. Does CI job is broken which compiles this tarball ? > > https://tarballs.opendev.org/openstack/skyline-console/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From derekokeeffe85 at yahoo.ie Tue Apr 26 10:18:43 2022 From: derekokeeffe85 at yahoo.ie (Derek O keeffe) Date: Tue, 26 Apr 2022 10:18:43 +0000 (UTC) Subject: Setup infrastructure failing References: <1165416533.2168800.1650968323255.ref@mail.yahoo.com> Message-ID: <1165416533.2168800.1650968323255@mail.yahoo.com> Hi all, We get the following error in the openstack.osa.db_setup : Create database for service task: failed: [infra1_keystone_container-e54c8ba5 -> infra1_utility_container-1a1eb7ce(xx.xx.xx.xxx)] (item={'name': 'keystone', 'users': [{'username': 'keystone', 'password': 'PASSWORD'}]}) => {"ansible_loop_var": "item", "changed": false, "item": {"name": "keystone", "users": [{"password": "PASSWORD", "username": "keystone"}]}, "msg": "unable to connect to database, check login_user and login_password are correct or /root/.my.cnf has the credentials. Exception message: (2003, \"Can't connect to MySQL server on 'xx.xx.xx.xxx' ([Errno 111] Connection refused)\")"}

Also, ansible galera_container -m shell -a "mysql -h localhost -e 'show status like \"%wsrep_cluster_%\";'" is different to the example output:

infra1_galera_container-90dd1571 | CHANGED | rc=0 >>
Variable_name Value
wsrep_cluster_weight 1
wsrep_cluster_capabilities
wsrep_cluster_conf_id 1
wsrep_cluster_size 1
wsrep_cluster_state_uuid c06b8107-c53a-11ec-a54c-a20664362f6c
wsrep_cluster_status Primary

We checked our infra node and all the containers are present and mysql is installed on the infra1-galera-container and port 3306 is open.

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.001 sec)

We are not sure what is causing this and it could possibly be that we have forgotten/skipped/missed a configuration step but we have gone over the process multiple times and cannot see what we may have missed. The previous playbooks did have a lot of "skipped" results:

compute1 : ok=120 changed=2 unreachable=0 failed=0 skipped=31 rescued=0 ignored=0
compute2 : ok=120 changed=2 unreachable=0 failed=0 skipped=31 rescued=0 ignored=0
infra1 : ok=156 changed=3 unreachable=0 failed=0 skipped=31 rescued=0 ignored=0
infra1_aodh_container-7c703324 : ok=89 changed=36 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
infra1_ceilometer_central_container-7590db72 : ok=89 changed=36 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
infra1_cinder_api_container-91806531 : ok=89 changed=36 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
infra1_galera_container-90dd1571 : ok=89 changed=36 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
infra1_glance_container-fff39ac4 : ok=89 changed=36 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
infra1_gnocchi_container-ffd32a8b : ok=89 changed=36 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
infra1_heat_api_container-a4079838 : ok=89 changed=36 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
infra1_horizon_container-9c041ec0 : ok=89 changed=36 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
infra1_keystone_container-e54c8ba5 : ok=89 changed=36 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
infra1_memcached_container-dc50bcd8 : ok=89 changed=36 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
infra1_neutron_server_container-bfc4d0d2 : ok=92 changed=37 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
infra1_nova_api_container-f231bb13 : ok=89 changed=36 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
infra1_placement_container-108d79df : ok=89 changed=36 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
infra1_rabbit_mq_container-50ca98ed : ok=89 changed=36 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
infra1_repo_container-ccade01d : ok=89 changed=36 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
infra1_utility_container-1a1eb7ce : ok=89 changed=36 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
localhost : ok=19 changed=0 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0

compute1 : ok=0 changed=0 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0
compute2 : ok=0 changed=0 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0
infra1 : ok=38 changed=0 unreachable=0 failed=0 skipped=35 rescued=0 ignored=0
infra1_galera_container-90dd1571 : ok=69 changed=3 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
infra1_memcached_container-dc50bcd8 : ok=16 changed=0 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0
infra1_rabbit_mq_container-50ca98ed : ok=69 changed=33 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0
infra1_utility_container-1a1eb7ce : ok=40 changed=18 unreachable=0 failed=0 skipped=11 rescued=0 ignored=0

A lot were around Galera tasks but we are not sure if they were skipped intentionally as the playbooks ran to completion. Any help or troubleshooting steps would be appreciated. Thanks in advance. Regards, Derek -------------- next part -------------- An HTML attachment was scrubbed...
URL: From stephenfin at redhat.com Tue Apr 26 11:10:58 2022 From: stephenfin at redhat.com (Stephen Finucane) Date: Tue, 26 Apr 2022 12:10:58 +0100 Subject: [oslo][ops] Change in default logging format Message-ID: <9028b019bca61c9890d80d36121b3ded5b2bb266.camel@redhat.com> I'm not sure how many people read release notes for the various oslo libraries, so I'm posting this here since it might have a minor impact on some tooling that operators are using. We've just merged [1], which changes the default logging format used by oslo.log, configured using the '[DEFAULT] logging_context_format_string' config option. Previously, our logging format was: %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s This looks like e.g. 2022-04-26 10:49:14.629 4187413 WARNING nova.conductor.api [req-ffddccd3-8804-4fc3-a2f3-a05af1eb569f - - - - -] This has now been modified to include the global request ID as well as the local request ID: %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s This looks like e.g.: 2022-04-26 10:53:16.756 4187784 DEBUG nova.service [None req-f1698728-a2d6-4e02-a6b4-cf2cd398c02f - - - - -] Join ServiceGroup membership for this service compute This will affect pretty much all services using the out-of-the-box logging configuration. However, I suspect very few, if any, users are using the standard configuration. DevStack overrides '[DEFAULT] logging_context_format_string' for all services [2] and I suspect other installers probably do the same. If you are using the standard configuration, you might need to modify some log parsing/scraping tooling when this rolls around, or manually set '[DEFAULT] logging_context_format_string' in your service configuration files.
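To make the before/after concrete, here is a small standalone sketch that interpolates both format strings over the same record. This is plain %-formatting on a dict, not oslo.log itself; the field values are copied from the sample log line above.

```python
# Standalone illustration of the oslo.log default-format change using plain
# %-interpolation; this is not oslo.log itself. Field values are taken from
# the sample log line above.
OLD_FMT = ("%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s "
           "[%(request_id)s %(user_identity)s] %(instance)s%(message)s")
NEW_FMT = ("%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s "
           "[%(global_request_id)s %(request_id)s %(user_identity)s] "
           "%(instance)s%(message)s")

record = {
    "asctime": "2022-04-26 10:53:16", "msecs": 756, "process": 4187784,
    "levelname": "DEBUG", "name": "nova.service",
    "global_request_id": None,  # rendered as "None" when no global ID is set
    "request_id": "req-f1698728-a2d6-4e02-a6b4-cf2cd398c02f",
    "user_identity": "- - - - -", "instance": "",
    "message": "Join ServiceGroup membership for this service compute",
}

print(OLD_FMT % record)  # old default: local request ID only
print(NEW_FMT % record)  # new default: "[None req-...]" (global + local)
```

Operators who want to keep the old layout can set '[DEFAULT] logging_context_format_string' to the OLD_FMT value in their service configuration files.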
Cheers, Stephen [1] https://review.opendev.org/c/openstack/oslo.log/+/838190 [2] https://github.com/openstack/devstack/blob/3b0c035b9/functions#L677-L707 From satish.txt at gmail.com Tue Apr 26 11:50:59 2022 From: satish.txt at gmail.com (Satish Patel) Date: Tue, 26 Apr 2022 07:50:59 -0400 Subject: [skyline] skyline-console whl package question In-Reply-To: <55E70734-BFF6-4EB8-B3BD-38D80AC88DB2@163.com> References: <55E70734-BFF6-4EB8-B3BD-38D80AC88DB2@163.com> Message-ID: <8227CAE8-9371-4926-B130-6EC795B5FFC8@gmail.com> Awesome!! Thank you, and see you on IRC Sent from my iPhone > On Apr 26, 2022, at 4:52 AM, Boxiang Zhu wrote: > > Hi, Satish > > We have fixed an issue of skyline-apiserver.[1] > I think that it can fix your issue. If you still meet the issue, you can open a ticket here.[2] > > So, first of all, I think that you should fetch the latest code of skyline-apiserver. > > BTW, you can reach me on IRC (#openstack-skyline). (nickname: boxiang) > > Thanks, > > Boxiang > > > [1] https://review.opendev.org/c/openstack/skyline-apiserver/+/839115 > [2] https://bugs.launchpad.net/skyline-apiserver/+bugs > > >> On Apr 25, 2022, at 9:23 PM, Satish Patel wrote: >> >> Folks, >> >> I am compiling skyline-apiserver on my bare metal server and found the Makefile pulling the skyline-console tarball from the following location, but that package doesn't contain the latest merged patches. Is the CI job that compiles this tarball broken? >> >> https://tarballs.opendev.org/openstack/skyline-console/ > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From noonedeadpunk at gmail.com Tue Apr 26 12:13:01 2022 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Tue, 26 Apr 2022 14:13:01 +0200 Subject: Setup infrastructure failing In-Reply-To: <1165416533.2168800.1650968323255@mail.yahoo.com> References: <1165416533.2168800.1650968323255.ref@mail.yahoo.com> <1165416533.2168800.1650968323255@mail.yahoo.com> Message-ID: Hey Derek, Connection to MySQL by default happens from the utility container. While running the utility-install.yml playbook, it should install mysqlclient as well as deploy the my.cnf file to it. You should check whether this file has been created and whether it contains valid credentials for login. If it does not, you should try rerunning this specific playbook. Also, if it failed during a previous run, you likely also don't have a proper virtualenv, so you should consider adding `-e venv_rebuild=true` to the playbook execution. On Tue, 26 Apr 2022 at 12:21, Derek O keeffe wrote: > Hi all, > > We get the following error in the openstack.osa.db_setup : Create > database for service task: > > failed: [infra1_keystone_container-e54c8ba5 -> > infra1_utility_container-1a1eb7ce(xx.xx.xx.xxx)] (item={'name': 'keystone', > 'users': [{'username': 'keystone', 'password': 'PASSWORD'}]}) => > {"ansible_loop_var": "item", "changed": false, "item": {"name": "keystone", > "users": [{"password": "PASSWORD", "username": "keystone"}]}, "msg": > "unable to connect to database, check login_user and login_password are > correct or /root/.my.cnf has the credentials. Exception message: (2003,
Exception message: (2003, > \"Can't connect to MySQL server on 'xx.xx.xx.xxx' ([Errno 111] Connection > refused)\")"} > > Also, ansible galera_container -m shell -a "mysql -h localhost -e 'show > status like \"%wsrep_cluster_%\";'" is different to the example output: > > infra1_galera_container-90dd1571 | CHANGED | rc=0 >> > Variable_name Value > wsrep_cluster_weight 1 > wsrep_cluster_capabilities > wsrep_cluster_conf_id 1 > wsrep_cluster_size 1 > wsrep_cluster_state_uuid c06b8107-c53a-11ec-a54c-a20664362f6c > wsrep_cluster_status Primary > > We checked our infra node and all the containers are present and mysql is > installed on the infra1-galera-container and port 3306 is open. > > MariaDB [(none)]> show databases; > +--------------------+ > | Database | > +--------------------+ > | information_schema | > | mysql | > | performance_schema | > | sys | > +--------------------+ > 4 rows in set (0.001 sec) > > > We are not sure what is causing this and it could possibly be that we have > forgotten/skipped/missed a configuration step but we have gone over the > process multiple times and cannot see what we may have missed. 
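The credentials check Dmitriy suggests can be sketched in a few lines. The [client] section layout and the sample values below are assumptions for illustration, not the exact file openstack-ansible deploys:

```python
# Hypothetical illustration of the suggested check: confirm that a
# my.cnf-style file exists and carries usable login credentials.
# The [client] section layout and file path are assumptions, not the
# exact file the utility-install.yml playbook deploys.
import configparser
import tempfile

def has_mysql_credentials(path):
    """Return True if the file has a [client] section with user and password."""
    cfg = configparser.ConfigParser()
    if not cfg.read(path):
        return False  # file missing or unreadable
    if "client" not in cfg:
        return False
    client = cfg["client"]
    return bool(client.get("user")) and bool(client.get("password"))

# Demo with a throwaway file standing in for /root/.my.cnf
with tempfile.NamedTemporaryFile("w", suffix=".cnf", delete=False) as f:
    f.write("[client]\nuser = root\npassword = secrete\nhost = 10.0.3.1\n")
    sample = f.name

print(has_mysql_credentials(sample))          # True
print(has_mysql_credentials("/nonexistent"))  # False
```

If the file is missing or the credentials are stale, rerunning the utility playbook (optionally with `-e venv_rebuild=true`, as suggested above) is the way to regenerate it.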
The previous > playbooks did have a lot of "skipped" results > > compute1 : ok=120 changed=2 unreachable=0 > failed=0 skipped=31 rescued=0 ignored=0 > compute2 : ok=120 changed=2 unreachable=0 > failed=0 skipped=31 rescued=0 ignored=0 > infra1 : ok=156 changed=3 unreachable=0 > failed=0 skipped=31 rescued=0 ignored=0 > infra1_aodh_container-7c703324 : ok=89 changed=36 unreachable=0 > failed=0 skipped=5 rescued=0 ignored=0 > infra1_ceilometer_central_container-7590db72 : ok=89 changed=36 > unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 > infra1_cinder_api_container-91806531 : ok=89 changed=36 unreachable=0 > failed=0 skipped=5 rescued=0 ignored=0 > infra1_galera_container-90dd1571 : ok=89 changed=36 unreachable=0 > failed=0 skipped=5 rescued=0 ignored=0 > infra1_glance_container-fff39ac4 : ok=89 changed=36 unreachable=0 > failed=0 skipped=5 rescued=0 ignored=0 > infra1_gnocchi_container-ffd32a8b : ok=89 changed=36 unreachable=0 > failed=0 skipped=5 rescued=0 ignored=0 > infra1_heat_api_container-a4079838 : ok=89 changed=36 unreachable=0 > failed=0 skipped=5 rescued=0 ignored=0 > infra1_horizon_container-9c041ec0 : ok=89 changed=36 unreachable=0 > failed=0 skipped=5 rescued=0 ignored=0 > infra1_keystone_container-e54c8ba5 : ok=89 changed=36 unreachable=0 > failed=0 skipped=5 rescued=0 ignored=0 > infra1_memcached_container-dc50bcd8 : ok=89 changed=36 unreachable=0 > failed=0 skipped=5 rescued=0 ignored=0 > infra1_neutron_server_container-bfc4d0d2 : ok=92 changed=37 > unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 > infra1_nova_api_container-f231bb13 : ok=89 changed=36 unreachable=0 > failed=0 skipped=5 rescued=0 ignored=0 > infra1_placement_container-108d79df : ok=89 changed=36 unreachable=0 > failed=0 skipped=5 rescued=0 ignored=0 > infra1_rabbit_mq_container-50ca98ed : ok=89 changed=36 unreachable=0 > failed=0 skipped=5 rescued=0 ignored=0 > infra1_repo_container-ccade01d : ok=89 changed=36 unreachable=0 > failed=0 skipped=5 rescued=0 ignored=0 > 
infra1_utility_container-1a1eb7ce : ok=89 changed=36 unreachable=0 > failed=0 skipped=5 rescued=0 ignored=0 > localhost : ok=19 changed=0 unreachable=0 > failed=0 skipped=14 rescued=0 ignored=0 > > > compute1 : ok=0 changed=0 unreachable=0 > failed=0 skipped=10 rescued=0 ignored=0 > compute2 : ok=0 changed=0 unreachable=0 > failed=0 skipped=10 rescued=0 ignored=0 > infra1 : ok=38 changed=0 unreachable=0 > failed=0 skipped=35 rescued=0 ignored=0 > infra1_galera_container-90dd1571 : ok=69 changed=3 unreachable=0 > failed=0 skipped=12 rescued=0 ignored=0 > infra1_memcached_container-dc50bcd8 : ok=16 changed=0 unreachable=0 > failed=0 skipped=4 rescued=0 ignored=0 > infra1_rabbit_mq_container-50ca98ed : ok=69 changed=33 unreachable=0 > failed=0 skipped=14 rescued=0 ignored=0 > infra1_utility_container-1a1eb7ce : ok=40 changed=18 unreachable=0 > failed=0 skipped=11 rescued=0 ignored=0 > > > A lot were around Galera tasks but we are not sure if they were skipped > intentionally as the playbooks ran to completion. > Any help or troubleshooting steps would be appreciated. Thanks in advance. > > > Regards, > Derek > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gong.yongsheng at 99cloud.net Tue Apr 26 02:36:50 2022 From: gong.yongsheng at 99cloud.net (=?UTF-8?B?6b6a5rC455Sf?=) Date: Tue, 26 Apr 2022 10:36:50 +0800 (GMT+08:00) Subject: =?UTF-8?B?UmU6UmU6IFthbGxdIFJlc2VuZCBOZXcgQ0ZOKENvbXB1dGluZyBGb3JjZSBOZXR3b3JrKSBTSUcgUHJvcG9zYWw=?= In-Reply-To: <18061d9a035.cb6bda15115439.2847955073200685698@ghanshyammann.com> Message-ID: From: Ghanshyam Mann Date: 2022-04-26 01:51:33 To: niujie Cc: openstack-discuss ,sunny ,'Horace Li' ,"huang.shuquan" ,"gong.yongsheng" ,"shane.wang" ,"jian-feng.ding" ,wangshengyjy ,yuzhiqiang ,zhangxiaoguang ,xujianwl Subject: Re: [all] Resend New CFN(Computing Force Network) SIG Proposal>Thanks, Niu for the proposal and sorry for the delay in response. 
> >I have raised this proposal to TC members and asking to check it. Overall proposal seems >interesting to me but few initial queries inline. > > > ---- On Wed, 13 Apr 2022 00:34:30 -0500 niujie wrote ---- > > > > Hi all > > I sent an email yesterday about New CFN (Computing Force Network) SIG Proposal, I tried to recall it because there was a typo in email address, then I get recall failed msg, so I assume the email was sent out successfully, and plan to keep it as it was. > > But I found that the "recall" action was logged in pipermail, it might cause misunderstanding, we are sure about propose for a new SIG, so I'm sending this again, sorry for the email flood :) > > > > I'm from China Mobile, China Mobile is recently working on build a new information infrastructure focusing on connectivity, computing power, and capabilities, this new information infrastructure is called Computing Force Network, we think OpenStack community which gathers global wisdom together is a perfect platform to discuss topics like CFN, so we are proposing to create a new SIG for CFN (Computing Force Network). Below is CFN brief introduction and initial SIG scope. > > With the flourish of new business scenarios such as hybrid cloud, multi-cloud, AI, big data processing, edge computing, building a new information infrastructure based on multiple key technologies that converged cloud and network, will better support global digital transformation. This new infrastructure is not only relates to cloud, it is getting more and more connected with network, and at the same time, we also need to consider how to converge multiple technologies like AI, Blockchain, big data, security to provide this all-in-one service. > > Computing Force Network (CFN) is a new information infrastructure that based on network, focused on computing, deeply converged Artificial intelligence, Block chain, Cloud, Data, Network, Edge computing, End application, Security (ABCDNETS), providing all-in-one services.
> > Xiaodong Duan, Vice president of China Mobile Research Institute, introduced the vision and architecture of Computing Force Network in 2021 November OpenInfra Live Keynotes by his presentation Connection + Computing + Capability Opens a New Era of Digital Infrastructure, he proposed the new era of CFN. > > We are expecting to work with OpenStack on how to build this new information infrastructure, and how to promote the development and implementation of next generation infrastructure, achieve ubiquitous computing force, computing & network convergence, intelligence orchestration, all-in-one service. Then computing force will become common utilities like water and electric step by step, computing force will be ready for access upon use and connected by single entry point. > > The above vision of CFN, from technical perspective, will mainly focus on unified management and orchestration of computing + network integrated system, computing and network deeply converged in architecture, form and protocols aspect, bringing potential changes to OpenStack components. CFN is aiming to achieve seamlessly migration of any application between any heterogeneous platforms, it's a challenge for the industry currently, we feel that in pursuit of CFN could potentially contributes to the development and evolution of OpenStack. > >Yes, it will require changes to OpenStack components but we will see based on the exact use case and OpenStack component scope. Does this include the application migration tooling in OpenStack?
> > > > In this CFN SIG, we will mainly focus on discussing how to build the new information infrastructure of CFN, related key technologies, and what's the impact on OpenStack brought by the network & cloud convergence trend, the topics are including but not limited to: > > 1, A computing basement for unified management of container, VM and Bare Metal > > 2, Computing infrastructure which eliminated the difference between heterogeneous hardware > > 3, Measurement criteria and scheduling scheme based on unified computing infrastructure > > 4, Network solutions for SDN integrating smart NIC for data center > > 5, Unified orchestration & management for "network + cloud", and "cloud + edge + end" integrated scheduling solution > > We will have regular meetings to investigate and discuss business scenarios, development trend, technical scheme, release technical documents, technical proposal and requirements for OpenStack Projects, and propose new project when necessary. > > We will also collaborate with other open source projects like LFN, CNCF, LFE, to have a consistent plan across communities, and align with global standardization organization like ETSI, 3GPP, IETF, to promote CFN related technical scheme become the standard in industry. > > If you have any thoughts, interests, questions, requirements, we can discuss by this mailing list. > >Thanks for the detailed information about the SIG scope. From the above, I understood that it will not be just changed to the OpenStack existing >component but also new source code components also, do you have such list/proposal for a new component or you would like to continue discussing >it and based on that you will get to know. How you are thinking about their (new component if any) releases like a coordinated >release with OpenStack or independent. If coordinated then it is more than SIG scope and might be good to add a new project.
> I think it is mostly like an auto scheduling system in a multi-level cloud fabric (multiple data center clouds, multiple edge clouds, multiple far edge clouds) platform. The main process of the system can be described as: first, to collect the resource usage in each cloud node, which can be assumed as an OpenStack deployment. The resource usage can be provided by the OpenStack Placement component. Second, to collect latency, QoS or bandwidth of the network resources among the fabric. Third, when application deployment needs come in, the fabric should compute the optimal placement strategy and do the deployment. So we will need to create a new project for it. >By seeing the scope of this proposal (which seems very wide), I think it is not required to answer all of them now. Overall I am ok to start >it as SIG and based on discussion/progress evaluation we will get to know more about new components, requirements etc and then we can >change it from SIG to a new project under OpenStack or other governance (based on the core/requirement/use case it produces). > >-gmann > > > Any suggestions are welcomed, and we are really hoping to hear from anyone, and work with you. > > > > Jie Niu > > China Mobile > > > yongsheng gong 99cloud -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Tue Apr 26 02:49:20 2022 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Mon, 25 Apr 2022 19:49:20 -0700 Subject: [all] Resend New CFN(Computing Force Network) SIG Proposal In-Reply-To: <18061d9a035.cb6bda15115439.2847955073200685698@ghanshyammann.com> References: <18061d9a035.cb6bda15115439.2847955073200685698@ghanshyammann.com> Message-ID: Hi All, I wanted to quickly chime in on this thread as well. A few of us from the OpenInfra Foundation had a quick sync with Jie, Zhiqiang and a few others from the China Mobile team to get a better understanding of the CFN initiative and help finding the best way forward.
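The three-step scheduling flow described in the thread (collect per-cloud resource usage, collect network latency across the fabric, then compute a placement) could be sketched as a toy greedy scheduler. All names, numbers and the scoring rule below are invented for illustration; this is not CFN or Placement code:

```python
# Toy sketch of the three-step flow: (1) per-cloud free capacity, e.g.
# as a Placement-like number, (2) measured network latency per cloud,
# (3) pick the best-fitting cloud for a deployment request.
# Everything here is a hypothetical illustration.
from dataclasses import dataclass

@dataclass
class Cloud:
    name: str
    level: str         # "dc", "edge", or "far-edge"
    free_vcpus: int    # step 1: usage/capacity data
    latency_ms: float  # step 2: network measurement

def place(clouds, vcpus_needed, max_latency_ms):
    """Step 3: among clouds that fit both constraints, pick the lowest latency."""
    candidates = [c for c in clouds
                  if c.free_vcpus >= vcpus_needed
                  and c.latency_ms <= max_latency_ms]
    if not candidates:
        return None
    return min(candidates, key=lambda c: c.latency_ms)

fabric = [
    Cloud("dc-1", "dc", free_vcpus=400, latency_ms=40.0),
    Cloud("edge-1", "edge", free_vcpus=32, latency_ms=8.0),
    Cloud("far-edge-1", "far-edge", free_vcpus=4, latency_ms=2.0),
]

best = place(fabric, vcpus_needed=8, max_latency_ms=20.0)
print(best.name)  # edge-1  (far-edge-1 is closer but too small)
```

A real fabric scheduler would of course weigh many more dimensions (QoS, bandwidth, affinity), but the shape of the decision is the same.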
The CFN proposal has a wide scope both in terms of targeted areas and use cases, as well as the solution stack that includes Management & Orchestration and Services & Operations type components as well. OpenStack is one of the key components to CFN and with that the activities would include working with the OpenStack community in areas such as monitoring, scheduling, etc. Based on the discussion with the China Mobile team so far, a SIG within OpenStack doesn't seem to be the right place to start as it would potentially increase the scope of the OpenStack project or limit the scope of CFN. While based on its scope the CFN initiative has the potential to become a standalone project, we need to explore it further before making any decisions. Thanks and Best Regards, Ildikó ??? Ildikó Váncsa Senior Manager, Community & Ecosystem Open Infrastructure Foundation > On Apr 25, 2022, at 10:51, Ghanshyam Mann wrote: > > Thanks, Niu for the proposal and sorry for the delay in response. > > I have raised this proposal to TC members and asking to check it. Overall proposal seems > interesting to me but few initial queries inline. > > > ---- On Wed, 13 Apr 2022 00:34:30 -0500 niujie wrote ---- >> >> Hi all >> I sent an email yesterday about NewCFN(Computing Force Network) SIG Proposal, I tried to recall it because therewas a typo in email address, then I get recall failed msg, so I assume the emailwas sent out successfully, and plan to keep it as it was. >> But I found that the ?recall?
actionwas logged in pipermail, it might cause misunderstanding, we are sure about proposefor a new SIG, so I?m sending this again, sorry for the email flood J >> >> I'm from China Mobile, China Mobile is recently working onbuild a new information infrastructure focusing on connectivity, computingpower, and capabilities, this new information infrastructure is calledComputing Force Network, we think OpenStack community which gathers globalwisdom together is a perfect platform to discuss topics like CFN, so we areproposing to create a new SIG for CFN(Computing Force Network). Below is CFNbrief introduction and initial SIG scope. >> With the flourish of new business scenarios such as hybridcloud, multi-cloud, AI, big data processing, edge computing, building a newinformation infrastructure based on multiple key technologies that convergedcloud and network, will better support global digital transformation. This newinfrastructure is not only relates to cloud, it is getting more and moreconnected with network, and at the same time, we also need to consider how toconverge multiple technologies like AI, Blockchain, big data, security to providethis all-in-one service. >> Computing Force Network(CFN) is a new informationinfrastructure that based on network, focused on computing, deeply convergedArtificial intelligence, Block chain, Cloud, Data, Network, Edge computing, Endapplication, Security(ABCDNETS), providing all-in-one services. >> Xiaodong Duan, Vice president of China Mobile ResearchInstitute, introduced the vision and architecture of Computing Force Network in2021 November OpenInfra Live Keynotes by his presentation Connection +Computing + Capability Opens a New Era of Digital Infrastructure, heproposed the new era of CFN. 
>> We are expecting to work with OpenStack on how to buildthis new information infrastructure, and how to promote the development andimplementation of next generation infrastructure, achieve ubiquitous computingforce, computing & network convergence, intelligence orchestration,all-in-one service. Then computing force will become common utilities likewater and electric step by step, computing force will be ready for access uponuse and connected by single entry point. >> The above vision of CFN , from technical perspective, willmainly focus on unified management and orchestration of computing + networkintegrated system, computing and network deeply converged in architecture, formand protocols aspect, bringing potential changes to OpenStack components. CFNis aiming to achieve seamlessly migration of any application between anyheterogeneous platforms, it's a challenge for the industry currently, we feelthat in pursuit of CFN could potentially contributes to the development andevolution of OpenStack. > > Yes, it will require changes to OpenStack components but we will see based on the exact use case and OpenStack component scope. Is this include the application migration tooling in OpenStack? 
> > >> In this CFN SIG, we will mainly focus on discussing how tobuild the new information infrastructure of CFN, related key technologies, andwhat's the impact on OpenStack brought by the network & could convergencetrend , the topics are including but not limited to: >> 1, Acomputing basement for unified management of container, VM and Bare Metal >> 2,Computing infrastructure which eliminated the difference between heterogeneoushardware >> 3,Measurement criteria and scheduling scheme based on unified computinginfrastructure >> 4,Network solutions for SDN integrating smart NIC for data center >> 5,Unified orchestration & management for "network + cloud", and"cloud + edge + end" integrated scheduling solution >> We will have regular meetings to investigate and discussbusiness scenarios, development trend, technical scheme, release technicaldocuments, technical proposal and requirements for OpenStack Projects, andpropose new project when necessary. >> We will also collaborate with other open source projectslike LFN, CNCF, LFE, to have a consistent plan across communities, and alignwith global standardization organization like ETSI, 3GPP, IETF, to promote CFNrelated technical scheme become the standard in industry. >> If you have any thoughts, interests, questions,requirements, we can discuss by this mailing list. > > Thanks for the detailed information about the SIG scope. From the above, I understood that it will not be just changed to the OpenStack existing > component but also new source code components also, do you have such list/proposal for a new component or you would like to continue discussing > it and based on that you will get to know. How you are thinking about their (new component if any) releases like a coordinated > release with OpenStack or independent. If coordinated then it is more than SIG scope and might be good to add a new project. 
> > By seeing the scope of this proposal (which seems very wider), I think it is not required to answer all of them now. Overall I am ok to start > it as SIG and based on discussion/progress evaluation we will get to know more about new components, requirements etc and then we can > change it from SIG to a new project under OpenStack or other governance (based on the core/requirement/use case it produces). > > -gmann > >> Any suggestions are welcomed, and we are really hoping tohear from anyone, and work with you. >> >> Jie Niu >> China Mobile >> > From niujie at chinamobile.com Tue Apr 26 04:21:32 2022 From: niujie at chinamobile.com (niujie) Date: Tue, 26 Apr 2022 12:21:32 +0800 Subject: [all] Resend New CFN(Computing Force Network) SIG Proposal In-Reply-To: <18061d9a035.cb6bda15115439.2847955073200685698@ghanshyammann.com> References: <18061d9a035.cb6bda15115439.2847955073200685698@ghanshyammann.com> Message-ID: <16b901d85925$1c9505a0$55bf10e0$@com> Hi Ghanshyam, Thanks for forwarding the proposal. Yes, we will figure out exact changes(requirements) based on the exact use case and OpenStack component scope by further discussion. As for the application migration, currently we don?t have plan for new project of tooling, we could probably start with tool(based on the discussion), but the ultimate goal is beyond just tooling, the vision of CFN is to achieve ecosystem for development, any application developed on this infrastructure could be migrated to any heterogeneous platforms. This may include build compiling platform on heterogeneous infrastructure, draft standardization for low-level code development, etc. You are right about CFN will not just bring changes to the OpenStack existing components, but also brings potential new source code components, we don't have such list/proposal for new component right now, that's why we would like to raise the CFN topic here, and based on the discussion with global wisdoms, we will figure out the next step. 
It is a good idea to start with a SIG, we can firstly start discussion here, and maybe re-evaluate as it goes. We have a brief CFN introduction slide, and shall I add a topic in TC weekly meeting agenda? Thanks Jie Niu -----原始邮件----- 发件人: Ghanshyam Mann [mailto:gmann at ghanshyammann.com] 发送时间: Tuesday, April 26, 2022 1:52 AM 收件人: niujie 抄送: openstack-discuss; sunny; 'Horace Li'; huang.shuquan; gong.yongsheng; shane.wang; jian-feng.ding; wangshengyjy; yuzhiqiang; zhangxiaoguang; xujianwl 主题: Re: [all] Resend New CFN(Computing Force Network) SIG Proposal Thanks, Niu for the proposal and sorry for the delay in response. I have raised this proposal to TC members and asking to check it. Overall proposal seems interesting to me but few initial queries inline. ---- On Wed, 13 Apr 2022 00:34:30 -0500 niujie wrote ---- > > Hi all > I sent an email yesterday about NewCFN(Computing Force Network) SIG Proposal, I tried to recall it because therewas a typo in email address, then I get recall failed msg, so I assume the emailwas sent out successfully, and plan to keep it as it was. > But I found that the ?recall? actionwas logged in pipermail, it might cause misunderstanding, we are sure about proposefor a new SIG, so I?m sending this again, sorry for the email flood J > > I'm from China Mobile, China Mobile is recently working onbuild a new information infrastructure focusing on connectivity, computingpower, and capabilities, this new information infrastructure is calledComputing Force Network, we think OpenStack community which gathers globalwisdom together is a perfect platform to discuss topics like CFN, so we areproposing to create a new SIG for CFN(Computing Force Network). Below is CFNbrief introduction and initial SIG scope.
> With the flourish of new business scenarios such as hybridcloud, multi-cloud, AI, big data processing, edge computing, building a newinformation infrastructure based on multiple key technologies that convergedcloud and network, will better support global digital transformation. This newinfrastructure is not only relates to cloud, it is getting more and moreconnected with network, and at the same time, we also need to consider how toconverge multiple technologies like AI, Blockchain, big data, security to providethis all-in-one service. > Computing Force Network(CFN) is a new informationinfrastructure that based on network, focused on computing, deeply convergedArtificial intelligence, Block chain, Cloud, Data, Network, Edge computing, Endapplication, Security(ABCDNETS), providing all-in-one services. > Xiaodong Duan, Vice president of China Mobile ResearchInstitute, introduced the vision and architecture of Computing Force Network in2021 November OpenInfra Live Keynotes by his presentation Connection +Computing + Capability Opens a New Era of Digital Infrastructure, heproposed the new era of CFN. > We are expecting to work with OpenStack on how to buildthis new information infrastructure, and how to promote the development andimplementation of next generation infrastructure, achieve ubiquitous computingforce, computing & network convergence, intelligence orchestration,all-in-one service. Then computing force will become common utilities likewater and electric step by step, computing force will be ready for access uponuse and connected by single entry point. > The above vision of CFN , from technical perspective, willmainly focus on unified management and orchestration of computing + networkintegrated system, computing and network deeply converged in architecture, formand protocols aspect, bringing potential changes to OpenStack components. 
CFNis aiming to achieve seamlessly migration of any application between anyheterogeneous platforms, it's a challenge for the industry currently, we feelthat in pursuit of CFN could potentially contributes to the development andevolution of OpenStack. Yes, it will require changes to OpenStack components but we will see based on the exact use case and OpenStack component scope. Is this include the application migration tooling in OpenStack? > In this CFN SIG, we will mainly focus on discussing how tobuild the new information infrastructure of CFN, related key technologies, andwhat's the impact on OpenStack brought by the network & could convergencetrend , the topics are including but not limited to: > 1, Acomputing basement for unified management of container, VM and Bare Metal > 2,Computing infrastructure which eliminated the difference between heterogeneoushardware > 3,Measurement criteria and scheduling scheme based on unified computinginfrastructure > 4,Network solutions for SDN integrating smart NIC for data center > 5,Unified orchestration & management for "network + cloud", and"cloud + edge + end" integrated scheduling solution > We will have regular meetings to investigate and discussbusiness scenarios, development trend, technical scheme, release technicaldocuments, technical proposal and requirements for OpenStack Projects, andpropose new project when necessary. > We will also collaborate with other open source projectslike LFN, CNCF, LFE, to have a consistent plan across communities, and alignwith global standardization organization like ETSI, 3GPP, IETF, to promote CFNrelated technical scheme become the standard in industry. > If you have any thoughts, interests, questions,requirements, we can discuss by this mailing list. Thanks for the detailed information about the SIG scope. 
From the above, I understood that it will not be just changed to the OpenStack existing component but also new source code components also, do you have such list/proposal for a new component or you would like to continue discussing it and based on that you will get to know. How you are thinking about their (new component if any) releases like a coordinated release with OpenStack or independent. If coordinated then it is more than SIG scope and might be good to add a new project. By seeing the scope of this proposal (which seems very wider), I think it is not required to answer all of them now. Overall I am ok to start it as SIG and based on discussion/progress evaluation we will get to know more about new components, requirements etc and then we can change it from SIG to a new project under OpenStack or other governance (based on the core/requirement/use case it produces). -gmann > Any suggestions are welcomed, and we are really hoping tohear from anyone, and work with you. > > Jie Niu > China Mobile > From wodel.youchi at gmail.com Tue Apr 26 13:01:42 2022 From: wodel.youchi at gmail.com (wodel youchi) Date: Tue, 26 Apr 2022 14:01:42 +0100 Subject: [Kolla-ansible][Xena] Error deploying Cloudkitty In-Reply-To: References: Message-ID: Hi, It did work, Modify /home/deployer/kollavenv/xenavenv/share/kolla-ansible/ansible/roles/cloudkitty/tasks/bootstrap.yml like this : - name: Creating Cloudkitty influxdb database become: true kolla_toolbox: module_name: influxdb_database module_args: hostname: "{{ influxdb_address }}" port: "{{ influxdb_http_port }}" * ssl: {{ cloudkitty_influxdb_use_ssl }}* database_name: "{{ cloudkitty_influxdb_name }}" run_once: True delegate_to: "{{ groups['cloudkitty-api'][0] }}" when: cloudkitty_storage_backend == 'influxdb' Then declare *cloudkitty_influxdb_use_ssl* variable in globals.yml *cloudkitty_influxdb_use_ssl: true* Then deploy, it did work. How to propose a fix, I do not know how to do that!!! Regards. Le mer. 20 avr. 2022 ? 
08:57, Mark Goddard a écrit : > Hi Wodel, > > Did it work when you added the ssl parameter? If so, could you propose a > fix for this upstream? > > Thanks, > Mark > > On Tue, 19 Apr 2022 at 15:07, wodel youchi wrote: > >> Hi, >> I tried to do this >> vim >> /home/deployer/kollavenv/xenavenv/share/kolla-ansible/ansible/roles/cloudkitty/defaults/main.yml >> *cloudkitty_influxdb_use_ssl: "true"* >> But it didn't work, then I added the same variable to globals.yml but it >> didn't work. >> >> So finally I edited vim >> /home/deployer/kollavenv/xenavenv/share/kolla-ansible/ansible/roles/cloudkitty/tasks/bootstrap.yml >> and added the ssl variable as a workaround >> >>> - name: Creating Cloudkitty influxdb database >>> become: true >>> kolla_toolbox: >>> module_name: influxdb_database >>> module_args: >>> hostname: "{{ influxdb_address }}" >>> port: "{{ influxdb_http_port }}" >>> * ssl: True* >>> database_name: "{{ cloudkitty_influxdb_name }}" >>> run_once: True >>> delegate_to: "{{ groups['cloudkitty-api'][0] }}" >>> when: cloudkitty_storage_backend == 'influxdb' >>> >> >> >> I don't know if this would have worked I just get the idea >> >> - name: Creating Cloudkitty influxdb database >>> become: true >>> kolla_toolbox: >>> module_name: influxdb_database >>> module_args: >>> hostname: "{{ influxdb_address }}" >>> port: "{{ influxdb_http_port }}" >>> * ssl: {{ cloudkitty_influxdb_use_ssl }}* >>> database_name: "{{ cloudkitty_influxdb_name }}" >>> run_once: True >>> delegate_to: "{{ groups['cloudkitty-api'][0] }}" >>> when: cloudkitty_storage_backend == 'influxdb' >>> >> >> >> >> >> Regards. >> >> Le mar. 19 avr. 2022 à 12:37, Rafael Weingärtner < >> rafaelweingartner at gmail.com> a écrit : >> >>> It seems that it was always assumed to be HTTP and not HTTPs: >>> https://github.com/openstack/kolla-ansible/blob/a52cf61b2234d2f078dd2893dd37de63e20ea1aa/ansible/roles/cloudkitty/tasks/bootstrap.yml#L36 >>> . 
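The hardcoded-HTTP assumption this thread pins down, and the flag-driven fix wodel sketched, can be illustrated in plain Python rather than Ansible/Jinja. Variable names mirror the kolla-ansible ones, but this is a hypothetical sketch, not the actual role code:

```python
# Rough illustration of the change discussed above: the bootstrap task
# built its influxdb_database arguments without an ssl key, so the module
# defaulted to ssl=False even when haproxy terminates TLS in front of
# InfluxDB.  The proposed fix derives ssl from a deploy-time flag such as
# cloudkitty_influxdb_use_ssl.  All names here are illustrative.
def build_influxdb_module_args(variables):
    return {
        "hostname": variables["influxdb_address"],
        "port": variables["influxdb_http_port"],
        # Before the fix this key was simply absent (effectively False).
        "ssl": variables.get("cloudkitty_influxdb_use_ssl", False),
        "database_name": variables["cloudkitty_influxdb_name"],
    }

args = build_influxdb_module_args({
    "influxdb_address": "dashint.cloud.example.com",
    "influxdb_http_port": 8086,
    "cloudkitty_influxdb_name": "cloudkitty",
    "cloudkitty_influxdb_use_ssl": True,
})
print(args["ssl"])  # True
```

Defaulting the flag to False keeps existing plain-HTTP deployments working while letting TLS-fronted ones opt in.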
>>> >>> Maybe, we will need to change that to use SSL whenever needed. >>> >>> On Tue, Apr 19, 2022 at 8:19 AM wodel youchi >>> wrote: >>> >>>> Hi, >>>> >>>> I tested with influx -host >>>> First I tested with the internal api IP address of the host itself, and >>>> it did work : influx -host 10.10.3.9 >>>> Then I tested with VIP of the internal api, which is held by haproxy : >>>> influx -host 10.10.3.1, it didn't work, looking in the haproxy >>>> configuration file of influxdb, I noticed that haproxy uses https in the >>>> front end, so I tested with : influx -ssl -host 10.10.3.1 and it did work. >>>> >>>> And if you see the error message from TASK [cloudkitty : Creating >>>> Cloudkitty influxdb database], ssl is false >>>> >>>> fatal: [192.168.1.5 -> 192.168.1.5]: FAILED! => { >>>> "action": "influxdb_database", >>>> "changed": false, >>>> "invocation": { >>>> "module_args": { >>>> "database_name": "cloudkitty", >>>> "hostname": "dashint.cloud.cerist.dz", >>>> "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", >>>> "path": "", >>>> "port": 8086, >>>> "proxies": {}, >>>> "retries": 3, >>>> *"ssl": false,* >>>> "state": "present", >>>> "timeout": null, >>>> "udp_port": 4444, >>>> "use_udp": false, >>>> "username": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", >>>> "validate_certs": true >>>> } >>>> }, >>>> "msg": "('Connection aborted.', RemoteDisconnected('Remote end >>>> closed connection without response',))" >>>> } >>>> >>>> Could that be the problem? if yes how to force Cloudkitty to enable ssl? >>>> >>>> Regards. >>>> >>>> Le mar. 19 avr. 2022 à 07:30, Pierre Riteau a >>>> écrit : >>>> >>>>> Hello, >>>>> >>>>> InfluxDB is configured to only listen on the internal API interface. >>>>> Can you check the hostname you are using resolves correctly from the >>>>> cloudkitty host?
>>>>> Inside the influxdb container, you should use `influx -host >>>>> ` with the internal IP of the influxdb host. >>>>> >>>>> Also check if the output of `docker logs influxdb` has any logs. >>>>> >>>>> Best wishes, >>>>> Pierre Riteau (priteau) >>>>> >>>>> On Tue, 19 Apr 2022 at 01:24, wodel youchi >>>>> wrote: >>>>> >>>>>> Hi, >>>>>> >>>>>> I am trying to deploy Cloudkitty, but I get this error message : >>>>>> >>>>>> TASK [cloudkitty : Creating Cloudkitty influxdb database] >>>>>>> ****************************************************** >>>>>>> task path: >>>>>>> /home/deployer/kollavenv/xenavenv/share/kolla-ansible/ansible/roles/cloudkitty/tasks/bootstrap.yml:36 >>>>>> >>>>>> >>>>>> fatal: [192.168.1.5 -> 192.168.1.5]: FAILED! => { >>>>>>> "action": "influxdb_database", >>>>>>> "changed": false, >>>>>>> "invocation": { >>>>>>> "module_args": { >>>>>>> "database_name": "cloudkitty", >>>>>>> "hostname": "dashint.cloud.cerist.dz", >>>>>>> "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", >>>>>>> "path": "", >>>>>>> "port": 8086, >>>>>>> "proxies": {}, >>>>>>> "retries": 3, >>>>>>> "ssl": false, >>>>>>> "state": "present", >>>>>>> "timeout": null, >>>>>>> "udp_port": 4444, >>>>>>> "use_udp": false, >>>>>>> "username": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", >>>>>>> "validate_certs": true >>>>>>> } >>>>>>> }, >>>>>>> "msg": "('Connection aborted.', RemoteDisconnected('Remote end >>>>>>> closed connection without response',))" >>>>>>> } >>>>>> >>>>>> >>>>>> >>>>>> On the influxdb container I did this : >>>>>> >>>>>>> [root at controllerb ~]# docker ps | grep inf >>>>>>> 68b3ebfefbec >>>>>>> 192.168.1.16:4000/openstack.kolla/centos-source-influxdb:xena >>>>>>> "dumb-init --single-…"
22 minutes ago Up 22 minutes >>>>>>> influxdb >>>>>>> [root at controllerb ~]# docker exec -it influxdb /bin/bash >>>>>>> (influxdb)[influxdb at controllerb /]$ influx >>>>>>> Failed to connect to http://localhost:8086: Get >>>>>>> http://localhost:8086/ping: dial tcp [::1]:8086: connect: >>>>>>> connection refused >>>>>>> Please check your connection settings and ensure 'influxd' is >>>>>>> running. >>>>>>> (influxdb)[influxdb at controllerb /]$ ps -ef >>>>>>> UID PID PPID C STIME TTY TIME CMD >>>>>>> influxdb 1 0 0 Apr18 ? 00:00:00 dumb-init >>>>>>> --single-child -- kolla_start >>>>>>> influxdb 7 1 0 Apr18 ? 00:00:01 /usr/bin/influxd >>>>>>> -config /etc/influxdb/influxdb.conf >>>>>>> influxdb 45 0 0 00:12 pts/0 00:00:00 /bin/bash >>>>>>> influxdb 78 45 0 00:12 pts/0 00:00:00 ps -ef >>>>>>> (influxdb)[influxdb at controllerb /]$ >>>>>> >>>>>> >>>>>> I have no log file for influxdb, the directory is empty. >>>>>> >>>>>> Any ideas? >>>>>> >>>>>> Regards. >>>>>> >>>>> >>> >>> -- >>> Rafael Weing?rtner >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From lokendrarathour at gmail.com Tue Apr 26 13:05:04 2022 From: lokendrarathour at gmail.com (Lokendra Rathour) Date: Tue, 26 Apr 2022 18:35:04 +0530 Subject: [Triple0 on Centos Stream] Wallaby Overcloud Deployment Issue Message-ID: Hi Team, We were trying to install Openstack Wallaby Release on Centos Stream. As we were trying the overcloud deployment, getting this error: 2022-04-26 17:20:51.768 374850 INFO tripleoclient.heat_launcher [-] Skipping container image pull. 
2022-04-26 17:20:51.776 374850 INFO tripleoclient.heat_launcher [-] Checking that database is up 2022-04-26 17:20:52.251 374850 INFO tripleoclient.heat_launcher [-] Checking that message bus (rabbitmq) is up Error: Source image rejected: Error reading signature from https://access.redhat.com/webassets/docker/content/sigstore/ubi8/pause at sha256=40566c88ebf6242cc33d807270844e351e131b32a51d2c46705c40d7ac9f838e/signature-1: status 403 (Forbidden) 2022-04-26 17:20:57.687 374850 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud [-] Exception occured while running the command: subprocess.CalledProcessError: Command '['sudo', 'podman', 'play', 'kube', '/home/stack/overcloud-deploy/overcloud/heat-launcher/heat-pod.yaml']' returned non-zero exit status 125. 2022-04-26 17:20:57.687 374850 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud Traceback (most recent call last): 2022-04-26 17:20:57.687 374850 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/tripleoclient/command.py", line 34, in run Please suggest. -- ~ Lokendra -------------- next part -------------- An HTML attachment was scrubbed... URL: From amoralej at redhat.com Tue Apr 26 13:42:46 2022 From: amoralej at redhat.com (Alfredo Moralejo Alonso) Date: Tue, 26 Apr 2022 15:42:46 +0200 Subject: [Triple0 on Centos Stream] Wallaby Overcloud Deployment Issue In-Reply-To: References: Message-ID: Hi, What command are you using and how are you configuring the repositories? Best regards, Alfredo On Tue, Apr 26, 2022 at 3:10 PM Lokendra Rathour wrote: > Hi Team, > We were trying to install Openstack Wallaby Release on Centos Stream. > As we were trying the overcloud deployment, getting this error: > > 2022-04-26 17:20:51.768 374850 INFO tripleoclient.heat_launcher [-] > Skipping container image pull. 
> 2022-04-26 17:20:51.776 374850 INFO tripleoclient.heat_launcher [-] > Checking that database is up > 2022-04-26 17:20:52.251 374850 INFO tripleoclient.heat_launcher [-] > Checking that message bus (rabbitmq) is up > Error: Source image rejected: Error reading signature from > https://access.redhat.com/webassets/docker/content/sigstore/ubi8/pause at sha256=40566c88ebf6242cc33d807270844e351e131b32a51d2c46705c40d7ac9f838e/signature-1: > status 403 (Forbidden) > 2022-04-26 17:20:57.687 374850 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud [-] Exception occured > while running the command: subprocess.CalledProcessError: Command '['sudo', > 'podman', 'play', 'kube', > '/home/stack/overcloud-deploy/overcloud/heat-launcher/heat-pod.yaml']' > returned non-zero exit status 125. > 2022-04-26 17:20:57.687 374850 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud Traceback (most recent > call last): > 2022-04-26 17:20:57.687 374850 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/tripleoclient/command.py", line 34, in run > > > Please suggest. > > -- > ~ Lokendra > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Tue Apr 26 14:13:40 2022 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 26 Apr 2022 10:13:40 -0400 Subject: [Neutron][neutron-vpnaas] proposing Mohammed Naser for neutron-vpnaas core reviewer In-Reply-To: References: <2626416.mvXUDI8C0e@p1> Message-ID: Awesome, thanks Dmitriy, I saw you posted up some changes as well so I'm hoping to help land those fixes as well. On Tue, Apr 26, 2022 at 3:37 AM Dmitriy Rabotyagov wrote: > > Hey! > > +1 :) > > To have that said, we are also interested in pulling some weight in maintaining vpnaas. > So feel free to reach out if any help is needed, like developing/testing fixes, reviews or designing/implementing new features:) > > пн, 25 апр.
2022 г., 21:19 Slawek Kaplonski : >> >> Hi, >> >> On poniedziałek, 25 kwietnia 2022 17:18:19 CEST Lajos Katona wrote: >> > Hi, >> > I would like to propose Mohammed Naser (mnaser) as a core reviewer to >> > neutron-vpnaas. >> > He and his company uses neutron-vpnaas in production and volunteered to >> > help in the maintenance of it. >> > >> > You can vote/feedback in this email thread. >> > If there is no objection by 6th of May, we will add Mohammed to the core >> > list. >> > >> > Thanks >> > Lajos >> > >> >> +1 >> Great to see Mohammed stepping up to maintain neutron-vpnaas. Thanks Mohammed :) >> >> -- >> Slawek Kaplonski >> Principal Software Engineer >> Red Hat -- Mohammed Naser VEXXHOST, Inc. From miguel at mlavalle.com Tue Apr 26 14:29:28 2022 From: miguel at mlavalle.com (Miguel Lavalle) Date: Tue, 26 Apr 2022 09:29:28 -0500 Subject: [Neutron][neutron-vpnaas] proposing Mohammed Naser for neutron-vpnaas core reviewer In-Reply-To: <2626416.mvXUDI8C0e@p1> References: <2626416.mvXUDI8C0e@p1> Message-ID: +1. Thanks mnaser and Dmitry for stepping up to the plate Cheers On Mon, Apr 25, 2022 at 2:17 PM Slawek Kaplonski wrote: > Hi, > > On poniedziałek, 25 kwietnia 2022 17:18:19 CEST Lajos Katona wrote: > > Hi, > > I would like to propose Mohammed Naser (mnaser) as a core reviewer to > > neutron-vpnaas. > > He and his company uses neutron-vpnaas in production and volunteered to > > help in the maintenance of it. > > > > You can vote/feedback in this email thread. > > If there is no objection by 6th of May, we will add Mohammed to the core > > list. > > > > Thanks > > Lajos > > > > +1 > Great to see Mohammed stepping up to maintain neutron-vpnaas. Thanks > Mohammed :) > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat -------------- next part -------------- An HTML attachment was scrubbed...
URL: From rafaelweingartner at gmail.com Tue Apr 26 14:34:58 2022 From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=) Date: Tue, 26 Apr 2022 11:34:58 -0300 Subject: [Kolla-ansible][Xena] Error deploying Cloudkitty In-Reply-To: References: Message-ID: Great! I created the patch: https://review.opendev.org/c/openstack/kolla-ansible/+/839393 On Tue, Apr 26, 2022 at 10:01 AM wodel youchi wrote: > Hi, > > It did work, > > Modify > /home/deployer/kollavenv/xenavenv/share/kolla-ansible/ansible/roles/cloudkitty/tasks/bootstrap.yml > like this : > - name: Creating Cloudkitty influxdb database > become: true > kolla_toolbox: > module_name: influxdb_database > module_args: > hostname: "{{ influxdb_address }}" > port: "{{ influxdb_http_port }}" > * ssl: {{ cloudkitty_influxdb_use_ssl }}* > database_name: "{{ cloudkitty_influxdb_name }}" > run_once: True > delegate_to: "{{ groups['cloudkitty-api'][0] }}" > when: cloudkitty_storage_backend == 'influxdb' > > Then declare *cloudkitty_influxdb_use_ssl* variable in globals.yml > > *cloudkitty_influxdb_use_ssl: true* > > Then deploy, it did work. > > How to propose a fix, I do not know how to do that!!! > > Regards. > > Le mer. 20 avr. 2022 à 08:57, Mark Goddard a écrit : > >> Hi Wodel, >> >> Did it work when you added the ssl parameter? If so, could you propose a >> fix for this upstream? >> >> Thanks, >> Mark >> >> On Tue, 19 Apr 2022 at 15:07, wodel youchi >> wrote: >> >>> Hi, >>> I tried to do this >>> vim >>> /home/deployer/kollavenv/xenavenv/share/kolla-ansible/ansible/roles/cloudkitty/defaults/main.yml >>> *cloudkitty_influxdb_use_ssl: "true"* >>> But it didn't work, then I added the same variable to globals.yml but it >>> didn't work.
>>> >>> So finally I edited vim >>> /home/deployer/kollavenv/xenavenv/share/kolla-ansible/ansible/roles/cloudkitty/tasks/bootstrap.yml >>> and added the ssl variable as a workaround >>> >>>> - name: Creating Cloudkitty influxdb database >>>> become: true >>>> kolla_toolbox: >>>> module_name: influxdb_database >>>> module_args: >>>> hostname: "{{ influxdb_address }}" >>>> port: "{{ influxdb_http_port }}" >>>> * ssl: True* >>>> database_name: "{{ cloudkitty_influxdb_name }}" >>>> run_once: True >>>> delegate_to: "{{ groups['cloudkitty-api'][0] }}" >>>> when: cloudkitty_storage_backend == 'influxdb' >>>> >>> >>> >>> I don't know if this would have worked, I just get the idea >>> >>> - name: Creating Cloudkitty influxdb database >>>> become: true >>>> kolla_toolbox: >>>> module_name: influxdb_database >>>> module_args: >>>> hostname: "{{ influxdb_address }}" >>>> port: "{{ influxdb_http_port }}" >>>> * ssl: {{ cloudkitty_influxdb_use_ssl }}* >>>> database_name: "{{ cloudkitty_influxdb_name }}" >>>> run_once: True >>>> delegate_to: "{{ groups['cloudkitty-api'][0] }}" >>>> when: cloudkitty_storage_backend == 'influxdb' >>>> >>> >>> >>> >>> >>> Regards. >>> >>> Le mar. 19 avr. 2022 à 12:37, Rafael Weingärtner < >>> rafaelweingartner at gmail.com> a écrit : >>> >>>> It seems that it was always assumed to be HTTP and not HTTPs: >>>> https://github.com/openstack/kolla-ansible/blob/a52cf61b2234d2f078dd2893dd37de63e20ea1aa/ansible/roles/cloudkitty/tasks/bootstrap.yml#L36 >>>> . >>>> >>>> Maybe, we will need to change that to use SSL whenever needed.
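[Editor's note: the parameterized task this thread converges on would look roughly like the sketch below. One detail worth calling out: the Jinja expression must be quoted to stay valid YAML, so the unquoted `ssl: {{ cloudkitty_influxdb_use_ssl }}` form pasted above would be rejected by the YAML parser. The variable name follows the thread; the `| bool` cast is an extra assumption, not part of the quoted workaround.]

```yaml
# Sketch only: parameterizing the influxdb_database bootstrap task from
# ansible/roles/cloudkitty/tasks/bootstrap.yml. Quoting the Jinja
# expression is required for valid YAML; "| bool" is an added assumption
# so a string "true"/"false" from globals.yml becomes a real boolean.
- name: Creating Cloudkitty influxdb database
  become: true
  kolla_toolbox:
    module_name: influxdb_database
    module_args:
      hostname: "{{ influxdb_address }}"
      port: "{{ influxdb_http_port }}"
      ssl: "{{ cloudkitty_influxdb_use_ssl | bool }}"
      database_name: "{{ cloudkitty_influxdb_name }}"
  run_once: True
  delegate_to: "{{ groups['cloudkitty-api'][0] }}"
  when: cloudkitty_storage_backend == 'influxdb'
```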
>>>> >>>> On Tue, Apr 19, 2022 at 8:19 AM wodel youchi >>>> wrote: >>>> >>>>> Hi, >>>>> >>>>> I tested with influx -host >>>>> First I tested with the internal api IP address of the host itself, >>>>> and it did work : influx -host 10.10.3.9 >>>>> Then I tested with VIP of the internal api, which is held by haproxy : >>>>> influx -host 10.10.3.1, it didn't work, looking in the haproxy >>>>> configuration file of influxdb, I noticed that haproxy uses https in the >>>>> front end, so I tested with : influx -ssl -host 10.10.3.1 and it did work. >>>>> >>>>> And if you see the error message from TASK [cloudkitty : Creating >>>>> Cloudkitty influxdb database], ssl is false >>>>> >>>>> fatal: [192.168.1.5 -> 192.168.1.5]: FAILED! => { >>>>> "action": "influxdb_database", >>>>> "changed": false, >>>>> "invocation": { >>>>> "module_args": { >>>>> "database_name": "cloudkitty", >>>>> "hostname": "dashint.cloud.cerist.dz", >>>>> "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", >>>>> "path": "", >>>>> "port": 8086, >>>>> "proxies": {}, >>>>> "retries": 3, >>>>> *"ssl": false,* >>>>> "state": "present", >>>>> "timeout": null, >>>>> "udp_port": 4444, >>>>> "use_udp": false, >>>>> "username": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", >>>>> "validate_certs": true >>>>> } >>>>> }, >>>>> "msg": "('Connection aborted.', RemoteDisconnected('Remote end >>>>> closed connection without response',))" >>>>> } >>>>> >>>>> Could that be the problem? if yes, how to force Cloudkitty to enable >>>>> ssl? >>>>> >>>>> Regards. >>>>> >>>>> Le mar. 19 avr. 2022 à 07:30, Pierre Riteau a >>>>> écrit : >>>>> >>>>>> Hello, >>>>>> >>>>>> InfluxDB is configured to only listen on the internal API interface.
>>>>>> Can you check the hostname you are using resolves correctly from the >>>>>> cloudkitty host? >>>>>> Inside the influxdb container, you should use `influx -host >>>>>> ` with the internal IP of the influxdb host. >>>>>> >>>>>> Also check if the output of `docker logs influxdb` has any logs. >>>>>> >>>>>> Best wishes, >>>>>> Pierre Riteau (priteau) >>>>>> >>>>>> On Tue, 19 Apr 2022 at 01:24, wodel youchi >>>>>> wrote: >>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> I am trying to deploy Cloudkitty, but I get this error message : >>>>>>> >>>>>>> TASK [cloudkitty : Creating Cloudkitty influxdb database] >>>>>>>> ****************************************************** >>>>>>>> task path: >>>>>>>> /home/deployer/kollavenv/xenavenv/share/kolla-ansible/ansible/roles/cloudkitty/tasks/bootstrap.yml:36 >>>>>>> >>>>>>> >>>>>>> fatal: [192.168.1.5 -> 192.168.1.5]: FAILED! => { >>>>>>>> "action": "influxdb_database", >>>>>>>> "changed": false, >>>>>>>> "invocation": { >>>>>>>> "module_args": { >>>>>>>> "database_name": "cloudkitty", >>>>>>>> "hostname": "dashint.cloud.cerist.dz", >>>>>>>> "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", >>>>>>>> "path": "", >>>>>>>> "port": 8086, >>>>>>>> "proxies": {}, >>>>>>>> "retries": 3, >>>>>>>> "ssl": false, >>>>>>>> "state": "present", >>>>>>>> "timeout": null, >>>>>>>> "udp_port": 4444, >>>>>>>> "use_udp": false, >>>>>>>> "username": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", >>>>>>>> "validate_certs": true >>>>>>>> } >>>>>>>> }, >>>>>>>> "msg": "('Connection aborted.', RemoteDisconnected('Remote end >>>>>>>> closed connection without response',))" >>>>>>>> } >>>>>>> >>>>>>> >>>>>>> >>>>>>> On the influxdb container I did this : >>>>>>> >>>>>>>> [root at controllerb ~]# docker ps | grep inf >>>>>>>> 68b3ebfefbec >>>>>>>> 192.168.1.16:4000/openstack.kolla/centos-source-influxdb:xena >>>>>>>> "dumb-init --single-…"
22 minutes ago Up 22 minutes >>>>>>>> influxdb >>>>>>>> [root at controllerb ~]# docker exec -it influxdb /bin/bash >>>>>>>> (influxdb)[influxdb at controllerb /]$ influx >>>>>>>> Failed to connect to http://localhost:8086: Get >>>>>>>> http://localhost:8086/ping: dial tcp [::1]:8086: connect: >>>>>>>> connection refused >>>>>>>> Please check your connection settings and ensure 'influxd' is >>>>>>>> running. >>>>>>>> (influxdb)[influxdb at controllerb /]$ ps -ef >>>>>>>> UID PID PPID C STIME TTY TIME CMD >>>>>>>> influxdb 1 0 0 Apr18 ? 00:00:00 dumb-init >>>>>>>> --single-child -- kolla_start >>>>>>>> influxdb 7 1 0 Apr18 ? 00:00:01 >>>>>>>> /usr/bin/influxd -config /etc/influxdb/influxdb.conf >>>>>>>> influxdb 45 0 0 00:12 pts/0 00:00:00 /bin/bash >>>>>>>> influxdb 78 45 0 00:12 pts/0 00:00:00 ps -ef >>>>>>>> (influxdb)[influxdb at controllerb /]$ >>>>>>> >>>>>>> >>>>>>> I have no log file for influxdb, the directory is empty. >>>>>>> >>>>>>> Any ideas? >>>>>>> >>>>>>> Regards. >>>>>>> >>>>>> >>>> >>>> -- >>>> Rafael Weingärtner >>>> >>> -- Rafael Weingärtner -------------- next part -------------- An HTML attachment was scrubbed... URL: From lokendrarathour at gmail.com Tue Apr 26 17:47:11 2022 From: lokendrarathour at gmail.com (Lokendra Rathour) Date: Tue, 26 Apr 2022 23:17:11 +0530 Subject: [Triple0 on Centos Stream] Wallaby Overcloud Deployment Issue In-Reply-To: References: Message-ID: Hi Alfredo, It worked after enabling the same URL via lab access. But actually we wanted to do the deployment offline. The undercloud was deployed offline and the container repositories are marked offline. The same can be seen in /etc/containers/containers.conf In the same file I see my container registry as marked. So why is the overcloud deployment trying to reach the internet for such redhat based repositories? Any idea to make the whole system 100% offline? Any advice would be helpful.
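[Editor's note for readers hitting the same 403: judging by the signature URL in the error, the image podman is pulling here is the pod infra (pause) image that `podman play kube` needs, fetched from registry.access.redhat.com (ubi8/pause), which triggers the online sigstore lookup. One common way to keep such a host fully offline is to mirror the pause image into the local registry and point `infra_image` at it in containers.conf; the registry host and tag below are illustrative, not taken from this thread.]

```ini
# /etc/containers/containers.conf -- illustrative fragment.
# "undercloud.ctlplane:8787" stands in for whatever local registry the
# deployment mirrors images into; push a copy of the pause image there
# first, then podman play kube should no longer reach access.redhat.com.
[engine]
infra_image = "undercloud.ctlplane:8787/ubi8/pause:latest"
```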
-Lokendra On Tue, 26 Apr 2022, 19:13 Alfredo Moralejo Alonso, wrote: > Hi, > > What command are you using and how are you configuring the repositories? > > Best regards, > > Alfredo > > > On Tue, Apr 26, 2022 at 3:10 PM Lokendra Rathour < > lokendrarathour at gmail.com> wrote: > >> Hi Team, >> We were trying to install Openstack Wallaby Release on Centos Stream. >> As we were trying the overcloud deployment, getting this error: >> >> 2022-04-26 17:20:51.768 374850 INFO tripleoclient.heat_launcher [-] >> Skipping container image pull. >> 2022-04-26 17:20:51.776 374850 INFO tripleoclient.heat_launcher [-] >> Checking that database is up >> 2022-04-26 17:20:52.251 374850 INFO tripleoclient.heat_launcher [-] >> Checking that message bus (rabbitmq) is up >> Error: Source image rejected: Error reading signature from >> https://access.redhat.com/webassets/docker/content/sigstore/ubi8/pause at sha256=40566c88ebf6242cc33d807270844e351e131b32a51d2c46705c40d7ac9f838e/signature-1: >> status 403 (Forbidden) >> 2022-04-26 17:20:57.687 374850 ERROR >> tripleoclient.v1.overcloud_deploy.DeployOvercloud [-] Exception occured >> while running the command: subprocess.CalledProcessError: Command '['sudo', >> 'podman', 'play', 'kube', >> '/home/stack/overcloud-deploy/overcloud/heat-launcher/heat-pod.yaml']' >> returned non-zero exit status 125. >> 2022-04-26 17:20:57.687 374850 ERROR >> tripleoclient.v1.overcloud_deploy.DeployOvercloud Traceback (most recent >> call last): >> 2022-04-26 17:20:57.687 374850 ERROR >> tripleoclient.v1.overcloud_deploy.DeployOvercloud File >> "/usr/lib/python3.6/site-packages/tripleoclient/command.py", line 34, in run >> >> >> Please suggest. >> >> -- >> ~ Lokendra >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gmann at ghanshyammann.com Tue Apr 26 18:59:04 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 26 Apr 2022 13:59:04 -0500 Subject: [tc][all][ Zed Virtual PTG RBAC discussions Summary In-Reply-To: <1800c9bf145.d19b5ce2503791.7304546509368773732@ghanshyammann.com> References: <1800c9bf145.d19b5ce2503791.7304546509368773732@ghanshyammann.com> Message-ID: <180673dcad3.c1b6b3bc193743.7336286985889622484@ghanshyammann.com> Hello Everyone, We had a discussion today on the RBAC open points on Heat. I tried to capture the key points in the etherpad below - https://etherpad.opendev.org/p/rbac-zed-ptg#L100 We have not concluded the heat question and decided to continue the discussion next Tuesday 14:00 UTC. As meetpad did not work for many of the attendees, we will use Google Meet for the next discussion. Details: - Tuesday 3rd May 14:00 UTC - Video call: https://meet.google.com/gie-derw-eer - https://wiki.openstack.org/wiki/Consistent_and_Secure_Default_Policies_Popup_Team#Meeting Agenda: - https://etherpad.opendev.org/p/rbac-zed-ptg#L100 -gmann ---- On Thu, 21 Apr 2022 01:28:35 -0500 Rabi Mishra wrote ---- > > > On Sat, Apr 9, 2022 at 10:10 AM Ghanshyam Mann wrote: > Hello Everyone, > > I tried to attend the RBAC-related sessions on various projects[i] but I am sure I might have missed a few of them. I am summarizing > the RBAC discussion on what open questions were from the project side and what we discussed in TC PTG. Feel free to append > the discussion you had in your project or any query you want TC to solve. > > Current status: > ------------------ > * I have started this etherpad[ii] to track the status of this goal, please keep it up to date as you progress the work in your project. > > Open question: > ------------------ > 1. heat create_stack API calling the mixed scope APIs (for example create flavor and create server). What is the best scope for the heat API so that > we do not have any security leak? We have not concluded the solution yet as we need the heat team to also join the discussion and agree on that.
> But we have a few possible solutions listed below: > > ** Heat accepts stack API with system scope > *** This means a stack with system resources would require a system admin role => Need to check with services relying on Heat > ** Heat assigns a project-scope role to the requester during a processing stack operation and uses this project scope credential to manage project resources > ** Heat starts accepting the new header accepting the extra token (say SYSTEM_TOKEN) and uses that to create/interact the system-level resource like create flavor. > > This is probably more complex than what we think :) I would expect keystone to provide full backward compatibility (i.e. toggle off srbac), so that existing heat stacks in upgraded deployments work as before. As for the different options mentioned above, > - IMO, heat assigning a project-scoped role to a user dynamically is probably out of consideration. > - Introducing hacks in heat to switch tokens when creating/updating different resources of a stack (assuming we get multiple system/project scoped tokens with authentication) is also not a good idea either. Also the fact heat still relies on keystone trusts (used for long running tasks[1] and signaling) would make it complicated. > Let's discuss in the next scheduled call. > [1] https://github.com/openstack/heat/blob/master/heat/common/config.py#L130 > > 2. How to isolate the host level attribute in GET APIs? (cinder and manila have the same issue). Cinder GET volume API response has > the host information. One possible solution we discussed is to have a separate API to show the host information to the system user and > the rest of the volume response to the project users only. This is similar to what we have in nova. > > Then we have a few questions from the Tacker side, where the tacker create_vnf API internally calls heat create_stack and they are planning to > make create_vnf API for non-admin users.
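[Editor's note: the mixed-scope problem in open question 1 can be made concrete with a toy sketch. The names below are illustrative only, not real OpenStack APIs: a "stack" that needs both a system-scoped action (create flavor) and a project-scoped action (create server) cannot be authorized by any single token, whichever scope it carries, which is exactly why the options above reach for a second token or a dynamic role assignment.]

```python
# Toy sketch of the scope problem; all names here are illustrative,
# not real OpenStack APIs or oslo.policy calls.
REQUIRED_SCOPE = {
    "create_flavor": "system",   # admin-level resource in the example
    "create_server": "project",  # tenant-level resource
}

def enforce(action, token_scope):
    """Return True if a token of the given scope may perform the action."""
    return REQUIRED_SCOPE[action] == token_scope

def stack_denied_actions(token_scope):
    """Actions of the two-action 'stack' that one token cannot perform."""
    return [a for a, s in REQUIRED_SCOPE.items() if s != token_scope]

# Whichever scope the token has, one of the two actions is denied:
# a project token cannot create the flavor, a system token cannot
# create the server.
```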
> > Direction on enabling the enforce scope by default > ------------------------------------------------------------ > As keystone, nova, and neutron are ready with the new RBAC, we wanted to enable the scope checks by default. But after seeing the > lack of integration testing and the above-mentioned open question (especially heat and any deployment project breaking) we decided to hold > it. As the first step, we will migrate the tempest tests to the new RBAC and will enable the scope for these services in devstack. And based on the > testing results we will decide on it. But after seeing the amount of work needed in Tempest and on the open question, I do not think we will be able > to do it in the Zed cycle. Instead, we will target to enable the 'new defaults' by default. > > We ran out of time in TC and will continue to discuss these in policy popup meetings. I will push the schedule to the ML. > > [i] https://etherpad.opendev.org/p/rbac-zed-ptg > [ii] https://etherpad.opendev.org/p/rbac-goal-tracking > > -gmann > > > > -- > Regards, Rabi Mishra > From gmann at ghanshyammann.com Tue Apr 26 22:46:14 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 26 Apr 2022 17:46:14 -0500 Subject: [all][qa] Dropping centos-8-stream support and testing Message-ID: <180680dc549.12024ba50198289.2852901730952867408@ghanshyammann.com> Hello Everyone, As you know, in the Zed cycle we are targeting centos-9-stream in testing runtime[1] and dropping the py3.6 support. Dropping the py3.6 support means we stop testing py3.6 and require "python_requires = >=3.8". For example, nova - https://github.com/openstack/nova/blob/5f5551448dcfcde26095963e223f973b90e6f637/setup.cfg#L13 With that, centos-8-stream jobs are failing 100% and in the QA meeting we decided to drop the support in devstack[2]. Dropping support in devstack means: 1. we will stop testing it[3] 2. Drop support from stack.sh.
By disabling centos-8-stream from the supported_distro by default[4] in stack.sh (you can enable it by running with the force flag). The 1st is done and the patch to remove the centos-8-stream job is merged (along with in Tempest and nova). For the 2nd one we will wait 1 more week and will discuss it in the next QA meeting. Please start removing or replacing the c8s jobs with c9s in your projects. You can keep gerrit topic 'drop-c8s-testing' to know the overall status. [1] https://governance.openstack.org/tc/reference/runtimes/zed.html [2] https://meetings.opendev.org/meetings/qa/2022/qa.2022-04-26-15.00.log.html#l-96 [3] https://review.opendev.org/q/topic:drop-c8s-testing [4] https://github.com/openstack/devstack/blob/48417ca241cacff8f4398910792489a59a359afb/stack.sh#L232 -gmann From fungi at yuggoth.org Tue Apr 26 23:24:49 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 26 Apr 2022 23:24:49 +0000 Subject: [all][qa] Dropping centos-8-stream support and testing In-Reply-To: <180680dc549.12024ba50198289.2852901730952867408@ghanshyammann.com> References: <180680dc549.12024ba50198289.2852901730952867408@ghanshyammann.com> Message-ID: <20220426232448.y2bfwzkxe73xwuux@yuggoth.org> On 2022-04-26 17:46:14 -0500 (-0500), Ghanshyam Mann wrote: > As you know, in zed cycle we are targeting centos-9-stream in > testing runtime[1] and dropping the py3.6 support. [...] Just a reminder, RHEL 9 is still only in beta and I've seen no indication it will necessarily be released by September, so there's every chance Zed will not be usable on RHEL at release. I know that came up as a reason to keep 3.6 testing the last time we tried to remove it (for Yoga), so figure it's worth pointing out again. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From gmann at ghanshyammann.com Tue Apr 26 23:31:41 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 26 Apr 2022 18:31:41 -0500 Subject: [all] Resend New CFN(Computing Force Network) SIG Proposal In-Reply-To: <16b901d85925$1c9505a0$55bf10e0$@com> References: <18061d9a035.cb6bda15115439.2847955073200685698@ghanshyammann.com> <16b901d85925$1c9505a0$55bf10e0$@com> Message-ID: <18068375fe1.e1468dfa198843.6304495179716775789@ghanshyammann.com> ---- On Mon, 25 Apr 2022 23:21:32 -0500 niujie wrote ---- > Hi Ghanshyam, > > Thanks for forwarding the proposal. > > Yes, we will figure out exact changes (requirements) based on the exact use case and OpenStack component scope by further discussion. > As for the application migration, currently we don't have a plan for a new tooling project; we could probably start with a tool (based on the discussion), but the ultimate goal is beyond just tooling. The vision of CFN is to achieve an ecosystem for development, where any application developed on this infrastructure could be migrated to any heterogeneous platforms. This may include building a compiling platform on heterogeneous infrastructure, drafting standardization for low-level code development, etc. > > You are right that CFN will not just bring changes to the existing OpenStack components, but also brings potential new source code components; we don't have such a list/proposal for a new component right now, that's why we would like to raise the CFN topic here, and based on the discussion with global wisdoms, we will figure out the next step. > > It is a good idea to start with a SIG; we can first start the discussion here, and maybe re-evaluate as it goes. > > We have a brief CFN introduction slide, and shall I add a topic in the TC weekly meeting agenda?
I am fine to discuss it in the TC meeting, but I saw updates (ML reply too) from Ildiko that there is some discussion going on between the foundation and your team, and it seems the CFN scope is more than an OpenStack SIG and a SIG is not the right place to start with, while you think a SIG can be a good place to start (I also have no objection to that). I think, to have everyone on the same page, we need to discuss it together; I am ok to have that in the TC weekly meeting including your team, the foundation, and the TC, or in a separate call. Whatever works for you and Ildiko (and other foundation staff), please let me know. In the end, it does not matter much whether we start it as a SIG or a separate infra project. Irrespective of the place, whatever you need from the OpenStack community we will be supporting/implementing as per the use case and scope. [1] http://lists.openstack.org/pipermail/openstack-discuss/2022-April/028311.html -gmann > > Thanks > Jie Niu > > > > > > -----Original Message----- > From: Ghanshyam Mann [mailto:gmann at ghanshyammann.com] > Sent: Tuesday, April 26, 2022 1:52 AM > To: niujie > Cc: openstack-discuss; sunny; 'Horace Li'; huang.shuquan; gong.yongsheng; shane.wang; jian-feng.ding; wangshengyjy; yuzhiqiang; zhangxiaoguang; xujianwl > Subject: Re: [all] Resend New CFN(Computing Force Network) SIG Proposal > > Thanks, Niu, for the proposal and sorry for the delay in response. > > I have raised this proposal to TC members and asked them to check it. Overall the proposal seems > interesting to me, but a few initial queries inline. > > > ---- On Wed, 13 Apr 2022 00:34:30 -0500 niujie wrote ---- > > > > Hi all > > I sent an email yesterday about a New CFN (Computing Force Network) SIG Proposal. I tried to recall it because there was a typo in the email address, then I got a 'recall failed' msg, so I assume the email was sent out successfully, and plan to keep it as it was. > > But I found that the 'recall'
action was logged in pipermail, it might cause misunderstanding; we are sure about proposing a new SIG, so I'm sending this again, sorry for the email flood :) > > I'm from China Mobile. China Mobile is recently working on building a new information infrastructure focusing on connectivity, computing power, and capabilities; this new information infrastructure is called Computing Force Network. We think the OpenStack community, which gathers global wisdom together, is a perfect platform to discuss topics like CFN, so we are proposing to create a new SIG for CFN (Computing Force Network). Below is a brief CFN introduction and the initial SIG scope. > > With the flourish of new business scenarios such as hybrid cloud, multi-cloud, AI, big data processing, and edge computing, building a new information infrastructure based on multiple key technologies that converge cloud and network will better support global digital transformation. This new infrastructure not only relates to cloud, it is getting more and more connected with network, and at the same time, we also need to consider how to converge multiple technologies like AI, Blockchain, big data, and security to provide this all-in-one service. > > Computing Force Network (CFN) is a new information infrastructure that is based on network, focused on computing, and deeply converges Artificial intelligence, Block chain, Cloud, Data, Network, Edge computing, End application, and Security (ABCDNETS), providing all-in-one services. > > Xiaodong Duan, Vice president of China Mobile Research Institute, introduced the vision and architecture of Computing Force Network in the 2021 November OpenInfra Live Keynotes by his presentation Connection + Computing + Capability Opens a New Era of Digital Infrastructure, where he proposed the new era of CFN.
> > We are expecting to work with OpenStack on how to build this new information infrastructure and how to promote the development and implementation of next-generation infrastructure, achieving ubiquitous computing force, computing & network convergence, intelligence orchestration, and all-in-one service. Then computing force will become a common utility like water and electricity step by step; computing force will be ready for access upon use and connected through a single entry point. > > The above vision of CFN, from a technical perspective, will mainly focus on unified management and orchestration of a computing + network integrated system, with computing and network deeply converged in the architecture, form, and protocol aspects, bringing potential changes to OpenStack components. CFN is aiming to achieve seamless migration of any application between any heterogeneous platforms; it's a challenge for the industry currently, and we feel that the pursuit of CFN could potentially contribute to the development and evolution of OpenStack. > > Yes, it will require changes to OpenStack components, but we will see based on the exact use case and OpenStack component scope. Does this include application migration tooling in OpenStack?
> > > > In this CFN SIG, we will mainly focus on discussing how to build the new information infrastructure of CFN and its related key technologies, and what impact the network & cloud convergence trend brings to OpenStack. The topics include but are not limited to: > > 1. A computing basement for unified management of containers, VMs, and Bare Metal > > 2. Computing infrastructure which eliminates the difference between heterogeneous hardware > > 3. Measurement criteria and a scheduling scheme based on unified computing infrastructure > > 4. Network solutions for SDN integrating smart NICs for the data center > > 5. Unified orchestration & management for "network + cloud", and a "cloud + edge + end" integrated scheduling solution > > We will have regular meetings to investigate and discuss business scenarios, development trends, and technical schemes, release technical documents, technical proposals, and requirements for OpenStack projects, and propose new projects when necessary. > > We will also collaborate with other open source projects like LFN, CNCF, and LFE to have a consistent plan across communities, and align with global standardization organizations like ETSI, 3GPP, and IETF to promote CFN-related technical schemes becoming industry standards. > > If you have any thoughts, interests, questions, or requirements, we can discuss them on this mailing list. > > Thanks for the detailed information about the SIG scope. From the above, I understood that it will involve not just changes to existing OpenStack > components but also new source code components. Do you have a list/proposal for a new component, or would you like to continue discussing > it and find out based on that? How are you thinking about their (new components', if any) releases, e.g. a coordinated > release with OpenStack or an independent one? If coordinated, then it is more than SIG scope and it might be good to add a new project.
> > By seeing the scope of this proposal (which seems very wide), I think it is not required to answer all of them now. Overall I am ok to start > it as a SIG, and based on discussion/progress evaluation we will get to know more about new components, requirements, etc., and then we can > change it from a SIG to a new project under OpenStack or other governance (based on the core/requirement/use case it produces). > > -gmann > > > Any suggestions are welcomed, and we are really hoping to hear from anyone, and work with you. > > > > Jie Niu > > China Mobile > > > > > > From gmann at ghanshyammann.com Tue Apr 26 23:43:57 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 26 Apr 2022 18:43:57 -0500 Subject: [all][qa] Dropping centos-8-stream support and testing In-Reply-To: <20220426232448.y2bfwzkxe73xwuux@yuggoth.org> References: <180680dc549.12024ba50198289.2852901730952867408@ghanshyammann.com> <20220426232448.y2bfwzkxe73xwuux@yuggoth.org> Message-ID: <18068429ce8.d8c36c0b198965.1273654656450168204@ghanshyammann.com> ---- On Tue, 26 Apr 2022 18:24:49 -0500 Jeremy Stanley wrote ---- > On 2022-04-26 17:46:14 -0500 (-0500), Ghanshyam Mann wrote: > > As you know, in zed cycle we are targeting centos-9-stream in > > testing runtime[1] and dropping the py3.6 support. > [...] > > Just a reminder, RHEL 9 is still only in beta and I've seen no > indication it will necessarily be released by September, so there's > every chance Zed will not be usable on RHEL at release. I know that > came up as a reason to keep 3.6 testing the last time we tried to > remove it (for Yoga), so figure it's worth pointing out again. Yes, that is a separate discussion and we already discussed a lot about dropping py3.6 in the Yoga cycle. We said at that time that we would remove the support in the Zed cycle, and the testing runtime is defined and updated on the ML accordingly. I did not see any objection to that, at least during these two months since we dropped the testing.
This discussion is about whether we want to continue testing centos-8-stream even though we dropped the py3.6 support, which can be done with python 3.8. -gmann > -- > Jeremy Stanley > From smooney at redhat.com Wed Apr 27 00:15:41 2022 From: smooney at redhat.com (Sean Mooney) Date: Wed, 27 Apr 2022 01:15:41 +0100 Subject: [all][qa] Dropping centos-8-stream support and testing In-Reply-To: <18068429ce8.d8c36c0b198965.1273654656450168204@ghanshyammann.com> References: <180680dc549.12024ba50198289.2852901730952867408@ghanshyammann.com> <20220426232448.y2bfwzkxe73xwuux@yuggoth.org> <18068429ce8.d8c36c0b198965.1273654656450168204@ghanshyammann.com> Message-ID: <05a77b2d4b1f0464778927a64d8d556bc8d06565.camel@redhat.com> On Tue, 2022-04-26 at 18:43 -0500, Ghanshyam Mann wrote: > ---- On Tue, 26 Apr 2022 18:24:49 -0500 Jeremy Stanley wrote ---- > > On 2022-04-26 17:46:14 -0500 (-0500), Ghanshyam Mann wrote: > > > As you know, in zed cycle we are targeting centos-9-stream in > > > testing runtime[1] and dropping the py3.6 support. > > [...] > > > > Just a reminder, RHEL 9 is still only in beta and I've seen no > > indication it will necessarily be released by September, so there's > > every chance Zed will not be usable on RHEL at release. > at least from a redhat product perspective, redhat does not support installing upstream openstack on rhel, so those choosing to install openstack on rhel outside of our productised version would be doing so without support from redhat. as such i don't think the status of rhel 9 is really a concern here. redhat is currently working on our next major version of redhat's openstack platform (osp 17), which will be based on rhel 9 and will be released before the zed cycle completes. it will not be based on zed; zed will likely form the basis of osp 18, which will not be released this year.
from a rhel 8 perspective, python 3.8 was never intended for production deployment; only python 3.6 was fully productised, and other interpreters were only provided for development and testing, so 3.6 was the only fully supported production runtime. > I know that > > came up as a reason to keep 3.6 testing the last time we tried to > > remove it (for Yoga), so figure it's worth pointing out again. > > Yes, that is a separate discussion and we already discussed a lot about > dropping py3.6 in the Yoga cycle. We said at that time that we would remove the > support in the Zed cycle, and the testing runtime is defined and updated > on the ML accordingly. I did not see any objection to that, at least during these two months > since we dropped the testing. > > This discussion is about whether we want to continue testing centos-8-stream even though we dropped > the py3.6 support, which can be done with python 3.8. i really don't think that adds significant value. if we were to look at the redhat downstream platform, then stable wallaby and zed will both run on python 3.9 on rhel 9.x; testing with 3.8 won't align with what will be productised. so i think it would be better to focus on testing rdo with centos 9 stream only for master/zed rather than investing effort into 3.8. for what it's worth, i think centos 8 with 3.8 actually does work today; i'm pretty sure i have deployed with that combination in the past. i just don't really see that adding a lot of value vs running a centos 9 stream job with 3.9. ubuntu 20.04 provides 3.8 coverage and the rest is distro/packaging related, so unless rdo or one of the other installers specifically asks for centos 8 stream, i don't think we need to keep the nodeset for master. we do still require it for stable branches, but for installers like devstack i think it's long past time that we pushed people to move to centos 9 stream instead, or any other recent release of your preferred distro.
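To make the interpreter gap in this thread concrete, here is a minimal sketch of the version gate implied by dropping py3.6 in favour of py3.8+; the version numbers come from the thread, while the code itself is illustrative and not from any OpenStack project:

```python
# Sketch of the interpreter floor discussed above: Zed drops py3.6 and
# tests with py3.8+. Versions come from the thread; the code is illustrative.
import sys

MIN_VERSION = (3, 8)  # Zed testing-runtime floor mentioned in the thread

def supported(version_info=sys.version_info):
    """Return True if the given interpreter version meets the floor."""
    return tuple(version_info[:2]) >= MIN_VERSION

print(supported((3, 6, 15)))  # CentOS 8 default python3 -> False
print(supported((3, 9, 2)))   # CentOS Stream 9 python3 -> True
```

In packaging metadata the same floor is normally expressed declaratively, e.g. `python_requires = >=3.8` in setup.cfg, rather than checked at runtime.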
after all, upstream we do not test rhel and never have; we use centos as a proxy, and centos 9 stream is fully released and ready to use. > > -gmann > > > -- > > Jeremy Stanley > > > From noonedeadpunk at gmail.com Wed Apr 27 02:29:25 2022 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Wed, 27 Apr 2022 04:29:25 +0200 Subject: [openstack-ansible] Nominate Damian Dąbrowski for openstack-ansible core team In-Reply-To: <7A74639F-F049-4840-A5BD-F2538995E182@demarco.com> References: <7A74639F-F049-4840-A5BD-F2538995E182@demarco.com> Message-ID: I have added Damian to our core team! Warm welcome! Tue, 19 Apr 2022, 17:31 Amy : > +2 Welcome > > Amy (spotz) > > > On Apr 19, 2022, at 9:52 AM, Andrew Bonney > wrote: > > > > Sounds good to me! > > > > -----Original Message----- > > From: Jonathan Rosser > > Sent: 19 April 2022 15:24 > > To: openstack-discuss at lists.openstack.org > > Subject: Re: [openstack-ansible] Nominate Damian Dąbrowski for > openstack-ansible core team > > > > +2 Welcome Damian! > > > >> On 19/04/2022 10:39, Dmitriy Rabotyagov wrote: > >> Hi OSA Cores! > >> > >> I'm happy to nominate Damian Dąbrowski (damiandabrowski) to the core > >> reviewers team. > >> > >> He has been doing a good job lately in reviewing incoming patches, > >> helping out in IRC and participating in community activities, so I > >> think he will be a good match for the Core Reviewers group. > >> > >> So I call for current Core Reviewers to support this nomination or > >> raise objections to it until 22nd of April 2022. If no objections are > >> raised we will add Damian to the team next week. > >> > >> -- > >> Kind regards, > >> Dmitriy Rabotyagov > > > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From rdhasman at redhat.com Wed Apr 27 07:18:54 2022 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Wed, 27 Apr 2022 12:48:54 +0530 Subject: [cinder] This week's meeting (Today) will be in video+IRC Message-ID: Hello Argonauts, This is a reminder email that this week's meeting, i.e. today (27 April), will be in video + IRC mode. The timings are the same as the regular cinder meeting, i.e. 1400-1500 UTC. The meeting will be held in video[1] and IRC[2]. The reason to keep it both ways is that some people are more comfortable with written communication than verbal, and vice versa, so we can hold discussions in video or IRC as per the author's wish. Also, the roll call will happen on IRC and a summary will be given there as well, so make sure you're connected to the IRC channel during the meeting. [1] connection info: https://bluejeans.com/556681290 [2] openstack-meeting-alt channel on IRC Thanks and regards Rajat Dhasmana -------------- next part -------------- An HTML attachment was scrubbed... URL: From lokendrarathour at gmail.com Wed Apr 27 07:31:07 2022 From: lokendrarathour at gmail.com (Lokendra Rathour) Date: Wed, 27 Apr 2022 13:01:07 +0530 Subject: [OpenStack TripleO Wallaby] Deployment error Message-ID: Hi Team, we tried an OpenStack deployment using *TripleO Wallaby*. While deploying the setup, the Undercloud was deployed successfully.
For the Overcloud Deployment, we generated the templates:

Command: /usr/share/openstack-tripleo-heat-templates/tools/process-templates.py -o ~/openstack-tripleo-heat-templates-rendered_at_wallaby -n /home/stack/templates/network_data.yaml -r /home/stack/templates/roles_data.yaml

and used the below command to deploy the overcloud:

openstack overcloud deploy --templates \
 -n /home/stack/templates/network_data.yaml \
 -r /home/stack/templates/roles_data.yaml \
 -e /home/stack/templates/environment.yaml \
 -e /home/stack/templates/environments/network-isolation.yaml \
 -e /home/stack/templates/environments/network-environment.yaml \
 -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml \
 -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml \
 -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml \
 -e /home/stack/templates/ironic-config.yaml \
 -e /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \
 -e /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \
 -e /home/stack/containers-prepare-parameter.yaml

after which it fails with the error:

2022-04-27 12:34:00.801 607453 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud [-] Exception occured while running the command: ValueError: Failed to deploy: ERROR: HEAT-E99001 Service neutron is not available for resource type OS::TripleO::Network::Ports::ControlPlaneVipPort, reason: *neutron network endpoint is not in service catalog.* Traceback (most recent call last):

We were able to deploy TripleO Victoria/Train/Ussuri using the same approach. Please suggest any further changes we need to include in our setup. -Lokendra -------------- next part -------------- An HTML attachment was scrubbed...
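For context on the HEAT-E99001 error above: it indicates that the "network" (neutron) service type is missing from the undercloud's Keystone service catalog, which can be inspected with `openstack catalog list`. A minimal sketch of that kind of catalog membership check, using made-up catalog data rather than anything from this deployment:

```python
# Sketch: check whether a service type (e.g. "network" for neutron) exists
# in a Keystone-style service catalog. The catalog data is illustrative,
# not taken from the failing deployment above.
import json

catalog_json = """
[
  {"type": "identity", "name": "keystone",
   "endpoints": [{"interface": "public", "url": "http://192.0.2.1:5000"}]},
  {"type": "placement", "name": "placement",
   "endpoints": [{"interface": "public", "url": "http://192.0.2.1:8778"}]}
]
"""

def has_service(catalog, service_type):
    """Return True if any catalog entry has the given service type."""
    return any(entry.get("type") == service_type for entry in catalog)

catalog = json.loads(catalog_json)
print(has_service(catalog, "network"))   # -> False, the condition behind HEAT-E99001
print(has_service(catalog, "identity"))  # -> True
```

If such a check comes back empty on a real undercloud, the usual next step is to verify that the neutron endpoints were registered in Keystone before running the overcloud deploy.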
URL: From strigazi at gmail.com Wed Apr 27 08:49:18 2022 From: strigazi at gmail.com (Spyros Trigazis) Date: Wed, 27 Apr 2022 10:49:18 +0200 Subject: [magnum] Proposing Michal Nasiadka for core-reviewer Message-ID: Dear all, I would like to nominate Michal Nasiadka for core reviewer in the magnum project. Michal has been helping with reviews for the past cycles and proposed features and fixes regularly. He would be a great addition to the team. Cheers, Spyros -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Wed Apr 27 12:26:08 2022 From: amy at demarco.com (Amy Marrich) Date: Wed, 27 Apr 2022 07:26:08 -0500 Subject: RDO Yoga Released Message-ID: The RDO community is pleased to announce the general availability of the RDO build for OpenStack Yoga for RPM-based distributions, CentOS Stream and Red Hat Enterprise Linux. RDO is suitable for building private, public, and hybrid clouds. Yoga is the 25th release from the OpenStack project, which is the work of more than 1,000 contributors from around the world. The release is already available on the CentOS mirror network: - For CentOS Stream 8 http://mirror.centos.org/centos/8-stream/cloud/x86_64/openstack-yoga - For CentOS Stream 9 http://mirror.stream.centos.org/SIGs/9-stream/cloud/x86_64/openstack-yoga/ The RDO community project curates, packages, builds, tests and maintains a complete OpenStack component set for RHEL and CentOS Stream and is a member of the CentOS Cloud Infrastructure SIG. The Cloud Infrastructure SIG focuses on delivering a great user experience for CentOS users looking to build and maintain their own on-premise, public or hybrid clouds. All work on RDO and on the downstream release, Red Hat OpenStack Platform, is 100% open source, with all code changes going upstream first. Interesting things in the Yoga release include: - RDO Yoga is the first RDO version built and tested for CentOS Stream 9. 
- In order to ease the transition from CentOS Stream 8, RDO Yoga is also built and tested for CentOS Stream 8. Note that the next release of RDO will be available only for CentOS Stream 9. The highlights of the broader upstream OpenStack project may be read via https://releases.openstack.org/yoga/highlights.html TripleO in the RDO Yoga release: Since the Xena development cycle, TripleO follows the Independent release model and will only maintain branches for selected OpenStack releases. In the case of Yoga, TripleO will not support the Yoga release. For TripleO users in RDO, this means that: - RDO Yoga will include packages for TripleO tested at OpenStack Yoga GA time. - Those packages will not be updated during the entire Yoga maintenance cycle. - RDO will not be able to include patches required to fix bugs in TripleO on RDO Yoga. - The lifecycle for the non-TripleO packages will follow the code merged and tested in upstream stable/yoga branches. - There will not be any TripleO Yoga container images built/pushed, so interested users will have to do their own container builds when deploying Yoga. You can find details about this on the RDO webpage. *Contributors* During the Yoga cycle, we saw the following new RDO contributors: - Adriano Vieira Petrich - Andrea Bolognani - Dariusz Smigiel - David Vallee Delisle - Douglas Viroel - Jakob Meng - Lucas Alvares Gomes - Luis Tomas Bolivar - T. Nichole Williams - Karolina Kula Welcome to all of you and Thank You So Much for participating! But we wouldn't want to overlook anyone. A super massive Thank You to all 40 contributors who participated in producing this release.
This list includes commits to rdo-packages, rdo-infra, and redhat-website repositories: - Adriano Vieira Petrich - Alan Bishop - Alan Pevec - Alex Schultz - Alfredo Moralejo - Amy Marrich (spotz) - Andrea Bolognani - Chandan Kumar - Daniel Alvarez Sanchez - Dariusz Smigiel - David Vallee Delisle - Douglas Viroel - Emma Foley - Gaël Chamoulaud - Gregory Thiemonge - Harald - Jakob Meng - James Slagle - Jiri Podivin - Joel Capitao - Jon Schlueter - Julia Kreger - Kashyap Chamarthy - Lee Yarwood - Lon Hohberger - Lucas Alvares Gomes - Luigi Toscano - Luis Tomas Bolivar - Martin Kopec - mathieu bultel - Matthias Runge - Riccardo Pittau - Sergey - Stephen Finucane - Steve Baker - Takashi Kajinami - T. Nichole Williams - Tobias Urdin - Karolina Kula - User otherwiseguy - Yatin Karel *The Next Release Cycle* At the end of one release, focus shifts immediately to the next release, i.e. Zed. *Get Started* To spin up a proof of concept cloud, quickly, and on limited hardware, try an All-In-One Packstack installation. You can run RDO on a single node to get a feel for how it works. Finally, for those that don't have any hardware or physical resources, there's the OpenStack Global Passport Program. This is a collaborative effort between OpenStack public cloud providers to let you experience the freedom, performance and interoperability of open source infrastructure. You can quickly and easily gain access to OpenStack infrastructure via trial programs from participating OpenStack public cloud providers around the world. *Get Help* The RDO Project has the users at lists.rdoproject.org list for RDO-specific users and operators. For more developer-oriented content we recommend joining the dev at lists.rdoproject.org mailing list. Remember to post a brief introduction about yourself and your RDO story. The mailing list archives are all available at https://mail.rdoproject.org. You can also find extensive documentation on RDOproject.org. The #rdo channel on OFTC.
IRC is also an excellent place to find and give help. We also welcome comments and requests on the CentOS devel mailing list and the CentOS and TripleO IRC channels (#centos and #centos-devel on the Libera Chat network, and #tripleo on OFTC); however, we have a more focused audience within the RDO venues. *Get Involved* To get involved in the OpenStack RPM packaging effort, check out the RDO contribute pages, peruse the CentOS Cloud SIG page, and inhale the RDO packaging documentation. Join us in #rdo and #tripleo on the OFTC IRC network and follow us on Twitter @RDOCommunity. You can also find us on Facebook and YouTube. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rlandy at redhat.com Wed Apr 27 12:48:51 2022 From: rlandy at redhat.com (Ronelle Landy) Date: Wed, 27 Apr 2022 08:48:51 -0400 Subject: [TripleO] Tear down of Train CentOS 7 check jobs and integration line Message-ID: Hello All, We have been running check/gate testing and integration lines for the Train release on both CentOS 7 and CentOS 8. Following the work on CentOS 9 and the longevity of the Train release, we proposed removing the CentOS 7 Train jobs and building/testing changes to this release on CentOS 8 only. We floated this proposal with some interested parties and received no objections, so we are beginning work on this tear down. We anticipate that removing the CentOS 7 jobs will allow patches to merge quicker and will free up resources for future work. Please respond if you have any questions or concerns. Thanks, Ronelle (for the TripleO CI team) -------------- next part -------------- An HTML attachment was scrubbed... URL: From senrique at redhat.com Wed Apr 27 12:50:30 2022 From: senrique at redhat.com (Sofia Enriquez) Date: Wed, 27 Apr 2022 09:50:30 -0300 Subject: [cinder] Bug deputy report for week of 04-26-2022 Message-ID: This is a bug report from 04-20-2022 to 04-26-2022.
Agenda: https://etherpad.opendev.org/p/cinder-bug-squad-meeting ----------------------------------------------------------------------------------------- Medium - https://bugs.launchpad.net/os-brick/+bug/1969794 "backport of the fix for bug #1947370 makes lock_path a required config option when previously it was optional." Unassigned. - https://bugs.launchpad.net/cinder/+bug/1969373 "Simultaneous volume creation with the same image in multi-attach mode returns error." Assigned to Rajat. - https://bugs.launchpad.net/cinder/+bug/1969531 "NetApp NFS Storage Migration between backends is Failing." Unassigned. - https://bugs.launchpad.net/cinder/+bug/1969643 "RBD: Unable to delete a volume which has snapshot/volume children." Assigned to Sofia Enriquez and Eric Harney. No fix proposed to master yet. - https://bugs.launchpad.net/cinder/+bug/1969784 "[Pure Storage] Replicated array communication failure not handled correctly." Assigned to Simon Dodsley. No fix proposed to master yet. Low - https://bugs.launchpad.net/cinder/+bug/1967683 "Wrong property to look up remote address." Fix proposed to master. - https://bugs.launchpad.net/cinder/+bug/1970237 "[RBD] Multiple full backups cannot be created from one snapshot." Assigned to liuhuajie. No patch proposed to master yet. - https://bugs.launchpad.net/cinder/+bug/1969913 "[Documentation] Migration in cinder, lvm-rbd example is not correct." Unassigned. - https://bugs.launchpad.net/cinder/+bug/1966904 "Complex config formula evaluation causes RecursionError." Fix proposed to master. Wishlist - https://bugs.launchpad.net/cinder/+bug/1970208 "Rebranding. Dell EMC must be renamed to Dell." Assigned to Alexander Malashenko. Fix proposed to master. - https://bugs.launchpad.net/cinder/+bug/1970115 "[Enhancement] No way to disable image conversion." Assigned to Rico Lin. No fix proposed to master yet. - https://bugs.launchpad.net/cinder/+bug/1970114 "[Enhancement] Image conversion with RBD is not efficient." Unassigned.
No fix proposed to master yet. Cheers, -- Sofía Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Wed Apr 27 12:59:48 2022 From: marios at redhat.com (Marios Andreou) Date: Wed, 27 Apr 2022 15:59:48 +0300 Subject: [TripleO] Tear down of Train CentOS 7 check jobs and integration line In-Reply-To: References: Message-ID: On Wed, Apr 27, 2022 at 3:56 PM Ronelle Landy wrote: > > Hello All, > > We have been running check/gate testing and integration lines for the Train release on both CentOS 7 and CentOS 8. > > Following the work on CentOS 9 and the longevity of the Train release, we proposed removing the CentOS 7 Train jobs and building/testing changes to this release on CentOS 8 only. We floated this proposal with some interested parties and received no objections, so we are beginning work on this tear down. > we also discussed this at the PTG recently during one of the CI sessions [1], and there were no objections raised there. the topic branch for the c7 jobs removal is at [2]. thanks, marios [1] https://etherpad.opendev.org/p/tripleo-zed-ci-load [2] https://review.opendev.org/q/topic:ooo_c7_teardown > We anticipate that removing the CentOS 7 jobs will allow patches to merge quicker and will free up resources for future work. Please respond if you have any questions or concerns.
> > Thanks, > Ronelle (for the TripleO CI team) > > From niujie at chinamobile.com Wed Apr 27 03:22:05 2022 From: niujie at chinamobile.com (niujie) Date: Wed, 27 Apr 2022 11:22:05 +0800 Subject: [all] Resend New CFN(Computing Force Network) SIG Proposal In-Reply-To: <18068375fe1.e1468dfa198843.6304495179716775789@ghanshyammann.com> References: <18061d9a035.cb6bda15115439.2847955073200685698@ghanshyammann.com> <16b901d85925$1c9505a0$55bf10e0$@com> <18068375fe1.e1468dfa198843.6304495179716775789@ghanshyammann.com> Message-ID: <1a1b01d859e5$f910cbc0$eb326340$@com> Thanks Ghanshyam, Actually I saw Ildiko's email after my email was sent out :) My email was held for several hours for moderator approval before it arrived because of too many recipients; sorry for my expression about the SIG and the confusion it might cause. For the new group proposal, we were not very clear on what kind of group is appropriate; a SIG is the only option we know :). Right now, we don't want to disobey any of the community's guidance, so it seems better that we do not continue the SIG or working-group proposal discussion until we get a recommendation from the community. So we will probably wait for guidance before making the next step (or group discussion); is that OK? As for the CFN topic, if anyone is interested and wants to know more about what CFN is, we have an introduction slide; it is the same one we shared with the foundation. If you think it's good to share it in the TC weekly meeting or any other form of sharing session, please tell us and we can share it. Thank you! Jie -----Original Message----- From: Ghanshyam Mann [mailto:gmann at ghanshyammann.com] Sent: Wednesday, April 27, 2022 7:32 AM To: niujie Cc: 'openstack-discuss'; 'sunny'; 'Horace Li'; 'huang.shuquan'; 'gong.yongsheng'; 'shane.wang'; 'jian-feng.ding'; 'wangshengyjy'; 'yuzhiqiang'; 'zhangxiaoguang'; xujianwl Subject: RE: [all] Resend New CFN(Computing Force Network) SIG Proposal ---- On Mon, 25 Apr 2022 23:21:32 -0500 niujie wrote ---- > Hi Ghanshyam, > > Thanks for forwarding the proposal.
> > Yes, we will figure out the exact changes (requirements) based on the exact use case and OpenStack component scope through further discussion. > As for the application migration, currently we don't have a plan for a new tooling project; we could probably start with a tool (based on the discussion), but the ultimate goal is beyond just tooling: the vision of CFN is to achieve an ecosystem for development, where any application developed on this infrastructure could be migrated to any heterogeneous platform. This may include building a compiling platform on heterogeneous infrastructure, drafting standardization for low-level code development, etc. > > You are right that CFN will not just bring changes to the existing OpenStack components but also brings potential new source code components; we don't have such a list/proposal for new components right now, and that's why we would like to raise the CFN topic here, and based on the discussion with global wisdom, we will figure out the next step. > > It is a good idea to start with a SIG; we can first start the discussion here and maybe re-evaluate as it goes. > > We have a brief CFN introduction slide, so shall I add a topic to the TC weekly meeting agenda? I am fine to discuss it in the TC meeting, but I saw updates (in an ML reply too) from Ildiko that there is some discussion going on between the foundation and your team; it seems the CFN scope is more than an OpenStack SIG and a SIG is not the right place to start with, while you think a SIG can be a good place to start (I also have no objection to that). I think to have everyone on the same page, we need to discuss it together; I am ok to have that in the TC weekly meeting including your team, the foundation, and the TC, or on a separate call. Whatever works for you and Ildiko (and other foundation staff), please let me know. In the end, it does not matter much whether we start it as a SIG or a separate infra project. Irrespective of the place, whatever you need from the OpenStack community we will be supporting/implementing as per the use case and scope.
[1] http://lists.openstack.org/pipermail/openstack-discuss/2022-April/028311.html -gmann > > Thanks > Jie Niu > > > > > > -----????----- > ???: Ghanshyam Mann [mailto:gmann at ghanshyammann.com] > ????: Tuesday, April 26, 2022 1:52 AM > ???: niujie > ??: openstack-discuss; sunny; 'Horace Li'; huang.shuquan; gong.yongsheng; shane.wang; jian-feng.ding; wangshengyjy; yuzhiqiang; zhangxiaoguang; xujianwl > ??: Re: [all] Resend New CFN(Computing Force Network) SIG Proposal > > Thanks, Niu for the proposal and sorry for the delay in response. > > I have raised this proposal to TC members and asking to check it. Overall proposal seems > interesting to me but few initial queries inline. > > > ---- On Wed, 13 Apr 2022 00:34:30 -0500 niujie wrote ---- > > > > Hi all > > I sent an email yesterday about NewCFN(Computing Force Network) SIG Proposal, I tried to recall it because therewas a typo in email address, then I get recall failed msg, so I assume the emailwas sent out successfully, and plan to keep it as it was. > > But I found that the ?recall? actionwas logged in pipermail, it might cause misunderstanding, we are sure about proposefor a new SIG, so I?m sending this again, sorry for the email flood J > > > > I'm from China Mobile, China Mobile is recently working onbuild a new information infrastructure focusing on connectivity, computingpower, and capabilities, this new information infrastructure is calledComputing Force Network, we think OpenStack community which gathers globalwisdom together is a perfect platform to discuss topics like CFN, so we areproposing to create a new SIG for CFN(Computing Force Network). Below is CFNbrief introduction and initial SIG scope. > > With the flourish of new business scenarios such as hybridcloud, multi-cloud, AI, big data processing, edge computing, building a newinformation infrastructure based on multiple key technologies that convergedcloud and network, will better support global digital transformation. 
This newinfrastructure is not only relates to cloud, it is getting more and moreconnected with network, and at the same time, we also need to consider how toconverge multiple technologies like AI, Blockchain, big data, security to providethis all-in-one service. > > Computing Force Network(CFN) is a new informationinfrastructure that based on network, focused on computing, deeply convergedArtificial intelligence, Block chain, Cloud, Data, Network, Edge computing, Endapplication, Security(ABCDNETS), providing all-in-one services. > > Xiaodong Duan, Vice president of China Mobile ResearchInstitute, introduced the vision and architecture of Computing Force Network in2021 November OpenInfra Live Keynotes by his presentation Connection +Computing + Capability Opens a New Era of Digital Infrastructure, heproposed the new era of CFN. > > We are expecting to work with OpenStack on how to buildthis new information infrastructure, and how to promote the development andimplementation of next generation infrastructure, achieve ubiquitous computingforce, computing & network convergence, intelligence orchestration,all-in-one service. Then computing force will become common utilities likewater and electric step by step, computing force will be ready for access uponuse and connected by single entry point. > > The above vision of CFN , from technical perspective, willmainly focus on unified management and orchestration of computing + networkintegrated system, computing and network deeply converged in architecture, formand protocols aspect, bringing potential changes to OpenStack components. CFNis aiming to achieve seamlessly migration of any application between anyheterogeneous platforms, it's a challenge for the industry currently, we feelthat in pursuit of CFN could potentially contributes to the development andevolution of OpenStack. > > Yes, it will require changes to OpenStack components but we will see based on the exact use case and OpenStack component scope. 
Does this include the application migration tooling in OpenStack?
 >
 > > In this CFN SIG, we will mainly focus on discussing how to build the new information infrastructure of CFN and related key technologies, and what impact the network & cloud convergence trend brings to OpenStack. The topics include but are not limited to:
 > > 1. A computing basement for unified management of containers, VMs, and bare metal
 > > 2. Computing infrastructure which eliminates the differences between heterogeneous hardware
 > > 3. Measurement criteria and scheduling schemes based on a unified computing infrastructure
 > > 4. Network solutions for SDN integrating smart NICs for data centers
 > > 5. Unified orchestration & management for "network + cloud", and a "cloud + edge + end" integrated scheduling solution
 > > We will have regular meetings to investigate and discuss business scenarios, development trends, and technical schemes; release technical documents, technical proposals, and requirements for OpenStack projects; and propose new projects when necessary.
 > > We will also collaborate with other open source projects like LFN, CNCF, and LFE to have a consistent plan across communities, and align with global standardization organizations like ETSI, 3GPP, and IETF to promote CFN-related technical schemes becoming industry standards.
 > > If you have any thoughts, interests, questions, or requirements, we can discuss them on this mailing list.
 >
 > Thanks for the detailed information about the SIG scope. From the above, I understood that it will involve not just changes to the existing OpenStack components but also new source-code components. Do you have a list/proposal for a new component, or would you like to continue discussing it and find out based on that? How are you thinking about their (new components', if any) releases: a coordinated release with OpenStack, or independent? If coordinated, then it is more than SIG scope and it might be good to add a new project.
 > By seeing the scope of this proposal (which seems very wide), I think it is not required to answer all of these now. Overall I am OK to start it as a SIG; based on discussion/progress evaluation we will learn more about new components, requirements, etc., and then we can change it from a SIG to a new project under OpenStack or other governance (based on the core/requirements/use cases it produces).
 >
 > -gmann
 >
 > > Any suggestions are welcome, and we are really hoping to hear from anyone and work with you.
 > >
 > > Jie Niu
 > > China Mobile

From fungi at yuggoth.org Wed Apr 27 13:23:57 2022
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Wed, 27 Apr 2022 13:23:57 +0000
Subject: [all] Resend New CFN(Computing Force Network) SIG Proposal
In-Reply-To: <1a1b01d859e5$f910cbc0$eb326340$@com>
References: <18061d9a035.cb6bda15115439.2847955073200685698@ghanshyammann.com> <16b901d85925$1c9505a0$55bf10e0$@com> <18068375fe1.e1468dfa198843.6304495179716775789@ghanshyammann.com> <1a1b01d859e5$f910cbc0$eb326340$@com>
Message-ID: <20220427132356.63kyjpruws2ocg4g@yuggoth.org>

On 2022-04-27 11:22:05 +0800 (+0800), niujie wrote:
[...]
> My email was being held several hours for moderator before arrived
> because too many recipients
[...]

As an aside, it's not recommended to Cc additional recipients when posting to a mailing list, since that has a tendency to result in split discussions among non-subscribers and duplicate deliveries for list subscribers. Encourage non-subscribers to participate in the discussion either by subscribing to the list or following along with the Web archive at http://lists.openstack.org/pipermail/openstack-discuss/ depending on which they find most convenient.
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL:

From gmann at ghanshyammann.com Wed Apr 27 16:40:45 2022
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Wed, 27 Apr 2022 11:40:45 -0500
Subject: [all][qa] Dropping centos-8-stream support and testing
In-Reply-To: <05a77b2d4b1f0464778927a64d8d556bc8d06565.camel@redhat.com>
References: <180680dc549.12024ba50198289.2852901730952867408@ghanshyammann.com> <20220426232448.y2bfwzkxe73xwuux@yuggoth.org> <18068429ce8.d8c36c0b198965.1273654656450168204@ghanshyammann.com> <05a77b2d4b1f0464778927a64d8d556bc8d06565.camel@redhat.com>
Message-ID: <1806be58561.1040fd115263101.965693504434277369@ghanshyammann.com>

 ---- On Tue, 26 Apr 2022 19:15:41 -0500 Sean Mooney wrote ----
 > On Tue, 2022-04-26 at 18:43 -0500, Ghanshyam Mann wrote:
 > > ---- On Tue, 26 Apr 2022 18:24:49 -0500 Jeremy Stanley wrote ----
 > > > On 2022-04-26 17:46:14 -0500 (-0500), Ghanshyam Mann wrote:
 > > > > As you know, in the Zed cycle we are targeting centos-9-stream in the testing runtime[1] and dropping py3.6 support.
 > > > [...]
 > > >
 > > > Just a reminder, RHEL 9 is still only in beta and I've seen no indication it will necessarily be released by September, so there's every chance Zed will not be usable on RHEL at release.
 >
 > At least from a Red Hat product perspective, Red Hat does not support installing upstream OpenStack on RHEL, so those choosing to install OpenStack on RHEL outside of our productised version would be doing so without support from Red Hat. As such, I don't think the status of RHEL 9 is really a concern here.
 > Red Hat is currently working on the next major version of Red Hat's OpenStack Platform (OSP 17), which will be based on RHEL 9 and will be released before the Zed cycle completes. It will not be based on Zed; Zed will likely form the basis of OSP 18, which will not be released this year.
 > From a RHEL 8 perspective, Python 3.8 was never intended for production deployment; Python 3.6 was fully productised, other interpreters were only provided for development and testing, and 3.6 was the only fully supported production runtime.
 > > > I know that came up as a reason to keep 3.6 testing the last time we tried to remove it (for Yoga), so figured it's worth pointing out again.
 > >
 > > Yes, that is a separate discussion, and we already discussed dropping py3.6 a lot in the Yoga cycle. We said at that time that we would remove the support in the Zed cycle, and accordingly the testing runtime is defined and updated on the ML as well. I did not see any objection to that, at least during the two months since we dropped the testing.
 > >
 > > This discussion is about whether we want to continue testing centos-8-stream even though we dropped py3.6 support, which can be done with Python 3.8.
 >
 > I really don't think that adds significant value. If we look at the Red Hat downstream platform, then stable Wallaby and Zed will both run on Python 3.9 on RHEL 9.x; testing with 3.8 won't align with what will be productised. So I think it would be better to focus on testing RDO with CentOS 9 Stream only for master/Zed rather than investing effort into 3.8.
 > For what it's worth, I think CentOS 8 with 3.8 actually does work today; I'm pretty sure I have deployed with that combination in the past. I just don't really see that adding a lot of value vs running a CentOS 9 Stream job with 3.9.

Agree, that is why in QA we decided to drop the c8s testing and concentrate more on c9s testing.

-gmann

 > Ubuntu 20.04 provides 3.8 coverage, and the rest is distro/packaging related, so unless RDO or one of the other installers specifically asks for centos-8-stream I don't think we need to keep the nodeset for master.
 > We do still require it for stable branches, but for installers like devstack I think it's long past time that we pushed people to move to CentOS 9 Stream instead,
or any other recent release of your preferred distro.
 > After all, upstream we do not test RHEL and never have; we use CentOS as a proxy, and CentOS 9 Stream is fully released and ready to use.
 > >
 > > -gmann
 > > > --
 > > > Jeremy Stanley

From gmann at ghanshyammann.com Wed Apr 27 16:51:05 2022
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Wed, 27 Apr 2022 11:51:05 -0500
Subject: [all] Resend New CFN(Computing Force Network) SIG Proposal
In-Reply-To: <20220427132356.63kyjpruws2ocg4g@yuggoth.org>
References: <18061d9a035.cb6bda15115439.2847955073200685698@ghanshyammann.com> <16b901d85925$1c9505a0$55bf10e0$@com> <18068375fe1.e1468dfa198843.6304495179716775789@ghanshyammann.com> <1a1b01d859e5$f910cbc0$eb326340$@com> <20220427132356.63kyjpruws2ocg4g@yuggoth.org>
Message-ID: <1806beef8de.12b47d4b6263658.6839269630339396937@ghanshyammann.com>

 ---- On Wed, 27 Apr 2022 08:23:57 -0500 Jeremy Stanley wrote ----
 > On 2022-04-27 11:22:05 +0800 (+0800), niujie wrote:
 > [...]
 > > My email was being held several hours for moderator before arrived because too many recipients
 > [...]
 >
 > As an aside, it's not recommended to Cc additional recipients when posting to a mailing list, since that has a tendency to result in split discussions among non-subscribers and duplicate deliveries for list subscribers. Encourage non-subscribers to participate in the discussion either by subscribing to the list or following along with the Web archive at http://lists.openstack.org/pipermail/openstack-discuss/ depending on which they find most convenient.

Well, that is what I raised a concern about in the TC channel when my reply went to approval first. It can make these threads asynchronous, since replying and seeing other replies depends on when your email is approved by the moderator. We cannot expect a moderator to keep eyes on this ML thread every minute.
I still stand by my proposal to relax this "more recipients" rule, at least raising the recipient-number threshold to 20 or so. That will definitely benefit new threads from new people.

The current rule is causing more issues than benefits, at least with the current limit on the number of recipients.

-gmann

 > --
 > Jeremy Stanley

From fungi at yuggoth.org Wed Apr 27 17:02:38 2022
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Wed, 27 Apr 2022 17:02:38 +0000
Subject: [all] Mailman recipients limit (was: Resend New CFN...)
In-Reply-To: <1806beef8de.12b47d4b6263658.6839269630339396937@ghanshyammann.com>
References: <18061d9a035.cb6bda15115439.2847955073200685698@ghanshyammann.com> <16b901d85925$1c9505a0$55bf10e0$@com> <18068375fe1.e1468dfa198843.6304495179716775789@ghanshyammann.com> <1a1b01d859e5$f910cbc0$eb326340$@com> <20220427132356.63kyjpruws2ocg4g@yuggoth.org> <1806beef8de.12b47d4b6263658.6839269630339396937@ghanshyammann.com>
Message-ID: <20220427170238.znzakvtu6kfvg2zz@yuggoth.org>

On 2022-04-27 11:51:05 -0500 (-0500), Ghanshyam Mann wrote:
[...]
> Well, that is what I raised a concern about in the TC channel when my reply went to approval first. It can make these threads asynchronous, since replying and seeing other replies depends on when your email is approved by the moderator. We cannot expect a moderator to keep eyes on this ML thread every minute.
>
> I still stand by my proposal to relax this "more recipients" rule, at least raising the recipient-number threshold to 20 or so. That will definitely benefit new threads from new people.
>
> The current rule is causing more issues than benefits, at least with the current limit on the number of recipients.

I already approve numerous messages out of the moderation queue for this list every day for other reasons (and discard orders of magnitude more caught spam), so the recipients limit check is not increasing my workload as a list moderator.
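For context, assuming this list still runs on Mailman 2.1 (which the pipermail archive suggests), the ceiling under discussion would be the per-list `max_num_recipients` setting (default 10), which a site admin could change with an input file for `bin/config_list -i`; a sketch:

```
# Hypothetical input fragment for Mailman 2.1's bin/config_list -i <file>.
# max_num_recipients is the ceiling on the acceptable number of explicit
# recipients for a posting; 0 disables the check entirely.
max_num_recipients = 20
```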
If people generally feel that having extra recipients besides the list address is acceptable, then I would just remove the check rather than raising it (10 recipients is already a lot). What *will* increase my workload is manually replying to people every time I see a massive recipient count in a list message in order to refer them to the https://wiki.openstack.org/wiki/MailingListEtiquette#Keep_Discussions_On-List section of our netiquette guidelines. Right now the automated check provides a signal without any additional work on my part.
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL:

From gmann at ghanshyammann.com Wed Apr 27 16:55:56 2022
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Wed, 27 Apr 2022 11:55:56 -0500
Subject: [all] Resend New CFN(Computing Force Network) SIG Proposal
In-Reply-To: <1a1b01d859e5$f910cbc0$eb326340$@com>
References: <18061d9a035.cb6bda15115439.2847955073200685698@ghanshyammann.com> <16b901d85925$1c9505a0$55bf10e0$@com> <18068375fe1.e1468dfa198843.6304495179716775789@ghanshyammann.com> <1a1b01d859e5$f910cbc0$eb326340$@com>
Message-ID: <1806bf36d35.cbe9fdb1263946.4349444633878513122@ghanshyammann.com>

 ---- On Tue, 26 Apr 2022 22:22:05 -0500 niujie wrote ----
 > Thanks Ghanshyam,
 >
 > Actually I saw Ildiko's email after my email was sent out :)
 > My email was held several hours for the moderator before arriving because of too many recipients; sorry for my expression about the SIG and the confusion it might have caused.
 >
 > For the new group proposal, we were not very clear on what kind of group is appropriate; a SIG is the only option we know :). Right now, we don't want to disobey any of the community's guidance, so it seems better that we do not continue the SIG or working-group proposal discussion until we get a recommendation from the community.
 > So we should probably wait for guidance before making the next step (or group discussion); is that OK?

No problem, Jie, and thanks for the detailed plan. As you are in discussion with the foundation on a possible place for CFN, I will say you can continue that, and if you or the foundation need/think we need some place in the OpenStack community to add it or some component of it, then we can discuss it in the TC. Feel free to add it to the TC weekly meeting agenda anytime, or send me an email and I can add it:

- https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting

-gmann

 > As for the CFN topic, if anyone is interested and wants to know more about what CFN is, we have an introduction slide; it is the same one we shared with the foundation. If you think it's good to share it in the TC weekly meeting or any other form of sharing session, please tell us and we can share it.
 >
 > Thank you!
 > Jie
 >
 > -----Original Message-----
 > From: Ghanshyam Mann [mailto:gmann at ghanshyammann.com]
 > Sent: Wednesday, April 27, 2022 7:32 AM
 > To: niujie
 > Cc: 'openstack-discuss'; 'sunny'; 'Horace Li'; 'huang.shuquan'; 'gong.yongsheng'; 'shane.wang'; 'jian-feng.ding'; 'wangshengyjy'; 'yuzhiqiang'; 'zhangxiaoguang'; xujianwl
 > Subject: RE: [all] Resend New CFN(Computing Force Network) SIG Proposal
 >
 > ---- On Mon, 25 Apr 2022 23:21:32 -0500 niujie wrote ----
 > > Hi Ghanshyam,
 > >
 > > Thanks for forwarding the proposal.
 > >
 > > Yes, we will figure out the exact changes (requirements) based on the exact use case and OpenStack component scope through further discussion.
 > > As for the application migration, we currently don't have a plan for a new tooling project. We could probably start with a tool (based on the discussion), but the ultimate goal is beyond just tooling: the vision of CFN is to achieve an ecosystem for development, where any application developed on this infrastructure could be migrated to any heterogeneous platform.
This may include building a compilation platform on heterogeneous infrastructure, drafting standardization for low-level code development, etc.
 > >
 > > You are right that CFN will not just bring changes to the existing OpenStack components but also potential new source-code components. We don't have such a list/proposal for new components right now; that's why we would like to raise the CFN topic here, and based on the discussion with global wisdom, we will figure out the next step.
 > >
 > > It is a good idea to start with a SIG; we can first start the discussion here and maybe re-evaluate as it goes.
 > >
 > > We have a brief CFN introduction slide; shall I add a topic to the TC weekly meeting agenda?
 >
 > I am fine to discuss it in the TC meeting, but I saw updates (in an ML reply too) from Ildiko that there is some discussion going on between the foundation and your team; it seems the CFN scope is more than an OpenStack SIG and a SIG is not the right place to start with, while you think a SIG can be a good place to start (I also have no objection to that).
 >
 > I think, to have everyone on the same page, we need to discuss it together. I am OK to have that in the TC weekly meeting including your team, the foundation, and the TC, or in a separate call. Whatever works for you and Ildiko (and other foundation staff), please let me know.
 >
 > In the end, it does not matter much whether we start it as a SIG or a separate infra project. Irrespective of the place, whatever you need from the OpenStack community we will be supporting/implementing as per the use case and scope.
 > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2022-April/028311.html
 >
 > -gmann
 >
 > > Thanks
 > > Jie Niu
 > >
 > > -----Original Message-----
 > > From: Ghanshyam Mann [mailto:gmann at ghanshyammann.com]
 > > Sent: Tuesday, April 26, 2022 1:52 AM
 > > To: niujie
 > > Cc: openstack-discuss; sunny; 'Horace Li'; huang.shuquan; gong.yongsheng; shane.wang; jian-feng.ding; wangshengyjy; yuzhiqiang; zhangxiaoguang; xujianwl
 > > Subject: Re: [all] Resend New CFN(Computing Force Network) SIG Proposal
 > >
 > > Thanks, Niu, for the proposal, and sorry for the delay in response.
 > >
 > > I have raised this proposal to the TC members and asked them to check it. Overall the proposal seems interesting to me, but a few initial queries are inline.
 > >
 > > ---- On Wed, 13 Apr 2022 00:34:30 -0500 niujie wrote ----
 > > > [...]
 > >
 > > Thanks for the detailed information about the SIG scope.
From the above, I understood that it will involve not just changes to the existing OpenStack components but also new source-code components. Do you have a list/proposal for a new component, or would you like to continue discussing it and find out based on that? How are you thinking about their (new components', if any) releases: a coordinated release with OpenStack, or independent? If coordinated, then it is more than SIG scope and it might be good to add a new project.
 > >
 > > By seeing the scope of this proposal (which seems very wide), I think it is not required to answer all of these now. Overall I am OK to start it as a SIG; based on discussion/progress evaluation we will learn more about new components, requirements, etc., and then we can change it from a SIG to a new project under OpenStack or other governance (based on the core/requirements/use cases it produces).
 > >
 > > -gmann
 > >
 > > > Any suggestions are welcome, and we are really hoping to hear from anyone and work with you.
 > > >
 > > > Jie Niu
 > > > China Mobile

From gagehugo at gmail.com Wed Apr 27 20:02:31 2022
From: gagehugo at gmail.com (Gage Hugo)
Date: Wed, 27 Apr 2022 15:02:31 -0500
Subject: [openstack-helm] gnocchi chart removal
Message-ID:

Hey team,

One of the topics from the last few PTGs has been reducing the scope of openstack-helm by evaluating which charts are being used and which ones have not been actively maintained, and one of the charts that has been discussed is gnocchi. Besides a change to tolerations that was applied to most of the charts in the repo, the gnocchi chart has not seen much activity, and there wasn't any immediate interest in maintaining it, so the current plan is to remove it from the repo.

If you are someone who is actively using the gnocchi chart and would be interested in helping to maintain it, please let me know!
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gmann at ghanshyammann.com Wed Apr 27 23:11:56 2022
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Wed, 27 Apr 2022 18:11:56 -0500
Subject: [all][tc] Technical Committee next weekly meeting on April 28, 2022 at 1500 UTC
In-Reply-To: <1806243e9b4.dcfd01f3118929.1373688832929150163@ghanshyammann.com>
References: <1806243e9b4.dcfd01f3118929.1373688832929150163@ghanshyammann.com>
Message-ID: <1806d4ba766.cec6d3b6274612.5369133567223234904@ghanshyammann.com>

Hello Everyone,

Below is the agenda for tomorrow's TC IRC meeting, scheduled at 1500 UTC.

https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting

== Agenda for tomorrow's TC meeting ==

* Roll call
* Follow up on past action items
* Gate health check
** Fixing Zuul config error in OpenStack
*** https://etherpad.opendev.org/p/zuul-config-error-openstack
* Retiring the status.openstack.org server
** http://lists.openstack.org/pipermail/openstack-discuss/2022-April/028279.html
* Communicating the new ELK service dashboard and login information
** https://opensearch.logs.openstack.org/_dashboards/app/discover?security_tenant=global
** https://review.opendev.org/c/openstack/governance-sigs/+/835838
* Open Reviews
** https://review.opendev.org/q/projects:openstack/governance+is:open

-gmann

 ---- On Mon, 25 Apr 2022 14:47:39 -0500 Ghanshyam Mann wrote ----
 > Hello Everyone,
 >
 > Technical Committee's next weekly meeting is scheduled for April 28, 2022 at 1500 UTC.
 >
 > If you would like to add topics for discussion, please add them to the below wiki page by Wednesday, April 27, at 2100 UTC.
 >
 > https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting
 >
 > -gmann

From anlin.kong at gmail.com Thu Apr 28 04:26:35 2022
From: anlin.kong at gmail.com (Lingxian Kong)
Date: Thu, 28 Apr 2022 16:26:35 +1200
Subject: Openstack Trove - Polling request timed out
In-Reply-To: References: Message-ID:

Although I'm not contributing to the Trove project anymore, I can give you some hints on this issue.

The timeout is usually caused by an error in the trove-guestagent inside the Trove instance. You need to log into the instance and check the trove-guestagent log; you can follow this troubleshooting guide: https://docs.openstack.org/trove/latest/admin/troubleshooting.html#ssh-into-the-instance

Regards,
Lingxian Kong

On Sat, Apr 23, 2022 at 3:11 AM Manish Bharti wrote:
> Dear Team,
>
> We are trying to deploy an OpenStack environment for our application, and while deploying the Trove service we are facing the below error (attached screenshot):
>
> Traceback (most recent call last):
> File "/usr/lib/python3/dist-packages/trove/common/utils.py", line 207, in wait_for_task
> return polling_task.wait()
> File "/usr/lib/python3/dist-packages/eventlet/event.py", line 125, in wait
> result = hub.switch()
> File "/usr/lib/python3/dist-packages/eventlet/hubs/hub.py", line 313, in switch
> return self.greenlet.switch()
> File "/usr/lib/python3/dist-packages/oslo_service/loopingcall.py", line 154, in _run_loop
> idle = idle_for_func(result, self._elapsed(watch))
> File "/usr/lib/python3/dist-packages/oslo_service/loopingcall.py", line 349, in _idle_for
> raise LoopingCallTimeOut(
> oslo_service.loopingcall.LoopingCallTimeOut:
> Looping call timed out after 870.99 seconds
>
> During handling of the above exception, another exception occurred:
>
> Traceback (most recent call last):
> File "/usr/lib/python3/dist-packages/trove/taskmanager/models.py", line 434, in wait_for_instance
> utils.poll_until(self._service_is_active,
> File
"/usr/lib/python3/dist-packages/trove/common/utils.py", line 223, in poll_until > return wait_for_task(task) > File "/usr/lib/python3/dist-packages/trove/common/utils.py", line 209, in wait_for_task > raise exception.PollTimeOut > trove.common.exception.PollTimeOut: Polling request timed out. > > > Please help us on this issue. > > -- > Thank you , > Manish Bharti > Jodhpur, Raj - 342001. > Contact:8875033000 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vishwanath.ne at gmail.com Thu Apr 28 04:27:41 2022 From: vishwanath.ne at gmail.com (Vishwanath) Date: Wed, 27 Apr 2022 21:27:41 -0700 Subject: [Browbeat]performance/load testing Message-ID: <1B2CA64C-B37A-4F57-AB42-212C4087D9AD@gmail.com> Hello, I installed openstack using kolla-ansible, i would like to do load/performance testing, i came across below openstack page. https://docs.openstack.org/self-healing-sig/latest/testing/tools-list.html Browbeat looks like a perfect fit for what i want to do.I read browbeat documentation . Questions i have is: - as per the browbeat documentation, in order to install this we need to have openstack installation via tripleo (undercloud/overcloud) , does browbeat work with non triiplo openstack deployment? - I don?t have undercloud/overcloud setup. Regards Vish -------------- next part -------------- An HTML attachment was scrubbed... URL: From sigurd.k.brinch at uia.no Thu Apr 28 07:48:18 2022 From: sigurd.k.brinch at uia.no (Sigurd Kristian Brinch) Date: Thu, 28 Apr 2022 07:48:18 +0000 Subject: Nova support for multiple vGPUs? In-Reply-To: References: Message-ID: Many thanks to both Sean and Sylvain for clarifying this, I'll look into your suggestions and test them :-) BR Sigurd From: Sylvain Bauza Sent: Thursday, April 21, 2022 12:57 To: Sean Mooney Cc: Sigurd Kristian Brinch ; openstack-discuss at lists.openstack.org Subject: Re: Nova support for multiple vGPUs? ? Le?jeu. 21 avr. 
2022 ??12:26, Sean Mooney a ?crit?: On Wed, 2022-04-20 at 16:42 +0000, Sigurd Kristian Brinch wrote: > Hi, > As far as I can tell, libvirt/KVM supports multiple vGPUs per VM > (https://docs.nvidia.com/grid/14.0/grid-vgpu-release-notes-generic-linux-kvm/index.html#multiple-vgpu-support), > but in OpenStack/Nova it is limited to one vGPU per VM > (https://docs.openstack.org/nova/latest/admin/virtual-gpu.html#configure-a-flavor-controller) > Is there a reason for this limit? yes nvidia > What would be needed to enable multiple vGPUs in Nova? so you can technically do it today if you have 2 vGPU for seperate physical gpu cards but nvidia do not support multiple vGPUs form the same card. nova does not currently provide a way to force the gpu allocation to be from seperate cards. well thats not quite true you could you would have to use the named group syntax to request them so instaed of resources:vgpu=2 you woudl do? resources_first_gpu_group:VGPU=1? resources_second_gpu_group:VGPU=1 group_policy=isolate the name after resouces_ is arbitray group name provided it conforms to this regex '([a-zA-Z0-9_-]{1,64})?' we stongly dislike this approch. first of all using group_policy=isolate is a gloabl thing meaning that no request groups can come form the same provider that means you can not have to sriov VFs from the same physical nic as a result of setting it. if you dont set group_policy the default is none which means you no longer are guarenteed that they will come form different providres so what you woudl need to do is extend placment to support isolating only sepeicic named groups and then expose that in nova via flavor extra specs which is not particaly good ux as it rather complicated and means you need to understand how placement works in depth. placement shoudl really be an implemenation detail i.e. resources_first_gpu_group:VGPU=1 resources_second_gpu_group:VGPU=1 group_isolate=first_grpu_group,second_gpu_group;... 
That fixes the conflict with SR-IOV and all other usages of resource groups, like bandwidth-based QoS. The slightly better approach would be to make this simpler to use by doing something like

resources:vgpu=2
vgpu:gpu_selection_policy=isolate

We would still need the placement feature to isolate by group, but we can hide the detail from the end user with a pre-filter in Nova (https://github.com/openstack/nova/blob/eedbff38599addd4574084edac8b111c4e1f244a/nova/scheduler/request_filter.py) which will transform the resource request and split it up into groups automatically.

This is a long way of saying that if it were not for limitations in the IOMMU on nvidia GPUs, and the fact that they cannot map two vGPUs from one physical GPU to a single VM, this would already work out of the box with just resources:vgpu=2. Perhaps when Intel launch their discrete datacenter GPUs, their vGPU implementation will not have this limitation. We do not prevent you from requesting 2 vGPUs today; it will just fail when QEMU tries to use them. We also have not put the effort into working around the limitation in nvidia's hardware, since their drivers also used to block this until the Ampere generation, and there has not been a large request from users to support multiple vGPUs; occasionally someone will ask about it, but in general people either do full GPU passthrough or use a single vGPU instance.

Correct, that's why we have had this bug report open for a while, but we don't really want to fix it for only one vendor.

Hopefully that will help. You can try the first approach today if you have more than one physical GPU per host, e.g.

resources_first_gpu_group:VGPU=1
resources_second_gpu_group:VGPU=1
group_policy=isolate

Just be aware of the limitation of group_policy=isolate.

Thanks, Sean, for explaining how to use a workaround.
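To make the named-group workaround above more concrete, here is a small illustrative sketch (not Nova code; the helper function and group names are made up for illustration) that builds the flavor extra specs Sean describes, checking each group-name suffix against a simplified form of the quoted regex:

```python
import re

# Simplified form of the suffix rule Sean quotes for named request
# groups (resources_<suffix>:VGPU=1): 1-64 chars of [a-zA-Z0-9_-].
GROUP_SUFFIX = re.compile(r'^[a-zA-Z0-9_-]{1,64}$')

def vgpu_extra_specs(group_names):
    """Build flavor extra specs requesting one VGPU per named group.

    group_policy=isolate is global: every named group in the flavor
    must be satisfied by a different resource provider, which is the
    drawback noted above (it also forces e.g. SR-IOV VFs apart).
    """
    specs = {}
    for name in group_names:
        if not GROUP_SUFFIX.match(name):
            raise ValueError('invalid request group suffix: %r' % name)
        specs['resources_%s:VGPU' % name] = '1'
    specs['group_policy'] = 'isolate'
    return specs

specs = vgpu_extra_specs(['first_gpu_group', 'second_gpu_group'])
# Each key/value pair could then be applied to a flavor with:
#   openstack flavor set <flavor> --property <key>=<value>
print(specs)
```

The point of the sketch is just that each vGPU gets its own arbitrarily named request group, and the single global group_policy key is what forces them onto separate physical GPUs.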
Regards,
Sean

>
> BR
> Sigurd

From niujie at chinamobile.com  Thu Apr 28 10:15:02 2022
From: niujie at chinamobile.com (niujie)
Date: Thu, 28 Apr 2022 18:15:02 +0800
Subject: [all] Resend New CFN(Computing Force Network) SIG Proposal
In-Reply-To: <1806bf36d35.cbe9fdb1263946.4349444633878513122@ghanshyammann.com>
References: <18061d9a035.cb6bda15115439.2847955073200685698@ghanshyammann.com> <16b901d85925$1c9505a0$55bf10e0$@com> <18068375fe1.e1468dfa198843.6304495179716775789@ghanshyammann.com> <1a1b01d859e5$f910cbc0$eb326340$@com> <1806bf36d35.cbe9fdb1263946.4349444633878513122@ghanshyammann.com>
Message-ID: <1e8d01d85ae8$d35c8c80$7a15a580$@com>

We would like to make a brief introduction of CFN to the community. We will not discuss any kind of working-group application; we will purely share the CFN concept, vision and related technologies, and hopefully get more people interested in this topic.
Do you think the TC meeting is a good place to start? If so, we are OK with next week's TC meeting, and I can add it to the agenda.

Thanks
Jie Niu

-----Original Message-----
From: Ghanshyam Mann [mailto:gmann at ghanshyammann.com]
Sent: Thursday, April 28, 2022 12:56 AM
To: niujie
Cc: 'openstack-discuss'; 'sunny'; 'Horace Li'; 'huang.shuquan'; 'gong.yongsheng'; 'shane.wang'; 'jian-feng.ding'; 'wangshengyjy'; 'yuzhiqiang'; 'zhangxiaoguang'; 'xujianwl'
Subject: RE: [all] Resend New CFN(Computing Force Network) SIG Proposal

---- On Tue, 26 Apr 2022 22:22:05 -0500 niujie wrote ----
> Thanks Ghanshyam,
>
> Actually I saw Ildiko's email after my email was sent out :)
> My email was held for several hours for moderation before it arrived because of too many recipients; sorry for my expression about SIG and the confusion it might cause.
> > For the new group proposal, we were not every clear of what kind of group is appropriate, SIG is the only option we know :), right now, we don't want to disobey any of the community's guidance, seems it's better we do not continue SIG or working-group proposal discussion until we got recommendation from the community. So we probably wait guidance before making next step(or group discussion), is it OK? No problem Jie, and thanks for the detail plan. As you are in discussion with foundation on possible place for CFN, I will say you can continue that and if you or foundation need/think we need some place in OpenStack community to add it or some component of it then we can discuss it in TC. Feel free to add it in TC weekly meeting agenda anytime or send me an email and i can add that- - https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting -gmann > > As for the CFN topic, if anyone is interested and want to know more about what is CFN, we have a introduction slide, it is the same one we shared to the foundation. If you think it's good to share it on TC weekly meeting or any other form of share session, please tell us and we can share it. > > Thank you! > Jie > > > > -----????----- > ???: Ghanshyam Mann [mailto:gmann at ghanshyammann.com] > ????: Wednesday, April 27, 2022 7:32 AM > ???: niujie > ??: 'openstack-discuss'; 'sunny'; 'Horace Li'; 'huang.shuquan'; 'gong.yongsheng'; 'shane.wang'; 'jian-feng.ding'; 'wangshengyjy'; 'yuzhiqiang'; 'zhangxiaoguang'; xujianwl > ??: RE: [all] Resend New CFN(Computing Force Network) SIG Proposal > > ---- On Mon, 25 Apr 2022 23:21:32 -0500 niujie wrote ---- > > Hi Ghanshyam, > > > > Thanks for forwarding the proposal. > > > > Yes, we will figure out exact changes(requirements) based on the exact use case and OpenStack component scope by further discussion. 
> > As for the application migration, currently we don?t have plan for new project of tooling, we could probably start with tool(based on the discussion), but the ultimate goal is beyond just tooling, the vision of CFN is to achieve ecosystem for development, any application developed on this infrastructure could be migrated to any heterogeneous platforms. This may include build compiling platform on heterogeneous infrastructure, draft standardization for low-level code development, etc. > > > > You are right about CFN will not just bring changes to the OpenStack existing components, but also brings potential new source code components, we don't have such list/proposal for new component right now, that's why we would like to raise the CFN topic here, and based on the discussion with global wisdoms, we will figure out the next step. > > > > It is a good idea to start with a SIG, we can firstly start discussion here, and maybe re-evaluate as it goes. > > > > We have a brief CFN introduction slide, and shall I add a topic in TC weekly meeting agenda? > > I am fine to discuss it in TC meeting but I saw updates (ML reply too) from Ildiko that there is some discussion going on > between foundation and your team and it seems CFN scope is more than OpenStack SIG and SIG is not right place to start > with and you think SIG can be good place to start (I also have no objection on that). > > I think to have everyone on the same page, we need to discuss it together, I am ok to have that in TC weekly meeting > including your team, foundation, and TC or a separate call. Whatever works for you and Ildiko (other foundation staff), > please let me know. > > At the end, it does not matter much where we start it as SIG or separate infra project. Irrespective of the place, whatever > you need from OpenStack community we will be supporting/implementing that as per use case and scope. 
> > [1] http://lists.openstack.org/pipermail/openstack-discuss/2022-April/028311.html > > -gmann > > > > > Thanks > > Jie Niu > > > > > > > > > > > > -----????----- > > ???: Ghanshyam Mann [mailto:gmann at ghanshyammann.com] > > ????: Tuesday, April 26, 2022 1:52 AM > > ???: niujie > > ??: openstack-discuss; sunny; 'Horace Li'; huang.shuquan; gong.yongsheng; shane.wang; jian-feng.ding; wangshengyjy; yuzhiqiang; zhangxiaoguang; xujianwl > > ??: Re: [all] Resend New CFN(Computing Force Network) SIG Proposal > > > > Thanks, Niu for the proposal and sorry for the delay in response. > > > > I have raised this proposal to TC members and asking to check it. Overall proposal seems > > interesting to me but few initial queries inline. > > > > > > ---- On Wed, 13 Apr 2022 00:34:30 -0500 niujie wrote ---- > > > > > > Hi all > > > I sent an email yesterday about NewCFN(Computing Force Network) SIG Proposal, I tried to recall it because therewas a typo in email address, then I get recall failed msg, so I assume the emailwas sent out successfully, and plan to keep it as it was. > > > But I found that the ?recall? actionwas logged in pipermail, it might cause misunderstanding, we are sure about proposefor a new SIG, so I?m sending this again, sorry for the email flood J > > > > > > I'm from China Mobile, China Mobile is recently working onbuild a new information infrastructure focusing on connectivity, computingpower, and capabilities, this new information infrastructure is calledComputing Force Network, we think OpenStack community which gathers globalwisdom together is a perfect platform to discuss topics like CFN, so we areproposing to create a new SIG for CFN(Computing Force Network). Below is CFNbrief introduction and initial SIG scope. 
> > > With the flourish of new business scenarios such as hybridcloud, multi-cloud, AI, big data processing, edge computing, building a newinformation infrastructure based on multiple key technologies that convergedcloud and network, will better support global digital transformation. This newinfrastructure is not only relates to cloud, it is getting more and moreconnected with network, and at the same time, we also need to consider how toconverge multiple technologies like AI, Blockchain, big data, security to providethis all-in-one service. > > > Computing Force Network(CFN) is a new informationinfrastructure that based on network, focused on computing, deeply convergedArtificial intelligence, Block chain, Cloud, Data, Network, Edge computing, Endapplication, Security(ABCDNETS), providing all-in-one services. > > > Xiaodong Duan, Vice president of China Mobile ResearchInstitute, introduced the vision and architecture of Computing Force Network in2021 November OpenInfra Live Keynotes by his presentation Connection +Computing + Capability Opens a New Era of Digital Infrastructure, heproposed the new era of CFN. > > > We are expecting to work with OpenStack on how to buildthis new information infrastructure, and how to promote the development andimplementation of next generation infrastructure, achieve ubiquitous computingforce, computing & network convergence, intelligence orchestration,all-in-one service. Then computing force will become common utilities likewater and electric step by step, computing force will be ready for access uponuse and connected by single entry point. > > > The above vision of CFN , from technical perspective, willmainly focus on unified management and orchestration of computing + networkintegrated system, computing and network deeply converged in architecture, formand protocols aspect, bringing potential changes to OpenStack components. 
CFNis aiming to achieve seamlessly migration of any application between anyheterogeneous platforms, it's a challenge for the industry currently, we feelthat in pursuit of CFN could potentially contributes to the development andevolution of OpenStack. > > > > Yes, it will require changes to OpenStack components but we will see based on the exact use case and OpenStack component scope. Is this include the application migration tooling in OpenStack? > > > > > > > In this CFN SIG, we will mainly focus on discussing how tobuild the new information infrastructure of CFN, related key technologies, andwhat's the impact on OpenStack brought by the network & could convergencetrend , the topics are including but not limited to: > > > 1, Acomputing basement for unified management of container, VM and Bare Metal > > > 2,Computing infrastructure which eliminated the difference between heterogeneoushardware > > > 3,Measurement criteria and scheduling scheme based on unified computinginfrastructure > > > 4,Network solutions for SDN integrating smart NIC for data center > > > 5,Unified orchestration & management for "network + cloud", and"cloud + edge + end" integrated scheduling solution > > > We will have regular meetings to investigate and discussbusiness scenarios, development trend, technical scheme, release technicaldocuments, technical proposal and requirements for OpenStack Projects, andpropose new project when necessary. > > > We will also collaborate with other open source projectslike LFN, CNCF, LFE, to have a consistent plan across communities, and alignwith global standardization organization like ETSI, 3GPP, IETF, to promote CFNrelated technical scheme become the standard in industry. > > > If you have any thoughts, interests, questions,requirements, we can discuss by this mailing list. > > > > Thanks for the detailed information about the SIG scope. 
From the above, I understood that it will not be just changed to the OpenStack existing > > component but also new source code components also, do you have such list/proposal for a new component or you would like to continue discussing > > it and based on that you will get to know. How you are thinking about their (new component if any) releases like a coordinated > > release with OpenStack or independent. If coordinated then it is more than SIG scope and might be good to add a new project. > > > > By seeing the scope of this proposal (which seems very wider), I think it is not required to answer all of them now. Overall I am ok to start > > it as SIG and based on discussion/progress evaluation we will get to know more about new components, requirements etc and then we can > > change it from SIG to a new project under OpenStack or other governance (based on the core/requirement/use case it produces). > > > > -gmann > > > > > Any suggestions are welcomed, and we are really hoping to hear from anyone, and work with you. > > > > > > Jie Niu > > > China Mobile

From dpawlik at redhat.com  Thu Apr 28 14:13:18 2022
From: dpawlik at redhat.com (Daniel Pawlik)
Date: Thu, 28 Apr 2022 16:13:18 +0200
Subject: ELK services moving to OpenSearch
Message-ID:

Hello,

We would like to announce the new Elasticsearch service based on OpenSearch. As mentioned earlier [1], the old service logstash.openstack.org is deprecated and will be decommissioned in the near future.

As before, the results of each Zuul CI job will be sent to the Elasticsearch service. With the new CI log processing system [2], we decided to add more logs to be processed [3]. If some important files are missing, please let me know.
To use this new service, please check out the documentation [4][5] or use the following credentials:

url: https://opensearch.logs.openstack.org/_dashboards/app/discover?security_tenant=global
username: openstack
password: openstack
tenant: global

I would also like to take this opportunity to ask for help with the OpenStack ci-log-processing project [6], which provides the configuration of the OpenSearch service as well as the tools required to process CI logs and push them into the OpenSearch service.

Cheers,
Dan

[1] https://lists.openstack.org/pipermail/openstack-discuss/2021-May/022359.html
[2] https://lists.openstack.org/pipermail/openstack-discuss/2022-February/027367.html
[3] https://opendev.org/openstack/ci-log-processing/src/branch/master/logscraper/config.yaml.sample
[4] https://docs.openstack.org/project-team-guide/testing.html#checking-status-of-other-job-results
[5] https://governance.openstack.org/sigs/tact-sig.html#opensearch
[6] https://opendev.org/openstack/ci-log-processing
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lokendrarathour at gmail.com  Thu Apr 28 16:50:43 2022
From: lokendrarathour at gmail.com (Lokendra Rathour)
Date: Thu, 28 Apr 2022 22:20:43 +0530
Subject: [Openstack Triple0 Wallaby] Deployment error
In-Reply-To:
References:
Message-ID:

Hi Team,
Any support here, please?

On Wed, 27 Apr 2022, 13:01 Lokendra Rathour, wrote:
> Hi Team,
> We tried an OpenStack deployment using TripleO Wallaby.
>
> While we were trying to deploy the setup, the undercloud was deployed
> successfully.
> For the Overcloud Deployment: > we generated the templates: > Command: > ./usr/share/openstack-tripleo-heat-templates/tools/process-templates.py -o > ~/openstack-tripleo-heat-templates-rendered_at_wallaby -n > /home/stack/templates/network_data.yaml -r > /home/stack/templates/roles_data.yaml > > and using the below command to deploy overcloud: > > openstack overcloud deploy --templates \ > -n /home/stack/templates/network_data.yaml \ > -r /home/stack/templates/roles_data.yaml \ > -e /home/stack/templates/environment.yaml \ > -e /home/stack/templates/environments/network-isolation.yaml \ > -e /home/stack/templates/environments/network-environment.yaml \ > -e > /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml > \ > -e > /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml > \ > -e > /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml > \ > -e /home/stack/templates/ironic-config.yaml \ > -e > /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ > -e > /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ > -e /home/stack/containers-prepare-parameter.yaml > > after which it fails with error: > > 2022-04-27 12:34:00.801 607453 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud [-] Exception occured > while running the command: ValueError: Failed to deploy: ERROR: HEAT-E99001 > Service neutron is not available for* resource type > OS::TripleO::Network::Ports::ControlPlaneVipPort, reason: neutron network > endpoint is not in service catalog.* > Traceback (most recent call last): > > we were able to deploy Triple0 Victoria/ Train /ussuri using the same > approach. Please suggest any further changes needed to include in our > setup. > > > -Lokendra > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From gmann at ghanshyammann.com  Thu Apr 28 17:06:51 2022
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Thu, 28 Apr 2022 12:06:51 -0500
Subject: [all] Resend New CFN(Computing Force Network) SIG Proposal
In-Reply-To: <1e8d01d85ae8$d35c8c80$7a15a580$@com>
References: <18061d9a035.cb6bda15115439.2847955073200685698@ghanshyammann.com> <16b901d85925$1c9505a0$55bf10e0$@com> <18068375fe1.e1468dfa198843.6304495179716775789@ghanshyammann.com> <1a1b01d859e5$f910cbc0$eb326340$@com> <1806bf36d35.cbe9fdb1263946.4349444633878513122@ghanshyammann.com> <1e8d01d85ae8$d35c8c80$7a15a580$@com>
Message-ID: <1807123c683.b1588c66341515.8416887600149252737@ghanshyammann.com>

---- On Thu, 28 Apr 2022 05:15:02 -0500 niujie wrote ----
> We would like to make a brief introduction of CFN to the community. We will not discuss any kind of working-group application; we will purely share the CFN concept, vision and related technologies, and hopefully get more people interested in this topic.
> Do you think the TC meeting is a good place to start?
> If so, we are OK with next week's TC meeting, and I can add it to the agenda.

Sure, why not. Our next TC meeting is on 5th May, and it is a video call, so that is good timing. Please let me know how much time you need.

Along with presenting to the TC, I would suggest presenting it to a wider audience as well, for example at the OpenInfra Summit. As the next Summit schedule is already out, please check with the foundation whether there is any slot you can fit into. But I like the idea of giving an overview in the TC meeting and seeing whether more people get interested and are ready to collaborate.
-gmann

>
> Thanks
> Jie Niu
>
> [...]

From katonalala at gmail.com  Thu Apr 28 20:31:04 2022
From: katonalala at gmail.com (Lajos Katona)
Date: Thu, 28 Apr 2022 22:31:04 +0200
Subject: [neutron] Drivers meeting - Friday 29.4.2022 - cancelled
Message-ID:

Hi Neutron Drivers!

Due to the lack of an agenda, let's cancel tomorrow's drivers meeting.
See you at the meeting next week.

Lajos Katona (lajoskatona)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From fdehech.7 at gmail.com  Fri Apr 29 09:26:54 2022
From: fdehech.7 at gmail.com (Firas Dehech)
Date: Fri, 29 Apr 2022 10:26:54 +0100
Subject: ERROR trust id
Message-ID:

Hi all,

I am working on an OpenStack project on Ubuntu Linux 20.04. I want to create a Hadoop cluster with one master node and three worker nodes, and I have a problem: the cluster does not come up.

Status ERROR: Creating cluster failed for the following reason(s): Failed to create trust Error ID: ef5e8b0a-8e6d-4878-bebb-f37f4fa50a88, Failed to create trust Error ID: 43157255-86af-4773-96c1-a07ca7ac66ed.
Links: https://docs.openstack.org/devstack/latest/

File local.conf:

[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=secret
RABBIT_PASSWORD=secret
SERVICE_PASSWORD=$ADMIN_PASSWORD
HOST_IP=10.0.2.15
LOGFILE=$DEST/logs/stack.sh.log
SWIFT_REPLICAS=1
SWIFT_HASH=66a3d6b56c1f479c8b4e70ab5c2000f5
SWIFT_DATA_DIR=$DEST/data
enable_plugin sahara https://opendev.org/openstack/sahara
enable_plugin sahara-dashboard https://opendev.org/openstack/sahara-dashboard

Can you advise me about these errors? Is there anything to worry about?

Regards.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From adivya1.singh at gmail.com  Fri Apr 29 13:16:41 2022
From: adivya1.singh at gmail.com (Adivya Singh)
Date: Fri, 29 Apr 2022 18:46:41 +0530
Subject: regarding custom role creation
Message-ID:

Hi Team,

I want to create a custom role in OpenStack, with the privileges being:

1> to allow its holders to make an image public
2> to modify/change the flavor

How can I do this? I have OpenStack Xena installed.

Regards,
Adivya Singh
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From akekane at redhat.com  Fri Apr 29 13:35:41 2022
From: akekane at redhat.com (Abhishek Kekane)
Date: Fri, 29 Apr 2022 19:05:41 +0530
Subject: regarding custom role creation
In-Reply-To:
References:
Message-ID:

Hi Adivya,

You can follow the steps below; I am assuming you are using a devstack-based setup.

1. Source the openrc file from the devstack repo on your local machine.
2. Create a new project with the command below:
openstack project create --description 'project-x' project-x --domain default
3. Create new users with the commands below:
openstack user create admin --password admin
openstack user create normal-user --password normal-user
4. Assign the respective roles to each user-project pair with the users created above:
openstack role add --user normal-user --project project-x member
openstack role add --user admin --project project-x admin
5.
Create admin-rc and member-rc files with the below contents; (Note: Don't forget to change the password, username, OS_PROJECT_NAME and OS_AUTH_URL)

# member-rc file
# Clear any old environment that may conflict.
for key in $( set | awk -F= '/^OS_/ {print $1}' ); do unset "${key}" ; done
export OS_AUTH_TYPE=password
export OS_PASSWORD=normal-user
export OS_AUTH_URL=http://xx.yy.zz.aa/identity
export OS_USERNAME=normal-user
export OS_PROJECT_NAME=project-x
export COMPUTE_API_VERSION=1.1
export NOVA_VERSION=1.1
export OS_NO_CACHE=True
export OS_CLOUDNAME=project-x
export OS_IDENTITY_API_VERSION='3'
export OS_PROJECT_DOMAIN_NAME='Default'
export OS_USER_DOMAIN_NAME='Default'
export OS_CACERT="/etc/pki/ca-trust/source/anchors/cm-local-ca.pem"
# Add OS_CLOUDNAME to PS1
if [ -z "${CLOUDPROMPT_ENABLED:-}" ]; then
    export PS1=${PS1:-""}
    export PS1=\${OS_CLOUDNAME:+"(\$OS_CLOUDNAME)"}\ $PS1
    export CLOUDPROMPT_ENABLED=1
fi

6. Similar to the above, you can create an admin-rc file for the admin user.
7. Source the respective rc files and run glance/nova/or any other commands;

If you want to add a new role to an existing project then ignore step 2 and follow from step 3.

Thanks & Best Regards,

Abhishek Kekane

On Fri, Apr 29, 2022 at 6:51 PM Adivya Singh wrote:

> Hi Team,
>
> i want to create a custom role in openstack, with privilege being
>
> 1> to allow them to make image public
> 2> to modify/change the flavor
>
> how can i do this, I have openstack installed with XENA
>
> regards
> Adivya Singh
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ygk.kmr at gmail.com  Fri Apr 29 14:14:59 2022
From: ygk.kmr at gmail.com (Gk Gk)
Date: Fri, 29 Apr 2022 19:44:59 +0530
Subject: Need information
Message-ID: 

Hi All,

I need information about availability zones in nova. I tried googling but can't find enough information. My questions are:

1. Why is it that we have two concepts of aggregates and AZs? Is one not enough ?
Like exposing aggregates and creating flavors with extra specs to match? Why do we need AZs also?

2. Why is it that one node should only be a part of one AZ but not two, whereas in the case of aggregates it can overlap?

3. Also, why can't we expose only aggregates like AZs but block the compute member list to the users? Doing it this way will serve the purpose of AZs as well? Why don't we want to expose aggregates as AZs?

Thanks
Kumar
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From skaplons at redhat.com  Fri Apr 29 14:55:52 2022
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Fri, 29 Apr 2022 16:55:52 +0200
Subject: [all][tc] Change OpenStack release naming policy proposal
Message-ID: <2175937.irdbgypaU6@p1>

Hi,

During the last PTG in April 2022 in the TC meeting we were discussing our release naming policy [1].
It seems that choosing an appropriate name for every release is very hard and time consuming. There are many factors which need to be taken into consideration, like legal checks but also the meaning of the chosen name in many different languages.

Finally we decided that now, after the Zed release, when we will have gone all the way through the alphabet, it is a very good time to change this policy and use only a numeric version: "year"."release in the year". It is proposed in [2].
This is also good timing for such a change because in the same release we are going to start our "Tick Tock" release cadence, which means that every Tick release will be released with .1 (like 2023.1, 2024.1, etc.) and every Tock release with .2 (2023.2, 2024.2, etc.).

[1] https://etherpad.opendev.org/p/tc-zed-ptg#L265
[2] https://review.opendev.org/c/openstack/governance/+/839897

-- 
Slawek Kaplonski
Principal Software Engineer
Red Hat
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: This is a digitally signed message part.
URL: 

From gmann at ghanshyammann.com  Fri Apr 29 15:05:56 2022
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Fri, 29 Apr 2022 10:05:56 -0500
Subject: [all][tc] Change OpenStack release naming policy proposal
In-Reply-To: <2175937.irdbgypaU6@p1>
References: <2175937.irdbgypaU6@p1>
Message-ID: <18075db6e21.118543125401989.8208677652226278753@ghanshyammann.com>

 ---- On Fri, 29 Apr 2022 09:55:52 -0500 Slawek Kaplonski wrote ----
 > Hi,
 > 
 > During the last PTG in April 2022 in the TC meeting we were discussing our release naming policy [1].
 > It seems that choosing appropriate name for every releases is very hard and time consuming. There is many factors which needs to be taken into consideration there like legal but also meaning of the chosen name in many different languages.

Adding more detail on why the TC is thinking of dropping the release name and keeping only a number (Slawek will also add these in the review, as history).

Why we dropped the release name:
------------------------------------------

* Problem with release name:

** We are a wider community with many international communities, developers, and cultures, and choosing a perfect name satisfying all of them is not possible.
** We as individuals also have some problems with a few names, which might be due to emotional, political, or historical reasons. And filtering them out is not possible.
** The name chosen after the election needs trademark checks from the foundation as a final step, and there is always a chance that winning names are filtered out, so the electorate might not be happy with that. So the process is also not perfect.

* Tried to solve it in many ways:

There were a lot of proposals to solve the above issues, but we could not get agreement on any of them, and have lived with all these issues until the Zed cycle.
** https://review.opendev.org/c/openstack/governance/+/677749 ** https://review.opendev.org/c/openstack/governance/+/678046 ** https://review.opendev.org/c/openstack/governance/+/677745 ** https://review.opendev.org/c/openstack/governance/+/684688 ** https://review.opendev.org/c/openstack/governance/+/675788 ** https://review.opendev.org/c/openstack/governance/+/687764 ** https://review.opendev.org/c/openstack/governance/+/677827 ** https://review.opendev.org/c/openstack/governance/+/677748 ** https://review.opendev.org/c/openstack/governance/+/677747 ** https://review.opendev.org/c/openstack/governance/+/677746 -gmann > > Finally we decided that now, after Zed release, when we will go all round through alphabet it is very good time to change this policy and use only numeric version with "year"."release in the year". It is proposed in [2]. > This is also good timing for such change because in the same release we are going to start our "Tick Tock" release cadence which means that every Tick release will be release with .1 (like 2023.1, 2024.1, etc.) and every Tock release will be one with .2 (2023.2, 2024.2, etc.). > > [1] https://etherpad.opendev.org/p/tc-zed-ptg#L265 > [2] https://review.opendev.org/c/openstack/governance/+/839897 > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat > From allison at openinfra.dev Fri Apr 29 15:15:27 2022 From: allison at openinfra.dev (Allison Price) Date: Fri, 29 Apr 2022 10:15:27 -0500 Subject: [all][tc] Change OpenStack release naming policy proposal In-Reply-To: <18075db6e21.118543125401989.8208677652226278753@ghanshyammann.com> References: <2175937.irdbgypaU6@p1> <18075db6e21.118543125401989.8208677652226278753@ghanshyammann.com> Message-ID: <088599E0-4F95-4467-900A-5041704BD031@openinfra.dev> Hi Slawek and Gmann, Thank you for raising the points about the OpenStack release naming process. 
> On Apr 29, 2022, at 10:05 AM, Ghanshyam Mann wrote:
> 
> ---- On Fri, 29 Apr 2022 09:55:52 -0500 Slawek Kaplonski wrote ----
> >> Hi,
> >> 
> >> During the last PTG in April 2022 in the TC meeting we were discussing our release naming policy [1].
> >> It seems that choosing appropriate name for every releases is very hard and time consuming. There is many factors which needs to be taken into consideration there like legal but also meaning of the chosen name in many different languages.
> > 
> > Adding more detail on why TC is thinking to drop the release name and keep only number (slawek
> > will add these in review also as histiry to know)
> > 
> > Why we dropped the release name:
> > ------------------------------------------
> > 
> > * Problem with release name:
> > 
> > ** We are a wider community with many international communities, developers, and cultures and choosing a perfect name satisfying all of them is not possible.
> > ** We as individuals also have some problems with a few names which might be due to emotions, political, or historical. And filtering them out is not possible.
> > ** Name after election need trademark checks from the foundation as a final step and there is always a chance that winning names are filtered out so the electorate might not be happy with that. So the process is also not perfect.
> > ** .

From a release marketing perspective, I have significant concerns going down this route. I think that not only do the names reflect whimsical aspects of the community personality, it's also a huge marketing tool in terms of getting traction with OpenStack coverage. This helps us debunk some of the myths out there around the OpenStack community's relevance as well as convey the innovation happening in the features that are delivered upstream.

I don't want to minimize the time consuming nature of the process as well as the cultural sensitivities, so I would like to better understand the steps here and what some of the concerns are in moving forward. From a Foundation perspective, we are happy to help take the processes off the TC as part of other release marketing activities that we do.

I'd be happy to join a TC meeting or discuss this more at the Summit in Berlin, but I would like to discuss alternate ways to maintain the naming process we have in place if possible before moving forward.

Allison

> > * Tried to solve it in many ways:
> > 
> > There were a lot of proposals to solve the above issues but we could not get any agreement on either of these and live with all these issues until the Zed cycle.
>> >> [1] https://etherpad.opendev.org/p/tc-zed-ptg#L265 >> [2] https://review.opendev.org/c/openstack/governance/+/839897 >> >> -- >> Slawek Kaplonski >> Principal Software Engineer >> Red Hat >> > From marcin.juszkiewicz at linaro.org Fri Apr 29 15:43:25 2022 From: marcin.juszkiewicz at linaro.org (Marcin Juszkiewicz) Date: Fri, 29 Apr 2022 17:43:25 +0200 Subject: [all][tc] Change OpenStack release naming policy proposal In-Reply-To: <2175937.irdbgypaU6@p1> References: <2175937.irdbgypaU6@p1> Message-ID: <3ef1f910-5f87-fad2-9bee-79df2abced07@linaro.org> W dniu 29.04.2022 o?16:55, Slawek Kaplonski pisze: > Finally we decided that now, after Zed release, when we will go all > round through alphabet it is very good time to change this policy and > use only numeric version with "year"."release in the year". It is > proposed in [2]. > > This is also good timing for such change because in the same release we > are going to start our "Tick Tock" release cadence which means that > every Tick release will be release with .1 (like 2023.1, 2024.1, etc.) > and every Tock release will be one with .2 (2023.2, 2024.2, etc.). I wonder how often will we see something like "202x.1? I will wait for .2 to get bugs fixed". I seen that with Ubuntu LTS - xx.04 is for brave, xx.04.1 is for first upgrades, xx.04.2 is to attempt LTS->LTS upgrade. Also suggestion: drop tick/tock from naming documentation please. I never remember which is major and which is minor. 
From gmann at ghanshyammann.com Fri Apr 29 15:53:02 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 29 Apr 2022 10:53:02 -0500 Subject: [all][tc] Change OpenStack release naming policy proposal In-Reply-To: <088599E0-4F95-4467-900A-5041704BD031@openinfra.dev> References: <2175937.irdbgypaU6@p1> <18075db6e21.118543125401989.8208677652226278753@ghanshyammann.com> <088599E0-4F95-4467-900A-5041704BD031@openinfra.dev> Message-ID: <18076068c94.f090875e405065.8322699810706317962@ghanshyammann.com> ---- On Fri, 29 Apr 2022 10:15:27 -0500 Allison Price wrote ---- > Hi Slawek and Gmann, > > Thank you for raising the points about the OpenStack release naming process. > > > On Apr 29, 2022, at 10:05 AM, Ghanshyam Mann wrote: > > > > ---- On Fri, 29 Apr 2022 09:55:52 -0500 Slawek Kaplonski wrote ---- > >> Hi, > >> > >> During the last PTG in April 2022 in the TC meeting we were discussing our release naming policy [1]. > >> It seems that choosing appropriate name for every releases is very hard and time consuming. There is many factors which needs to be taken into consideration there like legal but also meaning of the chosen name in many different languages. > > > > Adding more detail on why TC is thinking to drop the release name and keep only number (slawek > > will add these in review also as histiry to know) > > > > Why we dropped the release name: > > ------------------------------------------ > > > > * Problem with release name: > > > > ** We are a wider community with many international communities, developers, and cultures and choosing a perfect name satisfying all of them is not possible. > > ** We as individuals also have some problems with a few names which might be due to emotions, political, or historical. And filtering them out is not possible. > > ** Name after election need trademark checks from the foundation as a final step and there is always a chance that winning names are filtered out so the electorate might not be happy with that. 
So the process is also not perfect.
 > > ** .
 > 
 > From a release marketing perspective, I have significant concerns going down this route. I think that not only do the names reflect whimsical aspects of the community personality, it's also a huge marketing tool in terms of getting traction with OpenStack coverage. This helps us debunk some of the myths out there around the OpenStack community's relevance as well as convey the innovation happening in the features that are delivered upstream.
 > 
 > I don't want to minimize the time consuming nature of the process as well as the cultural sensitivities, so I would like to better understand the steps here and what some of the concerns are in moving forward. From a Foundation perspective, we are happy to help take the processes off the TC as part of other release marketing activities that we do.
 > 
 > I'd be happy to join a TC meeting or discuss this more at the Summit in Berlin, but I would like to discuss alternate ways to maintain the naming process we have in place if possible before moving forward.

Thanks Allison for joining the discussion. As it involves the foundation members/marketing team, I thought of keeping the foundation ML in the loop but forgot (doing now).

I understand, and we touched on the marketing perspective at the PTG, but not in detail.

The main issue here is not just the process but more of a cultural one. No name is going to be accepted by everyone in the community, and that is why we have faced objections to names almost every cycle since Ussuri. As you can see from the references I mentioned in the ' * Tried to solve it in many ways' section, we tried to fix the process in many ways, but none of those was accepted, as none of them is perfect.

One idea to keep the marketing side the same is that we can keep some tag line with a few words to make the release attractive and interesting. For example: "OpenStack 2023.1 - 'Secure & Stable' ". Does that solve the marketing need?

We will be happy to discuss any new idea which can solve the mentioned issues. Feel free to propose the idea to the TC and I can schedule a call for that.

-gmann

 > 
 > Allison
 > 
 > > 
 > > * Tried to solve it in many ways:
 > > 
 > > There were a lot of proposals to solve the above issues but we could not get any agreement on either of these and live with all these issues until the Zed cycle.
> >> > >> [1] https://etherpad.opendev.org/p/tc-zed-ptg#L265 > >> [2] https://review.opendev.org/c/openstack/governance/+/839897 > >> > >> -- > >> Slawek Kaplonski > >> Principal Software Engineer > >> Red Hat > >> > > > > > From zigo at debian.org Fri Apr 29 15:54:34 2022 From: zigo at debian.org (Thomas Goirand) Date: Fri, 29 Apr 2022 17:54:34 +0200 Subject: [all][tc] Change OpenStack release naming policy proposal In-Reply-To: <2175937.irdbgypaU6@p1> References: <2175937.irdbgypaU6@p1> Message-ID: As someone that can tell by heart all of the 26 release names, I will wholeheartedly regret release names. Is the decision final, or can it still be reverted? Cheers, Thomas Goirand (zigo) On 4/29/22 16:55, Slawek Kaplonski wrote: > Hi, > > > During the last PTG in April 2022 in the TC meeting we were discussing > our release naming policy [1]. > > It seems that choosing appropriate name for every releases is very hard > and time consuming. There is many factors which needs to be taken into > consideration there like legal but also meaning of the chosen name in > many different languages. > > > Finally we decided that now, after Zed release, when we will go all > round through alphabet it is very good time to change this policy and > use only numeric version with "year"."release in the year". It is > proposed in [2]. > > This is also good timing for such change because in the same release we > are going to start our "Tick Tock" release cadence which means that > every Tick release will be release with .1 (like 2023.1, 2024.1, etc.) > and every Tock release will be one with .2 (2023.2, 2024.2, etc.). 
> > [1] https://etherpad.opendev.org/p/tc-zed-ptg#L265
> > [2] https://review.opendev.org/c/openstack/governance/+/839897
> > 
> > -- 
> > Slawek Kaplonski
> > Principal Software Engineer
> > Red Hat
> 

From kurt at garloff.de  Fri Apr 29 17:03:47 2022
From: kurt at garloff.de (Kurt Garloff)
Date: Fri, 29 Apr 2022 19:03:47 +0200
Subject: Re: [OpenInfra Foundation] [all][tc] Change OpenStack release naming policy proposal
In-Reply-To: <18076068c94.f090875e405065.8322699810706317962@ghanshyammann.com>
References: <2175937.irdbgypaU6@p1> <18075db6e21.118543125401989.8208677652226278753@ghanshyammann.com> <088599E0-4F95-4467-900A-5041704BD031@openinfra.dev> <18076068c94.f090875e405065.8322699810706317962@ghanshyammann.com>
Message-ID: <52AED2E2-E10F-463B-A06D-D219B246D980@garloff.de>

Hi,

I see a tendency in western societies that no decisions are ever taken out of fear someone could be offended or even litigate.
While it's very reasonable to be careful to avoid offenses, we must not take it to the extreme and allow it to paralyze us by requiring that no one ever objects, IMVHO. 100% happiness is too high a bar.

I would hope that the offer from the foundation staff to help with the name vetting process and take load off the TC is helpful here.

Replacing easily remembered names with sterile numbers is definitely a step backwards in perception.

Just my 0.02€.
-- 
Kurt

On 29 April 2022 17:53:02 CEST, Ghanshyam Mann wrote:
> ---- On Fri, 29 Apr 2022 10:15:27 -0500 Allison Price wrote ----
> > Hi Slawek and Gmann,
> > 
> > Thank you for raising the points about the OpenStack release naming process.
> > 
> > > On Apr 29, 2022, at 10:05 AM, Ghanshyam Mann wrote:
> > > 
> > > ---- On Fri, 29 Apr 2022 09:55:52 -0500 Slawek Kaplonski wrote ----
> > >> Hi,
> > >> 
> > >> During the last PTG in April 2022 in the TC meeting we were discussing our release naming policy [1].
> > >> It seems that choosing appropriate name for every releases is very hard and time consuming. There is many factors which needs to be taken into consideration there like legal but also meaning of the chosen name in many different languages. > > > > > > Adding more detail on why TC is thinking to drop the release name and keep only number (slawek > > > will add these in review also as histiry to know) > > > > > > Why we dropped the release name: > > > ------------------------------------------ > > > > > > * Problem with release name: > > > > > > ** We are a wider community with many international communities, developers, and cultures and choosing a perfect name satisfying all of them is not possible. > > > ** We as individuals also have some problems with a few names which might be due to emotions, political, or historical. And filtering them out is not possible. > > > ** Name after election need trademark checks from the foundation as a final step and there is always a chance that winning names are filtered out so the electorate might not be happy with that. So the process is also not perfect. > > > ** . > > > > From a release marketing perspective, I have significant concerns going down this route. I think that not only do the names reflect whimsical aspects of the community personality, it?s also a huge marketing tool in terms of getting traction with OpenStack coverage. This helps us debunk some of the myths out there around the OpenStack community?s relevance as well as convey the innovation happening in the features that are delivered upstream. > > > > I don?t want to minimize the time consuming nature of the process as well as the cultural sensitivities, so I would like to better understand the steps here and what some of the concerns are in moving forward. From a Foundation perspective, we are happy to help take the processes off the TC as part of other release marketing activities that we do. 
> > > > I?d be happy to join a TC meeting or discuss this more at the Summit in Berlin, but I would like to discuss alternate ways to maintain the naming process we have in place if possible before moving forward. > >Thanks Alisson for joining the discussion. As it involve the foundation members/marketting team, I thought of keeping >the foundation ML in loop but forgot (doing now). > >I understand and we touch based the marketting perspective in PTG but not in detail. > >Main issue here is not just only the process but more of the cutural. None of the name is going to be >accepted by everyone in community and that is why we face the objection on name almost since >ussuri cycle. As you can see the reference I mentioned in ' * Tried to solve it in many ways' section, we >tried to solve the process in many ways but none of those are accepted as none of it is perfect. > >One idea to keep markettitng things same is that we can keep some tag line with few words to >make release attractive and interesting. For example: "OpenStack 2023.1 - 'Secure & Stable' ". Does >that sovle the marketting need? > >We wil be happy to discuss if there is new idea which can solve the mentioned issues. Feel free to >proposa the idea in TC and I can schedule a call for that. > >-gmann > > > > > Allison > > > > > > > > * Tried to solve it in many ways: > > > > > > There were a lot of proposals to solve the above issues but we could not get any agreement on either of these and live with all these issues until the Zed cycle. 
> > > > > > ** https://review.opendev.org/c/openstack/governance/+/677749 > > > ** https://review.opendev.org/c/openstack/governance/+/678046 > > > ** https://review.opendev.org/c/openstack/governance/+/677745 > > > ** https://review.opendev.org/c/openstack/governance/+/684688 > > > ** https://review.opendev.org/c/openstack/governance/+/675788 > > > ** https://review.opendev.org/c/openstack/governance/+/687764 > > > ** https://review.opendev.org/c/openstack/governance/+/677827 > > > ** https://review.opendev.org/c/openstack/governance/+/677748 > > > ** https://review.opendev.org/c/openstack/governance/+/677747 > > > ** https://review.opendev.org/c/openstack/governance/+/677746 > > > > > > -gmann > > > > > >> > > >> Finally we decided that now, after Zed release, when we will go all round through alphabet it is very good time to change this policy and use only numeric version with "year"."release in the year". It is proposed in [2]. > > >> This is also good timing for such change because in the same release we are going to start our "Tick Tock" release cadence which means that every Tick release will be release with .1 (like 2023.1, 2024.1, etc.) and every Tock release will be one with .2 (2023.2, 2024.2, etc.). > > >> > > >> [1] https://etherpad.opendev.org/p/tc-zed-ptg#L265 > > >> [2] https://review.opendev.org/c/openstack/governance/+/839897 > > >> > > >> -- > > >> Slawek Kaplonski > > >> Principal Software Engineer > > >> Red Hat > > >> > > > > > > > > > > >_______________________________________________ >Foundation mailing list >Foundation at lists.openinfra.dev >http://lists.openinfra.dev/cgi-bin/mailman/listinfo/foundation -- Kurt Garloff , Cologne, Germany (Sent from Mobile with K9.) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From emccormick at cirrusseven.com Fri Apr 29 17:28:03 2022 From: emccormick at cirrusseven.com (Erik McCormick) Date: Fri, 29 Apr 2022 13:28:03 -0400 Subject: Need information In-Reply-To: References: Message-ID: On Fri, Apr 29, 2022, 10:17 AM Gk Gk wrote: > Hi All, > > I need information about availability zones in nova. I tried googling but > cant find enough information. My questions are , > > 1. Why is it that we have two concepts of aggregates and AZs ? Is one not > enough ? Like exposing aggregates and creating flavors with extra specs to > match ? Why do we need AZs also ? > AZ isn't for picking the right type of host. It's for picking hosts that are distributed. Maybe you want a host with local disk. Great, you use aggregates / flavor specs for that. But what if you want instances for a myself cluster or setting to each be in different racks? Then you need AZs. > 2. Why is it that one node should only be a part of one AZ but not two ? > whereas in the case of aggregates, it can overlap ? > Overlapping failure domains would be pretty confusing and pointless. > > 3. Also why cant we expose only aggregates like AZs but block the compute > member list to the users ? Doing this way will serve the purpose of AZ as > well ? Why we dont want to expose aggregates as AZs ? > They're mutually exclusive concepts. > Thanks > Kumar > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gmann at ghanshyammann.com Fri Apr 29 17:35:31 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 29 Apr 2022 12:35:31 -0500 Subject: [OpenInfra Foundation] [all][tc] Change OpenStack release naming policy proposal In-Reply-To: <52AED2E2-E10F-463B-A06D-D219B246D980@garloff.de> References: <2175937.irdbgypaU6@p1> <18075db6e21.118543125401989.8208677652226278753@ghanshyammann.com> <088599E0-4F95-4467-900A-5041704BD031@openinfra.dev> <18076068c94.f090875e405065.8322699810706317962@ghanshyammann.com> <52AED2E2-E10F-463B-A06D-D219B246D980@garloff.de> Message-ID: <18076645fad.c8ca1e22409208.778705174696178878@ghanshyammann.com> ---- On Fri, 29 Apr 2022 12:03:47 -0500 Kurt Garloff wrote ---- > Hi, > > I see a tendency in western societies that no decisions are ever taken out of fear someone could be offended or even litigate. > While it's very reasonable to be careful to avoid offenses, we must not take it to the extreme and allow it to paralyze us by requiring no one ever objects, IMVHO. 100% happiness is too high a bar. > > I would hope that the offer from the foundation staff to help with the name vetting process and take off load from the TC is helpful here. Definatly that an option and we will be happy to do that. > > Replacing well rememberable names with sterile numbers is definitely a step backwards in perception. Well, it depends. For marketting yes names are great to remember and publish but from tehnical perspective especially while upgrade they are hard to know which year these releses were released. And when we will have tick-tock release model[1] number are more useful to know by operators what which one is tick release and which one is tock. With name only it is not best way to find. So both have its pros and cons. [1] https://governance.openstack.org/tc/resolutions/20220210-release-cadence-adjustment.html -gmann > > Just my 0.02?. > -- Kurt > > Am 29. 
April 2022 17:53:02 MESZ schrieb Ghanshyam Mann : ---- On Fri, 29 Apr 2022 10:15:27 -0500 Allison Price wrote ---- > Hi Slawek and Gmann, > > Thank you for raising the points about the OpenStack release naming process. > > On Apr 29, 2022, at 10:05 AM, Ghanshyam Mann wrote: > > ---- On Fri, 29 Apr 2022 09:55:52 -0500 Slawek Kaplonski wrote ---- > Hi, > > During the last PTG in April 2022 in the TC meeting we were discussing our release naming policy [1]. > It seems that choosing appropriate name for every releases is very hard and time consuming. There is many factors which needs to be taken into consideration there like legal but also meaning of the chosen name in many different languages. > > Adding more detail on why TC is thinking to drop the release name and keep only number (slawek > will add these in review also as histiry to know) > > Why we dropped the release name: > * Problem with release name: > > ** We are a wider community with many international communities, developers, and cultures and choosing a perfect name satisfying all of them is not possible. > ** We as individuals also have some problems with a few names which might be due to emotions, political, or historical. And filtering them out is not possible. > ** Name after election need trademark checks from the foundation as a final step and there is always a chance that winning names are filtered out so the electorate might not be happy with that. So the process is also not perfect. > ** . > > From a release marketing perspective, I have significant concerns going down this route. I think that not only do the names reflect whimsical aspects of the community personality, it?s also a huge marketing tool in terms of getting traction with OpenStack coverage. This helps us debunk some of the myths out there around the OpenStack community?s relevance as well as convey the innovation happening in the features that are delivered upstream. 
> > I don't want to minimize the time consuming nature of the process as well as the cultural sensitivities, so I would like to better understand the steps here and what some of the concerns are in moving forward. From a Foundation perspective, we are happy to help take the processes off the TC as part of other release marketing activities that we do. > > I'd be happy to join a TC meeting or discuss this more at the Summit in Berlin, but I would like to discuss alternate ways to maintain the naming process we have in place if possible before moving forward. > > Thanks Allison for joining the discussion. As it involves the foundation members/marketing team, I thought of keeping > the foundation ML in the loop but forgot (doing now). > > I understand, and we touched on the marketing perspective in the PTG, but not in detail. > > The main issue here is not only the process but more the cultural one. No name is going to be > accepted by everyone in the community, and that is why we have faced objections to names almost since the > Ussuri cycle. As you can see from the references I mentioned in the ' * Tried to solve it in many ways' section, we > tried to fix the process in many ways, but none of those were accepted as none of them is perfect. > > One idea to keep the marketing side the same is that we can keep some tag line with a few words to > make the release attractive and interesting. For example: "OpenStack 2023.1 - 'Secure & Stable' ". Does > that solve the marketing need? > > We will be happy to discuss if there is a new idea which can solve the mentioned issues. Feel free to > propose the idea to the TC and I can schedule a call for that. > > -gmann > > > Allison > > > * Tried to solve it in many ways: > > There were a lot of proposals to solve the above issues, but we could not get agreement on any of these, and we have lived with all these issues until the Zed cycle.
> > ** https://review.opendev.org/c/openstack/governance/+/677749 > ** https://review.opendev.org/c/openstack/governance/+/678046 > ** https://review.opendev.org/c/openstack/governance/+/677745 > ** https://review.opendev.org/c/openstack/governance/+/684688 > ** https://review.opendev.org/c/openstack/governance/+/675788 > ** https://review.opendev.org/c/openstack/governance/+/687764 > ** https://review.opendev.org/c/openstack/governance/+/677827 > ** https://review.opendev.org/c/openstack/governance/+/677748 > ** https://review.opendev.org/c/openstack/governance/+/677747 > ** https://review.opendev.org/c/openstack/governance/+/677746 > > -gmann > > > Finally we decided that now, after the Zed release, when we will have gone all the way through the alphabet, it is a very good time to change this policy and use only a numeric version with "year"."release in the year". It is proposed in [2]. > This is also good timing for such a change because in the same release we are going to start our "Tick Tock" release cadence, which means that every Tick release will be released with .1 (like 2023.1, 2024.1, etc.) and every Tock release will be one with .2 (2023.2, 2024.2, etc.). > > [1] https://etherpad.opendev.org/p/tc-zed-ptg#L265 > [2] https://review.opendev.org/c/openstack/governance/+/839897 > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat > > > > > > > Foundation mailing list > Foundation at lists.openinfra.dev > http://lists.openinfra.dev/cgi-bin/mailman/listinfo/foundation > -- > Kurt Garloff , Cologne, Germany > (Sent from Mobile with K9.)
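[Editor's note] The "year"."release in the year" numbering and the tick-tock cadence discussed in this thread can be illustrated with a short sketch. This is a hypothetical helper written for this discussion, not part of any OpenStack tooling; the function names (`parse_release`, `is_tick`, `upgrade_supported`) are invented here, and the upgrade rule encoded below (adjacent upgrades always allowed, plus tick-to-next-tick skipping the intermediate tock) is an assumption based on the tick-tock release-cadence resolution referenced above.

```python
# Sketch of the proposed "year.release-in-year" numbering, assuming the
# planned cadence where the .1 release of a year is "tick" and the .2
# release is "tock". Hypothetical helpers, not real OpenStack tooling.

def parse_release(version: str) -> tuple[int, int]:
    """Split e.g. '2023.1' into (2023, 1)."""
    year, ordinal = version.split(".")
    return int(year), int(ordinal)

def is_tick(version: str) -> bool:
    """Tick releases are the first release of each year (the .1)."""
    return parse_release(version)[1] == 1

def upgrade_supported(src: str, dst: str) -> bool:
    """Assumed rule: upgrade to the immediately following release, or
    tick-to-next-tick skipping the intermediate tock."""
    sy, so = parse_release(src)
    dy, do = parse_release(dst)
    if (dy, do) <= (sy, so):
        return False  # no downgrades or no-op "upgrades"
    # adjacent releases are always upgradable
    if (sy, so) == (dy, do - 1) or (so == 2 and (dy, do) == (sy + 1, 1)):
        return True
    # a tick release may jump straight to the next year's tick
    return is_tick(src) and is_tick(dst) and dy == sy + 1

print(is_tick("2023.1"))                      # True
print(upgrade_supported("2023.1", "2024.1"))  # True (tick to tick, tock skipped)
print(upgrade_supported("2023.2", "2024.2"))  # False (a tock cannot skip ahead)
```

One operational advantage gmann describes falls out directly: the year a release shipped is readable from the identifier itself, with no name-to-year lookup table needed.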
From gmann at ghanshyammann.com Fri Apr 29 18:05:30 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 29 Apr 2022 13:05:30 -0500 Subject: [OpenInfra Foundation] [all][tc] Change OpenStack release naming policy proposal In-Reply-To: <18076645fad.c8ca1e22409208.778705174696178878@ghanshyammann.com> References: <2175937.irdbgypaU6@p1> <18075db6e21.118543125401989.8208677652226278753@ghanshyammann.com> <088599E0-4F95-4467-900A-5041704BD031@openinfra.dev> <18076068c94.f090875e405065.8322699810706317962@ghanshyammann.com> <52AED2E2-E10F-463B-A06D-D219B246D980@garloff.de> <18076645fad.c8ca1e22409208.778705174696178878@ghanshyammann.com> Message-ID: <180767fd287.f316b64b409958.2313211197013427574@ghanshyammann.com> ---- On Fri, 29 Apr 2022 12:35:31 -0500 Ghanshyam Mann wrote ---- > ---- On Fri, 29 Apr 2022 12:03:47 -0500 Kurt Garloff wrote ---- > > Hi, > > > > I see a tendency in western societies that no decisions are ever taken out of fear someone could be offended or even litigate. > > While it's very reasonable to be careful to avoid offenses, we must not take it to the extreme and allow it to paralyze us by requiring no one ever objects, IMVHO. 100% happiness is too high a bar. > > > > I would hope that the offer from the foundation staff to help with the name vetting process and take off load from the TC is helpful here. > > Definatly that an option and we will be happy to do that. > > > > > Replacing well rememberable names with sterile numbers is definitely a step backwards in perception. > > Well, it depends. For marketting yes names are great to remember and publish but from tehnical perspective especially > while upgrade they are hard to know which year these releses were released. And when we will have tick-tock release > model[1] number are more useful to know by operators what which one is tick release and which one is tock. With name > only it is not best way to find. > > So both have its pros and cons. 
NOTE: It seems that keeping both the 'openstack-discuss' and 'foundation' MLs in the loop is not the right way as per the ML Etiquette[1], even though I am not sure it is ideal to restrict the communication for a topic targeting more than one ML, or to ask more people to join another ML for a single topic. So let's keep this thread's discussion on the openstack-discuss ML only; anyone from another ML who is interested in this topic can join it on openstack-discuss. Sorry for the confusion. -- foundation ML. [1] https://wiki.openstack.org/wiki/MailingListEtiquette#Avoid_cross-posting -gmann > > [1] https://governance.openstack.org/tc/resolutions/20220210-release-cadence-adjustment.html > > -gmann > > > > > Just my 0.02€. > > -- Kurt > > > > Am 29. April 2022 17:53:02 MESZ schrieb Ghanshyam Mann : ---- On Fri, 29 Apr 2022 10:15:27 -0500 Allison Price wrote ---- > > Hi Slawek and Gmann, > > > > Thank you for raising the points about the OpenStack release naming process. > > > > On Apr 29, 2022, at 10:05 AM, Ghanshyam Mann wrote: > > > > ---- On Fri, 29 Apr 2022 09:55:52 -0500 Slawek Kaplonski wrote ---- > > Hi, > > > > During the last PTG in April 2022 in the TC meeting we were discussing our release naming policy [1]. > > It seems that choosing an appropriate name for every release is very hard and time consuming. There are many factors which need to be taken into consideration, such as legal issues but also the meaning of the chosen name in many different languages. > > > > Adding more detail on why the TC is thinking to drop the release name and keep only the number (slawek > > will add these in the review also as history to know) > > > > Why we dropped the release name: > > * Problem with release name: > > > > ** We are a wider community with many international communities, developers, and cultures, and choosing a perfect name satisfying all of them is not possible. > > ** We as individuals also have some problems with a few names which might be due to emotions, political, or historical. And filtering them out is not possible.
> > ** Name after election need trademark checks from the foundation as a final step and there is always a chance that winning names are filtered out so the electorate might not be happy with that. So the process is also not perfect. > > ** . > > > > From a release marketing perspective, I have significant concerns going down this route. I think that not only do the names reflect whimsical aspects of the community personality, it?s also a huge marketing tool in terms of getting traction with OpenStack coverage. This helps us debunk some of the myths out there around the OpenStack community?s relevance as well as convey the innovation happening in the features that are delivered upstream. > > > > I don?t want to minimize the time consuming nature of the process as well as the cultural sensitivities, so I would like to better understand the steps here and what some of the concerns are in moving forward. From a Foundation perspective, we are happy to help take the processes off the TC as part of other release marketing activities that we do. > > > > I?d be happy to join a TC meeting or discuss this more at the Summit in Berlin, but I would like to discuss alternate ways to maintain the naming process we have in place if possible before moving forward. > > > > Thanks Alisson for joining the discussion. As it involve the foundation members/marketting team, I thought of keeping > > the foundation ML in loop but forgot (doing now). > > > > I understand and we touch based the marketting perspective in PTG but not in detail. > > > > Main issue here is not just only the process but more of the cutural. None of the name is going to be > > accepted by everyone in community and that is why we face the objection on name almost since > > ussuri cycle. As you can see the reference I mentioned in ' * Tried to solve it in many ways' section, we > > tried to solve the process in many ways but none of those are accepted as none of it is perfect. 
> > > > One idea to keep markettitng things same is that we can keep some tag line with few words to > > make release attractive and interesting. For example: "OpenStack 2023.1 - 'Secure & Stable' ". Does > > that sovle the marketting need? > > > > We wil be happy to discuss if there is new idea which can solve the mentioned issues. Feel free to > > proposa the idea in TC and I can schedule a call for that. > > > > -gmann > > > > > > Allison > > > > > > * Tried to solve it in many ways: > > > > There were a lot of proposals to solve the above issues but we could not get any agreement on either of these and live with all these issues until the Zed cycle. > > > > ** https://review.opendev.org/c/openstack/governance/+/677749 > > ** https://review.opendev.org/c/openstack/governance/+/678046 > > ** https://review.opendev.org/c/openstack/governance/+/677745 > > ** https://review.opendev.org/c/openstack/governance/+/684688 > > ** https://review.opendev.org/c/openstack/governance/+/675788 > > ** https://review.opendev.org/c/openstack/governance/+/687764 > > ** https://review.opendev.org/c/openstack/governance/+/677827 > > ** https://review.opendev.org/c/openstack/governance/+/677748 > > ** https://review.opendev.org/c/openstack/governance/+/677747 > > ** https://review.opendev.org/c/openstack/governance/+/677746 > > > > -gmann > > > > > > Finally we decided that now, after Zed release, when we will go all round through alphabet it is very good time to change this policy and use only numeric version with "year"."release in the year". It is proposed in [2]. > > This is also good timing for such change because in the same release we are going to start our "Tick Tock" release cadence which means that every Tick release will be release with .1 (like 2023.1, 2024.1, etc.) and every Tock release will be one with .2 (2023.2, 2024.2, etc.). 
> > > > [1] https://etherpad.opendev.org/p/tc-zed-ptg#L265 > > [2] https://review.opendev.org/c/openstack/governance/+/839897 > > > > -- > > Slawek Kaplonski > > Principal Software Engineer > > Red Hat > > > > > > > > > > > > > > Foundation mailing list > > Foundation at lists.openinfra.dev > > http://lists.openinfra.dev/cgi-bin/mailman/listinfo/foundation > > -- > > Kurt Garloff , Cologne, Germany > > (Sent from Mobile with K9.) > > From fungi at yuggoth.org Fri Apr 29 18:13:54 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 29 Apr 2022 18:13:54 +0000 Subject: [dev][infra][tact-sig] Retiring the status.openstack.org server In-Reply-To: <20220422201233.mcfhn2u4haceuaf2@yuggoth.org> References: <20220422201233.mcfhn2u4haceuaf2@yuggoth.org> Message-ID: <20220429181353.iiup762gm7fvmxks@yuggoth.org> On 2022-04-22 20:12:34 +0000 (+0000), Jeremy Stanley wrote: [...] > I'm planning to take status.openstack.org offline at the end of > this month (late next week) [...] Since no concerns were raised, I've shut down the server as of 16:00 UTC today. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From allison at openinfra.dev Fri Apr 29 18:18:33 2022 From: allison at openinfra.dev (Allison Price) Date: Fri, 29 Apr 2022 13:18:33 -0500 Subject: [OpenInfra Foundation] [all][tc] Change OpenStack release naming policy proposal In-Reply-To: <18076645fad.c8ca1e22409208.778705174696178878@ghanshyammann.com> References: <2175937.irdbgypaU6@p1> <18075db6e21.118543125401989.8208677652226278753@ghanshyammann.com> <088599E0-4F95-4467-900A-5041704BD031@openinfra.dev> <18076068c94.f090875e405065.8322699810706317962@ghanshyammann.com> <52AED2E2-E10F-463B-A06D-D219B246D980@garloff.de> <18076645fad.c8ca1e22409208.778705174696178878@ghanshyammann.com> Message-ID: <1E9F233D-46CE-4369-970D-0532B0EE981D@openinfra.dev> > On Apr 29, 2022, at 12:35 PM, Ghanshyam Mann wrote: > > ---- On Fri, 29 Apr 2022 12:03:47 -0500 Kurt Garloff wrote ---- >> Hi, >> >> I see a tendency in western societies that no decisions are ever taken out of fear someone could be offended or even litigate. >> While it's very reasonable to be careful to avoid offenses, we must not take it to the extreme and allow it to paralyze us by requiring no one ever objects, IMVHO. 100% happiness is too high a bar. >> >> I would hope that the offer from the foundation staff to help with the name vetting process and take off load from the TC is helpful here. > > Definitely, that is an option and we will be happy to do that. I'll put time on the next TC meeting agenda to join and discuss this as an option; it's something we are happy to resource. > >> >> Replacing well rememberable names with sterile numbers is definitely a step backwards in perception. > > Well, it depends. For marketing, yes, names are great to remember and publish, but from a technical perspective, especially > during upgrades, it is hard to know in which year these releases came out.
And when we will have tick-tock release > model[1] number are more useful to know by operators what which one is tick release and which one is tock. With name > only it is not best way to find. > > So both have its pros and cons. > > [1] https://governance.openstack.org/tc/resolutions/20220210-release-cadence-adjustment.html > > -gmann > >> >> Just my 0.02?. >> -- Kurt >> >> Am 29. April 2022 17:53:02 MESZ schrieb Ghanshyam Mann : ---- On Fri, 29 Apr 2022 10:15:27 -0500 Allison Price wrote ---- >> Hi Slawek and Gmann, >> >> Thank you for raising the points about the OpenStack release naming process. >> >> On Apr 29, 2022, at 10:05 AM, Ghanshyam Mann wrote: >> >> ---- On Fri, 29 Apr 2022 09:55:52 -0500 Slawek Kaplonski wrote ---- >> Hi, >> >> During the last PTG in April 2022 in the TC meeting we were discussing our release naming policy [1]. >> It seems that choosing appropriate name for every releases is very hard and time consuming. There is many factors which needs to be taken into consideration there like legal but also meaning of the chosen name in many different languages. >> >> Adding more detail on why TC is thinking to drop the release name and keep only number (slawek >> will add these in review also as histiry to know) >> >> Why we dropped the release name: >> * Problem with release name: >> >> ** We are a wider community with many international communities, developers, and cultures and choosing a perfect name satisfying all of them is not possible. >> ** We as individuals also have some problems with a few names which might be due to emotions, political, or historical. And filtering them out is not possible. >> ** Name after election need trademark checks from the foundation as a final step and there is always a chance that winning names are filtered out so the electorate might not be happy with that. So the process is also not perfect. >> ** . >> >> From a release marketing perspective, I have significant concerns going down this route. 
I think that not only do the names reflect whimsical aspects of the community personality, it?s also a huge marketing tool in terms of getting traction with OpenStack coverage. This helps us debunk some of the myths out there around the OpenStack community?s relevance as well as convey the innovation happening in the features that are delivered upstream. >> >> I don?t want to minimize the time consuming nature of the process as well as the cultural sensitivities, so I would like to better understand the steps here and what some of the concerns are in moving forward. From a Foundation perspective, we are happy to help take the processes off the TC as part of other release marketing activities that we do. >> >> I?d be happy to join a TC meeting or discuss this more at the Summit in Berlin, but I would like to discuss alternate ways to maintain the naming process we have in place if possible before moving forward. >> >> Thanks Alisson for joining the discussion. As it involve the foundation members/marketting team, I thought of keeping >> the foundation ML in loop but forgot (doing now). >> >> I understand and we touch based the marketting perspective in PTG but not in detail. >> >> Main issue here is not just only the process but more of the cutural. None of the name is going to be >> accepted by everyone in community and that is why we face the objection on name almost since >> ussuri cycle. As you can see the reference I mentioned in ' * Tried to solve it in many ways' section, we >> tried to solve the process in many ways but none of those are accepted as none of it is perfect. >> >> One idea to keep markettitng things same is that we can keep some tag line with few words to >> make release attractive and interesting. For example: "OpenStack 2023.1 - 'Secure & Stable' ". Does >> that sovle the marketting need? >> >> We wil be happy to discuss if there is new idea which can solve the mentioned issues. 
Feel free to >> proposa the idea in TC and I can schedule a call for that. >> >> -gmann >> >> >> Allison >> >> >> * Tried to solve it in many ways: >> >> There were a lot of proposals to solve the above issues but we could not get any agreement on either of these and live with all these issues until the Zed cycle. >> >> ** https://review.opendev.org/c/openstack/governance/+/677749 >> ** https://review.opendev.org/c/openstack/governance/+/678046 >> ** https://review.opendev.org/c/openstack/governance/+/677745 >> ** https://review.opendev.org/c/openstack/governance/+/684688 >> ** https://review.opendev.org/c/openstack/governance/+/675788 >> ** https://review.opendev.org/c/openstack/governance/+/687764 >> ** https://review.opendev.org/c/openstack/governance/+/677827 >> ** https://review.opendev.org/c/openstack/governance/+/677748 >> ** https://review.opendev.org/c/openstack/governance/+/677747 >> ** https://review.opendev.org/c/openstack/governance/+/677746 >> >> -gmann >> >> >> Finally we decided that now, after Zed release, when we will go all round through alphabet it is very good time to change this policy and use only numeric version with "year"."release in the year". It is proposed in [2]. >> This is also good timing for such change because in the same release we are going to start our "Tick Tock" release cadence which means that every Tick release will be release with .1 (like 2023.1, 2024.1, etc.) and every Tock release will be one with .2 (2023.2, 2024.2, etc.). >> >> [1] https://etherpad.opendev.org/p/tc-zed-ptg#L265 >> [2] https://review.opendev.org/c/openstack/governance/+/839897 >> >> -- >> Slawek Kaplonski >> Principal Software Engineer >> Red Hat >> >> >> >> >> >> >> Foundation mailing list >> Foundation at lists.openinfra.dev >> http://lists.openinfra.dev/cgi-bin/mailman/listinfo/foundation >> -- >> Kurt Garloff , Cologne, Germany >> (Sent from Mobile with K9.) 
> From gmann at ghanshyammann.com Fri Apr 29 18:33:45 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 29 Apr 2022 13:33:45 -0500 Subject: [OpenInfra Foundation] [all][tc] Change OpenStack release naming policy proposal In-Reply-To: <1E9F233D-46CE-4369-970D-0532B0EE981D@openinfra.dev> References: <2175937.irdbgypaU6@p1> <18075db6e21.118543125401989.8208677652226278753@ghanshyammann.com> <088599E0-4F95-4467-900A-5041704BD031@openinfra.dev> <18076068c94.f090875e405065.8322699810706317962@ghanshyammann.com> <52AED2E2-E10F-463B-A06D-D219B246D980@garloff.de> <18076645fad.c8ca1e22409208.778705174696178878@ghanshyammann.com> <1E9F233D-46CE-4369-970D-0532B0EE981D@openinfra.dev> Message-ID: <1807699b1c0.e62d87dc410580.2737624458738109025@ghanshyammann.com> ---- On Fri, 29 Apr 2022 13:18:33 -0500 Allison Price wrote ---- > > > > On Apr 29, 2022, at 12:35 PM, Ghanshyam Mann wrote: > > > > ---- On Fri, 29 Apr 2022 12:03:47 -0500 Kurt Garloff wrote ---- > >> Hi, > >> > >> I see a tendency in western societies that no decisions are ever taken out of fear someone could be offended or even litigate. > >> While it's very reasonable to be careful to avoid offenses, we must not take it to the extreme and allow it to paralyze us by requiring no one ever objects, IMVHO. 100% happiness is too high a bar. > >> > >> I would hope that the offer from the foundation staff to help with the name vetting process and take off load from the TC is helpful here. > > > > Definitely, that is an option and we will be happy to do that. > > I'll put time on the next TC meeting agenda to join and discuss this as an option; it's something we are happy to resource. Thanks Allison. I would say let's add it to the 12th May meeting; meanwhile we will get more feedback from the ML discussion. Let me know if that works for you. -gmann > > > > >> > >> Replacing well rememberable names with sterile numbers is definitely a step backwards in perception. > > > > Well, it depends.
For marketting yes names are great to remember and publish but from tehnical perspective especially > > while upgrade they are hard to know which year these releses were released. And when we will have tick-tock release > > model[1] number are more useful to know by operators what which one is tick release and which one is tock. With name > > only it is not best way to find. > > > > So both have its pros and cons. > > > > [1] https://governance.openstack.org/tc/resolutions/20220210-release-cadence-adjustment.html > > > > -gmann > > > >> > >> Just my 0.02?. > >> -- Kurt > >> > >> Am 29. April 2022 17:53:02 MESZ schrieb Ghanshyam Mann : ---- On Fri, 29 Apr 2022 10:15:27 -0500 Allison Price wrote ---- > >> Hi Slawek and Gmann, > >> > >> Thank you for raising the points about the OpenStack release naming process. > >> > >> On Apr 29, 2022, at 10:05 AM, Ghanshyam Mann wrote: > >> > >> ---- On Fri, 29 Apr 2022 09:55:52 -0500 Slawek Kaplonski wrote ---- > >> Hi, > >> > >> During the last PTG in April 2022 in the TC meeting we were discussing our release naming policy [1]. > >> It seems that choosing appropriate name for every releases is very hard and time consuming. There is many factors which needs to be taken into consideration there like legal but also meaning of the chosen name in many different languages. > >> > >> Adding more detail on why TC is thinking to drop the release name and keep only number (slawek > >> will add these in review also as histiry to know) > >> > >> Why we dropped the release name: > >> * Problem with release name: > >> > >> ** We are a wider community with many international communities, developers, and cultures and choosing a perfect name satisfying all of them is not possible. > >> ** We as individuals also have some problems with a few names which might be due to emotions, political, or historical. And filtering them out is not possible. 
> >> ** Name after election need trademark checks from the foundation as a final step and there is always a chance that winning names are filtered out so the electorate might not be happy with that. So the process is also not perfect. > >> ** . > >> > >> From a release marketing perspective, I have significant concerns going down this route. I think that not only do the names reflect whimsical aspects of the community personality, it?s also a huge marketing tool in terms of getting traction with OpenStack coverage. This helps us debunk some of the myths out there around the OpenStack community?s relevance as well as convey the innovation happening in the features that are delivered upstream. > >> > >> I don?t want to minimize the time consuming nature of the process as well as the cultural sensitivities, so I would like to better understand the steps here and what some of the concerns are in moving forward. From a Foundation perspective, we are happy to help take the processes off the TC as part of other release marketing activities that we do. > >> > >> I?d be happy to join a TC meeting or discuss this more at the Summit in Berlin, but I would like to discuss alternate ways to maintain the naming process we have in place if possible before moving forward. > >> > >> Thanks Alisson for joining the discussion. As it involve the foundation members/marketting team, I thought of keeping > >> the foundation ML in loop but forgot (doing now). > >> > >> I understand and we touch based the marketting perspective in PTG but not in detail. > >> > >> Main issue here is not just only the process but more of the cutural. None of the name is going to be > >> accepted by everyone in community and that is why we face the objection on name almost since > >> ussuri cycle. As you can see the reference I mentioned in ' * Tried to solve it in many ways' section, we > >> tried to solve the process in many ways but none of those are accepted as none of it is perfect. 
> >> > >> One idea to keep markettitng things same is that we can keep some tag line with few words to > >> make release attractive and interesting. For example: "OpenStack 2023.1 - 'Secure & Stable' ". Does > >> that sovle the marketting need? > >> > >> We wil be happy to discuss if there is new idea which can solve the mentioned issues. Feel free to > >> proposa the idea in TC and I can schedule a call for that. > >> > >> -gmann > >> > >> > >> Allison > >> > >> > >> * Tried to solve it in many ways: > >> > >> There were a lot of proposals to solve the above issues but we could not get any agreement on either of these and live with all these issues until the Zed cycle. > >> > >> ** https://review.opendev.org/c/openstack/governance/+/677749 > >> ** https://review.opendev.org/c/openstack/governance/+/678046 > >> ** https://review.opendev.org/c/openstack/governance/+/677745 > >> ** https://review.opendev.org/c/openstack/governance/+/684688 > >> ** https://review.opendev.org/c/openstack/governance/+/675788 > >> ** https://review.opendev.org/c/openstack/governance/+/687764 > >> ** https://review.opendev.org/c/openstack/governance/+/677827 > >> ** https://review.opendev.org/c/openstack/governance/+/677748 > >> ** https://review.opendev.org/c/openstack/governance/+/677747 > >> ** https://review.opendev.org/c/openstack/governance/+/677746 > >> > >> -gmann > >> > >> > >> Finally we decided that now, after Zed release, when we will go all round through alphabet it is very good time to change this policy and use only numeric version with "year"."release in the year". It is proposed in [2]. > >> This is also good timing for such change because in the same release we are going to start our "Tick Tock" release cadence which means that every Tick release will be release with .1 (like 2023.1, 2024.1, etc.) and every Tock release will be one with .2 (2023.2, 2024.2, etc.). 
> >> > >> [1] https://etherpad.opendev.org/p/tc-zed-ptg#L265 > >> [2] https://review.opendev.org/c/openstack/governance/+/839897 > >> > >> -- > >> Slawek Kaplonski > >> Principal Software Engineer > >> Red Hat > >> > >> > >> > >> > >> > >> Foundation mailing list > >> Foundation at lists.openinfra.dev > >> http://lists.openinfra.dev/cgi-bin/mailman/listinfo/foundation > >> -- > >> Kurt Garloff , Cologne, Germany > >> (Sent from Mobile with K9.) > > > > From gmann at ghanshyammann.com Fri Apr 29 18:33:58 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 29 Apr 2022 13:33:58 -0500 Subject: [all][tc] What's happening in Technical Committee: summary April 29th, 22: Reading: 5 min Message-ID: <1807699e526.e5c63883410583.2145716370079082821@ghanshyammann.com> Hello Everyone, Here is this week's summary of the Technical Committee activities. 1. TC Meetings: ============ * We had this week's meeting on April 28. Most of the meeting discussions are summarized in this email. Meeting full logs are available @https://meetings.opendev.org/meetings/tc/2022/tc.2022-04-21-15.00.html * Next TC weekly meeting will be on May 5 Thursday at 15:00 UTC; feel free to add topics to the agenda[1] by May 4. We will cover the CFN and tick-tock release notes topics there and, if time permits, any other topics. 2. What we completed this week: ========================= * Resolution to drop the lower constraints maintenance[2] * Remove TC Liaisons framework[3] * Retired openstack-helm-docs[4] 3. Activities In progress: ================== TC Tracker for Zed cycle ------------------------------ * The Zed tracker etherpad includes the TC working items[5]; we have started many of the items. Open Reviews ----------------- * Seven open reviews for ongoing activities[6]. Change OpenStack release naming policy proposal ----------------------------------------------------------- We discussed it at the PTG; the TC decided to drop the name from the release naming process and keep only the numbers.
Slaweq proposed it on gerrit[7] and on the ML[8]. Discussion is in progress, as there are some objections from the foundation marketing team. Migration from old ELK service to new Dashboard ----------------------------------------------------------- You might have seen Daniel's email about the new dashboard[9] and the login information; we encourage community members to use it and provide feedback if any. Daniel is also investigating the e-r instance. Consistent and Secure Default RBAC ------------------------------------------- We discussed mostly heat in Tuesday's call but could not reach any consensus. Notes are in the etherpad[10], and we will continue the discussion on Tuesday, 3rd May at 14:00 UTC [11]. FIPs community-wide goal ------------------------------- Ade has proposed the new milestones for this goal work; please review them and give feedback if any[12]. 2021 User Survey TC Question Analysis ----------------------------------------------- No update on this. The survey summary is up for review[13]. Feel free to check it and provide feedback. Zed cycle Leaderless projects ---------------------------------- No updates on this. Only the Adjutant project is leaderless/maintainer-less. We will check Adjutant's situation again on the ML and hope Braden will be ready with their company-side permission[14]. Fixing Zuul config error ---------------------------- Requesting projects with Zuul config errors to look into and fix them, which should not take much time[15]. Project updates ------------------- * Add the cinder-three-par charm to Openstack charms[16] 4. How to contact the TC: ==================== If you would like to discuss or give feedback to the TC, you can reach out to us in multiple ways: 1. Email: you can send an email with the tag [tc] on the openstack-discuss ML[17]. 2. Weekly meeting: The Technical Committee conducts a weekly meeting every Thursday at 15 UTC [18] 3. Ping us using the 'tc-members' nickname on the #openstack-tc IRC channel.
[1] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting [2] https://review.opendev.org/c/openstack/governance/+/838004 [3] https://review.opendev.org/c/openstack/governance/+/837891 [4] https://review.opendev.org/c/openstack/governance/+/839100 [5] https://etherpad.opendev.org/p/tc-zed-tracker [6] https://review.opendev.org/q/projects:openstack/governance+status:open [7] https://review.opendev.org/c/openstack/governance/+/839897 [8] http://lists.openstack.org/pipermail/openstack-discuss/2022-April/028354.html [9] http://lists.openstack.org/pipermail/openstack-discuss/2022-April/028346.html [10] https://etherpad.opendev.org/p/rbac-zed-ptg#L103 [11] https://wiki.openstack.org/wiki/Consistent_and_Secure_Default_Policies_Popup_Team#Meeting [12] https://review.opendev.org/c/openstack/governance/+/838601 [13] https://review.opendev.org/c/openstack/governance/+/836888 [14] http://lists.openstack.org/pipermail/openstack-discuss/2022-March/027626.html [15] https://etherpad.opendev.org/p/zuul-config-error-openstack [16] https://review.opendev.org/c/openstack/governance/+/837781 [17] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [18] http://eavesdrop.openstack.org/#Technical_Committee_Meeting -gmann From fungi at yuggoth.org Fri Apr 29 18:46:51 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 29 Apr 2022 18:46:51 +0000 Subject: [all][tc] Release cadence terminology (Was: Change OpenStack release naming...) In-Reply-To: <3ef1f910-5f87-fad2-9bee-79df2abced07@linaro.org> References: <2175937.irdbgypaU6@p1> <3ef1f910-5f87-fad2-9bee-79df2abced07@linaro.org> Message-ID: <20220429184651.q4crwkkam74qjheu@yuggoth.org> On 2022-04-29 17:43:25 +0200 (+0200), Marcin Juszkiewicz wrote: [...] > Also suggestion: drop tick/tock from naming documentation please. > I never remember which is major and which is minor. This is a good point. 
From an internationalization perspective, the choice of wording could
be especially confusing as it's an analogy for English onomatopoeia
related to mechanical clocks. I doubt it would translate well (if at
all).

In retrospect, adjusting the terminology to make 2023.1 the "primary"
release of 2023, with 2023.2 as the "secondary" release of that year,
makes it a bit more clear as to their relationship to one another. We
can say that consumers are able to upgrade directly from one primary
release to another, skipping secondary releases.
--
Jeremy Stanley

From dms at danplanet.com  Fri Apr 29 19:07:15 2022
From: dms at danplanet.com (Dan Smith)
Date: Fri, 29 Apr 2022 12:07:15 -0700
Subject: [all][tc] Release cadence terminology
In-Reply-To: <20220429184651.q4crwkkam74qjheu@yuggoth.org> (Jeremy Stanley's message of "Fri, 29 Apr 2022 18:46:51 +0000")
References: <2175937.irdbgypaU6@p1>
 <3ef1f910-5f87-fad2-9bee-79df2abced07@linaro.org>
 <20220429184651.q4crwkkam74qjheu@yuggoth.org>
Message-ID: 

>> Also suggestion: drop tick/tock from naming documentation please.
>> I never remember which is major and which is minor.
>
> This is a good point. From an internationalization perspective, the
> choice of wording could be especially confusing as it's an analogy
> for English onomatopoeia related to mechanical clocks. I doubt it
> would translate well (if at all).

I don't think there's any implication that one or the other is major or
minor. I certainly didn't think the word "tick" meant anything more
major or important than "tock". I chose "tick" as the slow-path one
simply because I think it makes sense to start the cycle on a slow-path
release.
Unless we choose "major and minor" or "long and short" or "stable and
unstable" (the latter of which is already taken and also not correct
anyway), I think people will have to learn which is which regardless.

> In retrospect, adjusting the terminology to make 2023.1 the
> "primary" release of 2023, with 2023.2 as the "secondary" release of
> that year, makes it a bit more clear as to their relationship to one
> another. We can say that consumers are able to upgrade directly from
> one primary release to another, skipping secondary releases.

I also don't think "primary and secondary" are appropriate because they
have other connotations which also don't apply here. We decided that
"the release after Zed" would be the first in this cycle, and the
version of that is fixed based on when it is in the year. I don't think
we should tie the position of that release in the year to the notion of
the slow or fast cycle nature of them. Especially if we decide to change
that cadence to some other pattern in the future.

I chose "tick and tock" simply because there's precedent in the industry
and that's all. I think the terminology we choose is not going to be
fully intuitive and culturally appropriate for everyone on the planet,
much like release names. We've already documented and discussed these as
"tick and tock" and changing now/again will also bring its own
confusion.

See, isn't this naming stuff fun?

--Dan

From gouthampravi at gmail.com  Fri Apr 29 19:11:12 2022
From: gouthampravi at gmail.com (Goutham Pacha Ravi)
Date: Sat, 30 Apr 2022 00:41:12 +0530
Subject: [all][tc] Change OpenStack release naming policy proposal
In-Reply-To: <2175937.irdbgypaU6@p1>
References: <2175937.irdbgypaU6@p1>
Message-ID: 

On Fri, Apr 29, 2022 at 8:36 PM Slawek Kaplonski wrote:
>
> Hi,
>
> During the last PTG in April 2022 in the TC meeting we were discussing our release naming policy [1].
>
> It seems that choosing appropriate name for every releases is very hard and time consuming.
> There is many factors which needs to be taken into consideration there like legal but also meaning of the chosen name in many different languages.
>
> Finally we decided that now, after Zed release, when we will go all round through alphabet it is very good time to change this policy and use only numeric version with "year"."release in the year". It is proposed in [2].
>
> This is also good timing for such change because in the same release we are going to start our "Tick Tock" release cadence which means that every Tick release will be release with .1 (like 2023.1, 2024.1, etc.) and every Tock release will be one with .2 (2023.2, 2024.2, etc.).

Beloved TC,

I'm highly disappointed in this 'decision', and would like for you to
reconsider. I see the reasons you cite, but I feel like we're throwing
the baby out with the bathwater here. Disagreements need not be
feared, why not allow them to be aired publicly? That's a tenet of
this open community. Allow names to be downvoted with reason during
the proposal phase, and they'll organically fall-off from favor.

Release names have always been a bonding factor. I've been happy to
drum up contributor morale with our release names and the
stories/anecdotes behind them. Release naming will not hurt/help the
tick-tock release IMHO. We can append the release number to the name,
and call it a day if you want.

I do believe our current release naming process is a step out of the
TC's perceived charter. There are many technical challenges that the
TC is tackling, and coordinating a vote/slugfest about names isn't as
important as those.

As Allison suggests, we could seek help from the foundation to run the
community voting and vetting for the release naming process - and
expect the same level of transparency as the 4 opens that the
OpenStack community espouses.
>
> [1] https://etherpad.opendev.org/p/tc-zed-ptg#L265
>
> [2] https://review.opendev.org/c/openstack/governance/+/839897
>
> --
> Slawek Kaplonski
> Principal Software Engineer
> Red Hat

From gmann at ghanshyammann.com  Fri Apr 29 19:18:56 2022
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Fri, 29 Apr 2022 14:18:56 -0500
Subject: [all][tc] Release cadence terminology
In-Reply-To: 
References: <2175937.irdbgypaU6@p1>
 <3ef1f910-5f87-fad2-9bee-79df2abced07@linaro.org>
 <20220429184651.q4crwkkam74qjheu@yuggoth.org>
Message-ID: <18076c30d22.ec88c4e8411392.7531315473816634161@ghanshyammann.com>

 ---- On Fri, 29 Apr 2022 14:07:15 -0500 Dan Smith wrote ----
 > >> Also suggestion: drop tick/tock from naming documentation please.
 > >> I never remember which is major and which is minor.
 > >
 > > This is a good point. From an internationalization perspective, the
 > > choice of wording could be especially confusing as it's an analogy
 > > for English onomatopoeia related to mechanical clocks. I doubt it
 > > would translate well (if at all).
 >
 > I don't think there's any implication that one or the other is major or
 > minor. I certainly didn't think the word "tick" meant anything more
 > major or important than "tock". I chose "tick" as the slow-path one
 > simply because I think it makes sense to start the cycle on a slow path
 > release. Unless we choose "major and minor" or "long and short" or
 > "stable and unstable" (the latter of which is already taken and also not
 > correct anyway), I think people will have to learn which is which
 > regardless.
 >
 > > In retrospect, adjusting the terminology to make 2023.1 the
 > > "primary" release of 2023, with 2023.2 as the "secondary" release of
 > > that year, makes it a bit more clear as to their relationship to one
 > > another. We can say that consumers are able to upgrade directly from
 > > one primary release to another, skipping secondary releases.
 >
 > I also don't think "primary and secondary" are appropriate because they
 > have other connotations which also don't apply here. We decided that
 > "the release after Zed" would be the first in this cycle, and the
 > version of that is fixed based on when it is in the year. I don't think
 > we should tie the position of that release in the year to the notion of
 > the slow or fast cycle nature of them. Especially if we decide to change
 > that cadence to some other pattern in the future.

I agree with Dan. We discussed naming them differently, but every
alternative ended up implying that one release (currently tick) is more
stable and the other one less stable; "major/minor" or
"primary/secondary" do the same. Both releases are equally stable, and
there is no difference in feature development or bug fixes (a tick
release will not implement more features than a tock, or so). So we need
to clearly avoid any name which would convey one of them as stable or
less stable, or as "main" and "not main".

 >
 > I chose "tick and tock" simply because there's precedent in the industry
 > and that's all. I think the terminology we choose is not going to be
 > fully intuitive and culturally appropriate for everyone on the planet,
 > much like release names. We've already documented and discussed these as
 > "tick and tock" and changing now/again will also bring its own
 > confusion.

I agree tick-tock might not be the best names/tags, or ones all people
are familiar with, and I could offer a hundred alternatives to
tick-tock, but as we have already documented them and discussed them a
lot at the PTG, let's keep them as they are. Adding new names will
confuse more people than it will clarify.

-gmann

 >
 > See, isn't this naming stuff fun?
 >
 > --Dan

From alex.kavanagh at canonical.com  Fri Apr 29 19:23:55 2022
From: alex.kavanagh at canonical.com (Alex Kavanagh)
Date: Fri, 29 Apr 2022 20:23:55 +0100
Subject: [OpenInfra Foundation] [all][tc] Change OpenStack release naming policy proposal
In-Reply-To: <52AED2E2-E10F-463B-A06D-D219B246D980@garloff.de>
References: <2175937.irdbgypaU6@p1>
 <18075db6e21.118543125401989.8208677652226278753@ghanshyammann.com>
 <088599E0-4F95-4467-900A-5041704BD031@openinfra.dev>
 <18076068c94.f090875e405065.8322699810706317962@ghanshyammann.com>
 <52AED2E2-E10F-463B-A06D-D219B246D980@garloff.de>
Message-ID: 

On Fri, Apr 29, 2022 at 6:24 PM Kurt Garloff wrote:

> Hi,
>
> I see a tendency in western societies that no decisions are ever taken out
> of fear someone could be offended or even litigate.
> While it's very reasonable to be careful to avoid offenses, we must not
> take it to the extreme and allow it to paralyze us by requiring no one ever
> objects, IMVHO. 100% happiness is too high a bar.
>
> I would hope that the offer from the foundation staff to help with the
> name vetting process and take off load from the TC is helpful here.
>
> Replacing well rememberable names with sterile numbers is definitely a
> step backwards in perception.
>

I tend to agree. I, also, can remember most of the names; they provide a
more tangible feel for the project.

Cheers
Alex.

> Just my 0.02€.
> -- Kurt
>
> Am 29. April 2022 17:53:02 MESZ schrieb Ghanshyam Mann <
> gmann at ghanshyammann.com>:
>>
>> ---- On Fri, 29 Apr 2022 10:15:27 -0500 Allison Price wrote ----
>>
>>> Hi Slawek and Gmann,
>>>
>>> Thank you for raising the points about the OpenStack release naming process.
>>>
>>> On Apr 29, 2022, at 10:05 AM, Ghanshyam Mann wrote:
>>>>
>>>> ---- On Fri, 29 Apr 2022 09:55:52 -0500 Slawek Kaplonski wrote ----
>>>>
>>>>> Hi,
>>>>>
>>>>> During the last PTG in April 2022 in the TC meeting we were discussing our release naming policy [1].
>>>>> It seems that choosing appropriate name for every releases is very hard and time consuming. There is many factors which needs to be taken into consideration there like legal but also meaning of the chosen name in many different languages.
>>>>>
>>>>
>>>> Adding more detail on why TC is thinking to drop the release name and keep only number (slawek
>>>> will add these in review also as history to know)
>>>>
>>>> Why we dropped the release name:
>>>> ------------------------------
>>>> * Problem with release name:
>>>>
>>>> ** We are a wider community with many international communities, developers, and cultures and choosing a perfect name satisfying all of them is not possible.
>>>> ** We as individuals also have some problems with a few names which might be due to emotions, political, or historical. And filtering them out is not possible.
>>>> ** Name after election need trademark checks from the foundation as a final step and there is always a chance that winning names are filtered out so the electorate might not be happy with that. So the process is also not perfect.
>>>> ** .
>>>>
>>>
>>> From a release marketing perspective, I have significant concerns going down this route. I think that not only do the names reflect whimsical aspects of the community personality, it's also a huge marketing tool in terms of getting traction with OpenStack coverage. This helps us debunk some of the myths out there around the OpenStack community's relevance as well as convey the innovation happening in the features that are delivered upstream.
>>>
>>> I don't want to minimize the time consuming nature of the process as well as the cultural sensitivities, so I would like to better understand the steps here and what some of the concerns are in moving forward. From a Foundation perspective, we are happy to help take the processes off the TC as part of other release marketing activities that we do.
>>>
>>> I'd be happy to join a TC meeting or discuss this more at the Summit in Berlin, but I would like to discuss alternate ways to maintain the naming process we have in place if possible before moving forward.
>>>
>>
>> Thanks Allison for joining the discussion. As it involves the foundation members/marketing team, I thought of keeping
>> the foundation ML in the loop but forgot (doing now).
>>
>> I understand, and we touched base on the marketing perspective in the PTG, but not in detail.
>>
>> The main issue here is not just the process but more the cultural one: no name is going to be
>> accepted by everyone in the community, and that is why we have faced objections to names almost every cycle since
>> Ussuri. As you can see from the references I mentioned in the 'Tried to solve it in many ways' section, we
>> tried to fix the process in many ways, but none of those was accepted, as none of them is perfect.
>>
>> One idea to keep the marketing side the same is that we can keep a tag line of a few words to
>> make the release attractive and interesting. For example: "OpenStack 2023.1 - 'Secure & Stable'". Does
>> that solve the marketing need?
>>
>> We will be happy to discuss if there is a new idea which can solve the mentioned issues. Feel free to
>> propose the idea to the TC and I can schedule a call for that.
>>
>> -gmann
>>
>>
>>> Allison
>>>
>>>
>>>> * Tried to solve it in many ways:
>>>>
>>>> There were a lot of proposals to solve the above issues, but we could not get agreement on any of them, and we have lived with all these issues until the Zed cycle.
>>>>
>>>> ** https://review.opendev.org/c/openstack/governance/+/677749
>>>> ** https://review.opendev.org/c/openstack/governance/+/678046
>>>> ** https://review.opendev.org/c/openstack/governance/+/677745
>>>> ** https://review.opendev.org/c/openstack/governance/+/684688
>>>> ** https://review.opendev.org/c/openstack/governance/+/675788
>>>> ** https://review.opendev.org/c/openstack/governance/+/687764
>>>> ** https://review.opendev.org/c/openstack/governance/+/677827
>>>> ** https://review.opendev.org/c/openstack/governance/+/677748
>>>> ** https://review.opendev.org/c/openstack/governance/+/677747
>>>> ** https://review.opendev.org/c/openstack/governance/+/677746
>>>>
>>>> -gmann
>>>>
>>>>
>>>>> Finally we decided that now, after Zed release, when we will go all round through alphabet it is very good time to change this policy and use only numeric version with "year"."release in the year". It is proposed in [2].
>>>>> This is also good timing for such change because in the same release we are going to start our "Tick Tock" release cadence which means that every Tick release will be release with .1 (like 2023.1, 2024.1, etc.) and every Tock release will be one with .2 (2023.2, 2024.2, etc.).
>>>>>
>>>>> [1] https://etherpad.opendev.org/p/tc-zed-ptg#L265
>>>>> [2] https://review.opendev.org/c/openstack/governance/+/839897
>>>>>
>>>>> --
>>>>> Slawek Kaplonski
>>>>> Principal Software Engineer
>>>>> Red Hat
>>>>>
>>>>>
>>>>
>>>
>>>
>>> ------------------------------
URL: From yipikai7 at gmail.com Fri Apr 29 19:39:18 2022 From: yipikai7 at gmail.com (Cedric Lemarchand) Date: Fri, 29 Apr 2022 21:39:18 +0200 Subject: [all][tc] Change OpenStack release naming policy proposal In-Reply-To: <2175937.irdbgypaU6@p1> References: <2175937.irdbgypaU6@p1> Message-ID: Maybe a best of breed approach would be to have both, like Ubuntu releases. It does not solve the time consuming and others issues regarding naming choices but I see it as a good consensual solution at this time. My 2 cents Cheers On Fri, Apr 29, 2022, 16:59 Slawek Kaplonski wrote: > Hi, > > During the last PTG in April 2022 in the TC meeting we were discussing our > release naming policy [1]. > > It seems that choosing appropriate name for every releases is very hard > and time consuming. There is many factors which needs to be taken into > consideration there like legal but also meaning of the chosen name in many > different languages. > > Finally we decided that now, after Zed release, when we will go all round > through alphabet it is very good time to change this policy and use only > numeric version with "year"."release in the year". It is proposed in [2]. > > This is also good timing for such change because in the same release we > are going to start our "Tick Tock" release cadence which means that every > Tick release will be release with .1 (like 2023.1, 2024.1, etc.) and every > Tock release will be one with .2 (2023.2, 2024.2, etc.). > > [1] https://etherpad.opendev.org/p/tc-zed-ptg#L265 > > [2] https://review.opendev.org/c/openstack/governance/+/839897 > > -- > > Slawek Kaplonski > > Principal Software Engineer > > Red Hat > -------------- next part -------------- An HTML attachment was scrubbed... 
From dms at danplanet.com  Fri Apr 29 19:44:19 2022
From: dms at danplanet.com (Dan Smith)
Date: Fri, 29 Apr 2022 12:44:19 -0700
Subject: [all][tc] Change OpenStack release naming policy proposal
In-Reply-To: (Goutham Pacha Ravi's message of "Sat, 30 Apr 2022 00:41:12 +0530")
References: <2175937.irdbgypaU6@p1>
Message-ID: 

> I'm highly disappointed in this 'decision', and would like for you to
> reconsider. I see the reasons you cite, but I feel like we're throwing
> the baby out with the bathwater here. Disagreements need not be
> feared, why not allow them to be aired publicly? That's a tenet of
> this open community. Allow names to be downvoted with reason during
> the proposal phase, and they'll organically fall-off from favor.

We had this problem when the entire community chose release names too.
Objections to nominations are solicited, and need not be secret. Even
this cycle there were publicly-aired complaints about the release name
on this very list, despite no objections being raised during the
nomination period. I seriously thought that "Zed" would be the least
possibly offensive name, given that it is literally the name of the
letter in much of the world. There were *two* completely separate and
serious objections to the name from multiple people each.

> Release names have always been a bonding factor. I've been happy to
> drum up contributor morale with our release names and the
> stories/anecdotes behind them.

Agreed that they were in the past, and that they should be. It doesn't
feel that way anymore.

> I do believe our current release naming process is a step out of the
> TC's perceived charter. There are many technical challenges that the
> TC is tackling, and coordinating a vote/slugfest about names isn't as
> important as those.
> As Allison suggests, we could seek help from the foundation to run the
> community voting and vetting for the release naming process - and
> expect the same level of transparency as the 4 opens that the
> OpenStack community espouses.

I'm totally fine with the foundation taking it over completely if that's
what they want to do. My reasoning for wanting to do away with names is
primarily that it has become more labor-intensive than beneficial for
the TC, in my opinion. I have other lesser reasons too, but they're not
as important.

I'm sure everyone dutifully clicked on all of the links gmann provided,
but let me just make sure you see this one:

https://review.opendev.org/c/openstack/governance/+/677747

"Let the foundation do it" didn't even make it to the final round of
consideration the last time the process was considered :)

--Dan

From gmann at ghanshyammann.com  Fri Apr 29 19:47:31 2022
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Fri, 29 Apr 2022 14:47:31 -0500
Subject: [all][tc] Change OpenStack release naming policy proposal
In-Reply-To: 
References: <2175937.irdbgypaU6@p1>
Message-ID: <18076dd3a30.116f2ef97411816.4718977843211132330@ghanshyammann.com>

 ---- On Fri, 29 Apr 2022 14:11:12 -0500 Goutham Pacha Ravi wrote ----
 > On Fri, Apr 29, 2022 at 8:36 PM Slawek Kaplonski wrote:
 > >
 > > Hi,
 > >
 > > During the last PTG in April 2022 in the TC meeting we were discussing our release naming policy [1].
 > >
 > > It seems that choosing appropriate name for every releases is very hard and time consuming. There is many factors which needs to be taken into consideration there like legal but also meaning of the chosen name in many different languages.
 > >
 > > Finally we decided that now, after Zed release, when we will go all round through alphabet it is very good time to change this policy and use only numeric version with "year"."release in the year". It is proposed in [2].
 > > This is also good timing for such change because in the same release we are going to start our "Tick Tock" release cadence which means that every Tick release will be release with .1 (like 2023.1, 2024.1, etc.) and every Tock release will be one with .2 (2023.2, 2024.2, etc.).
 >
 > Beloved TC,
 >
 > I'm highly disappointed in this 'decision', and would like for you to
 > reconsider. I see the reasons you cite, but I feel like we're throwing
 > the baby out with the bathwater here. Disagreements need not be
 > feared, why not allow them to be aired publicly? That's a tenet of
 > this open community. Allow names to be downvoted with reason during
 > the proposal phase, and they'll organically fall-off from favor.
 >
 > Release names have always been a bonding factor. I've been happy to
 > drum up contributor morale with our release names and the
 > stories/anecdotes behind them. Release naming will not hurt/help the
 > tick-tock release IMHO. We can append the release number to the name,
 > and call it a day if you want.

I agree that disagreement alone should not stop us from doing things.
But here we need to understand what type of disagreement we have, and
about what. Most of the disagreements were cultural or historical, and
people raised them emotionally. And I, personally as well as as a TC or
community member, do not feel good ignoring them or giving them any
reasoning for not listening (because I do not have any reasoning against
these cultural/historical disagreements).

The Zed cycle was one good example of such a thing, when the war
connection was brought up in the TC channel[1] with a request to change
the Zed name. I will be happy to know what the best solution for this
is:

1. Change the Zed name: it involves a lot of technical work and
communication too. If yes, then let's do this now.
2. Do not listen to these emotional requests to change the name: we did
this in the end, and I do not feel OK with that. At least, I do not want
to ignore such requests in the future.
Those are the main reasons we in the TC decided to remove the name: names are
culturally and emotionally tied. That is the main reason for dropping
them, not any technical or work-wise issue.

[1] https://meetings.opendev.org/irclogs/%23openstack-tc/%23openstack-tc.2022-03-08.log.html#t2022-03-08T14:35:26

-gmann

 > I do believe our current release naming process is a step out of the
 > TC's perceived charter. There are many technical challenges that the
 > TC is tackling, and coordinating a vote/slugfest about names isn't as
 > important as those.
 > As Allison suggests, we could seek help from the foundation to run the
 > community voting and vetting for the release naming process - and
 > expect the same level of transparency as the 4 opens that the
 > OpenStack community espouses.

Yes, we will of course be open to that, but at the same time we will be
waiting for the foundation's proposal to solve these issues,
irrespective of who is doing the name selection. So let's wait for that.

-gmann

 >
 > >
 > > [1] https://etherpad.opendev.org/p/tc-zed-ptg#L265
 > > [2] https://review.opendev.org/c/openstack/governance/+/839897
 > >
 > > --
 > > Slawek Kaplonski
 > > Principal Software Engineer
 > > Red Hat

From fungi at yuggoth.org  Fri Apr 29 21:39:13 2022
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Fri, 29 Apr 2022 21:39:13 +0000
Subject: [all][tc] Change OpenStack release naming policy proposal
In-Reply-To: 
References: <2175937.irdbgypaU6@p1>
Message-ID: <20220429213913.fgobgv4ti5f2gy4f@yuggoth.org>

On 2022-04-29 21:39:18 +0200 (+0200), Cedric Lemarchand wrote:
> Maybe a best of breed approach would be to have both, like Ubuntu
> releases. It does not solve the time consuming and others issues
> regarding naming choices but I see it as a good consensual
> solution at this time.

That's essentially the status quo.
The currently approved (and recently revised) process says we have
both a release name and year-based version number:

  https://governance.openstack.org/tc/reference/release-naming.html

The proposal under discussion now is to drop the name, so only the
number remains:

  https://review.opendev.org/839897

--
Jeremy Stanley

From ces.eduardo98 at gmail.com  Fri Apr 29 23:34:07 2022
From: ces.eduardo98 at gmail.com (Carlos Silva)
Date: Fri, 29 Apr 2022 20:34:07 -0300
Subject: [manila] Zed cycle bug squash
Message-ID: 

Greetings Zorillas and interested stackers!

As mentioned in the previous weekly meetings, we will soon be meeting for
the first bug squash of the Zed release! The event will be held from May
2nd to May 6th, providing an extended contribution window.

May 2nd 15:00 - 16:00 UTC - Kick off
May 5th 15:00 - 16:00 UTC - Mid term checkpoint (we won't have our regular
Manila meeting on this day)
May 6th 15:00 - 15:30 UTC - Wrap up

We will use a meetpad for these meetings [1].

The main idea of this event is to go over the list of stale bugs and act
on them: either seeing if they are incomplete or invalid at this point, or
working on them. The stale bugs list will be available on [2].

[1] https://meetpad.opendev.org/ManilaZed1Bugsquash
[2] https://ethercalc.openstack.org/1nesczgjufb9

See you next week!

Thank you,
carloss
From gmann at ghanshyammann.com  Sat Apr 30 00:03:04 2022
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Fri, 29 Apr 2022 19:03:04 -0500
Subject: [all][tc] Change OpenStack release naming policy proposal
In-Reply-To: 
References: <2175937.irdbgypaU6@p1>
Message-ID: <18077c730a8.d056e6ed414774.5707596448288071941@ghanshyammann.com>

 ---- On Fri, 29 Apr 2022 14:44:19 -0500 Dan Smith wrote ----
 > > I'm highly disappointed in this 'decision', and would like for you to
 > > reconsider. I see the reasons you cite, but I feel like we're throwing
 > > the baby out with the bathwater here. Disagreements need not be
 > > feared, why not allow them to be aired publicly? That's a tenet of
 > > this open community. Allow names to be downvoted with reason during
 > > the proposal phase, and they'll organically fall-off from favor.
 [...]
 >
 > I'm totally fine with the foundation taking it over completely if that's
 > what they want to do. My reasoning for wanting to do away with names is
 > primarily that it has become more labor-intensive than beneficial for
 > the TC, in my opinion. I have other lesser reasons too, but they're not
 > as important.
 >
 > I'm sure everyone dutifully clicked on all of the links gmann provided,
 > but let me just make sure you see this one:
 >
 > https://review.opendev.org/c/openstack/governance/+/677747
 >
 > "Let the foundation do it" didn't even make it to the final round of
 > consideration the last time the process was considered :)
If you think TC doing the process or Community members doing it (before couple of cycle, it was community member) is the problem and foundation doing it can solve this issue, I will be more happy to know the what all new ways foundation will try to solve these problem. Because key here is solve the problem or drop the things which create the problem instead of just shifting the problem from one place to other. -gmann > > --Dan > > From m73hdi at gmail.com Sat Apr 30 20:00:31 2022 From: m73hdi at gmail.com (mahdi n) Date: Sun, 1 May 2022 00:30:31 +0430 Subject: question -dashboard horizon Message-ID: I login-ed in horizon but in page project http://ip:8000/project show error that : You are not authorized to access this page please login screenshot how to solve this? -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: openstack.jpg Type: image/jpeg Size: 37524 bytes Desc: not available URL: