From nguyenhuukhoinw at gmail.com Thu Dec 1 07:52:29 2022 From: nguyenhuukhoinw at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gSOG7r3UgS2jDtGk=?=) Date: Thu, 1 Dec 2022 14:52:29 +0700 Subject: [Magnum] ls /etc/cni/net.d/ is emty In-Reply-To: References: Message-ID: I confirm with you that is ok with Yoga. Nguyen Huu Khoi On Tue, Nov 29, 2022 at 6:48 PM Nguy?n H?u Kh?i wrote: > I run with the default config from magnum then it is ok. I just want to > new containerd version. I will tell you if I resolved it. > Nguyen Huu Khoi > > > On Tue, Nov 29, 2022 at 4:31 PM Jake Yip wrote: > >> Hi, >> >> Is it possible to get to Yoga, and try FCOS35 and k8s v1.23? >> >> We are running that in Prod and it works well. >> >> If you have to use Xena, maybe try without containerd? >> >> Regards, >> Jake >> >> On 26/11/2022 1:56 am, Nguy?n H?u Kh?i wrote: >> > Hello guys. >> > I use Magnum on Xena and I custom k8s cluster by labels. But My cluster >> > is not ready and there is nothing in /etc/cni/net.d/ and my cluster >> said: >> > >> > container runtime network not ready: NetworkReady=false >> > reason:NetworkPluginNotReady message:Network plugin returns error: cni >> > plugin not initialized >> > >> > And this is my labels >> > >> > >> kube_tag=v1.21.8-rancher1,container_runtime=containerd,containerd_version=1.6.10,containerd_tarball_sha256=507f47716d7b932e58aa1dc7e2b3f2b8779ee9a2988aa46ad58e09e2e47063d8,calico_tag=v3.21.2,hyperkube_prefix= >> docker.io/rancher/ >> > >> > Note: I use Fedora Core OS 31 for images. >> > >> > Thank you. >> > >> > >> > Nguyen Huu Khoi >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From elod.illes at est.tech Thu Dec 1 18:04:48 2022 From: elod.illes at est.tech (=?utf-8?B?RWzDtWQgSWxsw6lz?=) Date: Thu, 1 Dec 2022 18:04:48 +0000 Subject: [ironic][release] Bugfix branch status and cleanup w/r/t zuul-config-errors In-Reply-To: References: Message-ID: Hi, TL;DR: for a quick solution I suggest to add the ACL [1] for the ironic cores, something like this: [access "refs/heads/bugfix/*"] delete = = group ironic-core The more lengthy answer: as far as I know, bugfix branches were allowed based on ironic team's request, but with the conditions, that * it is out of Release Management scope * no official releases will be made out of these branches So basically it's completely the Ironic team's responsibility. (More senior @Release Managers: correct me if i am wrong) Nevertheless, time to time, there seem to be a need for 'EOL'ing / deleting branches that are not in the releases repo's deliverables/* directory (for example: projects opened a stable branch manually; no tags / releases were administered; branches were reopened accidentally; given project was not part of the official series release; etc). These needs to be handled out of the releases repo 'deliverables' directory, thus needs different handling & tools. And this task is not really coupled with release management, and also note, that the release management team is a very small team nowadays. Anyway, as a general stable maintainer core, I'm also thinking about how to solve this issue (old, obsolete branch deletion) in a safe way, though I'm not there yet to have any concrete solution. Ideas are welcome ? . Until a solution is not found, the quickest solution is I think is to add the above ACL extension to the ironic.config [1]. Cheers, El?d Ill?s irc: elodilles @ #openstack-release #openstack-stable [1] https://opendev.org/openstack/project-config/src/commit/e92af53c10a811f4370cdae7436f0a5354683d7c/gerrit/acls/openstack/ironic.config#L11 ________________________________ From: Jay Faulkner Sent: Tuesday, November 8, 2022 11:33 PM To: OpenStack Discuss Subject: Re: [ironic][release] Bugfix branch status and cleanup w/r/t zuul-config-errors Does anyone from the Releases team want to chime in on the best way to execute this kind of change? -JayF On Wed, Nov 2, 2022 at 7:02 AM Jay Faulkner > wrote: On Wed, Nov 2, 2022 at 3:04 AM Dmitry Tantsur > wrote: Hi Jay, On Tue, Nov 1, 2022 at 8:17 PM Jay Faulkner > wrote: Hey all, I've been looking into the various zuul config errors showing up for Ironic-program branches. Almost all of our old bugfix branches are in the list. Additionally, not properly retiring the bugfix branches leads to an ever-growing list of branches which makes it a lot more difficult, for contributors and operators alike, to tell which ones are currently supported. I'd like to see the errors. We update Zuul configuration manually for each bugfix branch, mapping appropriate branches for other projects (devstack, nova, etc). It's possible that we always overlook a few jobs, which causes Zuul to be upset (but quietly upset, so we don't notice). The errors show up in https://zuul.opendev.org/t/openstack/config-errors -- although they seem to be broken this morning. Most of them are older bugfix branches, ones that are out of support, that have the `Queue: Ironic` param that's no longer supported. I am not in favor of anyone going to dead bugfix branches and fixing CI; instead we should retire the ones out of use. I've put together a document describing the situation as it is now, and my proposal: https://etherpad.opendev.org/p/IronicBugfixBranchCleanup Going with the "I would like to retire" would cause us so much trouble that we'll have to urgently create a downstream mirror of them. Once we do this, using upstream bugfix branches at all will be questionable. Especially bugfix/19.0 (and corresponding IPA/inspector branches) is used in a very actively maintained release. Then we won't; but we do need to think about what timeline we can talk about upstream for getting a cadence for getting these retired out, just like we have a cadence for getting them cut every two months. I'll revise the list and remove the "I would like to retire" section (move it to keep-em-up). Essentially, I think we need to: - identify bugfix branches to cleanup (I've done this in the above etherpad, but some of the ) - clean them up (the next step) - update Ironic policy to set a regular cadence for when to retire bugfix branches, and encode the process for doing so This means there are two overall questions to answer in this email: 1) Mechanically, what's the process for doing this? I don't believe the existing release tooling will be useful for this, but I'm not 100% sure. I've pulled (in the above etherpad and a local spreadsheet) the last SHA for each branch; so we should be able to EOL these branches similarly to how we EOL stable branches; except manually instead of with tooling. Who is going to do this work? (I'd prefer releases team continue to hold the keys to do this; but I understand if you don't want to take on this manual work). EOL tags will be created by the release team, yes. I don't think we can get the keys without going "independent". It's a gerrit ACL you can enable to give other people access to tags; but like I said, I don't want that access anyway :). 2) What's the pattern for Ironic to adopt regarding these branches? We just need to write down the expected lifecycle and enforce it -- so we prevent being this deep into "branch debt" in the future. With my vendor's (red) hat on, I'd prefer to have a dual approach: the newest branches are supported by the community (i.e. us all), the oldest - by vendors who need them (EOLed if nobody volunteers). I think you already have a list of branches that OCP uses? Feel free to point Riccardo, Iury or myself at any issues with them. That's not really an option IMO. These branches exist in the upstream community, and are seen by upstream contributors and operators. If they're going to live here; they need to have some reasonable documentation about what folks should expect out of them and efforts being put towards them. Even if the documentation is "bugfix/1.2 is maintained as long as Product A 1.2 is maintained", that's better than leaving the community guessing about what these are used for, and why some are more-supported than others. -Jay Dmitry What do folks think? - Jay Faulkner -- Red Hat GmbH, Registered seat: Werner von Siemens Ring 12, D-85630 Grasbrunn, Germany Commercial register: Amtsgericht Muenchen/Munich, HRB 153243, Managing Directors: Ryan Barnhart, Charles Cachera, Michael O'Neill, Amy Ross -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu Dec 1 18:31:29 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 1 Dec 2022 18:31:29 +0000 Subject: [ironic][release] Bugfix branch status and cleanup w/r/t zuul-config-errors In-Reply-To: References: Message-ID: <20221201183129.33uyhpapjcki4kfh@yuggoth.org> On 2022-12-01 18:04:48 +0000 (+0000), El?d Ill?s wrote: > TL;DR: for a quick solution I suggest to add the ACL [1] for the > ironic cores, something like this: > > [access "refs/heads/bugfix/*"] > delete = = group ironic-core [...] And if you're worried about some of the core reviewers accidentally deleting branches, you could make that a different group with a smaller set of people (perhaps just the PTL for example). And if you're wanting to be super extra careful you could create a second Ubuntu One account with a different E-mail address and use that to create a new Gerrit account which you only log into in order to perform these sorts of tasks (may be overkill for this case, but it's similar to what our Gerrit admins do in order to not pollute their normal reviewer accounts with admin level permissions). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From noonedeadpunk at gmail.com Fri Dec 2 09:22:18 2022 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Fri, 2 Dec 2022 10:22:18 +0100 Subject: [openstack-ansible] Removing support for Calico Network driver Message-ID: Hi everyone, Due to lack of maintainers of Calico driver support in OpenStack-Ansible we have decided to remove support for this driver. Eventually, CI was failing for Calico jobs for the last few cycles and we haven't seen any interest in further support of the feature. Patches for Calico removal are already proposed: https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/866123 https://review.opendev.org/c/openstack/openstack-ansible/+/866119 If you're interested in this driver and want to step in for fixing CI and supporting the feature, please let us know either by replying to this email or by voting accordingly for the mentioned patches. In case no negative feedback will be received we will merge them by the end of the next week, December 9 2022. From ralonsoh at redhat.com Fri Dec 2 09:36:19 2022 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Fri, 2 Dec 2022 10:36:19 +0100 Subject: [neutron] Neutron drivers meeting cancelled Message-ID: Hello Neutrinos: Due to the lack of agenda, *today's meeting is cancelled*. I have one topic [1] I would like to discuss but I'll wait until I test the code and I'm sure the feature is working. Once this is done, I'll ask you to review it and vote for this RFE. Have a good weekend! [1]https://bugs.launchpad.net/neutron/+bug/1998202 -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Fri Dec 2 10:36:02 2022 From: eblock at nde.ag (Eugen Block) Date: Fri, 02 Dec 2022 10:36:02 +0000 Subject: Unable to access Internet from an instance and accessing instance using floating-point IPs from external network In-Reply-To: References: <20221122094959.Horde._DNW37_4CRcsBFAHQUP7ZG_@webmail.nde.ag> <20221130082712.Horde.kAKVcEDnoqnje6ZTI7PfsEB@webmail.nde.ag> Message-ID: <20221202103602.Horde.O3I1fMqzWHQKKPBREgDJBvw@webmail.nde.ag> Hi, if you want people to reply you should keep the mailing list in the loop. Also please don't paste pictures but terminal output as plain text so it won't get lost. >> From my controller site: >> External-internet = 192.168.1.118 >> Internal-network = 10.42.0.189 Seems like you meant it the other way around looking at the other networks. >> My external network details is shown below I don't see a segmentation ID for the external network, we create provider (external) networks with the same segmentation ID as the corresponding VLAN in our environment has. The rest of the information on the pictures is not readable to me, please paste the security-group rules as text. >> Apart from all the above information, I will briefly talk about what I >> have enabled in my global.yml file. I have set the >> kolla_internal_vip_address to "192168.1.200", kolla_external_vip_address to >> "10.42.0.200", enable_haproxy to "yes" and other core components of >> openstack was enabled. I'm not familiar with kolla so I can't really say what else could be missing or not. You could disable the port security for your instance to see if that helps, otherwise use tcpdump on control and compute node to see where the packets get lost and which vlans are in use. I'm not sure if that will resolve it but I believe your external network should have a segmentation ID. Zitat von vincent lee : > Hi, one last thing I would like to add on is the setting of my openvswitch > configuration. > [image: image.png] > [image: image.png] > > > On Wed, Nov 30, 2022 at 1:39 PM vincent lee wrote: > >> Hi Eugen, >> Sorry for causing any confusion in my previous emails and I will summarize >> the main issue here. After deploying openstack from a fresh installation, I >> have not modified the security group when working creating an instance or a >> container. >> >> From my controller site: >> External-internet = 192.168.1.118 >> Internal-network = 10.42.0.189 >> >> For my compute host 1: >> Internal-network = 192.168.1.115 >> >> For my compute host 2: >> Internal-network = 192.168.1.108 >> >> My current issue is that when I created an instance which ran >> successfully, I was not able to ping that instance from my controller via >> the external network. Besides, I was also not able to ping the internet >> (8.8.8.8) inside of the instance. >> >> However, I was able to ping to the external network (10.42.0.67) from >> inside of the newly created instance itself. In other words, the router is >> communicating correctly. I have attached an image of my network topology as >> shown below. >> [image: image.png] >> Inside of my instance >> [image: image.png] >> I have attached an image of my router details as shown below >> [image: image.png] >> My external network details is shown below >> [image: image.png] >> external network's subnet detail is shown below >> [image: image.png] >> My internal network details is shown below >> [image: image.png] >> internal network's subnet is shown below >> [image: image.png] >> My floating IPs is shown below >> [image: image.png] >> My current security group rules is shown below >> [image: image.png] >> My system information is shown below >> [image: image.png] >> Apart from all the above information, I will briefly talk about what I >> have enabled in my global.yml file. I have set the >> kolla_internal_vip_address to "192168.1.200", kolla_external_vip_address to >> "10.42.0.200", enable_haproxy to "yes" and other core components of >> openstack was enabled. >> >> Best, >> Vincent >> >> >> >> On Wed, Nov 30, 2022 at 2:30 AM Eugen Block wrote: >> >>> So you did modify the default security-group, ports 8000 and 8080 are >>> not open by default. Anyway, can you please clarify what doesn't work >>> exactly? >>> Does the instance have an IP in the public network but the router is >>> not pingeable (but that is not necessarily an issue) and you can't >>> access it via which protocols? Does SSH work? Is the access blocked by >>> a http_proxy? >>> >>> >>> Zitat von vincent lee : >>> >>> > To add on to my previous email, I have attached an image of my security >>> > group as shown below. >>> > >>> > Best regards, >>> > Vincent >>> > >>> > On Tue, Nov 22, 2022 at 3:58 AM Eugen Block wrote: >>> > >>> >> Just one more thing to check, did you edit the security-group rules to >>> >> allow access to the outside world? >>> >> >>> >> Zitat von Adivya Singh : >>> >> >>> >> > it should be missing a default route most of the time. >>> >> > or check IP tables on router namespace the DNAT and SNAT are working >>> >> > properly >>> >> > >>> >> > >>> >> > >>> >> > On Tue, Nov 22, 2022 at 9:40 AM Tobias McNulty < >>> tobias at caktusgroup.com> >>> >> > wrote: >>> >> > >>> >> >> On Mon, Nov 21, 2022 at 7:39 PM vincent lee < >>> vincentlee676 at gmail.com> >>> >> >> wrote: >>> >> >> >>> >> >>> After reviewing the post you shared, I believe that we have the >>> correct >>> >> >>> subnet. Besides, we did not modify anything related to the >>> cloud-init >>> >> for >>> >> >>> openstack. >>> >> >>> >>> >> >> >>> >> >> I didn't either. But I found it's a good test of the network! If >>> you are >>> >> >> using an image that doesn't rely on it you might not notice (but I >>> >> >> would not recommend that). >>> >> >> >>> >> >> >>> >> >>> After launching the instances, we are able to ping between the >>> >> instances >>> >> >>> of the same subnet. However, we are not able to receive any >>> internet >>> >> >>> connection within those instances. From the instance, we are able >>> to >>> >> ping >>> >> >>> the router IP addresses 10.42.0.56 and 10.0.0.1. >>> >> >>> >>> >> >> >>> >> >> To make sure I understand: >>> >> >> - 10.42.0.56 is the IP of the router external to OpenStack that >>> provides >>> >> >> internet access >>> >> >> - This router is tested and working for devices outside of OpenStack >>> >> >> - OpenStack compute instances can ping this router >>> >> >> - OpenStack compute instances cannot reach the internet >>> >> >> >>> >> >> If that is correct, it does not sound like an OpenStack issue >>> >> necessarily, >>> >> >> but perhaps a missing default route on your compute instances. I >>> would >>> >> >> check that DHCP is enabled on the internal subnet and that it's >>> >> providing >>> >> >> everything necessary for an internet connection to the instances. >>> >> >> >>> >> >> Tobias >>> >> >> >>> >> >> >>> >> >> >>> >> >>> >> >>> >> >>> >> >>> >> >>> > >>> > -- >>> > thanks you. >>> > vincentleezihong >>> > 2garnet >>> > form2 >>> >>> >>> >>> >>> >> >> -- >> thanks you. >> vincentleezihong >> 2garnet >> form2 >> >> > > -- > thanks you. > vincentleezihong > 2garnet > form2 From massimo.sgaravatto at gmail.com Fri Dec 2 16:02:55 2022 From: massimo.sgaravatto at gmail.com (Massimo Sgaravatto) Date: Fri, 2 Dec 2022 17:02:55 +0100 Subject: [ops] Migration from CentOS streams to Ubuntu and fast forward updates Message-ID: Dear all Dear all We are now running an OpenStack deployment: Yoga on CentOS8Stream. We are now thinking about a possible migration to Ubuntu for several reasons in particular: a- 5 years support for both the Operating System and OpenStack (considering LTS releases) b- Possibility do do a clean update between two Ubuntu LTS releases c- Easier procedure (also because of b) for fast forward updates (this is what we use to do) Considering the latter item, my understanding is that an update from Ubuntu 20.04 Ussuri to Ubuntu 22.04 Yoga could be done in the following way (we have two controller nodes and n compute nodes): - Update of first controller node from Ubuntu 20.04 Ussuri to Ubuntu 20.04 Victoria (update OpenStack packages + dbsync) - Update of first controller node from Ubuntu 20.04 Victoria to Ubuntu 20.04 Wallaby (update OpenStack packages + dbsync) - Update of first controller node from Ubuntu 20.04 Wallaby to Ubuntu 20.04 Xena (update OpenStack packages + dbsync) - Update of first controller node from Ubuntu 20.04 Xena to Ubuntu 20.04 Yoga (update OpenStack packages + dbsync) - Update of first controller node from Ubuntu 20.04 Yoga to Ubuntu 22.04 Yoga (update Ubuntu packages) - Update of second controller node from Ubuntu 20.04 Ussuri to Ubuntu 22.04 Yoga (update OpenStack and Ubuntu packages) - Update of the compute nodes from Ubuntu 20.04 Ussuri to Ubuntu 22.04 Yoga (update OpenStack and Ubuntu packages) We would do the same when migrating from Ubuntu 22.04 Yoga to Ubuntu 24.04 and the OpenStack xyz release (where xyz is the LTS release used in Ubuntu 24.04) Is this supposed to work or am I missing something ? If we decide to migrate to Ubuntu, the first step would be the reinstallation with Ubuntu 22.04/Yoga of each node currently running CentOS8 stream/Yoga. I suppose there are no problems having in the same OpenStack installation nodes running the same Openstack version but different operating systems, or am I wrong ? Thanks, Massimo -------------- next part -------------- An HTML attachment was scrubbed... URL: From senrique at redhat.com Fri Dec 2 16:26:28 2022 From: senrique at redhat.com (Sofia Enriquez) Date: Fri, 2 Dec 2022 16:26:28 +0000 Subject: [nova][placement][tempest] Hold your rechecks In-Reply-To: References: Message-ID: Hi, Could this problem affect cinder-tempest-plugin-lvm-tgt-barbican jobs as well? (All the nova+lvm test are failing) https://zuul.opendev.org/t/openstack/build/140630488ea745a69ce3ebadf85a41fa Thanks Sofia On Tue, Nov 29, 2022 at 3:22 PM Sylvain Bauza wrote: > > > Le mar. 29 nov. 2022 ? 09:33, Sylvain Bauza a ?crit : > >> (early morning, needing a coffee apparently) >> >> Le mar. 29 nov. 2022 ? 09:32, Sylvain Bauza a ?crit : >> >>> >>> >>> Le lun. 28 nov. 2022 ? 14:28, Sylvain Bauza a >>> ?crit : >>> >>>> Sorry folks, that's kind of an email I hate writing but let's be honest >>>> : our gate is busted. >>>> Until we figure out a correct path for resolution, I hereby ask you to >>>> *NOT* recheck in order to not spill our precious CI resources for tests >>>> that are certain to fail. >>>> >>>> Long story story, there are currently two problems : >>>> #1 https://launchpad.net/bugs/1940425 nova-ovs-hybrid-plug and >>>> nova-next jobs 100% fail due to a port remaining in down state. >>>> #2 https://bugs.launchpad.net/nova/+bug/1960346 nova-lvm job 100% >>>> fails due to a volume detach failure probably due to QEMU >>>> >>>> >>>> >>> Today's update : >>> >>> >>>> #1 is currently investigated by the Neutron team meanwhile a patch [1] >>>> has been proposed against Zuul to skip the failing tests. >>>> Unfortunately, this patch [1] is unable to merge due to #2. >>>> >>>> >>> Good news, kudos to the Neutron team which delivered a bugfix against >>> the rootcause, which is always better than just skipping tests (and lacking >>> then coverage). >>> https://review.opendev.org/c/openstack/neutron/+/837780/18 >>> >>> Accordingly, [1] is no longer necessary and has been abandoned after a >>> recheck to verify the job runs. >>> >>> >>> >>>> #2 has a Tempest patch that's being worked on [2] but the current state >>>> of this patch is WIP. >>>> We somehow need to have an agreement on the way forward during this >>>> afternoon (UTC) to identify whether we can reasonably progress on [2] or >>>> skip the failing tests on nova-lvm. >>>> >>>> >>> Given [2] is hard to write, gmann proposed a patch [3] for skipping some >>> nova-lvm tests. Reviews of [3] ongoing, should be hopefully merged today >>> around noon UTC. >>> >>> Once [3] is merged, the gate should be unblocked. >>> Again, an email will be sent once we progress on [3]. >>> >> > [3] is merged, so now the gate is back \o/ > Thanks all folks who helped on those issues ! > > >> -S >>> >>> >>>> Again, sorry about the bad news and I'll keep you informed. >>>> -Sylvain >>>> >>>> [1] https://review.opendev.org/c/openstack/nova/+/865658/ >>>> [2] https://review.opendev.org/c/openstack/tempest/+/842240 >>>> >>> [3] https://review.opendev.org/c/openstack/nova/+/865922 >> > -- Sof?a Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at gmail.com Fri Dec 2 16:48:48 2022 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Fri, 2 Dec 2022 17:48:48 +0100 Subject: [openstack-ansible] CI is broken - hold on rechecks Message-ID: Hi folks, After merging [1] all our CI tests seem broken until we merge [2], so please hold on all your rechecks. With that being said, we need to focus on landing 862924 or merge temporary workaround to the integrated repo to define neutron_ml2_drivers_type and neutron_plugin_base for other checks to pass without [2]. [1] https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/865961 [2] https://review.opendev.org/c/openstack/openstack-ansible/+/862924 From gmann at ghanshyammann.com Fri Dec 2 21:05:59 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 02 Dec 2022 13:05:59 -0800 Subject: [all][tc] What's happening in Technical Committee: summary 2022 Dec 02: Reading: 5 min Message-ID: <184d4a90d99.b6cc73f0190634.4811101318280087133@ghanshyammann.com> Hello Everyone, Here is this week's summary of the Technical Committee activities. 1. TC Meetings: ============ * We had this week's meeting on Nov 30. Most of the meeting discussions are summarized in this email. Meeting logs are available @ https://meetings.opendev.org/meetings/tc/2022/tc.2022-11-30-16.00.log.html * Next TC weekly meeting will be on Dec 7 Wed at 16:00 UTC, Feel free to add the topic to the agenda[1] by Dec 6. 2. What we completed this week: ========================= * FIPS testing on Ubuntu paid subscription We discussed it in TC meeting and after considering the currently available options, TC agreed to go for this[2]. We know it is paid subscription (given by Ubuntu as free to OpenStack upstream CI) and we should use the free open source things to test the upstream code but we do not have any other setup ready for FIPS testing (CentOSstream is not stable enough to add voting job, Debian is not tested broadly in all projects so it may take more time to add the voting job in projects) For now, testing FIPS with Ubuntu paid token is best possible go ahead and if we have any open source/free/stable version available in any distro then we can move FIPS job on that. * Added Skyline repository for OpenStack-Ansible[3] * Added the manila-infinidat charm to Openstack charms[4] * Added the cinder-infinidat charm to Openstack charms[5] 3. Activities In progress: ================== TC Tracker for 2023.1 cycle --------------------------------- * Current cycle working items and their progress are present in 2023.1 tracker etherpad[6]. Open Reviews ----------------- * Three open reviews for ongoing activities[7]. Technical Election changes (Extending Nomination and voting period) ---------------------------------------------------------------------------------- This is in review and positive response so far[8]. Renovate translation SIG i18 ---------------------------------- * No specific update for this week and it will be discussed with the Foundation staff and Board members in Dec 6th board meeting. Adjutant situation (project is not active) ----------------------------------------------- Proposal to mark Adjutant as inactive is up[9] and ready for review. I have emailed about it to current PTL also. Project updates ------------------- * Add the infinidat-tools subordinate charm to OpenStack charms[10] 4. How to contact the TC: ==================== If you would like to discuss or give feedback to TC, you can reach out to us in multiple ways: 1. Email: you can send the email with tag [tc] on openstack-discuss ML[11]. 2. Weekly meeting: The Technical Committee conduct a weekly meeting every Thursday 15:00 UTC [12] 3. Ping us using 'tc-members' nickname on #openstack-tc IRC channel. [1] https://lists.openstack.org/pipermail/openstack-discuss/2022-November/031240.html [2] https://meetings.opendev.org/meetings/tc/2022/tc.2022-11-30-16.00.log.html#l-151 [3] https://review.opendev.org/c/openstack/governance/+/863166 [4] https://review.opendev.org/c/openstack/governance/+/864068 [5] https://review.opendev.org/c/openstack/governance/+/863958 [6] https://etherpad.opendev.org/p/tc-2023.1-tracker [7] https://review.opendev.org/q/projects:openstack/governance+status:open [8] https://review.opendev.org/c/openstack/governance/+/865367 [9] https://review.opendev.org/c/openstack/governance/+/849153 [10] https://review.opendev.org/c/openstack/governance/+/864067 [11] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [12] http://eavesdrop.openstack.org/#Technical_Committee_Meeting -gmann From vincentlee676 at gmail.com Sun Dec 4 04:38:33 2022 From: vincentlee676 at gmail.com (vincent lee) Date: Sat, 3 Dec 2022 22:38:33 -0600 Subject: Unable to access Internet from an instance and accessing instance using floating-point IPs from external network In-Reply-To: <20221202103602.Horde.O3I1fMqzWHQKKPBREgDJBvw@webmail.nde.ag> References: <20221122094959.Horde._DNW37_4CRcsBFAHQUP7ZG_@webmail.nde.ag> <20221130082712.Horde.kAKVcEDnoqnje6ZTI7PfsEB@webmail.nde.ag> <20221202103602.Horde.O3I1fMqzWHQKKPBREgDJBvw@webmail.nde.ag> Message-ID: Thank you all for the suggestions; I have managed to resolve this issue by adding eth1 as a port to the bridge (br-ex) in the openvswitch container. However, even after being able to ping from outside of the container (controller), I am still not able to access the internet. Once again, thank you for all your help. Best regards, Vincent -------------- next part -------------- An HTML attachment was scrubbed... URL: From zakhar at gmail.com Mon Dec 5 04:02:25 2022 From: zakhar at gmail.com (Zakhar Kirpichenko) Date: Mon, 5 Dec 2022 06:02:25 +0200 Subject: Upgrade Ceph 16.2.10 to 17.2.x for Openstack Wallaby RBD storage Message-ID: Hi! I'm planning to upgrade our Ceph cluster from Pacific (16.2.10) to Quincy (17.2.x). The cluster is used for Openstack block storage (RBD), Openstack version is Wallaby built on Ubuntu 20.04. Is anyone using Ceph Quincy (17.2.x) with Openstack Wallaby? If you are, please let me know if you've encountered any issues specific to these Ceph and Openstack versions. Many thanks! Best regards, Zakhar -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdhasman at redhat.com Mon Dec 5 04:03:20 2022 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Mon, 5 Dec 2022 09:33:20 +0530 Subject: [cinder] Cinder Midcycle - 1 (R-16) Summary Message-ID: Hello Argonauts, The summary for midcycle-1 held on 30th November, 2022 between 1400-1600 UTC is available here[1]. Please go through the etherpad and recordings for the discussion. [1] https://wiki.openstack.org/wiki/CinderAntelopeMidCycleSummary - Rajat Dhasmana -------------- next part -------------- An HTML attachment was scrubbed... URL: From lkuchlan at redhat.com Mon Dec 5 06:00:08 2022 From: lkuchlan at redhat.com (Liron Kuchlani) Date: Mon, 5 Dec 2022 08:00:08 +0200 Subject: [Manila] S-RBAC Hack-a-thon Message-ID: Hi everyone, We are planning a Hack-a-thon for secure RBAC between Wed 12/07 to Thur 12/08 with Fri 12/09 as needed for code reviews. The session will take place between 15:00 UTC and 15:30 UTC. Join meeting: https://bluejeans.com/8520212381 Please find below helpful references and links, Openstack S-RBAC docs [0] Manila-tempest-plugin S-RBAC tests [1] S-RBAC collab review [2] Manila Secure RBAC Antelope PTG etherpad [3] Openstack Shared File Systems API documentation [4] S-RBAC code reviews [5] S-RBAC Kanban [6] [0] https://docs.openstack.org/patrole/latest/rbac-overview.html [1] https://github.com/openstack/manila-tempest-plugin/tree/master/manila_tempest_tests/tests/rbac [2] https://etherpad.opendev.org/p/Manila-S-RBAC-collaborative-review [3] https://etherpad.opendev.org/p/antelope-ptg-manila-srbac [4] https://docs.openstack.org/api-ref/shared-file-system [5] https://review.opendev.org/q/topic:secure-rbac+project:openstack/manila-tempest-plugin [6] https://tree.taiga.io/project/silvacarloss-manila-tempest-plugin-rbac/kanban Hope you can all join us, -- Thanks, Liron -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Mon Dec 5 09:05:57 2022 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 5 Dec 2022 10:05:57 +0100 Subject: [largescale-sig] Next meeting: Dec 7, 15utc Message-ID: Hi everyone, Contrary to previous indications, the Large Scale SIG will be meeting this Wednesday in #openstack-operators on OFTC IRC, at 15UTC. You can doublecheck how that UTC time translates locally at: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20221207T15 Feel free to add topics to the agenda: https://etherpad.opendev.org/p/large-scale-sig-meeting Regards, -- Thierry Carrez From thierry at openstack.org Mon Dec 5 09:29:56 2022 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 5 Dec 2022 10:29:56 +0100 Subject: [cinder][kolla][OpenstackAnsible] Zed Cycle-Trailing Release Deadline Message-ID: <9f6bd276-2f14-8ade-3d1d-f0fffbc75084@openstack.org> Hello teams with cycle-trailing projects, The Zed cycle-trailing release deadline[1] is next week ! All projects following the cycle-trailing release model must release their Zed deliverables by Dec 14th, 2022. [1] https://releases.openstack.org/antelope/schedule.html#a-cycle-trail The following trailing projects haven't been released yet for Zed (aside the release candidates versions if exists). Cinder team's deliverables: - cinderlib OSA team's deliverables: - openstack-ansible Kolla team's deliverables: - kayobe - kolla - kolla-ansible - ansible-collection-kolla This is just a friendly reminder to allow you to release these projects in time. Do not hesitate to ping the release team if you have any questions or concerns. -- Thierry Carrez (ttx) From skaplons at redhat.com Mon Dec 5 09:57:31 2022 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 05 Dec 2022 10:57:31 +0100 Subject: [neutron] Bug deputy - report from the week of Nov 28th Message-ID: <2082495.upRJiiujQ9@p1> Hi, I was bug deputy this last week. It was pretty busy week and we had a lot of bugs reported. Here's the summary of them: Critical https://bugs.launchpad.net/neutron/+bug/1998474 - [UT] Random failure of "test_meter_manager_allocate_meter_id" - needs assignee https://bugs.launchpad.net/neutron/+bug/1998353 - Fullstack: test_packet_rate_limit_qos_policy_rule_lifecycle failing - assigned to Rodolfo, fix proposed already https://bugs.launchpad.net/neutron/+bug/1998343 - Unittest test_distributed_port_binding_deleted_by_port_deletion fails: DeprecationWarning('ssl.PROTOCOL_TLS is deprecated') - assigned to Anton Kurbatov, fix proposed already, https://bugs.launchpad.net/neutron/+bug/1995031 - [CI][periodic] neutron-functional-with-uwsgi-fips job failing - assigned to Lajos, it can be the same issue as LP#1998337 High https://bugs.launchpad.net/neutron/+bug/1998228 - [UT] Error in "test_get_vlan_device_name" with n-lib 3.2.0 - In progress, patch https://review.opendev.org/c/openstack/neutron/+/866038 https://bugs.launchpad.net/neutron/+bug/1998226 - [fullstack] Error in "test_concurrent_router_subnet_attachment_overlapping_cidr_" - seems like real issue catched by fullstack test, assigned to Fernando Royo https://bugs.launchpad.net/neutron/+bug/1998104 - dynamic-routing: address_scope calculation error - In progress, fix proposed already https://review.opendev.org/c/openstack/neutron-dynamic-routing/+/863708 https://bugs.launchpad.net/neutron/+bug/1998621 - dnsmasq on DHCP Agent does not listen on tcp/53 after dnsmasq restart - In progress, fix already proposed https://review.opendev.org/c/openstack/neutron/+/866489 Medium https://bugs.launchpad.net/neutron/+bug/1998337 - test_dvr_router_lifecycle_ha_with_snat_with_fips fails occasionally in the gate - Needs assignee https://bugs.launchpad.net/neutron/+bug/1998751 - [L3][DVR][vlan] HA VRRP traffic flooding on physical bridges on compute nodes, needs assignee https://bugs.launchpad.net/neutron/+bug/1998749 - [L3][DVR][vlan] east-west traffic flooding on physical bridges, needs assignee Low https://bugs.launchpad.net/neutron/+bug/1998108 - Move "ChassisBandwidthConfigEvent" registration to "OvnSbIdl" class initialization - In progress, patch https://review.opendev.org/c/openstack/neutron/+/865857 https://bugs.launchpad.net/neutron/+bug/1998085 - Manual install & Configuration in Neutron incorrect vni_ranges leads to error - In progress, patch https://review.opendev.org/c/openstack/neutron/+/865960 https://bugs.launchpad.net/neutron/+bug/1998317 - CachedResourceConsumerTracker update triggered for every command - assigned to Szymon Wr?blewski Whishlist (RFEs) https://bugs.launchpad.net/neutron/+bug/1998202 - [RFE] Allow shared resource providers for physical and tunnelled networks - assigned to Rodolfo https://bugs.launchpad.net/neutron/+bug/1998608 - [RFE] A new OVN monitor agent running on each compute node - assigned to Rodolfo https://bugs.launchpad.net/neutron/+bug/1998609 - [RFE] - OVN Distributed routing + IPv6 support - needs triaging from OVN and L3 teams, Others https://bugs.launchpad.net/neutron/+bug/1998235 - Allowed address pairs and dvr routers - duplicate of the old bug https://bugs.launchpad.net/neutron/+bug/1774459 which was never fixed, -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From elod.illes at est.tech Mon Dec 5 10:52:04 2022 From: elod.illes at est.tech (=?utf-8?B?RWzDtWQgSWxsw6lz?=) Date: Mon, 5 Dec 2022 10:52:04 +0000 Subject: [tc][mistral][release] Propose to deprecate Mistral Message-ID: Hi, Mistral projects are unfortunately not actively maintained and caused hard times in latest official series releases for release management team. Thus we discussed this and decided to propose to deprecate Mistral [1] to avoid broken releases and last minute debugging of gate issues (which usually fall to release management team), and remove mistral projects from 2023.1 Antelope release [2]. We would like to ask @TC to evaluate the situation and review our patches (deprecation [1] + removal from the official release [2]). Thanks in advance, El?d Ill?s irc: elodilles @ #openstack-release [1] https://review.opendev.org/c/openstack/governance/+/866562 [2] https://review.opendev.org/c/openstack/releases/+/865577 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ygk.kmr at gmail.com Mon Dec 5 14:39:06 2022 From: ygk.kmr at gmail.com (Gk Gk) Date: Mon, 5 Dec 2022 20:09:06 +0530 Subject: Need assistance Message-ID: Hi, We have a SR-IOV guest whose xml looks like shown below: ------
---- How to find the network statistics for the virtual function assigned to the VM above from the ethtool -S output shown below: --- ethtool -S eno12409 NIC statistics: rx_unicast: 2890507 tx_unicast: 1279755 rx_multicast: 101903131 tx_multicast: 363832 rx_broadcast: 1496406 tx_broadcast: 1052 rx_bytes: 10084560360 tx_bytes: 225469520 rx_dropped: 0 rx_unknown_protocol: 0 rx_alloc_fail: 0 rx_pg_alloc_fail: 0 tx_errors: 0 tx_linearize: 0 tx_busy: 0 tx_restart: 0 tx_queue_0_packets: 120328 tx_queue_0_bytes: 15218021 tx_queue_1_packets: 185835 tx_queue_1_bytes: 33463904 tx_queue_2_packets: 255145 tx_queue_2_bytes: 62401952 tx_queue_3_packets: 73719 tx_queue_3_bytes: 9766278 tx_queue_4_packets: 75719 tx_queue_4_bytes: 13886665 tx_queue_5_packets: 756632 tx_queue_5_bytes: 64787766 tx_queue_6_packets: 127836 tx_queue_6_bytes: 13422778 tx_queue_7_packets: 49417 tx_queue_7_bytes: 7596442 rx_queue_0_packets: 103727545 rx_queue_0_bytes: 6961254601 rx_queue_1_packets: 326249 rx_queue_1_bytes: 454984714 rx_queue_2_packets: 242553 rx_queue_2_bytes: 339238020 rx_queue_3_packets: 183649 rx_queue_3_bytes: 244280310 rx_queue_4_packets: 405611 rx_queue_4_bytes: 495795368 rx_queue_5_packets: 227854 rx_queue_5_bytes: 236190286 rx_queue_6_packets: 193401 rx_queue_6_bytes: 271337926 rx_queue_7_packets: 982683 rx_queue_7_bytes: 447042416 rx_bytes.nic: 12401182418323 tx_bytes.nic: 74986406490 rx_unicast.nic: 8485232145 tx_unicast.nic: 761643543 rx_multicast.nic: 101957596 tx_multicast.nic: 388091 rx_broadcast.nic: 1495740 tx_broadcast.nic: 1718 tx_errors.nic: 0 tx_timeout.nic: 0 rx_size_64.nic: 167990805 tx_size_64.nic: 244373395 rx_size_127.nic: 240074370 tx_size_127.nic: 466106113 rx_size_255.nic: 39290324 tx_size_255.nic: 29943862 rx_size_511.nic: 6918699 tx_size_511.nic: 4570812 rx_size_1023.nic: 22389241 tx_size_1023.nic: 15207161 rx_size_1522.nic: 8112022042 tx_size_1522.nic: 1832009 rx_size_big.nic: 0 tx_size_big.nic: 0 link_xon_rx.nic: 0 link_xon_tx.nic: 0 link_xoff_rx.nic: 0 link_xoff_tx.nic: 0 tx_dropped_link_down.nic: 0 rx_undersize.nic: 0 rx_fragments.nic: 0 rx_oversize.nic: 0 rx_jabber.nic: 0 rx_csum_bad.nic: 0 rx_length_errors.nic: 0 rx_dropped.nic: 0 rx_crc_errors.nic: 0 illegal_bytes.nic: 0 mac_local_faults.nic: 0 mac_remote_faults.nic: 0 fdir_sb_match.nic: 0 fdir_sb_status.nic: 1 tx_priority_0_xon.nic: 0 tx_priority_0_xoff.nic: 0 tx_priority_1_xon.nic: 0 tx_priority_1_xoff.nic: 0 tx_priority_2_xon.nic: 0 tx_priority_2_xoff.nic: 0 tx_priority_3_xon.nic: 0 tx_priority_3_xoff.nic: 0 tx_priority_4_xon.nic: 0 tx_priority_4_xoff.nic: 0 tx_priority_5_xon.nic: 0 tx_priority_5_xoff.nic: 0 tx_priority_6_xon.nic: 0 tx_priority_6_xoff.nic: 0 tx_priority_7_xon.nic: 0 tx_priority_7_xoff.nic: 0 rx_priority_0_xon.nic: 0 rx_priority_0_xoff.nic: 0 rx_priority_1_xon.nic: 0 rx_priority_1_xoff.nic: 0 rx_priority_2_xon.nic: 0 rx_priority_2_xoff.nic: 0 rx_priority_3_xon.nic: 0 rx_priority_3_xoff.nic: 0 rx_priority_4_xon.nic: 0 rx_priority_4_xoff.nic: 0 rx_priority_5_xon.nic: 0 rx_priority_5_xoff.nic: 0 rx_priority_6_xon.nic: 0 rx_priority_6_xoff.nic: 0 rx_priority_7_xon.nic: 0 rx_priority_7_xoff.nic: 0 ----- Please help Thanks Kumar -------------- next part -------------- An HTML attachment was scrubbed... URL: From kchamart at redhat.com Mon Dec 5 14:41:05 2022 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Mon, 5 Dec 2022 15:41:05 +0100 Subject: [CFP] FOSDEM 2023 Virt & IaaS DevRoom (proposal submissions date: 10Dec2022) Message-ID: (I forgot to send this earlier, but there' still 5 more days to go) tl;dr: Please submit the proposals by *10 Dec 2022*. We are excited to announce that the call for proposals is now open for the Virtualization & IaaS devroom at the upcoming FOSDEM 2023, to be hosted on February 4th 2023. This devroom is a collaborative effort, and is organized by dedicated folks from projects such as OpenStack, Xen Project, KubeVirt, QEMU, KVM, and Foreman. We would like to invite all those who are involved in these fields to submit your proposals by December 10th, 2022. Important Dates --------------- Submission deadline: 10th December 2022 Acceptance notifications: 15th December 2022 Final schedule announcement: 20th December 2022 Conference devroom: first half of 4th February 2023 About the Devroom ----------------- The Virtualization & IaaS devroom will feature session topics such as open source hypervisors or virtual machine managers such as Xen Project, KVM, bhyve and VirtualBox as well as Infrastructure-as-a-Service projects such as KubeVirt, Apache CloudStack, OpenStack, QEMU and OpenNebula. This devroom will host presentations that focus on topics of shared interest, such as KVM; libvirt; shared storage; virtualized networking; cloud security; clustering and high availability; interfacing with multiple hypervisors; hyperconverged deployments; and scaling across hundreds or thousands of servers. Presentations in this devroom will be aimed at developers working on these platforms who are looking to collaborate and improve shared infrastructure or solve common problems. We seek topics that encourage dialog between projects and continued work post-FOSDEM. Submit Your Proposal -------------------- All submissions must be made via the Pentabarf event planning site[1]. If you have not used Pentabarf before, you will need to create an account. If you submitted proposals for FOSDEM in previous years, you can use your existing account. After creating the account, select Create Event to start the submission process. Make sure to select Virtualization and IaaS devroom from the Track list. Please fill out all the required fields, and provide a meaningful abstract and description of your proposed session. Submission Guidelines --------------------- We expect more proposals than we can possibly accept, so it is vitally important that you submit your proposal on or before the deadline. Late submissions are unlikely to be considered. All presentation slots are 30 minutes, with 20 minutes planned for presentations, and 10 minutes for Q&A. All presentations will be recorded and made available under Creative Commons licenses. In the Submission notes field, please indicate that you agree that your presentation will be licensed under the CC-By-SA-4.0 or CC-By-4.0 license and that you agree to have your presentation recorded. For example: "If my presentation is accepted for FOSDEM, I hereby agree to license all recordings, slides, and other associated materials under the Creative Commons Attribution Share-Alike 4.0 International License. Sincerely, ." In the Submission notes field, please also confirm that if your talk is accepted, you will be able to attend FOSDEM and deliver your presentation. We will not consider proposals from prospective speakers who are unsure whether they will be able to secure funds for travel and lodging to attend FOSDEM. (Sadly, we are not able to offer travel funding for prospective speakers.) Submission Guidelines --------------------- Mentored presentations will have 25-minute slots, where 20 minutes will include the presentation and 5 minutes will be reserved for questions. The number of newcomer session slots is limited, so we will probably not be able to accept all applications. You must submit your talk and abstract to apply for the mentoring program, our mentors are volunteering their time and will happily provide feedback but won't write your presentation for you! If you are experiencing problems with Pentabarf, the proposal submission interface, or have other questions, you can email our devroom mailing list[2] and we will try to help you. How to Apply ------------ In addition to agreeing to video recording and confirming that you can attend FOSDEM in case your session is accepted, please write "speaker mentoring program application" in the "Submission notes" field, and list any prior speaking experience or other relevant information for your application. Code of Conduct --------------- Following the release of the updated code of conduct for FOSDEM, we'd like to remind all speakers and attendees that all of the presentations and discussions in our devroom are held under the guidelines set in the CoC and we expect attendees, speakers, and volunteers to follow the CoC at all times. If you submit a proposal and it is accepted, you will be required to confirm that you accept the FOSDEM CoC. If you have any questions about the CoC or wish to have one of the devroom organizers review your presentation slides or any other content for CoC compliance, please email us and we will do our best to assist you. Call for Volunteers ------------------- We are also looking for volunteers to help run the devroom. We need assistance watching time for the speakers, and helping with video for the devroom. Please contact devroom mailing list[2] for more information. Questions? ---------- If you have any questions about this devroom, please send your questions to our devroom mailing list. You can also subscribe to the list to receive updates about important dates, session announcements, and to connect with other attendees. See you all at FOSDEM! [1] https://penta.fosdem.org/submission [2] iaas-virt-devroom at lists.fosdem.org PS: This is a formatted version of the official announce: https://lists.fosdem.org/pipermail/fosdem/2022q4/003473.html -- /kashyap From jay at gr-oss.io Mon Dec 5 15:31:01 2022 From: jay at gr-oss.io (Jay Faulkner) Date: Mon, 5 Dec 2022 07:31:01 -0800 Subject: [ironic] Meetings cancelled Dec 26, Jan 2 Message-ID: Hey all, Ironic team decided to cancel our meetings Dec 26, 2022 and Jan 2, 2023 to allow contributors to focus on family and friends over the holiday. Thanks for the hard work in 2022. If there are any concerns for Ironic during those weeks, please email the list or ask in IRC. Thanks, Jay Faulkner -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Mon Dec 5 15:41:10 2022 From: zigo at debian.org (Thomas Goirand) Date: Mon, 5 Dec 2022 16:41:10 +0100 Subject: [all] many Python 3.11 failures in OpenStack components Message-ID: <46d96098-67ce-5ea1-6d7a-26e9103594f4@debian.org> Hi, The below bugs have been reported against the OpenStack components in Debian. I would greatly appreciate help fixing these before Bookworm is in freeze (in January). If you fix a bug, it'd be great to add a link in the Debian bug, so I can see the URL of the review (even if the patch isn't merged yet). As a reminder: to write to a Debian bug, simply write to @bugs.debian.org (the Debian BTS is email driven). networking-mlnx: https://bugs.debian.org/1021924 FTBFS test failures: sqlalchemy.exc.InvalidRequestError: A transaction is already begun on this Session. networking-l2gw: https://bugs.debian.org/1023764 autopkgtest failure: module 'neutron.common.config' has no attribute 'register_common_config_options' ironic: https://bugs.debian.org/1024783 needs update for python3.11: Cannot spec a Mock object ironic-inspector: https://bugs.debian.org/1024952 needs update for python3.11: Failed to resolve the hostname (meow) for node uuid1 mistral-dashboard: https://bugs.debian.org/1024958 needs update for python3.11: module 'inspect' has no attribute 'getargspec'. Did you mean: 'getargs'? murano: https://bugs.debian.org/1025013 needs update for python3.11: 'NoneType' object has no attribute 'object_store' python-cinderclient: https://bugs.debian.org/1025028 needs update for python3.11: conflicting subparser: another-fake-action python-novaclient: https://bugs.debian.org/1025110 needs update for python3.11: conflicting subparser python-sushy: https://bugs.debian.org/1025124 needs update for python3.11: AssertionError: expected call not found python-taskflow: https://bugs.debian.org/1025126 needs update for python3.11: MismatchError Cheers, Thomas Goirand (zigo) From clay.gerrard at gmail.com Mon Dec 5 16:03:22 2022 From: clay.gerrard at gmail.com (Clay Gerrard) Date: Mon, 5 Dec 2022 10:03:22 -0600 Subject: [all] many Python 3.11 failures in OpenStack components In-Reply-To: <46d96098-67ce-5ea1-6d7a-26e9103594f4@debian.org> References: <46d96098-67ce-5ea1-6d7a-26e9103594f4@debian.org> Message-ID: Swift has one too! https://review.opendev.org/c/openstack/swift/+/863441 On Mon, Dec 5, 2022 at 9:45 AM Thomas Goirand wrote: > Hi, > > The below bugs have been reported against the OpenStack components in > Debian. I would greatly appreciate help fixing these before Bookworm is > in freeze (in January). If you fix a bug, it'd be great to add a link in > the Debian bug, so I can see the URL of the review (even if the patch > isn't merged yet). > > As a reminder: to write to a Debian bug, simply write to > @bugs.debian.org (the Debian BTS is email driven). > > networking-mlnx: > https://bugs.debian.org/1021924 > FTBFS test failures: sqlalchemy.exc.InvalidRequestError: A transaction > is already begun on this Session. > > networking-l2gw: > https://bugs.debian.org/1023764 > autopkgtest failure: module 'neutron.common.config' has no attribute > 'register_common_config_options' > > ironic: > https://bugs.debian.org/1024783 > needs update for python3.11: Cannot spec a Mock object > > ironic-inspector: > https://bugs.debian.org/1024952 > needs update for python3.11: Failed to resolve the hostname (meow) for > node uuid1 > > mistral-dashboard: > https://bugs.debian.org/1024958 > needs update for python3.11: module 'inspect' has no attribute > 'getargspec'. Did you mean: 'getargs'? > > murano: > https://bugs.debian.org/1025013 > needs update for python3.11: 'NoneType' object has no attribute > 'object_store' > > python-cinderclient: > https://bugs.debian.org/1025028 > needs update for python3.11: conflicting subparser: another-fake-action > > python-novaclient: > https://bugs.debian.org/1025110 > needs update for python3.11: conflicting subparser > > python-sushy: > https://bugs.debian.org/1025124 > needs update for python3.11: AssertionError: expected call not found > > python-taskflow: > https://bugs.debian.org/1025126 > needs update for python3.11: MismatchError > > Cheers, > > Thomas Goirand (zigo) > > -- Clay Gerrard 210 788 9431 -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbauza at redhat.com Mon Dec 5 16:53:18 2022 From: sbauza at redhat.com (Sylvain Bauza) Date: Mon, 5 Dec 2022 17:53:18 +0100 Subject: [nova][placement][tempest] Hold your rechecks In-Reply-To: References: Message-ID: Le ven. 2 d?c. 2022 ? 17:26, Sofia Enriquez a ?crit : > Hi, > Could this problem affect cinder-tempest-plugin-lvm-tgt-barbican jobs as > well? (All the nova+lvm test are failing) > https://zuul.opendev.org/t/openstack/build/140630488ea745a69ce3ebadf85a41fa > > Sounds related AFAICS as this is related to a volume attach/detach. TBC, the root problem is that when you attach or detach a volume earlier than when the guest kernel is fully booted, then detach won't work. In order to not have a volume detach problem, Tempest needs to reliably wait for the guest to be sshable (as the kernel will be booted). HTH, -Sylvain Thanks > Sofia > > On Tue, Nov 29, 2022 at 3:22 PM Sylvain Bauza wrote: > >> >> >> Le mar. 29 nov. 2022 ? 09:33, Sylvain Bauza a ?crit : >> >>> (early morning, needing a coffee apparently) >>> >>> Le mar. 29 nov. 2022 ? 09:32, Sylvain Bauza a >>> ?crit : >>> >>>> >>>> >>>> Le lun. 28 nov. 2022 ? 14:28, Sylvain Bauza a >>>> ?crit : >>>> >>>>> Sorry folks, that's kind of an email I hate writing but let's be >>>>> honest : our gate is busted. >>>>> Until we figure out a correct path for resolution, I hereby ask you to >>>>> *NOT* recheck in order to not spill our precious CI resources for tests >>>>> that are certain to fail. >>>>> >>>>> Long story story, there are currently two problems : >>>>> #1 https://launchpad.net/bugs/1940425 nova-ovs-hybrid-plug and >>>>> nova-next jobs 100% fail due to a port remaining in down state. >>>>> #2 https://bugs.launchpad.net/nova/+bug/1960346 nova-lvm job 100% >>>>> fails due to a volume detach failure probably due to QEMU >>>>> >>>>> >>>>> >>>> Today's update : >>>> >>>> >>>>> #1 is currently investigated by the Neutron team meanwhile a patch [1] >>>>> has been proposed against Zuul to skip the failing tests. >>>>> Unfortunately, this patch [1] is unable to merge due to #2. >>>>> >>>>> >>>> Good news, kudos to the Neutron team which delivered a bugfix against >>>> the rootcause, which is always better than just skipping tests (and lacking >>>> then coverage). >>>> https://review.opendev.org/c/openstack/neutron/+/837780/18 >>>> >>>> Accordingly, [1] is no longer necessary and has been abandoned after a >>>> recheck to verify the job runs. >>>> >>>> >>>> >>>>> #2 has a Tempest patch that's being worked on [2] but the current >>>>> state of this patch is WIP. >>>>> We somehow need to have an agreement on the way forward during this >>>>> afternoon (UTC) to identify whether we can reasonably progress on [2] or >>>>> skip the failing tests on nova-lvm. >>>>> >>>>> >>>> Given [2] is hard to write, gmann proposed a patch [3] for skipping >>>> some nova-lvm tests. Reviews of [3] ongoing, should be hopefully merged >>>> today around noon UTC. >>>> >>>> Once [3] is merged, the gate should be unblocked. >>>> Again, an email will be sent once we progress on [3]. >>>> >>> >> [3] is merged, so now the gate is back \o/ >> Thanks all folks who helped on those issues ! >> >> >>> -S >>>> >>>> >>>>> Again, sorry about the bad news and I'll keep you informed. >>>>> -Sylvain >>>>> >>>>> [1] https://review.opendev.org/c/openstack/nova/+/865658/ >>>>> [2] https://review.opendev.org/c/openstack/tempest/+/842240 >>>>> >>>> [3] https://review.opendev.org/c/openstack/nova/+/865922 >>> >> > > -- > > Sof?a Enriquez > > she/her > > Software Engineer > > Red Hat PnT > > IRC: @enriquetaso > @RedHat Red Hat > Red Hat > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From 2292613444 at qq.com Mon Dec 5 09:49:49 2022 From: 2292613444 at qq.com (=?gb18030?B?zt7K/bXE0MfH8g==?=) Date: Mon, 5 Dec 2022 17:49:49 +0800 Subject: the zun cmd "python3 setup.py install" error(s) Message-ID: system:ubuntu:20.04 useDocumentation:https://docs.openstack.org/zun/zed/install/controller-install.html root at controller:/var/lib/zun/zun# python3 setup.py install ERROR:root:Error parsing Traceback (most recent call last):   File "/usr/lib/python3/dist-packages/pbr/core.py", line 96, in pbr     attrs = util.cfg_to_args(path, dist.script_args)   File "/usr/lib/python3/dist-packages/pbr/util.py", line 271, in cfg_to_args     pbr.hooks.setup_hook(config)   File "/usr/lib/python3/dist-packages/pbr/hooks/__init__.py", line 25, in setup_hook     metadata_config.run()   File "/usr/lib/python3/dist-packages/pbr/hooks/base.py", line 27, in run     self.hook()   File "/usr/lib/python3/dist-packages/pbr/hooks/metadata.py", line 25, in hook     self.config['version'] = packaging.get_version(   File "/usr/lib/python3/dist-packages/pbr/packaging.py", line 874, in get_version     raise Exception("Versioning for this project requires either an sdist" Exception: Versioning for this project requires either an sdist tarball, or access to an upstream git repository. It's also possible that there is a mismatch between the package name in setup.cfg and the argument given to pbr.version.VersionInfo. Project name zun was given, but was not able to be found. error in setup command: Error parsing /var/lib/zun/zun/setup.cfg: Exception: Versioning for this project requires either an sdist tarball, or access to an upstream git repository. It's also possible that there is a mismatch between the package name in setup.cfg and the argument given to pbr.version.VersionInfo. Project name zun was given, but was not able to be found. root at controller:/var/lib/zun/zun# cryed: pip3 install --upgrade distribute pip3 install --upgrade tensorflow_gpu -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Mon Dec 5 18:40:19 2022 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 05 Dec 2022 10:40:19 -0800 Subject: the zun cmd "python3 setup.py install" error(s) In-Reply-To: References: Message-ID: <2115cd74-2311-4381-b6fd-ac0c6263dfcb@app.fastmail.com> On Mon, Dec 5, 2022, at 1:49 AM, ????? wrote: > system:ubuntu:20.04 > useDocumentation:https://docs.openstack.org/zun/zed/install/controller-install.html > > root at controller:/var/lib/zun/zun# python3 setup.py install > ERROR:root:Error parsing > Traceback (most recent call last): > File "/usr/lib/python3/dist-packages/pbr/core.py", line 96, in pbr > attrs = util.cfg_to_args(path, dist.script_args) > File "/usr/lib/python3/dist-packages/pbr/util.py", line 271, in > cfg_to_args > pbr.hooks.setup_hook(config) > File "/usr/lib/python3/dist-packages/pbr/hooks/__init__.py", line 25, > in setup_hook > metadata_config.run() > File "/usr/lib/python3/dist-packages/pbr/hooks/base.py", line 27, in > run > self.hook() > File "/usr/lib/python3/dist-packages/pbr/hooks/metadata.py", line 25, > in hook > self.config['version'] = packaging.get_version( > File "/usr/lib/python3/dist-packages/pbr/packaging.py", line 874, in > get_version > raise Exception("Versioning for this project requires either an > sdist" > Exception: Versioning for this project requires either an sdist > tarball, or access to an upstream git repository. It's also possible > that there is a mismatch between the package name in setup.cfg and the > argument given to pbr.version.VersionInfo. Project name zun was given, > but was not able to be found. > error in setup command: Error parsing /var/lib/zun/zun/setup.cfg: > Exception: Versioning for this project requires either an sdist > tarball, or access to an upstream git repository. It's also possible > that there is a mismatch between the package name in setup.cfg and the > argument given to pbr.version.VersionInfo. Project name zun was given, > but was not able to be found. > root at controller:/var/lib/zun/zun# > > > cryed: > pip3 install --upgrade distribute > pip3 install --upgrade tensorflow_gpu Often times this problem is caused by running setup.py outside of a git repo. As the error mentions versioning requires access to the git repository. Is /var/lib/zun/zun a git repository for zun? If not you'll either need to make it a git repo or build the sdist/wheel elsewhere in the git repo then install the resulting artifact. From gmann at ghanshyammann.com Mon Dec 5 19:48:38 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 05 Dec 2022 11:48:38 -0800 Subject: [tc][mistral][release] Propose to deprecate Mistral In-Reply-To: References: Message-ID: <184e3d54ea7.dbad4dbc327644.562567281842325561@ghanshyammann.com> ---- On Mon, 05 Dec 2022 02:52:04 -0800 El?d Ill?s wrote --- > div.zm_7849435736241202314_parse_3565333567893367702 P { margin-top: 0; margin-bottom: 0 }Hi, > Mistral projects are unfortunately not actively maintained and caused hard times in latest official series releases for release management team. Thus we discussed this and decided to propose to deprecate Mistral [1] to avoid broken releases and last minute debugging of gate issues (which usually fall to release management team), and remove mistral projects from 2023.1 Antelope release [2]. > > We would like to ask @TC to evaluate the situation and review our patches (deprecation [1] + removal from the official release [2]). Thanks Elod for reporting such project from release perspective which is good help to TC. As you might have seen the comment in review, the good thing is that one of mistral user (OVHcloud) Arnaud Morin volunteer to maintain it. I asked him to contact the Mistral team to help/maintain it together and based on that we can decide on the Mistral deprecation. Just in case Arnaud is not subscribed to this ML, adding him explicitly. -gmann > > Thanks in advance, > El?d Ill?sirc: elodilles @ #openstack-release > > [1]https://review.opendev.org/c/openstack/governance/+/866562 > [2]https://review.opendev.org/c/openstack/releases/+/865577 > From laurentfdumont at gmail.com Mon Dec 5 23:33:25 2022 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Mon, 5 Dec 2022 18:33:25 -0500 Subject: Need assistance In-Reply-To: References: Message-ID: This is something I looked at in the past and it wasn't possible. Once a VF/SRIOV is bound, it's outside of the Compute Kernel control. On Mon, Dec 5, 2022 at 9:42 AM Gk Gk wrote: > Hi, > > We have a SR-IOV guest whose xml looks like shown below: > > ------ > > > > >
function='0x7'/> > > > > > >
function='0x0'/> > > ---- > > How to find the network statistics for the virtual function assigned to > the VM above from the ethtool -S output shown below: > --- > ethtool -S eno12409 > NIC statistics: > rx_unicast: 2890507 > tx_unicast: 1279755 > rx_multicast: 101903131 > tx_multicast: 363832 > rx_broadcast: 1496406 > tx_broadcast: 1052 > rx_bytes: 10084560360 > tx_bytes: 225469520 > rx_dropped: 0 > rx_unknown_protocol: 0 > rx_alloc_fail: 0 > rx_pg_alloc_fail: 0 > tx_errors: 0 > tx_linearize: 0 > tx_busy: 0 > tx_restart: 0 > tx_queue_0_packets: 120328 > tx_queue_0_bytes: 15218021 > tx_queue_1_packets: 185835 > tx_queue_1_bytes: 33463904 > tx_queue_2_packets: 255145 > tx_queue_2_bytes: 62401952 > tx_queue_3_packets: 73719 > tx_queue_3_bytes: 9766278 > tx_queue_4_packets: 75719 > tx_queue_4_bytes: 13886665 > tx_queue_5_packets: 756632 > tx_queue_5_bytes: 64787766 > tx_queue_6_packets: 127836 > tx_queue_6_bytes: 13422778 > tx_queue_7_packets: 49417 > tx_queue_7_bytes: 7596442 > rx_queue_0_packets: 103727545 > rx_queue_0_bytes: 6961254601 > rx_queue_1_packets: 326249 > rx_queue_1_bytes: 454984714 > rx_queue_2_packets: 242553 > rx_queue_2_bytes: 339238020 > rx_queue_3_packets: 183649 > rx_queue_3_bytes: 244280310 > rx_queue_4_packets: 405611 > rx_queue_4_bytes: 495795368 > rx_queue_5_packets: 227854 > rx_queue_5_bytes: 236190286 > rx_queue_6_packets: 193401 > rx_queue_6_bytes: 271337926 > rx_queue_7_packets: 982683 > rx_queue_7_bytes: 447042416 > rx_bytes.nic: 12401182418323 > tx_bytes.nic: 74986406490 > rx_unicast.nic: 8485232145 > tx_unicast.nic: 761643543 > rx_multicast.nic: 101957596 > tx_multicast.nic: 388091 > rx_broadcast.nic: 1495740 > tx_broadcast.nic: 1718 > tx_errors.nic: 0 > tx_timeout.nic: 0 > rx_size_64.nic: 167990805 > tx_size_64.nic: 244373395 > rx_size_127.nic: 240074370 > tx_size_127.nic: 466106113 > rx_size_255.nic: 39290324 > tx_size_255.nic: 29943862 > rx_size_511.nic: 6918699 > tx_size_511.nic: 4570812 > rx_size_1023.nic: 22389241 > tx_size_1023.nic: 15207161 > rx_size_1522.nic: 8112022042 > tx_size_1522.nic: 1832009 > rx_size_big.nic: 0 > tx_size_big.nic: 0 > link_xon_rx.nic: 0 > link_xon_tx.nic: 0 > link_xoff_rx.nic: 0 > link_xoff_tx.nic: 0 > tx_dropped_link_down.nic: 0 > rx_undersize.nic: 0 > rx_fragments.nic: 0 > rx_oversize.nic: 0 > rx_jabber.nic: 0 > rx_csum_bad.nic: 0 > rx_length_errors.nic: 0 > rx_dropped.nic: 0 > rx_crc_errors.nic: 0 > illegal_bytes.nic: 0 > mac_local_faults.nic: 0 > mac_remote_faults.nic: 0 > fdir_sb_match.nic: 0 > fdir_sb_status.nic: 1 > tx_priority_0_xon.nic: 0 > tx_priority_0_xoff.nic: 0 > tx_priority_1_xon.nic: 0 > tx_priority_1_xoff.nic: 0 > tx_priority_2_xon.nic: 0 > tx_priority_2_xoff.nic: 0 > tx_priority_3_xon.nic: 0 > tx_priority_3_xoff.nic: 0 > tx_priority_4_xon.nic: 0 > tx_priority_4_xoff.nic: 0 > tx_priority_5_xon.nic: 0 > tx_priority_5_xoff.nic: 0 > tx_priority_6_xon.nic: 0 > tx_priority_6_xoff.nic: 0 > tx_priority_7_xon.nic: 0 > tx_priority_7_xoff.nic: 0 > rx_priority_0_xon.nic: 0 > rx_priority_0_xoff.nic: 0 > rx_priority_1_xon.nic: 0 > rx_priority_1_xoff.nic: 0 > rx_priority_2_xon.nic: 0 > rx_priority_2_xoff.nic: 0 > rx_priority_3_xon.nic: 0 > rx_priority_3_xoff.nic: 0 > rx_priority_4_xon.nic: 0 > rx_priority_4_xoff.nic: 0 > rx_priority_5_xon.nic: 0 > rx_priority_5_xoff.nic: 0 > rx_priority_6_xon.nic: 0 > rx_priority_6_xoff.nic: 0 > rx_priority_7_xon.nic: 0 > rx_priority_7_xoff.nic: 0 > ----- > > Please help > > Thanks > Kumar > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Tue Dec 6 02:30:20 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 05 Dec 2022 18:30:20 -0800 Subject: [all][tc] Technical Committee next weekly meeting on 2022 Dec 7 at 1600 UTC Message-ID: <184e5451516.113355438335028.5426387386435969570@ghanshyammann.com> Hello Everyone, The technical Committee's next weekly meeting is scheduled for 2022 Dec 7, at 1600 UTC. If you would like to add topics for discussion, please add them to the below wiki page by Tuesday, Dec 6 at 2100 UTC. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting -gmann From mnaser at vexxhost.com Tue Dec 6 03:19:24 2022 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 5 Dec 2022 22:19:24 -0500 Subject: [keystone] Federation fix for numerical groups Message-ID: Hi folks, I've pushed up this patch for a few weeks now with no response, could we please have a look at it? https://review.opendev.org/c/openstack/keystone/+/860726 Thanks, Mohammed -- Mohammed Naser VEXXHOST, Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From massimo.sgaravatto at gmail.com Tue Dec 6 08:36:56 2022 From: massimo.sgaravatto at gmail.com (Massimo Sgaravatto) Date: Tue, 6 Dec 2022 09:36:56 +0100 Subject: [ops] Migration from CentOS streams to Ubuntu and fast forward updates In-Reply-To: References: Message-ID: Any comments on these questions ? Thanks, Massimo On Fri, Dec 2, 2022 at 5:02 PM Massimo Sgaravatto < massimo.sgaravatto at gmail.com> wrote: > Dear all > > > > Dear all > > We are now running an OpenStack deployment: Yoga on CentOS8Stream. > > We are now thinking about a possible migration to Ubuntu for several > reasons in particular: > > a- 5 years support for both the Operating System and OpenStack > (considering LTS releases) > b- Possibility do do a clean update between two Ubuntu LTS releases > c- Easier procedure (also because of b) for fast forward updates (this is > what we use to do) > > Considering the latter item, my understanding is that an update from > Ubuntu 20.04 Ussuri to Ubuntu 22.04 Yoga could be done in the following > way (we have two controller nodes and n compute nodes): > > - Update of first controller node from Ubuntu 20.04 Ussuri to Ubuntu 20.04 > Victoria (update OpenStack packages + dbsync) > - Update of first controller node from Ubuntu 20.04 Victoria to Ubuntu > 20.04 Wallaby (update OpenStack packages + dbsync) > - Update of first controller node from Ubuntu 20.04 Wallaby to Ubuntu > 20.04 Xena (update OpenStack packages + dbsync) > - Update of first controller node from Ubuntu 20.04 Xena to Ubuntu 20.04 > Yoga (update OpenStack packages + dbsync) > - Update of first controller node from Ubuntu 20.04 Yoga to Ubuntu 22.04 > Yoga (update Ubuntu packages) > - Update of second controller node from Ubuntu 20.04 Ussuri to Ubuntu > 22.04 Yoga (update OpenStack and Ubuntu packages) > - Update of the compute nodes from Ubuntu 20.04 Ussuri to Ubuntu 22.04 > Yoga (update OpenStack and Ubuntu packages) > > > We would do the same when migrating from Ubuntu 22.04 Yoga to Ubuntu 24.04 > and the OpenStack xyz release (where xyz > is the LTS release used in Ubuntu 24.04) > > Is this supposed to work or am I missing something ? > > If we decide to migrate to Ubuntu, the first step would be the > reinstallation with Ubuntu 22.04/Yoga of each node > currently running CentOS8 stream/Yoga. > I suppose there are no problems having in the same OpenStack installation > nodes running the same > Openstack version but different operating systems, or am I wrong ? > > Thanks, Massimo > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hrybacki at redhat.com Tue Dec 6 08:56:49 2022 From: hrybacki at redhat.com (Harry Rybacki) Date: Tue, 6 Dec 2022 09:56:49 +0100 Subject: [keystone] Federation fix for numerical groups In-Reply-To: References: Message-ID: Hi Mohammed, the core review team is very small and doing the best they can with the limited capacity they have. Would you be able to join one of the Keystone team's weekly reviewathons on Fridays at 1500UTC? @Dave Wilde can provide more joining details. /R Harry On Tue, Dec 6, 2022 at 4:24 AM Mohammed Naser wrote: > Hi folks, > > I've pushed up this patch for a few weeks now with no response, could we > please have a look at it? > > https://review.opendev.org/c/openstack/keystone/+/860726 > > Thanks, > Mohammed > > -- > Mohammed Naser > VEXXHOST, Inc. > -- Harry Rybacki Associate Engineering Manager - OpenStack Security-DFG -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at gmail.com Tue Dec 6 09:34:24 2022 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Tue, 6 Dec 2022 10:34:24 +0100 Subject: [ops] Migration from CentOS streams to Ubuntu and fast forward updates In-Reply-To: References: Message-ID: Hi Massimo, Assuming you have manual installation (not using any deployment projects), I have several comments on your plan. 1. I've missed when you're going to upgrade Nova/Neutron on computes. As you should not create a gap in OpenStack versions between controllers and computes since nova-scheduler has a requirement on RPC version computes will be using. Or, you must define the rpc version explicitly in config to have older computes (but it's not really a suggested way). 2. Also once you do db sync, your second controller might misbehave (as some fields could be renamed or new tables must be used), so you will need to disable it from accepting requests until syncing openstack version as well. If you're not going to upgrade it until getting first one to Yoga - it should be disabled all the time until you get Y services running on it. 3. It's totally fine to run multi-distro setup. For computes the only thing that can go wrong is live migrations, and that depends on libvirt/qemu versions. I'm not sure if CentOS 8 Stream have compatible version with Ubuntu 22.04 for live migrations to work though, but if you care about them (I guess you do if you want to migrate workloads semalessly) - you'd better check. But my guess would be that CentOS 8 Stream should have compatible versions with Ubuntu 20.04 - still needs deeper checking. ??, 6 ???. 2022 ?. ? 09:40, Massimo Sgaravatto : > > Any comments on these questions ? > Thanks, Massimo > > On Fri, Dec 2, 2022 at 5:02 PM Massimo Sgaravatto wrote: >> >> Dear all >> >> >> >> Dear all >> >> We are now running an OpenStack deployment: Yoga on CentOS8Stream. >> >> We are now thinking about a possible migration to Ubuntu for several reasons in particular: >> >> a- 5 years support for both the Operating System and OpenStack (considering LTS releases) >> b- Possibility do do a clean update between two Ubuntu LTS releases >> c- Easier procedure (also because of b) for fast forward updates (this is what we use to do) >> >> Considering the latter item, my understanding is that an update from Ubuntu 20.04 Ussuri to Ubuntu 22.04 Yoga could be done in the following >> way (we have two controller nodes and n compute nodes): >> >> - Update of first controller node from Ubuntu 20.04 Ussuri to Ubuntu 20.04 Victoria (update OpenStack packages + dbsync) >> - Update of first controller node from Ubuntu 20.04 Victoria to Ubuntu 20.04 Wallaby (update OpenStack packages + dbsync) >> - Update of first controller node from Ubuntu 20.04 Wallaby to Ubuntu 20.04 Xena (update OpenStack packages + dbsync) >> - Update of first controller node from Ubuntu 20.04 Xena to Ubuntu 20.04 Yoga (update OpenStack packages + dbsync) >> - Update of first controller node from Ubuntu 20.04 Yoga to Ubuntu 22.04 Yoga (update Ubuntu packages) >> - Update of second controller node from Ubuntu 20.04 Ussuri to Ubuntu 22.04 Yoga (update OpenStack and Ubuntu packages) >> - Update of the compute nodes from Ubuntu 20.04 Ussuri to Ubuntu 22.04 Yoga (update OpenStack and Ubuntu packages) >> >> >> We would do the same when migrating from Ubuntu 22.04 Yoga to Ubuntu 24.04 and the OpenStack xyz release (where xyz >> is the LTS release used in Ubuntu 24.04) >> >> Is this supposed to work or am I missing something ? >> >> If we decide to migrate to Ubuntu, the first step would be the reinstallation with Ubuntu 22.04/Yoga of each node >> currently running CentOS8 stream/Yoga. >> I suppose there are no problems having in the same OpenStack installation nodes running the same >> Openstack version but different operating systems, or am I wrong ? >> >> Thanks, Massimo >> From smooney at redhat.com Tue Dec 6 09:48:30 2022 From: smooney at redhat.com (Sean Mooney) Date: Tue, 06 Dec 2022 09:48:30 +0000 Subject: [ops] Migration from CentOS streams to Ubuntu and fast forward updates In-Reply-To: References: Message-ID: <8db8e7b9e25ec85f40a1e8e625d84f60816b86a8.camel@redhat.com> On Tue, 2022-12-06 at 10:34 +0100, Dmitriy Rabotyagov wrote: > Hi Massimo, > > Assuming you have manual installation (not using any deployment > projects), I have several comments on your plan. > > 1. I've missed when you're going to upgrade Nova/Neutron on computes. > As you should not create a gap in OpenStack versions between > controllers and computes since nova-scheduler has a requirement on RPC > version computes will be using. Or, you must define the rpc version > explicitly in config to have older computes (but it's not really a > suggested way). > 2. Also once you do db sync, your second controller might misbehave > (as some fields could be renamed or new tables must be used), so you > will need to disable it from accepting requests until syncing > openstack version as well. If you're not going to upgrade it until > getting first one to Yoga - it should be disabled all the time until > you get Y services running on it. > 3. It's totally fine to run multi-distro setup. For computes the only > thing that can go wrong is live migrations, and that depends on > libvirt/qemu versions. I'm not sure if CentOS 8 Stream have compatible > version with Ubuntu 22.04 for live migrations to work though, but if > you care about them (I guess you do if you want to migrate workloads > semalessly) - you'd better check. But my guess would be that CentOS 8 > Stream should have compatible versions with Ubuntu 20.04 - still needs > deeper checking. the live migration issue is a know limiation basically it wont work across distro today because the qemu emulator path is distro specific and we do not pass that back form the destinatino to the source so libvirt will try and boot the vm referncign a binary that does not exist im sure you could propaly solve that with a symlink or similar. if you did the next issue you would hit is we dont normally allwo live mgration form a newer qemu/libvirt verson to an older one with all that said cold migration shoudl work fine and wihtine any one host os live migration will work. you could proably use host aggreates or simialr to enforece that if needed but cold migration is the best way to move the workloads form hypervior hosts with different distros. > > ??, 6 ???. 2022 ?. ? 09:40, Massimo Sgaravatto : > > > > Any comments on these questions ? > > Thanks, Massimo > > > > On Fri, Dec 2, 2022 at 5:02 PM Massimo Sgaravatto wrote: > > > > > > Dear all > > > > > > > > > > > > Dear all > > > > > > We are now running an OpenStack deployment: Yoga on CentOS8Stream. > > > > > > We are now thinking about a possible migration to Ubuntu for several reasons in particular: > > > > > > a- 5 years support for both the Operating System and OpenStack (considering LTS releases) > > > b- Possibility do do a clean update between two Ubuntu LTS releases > > > c- Easier procedure (also because of b) for fast forward updates (this is what we use to do) > > > > > > Considering the latter item, my understanding is that an update from Ubuntu 20.04 Ussuri to Ubuntu 22.04 Yoga could be done in the following > > > way (we have two controller nodes and n compute nodes): > > > > > > - Update of first controller node from Ubuntu 20.04 Ussuri to Ubuntu 20.04 Victoria (update OpenStack packages + dbsync) > > > - Update of first controller node from Ubuntu 20.04 Victoria to Ubuntu 20.04 Wallaby (update OpenStack packages + dbsync) > > > - Update of first controller node from Ubuntu 20.04 Wallaby to Ubuntu 20.04 Xena (update OpenStack packages + dbsync) > > > - Update of first controller node from Ubuntu 20.04 Xena to Ubuntu 20.04 Yoga (update OpenStack packages + dbsync) > > > - Update of first controller node from Ubuntu 20.04 Yoga to Ubuntu 22.04 Yoga (update Ubuntu packages) > > > - Update of second controller node from Ubuntu 20.04 Ussuri to Ubuntu 22.04 Yoga (update OpenStack and Ubuntu packages) > > > - Update of the compute nodes from Ubuntu 20.04 Ussuri to Ubuntu 22.04 Yoga (update OpenStack and Ubuntu packages) > > > > > > > > > We would do the same when migrating from Ubuntu 22.04 Yoga to Ubuntu 24.04 and the OpenStack xyz release (where xyz > > > is the LTS release used in Ubuntu 24.04) > > > > > > Is this supposed to work or am I missing something ? > > > > > > If we decide to migrate to Ubuntu, the first step would be the reinstallation with Ubuntu 22.04/Yoga of each node > > > currently running CentOS8 stream/Yoga. > > > I suppose there are no problems having in the same OpenStack installation nodes running the same > > > Openstack version but different operating systems, or am I wrong ? > > > > > > Thanks, Massimo > > > > From massimo.sgaravatto at gmail.com Tue Dec 6 09:55:55 2022 From: massimo.sgaravatto at gmail.com (Massimo Sgaravatto) Date: Tue, 6 Dec 2022 10:55:55 +0100 Subject: [ops] Migration from CentOS streams to Ubuntu and fast forward updates In-Reply-To: References: Message-ID: Thanks for your feedback I missed to say that during the fast forward update, all OpenStack services would be down So, during the db-syncs on the first controller, all services on the second controller (but also on the compute nodes) would be down The update of nova and neutron on the compute nodes would be done in the step: "Update of the compute nodes from Ubuntu 20.04 Ussuri to Ubuntu 22.04 Yoga (update OpenStack and Ubuntu packages)" Cheers, Massimo On Tue, Dec 6, 2022 at 10:34 AM Dmitriy Rabotyagov wrote: > Hi Massimo, > > Assuming you have manual installation (not using any deployment > projects), I have several comments on your plan. > > 1. I've missed when you're going to upgrade Nova/Neutron on computes. > As you should not create a gap in OpenStack versions between > controllers and computes since nova-scheduler has a requirement on RPC > version computes will be using. Or, you must define the rpc version > explicitly in config to have older computes (but it's not really a > suggested way). > 2. Also once you do db sync, your second controller might misbehave > (as some fields could be renamed or new tables must be used), so you > will need to disable it from accepting requests until syncing > openstack version as well. If you're not going to upgrade it until > getting first one to Yoga - it should be disabled all the time until > you get Y services running on it. > 3. It's totally fine to run multi-distro setup. For computes the only > thing that can go wrong is live migrations, and that depends on > libvirt/qemu versions. I'm not sure if CentOS 8 Stream have compatible > version with Ubuntu 22.04 for live migrations to work though, but if > you care about them (I guess you do if you want to migrate workloads > semalessly) - you'd better check. But my guess would be that CentOS 8 > Stream should have compatible versions with Ubuntu 20.04 - still needs > deeper checking. > > ??, 6 ???. 2022 ?. ? 09:40, Massimo Sgaravatto < > massimo.sgaravatto at gmail.com>: > > > > Any comments on these questions ? > > Thanks, Massimo > > > > On Fri, Dec 2, 2022 at 5:02 PM Massimo Sgaravatto < > massimo.sgaravatto at gmail.com> wrote: > >> > >> Dear all > >> > >> > >> > >> Dear all > >> > >> We are now running an OpenStack deployment: Yoga on CentOS8Stream. > >> > >> We are now thinking about a possible migration to Ubuntu for several > reasons in particular: > >> > >> a- 5 years support for both the Operating System and OpenStack > (considering LTS releases) > >> b- Possibility do do a clean update between two Ubuntu LTS releases > >> c- Easier procedure (also because of b) for fast forward updates (this > is what we use to do) > >> > >> Considering the latter item, my understanding is that an update from > Ubuntu 20.04 Ussuri to Ubuntu 22.04 Yoga could be done in the following > >> way (we have two controller nodes and n compute nodes): > >> > >> - Update of first controller node from Ubuntu 20.04 Ussuri to Ubuntu > 20.04 Victoria (update OpenStack packages + dbsync) > >> - Update of first controller node from Ubuntu 20.04 Victoria to Ubuntu > 20.04 Wallaby (update OpenStack packages + dbsync) > >> - Update of first controller node from Ubuntu 20.04 Wallaby to Ubuntu > 20.04 Xena (update OpenStack packages + dbsync) > >> - Update of first controller node from Ubuntu 20.04 Xena to Ubuntu > 20.04 Yoga (update OpenStack packages + dbsync) > >> - Update of first controller node from Ubuntu 20.04 Yoga to Ubuntu > 22.04 Yoga (update Ubuntu packages) > >> - Update of second controller node from Ubuntu 20.04 Ussuri to Ubuntu > 22.04 Yoga (update OpenStack and Ubuntu packages) > >> - Update of the compute nodes from Ubuntu 20.04 Ussuri to Ubuntu 22.04 > Yoga (update OpenStack and Ubuntu packages) > >> > >> > >> We would do the same when migrating from Ubuntu 22.04 Yoga to Ubuntu > 24.04 and the OpenStack xyz release (where xyz > >> is the LTS release used in Ubuntu 24.04) > >> > >> Is this supposed to work or am I missing something ? > >> > >> If we decide to migrate to Ubuntu, the first step would be the > reinstallation with Ubuntu 22.04/Yoga of each node > >> currently running CentOS8 stream/Yoga. > >> I suppose there are no problems having in the same OpenStack > installation nodes running the same > >> Openstack version but different operating systems, or am I wrong ? > >> > >> Thanks, Massimo > >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From massimo.sgaravatto at gmail.com Tue Dec 6 11:19:07 2022 From: massimo.sgaravatto at gmail.com (Massimo Sgaravatto) Date: Tue, 6 Dec 2022 12:19:07 +0100 Subject: [ops] Migration from CentOS streams to Ubuntu and fast forward updates In-Reply-To: <8db8e7b9e25ec85f40a1e8e625d84f60816b86a8.camel@redhat.com> References: <8db8e7b9e25ec85f40a1e8e625d84f60816b86a8.camel@redhat.com> Message-ID: Ok, thanks a lot If cold migration is supposed to work between hosts with different operating systems, we are fine Cheers, Massimo On Tue, Dec 6, 2022 at 10:48 AM Sean Mooney wrote: > On Tue, 2022-12-06 at 10:34 +0100, Dmitriy Rabotyagov wrote: > > Hi Massimo, > > > > Assuming you have manual installation (not using any deployment > > projects), I have several comments on your plan. > > > > 1. I've missed when you're going to upgrade Nova/Neutron on computes. > > As you should not create a gap in OpenStack versions between > > controllers and computes since nova-scheduler has a requirement on RPC > > version computes will be using. Or, you must define the rpc version > > explicitly in config to have older computes (but it's not really a > > suggested way). > > 2. Also once you do db sync, your second controller might misbehave > > (as some fields could be renamed or new tables must be used), so you > > will need to disable it from accepting requests until syncing > > openstack version as well. If you're not going to upgrade it until > > getting first one to Yoga - it should be disabled all the time until > > you get Y services running on it. > > 3. It's totally fine to run multi-distro setup. For computes the only > > thing that can go wrong is live migrations, and that depends on > > libvirt/qemu versions. I'm not sure if CentOS 8 Stream have compatible > > version with Ubuntu 22.04 for live migrations to work though, but if > > you care about them (I guess you do if you want to migrate workloads > > semalessly) - you'd better check. But my guess would be that CentOS 8 > > Stream should have compatible versions with Ubuntu 20.04 - still needs > > deeper checking. > the live migration issue is a know limiation > basically it wont work across distro today because the qemu emulator path > is distro specific and we do not pass that back form the destinatino to the > source so libvirt will try and boot the vm referncign a binary that does > not exist > im sure you could propaly solve that with a symlink or similar. > if you did the next issue you would hit is we dont normally allwo live > mgration > form a newer qemu/libvirt verson to an older one > > with all that said cold migration shoudl work fine and wihtine any one > host os live migration > will work. you could proably use host aggreates or simialr to enforece > that if needed but > cold migration is the best way to move the workloads form hypervior hosts > with different distros. > > > > > ??, 6 ???. 2022 ?. ? 09:40, Massimo Sgaravatto < > massimo.sgaravatto at gmail.com>: > > > > > > Any comments on these questions ? > > > Thanks, Massimo > > > > > > On Fri, Dec 2, 2022 at 5:02 PM Massimo Sgaravatto < > massimo.sgaravatto at gmail.com> wrote: > > > > > > > > Dear all > > > > > > > > > > > > > > > > Dear all > > > > > > > > We are now running an OpenStack deployment: Yoga on CentOS8Stream. > > > > > > > > We are now thinking about a possible migration to Ubuntu for several > reasons in particular: > > > > > > > > a- 5 years support for both the Operating System and OpenStack > (considering LTS releases) > > > > b- Possibility do do a clean update between two Ubuntu LTS releases > > > > c- Easier procedure (also because of b) for fast forward updates > (this is what we use to do) > > > > > > > > Considering the latter item, my understanding is that an update from > Ubuntu 20.04 Ussuri to Ubuntu 22.04 Yoga could be done in the following > > > > way (we have two controller nodes and n compute nodes): > > > > > > > > - Update of first controller node from Ubuntu 20.04 Ussuri to Ubuntu > 20.04 Victoria (update OpenStack packages + dbsync) > > > > - Update of first controller node from Ubuntu 20.04 Victoria to > Ubuntu 20.04 Wallaby (update OpenStack packages + dbsync) > > > > - Update of first controller node from Ubuntu 20.04 Wallaby to > Ubuntu 20.04 Xena (update OpenStack packages + dbsync) > > > > - Update of first controller node from Ubuntu 20.04 Xena to Ubuntu > 20.04 Yoga (update OpenStack packages + dbsync) > > > > - Update of first controller node from Ubuntu 20.04 Yoga to Ubuntu > 22.04 Yoga (update Ubuntu packages) > > > > - Update of second controller node from Ubuntu 20.04 Ussuri to > Ubuntu 22.04 Yoga (update OpenStack and Ubuntu packages) > > > > - Update of the compute nodes from Ubuntu 20.04 Ussuri to Ubuntu > 22.04 Yoga (update OpenStack and Ubuntu packages) > > > > > > > > > > > > We would do the same when migrating from Ubuntu 22.04 Yoga to Ubuntu > 24.04 and the OpenStack xyz release (where xyz > > > > is the LTS release used in Ubuntu 24.04) > > > > > > > > Is this supposed to work or am I missing something ? > > > > > > > > If we decide to migrate to Ubuntu, the first step would be the > reinstallation with Ubuntu 22.04/Yoga of each node > > > > currently running CentOS8 stream/Yoga. > > > > I suppose there are no problems having in the same OpenStack > installation nodes running the same > > > > Openstack version but different operating systems, or am I wrong ? > > > > > > > > Thanks, Massimo > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.rydberg at cleura.com Tue Dec 6 12:05:30 2022 From: tobias.rydberg at cleura.com (Tobias Rydberg) Date: Tue, 6 Dec 2022 13:05:30 +0100 Subject: [publiccloud-sig] Bi-weekly meeting reminder Message-ID: Hi everyone, Tomorrow it's time again for our bi-weekly meeting, 0800 UTC in #openstack-operators. Etherpad can be found here [0]. Hope to chat with you tomorrow! [0] https://etherpad.opendev.org/p/publiccloud-sig-meeting BR, Tobias Rydberg -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3626 bytes Desc: S/MIME Cryptographic Signature URL: From katonalala at gmail.com Tue Dec 6 14:08:34 2022 From: katonalala at gmail.com (Lajos Katona) Date: Tue, 6 Dec 2022 15:08:34 +0100 Subject: [all] many Python 3.11 failures in OpenStack components In-Reply-To: <46d96098-67ce-5ea1-6d7a-26e9103594f4@debian.org> References: <46d96098-67ce-5ea1-6d7a-26e9103594f4@debian.org> Message-ID: Hi, Thanks for the info, I will check networking-l2gw. Lajos (lajoskatona) Thomas Goirand ezt ?rta (id?pont: 2022. dec. 5., H, 16:49): > Hi, > > The below bugs have been reported against the OpenStack components in > Debian. I would greatly appreciate help fixing these before Bookworm is > in freeze (in January). If you fix a bug, it'd be great to add a link in > the Debian bug, so I can see the URL of the review (even if the patch > isn't merged yet). > > As a reminder: to write to a Debian bug, simply write to > @bugs.debian.org (the Debian BTS is email driven). > > networking-mlnx: > https://bugs.debian.org/1021924 > FTBFS test failures: sqlalchemy.exc.InvalidRequestError: A transaction > is already begun on this Session. > > networking-l2gw: > https://bugs.debian.org/1023764 > autopkgtest failure: module 'neutron.common.config' has no attribute > 'register_common_config_options' > > ironic: > https://bugs.debian.org/1024783 > needs update for python3.11: Cannot spec a Mock object > > ironic-inspector: > https://bugs.debian.org/1024952 > needs update for python3.11: Failed to resolve the hostname (meow) for > node uuid1 > > mistral-dashboard: > https://bugs.debian.org/1024958 > needs update for python3.11: module 'inspect' has no attribute > 'getargspec'. Did you mean: 'getargs'? > > murano: > https://bugs.debian.org/1025013 > needs update for python3.11: 'NoneType' object has no attribute > 'object_store' > > python-cinderclient: > https://bugs.debian.org/1025028 > needs update for python3.11: conflicting subparser: another-fake-action > > python-novaclient: > https://bugs.debian.org/1025110 > needs update for python3.11: conflicting subparser > > python-sushy: > https://bugs.debian.org/1025124 > needs update for python3.11: AssertionError: expected call not found > > python-taskflow: > https://bugs.debian.org/1025126 > needs update for python3.11: MismatchError > > Cheers, > > Thomas Goirand (zigo) > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Tue Dec 6 15:22:37 2022 From: johnsomor at gmail.com (Michael Johnson) Date: Tue, 6 Dec 2022 07:22:37 -0800 Subject: [all] many Python 3.11 failures in OpenStack components In-Reply-To: <46d96098-67ce-5ea1-6d7a-26e9103594f4@debian.org> References: <46d96098-67ce-5ea1-6d7a-26e9103594f4@debian.org> Message-ID: FYI, The Octavia team has looked at the Taskflow issue and proposed a patch: https://review.opendev.org/c/openstack/taskflow/+/866610 Michael On Mon, Dec 5, 2022 at 7:46 AM Thomas Goirand wrote: > > Hi, > > The below bugs have been reported against the OpenStack components in > Debian. I would greatly appreciate help fixing these before Bookworm is > in freeze (in January). If you fix a bug, it'd be great to add a link in > the Debian bug, so I can see the URL of the review (even if the patch > isn't merged yet). > > As a reminder: to write to a Debian bug, simply write to > @bugs.debian.org (the Debian BTS is email driven). > > networking-mlnx: > https://bugs.debian.org/1021924 > FTBFS test failures: sqlalchemy.exc.InvalidRequestError: A transaction > is already begun on this Session. > > networking-l2gw: > https://bugs.debian.org/1023764 > autopkgtest failure: module 'neutron.common.config' has no attribute > 'register_common_config_options' > > ironic: > https://bugs.debian.org/1024783 > needs update for python3.11: Cannot spec a Mock object > > ironic-inspector: > https://bugs.debian.org/1024952 > needs update for python3.11: Failed to resolve the hostname (meow) for > node uuid1 > > mistral-dashboard: > https://bugs.debian.org/1024958 > needs update for python3.11: module 'inspect' has no attribute > 'getargspec'. Did you mean: 'getargs'? > > murano: > https://bugs.debian.org/1025013 > needs update for python3.11: 'NoneType' object has no attribute > 'object_store' > > python-cinderclient: > https://bugs.debian.org/1025028 > needs update for python3.11: conflicting subparser: another-fake-action > > python-novaclient: > https://bugs.debian.org/1025110 > needs update for python3.11: conflicting subparser > > python-sushy: > https://bugs.debian.org/1025124 > needs update for python3.11: AssertionError: expected call not found > > python-taskflow: > https://bugs.debian.org/1025126 > needs update for python3.11: MismatchError > > Cheers, > > Thomas Goirand (zigo) > From elfosardo at gmail.com Tue Dec 6 16:29:30 2022 From: elfosardo at gmail.com (Riccardo Pittau) Date: Tue, 6 Dec 2022 17:29:30 +0100 Subject: [all] many Python 3.11 failures in OpenStack components In-Reply-To: References: <46d96098-67ce-5ea1-6d7a-26e9103594f4@debian.org> Message-ID: Hi, Thanks for reporting this I'm checking the ironic related ones Ciao, Riccardo On Tue, Dec 6, 2022 at 4:29 PM Michael Johnson wrote: > FYI, > > The Octavia team has looked at the Taskflow issue and proposed a patch: > https://review.opendev.org/c/openstack/taskflow/+/866610 > > Michael > > On Mon, Dec 5, 2022 at 7:46 AM Thomas Goirand wrote: > > > > Hi, > > > > The below bugs have been reported against the OpenStack components in > > Debian. I would greatly appreciate help fixing these before Bookworm is > > in freeze (in January). If you fix a bug, it'd be great to add a link in > > the Debian bug, so I can see the URL of the review (even if the patch > > isn't merged yet). > > > > As a reminder: to write to a Debian bug, simply write to > > @bugs.debian.org (the Debian BTS is email driven). > > > > networking-mlnx: > > https://bugs.debian.org/1021924 > > FTBFS test failures: sqlalchemy.exc.InvalidRequestError: A transaction > > is already begun on this Session. > > > > networking-l2gw: > > https://bugs.debian.org/1023764 > > autopkgtest failure: module 'neutron.common.config' has no attribute > > 'register_common_config_options' > > > > ironic: > > https://bugs.debian.org/1024783 > > needs update for python3.11: Cannot spec a Mock object > > > > ironic-inspector: > > https://bugs.debian.org/1024952 > > needs update for python3.11: Failed to resolve the hostname (meow) for > > node uuid1 > > > > mistral-dashboard: > > https://bugs.debian.org/1024958 > > needs update for python3.11: module 'inspect' has no attribute > > 'getargspec'. Did you mean: 'getargs'? > > > > murano: > > https://bugs.debian.org/1025013 > > needs update for python3.11: 'NoneType' object has no attribute > > 'object_store' > > > > python-cinderclient: > > https://bugs.debian.org/1025028 > > needs update for python3.11: conflicting subparser: another-fake-action > > > > python-novaclient: > > https://bugs.debian.org/1025110 > > needs update for python3.11: conflicting subparser > > > > python-sushy: > > https://bugs.debian.org/1025124 > > needs update for python3.11: AssertionError: expected call not found > > > > python-taskflow: > > https://bugs.debian.org/1025126 > > needs update for python3.11: MismatchError > > > > Cheers, > > > > Thomas Goirand (zigo) > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at gmail.com Tue Dec 6 17:33:04 2022 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Tue, 6 Dec 2022 18:33:04 +0100 Subject: [openstack-ansible] CI is broken - hold on rechecks In-Reply-To: References: Message-ID: Hi everyone, Gates are fixed now, so feel free to issue recheck/rebase on previously failed patches. Don't forget about providing a reason for recheck in your comment! Thanks to everyone involved in solving this ??, 2 ???. 2022 ?. ? 17:48, Dmitriy Rabotyagov : > > Hi folks, > > After merging [1] all our CI tests seem broken until we merge [2], so > please hold on all your rechecks. With that being said, we need to > focus on landing 862924 or merge temporary workaround to the > integrated repo to define neutron_ml2_drivers_type and > neutron_plugin_base for other checks to pass without [2]. > > > [1] https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/865961 > [2] https://review.opendev.org/c/openstack/openstack-ansible/+/862924 From noonedeadpunk at gmail.com Tue Dec 6 17:35:24 2022 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Tue, 6 Dec 2022 18:35:24 +0100 Subject: [openstack-ansible] Meetings on December 27 and January 3 cancelled Message-ID: Hi folks, During today's meeting we have agreed on cancelling our regular weekly meetings on December 27 2022 and January 3 2022. We still have meetings on December 13 and December 20 before this small break. From katonalala at gmail.com Tue Dec 6 19:16:39 2022 From: katonalala at gmail.com (Lajos Katona) Date: Tue, 6 Dec 2022 20:16:39 +0100 Subject: [all] many Python 3.11 failures in OpenStack components In-Reply-To: References: <46d96098-67ce-5ea1-6d7a-26e9103594f4@debian.org> Message-ID: Hi,Brian (haleyb) is the winner: https://review.opendev.org/c/x/networking-l2gw/+/866620 Lajos Katona ezt ?rta (id?pont: 2022. dec. 6., K, 15:08): > Hi, > Thanks for the info, I will check networking-l2gw. > > Lajos (lajoskatona) > > Thomas Goirand ezt ?rta (id?pont: 2022. dec. 5., H, > 16:49): > >> Hi, >> >> The below bugs have been reported against the OpenStack components in >> Debian. I would greatly appreciate help fixing these before Bookworm is >> in freeze (in January). If you fix a bug, it'd be great to add a link in >> the Debian bug, so I can see the URL of the review (even if the patch >> isn't merged yet). >> >> As a reminder: to write to a Debian bug, simply write to >> @bugs.debian.org (the Debian BTS is email driven). >> >> networking-mlnx: >> https://bugs.debian.org/1021924 >> FTBFS test failures: sqlalchemy.exc.InvalidRequestError: A transaction >> is already begun on this Session. >> >> networking-l2gw: >> https://bugs.debian.org/1023764 >> autopkgtest failure: module 'neutron.common.config' has no attribute >> 'register_common_config_options' >> >> ironic: >> https://bugs.debian.org/1024783 >> needs update for python3.11: Cannot spec a Mock object >> >> ironic-inspector: >> https://bugs.debian.org/1024952 >> needs update for python3.11: Failed to resolve the hostname (meow) for >> node uuid1 >> >> mistral-dashboard: >> https://bugs.debian.org/1024958 >> needs update for python3.11: module 'inspect' has no attribute >> 'getargspec'. Did you mean: 'getargs'? >> >> murano: >> https://bugs.debian.org/1025013 >> needs update for python3.11: 'NoneType' object has no attribute >> 'object_store' >> >> python-cinderclient: >> https://bugs.debian.org/1025028 >> needs update for python3.11: conflicting subparser: another-fake-action >> >> python-novaclient: >> https://bugs.debian.org/1025110 >> needs update for python3.11: conflicting subparser >> >> python-sushy: >> https://bugs.debian.org/1025124 >> needs update for python3.11: AssertionError: expected call not found >> >> python-taskflow: >> https://bugs.debian.org/1025126 >> needs update for python3.11: MismatchError >> >> Cheers, >> >> Thomas Goirand (zigo) >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed Dec 7 00:25:21 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 06 Dec 2022 16:25:21 -0800 Subject: [all][tc] Technical Committee next weekly meeting on 2022 Dec 7 at 1600 UTC In-Reply-To: <184e5451516.113355438335028.5426387386435969570@ghanshyammann.com> References: <184e5451516.113355438335028.5426387386435969570@ghanshyammann.com> Message-ID: <184e9f90205.da2aa6ad395630.8226678717705012469@ghanshyammann.com> Hello Everyone, Below is the agenda for the TC meeting scheduled on Dec 7 at 1600 UTC. Location: Zoom video call Details: https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting * Roll call * Follow up on past action items * Gate health check * 2023.1 TC tracker checks: ** https://etherpad.opendev.org/p/tc-2023.1-tracker *Adjutant situation (not active) ** Last change merged Oct 26, 2021 (more than 1 year back) ** Gate is broken ** https://review.opendev.org/c/openstack/governance/+/849153 * Recurring tasks check ** Bare 'recheck' state *** https://etherpad.opendev.org/p/recheck-weekly-summary * Open Reviews ** https://review.opendev.org/q/projects:openstack/governance+is:open -gmann ---- On Mon, 05 Dec 2022 18:30:20 -0800 Ghanshyam Mann wrote --- > Hello Everyone, > > The technical Committee's next weekly meeting is scheduled for 2022 Dec 7, at 1600 UTC. > > If you would like to add topics for discussion, please add them to the below wiki page by > Tuesday, Dec 6 at 2100 UTC. > > https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting > > -gmann > > > From tkajinam at redhat.com Wed Dec 7 04:56:51 2022 From: tkajinam at redhat.com (Takashi Kajinami) Date: Wed, 7 Dec 2022 13:56:51 +0900 Subject: [heat][release] Proposing to EOL Rocky and Stein In-Reply-To: <0DAB2EDB-BCE1-401E-BCAD-00BDDB7DF76D@redhat.com> References: <0DAB2EDB-BCE1-401E-BCAD-00BDDB7DF76D@redhat.com> Message-ID: +1 We haven't seen any interest to backport fixes to these two old branches. We've spent some amount of effort to keep CI for the stable/train branch but stable/stein and stable/rocky lack anyone interested in maintenance and CI jobs in these branches have been broken for a while. So it'd make sense to EOL these two branches. On Wed, Nov 30, 2022 at 8:25 PM Brendan Shephard wrote: > Hi, > > We were discussing some of the older branches we have and thought it was > about time we start moving some of them to EOL. > > Initially, I would like to move Rocky and Stein to EOL and have done so > here: > review.opendev.org > > [image: favicon.ico] > > > > > We would also like to move Train to EOL as well, pending a few changes > being merged. I wanted to reach out and ensure there is no objections here > to any of these branches being EOL?d. Feel free to voice any concerns, > otherwise I will move forward with that next week. > > Cheers, > > Brendan Shephard > Senior Software Engineer > Red Hat Australia > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: favicon.ico Type: application/octet-stream Size: 5430 bytes Desc: not available URL: From ralonsoh at redhat.com Wed Dec 7 10:52:21 2022 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Wed, 7 Dec 2022 11:52:21 +0100 Subject: [neutron] Issue during the EOL of Queens, Rocky and Stein branches Message-ID: Hello all: In [1], the Neutron team transitioned all projects to EOL. However, because of a loose patch merged after the branch freeze, the automated tools can't finish the EOL task. Yesterday, during the Neutron team meeting [3], the attendants agreed to skip this last patch (that changes one line in the documentation) and comment on the affected stable release patches. This mail is to confirm that and to ask the QA team to manually perform this action. We'll be more careful in the future. Regards. [1]https://review.opendev.org/c/openstack/releases/+/862937 [2]https://review.opendev.org/q/Ie8dd684a7b79b0a322b1f2d17fffb4d58cfe94fc [3] https://meetings.opendev.org/meetings/networking/2022/networking.2022-12-06-14.01.log.html#l-129 -------------- next part -------------- An HTML attachment was scrubbed... URL: From axel.vanzaghi at ovhcloud.com Wed Dec 7 10:58:44 2022 From: axel.vanzaghi at ovhcloud.com (Axel Vanzaghi) Date: Wed, 7 Dec 2022 10:58:44 +0000 Subject: [mistral] Mistral maintenance and PTL Message-ID: <2beb7fb74db348f9a22d8e6daea23004@ovhcloud.com> Hello everyone, Let me introduce myself, Axel Vanzaghi, first used OpenStack in 2015 and got back to it when I joined OVHCloud as SRE. Here at OVHCloud, we make an intensive use of Mistral in our internals processes, so we don't want to see it going deprecated [1] or abandonned. We decided to take action to maintain it, first by proposing someone to be the PTL for Mistral [2]. I proposed myself for this role. Yet, my experience with Mistral is only being an user of it, I worked on some of our workflow. However, I really liked what we could do with the tool so I'm willing to learn more and contribute on this project as I will be able to have time for it. Our first goal is to maintain the project alive, but new features will still be welcomed. Regards, Axel Vanzaghi [1] https://review.opendev.org/c/openstack/governance/+/866562 [2] https://lists.openstack.org/pipermail/openstack-discuss/2021-February/020151.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From senrique at redhat.com Wed Dec 7 11:03:09 2022 From: senrique at redhat.com (Sofia Enriquez) Date: Wed, 7 Dec 2022 11:03:09 +0000 Subject: [cinder] Bug Report from 12-07-2022 Message-ID: This is a bug report from 11-30-2022 to 12-07-2022. Agenda: https://etherpad.opendev.org/p/cinder-bug-squad-meeting ----------------------------------------------------------------------------------------- Medium - https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1025028 "[Debian] python-cinderclient: (autopkgtest) needs update for python3.11: conflicting subparser: another-fake-action." - https://bugs.launchpad.net/cinder/+bug/1998083 "volume_attachement entries are not getting deleted from DB." Unassigned. Marked as duplicate - https://bugs.launchpad.net/cinder/+bug/1998369 "[NFS] Creating an encrypted volume from an image overwrites the volume file with a broken symlink." Cheers, -- Sof?a Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-francois.taltavull at elca.ch Wed Dec 7 11:44:33 2022 From: jean-francois.taltavull at elca.ch (=?iso-8859-1?Q?Taltavull_Jean-Fran=E7ois?=) Date: Wed, 7 Dec 2022 11:44:33 +0000 Subject: [openstack-ansible] Environment variable overflow on Ubuntu 20,04 Message-ID: Hello, The 'lxc_container_create' role appends the content of the 'global_environment_variables' OSA variable to '/etc/environment', which is used by the library 'libpam'. On Ubuntu 20.04 and 'libpam-runtime' library version 1.3.1, this works fine unless one of the environment variables content size exceeds 1024 char. In this case, the variable content is truncated. In our OpenStack deployment, this is the variable 'no_proxy' that overflows. This limit has been raised to 8192 with 'libpam-runtime' version 1.4 but there is no backport available in Ubuntu 20.04 repositories. So what would you think of the following workaround: instead of using '/etc/environment' to store the global variables, the role would create a shell script in '/etc/profile.d' thus avoiding the variable truncation issue related to the 'libpam' library ? This works on Ubuntu, Debian and CentOS hosts and containers. Cheers, JF From jonathan.rosser at rd.bbc.co.uk Wed Dec 7 12:43:35 2022 From: jonathan.rosser at rd.bbc.co.uk (Jonathan Rosser) Date: Wed, 7 Dec 2022 12:43:35 +0000 Subject: [openstack-ansible] Environment variable overflow on Ubuntu 20, 04 In-Reply-To: References: Message-ID: <093afa5e-3a14-ccbf-2e3c-2f46bf439450@rd.bbc.co.uk> Hi JF, We have some clear documentation for this case here https://docs.openstack.org/openstack-ansible/latest/user/limited-connectivity/index.html and a discussion of the pros and cons of making system-wide proxy configuration. Once the deployment reaches a certain size it is necessary to use transient "deployment_environment_variables" instead of persistent "global_environment_variables" to manage no_proxy, both in terms of the length of the no_proxy environment variable but also the need to manage the contents of that variable across all hosts / containers and running services for any changes made to the control plane. From experience I can say that it is much much easier to selectively enable the use of a proxy during ansible tasks with deployment_environment_variables than it is to prevent the use of global proxy configuration through no_proxy in the very large number of cases that you do not want it (basically the whole runtime of your openstack services, loadbalancer, .....). Regarding the addition of an extra script in /etc/profile.d, I would not be in favor of adding this as we already have a very reliable way to deploy behind an http proxy using deployment_environment_variables. There is great benefit in not leaving any "residual" proxy configuration on the hosts or containers. Hopefully this is helpful, Jonathan. On 07/12/2022 11:44, Taltavull Jean-Fran?ois wrote: > Hello, > > The 'lxc_container_create' role appends the content of the 'global_environment_variables' OSA variable to '/etc/environment', which is used by the library 'libpam'. > > On Ubuntu 20.04 and 'libpam-runtime' library version 1.3.1, this works fine unless one of the environment variables content size exceeds 1024 char. In this case, the variable content is truncated. In our OpenStack deployment, this is the variable 'no_proxy' that overflows. > > This limit has been raised to 8192 with 'libpam-runtime' version 1.4 but there is no backport available in Ubuntu 20.04 repositories. > > So what would you think of the following workaround: instead of using '/etc/environment' to store the global variables, the role would create a shell script in '/etc/profile.d' thus avoiding the variable truncation issue related to the 'libpam' library ? > > This works on Ubuntu, Debian and CentOS hosts and containers. > > Cheers, > JF > > From arnaud.morin at gmail.com Wed Dec 7 13:25:36 2022 From: arnaud.morin at gmail.com (Arnaud Morin) Date: Wed, 7 Dec 2022 13:25:36 +0000 Subject: [tc][mistral][release] Propose to deprecate Mistral In-Reply-To: References: Message-ID: Hey all, With Axel [1], we propose to maintain the Mistral development. This is new to us, so we will need help from the community but we really want to be fully commited so mistral will continue beeing maintained under openinfra. If you think we are too late for antelope, this can maybe happen for the next release? Cheers, [1] https://lists.openstack.org/pipermail/openstack-discuss/2022-December/031417.html On 05.12.22 - 10:52, El?d Ill?s wrote: > Hi, > > Mistral projects are unfortunately not actively maintained and caused hard times in latest official series releases for release management team. Thus we discussed this and decided to propose to deprecate Mistral [1] to avoid broken releases and last minute debugging of gate issues (which usually fall to release management team), and remove mistral projects from 2023.1 Antelope release [2]. > > We would like to ask @TC to evaluate the situation and review our patches (deprecation [1] + removal from the official release [2]). > > Thanks in advance, > > El?d Ill?s > irc: elodilles @ #openstack-release > > [1] https://review.opendev.org/c/openstack/governance/+/866562 > [2] https://review.opendev.org/c/openstack/releases/+/865577 From bkslash at poczta.onet.pl Wed Dec 7 14:27:31 2022 From: bkslash at poczta.onet.pl (A Tom) Date: Wed, 7 Dec 2022 15:27:31 +0100 Subject: [Horizon] How to enforce particular language on login Message-ID: <2BEEB019-7AC3-44CC-B7DB-5C68DAD3404A@poczta.onet.pl> Hi everyone, I have a question regarding languages in Horizon - is there any way to enforce different language for different users - I mean without user interaction. Like user account is created and since then the default language for the user is set. Can it be done via Horizon API/config? Or maybe as Domain/Project properties? And second question - what about two different Horizons to one openstack? I mean every resorces are the same, but first Horizon has FQDN X-company.YY and is branded with X-company logo, and the other one has FQDN Y-company.ZZ and is branded wit Y-company logo - is such setup possible/right? OR maybe also two domains should be used? Best regards Adam Tomas From thierry at openstack.org Wed Dec 7 15:19:47 2022 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 7 Dec 2022 16:19:47 +0100 Subject: [largescale-sig] Next meeting: Dec 7, 15utc In-Reply-To: References: Message-ID: <48a31230-ba68-9c1a-10af-c1ff3d3579e4@openstack.org> Here is the summary of our SIG meeting today. Following the cancellation of the Dec 8 OpenInfra Live episode, we planned another episode for January 26, featuring Ubisoft. You can read the detailed meeting logs at: https://meetings.opendev.org/meetings/large_scale_sig/2022/large_scale_sig.2022-12-07-15.00.html Our next IRC meeting will be January 11, at 1500utc on #openstack-operators on OFTC. From then on it will be on even weeks. Regards, -- Thierry Carrez (ttx) From rdhasman at redhat.com Wed Dec 7 15:48:32 2022 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Wed, 7 Dec 2022 21:18:32 +0530 Subject: [cinder] volume_attachement entries are not getting deleted from DB In-Reply-To: References: Message-ID: Hi Hemant, Sorry for getting back to this so late. The problem you've described for instance actions like delete, resize or shelve not deleting the cinder attachment entries. I'm not sure about resize and shelve but deleting an instance should delete the volume attachment entry. I suggest you check nova logs for the calls it makes to cinder for attachment delete operation and see if there are any errors related to it. Ideally it should work but due to some issue, there might be failures. If you find a failure in nova logs related to attachment calls, you can go ahead to check cinder logs (api and volume) to see possible issues why the attachment wasn't getting deleted on the cinder side. - Rajat Dhasmana On Tue, Nov 29, 2022 at 2:07 PM Hemant Sonawane wrote: > Hello Sofia, > > Thank you for taking it into consideration. Do let me know if you have any > questions and updates on the same. > > On Mon, 28 Nov 2022 at 18:12, Sofia Enriquez wrote: > >> Hi Hemant, >> >> Thanks for reporting this issue on the bug tracker >> https://bugs.launchpad.net/cinder/+bug/1998083 >> >> I did a quick search and no problems with shelving operations have been >> reported for at least the last two years.I'll bring this bug to the cinder >> bug meeting this week. >> >> Thanks >> Sofia >> >> On Fri, Nov 25, 2022 at 1:15 PM Hemant Sonawane >> wrote: >> >>> Hi Rajat, >>> It's not about deleting attachments entries but the normal operations >>> from horizon or via cli does not work because of that. So it really needs >>> to be fixed to perform resize, shelve unshelve operations. >>> >>> Here are the detailed attachment entries you can see for the shelved >>> instance. >>> >>> >>> >>> +--------------------------------------+--------------------------------------+--------------------------+---------------------------------- >>> *----+---------------+-----------------------------------------**??* >>> >>> *--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>> ??* >>> >>> *| id | volume_id >>> | attached_host | instance_uuid | >>> attach_status | connector ??* >>> >>> * >>> >>> | ??* >>> >>> >>> *+--------------------------------------+--------------------------------------+--------------------------+--------------------------------------+---------------+-----------------------------------------??* >>> >>> *--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>> ??* >>> >>> *| 8daddacc-8fc8-4d2b-a738-d05deb20049f | >>> 67ea3a39-78b8-4d04-a280-166acdc90b8a | nfv1compute43.nfv1.o2.cz >>> | 9266a2d7-9721-4994-a6b5-6b3290862dc6 | >>> attached | {"platform": "x86_64", "os_type": "linux??* >>> >>> *", "ip": "10.42.168.87", "host": "nfv1compute43.nfv1.o2.cz >>> ", "multipath": false, "do_local_attach": >>> false, "system uuid": "65917e4f-c8c4-a2af-ec11-fe353e13f4dd", "mountpoint": >>> "/dev/vda"} | ??* >>> >>> *| d3278543-4920-42b7-b217-0858e986fcce | >>> 67ea3a39-78b8-4d04-a280-166acdc90b8a | NULL | >>> 9266a2d7-9721-4994-a6b5-6b3290862dc6 | reserved | NULL >>> ??* >>> >>> * >>> >>> | ??* >>> >>> >>> *+--------------------------------------+--------------------------------------+--------------------------+--------------------------------------+---------------+-----------------------------------------??* >>> >>> *--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>> ??* >>> >>> *2 rows in set (0.00 sec) * >>> >>> >>> for e.g if I would like to unshelve this instance it wont work as it has >>> a duplicate entry in cinder db for the attachment. So i have to delete it >>> manually from db or via cli >>> >>> *root at master01:/home/hemant# cinder --os-volume-api-version 3.27 >>> attachment-list --all | grep 67ea3a39-78b8-4d04-a280-166acdc90b8a >>> ??* >>> >>> *| 8daddacc-8fc8-4d2b-a738-d05deb20049f | >>> 67ea3a39-78b8-4d04-a280-166acdc90b8a | attached | >>> 9266a2d7-9721-4994-a6b5-6b3290862dc6 | >>> ??* >>> >>> *| d3278543-4920-42b7-b217-0858e986fcce | >>> 67ea3a39-78b8-4d04-a280-166acdc90b8a** | reserved | >>> 9266a2d7-9721-4994-a6b5-6b3290862dc6 |* >>> >>> *cinder --os-volume-api-version 3.27 >>> attachment-delete 8daddacc-8fc8-4d2b-a738-d05deb20049f* >>> >>> this is the only choice I have if I would like to unshelve vm. But this >>> is not a good approach for production envs. I hope you understand me. >>> Please feel free to ask me anything if you don't understand. >>> >>> >>> >>> On Fri, 25 Nov 2022 at 13:20, Rajat Dhasmana >>> wrote: >>> >>>> Hi Hemant, >>>> >>>> If your final goal is to delete the attachment entries in the cinder >>>> DB, we have attachment APIs to perform these tasks. The command useful for >>>> you is attachment list[1] and attachment delete[2]. >>>> Make sure you pass the right microversion i.e. 3.27 to be able to >>>> execute these operations. >>>> >>>> Eg: >>>> cinder --os-volume-api-version 3.27 attachment-list >>>> >>>> [1] >>>> https://docs.openstack.org/python-cinderclient/latest/cli/details.html#cinder-attachment-list >>>> [2] >>>> https://docs.openstack.org/python-cinderclient/latest/cli/details.html#cinder-attachment-delete >>>> >>>> On Fri, Nov 25, 2022 at 5:44 PM Hemant Sonawane < >>>> hemant.sonawane at itera.io> wrote: >>>> >>>>> Hello >>>>> I am using wallaby release openstack and having issues with cinder >>>>> volumes as once I try to delete, resize or unshelve the shelved vms the >>>>> volume_attachement entries do not get deleted in cinder db and therefore >>>>> the above mentioned operations fail every time. I have to delete these >>>>> volume_attachement entries manually then it works. Is there any way to fix >>>>> this issue ? >>>>> >>>>> nova-compute logs: >>>>> >>>>> cinderclient.exceptions.ClientException: Unable to update >>>>> attachment.(Invalid volume: duplicate connectors detected on volume >>>>> >>>>> Help will be really appreciated Thanks ! >>>>> -- >>>>> Thanks and Regards, >>>>> >>>>> Hemant Sonawane >>>>> >>>>> >>> >>> -- >>> Thanks and Regards, >>> >>> Hemant Sonawane >>> >>> >> >> -- >> >> Sof?a Enriquez >> >> she/her >> >> Software Engineer >> >> Red Hat PnT >> >> IRC: @enriquetaso >> @RedHat Red Hat >> Red Hat >> >> >> >> > > -- > Thanks and Regards, > > Hemant Sonawane > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lkuchlan at redhat.com Wed Dec 7 16:05:30 2022 From: lkuchlan at redhat.com (Liron Kuchlani) Date: Wed, 7 Dec 2022 18:05:30 +0200 Subject: [Manila] S-RBAC Hack-a-thon In-Reply-To: References: Message-ID: Thank you guys for joining. For those who could not join, here's the recording : https://bluejeans.com/s/ro9bd1tGJ2b On Mon, Dec 5, 2022 at 8:00 AM Liron Kuchlani wrote: > Hi everyone, > > > We are planning a Hack-a-thon for secure RBAC between Wed 12/07 to > Thur 12/08 with Fri 12/09 as needed for code reviews. > > > The session will take place between 15:00 UTC and 15:30 UTC. > > > Join meeting: https://bluejeans.com/8520212381 > > > > Please find below helpful references and links, > > > Openstack S-RBAC docs [0] > > Manila-tempest-plugin S-RBAC tests [1] > > S-RBAC collab review [2] > > Manila Secure RBAC Antelope PTG etherpad [3] > > Openstack Shared File Systems API documentation [4] > > S-RBAC code reviews [5] > > S-RBAC Kanban [6] > > > > > [0] https://docs.openstack.org/patrole/latest/rbac-overview.html > > [1] > https://github.com/openstack/manila-tempest-plugin/tree/master/manila_tempest_tests/tests/rbac > > [2] https://etherpad.opendev.org/p/Manila-S-RBAC-collaborative-review > > [3] https://etherpad.opendev.org/p/antelope-ptg-manila-srbac > > [4] https://docs.openstack.org/api-ref/shared-file-system > > [5] > https://review.opendev.org/q/topic:secure-rbac+project:openstack/manila-tempest-plugin > > [6] > https://tree.taiga.io/project/silvacarloss-manila-tempest-plugin-rbac/kanban > > > > Hope you can all join us, > > > -- > Thanks, > Liron > > -- Thanks, Liron -------------- next part -------------- An HTML attachment was scrubbed... URL: From jay at gr-oss.io Wed Dec 7 16:06:14 2022 From: jay at gr-oss.io (Jay Faulkner) Date: Wed, 7 Dec 2022 08:06:14 -0800 Subject: [tc][mistral][release] Propose to deprecate Mistral In-Reply-To: References: Message-ID: Maintaining an entire project, especially catching it up after it's been neglected, is a serious amount of work. Are you (and your employer) committing to do this work? Are there any other interested parties that could keep Mistral maintained if you were to move on? Just wanting to ensure we're going to have the project setup for long-term support, given we promise each release will be supported for years to come. Thanks, Jay On Wed, Dec 7, 2022 at 5:34 AM Arnaud Morin wrote: > Hey all, > > With Axel [1], we propose to maintain the Mistral development. > > This is new to us, so we will need help from the community but we really > want to be fully commited so mistral will continue beeing maintained > under openinfra. > > If you think we are too late for antelope, this can maybe happen for the > next release? > > Cheers, > > [1] > https://lists.openstack.org/pipermail/openstack-discuss/2022-December/031417.html > > On 05.12.22 - 10:52, El?d Ill?s wrote: > > Hi, > > > > Mistral projects are unfortunately not actively maintained and caused > hard times in latest official series releases for release management team. > Thus we discussed this and decided to propose to deprecate Mistral [1] to > avoid broken releases and last minute debugging of gate issues (which > usually fall to release management team), and remove mistral projects from > 2023.1 Antelope release [2]. > > > > We would like to ask @TC to evaluate the situation and review our > patches (deprecation [1] + removal from the official release [2]). > > > > Thanks in advance, > > > > El?d Ill?s > > irc: elodilles @ #openstack-release > > > > [1] https://review.opendev.org/c/openstack/governance/+/866562 > > [2] https://review.opendev.org/c/openstack/releases/+/865577 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From adivya1.singh at gmail.com Wed Dec 7 19:15:11 2022 From: adivya1.singh at gmail.com (Adivya Singh) Date: Thu, 8 Dec 2022 00:45:11 +0530 Subject: (PXE FAILURE)using dnsmasq Message-ID: Hi Team. i m trying to setup PXE using dnsmasq, it fails on "Address listen on port 53" but i don't see any IP address listen on this port This IP 10.128.244.2 in the interface IP of the Infrastructure server where PXE is ported named as eno2 my dnsmasq.conf looks like this, All other setting are as per PXE configuration a default file under pxelinux.cfg and a installer in default, and a user-data file under the /var/www/html no-resolv interface=eno2 listen-address=10.128.244.2 dhcp-range=10.128.244.102,10.128.244.110,6h dhcp-boot=pxelinux/pxelinux.0 enable-tftp tftp-root=/var/lib/tftpboot Anyone having any idea on this error Regards Adivya Singh -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Dec 7 19:19:42 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 7 Dec 2022 19:19:42 +0000 Subject: [dev][infra][qa][tact-sig] Tox 4.0.0 breaking changes Message-ID: <20221207191941.bft3np4yndrcjspa@yuggoth.org> Tox 4.0.0, a major rewrite, just appeared on PyPI in the past couple of hours, so jobs are breaking right and left. We're going to try pinning to latest 3.x in zuul-jobs for the near term while we work through some of the more obvious issues with it, but until then lots of tox-based Zuul jobs are going to be broken. I'll follow up to this thread once we think things should be stabilized and we can start trying to pick through what's going to need fixing for the new tox version. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Wed Dec 7 20:26:07 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 7 Dec 2022 20:26:07 +0000 Subject: [dev][infra][qa][tact-sig] Tox 4.0.0 breaking changes In-Reply-To: <20221207191941.bft3np4yndrcjspa@yuggoth.org> References: <20221207191941.bft3np4yndrcjspa@yuggoth.org> Message-ID: <20221207202606.al2gupwvft7oi3sb@yuggoth.org> On 2022-12-07 19:19:42 +0000 (+0000), Jeremy Stanley wrote: > Tox 4.0.0, a major rewrite, just appeared on PyPI in the past couple > of hours, so jobs are breaking right and left. We're going to try > pinning to latest 3.x in zuul-jobs for the near term [...] The ensure-tox version pin in zuul-jobs is in place and seems to be working for now, so it should be safe to recheck any jobs which failed from this. We'll continue trying to figure out what new tox is breaking in jobs, but help is always appreciated of course! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From grajesh.ec at gmail.com Wed Dec 7 21:17:29 2022 From: grajesh.ec at gmail.com (Rajesh Gunasekaran) Date: Wed, 7 Dec 2022 22:17:29 +0100 Subject: [IRC Client] [Setup in Macbook] Best Practice to follow Message-ID: Hello Team, I am really struggling with setting up the OpenStack IRC client in my new macbook pro! I made the switch from Windows to MAC recently. Could someone please advise what is the best practice I should follow to stay connected with #openstack community channels! in IRC while using mac -- Thanks, Rajesh From amy at demarco.com Wed Dec 7 21:59:07 2022 From: amy at demarco.com (Amy Marrich) Date: Wed, 7 Dec 2022 15:59:07 -0600 Subject: [IRC Client] [Setup in Macbook] Best Practice to follow In-Reply-To: References: Message-ID: Rajesh, I use Limechat for IRC on my MAC, there's other IRC apps you can install as well.. Amy (spotz) On Wed, Dec 7, 2022 at 3:23 PM Rajesh Gunasekaran wrote: > Hello Team, > > I am really struggling with setting up the OpenStack IRC client in my > new macbook pro! > I made the switch from Windows to MAC recently. > Could someone please advise what is the best practice I should follow > to stay connected with #openstack community channels! in IRC while > using mac > -- > Thanks, > Rajesh > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jay at gr-oss.io Wed Dec 7 22:00:22 2022 From: jay at gr-oss.io (Jay Faulkner) Date: Wed, 7 Dec 2022 14:00:22 -0800 Subject: [ironic][stable] Cleanup of clerical issues in ironic-stable-maint group Message-ID: Hey all, I was looking over Ironic ACLs in Gerrit today, and I discovered the ironic-stable-maint group which is set to include ironic-core and stable-maint-core (as expected) -- but I also discovered a member in this list who is no longer a core reviewer, or active contributor to Ironic (Jim Rollenhagen). Lucky for us, Jim is a nice guy and we all trust him, and this access hasn't been used. In order to prevent this from happening in the future, I've removed all individual members from the ironic-stable-maint group and left it inheriting from ironic-core. The effective difference is just that Jim has been removed. tl;dr: The effective ironic-stable-maint core group has been dropped by one; Jim Rollenhagen was mistakenly left in the list. Thanks, Jay Faulkner -------------- next part -------------- An HTML attachment was scrubbed... URL: From jay at gr-oss.io Wed Dec 7 22:03:35 2022 From: jay at gr-oss.io (Jay Faulkner) Date: Wed, 7 Dec 2022 14:03:35 -0800 Subject: [ironic][release] Bugfix branch status and cleanup w/r/t zuul-config-errors In-Reply-To: References: Message-ID: Elod, thanks for the insight. We discussed this at Monday's Ironic meeting and decided to create a small, more limited group to gain this access. I've taken the first step to enable this access via this merge request: https://review.opendev.org/admin/groups/0c53b8f80897aa9e7cee7347e4710bd9b8bdfbd2,members -- once it's completed, I'll ask gerrit admins to give me access to modify membership. Once that's done, the Ironic cores who will be taking this access are me+our release liaisons, Dmitry, Iury, and Riccardo. Thanks, Jay Faulkner On Thu, Dec 1, 2022 at 10:04 AM El?d Ill?s wrote: > Hi, > > TL;DR: for a quick solution I suggest to add the ACL [1] for the ironic > cores, something like this: > > [access "refs/heads/bugfix/*"] > delete = = group ironic-core > > The more lengthy answer: as far as I know, bugfix branches were allowed > based on ironic team's request, but with the conditions, that > > - it is out of Release Management scope > - no official releases will be made out of these branches > > So basically it's completely the Ironic team's responsibility. (More > senior @Release Managers: correct me if i am wrong) > > Nevertheless, time to time, there seem to be a need for 'EOL'ing / > deleting branches that are not in the releases repo's deliverables/* > directory (for example: projects opened a stable branch manually; no tags / > releases were administered; branches were reopened accidentally; given > project was not part of the official series release; etc). These needs to > be handled out of the releases repo 'deliverables' directory, thus needs > different handling & tools. And this task is not really coupled with > release management, and also note, that the release management team is a > very small team nowadays. > > Anyway, as a general stable maintainer core, I'm also thinking about how > to solve this issue (old, obsolete branch deletion) in a safe way, though > I'm not there yet to have any concrete solution. Ideas are welcome ? . > Until a solution is not found, the quickest solution is I think is to add > the above ACL extension to the ironic.config [1]. > > Cheers, > > El?d Ill?s > irc: elodilles @ #openstack-release #openstack-stable > > [1] > https://opendev.org/openstack/project-config/src/commit/e92af53c10a811f4370cdae7436f0a5354683d7c/gerrit/acls/openstack/ironic.config#L11 > > ------------------------------ > *From:* Jay Faulkner > *Sent:* Tuesday, November 8, 2022 11:33 PM > *To:* OpenStack Discuss > *Subject:* Re: [ironic][release] Bugfix branch status and cleanup w/r/t > zuul-config-errors > > Does anyone from the Releases team want to chime in on the best way to > execute this kind of change? > > -JayF > > On Wed, Nov 2, 2022 at 7:02 AM Jay Faulkner wrote: > > > > On Wed, Nov 2, 2022 at 3:04 AM Dmitry Tantsur wrote: > > Hi Jay, > > On Tue, Nov 1, 2022 at 8:17 PM Jay Faulkner wrote: > > Hey all, > > I've been looking into the various zuul config errors showing up for > Ironic-program branches. Almost all of our old bugfix branches are in the > list. Additionally, not properly retiring the bugfix branches leads to an > ever-growing list of branches which makes it a lot more difficult, for > contributors and operators alike, to tell which ones are currently > supported. > > > I'd like to see the errors. We update Zuul configuration manually for each > bugfix branch, mapping appropriate branches for other projects (devstack, > nova, etc). It's possible that we always overlook a few jobs, which causes > Zuul to be upset (but quietly upset, so we don't notice). > > > > The errors show up in https://zuul.opendev.org/t/openstack/config-errors > -- although they seem to be broken this morning. Most of them are older > bugfix branches, ones that are out of support, that have the `Queue: > Ironic` param that's no longer supported. I am not in favor of anyone going > to dead bugfix branches and fixing CI; instead we should retire the ones > out of use. > > > > I've put together a document describing the situation as it is now, and my > proposal: > https://etherpad.opendev.org/p/IronicBugfixBranchCleanup > > > Going with the "I would like to retire" would cause us so much trouble > that we'll have to urgently create a downstream mirror of them. Once we do > this, using upstream bugfix branches at all will be questionable. > Especially bugfix/19.0 (and corresponding IPA/inspector branches) is used > in a very actively maintained release. > > > > Then we won't; but we do need to think about what timeline we can talk > about upstream for getting a cadence for getting these retired out, just > like we have a cadence for getting them cut every two months. I'll revise > the list and remove the "I would like to retire" section (move it to > keep-em-up). > > > > Essentially, I think we need to: > - identify bugfix branches to cleanup (I've done this in the above > etherpad, but some of the ) > - clean them up (the next step) > - update Ironic policy to set a regular cadence for when to retire bugfix > branches, and encode the process for doing so > > This means there are two overall questions to answer in this email: > 1) Mechanically, what's the process for doing this? I don't believe the > existing release tooling will be useful for this, but I'm not 100% sure. > I've pulled (in the above etherpad and a local spreadsheet) the last SHA > for each branch; so we should be able to EOL these branches similarly to > how we EOL stable branches; except manually instead of with tooling. Who is > going to do this work? (I'd prefer releases team continue to hold the keys > to do this; but I understand if you don't want to take on this manual work). > > > EOL tags will be created by the release team, yes. I don't think we can > get the keys without going "independent". > > > > It's a gerrit ACL you can enable to give other people access to tags; but > like I said, I don't want that access anyway :). > > > > 2) What's the pattern for Ironic to adopt regarding these branches? We > just need to write down the expected lifecycle and enforce it -- so we > prevent being this deep into "branch debt" in the future. > > > With my vendor's (red) hat on, I'd prefer to have a dual approach: the > newest branches are supported by the community (i.e. us all), the oldest - > by vendors who need them (EOLed if nobody volunteers). I think you already > have a list of branches that OCP uses? Feel free to point Riccardo, Iury or > myself at any issues with them. > > > That's not really an option IMO. These branches exist in the upstream > community, and are seen by upstream contributors and operators. If they're > going to live here; they need to have some reasonable documentation about > what folks should expect out of them and efforts being put towards them. > Even if the documentation is "bugfix/1.2 is maintained as long as Product A > 1.2 is maintained", that's better than leaving the community guessing about > what these are used for, and why some are more-supported than others. > > -Jay > > > Dmitry > > > > > What do folks think? > > - > Jay Faulkner > > > > -- > > Red Hat GmbH , Registered seat: Werner von Siemens Ring 12, D-85630 Grasbrunn, Germany > Commercial register: Amtsgericht Muenchen/Munich, HRB 153243,Managing Directors: Ryan Barnhart, Charles Cachera, Michael O'Neill, Amy Ross > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From allison at openinfra.dev Wed Dec 7 22:11:28 2022 From: allison at openinfra.dev (Allison Price) Date: Wed, 7 Dec 2022 16:11:28 -0600 Subject: [IRC Client] [Setup in Macbook] Best Practice to follow In-Reply-To: References: Message-ID: <6139DF48-304E-4DB4-AB5D-09DEE90B540B@openinfra.dev> I can also vouch for IRCCloud. > On Dec 7, 2022, at 3:59 PM, Amy Marrich wrote: > > Rajesh, > > I use Limechat for IRC on my MAC, there's other IRC apps you can install as well.. > > Amy (spotz) > > On Wed, Dec 7, 2022 at 3:23 PM Rajesh Gunasekaran > wrote: > Hello Team, > > I am really struggling with setting up the OpenStack IRC client in my > new macbook pro! > I made the switch from Windows to MAC recently. > Could someone please advise what is the best practice I should follow > to stay connected with #openstack community channels! in IRC while > using mac > -- > Thanks, > Rajesh > -------------- next part -------------- An HTML attachment was scrubbed... URL: From grajesh.ec at gmail.com Wed Dec 7 23:42:24 2022 From: grajesh.ec at gmail.com (Rajesh Gunasekaran) Date: Thu, 8 Dec 2022 00:42:24 +0100 Subject: [IRC Client] [Setup in Macbook] Best Practice to follow In-Reply-To: <6139DF48-304E-4DB4-AB5D-09DEE90B540B@openinfra.dev> References: <6139DF48-304E-4DB4-AB5D-09DEE90B540B@openinfra.dev> Message-ID: Hello Amy, Allison, I managed to connect to the IRC cloud using Limechat on my macbook now. Thank you so much for your support and guidance! - Rajesh On Wed, Dec 7, 2022 at 11:11 PM Allison Price wrote: > > I can also vouch for IRCCloud. > > On Dec 7, 2022, at 3:59 PM, Amy Marrich wrote: > > Rajesh, > > I use Limechat for IRC on my MAC, there's other IRC apps you can install as well.. > > Amy (spotz) > > On Wed, Dec 7, 2022 at 3:23 PM Rajesh Gunasekaran wrote: >> >> Hello Team, >> >> I am really struggling with setting up the OpenStack IRC client in my >> new macbook pro! >> I made the switch from Windows to MAC recently. >> Could someone please advise what is the best practice I should follow >> to stay connected with #openstack community channels! in IRC while >> using mac >> -- >> Thanks, >> Rajesh >> > -- Thanks, Rajesh From axel.vanzaghi at ovhcloud.com Thu Dec 8 08:38:15 2022 From: axel.vanzaghi at ovhcloud.com (Axel Vanzaghi) Date: Thu, 8 Dec 2022 08:38:15 +0000 Subject: [tc][mistral][release] Propose to deprecate Mistral In-Reply-To: References: , Message-ID: Hello, We (me and my employer) are really committing to do this work, it has been discussed and we agreed on this internally. Thing is, this project is currently vital for us so we will maintain it, either we do it with the community or not. We know it's a serious amount of work, more than if we keep it for us, but we think it would be better for everyone if we give back to the community. We also know we are not the only ones using it, someone has seen the discussion about its deprecation and us proposing to maintain it [1], and sent us a mail to tell us he has use cases, and even features and improvements. I'll ask him to stand up in this thread. Regards, Axel [1] https://review.opendev.org/c/openstack/governance/+/866562 ________________________________ From: Jay Faulkner Sent: Wednesday, December 7, 2022 5:06:14 PM To: Arnaud Morin Cc: El?d Ill?s; openstack-discuss at lists.openstack.org Subject: Re: [tc][mistral][release] Propose to deprecate Mistral Maintaining an entire project, especially catching it up after it's been neglected, is a serious amount of work. Are you (and your employer) committing to do this work? Are there any other interested parties that could keep Mistral maintained if you were to move on? Just wanting to ensure we're going to have the project setup for long-term support, given we promise each release will be supported for years to come. Thanks, Jay On Wed, Dec 7, 2022 at 5:34 AM Arnaud Morin > wrote: Hey all, With Axel [1], we propose to maintain the Mistral development. This is new to us, so we will need help from the community but we really want to be fully commited so mistral will continue beeing maintained under openinfra. If you think we are too late for antelope, this can maybe happen for the next release? Cheers, [1] https://lists.openstack.org/pipermail/openstack-discuss/2022-December/031417.html On 05.12.22 - 10:52, El?d Ill?s wrote: > Hi, > > Mistral projects are unfortunately not actively maintained and caused hard times in latest official series releases for release management team. Thus we discussed this and decided to propose to deprecate Mistral [1] to avoid broken releases and last minute debugging of gate issues (which usually fall to release management team), and remove mistral projects from 2023.1 Antelope release [2]. > > We would like to ask @TC to evaluate the situation and review our patches (deprecation [1] + removal from the official release [2]). > > Thanks in advance, > > El?d Ill?s > irc: elodilles @ #openstack-release > > [1] https://review.opendev.org/c/openstack/governance/+/866562 > [2] https://review.opendev.org/c/openstack/releases/+/865577 -------------- next part -------------- An HTML attachment was scrubbed... URL: From vgvoleg at gmail.com Thu Dec 8 09:22:43 2022 From: vgvoleg at gmail.com (Oleg Ovcharuk) Date: Thu, 8 Dec 2022 12:22:43 +0300 Subject: [tc][mistral][release] Propose to deprecate Mistral In-Reply-To: References: Message-ID: Hi everyone, I'm one of mistral core team members. As was mentioned, mistral is also used by other companies (Netcracker). As the community was kinda quiet, we were focused on our personal mistral fork, also we had not enough human resources to keep it up with upstream. But now things have changed - as you may notice by gerrit, we started to push our bugfixes/improvements to upstream and we have a huge list to complete. It's an amazing coincidence that the topic about deprecating mistral was started *after* two companies that are interested in mistral returned for community work. I'm also happy that someone is ready to take responsibility to perform PTL stuff and we will help them as much as we can. Let me know if you have any questions. Best regards, Oleg Ovcharuk ??, 8 ???. 2022 ?. ? 11:46, Axel Vanzaghi : > Hello, > > > We (me and my employer) are really committing to do this work, it has been > discussed and we agreed on this internally. > > > Thing is, this project is currently vital for us so we will maintain it, > either we do it with the community or not. We know it's a serious amount of > work, more than if we keep it for us, but we think it would be better for > everyone if we give back to the community. > > > We also know we are not the only ones using it, someone has seen the > discussion about its deprecation and us proposing to maintain it [1], and > sent us a mail to tell us he has use cases, and even features and > improvements. I'll ask him to stand up in this thread. > > > Regards, > > Axel > > > [1] https://review.opendev.org/c/openstack/governance/+/866562 > ------------------------------ > *From:* Jay Faulkner > *Sent:* Wednesday, December 7, 2022 5:06:14 PM > *To:* Arnaud Morin > *Cc:* El?d Ill?s; openstack-discuss at lists.openstack.org > *Subject:* Re: [tc][mistral][release] Propose to deprecate Mistral > > Maintaining an entire project, especially catching it up after it's been > neglected, is a serious amount of work. Are you (and your employer) > committing to do this work? Are there any other interested parties that > could keep Mistral maintained if you were to move on? Just wanting to > ensure we're going to have the project setup for long-term support, given > we promise each release will be supported for years to come. > > Thanks, > Jay > > On Wed, Dec 7, 2022 at 5:34 AM Arnaud Morin > wrote: > >> Hey all, >> >> With Axel [1], we propose to maintain the Mistral development. >> >> This is new to us, so we will need help from the community but we really >> want to be fully commited so mistral will continue beeing maintained >> under openinfra. >> >> If you think we are too late for antelope, this can maybe happen for the >> next release? >> >> Cheers, >> >> [1] >> https://lists.openstack.org/pipermail/openstack-discuss/2022-December/031417.html >> >> On 05.12.22 - 10:52, El?d Ill?s wrote: >> > Hi, >> > >> > Mistral projects are unfortunately not actively maintained and caused >> hard times in latest official series releases for release management team. >> Thus we discussed this and decided to propose to deprecate Mistral [1] to >> avoid broken releases and last minute debugging of gate issues (which >> usually fall to release management team), and remove mistral projects from >> 2023.1 Antelope release [2]. >> > >> > We would like to ask @TC to evaluate the situation and review our >> patches (deprecation [1] + removal from the official release [2]). >> > >> > Thanks in advance, >> > >> > El?d Ill?s >> > irc: elodilles @ #openstack-release >> > >> > [1] https://review.opendev.org/c/openstack/governance/+/866562 >> > [2] https://review.opendev.org/c/openstack/releases/+/865577 >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From katonalala at gmail.com Thu Dec 8 10:45:47 2022 From: katonalala at gmail.com (Lajos Katona) Date: Thu, 8 Dec 2022 11:45:47 +0100 Subject: (PXE FAILURE)using dnsmasq In-Reply-To: References: Message-ID: Hi, Though the question not really openstack specific, but this perhaps can help you also: https://bugs.launchpad.net/neutron/+bug/1998621 The bug report mentioned an issue similar to yours, so perhaps the method used in the fix also can help you. https://review.opendev.org/c/openstack/neutron/+/866489 Of course Neutron's dnsmas config will be different from yours. Lajos Katona Adivya Singh ezt ?rta (id?pont: 2022. dec. 7., Sze, 20:25): > Hi Team. > > i m trying to setup PXE using dnsmasq, it fails on "Address listen on port > 53" but i don't see any IP address listen on this port > > This IP 10.128.244.2 in the interface IP of the Infrastructure server > where PXE is ported named as eno2 > > my dnsmasq.conf looks like this, All other setting are as per PXE > configuration a default file under pxelinux.cfg > > and a installer in default, and a user-data file under the /var/www/html > > no-resolv > interface=eno2 > listen-address=10.128.244.2 > dhcp-range=10.128.244.102,10.128.244.110,6h > dhcp-boot=pxelinux/pxelinux.0 > enable-tftp > tftp-root=/var/lib/tftpboot > > Anyone having any idea on this error > > Regards > Adivya Singh > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbauza at redhat.com Thu Dec 8 16:47:20 2022 From: sbauza at redhat.com (Sylvain Bauza) Date: Thu, 8 Dec 2022 17:47:20 +0100 Subject: [nova][placement] Spec review day on Dec 14th Message-ID: Hey folks, As agreed on a previous meeting [1] we will have a second Spec review day on Wed Dec 14th. Now you're aware, prepare your specs in advance so they're reviewable and ideally please be around on that day in order to reply to any comments and eventually propose a new revision, ideally the same day. As a side note, we'll have an Implementation review day on Jan 10th which will focus on reviewing feature changes related to accepted specs and we also plan to review a few implementation patches by Dec 15th. Thanks, -Sylvain [1] https://meetings.opendev.org/meetings/nova/2022/nova.2022-11-29-16.00.log.html#l-102 -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu Dec 8 17:59:42 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 8 Dec 2022 17:59:42 +0000 Subject: [dev][infra][qa][tact-sig] Tox 4.0.0 breaking changes In-Reply-To: <20221207191941.bft3np4yndrcjspa@yuggoth.org> References: <20221207191941.bft3np4yndrcjspa@yuggoth.org> Message-ID: <20221208175941.uxymeb7tq7kgrngk@yuggoth.org> Another update... The ensure-tox role provided as part of the zuul-jobs standard library is going to undo its tox<4 cap on December 21, so jobs which are impacted will need to be fixed or get their own independent caps added before that date. See this announcement for further details: https://lists.zuul-ci.org/archives/list/zuul-announce at lists.zuul-ci.org/message/3NNATSUTSIGP5FE2MDY5X2KJ5X4NB4PT/ Projects which want to test-drive their fixes for tox v4 before that date should be able to set a commit footer like: Depends-On: https://review.opendev.org/866943 -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From gmann at ghanshyammann.com Thu Dec 8 18:47:03 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 08 Dec 2022 10:47:03 -0800 Subject: [tc][mistral][release] Propose to deprecate Mistral In-Reply-To: References: Message-ID: <184f3100360.c0a0b1db544780.2572463728293836752@ghanshyammann.com> ---- On Thu, 08 Dec 2022 01:22:43 -0800 Oleg Ovcharuk wrote --- > Hi everyone, I'm one of mistral core team members.As was mentioned,?mistral is?also used by other companies (Netcracker).?As the community was kinda quiet, we were focused on our personal mistral fork, also we had not enough human resources to keep it up with upstream.But now things have changed - as you may notice by gerrit, we started to push our bugfixes/improvements to upstream and we have a huge list to complete. > It's an amazing coincidence that the topic about deprecating mistral was started *after* two companies that are interested in mistral returned for community work. > I'm also happy that someone is ready to take responsibility to perform PTL stuff and we will help them as much as we can.Let me know if you have any questions. Thanks, Oleg for your response. It is good to see more than one companies are interested to maintain the Mistral. As the next step, let's do this: - Do the required work to release it. You can talk to the release team about pending things. Accordingly, we can decide on this patch https://review.opendev.org/c/openstack/governance/+/866562 - Oleg or any other existing core member can onboard Axel and Arnaud in the maintainer list (what all resp you would like to give it up to the existing core members of Mistral). - As Mistral is in the DPL model now, if you guys wanted to be moved to the PTL model, you can do it any time. These are the requirement[1] and example[2] Ping us in #openstack-tc IRC channel for any query/help you need. [1] https://governance.openstack.org/tc/resolutions/20200803-distributed-project-leadership.html#process-for-opting-in-to-distributed-leadership [2] https://review.opendev.org/c/openstack/governance/+/829037 -gmann > Best regards,Oleg Ovcharuk > ??, 8 ???. 2022 ?. ? 11:46, Axel Vanzaghi axel.vanzaghi at ovhcloud.com>: > Hello, > > > We (me and my employer) are really committing to do this work, it has been discussed and we agreed on this internally. > > > Thing is, this project is currently vital for us so we will maintain it, either we do it with the community or not. We know it's a serious amount of work, more than if we keep it for us, but we think it would be better for everyone if we give back to the community. > > > We also know we are not the only ones using it, someone has seen the discussion about its deprecation and us proposing to maintain it [1], and sent us a mail to tell us he has use cases, and even features and improvements. I'll ask him to stand up in this thread. > > > Regards, > Axel > > > [1] https://review.opendev.org/c/openstack/governance/+/866562 > > > From: Jay Faulkner jay at gr-oss.io> > Sent: Wednesday, December 7, 2022 5:06:14 PM > To: Arnaud Morin > Cc: El?d Ill?s; openstack-discuss at lists.openstack.org > Subject: Re: [tc][mistral][release] Propose to deprecate Mistral?Maintaining an entire project, especially catching it up after it's been neglected, is a serious amount of work. Are you (and your employer) committing to do this work? Are there any other interested parties that could keep Mistral maintained if you were to move on? Just wanting to ensure we're going to have the project setup for long-term support, given we promise each release will be supported for years to come. > > Thanks,Jay > > On Wed, Dec 7, 2022 at 5:34 AM Arnaud Morin arnaud.morin at gmail.com> wrote: > Hey all, > > With Axel [1], we propose to maintain the Mistral development. > > This is new to us, so we will need help from the community but we really > want to be fully commited so mistral will continue beeing maintained > under openinfra. > > If you think we are too late for antelope, this can maybe happen for the > next release? > > Cheers, > > [1] https://lists.openstack.org/pipermail/openstack-discuss/2022-December/031417.html > > On 05.12.22 - 10:52, El?d Ill?s wrote: > > Hi, > > > > Mistral projects are unfortunately not actively maintained and caused hard times in latest official series releases for release management team. Thus we discussed this and decided to propose to deprecate Mistral [1] to avoid broken releases and last minute debugging of gate issues (which usually fall to release management team), and remove mistral projects from 2023.1 Antelope release [2]. > > > > We would like to ask @TC to evaluate the situation and review our patches (deprecation [1] + removal from the official release [2]). > > > > Thanks in advance, > > > > El?d Ill?s > > irc: elodilles @ #openstack-release > > > > [1] https://review.opendev.org/c/openstack/governance/+/866562 > > [2] https://review.opendev.org/c/openstack/releases/+/865577 > > From gmann at ghanshyammann.com Thu Dec 8 20:44:17 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 08 Dec 2022 12:44:17 -0800 Subject: [dev][infra][qa][tact-sig] Tox 4.0.0 breaking changes In-Reply-To: <20221208175941.uxymeb7tq7kgrngk@yuggoth.org> References: <20221207191941.bft3np4yndrcjspa@yuggoth.org> <20221208175941.uxymeb7tq7kgrngk@yuggoth.org> Message-ID: <184f37b5653.1145eaddb547711.668145682639910818@ghanshyammann.com> ---- On Thu, 08 Dec 2022 09:59:42 -0800 Jeremy Stanley wrote --- It seems master jobs using tox<4[1] but all stable jobs using tox>=4.0[2] and failing. And both using ensure-tox [1] https://zuul.opendev.org/t/openstack/build/8a4585f2961a4854ad96c7d2a188b557/log/controller/logs/pip3-freeze.txt#235 [2] https://zuul.opendev.org/t/openstack/build/78cddc1180ff40109bbe17df884d23d8/log/controller/logs/pip3-freeze.txt#227 > Another update... The ensure-tox role provided as part of the > zuul-jobs standard library is going to undo its tox<4 cap on > December 21, so jobs which are impacted will need to be fixed or get > their own independent caps added before that date. See this > announcement for further details: Does it mean all the stable jobs also will start using the latest tox (>4.0) which is breaking the jobs and we need to fix them in stable branches too? -gmann > > https://lists.zuul-ci.org/archives/list/zuul-announce at lists.zuul-ci.org/message/3NNATSUTSIGP5FE2MDY5X2KJ5X4NB4PT/ > > Projects which want to test-drive their fixes for tox v4 before that > date should be able to set a commit footer like: > > Depends-On: https://review.opendev.org/866943 > > -- > Jeremy Stanley > From tobias.urdin at binero.com Thu Dec 8 20:57:18 2022 From: tobias.urdin at binero.com (Tobias Urdin) Date: Thu, 8 Dec 2022 20:57:18 +0000 Subject: [all] [oslo.messaging] Interest in collaboration on a NATS driver In-Reply-To: References: <52AA12A0-AE67-4EF7-B924-DE1F2873B909@binero.com> <0E347D5A-967D-42E3-AC00-56034907053B@binero.com> <8F7F790F-0F5C-4257-845A-1BCE671AF844@binero.com> Message-ID: <3E16378D-DCFE-4096-B02F-E91DABE710EA@binero.com> Hello, I first would like to apologize for not keeping everybody in the loop but I?ve been very busy with a lot of work. I just wanted to give a quick update on some potential progress. First off, the Oslo meeting on the 19th of September ([1] log here) kicked us into a direction of how we could use the official nats.py python client [2] to talk to NATS as we quickly ruled out the nats-python client [3] as it?s pretty unmaintained and incomplete. We did however get a working POC driver [4] (see patchset 14) with [3] that confirmed we could get a Devstack job working even though the driver missed a lot of critical functionality such as error handling, retries, TLS etc etc. Since the nats.py client [2] is asyncio based we investigated a path forward that meant where we could utilize asyncio in the oslo.messaging library and expose an API up to projects consuming oslo.messaging. We decided to do some POC patches into oslo.messaging and using the Blazar project as a testing ground [4]. We very quickly found ourselves digging into oslo.service to implement asyncio, checking SQLAlchemy for support, API frameworks would need asyncio support, moving from WSGI to an ASGI compatible server, duplicating code for supporting sync and async code paths and more. We came to the conclussion that introducing a NATS oslo.messaging driver meant requiring us to implement asyncio across our ecosystem which would take an massive effort. Some good came out of it in form of a patch [5] further improving the consistency of the oslo.messaging API, we also gained a lot of insight in the future if we would like to start introducing asyncio functionality into oslo projects to support it for the future. Now moving forward to today. I?ve talked with the maintainer of nats.py and the company behind him to investigate if we can include a synchronous code path into that Python library to be used in a future revision of a oslo.messaging NATS driver. Hopefully we?ll have some more progress to report on that in the beginning of next year. Feel free to reach out to me (here or on IRC, my nick is tobias-urdin) if you want to know more, have any questions or want to get involved in anything I?ve mentioned above! Best regards Tobias [1] https://meetings.opendev.org/meetings/oslo/2022/oslo.2022-09-19-15.12.log.txt [2] https://github.com/nats-io/nats.py [3] https://github.com/Gr1N/nats-python [4] https://review.opendev.org/q/topic:asyncio-nats [5] https://review.opendev.org/c/openstack/oslo.messaging/+/862419 On 12 Sep 2022, at 09:05, Tobias Urdin wrote: Hello, Since I haven?t heard any objections we?ll move the meeting to the Oslo meeting on the 19th of September. See you there! [1] [1] https://wiki.openstack.org/wiki/Meetings/Oslo Best regards Tobias On 8 Sept 2022, at 15:54, Herve Beraud > wrote: WFM Le jeu. 8 sept. 2022 ? 15:16, Tobias Urdin > a ?crit : Hello, I don?t have anything against moving to the 19th of September meeting if that?s OK with everybody else, I propose we do that. Let?s see if we get some feedback against the proposed new date otherwise we?ll move it. Best regards On 8 Sept 2022, at 08:41, Rados?aw Piliszek > wrote: Hi Tobias et al, That Monday I cannot attend due to other work commitments. Could we schedule it for the following week? Cheers, Radek On Tue, Sep 6, 2022, 18:33 Tobias Urdin > wrote: Great! I?ve updated the agenda with NATS driver and eventlet future topic https://wiki.openstack.org/wiki/Meetings/Oslo#Agenda_for_Next_Meeting See you in the next meeting on 12th of September 15:00 UTC hopefully that will give us all some time to read through the following material: - Current POC code https://review.opendev.org/c/openstack/oslo.messaging/+/848338 - Devstack plugin https://github.com/tobias-urdin/devstack-plugin-nats - Discussion on IRC on 2nd September about below links https://meetings.opendev.org/irclogs/%23openstack-oslo/%23openstack-oslo.2022-09-02.log.html Links to old material: - The old spec that we can work upon https://review.opendev.org/c/openstack/oslo-specs/+/692784 - Old driver implementation for above old spec https://review.opendev.org/c/openstack/oslo.messaging/+/680629 - Old asyncio planning https://wiki.openstack.org/wiki/Oslo/blueprints/asyncio - Old asyncio executor in oslo.messaging https://blueprints.launchpad.net/oslo.messaging/+spec/asyncio-executor - Old messaging API mail thread https://lists.openstack.org/pipermail/openstack-dev/2013-May/009784.html Looking forward to talk to you all next monday! Best regards Tobias On 5 Sept 2022, at 12:48, Herve Beraud > wrote: Le lun. 5 sept. 2022 ? 10:00, Tobias Urdin > a ?crit : Thanks! It seems that we have some traction here to get started in a more structured manner. I?m on PTO today but back tomorrow, what do people feel about scheduling a recurring meeting about this topic? The weekly Oslo meeting could host the conversation through a dedicated topic. https://wiki.openstack.org/wiki/Meetings/Oslo You could modify the meeting agenda to add this topic. Daniel Bengtsson (damani) is our meeting facilitator. Do not hesitate to ping him to discuss the meeting management. I still need to read through the old spec and old implementation of NATS, and I am a bit restricted on time currently but we could brainstorm in that meeting and get us started on a new/updated the other spec. Should also note that, to anybody interested, that the topic is a little bit more broader than just a new messaging driver as it includes planning and/or working around the usage of eventlet etc, if you are interested in that topic please also reach out to us! Best regards Tobias > On 2 Sept 2022, at 10:35, Felix H?ttner > wrote: > > +1, we would also be interested in helping out. Let me know where we can help out with reviews, writing code, testing, etc. > > -- > Felix Huettner > >> +1, I'm also interested in this topic. I did some tests with nats-py to see how it works and I'm volunteering to go further with this technology. >> Let me know if you need reviews, help etc... >> >> Le jeu. 1 sept. 2022 ? 15:50, Tobias Urdin > a ?crit : >> Hello Kai, >> >> Great to hear that you're interested! >> I'm currently just testing this as a POC directly in devstack so there is real-world testing (not should it yet I think, needs more development first). >> >> Best regards >> Tobias >> >>> On 30 Aug 2022, at 18:32, Kai Bojens > wrote: >>> >>> Am 29.08.22 um 15:46 schrieb Tobias Urdin: >>> >>>> If anybody, or any company, out there would be interested in collaborating in a project to bring this support and maintain it feel free to >>>> reach out. I'm hoping somebody will bite but atleast I've put it out there for all of you. >>> >>> Hi, >>> I am very much interested to take a closer look at your work and maybe contribute to it. Although I'm working with OpenStack for my employer at the moment, I'd do this in my spare time as I'm not sure that I can convince him to add another staging system with a full OpenStack installation just for developing a RabbitMQ replacement. That's not on our agenda as we are mostly users and not developers of OpenStack. >>> >>> As I'm pretty new to the messaging topic and had just heard of NATS and your idea, I'll first would have to dive into NATS and then I would take a closer look at your code and maybe try to create some documentation or tests. So, I can't promise anything but as I said I'm very interested in your approach as I also see the massive load from RabbitMQ. >>> >>> This brings me to the most important question: Where do you run and test your code? >>> >>> Greetings, >>> Kai > > Diese E Mail enth?lt m?glicherweise vertrauliche Inhalte und ist nur f?r die Verwertung durch den vorgesehenen Empf?nger bestimmt. Sollten Sie nicht der vorgesehene Empf?nger sein, setzen Sie den Absender bitte unverz?glich in Kenntnis und l?schen diese E Mail. Hinweise zum Datenschutz finden Sie hier>. -- Herv? Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -- Herv? Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu Dec 8 21:01:12 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 8 Dec 2022 21:01:12 +0000 Subject: [dev][infra][qa][tact-sig] Tox 4.0.0 breaking changes In-Reply-To: <184f37b5653.1145eaddb547711.668145682639910818@ghanshyammann.com> References: <20221207191941.bft3np4yndrcjspa@yuggoth.org> <20221208175941.uxymeb7tq7kgrngk@yuggoth.org> <184f37b5653.1145eaddb547711.668145682639910818@ghanshyammann.com> Message-ID: <20221208210111.zn7f3m6nsb5wjhki@yuggoth.org> On 2022-12-08 12:44:17 -0800 (-0800), Ghanshyam Mann wrote: [...] > It seems master jobs using tox<4[1] but all stable jobs using > tox>=4.0[2] and failing. And both using ensure-tox > > [1] https://zuul.opendev.org/t/openstack/build/8a4585f2961a4854ad96c7d2a188b557/log/controller/logs/pip3-freeze.txt#235 > [2] https://zuul.opendev.org/t/openstack/build/78cddc1180ff40109bbe17df884d23d8/log/controller/logs/pip3-freeze.txt#227 [...] Those are Tempest jobs, which do some of their own installing of tox directly rather than just relying on the version provided by ensure-tox. Check with the QA team, since they've been working on fixes for those. > Does it mean all the stable jobs also will start using the latest > tox (>4.0) which is breaking the jobs and we need to fix them in > stable branches too? [...] Unless we change those jobs to pin to an old version of tox for stable branches, yes. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From jay at gr-oss.io Thu Dec 8 21:14:56 2022 From: jay at gr-oss.io (Jay Faulkner) Date: Thu, 8 Dec 2022 13:14:56 -0800 Subject: [dev][infra][qa][tact-sig] Tox 4.0.0 breaking changes In-Reply-To: <20221208210111.zn7f3m6nsb5wjhki@yuggoth.org> References: <20221207191941.bft3np4yndrcjspa@yuggoth.org> <20221208175941.uxymeb7tq7kgrngk@yuggoth.org> <184f37b5653.1145eaddb547711.668145682639910818@ghanshyammann.com> <20221208210111.zn7f3m6nsb5wjhki@yuggoth.org> Message-ID: On Thu, Dec 8, 2022 at 1:12 PM Jeremy Stanley wrote: > On 2022-12-08 12:44:17 -0800 (-0800), Ghanshyam Mann wrote: > [...] > > It seems master jobs using tox<4[1] but all stable jobs using > > tox>=4.0[2] and failing. And both using ensure-tox > > > > [1] > https://zuul.opendev.org/t/openstack/build/8a4585f2961a4854ad96c7d2a188b557/log/controller/logs/pip3-freeze.txt#235 > > [2] > https://zuul.opendev.org/t/openstack/build/78cddc1180ff40109bbe17df884d23d8/log/controller/logs/pip3-freeze.txt#227 > [...] > > Those are Tempest jobs, which do some of their own installing of tox > directly rather than just relying on the version provided by > ensure-tox. Check with the QA team, since they've been working on > fixes for those. > > > > Does it mean all the stable jobs also will start using the latest > > tox (>4.0) which is breaking the jobs and we need to fix them in > > stable branches too? > [...] > > Unless we change those jobs to pin to an old version of tox for > stable branches, yes. > I'd be extremely in favor of this change. We should do everything we can to reduce the amount of effort needed for individual projects to keep stable branches passing tests. -Jay Faulkner -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Thu Dec 8 21:16:04 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 08 Dec 2022 13:16:04 -0800 Subject: [dev][infra][qa][tact-sig] Tox 4.0.0 breaking changes In-Reply-To: <20221208210111.zn7f3m6nsb5wjhki@yuggoth.org> References: <20221207191941.bft3np4yndrcjspa@yuggoth.org> <20221208175941.uxymeb7tq7kgrngk@yuggoth.org> <184f37b5653.1145eaddb547711.668145682639910818@ghanshyammann.com> <20221208210111.zn7f3m6nsb5wjhki@yuggoth.org> Message-ID: <184f3986e85.e66f746f548446.7965885214342574077@ghanshyammann.com> ---- On Thu, 08 Dec 2022 13:01:12 -0800 Jeremy Stanley wrote --- > On 2022-12-08 12:44:17 -0800 (-0800), Ghanshyam Mann wrote: > [...] > > It seems master jobs using tox<4[1] but all stable jobs using > > tox>=4.0[2] and failing. And both using ensure-tox > > > > [1] https://zuul.opendev.org/t/openstack/build/8a4585f2961a4854ad96c7d2a188b557/log/controller/logs/pip3-freeze.txt#235 > > [2] https://zuul.opendev.org/t/openstack/build/78cddc1180ff40109bbe17df884d23d8/log/controller/logs/pip3-freeze.txt#227 > [...] > > Those are Tempest jobs, which do some of their own installing of tox > directly rather than just relying on the version provided by > ensure-tox. Check with the QA team, since they've been working on > fixes for those. I was debugging that bug only. I do not think it is Tempest's own installation. It is a devstack installation of tox (which I think uses ensure-tox on stable also unless we have any hidden installation overriding it) which is same on master as well as on stable but the master installation uses old tox which I am hoping due to ensure-tox capping tox but on stable it is not. I am confused by this different behaviour. I do not think devstack does any separate things for master and stable/zed for tox installation. Something from ensure-tox? > > > > Does it mean all the stable jobs also will start using the latest > > tox (>4.0) which is breaking the jobs and we need to fix them in > > stable branches too? > [...] > > Unless we change those jobs to pin to an old version of tox for > stable branches, yes. Ok, that means ensure-tox capping and uncapping are not useful for stable branch jobs. We should cap it in constraints also? or any specific installation of tox. Because we should not force stable branch jobs to use latest tox and break them and we fix them all. =gmann > -- > Jeremy Stanley > From gmann at ghanshyammann.com Thu Dec 8 21:21:01 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 08 Dec 2022 13:21:01 -0800 Subject: [dev][infra][qa][tact-sig] Tox 4.0.0 breaking changes In-Reply-To: <184f3986e85.e66f746f548446.7965885214342574077@ghanshyammann.com> References: <20221207191941.bft3np4yndrcjspa@yuggoth.org> <20221208175941.uxymeb7tq7kgrngk@yuggoth.org> <184f37b5653.1145eaddb547711.668145682639910818@ghanshyammann.com> <20221208210111.zn7f3m6nsb5wjhki@yuggoth.org> <184f3986e85.e66f746f548446.7965885214342574077@ghanshyammann.com> Message-ID: <184f39cfa27.c84e9567548523.4662283232088066956@ghanshyammann.com> ---- On Thu, 08 Dec 2022 13:16:04 -0800 Ghanshyam Mann wrote --- > ---- On Thu, 08 Dec 2022 13:01:12 -0800 Jeremy Stanley wrote --- > > On 2022-12-08 12:44:17 -0800 (-0800), Ghanshyam Mann wrote: > > [...] > > > It seems master jobs using tox<4[1] but all stable jobs using > > > tox>=4.0[2] and failing. And both using ensure-tox > > > > > > [1] https://zuul.opendev.org/t/openstack/build/8a4585f2961a4854ad96c7d2a188b557/log/controller/logs/pip3-freeze.txt#235 > > > [2] https://zuul.opendev.org/t/openstack/build/78cddc1180ff40109bbe17df884d23d8/log/controller/logs/pip3-freeze.txt#227 > > [...] > > > > Those are Tempest jobs, which do some of their own installing of tox > > directly rather than just relying on the version provided by > > ensure-tox. Check with the QA team, since they've been working on > > fixes for those. > > I was debugging that bug only. > > I do not think it is Tempest's own installation. It is a devstack installation of tox (which I think uses ensure-tox > on stable also unless we have any hidden installation overriding it) which is same on master as well > as on stable but the master installation uses old tox which I am hoping due to ensure-tox capping tox > but on stable it is not. I am confused by this different behaviour. I do not think devstack does any > separate things for master and stable/zed for tox installation. Something from ensure-tox? Another case of a mixup in tox installation is Devstack master Focal job which is using latest tox - https://zuul.opendev.org/t/openstack/build/aaad7312f68246c0979e1a43cfe96c3b/log/controller/logs/pip3-freeze.txt#229 -gmann > > > > > > > > > Does it mean all the stable jobs also will start using the latest > > > tox (>4.0) which is breaking the jobs and we need to fix them in > > > stable branches too? > > [...] > > > > Unless we change those jobs to pin to an old version of tox for > > stable branches, yes. > > Ok, that means ensure-tox capping and uncapping are not useful for > stable branch jobs. We should cap it in constraints also? or any specific installation > of tox. Because we should not force stable branch jobs to use latest tox and break them > and we fix them all. > > =gmann > > > -- > > Jeremy Stanley > > > From gmann at ghanshyammann.com Thu Dec 8 21:30:12 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 08 Dec 2022 13:30:12 -0800 Subject: [gate][stable][qa] Hold recheck: Devstack based job using tox4 are broken (especially greande, stable jobs) Message-ID: <184f3a5621f.faad1123548722.7315099506744764056@ghanshyammann.com> Hi All, As you might know, the devstack-based job fails on the tempest command with the below error: Error- venv: failed with tempest is not allowed, use allowlist_externals to allow it It is failing where jobs are using the tox4 (in grenade job or stable branch job or some master branch job like focal job using tox4[1]). This is another issue of mix match of tox version in stable and master jobs which we are discussing in a separate thread[2]. On tempest command or allowlist_externals failure, we are testing one fix for that but please hold recheck until it gets fixed or any workaround is merged - https://review.opendev.org/q/Ifc8fc6cd7aebda147043a52d6baf99a2bacc009c I have opened a bug also to track it https://bugs.launchpad.net/devstack/+bug/1999183 [1] https://zuul.opendev.org/t/openstack/build/aaad7312f68246c0979e1a43cfe96c3b/log/controller/logs/pip3-freeze.txt#229 [2] https://lists.openstack.org/pipermail/openstack-discuss/2022-December/031442.html -gmann From fungi at yuggoth.org Thu Dec 8 21:38:41 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 8 Dec 2022 21:38:41 +0000 Subject: [dev][infra][qa][tact-sig] Tox 4.0.0 breaking changes In-Reply-To: <184f3986e85.e66f746f548446.7965885214342574077@ghanshyammann.com> References: <20221207191941.bft3np4yndrcjspa@yuggoth.org> <20221208175941.uxymeb7tq7kgrngk@yuggoth.org> <184f37b5653.1145eaddb547711.668145682639910818@ghanshyammann.com> <20221208210111.zn7f3m6nsb5wjhki@yuggoth.org> <184f3986e85.e66f746f548446.7965885214342574077@ghanshyammann.com> Message-ID: <20221208213841.n5hjwclcqpg36lwl@yuggoth.org> On 2022-12-08 13:16:04 -0800 (-0800), Ghanshyam Mann wrote: [...] > I do not think it is Tempest's own installation. It is a devstack > installation of tox (which I think uses ensure-tox on stable also > unless we have any hidden installation overriding it) which is > same on master as well as on stable but the master installation > uses old tox which I am hoping due to ensure-tox capping tox but > on stable it is not. I am confused by this different behaviour. I > do not think devstack does any separate things for master and > stable/zed for tox installation. Something from ensure-tox? [...] Not sure where you draw the line between DevStack and Tempest, for me they're part of the same suite split into different Git repositories. The problem area which was brought to my attention earlier today in #openstack-qa is in the configure_tempest function of DevStack's lib/tempest script which reinstalls tox rather than using the version already present on the server. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From radoslaw.piliszek at gmail.com Thu Dec 8 21:55:44 2022 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Thu, 8 Dec 2022 22:55:44 +0100 Subject: [all] [oslo.messaging] Interest in collaboration on a NATS driver In-Reply-To: <3E16378D-DCFE-4096-B02F-E91DABE710EA@binero.com> References: <52AA12A0-AE67-4EF7-B924-DE1F2873B909@binero.com> <0E347D5A-967D-42E3-AC00-56034907053B@binero.com> <8F7F790F-0F5C-4257-845A-1BCE671AF844@binero.com> <3E16378D-DCFE-4096-B02F-E91DABE710EA@binero.com> Message-ID: Hi! I just wanted to thank Tobias for all his work in this topic. He uses the plural form of verbs but I believe we could switch most to singular and it would be true nonetheless. Thank you again! Kind regards, Radek -yoctozepto On Thu, 8 Dec 2022 at 22:02, Tobias Urdin wrote: > > Hello, > > I first would like to apologize for not keeping everybody in the loop but I?ve > been very busy with a lot of work. > > I just wanted to give a quick update on some potential progress. > > First off, the Oslo meeting on the 19th of September ([1] log here) kicked us into a direction > of how we could use the official nats.py python client [2] to talk to NATS as we quickly ruled > out the nats-python client [3] as it?s pretty unmaintained and incomplete. > > We did however get a working POC driver [4] (see patchset 14) with [3] that confirmed we could get > a Devstack job working even though the driver missed a lot of critical functionality such as error handling, retries, TLS etc etc. > > Since the nats.py client [2] is asyncio based we investigated a path forward that meant where we could utilize > asyncio in the oslo.messaging library and expose an API up to projects consuming oslo.messaging. > > We decided to do some POC patches into oslo.messaging and using the Blazar project as a testing ground [4]. We very quickly > found ourselves digging into oslo.service to implement asyncio, checking SQLAlchemy for support, API frameworks would need > asyncio support, moving from WSGI to an ASGI compatible server, duplicating code for supporting sync and async code paths and more. > > We came to the conclussion that introducing a NATS oslo.messaging driver meant requiring us to implement asyncio across our ecosystem > which would take an massive effort. Some good came out of it in form of a patch [5] further improving the consistency of the oslo.messaging API, we > also gained a lot of insight in the future if we would like to start introducing asyncio functionality into oslo projects to support it for the future. > > Now moving forward to today. > > I?ve talked with the maintainer of nats.py and the company behind him to investigate if we can include a synchronous code path into > that Python library to be used in a future revision of a oslo.messaging NATS driver. Hopefully we?ll have some more progress to report > on that in the beginning of next year. > > Feel free to reach out to me (here or on IRC, my nick is tobias-urdin) if you want to know more, have any questions or > want to get involved in anything I?ve mentioned above! > > Best regards > Tobias > > [1] https://meetings.opendev.org/meetings/oslo/2022/oslo.2022-09-19-15.12.log.txt > [2] https://github.com/nats-io/nats.py > [3] https://github.com/Gr1N/nats-python > [4] https://review.opendev.org/q/topic:asyncio-nats > [5] https://review.opendev.org/c/openstack/oslo.messaging/+/862419 > > On 12 Sep 2022, at 09:05, Tobias Urdin wrote: > > Hello, > > Since I haven?t heard any objections we?ll move the meeting to the Oslo > meeting on the 19th of September. See you there! [1] > > [1] https://wiki.openstack.org/wiki/Meetings/Oslo > > Best regards > Tobias > > On 8 Sept 2022, at 15:54, Herve Beraud wrote: > > WFM > > Le jeu. 8 sept. 2022 ? 15:16, Tobias Urdin a ?crit : >> >> Hello, >> >> I don?t have anything against moving to the 19th of September meeting if that?s OK >> with everybody else, I propose we do that. >> >> Let?s see if we get some feedback against the proposed new date otherwise we?ll move it. >> >> Best regards >> >> On 8 Sept 2022, at 08:41, Rados?aw Piliszek wrote: >> >> Hi Tobias et al, >> >> That Monday I cannot attend due to other work commitments. Could we schedule it for the following week? >> >> Cheers, >> Radek >> >> On Tue, Sep 6, 2022, 18:33 Tobias Urdin wrote: >>> >>> Great! I?ve updated the agenda with NATS driver and eventlet future topic https://wiki.openstack.org/wiki/Meetings/Oslo#Agenda_for_Next_Meeting >>> >>> See you in the next meeting on 12th of September 15:00 UTC hopefully that >>> will give us all some time to read through the following material: >>> >>> - Current POC code https://review.opendev.org/c/openstack/oslo.messaging/+/848338 >>> - Devstack plugin https://github.com/tobias-urdin/devstack-plugin-nats >>> - Discussion on IRC on 2nd September about below links https://meetings.opendev.org/irclogs/%23openstack-oslo/%23openstack-oslo.2022-09-02.log.html >>> >>> Links to old material: >>> - The old spec that we can work upon https://review.opendev.org/c/openstack/oslo-specs/+/692784 >>> - Old driver implementation for above old spec https://review.opendev.org/c/openstack/oslo.messaging/+/680629 >>> - Old asyncio planning https://wiki.openstack.org/wiki/Oslo/blueprints/asyncio >>> - Old asyncio executor in oslo.messaging https://blueprints.launchpad.net/oslo.messaging/+spec/asyncio-executor >>> - Old messaging API mail thread https://lists.openstack.org/pipermail/openstack-dev/2013-May/009784.html >>> >>> Looking forward to talk to you all next monday! >>> >>> Best regards >>> Tobias >>> >>> On 5 Sept 2022, at 12:48, Herve Beraud wrote: >>> >>> >>> >>> Le lun. 5 sept. 2022 ? 10:00, Tobias Urdin a ?crit : >>>> >>>> Thanks! It seems that we have some traction here to get started in a more structured manner. >>>> >>>> I?m on PTO today but back tomorrow, what do people feel about scheduling a recurring meeting about this topic? >>> >>> >>> The weekly Oslo meeting could host the conversation through a dedicated topic. >>> https://wiki.openstack.org/wiki/Meetings/Oslo >>> >>> You could modify the meeting agenda to add this topic. >>> Daniel Bengtsson (damani) is our meeting facilitator. Do not hesitate to ping him to discuss the meeting management. >>> >>>> >>>> I still >>>> need to read through the old spec and old implementation of NATS, and I am a bit restricted on time currently but we >>>> could brainstorm in that meeting and get us started on a new/updated the other spec. >>>> >>>> Should also note that, to anybody interested, that the topic is a little bit more broader than just a new messaging driver >>>> as it includes planning and/or working around the usage of eventlet etc, if you are interested in that topic please also >>>> reach out to us! >>>> >>>> Best regards >>>> Tobias >>>> >>>> > On 2 Sept 2022, at 10:35, Felix H?ttner wrote: >>>> > >>>> > +1, we would also be interested in helping out. Let me know where we can help out with reviews, writing code, testing, etc. >>>> > >>>> > -- >>>> > Felix Huettner >>>> > >>>> >> +1, I'm also interested in this topic. I did some tests with nats-py to see how it works and I'm volunteering to go further with this technology. >>>> >> Let me know if you need reviews, help etc... >>>> >> >>>> >> Le jeu. 1 sept. 2022 ? 15:50, Tobias Urdin a ?crit : >>>> >> Hello Kai, >>>> >> >>>> >> Great to hear that you're interested! >>>> >> I'm currently just testing this as a POC directly in devstack so there is real-world testing (not should it yet I think, needs more development first). >>>> >> >>>> >> Best regards >>>> >> Tobias >>>> >> >>>> >>> On 30 Aug 2022, at 18:32, Kai Bojens wrote: >>>> >>> >>>> >>> Am 29.08.22 um 15:46 schrieb Tobias Urdin: >>>> >>> >>>> >>>> If anybody, or any company, out there would be interested in collaborating in a project to bring this support and maintain it feel free to >>>> >>>> reach out. I'm hoping somebody will bite but atleast I've put it out there for all of you. >>>> >>> >>>> >>> Hi, >>>> >>> I am very much interested to take a closer look at your work and maybe contribute to it. Although I'm working with OpenStack for my employer at the moment, I'd do this in my spare time as I'm not sure that I can convince him to add another staging system with a full OpenStack installation just for developing a RabbitMQ replacement. That's not on our agenda as we are mostly users and not developers of OpenStack. >>>> >>> >>>> >>> As I'm pretty new to the messaging topic and had just heard of NATS and your idea, I'll first would have to dive into NATS and then I would take a closer look at your code and maybe try to create some documentation or tests. So, I can't promise anything but as I said I'm very interested in your approach as I also see the massive load from RabbitMQ. >>>> >>> >>>> >>> This brings me to the most important question: Where do you run and test your code? >>>> >>> >>>> >>> Greetings, >>>> >>> Kai >>>> > >>>> > Diese E Mail enth?lt m?glicherweise vertrauliche Inhalte und ist nur f?r die Verwertung durch den vorgesehenen Empf?nger bestimmt. Sollten Sie nicht der vorgesehene Empf?nger sein, setzen Sie den Absender bitte unverz?glich in Kenntnis und l?schen diese E Mail. Hinweise zum Datenschutz finden Sie hier. >>>> >>> >>> >>> -- >>> Herv? Beraud >>> Senior Software Engineer at Red Hat >>> irc: hberaud >>> https://github.com/4383/ >>> https://twitter.com/4383hberaud >>> >>> >> > > > -- > Herv? Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > > > From gmann at ghanshyammann.com Thu Dec 8 21:56:45 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 08 Dec 2022 13:56:45 -0800 Subject: [dev][infra][qa][tact-sig] Tox 4.0.0 breaking changes In-Reply-To: <20221208213841.n5hjwclcqpg36lwl@yuggoth.org> References: <20221207191941.bft3np4yndrcjspa@yuggoth.org> <20221208175941.uxymeb7tq7kgrngk@yuggoth.org> <184f37b5653.1145eaddb547711.668145682639910818@ghanshyammann.com> <20221208210111.zn7f3m6nsb5wjhki@yuggoth.org> <184f3986e85.e66f746f548446.7965885214342574077@ghanshyammann.com> <20221208213841.n5hjwclcqpg36lwl@yuggoth.org> Message-ID: <184f3bdafa3.f920157c549348.4672405299947753932@ghanshyammann.com> ---- On Thu, 08 Dec 2022 13:38:41 -0800 Jeremy Stanley wrote --- > On 2022-12-08 13:16:04 -0800 (-0800), Ghanshyam Mann wrote: > [...] > > I do not think it is Tempest's own installation. It is a devstack > > installation of tox (which I think uses ensure-tox on stable also > > unless we have any hidden installation overriding it) which is > > same on master as well as on stable but the master installation > > uses old tox which I am hoping due to ensure-tox capping tox but > > on stable it is not. I am confused by this different behaviour. I > > do not think devstack does any separate things for master and > > stable/zed for tox installation. Something from ensure-tox? > [...] > > Not sure where you draw the line between DevStack and Tempest, for > me they're part of the same suite split into different Git > repositories. The problem area which was brought to my attention > earlier today in #openstack-qa is in the configure_tempest function > of DevStack's lib/tempest script which reinstalls tox rather than > using the version already present on the server. I am debugging that bug but here I am asking something else here. My question is about different tox versions installed in the same type of jobs with devstack same scripts. Tox 4 is installed in the stable branch and master focal job and Tox 3 in the rest of the master jobs. I am trying to understand why ensure-tox capping tox seems capping it for master jobs but not for stable branch jobs (no change int the Devstack scripts in master vs stable? The only difference I see here is focal jobs (stable branch jobs or focal job on master) using latest tox. -gmann > -- > Jeremy Stanley > From fungi at yuggoth.org Thu Dec 8 22:09:05 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 8 Dec 2022 22:09:05 +0000 Subject: [dev][infra][qa][tact-sig] Tox 4.0.0 breaking changes In-Reply-To: <184f3bdafa3.f920157c549348.4672405299947753932@ghanshyammann.com> References: <20221207191941.bft3np4yndrcjspa@yuggoth.org> <20221208175941.uxymeb7tq7kgrngk@yuggoth.org> <184f37b5653.1145eaddb547711.668145682639910818@ghanshyammann.com> <20221208210111.zn7f3m6nsb5wjhki@yuggoth.org> <184f3986e85.e66f746f548446.7965885214342574077@ghanshyammann.com> <20221208213841.n5hjwclcqpg36lwl@yuggoth.org> <184f3bdafa3.f920157c549348.4672405299947753932@ghanshyammann.com> Message-ID: <20221208220905.rdl2mxfxemqy4wd2@yuggoth.org> On 2022-12-08 13:56:45 -0800 (-0800), Ghanshyam Mann wrote: [...] > I am debugging that bug but here I am asking something else here. My > question is about different tox versions installed in the same type > of jobs with devstack same scripts. Tox 4 is installed in the stable branch > and master focal job and Tox 3 in the rest of the master jobs. > > I am trying to understand why ensure-tox capping tox seems capping > it for master jobs but not for stable branch jobs (no change int the > Devstack scripts in master vs stable? The jobs you're talking about don't seem to use the ensure-tox role at all, so maybe that's where your confusion is arising? It's mostly used by jobs based (usually indirectly) on the "tox" parent job from zuul-jobs, so mostly unit tests and linters. > The only difference I see here is focal jobs (stable branch jobs > or focal job on master) using latest tox. I think you're either misunderstanding how those scripts work, or how Python packaging and pip work. If constraints lists are being supplied when the lib/tempest script installs its own copy of tox, then those can end up influencing which version of tox is installed even if tox is not in the constraints list (because current pip will try to satisfy constrained dependencies and limit available tox versions to those which work with any of its dependencies which have been constrained). If stable branches are forcing different versions of pip, that can also influence which versions of tox might end up installed, since older pip lacks a sane dependency solver and is just sort of "yolo" about the whole idea. There are lots of variables which can come into play. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From jerome.becot at deveryware.com Thu Dec 8 22:15:35 2022 From: jerome.becot at deveryware.com (=?UTF-8?B?SsOpcsO0bWUgQkVDT1Q=?=) Date: Thu, 8 Dec 2022 23:15:35 +0100 Subject: Issue resizing volumes attached to running volume backed instances Message-ID: <98e054b2-c921-ce3d-e4ba-68c9ec8fe12a@deveryware.com> Hello Openstack, We have Ussuri deployed on a few clouds, and they're all plugged to PureStorage Arrays. We allow users to only use volumes for their servers. It means that each server disk is a LUN attached over ISCSI (with multipath) on the compute node hosting the server. Everything works quite fine, but we have a weird issue when extending volumes attached to running instances. The guests notice the new disk size .. of the last extent. Say I have a server with a 10gb disk. I add 5gb. On the guest, still 10gb. I add another 5gb, and on the guest I get 15, and so on. I've turned the debug mode on and I could see no error in the log. Looking closer at the log I could catch the culprit: 2022-12-08 17:35:13.998 46195 DEBUG os_brick.initiator.linuxscsi [] Starting size: *76235669504* 2022-12-08 17:35:14.028 46195 DEBUG os_brick.initiator.linuxscsi [] volume size after scsi device rescan *80530636800* extend_volume 2022-12-08 17:35:14.035 46195 DEBUG os_brick.initiator.linuxscsi [] Volume device info = {'device': '/dev/disk/by-path/ip-1...1:3260-iscsi-iqn.2010-06.com.purestorage:flasharray.x-lun-10', 'host': '5', 'channel': '0', 'id': '0', 'lun': '10'} extend_volume 2022-12-08 17:35:14.348 46195 INFO os_brick.initiator.linuxscsi [] Find Multipath device file for volume WWN 3624... 2022-12-08 17:35:14.349 46195 DEBUG os_brick.initiator.linuxscsi [] Checking to see if /dev/disk/by-id/dm-uuid-mpath-3624.. exists yet. wait_for_path 2022-12-08 17:35:14.349 46195 DEBUG os_brick.initiator.linuxscsi [] /dev/disk/by-id/dm-uuid-mpath-3624... has shown up. wait_for_path 2022-12-08 17:35:14.382 46195 INFO os_brick.initiator.linuxscsi [] mpath(/dev/disk/by-id/dm-uuid-mpath-3624) *current size 76235669504* 2022-12-08 17:35:14.412 46195 INFO os_brick.initiator.linuxscsi [] mpath(/dev/disk/by-id/dm-uuid-mpath-3624) *new size 76235669504* 2022-12-08 17:35:14.413 46195 DEBUG oslo_concurrency.lockutils [] Lock "extend_volume" released by "os_brick.initiator.connectors.iscsi.ISCSIConnector.extend_volume" :: held 2.062s inner 2022-12-08 17:35:14.459 46195 DEBUG os_brick.initiator.connectors.iscsi [] <== extend_volume: return (2217ms) *76235669504* trace_logging_wrapper 2022-12-08 17:35:14.461 46195 DEBUG nova.virt.libvirt.volume.iscsi [] Extend iSCSI Volume /dev/dm-28; new_size=*76235669504* extend_volume 2022-12-08 17:35:14.462 46195 DEBUG nova.virt.libvirt.driver [] Resizing target device /dev/dm-28 to *76235669504* _resize_attached_volume The logs clearly shows that the rescan confirm the new size but when interrogating multipath, it does not. But requesting multipath few seconds after on the command line shows the new size as well. It explains the behaviour. I'm running Ubuntu 18.04 with multipath 0.7.4-2ubuntu3.2. The os-brick code for multipath is far more basic than the one in master branch. Maybe the multipath version installed is too recent for os-brick. Thanks for the help Jerome -------------- next part -------------- An HTML attachment was scrubbed... URL: From pradeep8985 at gmail.com Fri Dec 9 07:59:57 2022 From: pradeep8985 at gmail.com (pradeep) Date: Fri, 9 Dec 2022 13:29:57 +0530 Subject: auto switch of glance-api containers to other controllers Message-ID: Hello All, I understand that glance-api container run only one controller always. I have tried to check if the glance-api container to switch over to the next available controller by means of rebooting and stopping the containers, but nothing happened. Is there a way to make sure that glance-api container switches to other controllers if controller 1 is not available. Thanks for you help Regards Pradeep -------------- next part -------------- An HTML attachment was scrubbed... URL: From hongbin034 at gmail.com Fri Dec 9 09:05:29 2022 From: hongbin034 at gmail.com (Hongbin Lu) Date: Fri, 9 Dec 2022 17:05:29 +0800 Subject: the zun cmd "python3 setup.py install" error(s) In-Reply-To: References: Message-ID: Thanks for reporting the issue. This is due to the new git owner check [1]. I suggested to skip the check as following: # git config --global --add safe.directory /var/lib/zun/zun Then, re-running your command should work. I submitted a patch to fix the installation guide [2]. [1] setup_git_directory(): add an owner check for the top-level directory ? git/git at 8959555 ? GitHub [2] Disable git owner check for zun (I5dd1e405) ? Gerrit Code Review (opendev.org) On Tue, Dec 6, 2022 at 2:19 AM ????? <2292613444 at qq.com> wrote: > system:ubuntu:20.04 > useDocumentation: > https://docs.openstack.org/zun/zed/install/controller-install.html > > root at controller:/var/lib/zun/zun# python3 setup.py install > ERROR:root:Error parsing > Traceback (most recent call last): > File "/usr/lib/python3/dist-packages/pbr/core.py", line 96, in pbr > attrs = util.cfg_to_args(path, dist.script_args) > File "/usr/lib/python3/dist-packages/pbr/util.py", line 271, in > cfg_to_args > pbr.hooks.setup_hook(config) > File "/usr/lib/python3/dist-packages/pbr/hooks/__init__.py", line 25, in > setup_hook > metadata_config.run() > File "/usr/lib/python3/dist-packages/pbr/hooks/base.py", line 27, in run > self.hook() > File "/usr/lib/python3/dist-packages/pbr/hooks/metadata.py", line 25, in > hook > self.config['version'] = packaging.get_version( > File "/usr/lib/python3/dist-packages/pbr/packaging.py", line 874, in > get_version > raise Exception("Versioning for this project requires either an sdist" > Exception: Versioning for this project requires either an sdist tarball, > or access to an upstream git repository. It's also possible that there is a > mismatch between the package name in setup.cfg and the argument given to > pbr.version.VersionInfo. Project name zun was given, but was not able to be > found. > error in setup command: Error parsing /var/lib/zun/zun/setup.cfg: > Exception: Versioning for this project requires either an sdist tarball, or > access to an upstream git repository. It's also possible that there is a > mismatch between the package name in setup.cfg and the argument given to > pbr.version.VersionInfo. Project name zun was given, but was not able to be > found. > root at controller:/var/lib/zun/zun# > > > cryed: > pip3 install --upgrade distribute > pip3 install --upgrade tensorflow_gpu > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralonsoh at redhat.com Fri Dec 9 09:14:13 2022 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Fri, 9 Dec 2022 10:14:13 +0100 Subject: [neutron] Drivers meeting cancelled Message-ID: Hello Neutrinos: Due to the lack of agenda, today's meeting is cancelled. If you want to add any topic to this meeting, please check [1] and use the "on demand agenda" section. Regards. [1]https://wiki.openstack.org/wiki/Meetings/NeutronDrivers -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at gmail.com Fri Dec 9 09:21:06 2022 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Fri, 9 Dec 2022 10:21:06 +0100 Subject: auto switch of glance-api containers to other controllers In-Reply-To: References: Message-ID: Hi, I'm not sure if you used any deployment tool for your environment, but usually for such failover there's a load balancer in conjunction with VRRP or Anycast, that able to detect when controller1 is down and forward traffic to another controller. As example, OpenStack-Ansible as default option installs haproxy to each controller and keepalived to implement VRRP and Virtual IPs. So when glance is down on controller1, haproxy will detect that and forward traffic to other controllers. If controller1 is down as a whole, keepalived will detect that and raise VIP on controller2, so all client traffic will go there. Since controller2 also have haproxy, it will pass traffic to available backends based on the source IP. ??, 9 ???. 2022 ?., 09:06 pradeep : > Hello All, > > I understand that glance-api container run only one controller always. I > have tried to check if the glance-api container to switch over to the next > available controller by means of rebooting and stopping the containers, but > nothing happened. Is there a way to make sure that glance-api container > switches to other controllers if controller 1 is not available. > > Thanks for you help > > Regards > Pradeep > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Fri Dec 9 11:52:41 2022 From: eblock at nde.ag (Eugen Block) Date: Fri, 09 Dec 2022 11:52:41 +0000 Subject: [ops] Migration from CentOS streams to Ubuntu and fast forward updates In-Reply-To: References: <8db8e7b9e25ec85f40a1e8e625d84f60816b86a8.camel@redhat.com> Message-ID: <20221209115241.Horde.SQ6ikp8SRifMLF3Q9-wVwVz@webmail.nde.ag> Hi, we're considering a similar scenario. I don't mean to hijack this thread, but would it be possible to migrate to Ubuntu without downtime? There are several options I can think of, one that worked for us in the past: a reinstallation of the control nodes but keeping the database (upgraded in a VM). But this means downtime until at least one control node is up and if possible I would like to avoid that, so I'm thinking of this: - Adding compute nodes and install them with Ubuntu Victoria (our current version). - Move the workload away from the other compute nodes and reinstall them one by one. That should work for computes (with cold or live migration). But what about the control nodes? We have them in HA setup, would it be possible to - Stop one control node, reinstall it with Ubuntu (Victoria), make sure all UIDs/GIDs are the same as before (we use ceph as shared storage) so the mounted CephFS still works between the nodes (same for the nova live-migration). - Repeat for other control nodes. Would those mixed control nodes work together? I will try it anyway in a test environment, but I wanted to ask if anybody has tried this approach. I'd appreciate any insights. Thanks, Eugen Zitat von Massimo Sgaravatto : > Ok, thanks a lot > If cold migration is supposed to work between hosts with different > operating systems, we are fine > Cheers, Massimo > > On Tue, Dec 6, 2022 at 10:48 AM Sean Mooney wrote: > >> On Tue, 2022-12-06 at 10:34 +0100, Dmitriy Rabotyagov wrote: >> > Hi Massimo, >> > >> > Assuming you have manual installation (not using any deployment >> > projects), I have several comments on your plan. >> > >> > 1. I've missed when you're going to upgrade Nova/Neutron on computes. >> > As you should not create a gap in OpenStack versions between >> > controllers and computes since nova-scheduler has a requirement on RPC >> > version computes will be using. Or, you must define the rpc version >> > explicitly in config to have older computes (but it's not really a >> > suggested way). >> > 2. Also once you do db sync, your second controller might misbehave >> > (as some fields could be renamed or new tables must be used), so you >> > will need to disable it from accepting requests until syncing >> > openstack version as well. If you're not going to upgrade it until >> > getting first one to Yoga - it should be disabled all the time until >> > you get Y services running on it. >> > 3. It's totally fine to run multi-distro setup. For computes the only >> > thing that can go wrong is live migrations, and that depends on >> > libvirt/qemu versions. I'm not sure if CentOS 8 Stream have compatible >> > version with Ubuntu 22.04 for live migrations to work though, but if >> > you care about them (I guess you do if you want to migrate workloads >> > semalessly) - you'd better check. But my guess would be that CentOS 8 >> > Stream should have compatible versions with Ubuntu 20.04 - still needs >> > deeper checking. >> the live migration issue is a know limiation >> basically it wont work across distro today because the qemu emulator path >> is distro specific and we do not pass that back form the destinatino to the >> source so libvirt will try and boot the vm referncign a binary that does >> not exist >> im sure you could propaly solve that with a symlink or similar. >> if you did the next issue you would hit is we dont normally allwo live >> mgration >> form a newer qemu/libvirt verson to an older one >> >> with all that said cold migration shoudl work fine and wihtine any one >> host os live migration >> will work. you could proably use host aggreates or simialr to enforece >> that if needed but >> cold migration is the best way to move the workloads form hypervior hosts >> with different distros. >> >> > >> > ??, 6 ???. 2022 ?. ? 09:40, Massimo Sgaravatto < >> massimo.sgaravatto at gmail.com>: >> > > >> > > Any comments on these questions ? >> > > Thanks, Massimo >> > > >> > > On Fri, Dec 2, 2022 at 5:02 PM Massimo Sgaravatto < >> massimo.sgaravatto at gmail.com> wrote: >> > > > >> > > > Dear all >> > > > >> > > > >> > > > >> > > > Dear all >> > > > >> > > > We are now running an OpenStack deployment: Yoga on CentOS8Stream. >> > > > >> > > > We are now thinking about a possible migration to Ubuntu for several >> reasons in particular: >> > > > >> > > > a- 5 years support for both the Operating System and OpenStack >> (considering LTS releases) >> > > > b- Possibility do do a clean update between two Ubuntu LTS releases >> > > > c- Easier procedure (also because of b) for fast forward updates >> (this is what we use to do) >> > > > >> > > > Considering the latter item, my understanding is that an update from >> Ubuntu 20.04 Ussuri to Ubuntu 22.04 Yoga could be done in the following >> > > > way (we have two controller nodes and n compute nodes): >> > > > >> > > > - Update of first controller node from Ubuntu 20.04 Ussuri to Ubuntu >> 20.04 Victoria (update OpenStack packages + dbsync) >> > > > - Update of first controller node from Ubuntu 20.04 Victoria to >> Ubuntu 20.04 Wallaby (update OpenStack packages + dbsync) >> > > > - Update of first controller node from Ubuntu 20.04 Wallaby to >> Ubuntu 20.04 Xena (update OpenStack packages + dbsync) >> > > > - Update of first controller node from Ubuntu 20.04 Xena to Ubuntu >> 20.04 Yoga (update OpenStack packages + dbsync) >> > > > - Update of first controller node from Ubuntu 20.04 Yoga to Ubuntu >> 22.04 Yoga (update Ubuntu packages) >> > > > - Update of second controller node from Ubuntu 20.04 Ussuri to >> Ubuntu 22.04 Yoga (update OpenStack and Ubuntu packages) >> > > > - Update of the compute nodes from Ubuntu 20.04 Ussuri to Ubuntu >> 22.04 Yoga (update OpenStack and Ubuntu packages) >> > > > >> > > > >> > > > We would do the same when migrating from Ubuntu 22.04 Yoga to Ubuntu >> 24.04 and the OpenStack xyz release (where xyz >> > > > is the LTS release used in Ubuntu 24.04) >> > > > >> > > > Is this supposed to work or am I missing something ? >> > > > >> > > > If we decide to migrate to Ubuntu, the first step would be the >> reinstallation with Ubuntu 22.04/Yoga of each node >> > > > currently running CentOS8 stream/Yoga. >> > > > I suppose there are no problems having in the same OpenStack >> installation nodes running the same >> > > > Openstack version but different operating systems, or am I wrong ? >> > > > >> > > > Thanks, Massimo >> > > > >> > >> >> From smooney at redhat.com Fri Dec 9 14:21:38 2022 From: smooney at redhat.com (Sean Mooney) Date: Fri, 09 Dec 2022 14:21:38 +0000 Subject: [ops] Migration from CentOS streams to Ubuntu and fast forward updates In-Reply-To: <20221209115241.Horde.SQ6ikp8SRifMLF3Q9-wVwVz@webmail.nde.ag> References: <8db8e7b9e25ec85f40a1e8e625d84f60816b86a8.camel@redhat.com> <20221209115241.Horde.SQ6ikp8SRifMLF3Q9-wVwVz@webmail.nde.ag> Message-ID: <5691566ba1da54d7b23c3b2285842928fcc3e3ca.camel@redhat.com> On Fri, 2022-12-09 at 11:52 +0000, Eugen Block wrote: > Hi, > > we're considering a similar scenario. I don't mean to hijack this > thread, but would it be possible to migrate to Ubuntu without > downtime? There are several options I can think of, one that worked > for us in the past: a reinstallation of the control nodes but keeping > the database (upgraded in a VM). But this means downtime until at > least one control node is up and if possible I would like to avoid > that, so I'm thinking of this: > - Adding compute nodes and install them with Ubuntu Victoria (our > current version). > - Move the workload away from the other compute nodes and reinstall > them one by one. yes so as has been mentioned in the tread before. openstack does not really care about the underlying os within reason. your installer tool might but there is nothing that would prevent you adding 3 new contolers going form 3->6 and then stoping the old controller after teh db have replicated and the new contoler are working simiarly you can add new computes and cold migrate one by one until your cloud is entrily workign on the other os. if you decouple changeing the opesntack version form changeing the os, i.e. do one then the other if you intend to do both. that should allow you to keep the cloud operational during this tansition. the main issues you ware likely to encounter are version mismatch beteen the rabbit/mariadb packages shiped on each distor (and ovn if you use that) and simiar constraits. for any given release of openstack we supprot both distrobutiosn so both should ahve compaitble packages with the openstack code but infra stucture compent like the db may not be compatibel the corresponding package form the other disto. if contol plane uptime is not required at all time a simple workaroudn for the db is just do a backup and restore. that would still allow you to have the workload remain active. when i first started working on openstack many years ago my default dev envionment had a a contoler with ubuntu and kernel ovs a centos compute with linux bridge and a ubuntu compute with ovs-dpdk. using devstack all 3 nodes could consume the rabbitmq and db form the ubuntu contoler and it was possibel to cold migrate vm between all 3 nodes (assuming you used vlan networking). i would not recommend that in production for any protracted period of tiem but it was defnietly possible to do in the past. > > That should work for computes (with cold or live migration). But what > about the control nodes? We have them in HA setup, would it be > possible to > - Stop one control node, reinstall it with Ubuntu (Victoria), make > sure all UIDs/GIDs are the same as before (we use ceph as shared > storage) so the mounted CephFS still works between the nodes (same for > the nova live-migration). live migration does not work due to the qemu emulator paths beign differnt between ubuntu and centos you may also have issues with the centos vm xmls reference selinux where as apparmor is used as teh security context on ubuntu. those issses should not matter for cold migration as the xml is regenerated on teh destination host not the souce host as it is with live migration. so if you can spawn a vm on the ubuntu node form a centos contol plane you should be able to cold migrate provide the ssh keys are exchanged. by the way while it proably does work nova has never offially supproted using cephfs or any other shared block/file system for the instance state path other then NFS. i know that some operators have got cephfs and glusterfs to work but that has never offically been supported or tested by the nova team. > - Repeat for other control nodes. > > Would those mixed control nodes work together? I will try it anyway in > a test environment, but I wanted to ask if anybody has tried this > approach. I'd appreciate any insights. as i said yes in principal it should work provided rabbit/mariadb/galeara are happy at the openstack level while untested we woudl expect openstack to work provided its the same version in both cases. usign something like kolla-ansible wehre you can use the same contaienr image on both os would help as that way you are sure there is no mismathc. with that said that is not a toplogy they test/supprot but is one they can deploy. so you would need to test this. > > Thanks, > Eugen > > > Zitat von Massimo Sgaravatto : > > > Ok, thanks a lot > > If cold migration is supposed to work between hosts with different > > operating systems, we are fine > > Cheers, Massimo > > > > On Tue, Dec 6, 2022 at 10:48 AM Sean Mooney wrote: > > > > > On Tue, 2022-12-06 at 10:34 +0100, Dmitriy Rabotyagov wrote: > > > > Hi Massimo, > > > > > > > > Assuming you have manual installation (not using any deployment > > > > projects), I have several comments on your plan. > > > > > > > > 1. I've missed when you're going to upgrade Nova/Neutron on computes. > > > > As you should not create a gap in OpenStack versions between > > > > controllers and computes since nova-scheduler has a requirement on RPC > > > > version computes will be using. Or, you must define the rpc version > > > > explicitly in config to have older computes (but it's not really a > > > > suggested way). > > > > 2. Also once you do db sync, your second controller might misbehave > > > > (as some fields could be renamed or new tables must be used), so you > > > > will need to disable it from accepting requests until syncing > > > > openstack version as well. If you're not going to upgrade it until > > > > getting first one to Yoga - it should be disabled all the time until > > > > you get Y services running on it. > > > > 3. It's totally fine to run multi-distro setup. For computes the only > > > > thing that can go wrong is live migrations, and that depends on > > > > libvirt/qemu versions. I'm not sure if CentOS 8 Stream have compatible > > > > version with Ubuntu 22.04 for live migrations to work though, but if > > > > you care about them (I guess you do if you want to migrate workloads > > > > semalessly) - you'd better check. But my guess would be that CentOS 8 > > > > Stream should have compatible versions with Ubuntu 20.04 - still needs > > > > deeper checking. > > > the live migration issue is a know limiation > > > basically it wont work across distro today because the qemu emulator path > > > is distro specific and we do not pass that back form the destinatino to the > > > source so libvirt will try and boot the vm referncign a binary that does > > > not exist > > > im sure you could propaly solve that with a symlink or similar. > > > if you did the next issue you would hit is we dont normally allwo live > > > mgration > > > form a newer qemu/libvirt verson to an older one > > > > > > with all that said cold migration shoudl work fine and wihtine any one > > > host os live migration > > > will work. you could proably use host aggreates or simialr to enforece > > > that if needed but > > > cold migration is the best way to move the workloads form hypervior hosts > > > with different distros. > > > > > > > > > > > ??, 6 ???. 2022 ?. ? 09:40, Massimo Sgaravatto < > > > massimo.sgaravatto at gmail.com>: > > > > > > > > > > Any comments on these questions ? > > > > > Thanks, Massimo > > > > > > > > > > On Fri, Dec 2, 2022 at 5:02 PM Massimo Sgaravatto < > > > massimo.sgaravatto at gmail.com> wrote: > > > > > > > > > > > > Dear all > > > > > > > > > > > > > > > > > > > > > > > > Dear all > > > > > > > > > > > > We are now running an OpenStack deployment: Yoga on CentOS8Stream. > > > > > > > > > > > > We are now thinking about a possible migration to Ubuntu for several > > > reasons in particular: > > > > > > > > > > > > a- 5 years support for both the Operating System and OpenStack > > > (considering LTS releases) > > > > > > b- Possibility do do a clean update between two Ubuntu LTS releases > > > > > > c- Easier procedure (also because of b) for fast forward updates > > > (this is what we use to do) > > > > > > > > > > > > Considering the latter item, my understanding is that an update from > > > Ubuntu 20.04 Ussuri to Ubuntu 22.04 Yoga could be done in the following > > > > > > way (we have two controller nodes and n compute nodes): > > > > > > > > > > > > - Update of first controller node from Ubuntu 20.04 Ussuri to Ubuntu > > > 20.04 Victoria (update OpenStack packages + dbsync) > > > > > > - Update of first controller node from Ubuntu 20.04 Victoria to > > > Ubuntu 20.04 Wallaby (update OpenStack packages + dbsync) > > > > > > - Update of first controller node from Ubuntu 20.04 Wallaby to > > > Ubuntu 20.04 Xena (update OpenStack packages + dbsync) > > > > > > - Update of first controller node from Ubuntu 20.04 Xena to Ubuntu > > > 20.04 Yoga (update OpenStack packages + dbsync) > > > > > > - Update of first controller node from Ubuntu 20.04 Yoga to Ubuntu > > > 22.04 Yoga (update Ubuntu packages) > > > > > > - Update of second controller node from Ubuntu 20.04 Ussuri to > > > Ubuntu 22.04 Yoga (update OpenStack and Ubuntu packages) > > > > > > - Update of the compute nodes from Ubuntu 20.04 Ussuri to Ubuntu > > > 22.04 Yoga (update OpenStack and Ubuntu packages) > > > > > > > > > > > > > > > > > > We would do the same when migrating from Ubuntu 22.04 Yoga to Ubuntu > > > 24.04 and the OpenStack xyz release (where xyz > > > > > > is the LTS release used in Ubuntu 24.04) > > > > > > > > > > > > Is this supposed to work or am I missing something ? > > > > > > > > > > > > If we decide to migrate to Ubuntu, the first step would be the > > > reinstallation with Ubuntu 22.04/Yoga of each node > > > > > > currently running CentOS8 stream/Yoga. > > > > > > I suppose there are no problems having in the same OpenStack > > > installation nodes running the same > > > > > > Openstack version but different operating systems, or am I wrong ? > > > > > > > > > > > > Thanks, Massimo > > > > > > > > > > > > > > > > > > > > From noonedeadpunk at gmail.com Fri Dec 9 14:29:24 2022 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Fri, 9 Dec 2022 15:29:24 +0100 Subject: [sahara][sahara-cores] Combined patch that recovers CI tests Message-ID: Hi sahara cores, As you might know Sahara has been broken for quite some time now due to multiple reasons, and didn't have passing tests for Zed release at all. I have come up with a patch [1] that fixes major issues, including: * Zuul Queus definition * Adapting new jsonschema version * Replacing usage of iteritems with six As a result Zuul is happy, all tempest and tox tests are passing it the moment. It would be great if you could review these changes and land other important patches to unblock other contributors in the project and prevent it from falling under "inactive" projects list. [1] https://review.opendev.org/c/openstack/sahara/+/864728 From gmann at ghanshyammann.com Fri Dec 9 15:59:11 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 09 Dec 2022 07:59:11 -0800 Subject: [gate][stable][qa] Hold recheck: Devstack based job using tox4 are broken (especially greande, stable jobs) In-Reply-To: <184f3a5621f.faad1123548722.7315099506744764056@ghanshyammann.com> References: <184f3a5621f.faad1123548722.7315099506744764056@ghanshyammann.com> Message-ID: <184f79cacba.11b830391608564.9221239950148169565@ghanshyammann.com> ---- On Thu, 08 Dec 2022 13:30:12 -0800 Ghanshyam Mann wrote --- > Hi All, > > As you might know, the devstack-based job fails on the tempest command with the below error: > > Error- venv: failed with tempest is not allowed, use allowlist_externals to allow it > > It is failing where jobs are using the tox4 (in grenade job or stable branch job or some master branch > job like focal job using tox4[1]). This is another issue of mix match of tox version in stable and master jobs > which we are discussing in a separate thread[2]. > > On tempest command or allowlist_externals failure, we are testing one fix for that but please hold recheck > until it gets fixed or any workaround is merged > - https://review.opendev.org/q/Ifc8fc6cd7aebda147043a52d6baf99a2bacc009c We have pinned the tox for stable branches which is correct way to do and for master also we have pinned it as temporary workaround until we get it fixed as master testing should be moved to tox4. All the patches of pinning are merged and gate is back, you can recheck your patch - https://review.opendev.org/q/I9a138af94dedc0d8ce5a0d519d75779415d3c30b I will keep bug#1999183 open to fix and unpin it for the master. -gmann > > I have opened a bug also to track it https://bugs.launchpad.net/devstack/+bug/1999183 > > [1] https://zuul.opendev.org/t/openstack/build/aaad7312f68246c0979e1a43cfe96c3b/log/controller/logs/pip3-freeze.txt#229 > [2] https://lists.openstack.org/pipermail/openstack-discuss/2022-December/031442.html > > -gmann > > From gmann at ghanshyammann.com Fri Dec 9 16:03:09 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 09 Dec 2022 08:03:09 -0800 Subject: [dev][infra][qa][tact-sig] Tox 4.0.0 breaking changes In-Reply-To: References: <20221207191941.bft3np4yndrcjspa@yuggoth.org> <20221208175941.uxymeb7tq7kgrngk@yuggoth.org> <184f37b5653.1145eaddb547711.668145682639910818@ghanshyammann.com> <20221208210111.zn7f3m6nsb5wjhki@yuggoth.org> Message-ID: <184f7a05127.aedcc056608860.35619324587374516@ghanshyammann.com> ---- On Thu, 08 Dec 2022 13:14:56 -0800 Jay Faulkner wrote --- > > > On Thu, Dec 8, 2022 at 1:12 PM Jeremy Stanley fungi at yuggoth.org> wrote: > On 2022-12-08 12:44:17 -0800 (-0800), Ghanshyam Mann wrote: > [...] > > It seems master jobs using tox<4[1] but all stable jobs using > > tox>=4.0[2] and failing. And both using ensure-tox > > > > [1] https://zuul.opendev.org/t/openstack/build/8a4585f2961a4854ad96c7d2a188b557/log/controller/logs/pip3-freeze.txt#235 > > [2] https://zuul.opendev.org/t/openstack/build/78cddc1180ff40109bbe17df884d23d8/log/controller/logs/pip3-freeze.txt#227 > [...] > > Those are Tempest jobs, which do some of their own installing of tox > directly rather than just relying on the version provided by > ensure-tox. Check with the QA team, since they've been working on > fixes for those. > > > > Does it mean all the stable jobs also will start using the latest > > tox (>4.0) which is breaking the jobs and we need to fix them in > > stable branches too? > [...] > > Unless we change those jobs to pin to an old version of tox for > stable branches, yes. > > I'd be extremely in favor of this change. We should do everything we can to reduce the amount of effort needed for individual projects to keep stable branches passing tests. Yeah, I have pinned it on stable branches testing in devstack. It will take care of most of the devstack-based jobs unless there is an explicit installation of tox in other jobs but that can be pinned in a similar way. https://review.opendev.org/q/I9a138af94dedc0d8ce5a0d519d75779415d3c30b+-branch:master > -Jay Faulkner > From elod.illes at est.tech Fri Dec 9 16:15:52 2022 From: elod.illes at est.tech (=?utf-8?B?RWzDtWQgSWxsw6lz?=) Date: Fri, 9 Dec 2022 16:15:52 +0000 Subject: [release] Release countdown for week R-14, Dec 12 - 16 Message-ID: Development Focus ----------------- The Antelope-2 milestone will happen next month, on January 5th, 2023. 2023.1 Antelope-related specs should now be finalized so that teams can move to implementation ASAP. Some teams observe specific deadlines on the second milestone (mostly spec freezes): please refer to https://releases.openstack.org/antelope/schedule.html for details. General Information ------------------- Please remember that libraries need to be released at least once per milestone period. At milestone 2, the release team will propose releases for any library that has not been otherwise released since milestone 1. Other non-library deliverables that follow the cycle-with-intermediary release model should have an intermediary release before milestone-2. Those who haven't will be proposed to switch to the cycle-with-rc model, which is more suited to deliverables that are released only once per cycle. (Note: cycle-with-rc release model does not suit for every project) At milestone-2 we also freeze the contents of the final release. If you have a new deliverable that should be included in the final release, you should make sure it has a deliverable file in: https://opendev.org/openstack/releases/src/branch/master/deliverables/antelope You should request a beta release (or intermediary release) for those new deliverables by milestone-2. We understand some may not be quite ready for a full release yet, but if you have something minimally viable to get released it would be good to do a 0.x release to exercise the release tooling for your deliverables. See the MembershipFreeze description for more details: https://releases.openstack.org/antelope/schedule.html#a-mf Finally, now may be a good time for teams to check on any stable releases that need to be done for your deliverables. If you have bugfixes that have been backported, but no stable release getting those. If you are unsure what is out there committed but not released, in the openstack/releases repo, running the command "tools/list_stable_unreleased_changes.sh " gives a nice report. Upcoming Deadlines & Dates -------------------------- Antelope-2 Milestone: January 5th, 2023 Final 2023.1 Antelope release: March 22nd, 2023 El?d Ill?s irc: elodilles @ #openstack-release -------------- next part -------------- An HTML attachment was scrubbed... URL: From pradeep8985 at gmail.com Fri Dec 9 16:16:59 2022 From: pradeep8985 at gmail.com (pradeep) Date: Fri, 9 Dec 2022 21:46:59 +0530 Subject: auto switch of glance-api containers to other controllers In-Reply-To: References: Message-ID: Hi Dmitriy, Thanks for the response. Sorry, I forgot to mention that i used kolla ansible for deployment. As per kolla docs, glance-api container runs only in one controller. So i am looking from kolla perspective if it can automatically switch it over other available controllers. Regards Pradeep On Fri, 9 Dec 2022 at 14:51, Dmitriy Rabotyagov wrote: > Hi, > > I'm not sure if you used any deployment tool for your environment, but > usually for such failover there's a load balancer in conjunction with VRRP > or Anycast, that able to detect when controller1 is down and forward > traffic to another controller. > > As example, OpenStack-Ansible as default option installs haproxy to each > controller and keepalived to implement VRRP and Virtual IPs. So when glance > is down on controller1, haproxy will detect that and forward traffic to > other controllers. If controller1 is down as a whole, keepalived will > detect that and raise VIP on controller2, so all client traffic will go > there. Since controller2 also have haproxy, it will pass traffic to > available backends based on the source IP. > > > ??, 9 ???. 2022 ?., 09:06 pradeep : > >> Hello All, >> >> I understand that glance-api container run only one controller always. I >> have tried to check if the glance-api container to switch over to the next >> available controller by means of rebooting and stopping the containers, but >> nothing happened. Is there a way to make sure that glance-api container >> switches to other controllers if controller 1 is not available. >> >> Thanks for you help >> >> Regards >> Pradeep >> >> >> -- ----------------------- Regards Pradeep Kumar -------------- next part -------------- An HTML attachment was scrubbed... URL: From jerome.becot at deveryware.com Thu Dec 8 22:13:27 2022 From: jerome.becot at deveryware.com (=?UTF-8?B?SsOpcsO0bWUgQkVDT1Q=?=) Date: Thu, 8 Dec 2022 23:13:27 +0100 Subject: Issue resizing volumes attached to running volume backed instances Message-ID: Hello Openstack, We have Ussuri deployed on a few clouds, and they're all plugged to PureStorage Arrays. We allow users to only use volumes for their servers. It means that each server disk is a LUN attached over ISCSI (with multipath) on the compute node hosting the server. Everything works quite fine, but we have a weird issue when extending volumes attached to running instances. The guests notice the new disk size .. of the last extent. Say I have a server with a 10gb disk. I add 5gb. On the guest, still 10gb. I add another 5gb, and on the guest I get 15, and so on. I've turned the debug mode on and I could see no error in the log. Looking closer at the log I could catch the culprit: 2022-12-08 17:35:13.998 46195 DEBUG os_brick.initiator.linuxscsi [] Starting size: *76235669504* 2022-12-08 17:35:14.028 46195 DEBUG os_brick.initiator.linuxscsi [] volume size after scsi device rescan *80530636800* extend_volume 2022-12-08 17:35:14.035 46195 DEBUG os_brick.initiator.linuxscsi [] Volume device info = {'device': '/dev/disk/by-path/ip-1...1:3260-iscsi-iqn.2010-06.com.purestorage:flasharray.x-lun-10', 'host': '5', 'channel': '0', 'id': '0', 'lun': '10'} extend_volume 2022-12-08 17:35:14.348 46195 INFO os_brick.initiator.linuxscsi [] Find Multipath device file for volume WWN 3624... 2022-12-08 17:35:14.349 46195 DEBUG os_brick.initiator.linuxscsi [] Checking to see if /dev/disk/by-id/dm-uuid-mpath-3624.. exists yet. wait_for_path 2022-12-08 17:35:14.349 46195 DEBUG os_brick.initiator.linuxscsi [] /dev/disk/by-id/dm-uuid-mpath-3624... has shown up. wait_for_path 2022-12-08 17:35:14.382 46195 INFO os_brick.initiator.linuxscsi [] mpath(/dev/disk/by-id/dm-uuid-mpath-3624) *current size 76235669504* 2022-12-08 17:35:14.412 46195 INFO os_brick.initiator.linuxscsi [] mpath(/dev/disk/by-id/dm-uuid-mpath-3624) *new size 76235669504* 2022-12-08 17:35:14.413 46195 DEBUG oslo_concurrency.lockutils [] Lock "extend_volume" released by "os_brick.initiator.connectors.iscsi.ISCSIConnector.extend_volume" :: held 2.062s inner 2022-12-08 17:35:14.459 46195 DEBUG os_brick.initiator.connectors.iscsi [] <== extend_volume: return (2217ms) *76235669504* trace_logging_wrapper 2022-12-08 17:35:14.461 46195 DEBUG nova.virt.libvirt.volume.iscsi [] Extend iSCSI Volume /dev/dm-28; new_size=*76235669504* extend_volume 2022-12-08 17:35:14.462 46195 DEBUG nova.virt.libvirt.driver [] Resizing target device /dev/dm-28 to *76235669504* _resize_attached_volume The logs clearly shows that the rescan confirm the new size but when interrogating multipath, it does not. But requesting multipath few seconds after on the command line shows the new size as well. It explains the behaviour. I'm running Ubuntu 18.04 with multipath 0.7.4-2ubuntu3.2. The os-brick code for multipath is far more basic than the one in master branch. Maybe the multipath version installed is too recent for os-brick. Thanks for the help Jerome -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Sat Dec 10 02:42:22 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 09 Dec 2022 18:42:22 -0800 Subject: [all][tc] What's happening in Technical Committee: summary 2022 Dec 09: Reading: 5 min Message-ID: <184f9e98897.f213f2bb621915.3522484393587633420@ghanshyammann.com> Hello Everyone, Here is this week's summary of the Technical Committee activities. 1. TC Meetings: ============ * We had this week's meeting on Dec 7. Most of the meeting discussions are summarized in this email. Meeting recording and summary logs are available @ https://www.youtube.com/watch?v=JQYzoJV35Xw @ https://meetings.opendev.org/meetings/tc/2022/tc.2022-12-07-16.00.log.html * Next TC weekly meeting will be on Dec 14 Wed at 16:00 UTC, Feel free to add the topic to the agenda[1] by Dec 13. 2. What we completed this week: ========================= * Technical Election changes: extending the period for nomination and voting to two weeks You might have observed in elections that a lot of candidates miss the nomination deadline. To improve the situation, TC discussed it in the election retrospective in PTG and decided to extend the nomination and the voting period to two weeks. This has been updated in the TC charter[2]. This change will be applicable from the next (2023.2 cycles) technical elections. Another update to the next elections is that ianychoi and tonyb will be serving as election officials[3]. Thanks to both to volunteer for it. * Added Adjutant to inactive project list[4] Dale Smith (current PTL) is in sync with this and already started working on the pending things to mark it Active. For example, he fixed the gate[5]. * Added the infinidat-tools subordinate charm to OpenStack charms[6] 3. Activities In progress: ================== TC Tracker for the 2023.1 cycle ------------------------------------- * Current cycle working items and their progress are present in the 2023.1 tracker etherpad[7]. Open Reviews ----------------- * Two open reviews for ongoing activities[8]. Change in Inactive project timeline ----------------------------------------- While proposing the Adjutant project in the Inactive list, the release team pointed out that we should have the final list of deliverables to release by milestone 2 instead of the final release time. This is a valid point and TC is reflecting the same in Inactive project timeline where TC will take the final decision on inactive projects to move back to active again and consider for release by milestone 2. The change in TC documentation is up for review[9] Mistral release and more maintainers ------------------------------------------- The release team identified the Mistral needs more work to be considered for release and proposing to mark Mistral release deprecated [10]. The good thing is that there are more volunteers from OVHcloud to help maintain it[11]. TC will monitor the situation and decide on the Mistral release before milestone 2. Renovate translation SIG i18 ---------------------------------- * Brian on behalf of TC presented it in the Board meeting on Dec 6th and the budget for the Weblate tool license seems ok but we still need an engineer to do the translation migration and maintenance. Here is the proposal discussed in the Board meeting[12] Project updates ------------------- * None 4. How to contact the TC: ==================== If you would like to discuss or give feedback to TC, you can reach out to us in multiple ways: 1. Email: you can send the email with tag [tc] on openstack-discuss ML[13]. 2. Weekly meeting: The Technical Committee conduct a weekly meeting every Thursday 15:00 UTC [14] 3. Ping us using 'tc-members' nickname on #openstack-tc IRC channel. [1] https://lists.openstack.org/pipermail/openstack-discuss/2022-November/031240.html [2] https://review.opendev.org/c/openstack/governance/+/865367 [3] https://governance.openstack.org/election/#election-officials [4] https://review.opendev.org/c/openstack/governance/+/849153 [5] https://review.opendev.org/c/openstack/adjutant/+/866823 [6] https://review.opendev.org/c/openstack/governance/+/864067 [7] https://etherpad.opendev.org/p/tc-2023.1-tracker [8] https://review.opendev.org/q/projects:openstack/governance+status:open [9] https://review.opendev.org/c/openstack/governance/+/867062 [10] https://review.opendev.org/c/openstack/governance/+/866562 [11] https://lists.openstack.org/pipermail/openstack-discuss/2022-December/031421.html [12] https://docs.google.com/document/d/1PwGASync8blZoFOlqaIe3xwJkOc1iByHKVbGDTbNKY0/edit [13] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [14] http://eavesdrop.openstack.org/#Technical_Committee_Meeting -gmann From fereshtehloghmani at gmail.com Sat Dec 10 13:00:12 2022 From: fereshtehloghmani at gmail.com (fereshteh loghmani) Date: Sat, 10 Dec 2022 16:30:12 +0330 Subject: openstack 504 error Message-ID: hello i use Kolla ansible multi-region. in one of my regions that is not keystone and hypervisor on it. some times I receive the error Gateway Timeout. how could i solve this problem? thanks is advance. Gateway Timeout The gateway did not receive a timely response from the upstream server or application. -------------- next part -------------- An HTML attachment was scrubbed... URL: From senrique at redhat.com Sat Dec 10 13:14:01 2022 From: senrique at redhat.com (Sofia Enriquez) Date: Sat, 10 Dec 2022 13:14:01 +0000 Subject: [cinder] Problem with restoring purestorage volume backup In-Reply-To: References: Message-ID: Hello, It would be really helpful if you try the patch and give feedback. However, the patch is not merged yet, so still in review process. Thanks, Sofia El El s?b, 10 de dic. de 2022 a la(s) 11:56, ????? ?????? < rishat.azizov at gmail.com> escribi?: > Hello, > > Thanks. We use cinder 18.2.0-1 Wallaby. Can we use this patch? Or does it > need to be improved? > > ??, 10 ???. 2022 ?. ? 17:54, ????? ?????? : > >> Hello, >> >> Thanks. We use cinder 18.2.0-1 Wallaby. Can we use this patch? Or does >> it need to be improved? >> >> ??, 26 ???. 2022 ?. ? 23:12, Sofia Enriquez : >> >>> Hello, >>> This is a major bug in Cinder that has been reported. Please check >>> https://launchpad.net/bugs/1895035. >>> The bug has a fix proposed to master to haven't merged yet [2]. >>> I'll mention it again in next week's Cinder meeting. >>> Regards, >>> Sofia >>> [2] https://review.opendev.org/c/openstack/cinder/+/750782 >>> >>> On Wed, Oct 26, 2022 at 9:45 AM ????? ?????? >>> wrote: >>> >>>> Hello! >>>> >>>> We have a problem with cinder-backup with cephbackup driver and volume >>>> on purestorage. When backing up purestorage volume with cinder-backup with >>>> cephbackup driver it creates in ceph pool and everything is ok. But >>>> when we try to restore the backup, it is not restored with an error >>>> "rbd.ImageNotFound" in the screenshot attached to this email. This happens >>>> because the original image is not in rbd, it is in purestorage. It is >>>> not clear why the cinder is trying to look for a disk in the ceph. Could >>>> you please help with this? >>>> >>>> Thanks. Regards. >>>> >>> >>> >>> -- >>> >>> Sof?a Enriquez >>> >>> she/her >>> >>> Software Engineer >>> >>> Red Hat PnT >>> >>> IRC: @enriquetaso >>> @RedHat Red Hat >>> Red Hat >>> >>> >>> >>> -- Sof?a Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From nguyenhuukhoinw at gmail.com Sat Dec 10 14:39:08 2022 From: nguyenhuukhoinw at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gSOG7r3UgS2jDtGk=?=) Date: Sat, 10 Dec 2022 21:39:08 +0700 Subject: [Magnum] Technology and Ubuntu Driver. Message-ID: Hello guys, Can you tell me which tools Magnum used to build the k8s cluster. I do a lot of research but I still have not figured it out. And one more thing, Why it does not support Ubuntu Distro? Thank you. Nguyen Huu Khoi -------------- next part -------------- An HTML attachment was scrubbed... URL: From acozyurt at gmail.com Sat Dec 10 16:35:33 2022 From: acozyurt at gmail.com (=?UTF-8?Q?Can_=C3=96zyurt?=) Date: Sat, 10 Dec 2022 19:35:33 +0300 Subject: [Magnum] Technology and Ubuntu Driver. In-Reply-To: References: Message-ID: Hi, Heat Stack Templates are what Magnum passes to Heat to generate a cluster. Documentation link On Sat, 10 Dec 2022 at 17:53, Nguy?n H?u Kh?i wrote: > Hello guys, > > Can you tell me which tools Magnum used to build the k8s cluster. I do a > lot of research but I still have not figured it out. And one more thing, > Why it does not support Ubuntu Distro? > Thank you. > > Nguyen Huu Khoi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nguyenhuukhoinw at gmail.com Sat Dec 10 23:13:54 2022 From: nguyenhuukhoinw at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gSOG7r3UgS2jDtGk=?=) Date: Sun, 11 Dec 2022 06:13:54 +0700 Subject: [Magnum] Technology and Ubuntu Driver. In-Reply-To: References: Message-ID: Thank you. Does it use tools like kubeadm to bootstrap cluster, that is exactly what I mean? Nguyen Huu Khoi On Sat, Dec 10, 2022 at 11:35 PM Can ?zyurt wrote: > Hi, > > Heat Stack Templates are what Magnum passes to Heat to generate a cluster. > > > Documentation link > > > > > On Sat, 10 Dec 2022 at 17:53, Nguy?n H?u Kh?i > wrote: > >> Hello guys, >> >> Can you tell me which tools Magnum used to build the k8s cluster. I do a >> lot of research but I still have not figured it out. And one more thing, >> Why it does not support Ubuntu Distro? >> Thank you. >> >> Nguyen Huu Khoi >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nguyenhuukhoinw at gmail.com Sun Dec 11 00:59:34 2022 From: nguyenhuukhoinw at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gSOG7r3UgS2jDtGk=?=) Date: Sun, 11 Dec 2022 07:59:34 +0700 Subject: [Openstack] Lack of Balance solution such as Watcher. Message-ID: I am building a private cloud.Everything is ok. But I cannot find a way to balance vm when It have heavy load which cause impact other vm on the compute node. I found solutions such as Watcher and Leveller. But they need to be done manually. Watcher is not good because It need cpu metric such as cpu load in Ceilometer which is removed so we cannot use it. Leveller is good, but It is obsolete. Nguyen Huu Khoi -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Sun Dec 11 01:13:23 2022 From: mnaser at vexxhost.com (Mohammed Naser) Date: Sat, 10 Dec 2022 20:13:23 -0500 Subject: [Magnum] Technology and Ubuntu Driver. In-Reply-To: References: Message-ID: Magnum does not do this but we built a driver that uses the Cluster API https://github.com/vexxhost/magnum-cluster-api On Sat, Dec 10, 2022 at 6:29 PM Nguy?n H?u Kh?i wrote: > Thank you. > Does it use tools like kubeadm to bootstrap cluster, that is exactly what > I mean? > Nguyen Huu Khoi > > > On Sat, Dec 10, 2022 at 11:35 PM Can ?zyurt wrote: > >> Hi, >> >> Heat Stack Templates are what Magnum passes to Heat to generate a cluster. >> >> >> Documentation link >> >> >> >> >> On Sat, 10 Dec 2022 at 17:53, Nguy?n H?u Kh?i >> wrote: >> >>> Hello guys, >>> >>> Can you tell me which tools Magnum used to build the k8s cluster. I do a >>> lot of research but I still have not figured it out. And one more thing, >>> Why it does not support Ubuntu Distro? >>> Thank you. >>> >>> Nguyen Huu Khoi >>> >> -- Mohammed Naser VEXXHOST, Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nguyenhuukhoinw at gmail.com Sun Dec 11 01:23:29 2022 From: nguyenhuukhoinw at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gSOG7r3UgS2jDtGk=?=) Date: Sun, 11 Dec 2022 08:23:29 +0700 Subject: [Magnum] Technology and Ubuntu Driver. In-Reply-To: References: Message-ID: Thank you very much. Nguyen Huu Khoi On Sun, Dec 11, 2022 at 8:13 AM Mohammed Naser wrote: > Magnum does not do this but we built a driver that uses the Cluster API > > https://github.com/vexxhost/magnum-cluster-api > > On Sat, Dec 10, 2022 at 6:29 PM Nguy?n H?u Kh?i > wrote: > >> Thank you. >> Does it use tools like kubeadm to bootstrap cluster, that is exactly what >> I mean? >> Nguyen Huu Khoi >> >> >> On Sat, Dec 10, 2022 at 11:35 PM Can ?zyurt wrote: >> >>> Hi, >>> >>> Heat Stack Templates are what Magnum passes to Heat to generate a >>>> cluster. >>> >>> >>> Documentation link >>> >>> >>> >>> >>> On Sat, 10 Dec 2022 at 17:53, Nguy?n H?u Kh?i >>> wrote: >>> >>>> Hello guys, >>>> >>>> Can you tell me which tools Magnum used to build the k8s cluster. I do >>>> a lot of research but I still have not figured it out. And one more thing, >>>> Why it does not support Ubuntu Distro? >>>> Thank you. >>>> >>>> Nguyen Huu Khoi >>>> >>> -- > Mohammed Naser > VEXXHOST, Inc. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nguyenhuukhoinw at gmail.com Sun Dec 11 01:50:43 2022 From: nguyenhuukhoinw at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gSOG7r3UgS2jDtGk=?=) Date: Sun, 11 Dec 2022 08:50:43 +0700 Subject: [Magnum] Technology and Ubuntu Driver. In-Reply-To: References: Message-ID: Hello, Can you tell me how Magnum bootstrap k8s cluster with native API. I mean after deploy heat is deployed. I just need to know some points to research. Nguyen Huu Khoi On Sun, Dec 11, 2022 at 8:13 AM Mohammed Naser wrote: > Magnum does not do this but we built a driver that uses the Cluster API > > https://github.com/vexxhost/magnum-cluster-api > > On Sat, Dec 10, 2022 at 6:29 PM Nguy?n H?u Kh?i > wrote: > >> Thank you. >> Does it use tools like kubeadm to bootstrap cluster, that is exactly what >> I mean? >> Nguyen Huu Khoi >> >> >> On Sat, Dec 10, 2022 at 11:35 PM Can ?zyurt wrote: >> >>> Hi, >>> >>> Heat Stack Templates are what Magnum passes to Heat to generate a >>>> cluster. >>> >>> >>> Documentation link >>> >>> >>> >>> >>> On Sat, 10 Dec 2022 at 17:53, Nguy?n H?u Kh?i >>> wrote: >>> >>>> Hello guys, >>>> >>>> Can you tell me which tools Magnum used to build the k8s cluster. I do >>>> a lot of research but I still have not figured it out. And one more thing, >>>> Why it does not support Ubuntu Distro? >>>> Thank you. >>>> >>>> Nguyen Huu Khoi >>>> >>> -- > Mohammed Naser > VEXXHOST, Inc. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From homelandmailbox at gmail.com Sun Dec 11 06:12:10 2022 From: homelandmailbox at gmail.com (f.loghmani) Date: Sun, 11 Dec 2022 09:42:10 +0330 Subject: openstack 504 Gateway Message-ID: hello I use Kolla ansible multi-region. in one of my regions that is not keystone and horizon on it, sometimes I receive the error Gateway Timeout. how could I solve this problem? (in the same time I could open other region panels and it works correctly) thanks in advance. Gateway Timeout The gateway did not receive a timely response from the upstream server or application -------------- next part -------------- An HTML attachment was scrubbed... URL: From mike_mp at zzzcomputing.com Sun Dec 11 17:23:37 2022 From: mike_mp at zzzcomputing.com (Mike Bayer) Date: Sun, 11 Dec 2022 12:23:37 -0500 Subject: [dev][infra][qa][tact-sig] Tox 4.0.0 breaking changes In-Reply-To: <20221207191941.bft3np4yndrcjspa@yuggoth.org> References: <20221207191941.bft3np4yndrcjspa@yuggoth.org> Message-ID: <5b29de91-45d3-4357-92e6-0f2ece60214a@app.fastmail.com> Hi - Also bit by sudden changes in tox 4, and I think it's important that the tox maintainers know that they did not adequately warn for these changes. our jobs should not be "breaking" without us having adequate warnings from previous tox versions. My issue at https://github.com/tox-dev/tox/issues/2676 has been locked and the maintainer does not seem to think tox did anything incorrectly here. Pretty concerning for such a foundational project, so I would encourage new issues be raised at https://github.com/tox-dev/tox/issues/ regarding individual backwards-incompatible changes that were made without warnings. On Wed, Dec 7, 2022, at 2:19 PM, Jeremy Stanley wrote: > Tox 4.0.0, a major rewrite, just appeared on PyPI in the past couple > of hours, so jobs are breaking right and left. We're going to try > pinning to latest 3.x in zuul-jobs for the near term while we work > through some of the more obvious issues with it, but until then lots > of tox-based Zuul jobs are going to be broken. I'll follow up to > this thread once we think things should be stabilized and we can > start trying to pick through what's going to need fixing for the new > tox version. > -- > Jeremy Stanley > > > *Attachments:* > * signature.asc -------------- next part -------------- An HTML attachment was scrubbed... URL: From pradeep8985 at gmail.com Mon Dec 12 04:53:05 2022 From: pradeep8985 at gmail.com (pradeep) Date: Mon, 12 Dec 2022 10:23:05 +0530 Subject: auto switch of glance-api containers to other controllers In-Reply-To: References: Message-ID: Any help is highly appreciated. Thank you! On Fri, 9 Dec 2022 at 21:46, pradeep wrote: > Hi Dmitriy, Thanks for the response. Sorry, I forgot to mention that i > used kolla ansible for deployment. As per kolla docs, glance-api container > runs only in one controller. So i am looking from kolla perspective if it > can automatically switch it over other available controllers. > > Regards > Pradeep > > On Fri, 9 Dec 2022 at 14:51, Dmitriy Rabotyagov > wrote: > >> Hi, >> >> I'm not sure if you used any deployment tool for your environment, but >> usually for such failover there's a load balancer in conjunction with VRRP >> or Anycast, that able to detect when controller1 is down and forward >> traffic to another controller. >> >> As example, OpenStack-Ansible as default option installs haproxy to each >> controller and keepalived to implement VRRP and Virtual IPs. So when glance >> is down on controller1, haproxy will detect that and forward traffic to >> other controllers. If controller1 is down as a whole, keepalived will >> detect that and raise VIP on controller2, so all client traffic will go >> there. Since controller2 also have haproxy, it will pass traffic to >> available backends based on the source IP. >> >> >> ??, 9 ???. 2022 ?., 09:06 pradeep : >> >>> Hello All, >>> >>> I understand that glance-api container run only one controller always. I >>> have tried to check if the glance-api container to switch over to the next >>> available controller by means of rebooting and stopping the containers, but >>> nothing happened. Is there a way to make sure that glance-api container >>> switches to other controllers if controller 1 is not available. >>> >>> Thanks for you help >>> >>> Regards >>> Pradeep >>> >>> >>> > > -- > ----------------------- > Regards > Pradeep Kumar > -- ----------------------- Regards Pradeep Kumar -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Mon Dec 12 08:57:00 2022 From: eblock at nde.ag (Eugen Block) Date: Mon, 12 Dec 2022 08:57:00 +0000 Subject: [ops] Migration from CentOS streams to Ubuntu and fast forward updates In-Reply-To: <5691566ba1da54d7b23c3b2285842928fcc3e3ca.camel@redhat.com> References: <8db8e7b9e25ec85f40a1e8e625d84f60816b86a8.camel@redhat.com> <20221209115241.Horde.SQ6ikp8SRifMLF3Q9-wVwVz@webmail.nde.ag> <5691566ba1da54d7b23c3b2285842928fcc3e3ca.camel@redhat.com> Message-ID: <20221212085700.Horde.QppQoS9FVKVzXIi3SSILYQJ@webmail.nde.ag> Hi Sean, your input is valuable as always, I appreciate it. Testing will be key here anyway, so that's a given. I didn't mention which distro we're currently using (openSUSE Leap 15.2) but it's difficult to get newer packages for that (or later) releases, that's why we'll have to switch. We'll see how the testing goes, because we also need to adopt our installation mechanisms to using a different distro, that may take some time. Anyway, thanks again! Eugen Zitat von Sean Mooney : > On Fri, 2022-12-09 at 11:52 +0000, Eugen Block wrote: >> Hi, >> >> we're considering a similar scenario. I don't mean to hijack this >> thread, but would it be possible to migrate to Ubuntu without >> downtime? There are several options I can think of, one that worked >> for us in the past: a reinstallation of the control nodes but keeping >> the database (upgraded in a VM). But this means downtime until at >> least one control node is up and if possible I would like to avoid >> that, so I'm thinking of this: >> - Adding compute nodes and install them with Ubuntu Victoria (our >> current version). >> - Move the workload away from the other compute nodes and reinstall >> them one by one. > > yes so as has been mentioned in the tread before. > openstack does not really care about the underlying os within reason. > > your installer tool might but there is nothing that would prevent > you adding 3 new contolers going form 3->6 > and then stoping the old controller after teh db have replicated and > the new contoler are working > > simiarly you can add new computes and cold migrate one by one until > your cloud is entrily workign on the other os. > > if you decouple changeing the opesntack version form changeing the os, > i.e. do one then the other if you intend to do both. > that should allow you to keep the cloud operational during this tansition. > > the main issues you ware likely to encounter are version mismatch > beteen the rabbit/mariadb packages shiped on each distor > (and ovn if you use that) and simiar constraits. > > for any given release of openstack we supprot both distrobutiosn so > both should ahve compaitble packages with the openstack > code but infra stucture compent like the db may not be compatibel > the corresponding package form the other disto. > > if contol plane uptime is not required at all time a simple > workaroudn for the db is just do a backup and restore. > that would still allow you to have the workload remain active. > > when i first started working on openstack many years ago my default > dev envionment had a a contoler with ubuntu and kernel ovs > a centos compute with linux bridge and a ubuntu compute with ovs-dpdk. > > using devstack all 3 nodes could consume the rabbitmq and db form > the ubuntu contoler and it was possibel to cold migrate vm > between all 3 nodes (assuming you used vlan networking). > > i would not recommend that in production for any protracted period > of tiem but it was defnietly possible to do in the past. > >> >> That should work for computes (with cold or live migration). But what >> about the control nodes? We have them in HA setup, would it be >> possible to >> - Stop one control node, reinstall it with Ubuntu (Victoria), make >> sure all UIDs/GIDs are the same as before (we use ceph as shared >> storage) so the mounted CephFS still works between the nodes (same for >> the nova live-migration). > live migration does not work due to the qemu emulator paths beign > differnt between ubuntu and centos > you may also have issues with the centos vm xmls reference selinux > where as apparmor is used as teh security > context on ubuntu. > > those issses should not matter for cold migration as the xml is > regenerated on teh destination host not the > souce host as it is with live migration. > > so if you can spawn a vm on the ubuntu node form a centos contol > plane you should be able to cold migrate provide the ssh keys are > exchanged. > > by the way while it proably does work nova has never offially > supproted using cephfs or any other shared block/file system for the > instance > state path other then NFS. > > i know that some operators have got cephfs and glusterfs to work but > that has never offically been supported or tested by the nova team. > >> - Repeat for other control nodes. >> >> Would those mixed control nodes work together? I will try it anyway in >> a test environment, but I wanted to ask if anybody has tried this >> approach. I'd appreciate any insights. > as i said yes in principal it should work provided > rabbit/mariadb/galeara are happy > > at the openstack level while untested we woudl expect openstack to > work provided its the same version in both cases. > > usign something like kolla-ansible wehre you can use the same > contaienr image on both os would help as that way you are > sure there is no mismathc. with that said that is not a toplogy they > test/supprot but is one they can deploy. > so you would need to test this. >> >> Thanks, >> Eugen >> >> >> Zitat von Massimo Sgaravatto : >> >> > Ok, thanks a lot >> > If cold migration is supposed to work between hosts with different >> > operating systems, we are fine >> > Cheers, Massimo >> > >> > On Tue, Dec 6, 2022 at 10:48 AM Sean Mooney wrote: >> > >> > > On Tue, 2022-12-06 at 10:34 +0100, Dmitriy Rabotyagov wrote: >> > > > Hi Massimo, >> > > > >> > > > Assuming you have manual installation (not using any deployment >> > > > projects), I have several comments on your plan. >> > > > >> > > > 1. I've missed when you're going to upgrade Nova/Neutron on computes. >> > > > As you should not create a gap in OpenStack versions between >> > > > controllers and computes since nova-scheduler has a requirement on RPC >> > > > version computes will be using. Or, you must define the rpc version >> > > > explicitly in config to have older computes (but it's not really a >> > > > suggested way). >> > > > 2. Also once you do db sync, your second controller might misbehave >> > > > (as some fields could be renamed or new tables must be used), so you >> > > > will need to disable it from accepting requests until syncing >> > > > openstack version as well. If you're not going to upgrade it until >> > > > getting first one to Yoga - it should be disabled all the time until >> > > > you get Y services running on it. >> > > > 3. It's totally fine to run multi-distro setup. For computes the only >> > > > thing that can go wrong is live migrations, and that depends on >> > > > libvirt/qemu versions. I'm not sure if CentOS 8 Stream have compatible >> > > > version with Ubuntu 22.04 for live migrations to work though, but if >> > > > you care about them (I guess you do if you want to migrate workloads >> > > > semalessly) - you'd better check. But my guess would be that CentOS 8 >> > > > Stream should have compatible versions with Ubuntu 20.04 - still needs >> > > > deeper checking. >> > > the live migration issue is a know limiation >> > > basically it wont work across distro today because the qemu >> emulator path >> > > is distro specific and we do not pass that back form the >> destinatino to the >> > > source so libvirt will try and boot the vm referncign a binary that does >> > > not exist >> > > im sure you could propaly solve that with a symlink or similar. >> > > if you did the next issue you would hit is we dont normally allwo live >> > > mgration >> > > form a newer qemu/libvirt verson to an older one >> > > >> > > with all that said cold migration shoudl work fine and wihtine any one >> > > host os live migration >> > > will work. you could proably use host aggreates or simialr to enforece >> > > that if needed but >> > > cold migration is the best way to move the workloads form >> hypervior hosts >> > > with different distros. >> > > >> > > > >> > > > ??, 6 ???. 2022 ?. ? 09:40, Massimo Sgaravatto < >> > > massimo.sgaravatto at gmail.com>: >> > > > > >> > > > > Any comments on these questions ? >> > > > > Thanks, Massimo >> > > > > >> > > > > On Fri, Dec 2, 2022 at 5:02 PM Massimo Sgaravatto < >> > > massimo.sgaravatto at gmail.com> wrote: >> > > > > > >> > > > > > Dear all >> > > > > > >> > > > > > >> > > > > > >> > > > > > Dear all >> > > > > > >> > > > > > We are now running an OpenStack deployment: Yoga on CentOS8Stream. >> > > > > > >> > > > > > We are now thinking about a possible migration to Ubuntu >> for several >> > > reasons in particular: >> > > > > > >> > > > > > a- 5 years support for both the Operating System and OpenStack >> > > (considering LTS releases) >> > > > > > b- Possibility do do a clean update between two Ubuntu >> LTS releases >> > > > > > c- Easier procedure (also because of b) for fast forward updates >> > > (this is what we use to do) >> > > > > > >> > > > > > Considering the latter item, my understanding is that an >> update from >> > > Ubuntu 20.04 Ussuri to Ubuntu 22.04 Yoga could be done in the following >> > > > > > way (we have two controller nodes and n compute nodes): >> > > > > > >> > > > > > - Update of first controller node from Ubuntu 20.04 >> Ussuri to Ubuntu >> > > 20.04 Victoria (update OpenStack packages + dbsync) >> > > > > > - Update of first controller node from Ubuntu 20.04 Victoria to >> > > Ubuntu 20.04 Wallaby (update OpenStack packages + dbsync) >> > > > > > - Update of first controller node from Ubuntu 20.04 Wallaby to >> > > Ubuntu 20.04 Xena (update OpenStack packages + dbsync) >> > > > > > - Update of first controller node from Ubuntu 20.04 Xena to Ubuntu >> > > 20.04 Yoga (update OpenStack packages + dbsync) >> > > > > > - Update of first controller node from Ubuntu 20.04 Yoga to Ubuntu >> > > 22.04 Yoga (update Ubuntu packages) >> > > > > > - Update of second controller node from Ubuntu 20.04 Ussuri to >> > > Ubuntu 22.04 Yoga (update OpenStack and Ubuntu packages) >> > > > > > - Update of the compute nodes from Ubuntu 20.04 Ussuri to Ubuntu >> > > 22.04 Yoga (update OpenStack and Ubuntu packages) >> > > > > > >> > > > > > >> > > > > > We would do the same when migrating from Ubuntu 22.04 >> Yoga to Ubuntu >> > > 24.04 and the OpenStack xyz release (where xyz >> > > > > > is the LTS release used in Ubuntu 24.04) >> > > > > > >> > > > > > Is this supposed to work or am I missing something ? >> > > > > > >> > > > > > If we decide to migrate to Ubuntu, the first step would be the >> > > reinstallation with Ubuntu 22.04/Yoga of each node >> > > > > > currently running CentOS8 stream/Yoga. >> > > > > > I suppose there are no problems having in the same OpenStack >> > > installation nodes running the same >> > > > > > Openstack version but different operating systems, or am I wrong ? >> > > > > > >> > > > > > Thanks, Massimo >> > > > > > >> > > > >> > > >> > > >> >> >> >> From arnaud.morin at gmail.com Mon Dec 12 09:00:50 2022 From: arnaud.morin at gmail.com (Arnaud Morin) Date: Mon, 12 Dec 2022 09:00:50 +0000 Subject: [neutron][ironic] Distributed routers and SNAT Message-ID: Hello team, When using router in DVR (+ HA), we end-up having the router on all computes where needed. So far, this is nice. We want to introduce Ironic baremetal servers, with a private network access. DVR won't apply on such baremetal servers, and we know floating IP are not going to work. Anyway, we were thinking that SNAT part would be OK. After doing few tests, we noticed that all distributed routers are answering to ARP and ICMP, thus creating duplicates in the network. $ arping -c1 192.168.43.1 ARPING 192.168.43.1 60 bytes from fa:16:3f:67:97:6a (192.168.43.1): index=0 time=634.700 usec 60 bytes from fa:16:3f:dc:67:91 (192.168.43.1): index=1 time=750.298 usec --- 192.168.43.1 statistics --- 1 packets transmitted, 2 packets received, 0% unanswered (1 extra) Is there anything possible on neutron side to prevent this? FYI, I did a comparison with routers in centralized mode (+ HA). In that situation, keepalived is putting the qr-xxx interface down in qrouter namespace. In distributed mode, keepalives is running in snat- namespace and cannot manage the router interface. Any help / tip would be appreciated. Thanks! Arnaud. From josephine.seifert at secustack.com Mon Dec 12 09:07:42 2022 From: josephine.seifert at secustack.com (Josephine Seifert) Date: Mon, 12 Dec 2022 10:07:42 +0100 Subject: [Image Encryption] No meetings until next year Message-ID: Hi, due to an injury and upcoming vacation, the next meeting for the image encrpyption team will be on Monday the 9th of Jannuary. greetings and happy holidays Luzi From ralonsoh at redhat.com Mon Dec 12 09:11:35 2022 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Mon, 12 Dec 2022 10:11:35 +0100 Subject: [neutron][ironic] Distributed routers and SNAT In-Reply-To: References: Message-ID: Hello Arnaud: You said "all distributed routers are answering to ARP and ICMP, thus creating duplicates in the network". To what IP addresses are the DVR routers replying? Regards. On Mon, Dec 12, 2022 at 10:01 AM Arnaud Morin wrote: > Hello team, > > When using router in DVR (+ HA), we end-up having the router on all > computes where needed. > > So far, this is nice. > > We want to introduce Ironic baremetal servers, with a private network > access. > DVR won't apply on such baremetal servers, and we know floating IP are > not going to work. > > Anyway, we were thinking that SNAT part would be OK. > After doing few tests, we noticed that all distributed routers are > answering to ARP and ICMP, thus creating duplicates in the network. > > $ arping -c1 192.168.43.1 > ARPING 192.168.43.1 > 60 bytes from fa:16:3f:67:97:6a (192.168.43.1): index=0 time=634.700 usec > 60 bytes from fa:16:3f:dc:67:91 (192.168.43.1): index=1 time=750.298 usec > > --- 192.168.43.1 statistics --- > 1 packets transmitted, 2 packets received, 0% unanswered (1 extra) > > > > Is there anything possible on neutron side to prevent this? > > > FYI, I did a comparison with routers in centralized mode (+ HA). > In that situation, keepalived is putting the qr-xxx interface down in > qrouter namespace. > In distributed mode, keepalives is running in snat- namespace and cannot > manage the router interface. > > Any help / tip would be appreciated. > > Thanks! > > Arnaud. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amonster369 at gmail.com Mon Dec 12 09:19:19 2022 From: amonster369 at gmail.com (A Monster) Date: Mon, 12 Dec 2022 10:19:19 +0100 Subject: Kolla-ansible [nova | Ensure RabbitMQ users exist] task error Message-ID: I'm trying to deploy openstack xena on centos stream 8 on 2 controller nodes, 8 compute and a storage cluster, using kolla-ansible but I keep getting the following error, any workaround I could do to avoid this problem ? thank you. TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************************************* skipping: [Controller-02] => (item=None) skipping: [Controller-02] FAILED - RETRYING: nova | Ensure RabbitMQ users exist (5 retries left). FAILED - RETRYING: nova | Ensure RabbitMQ users exist (4 retries left). FAILED - RETRYING: nova | Ensure RabbitMQ users exist (3 retries left). FAILED - RETRYING: nova | Ensure RabbitMQ users exist (2 retries left). FAILED - RETRYING: nova | Ensure RabbitMQ users exist (1 retries left). failed: [Controller-01 -> Controller-01] (item=None) => {"attempts": 5, "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} fatal: [Controller-01 -> {{ service_rabbitmq_delegate_host }}]: FAILED! => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} -------------- next part -------------- An HTML attachment was scrubbed... URL: From Danny.Webb at thehutgroup.com Mon Dec 12 09:24:45 2022 From: Danny.Webb at thehutgroup.com (Danny Webb) Date: Mon, 12 Dec 2022 09:24:45 +0000 Subject: auto switch of glance-api containers to other controllers In-Reply-To: References: Message-ID: Kolla only uses a single controller if you use file based backend by default. If you want to continue using the file backend, you can use something like an NFS share to share images between controllers and then change the glance_file_datadir_volume parameter which will result in the api being deployed on all controllers ________________________________ From: pradeep Sent: 12 December 2022 04:53 To: Dmitriy Rabotyagov Cc: openstack-discuss Subject: Re: auto switch of glance-api containers to other controllers CAUTION: This email originates from outside THG ________________________________ Any help is highly appreciated. Thank you! On Fri, 9 Dec 2022 at 21:46, pradeep > wrote: Hi Dmitriy, Thanks for the response. Sorry, I forgot to mention that i used kolla ansible for deployment. As per kolla docs, glance-api container runs only in one controller. So i am looking from kolla perspective if it can automatically switch it over other available controllers. Regards Pradeep On Fri, 9 Dec 2022 at 14:51, Dmitriy Rabotyagov > wrote: Hi, I'm not sure if you used any deployment tool for your environment, but usually for such failover there's a load balancer in conjunction with VRRP or Anycast, that able to detect when controller1 is down and forward traffic to another controller. As example, OpenStack-Ansible as default option installs haproxy to each controller and keepalived to implement VRRP and Virtual IPs. So when glance is down on controller1, haproxy will detect that and forward traffic to other controllers. If controller1 is down as a whole, keepalived will detect that and raise VIP on controller2, so all client traffic will go there. Since controller2 also have haproxy, it will pass traffic to available backends based on the source IP. ??, 9 ???. 2022 ?., 09:06 pradeep >: Hello All, I understand that glance-api container run only one controller always. I have tried to check if the glance-api container to switch over to the next available controller by means of rebooting and stopping the containers, but nothing happened. Is there a way to make sure that glance-api container switches to other controllers if controller 1 is not available. Thanks for you help Regards Pradeep -- ----------------------- Regards Pradeep Kumar -- ----------------------- Regards Pradeep Kumar Danny Webb Principal OpenStack Engineer Danny.Webb at thehutgroup.com [THG Ingenuity Logo] www.thg.com [https://i.imgur.com/wbpVRW6.png] [https://i.imgur.com/c3040tr.png] Danny Webb Principal OpenStack Engineer The Hut Group Tel: Email: Danny.Webb at thehutgroup.com For the purposes of this email, the "company" means The Hut Group Limited, a company registered in England and Wales (company number 6539496) whose registered office is at Fifth Floor, Voyager House, Chicago Avenue, Manchester Airport, M90 3DQ and/or any of its respective subsidiaries. Confidentiality Notice This e-mail is confidential and intended for the use of the named recipient only. If you are not the intended recipient please notify us by telephone immediately on +44(0)1606 811888 or return it to us by e-mail. Please then delete it from your system and note that any use, dissemination, forwarding, printing or copying is strictly prohibited. Any views or opinions are solely those of the author and do not necessarily represent those of the company. Encryptions and Viruses Please note that this e-mail and any attachments have not been encrypted. They may therefore be liable to be compromised. Please also note that it is your responsibility to scan this e-mail and any attachments for viruses. We do not, to the extent permitted by law, accept any liability (whether in contract, negligence or otherwise) for any virus infection and/or external compromise of security and/or confidentiality in relation to transmissions sent by e-mail. Monitoring Activity and use of the company's systems is monitored to secure its effective use and operation and for other lawful business purposes. Communications using these systems will also be monitored and may be recorded to secure effective use and operation and for other lawful business purposes. hgvyjuv -------------- next part -------------- An HTML attachment was scrubbed... URL: From arnaud.morin at gmail.com Mon Dec 12 09:30:42 2022 From: arnaud.morin at gmail.com (Arnaud Morin) Date: Mon, 12 Dec 2022 09:30:42 +0000 Subject: [neutron][ironic] Distributed routers and SNAT In-Reply-To: References: Message-ID: Hello, My subnet is: 192.168.43.0/24 My router is: 192.168.43.1 My ironic server is: 192.168.43.43 When I do a ping against router from server: $ ping -c5 192.168.43.1 PING 192.168.43.1 (192.168.43.1) 56(84) bytes of data. 64 bytes from 192.168.43.1: icmp_seq=1 ttl=64 time=0.458 ms 64 bytes from 192.168.43.1: icmp_seq=1 ttl=64 time=0.899 ms (DUP!) 64 bytes from 192.168.43.1: icmp_seq=2 ttl=64 time=0.372 ms 64 bytes from 192.168.43.1: icmp_seq=2 ttl=64 time=0.399 ms (DUP!) 64 bytes from 192.168.43.1: icmp_seq=3 ttl=64 time=0.484 ms 64 bytes from 192.168.43.1: icmp_seq=3 ttl=64 time=0.485 ms (DUP!) 64 bytes from 192.168.43.1: icmp_seq=4 ttl=64 time=0.411 ms 64 bytes from 192.168.43.1: icmp_seq=4 ttl=64 time=0.411 ms (DUP!) 64 bytes from 192.168.43.1: icmp_seq=5 ttl=64 time=0.299 ms --- 192.168.43.1 ping statistics --- 5 packets transmitted, 5 received, +4 duplicates, 0% packet loss, time 4101ms rtt min/avg/max/mdev = 0.299/0.468/0.899/0.161 ms We can see the DUP! which are coming from the 2 SNAT nodes that I have (I am using max_l3_agents_per_router=2). Cheers On 12.12.22 - 10:11, Rodolfo Alonso Hernandez wrote: > Hello Arnaud: > > You said "all distributed routers are answering to ARP and ICMP, thus > creating duplicates in the network". To what IP addresses are the DVR > routers replying? > > Regards. > > > On Mon, Dec 12, 2022 at 10:01 AM Arnaud Morin > wrote: > > > Hello team, > > > > When using router in DVR (+ HA), we end-up having the router on all > > computes where needed. > > > > So far, this is nice. > > > > We want to introduce Ironic baremetal servers, with a private network > > access. > > DVR won't apply on such baremetal servers, and we know floating IP are > > not going to work. > > > > Anyway, we were thinking that SNAT part would be OK. > > After doing few tests, we noticed that all distributed routers are > > answering to ARP and ICMP, thus creating duplicates in the network. > > > > $ arping -c1 192.168.43.1 > > ARPING 192.168.43.1 > > 60 bytes from fa:16:3f:67:97:6a (192.168.43.1): index=0 time=634.700 usec > > 60 bytes from fa:16:3f:dc:67:91 (192.168.43.1): index=1 time=750.298 usec > > > > --- 192.168.43.1 statistics --- > > 1 packets transmitted, 2 packets received, 0% unanswered (1 extra) > > > > > > > > Is there anything possible on neutron side to prevent this? > > > > > > FYI, I did a comparison with routers in centralized mode (+ HA). > > In that situation, keepalived is putting the qr-xxx interface down in > > qrouter namespace. > > In distributed mode, keepalives is running in snat- namespace and cannot > > manage the router interface. > > > > Any help / tip would be appreciated. > > > > Thanks! > > > > Arnaud. > > > > From Danny.Webb at thehutgroup.com Mon Dec 12 09:49:33 2022 From: Danny.Webb at thehutgroup.com (Danny Webb) Date: Mon, 12 Dec 2022 09:49:33 +0000 Subject: Kolla-ansible [nova | Ensure RabbitMQ users exist] task error In-Reply-To: References: Message-ID: Just a quick note, you can't do 2 controllers with kolla. You need an odd number for quorom purposes. ________________________________ From: A Monster Sent: 12 December 2022 09:19 To: openstack-discuss Subject: Kolla-ansible [nova | Ensure RabbitMQ users exist] task error CAUTION: This email originates from outside THG ________________________________ I'm trying to deploy openstack xena on centos stream 8 on 2 controller nodes, 8 compute and a storage cluster, using kolla-ansible but I keep getting the following error, any workaround I could do to avoid this problem ? thank you. TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************************************* skipping: [Controller-02] => (item=None) skipping: [Controller-02] FAILED - RETRYING: nova | Ensure RabbitMQ users exist (5 retries left). FAILED - RETRYING: nova | Ensure RabbitMQ users exist (4 retries left). FAILED - RETRYING: nova | Ensure RabbitMQ users exist (3 retries left). FAILED - RETRYING: nova | Ensure RabbitMQ users exist (2 retries left). FAILED - RETRYING: nova | Ensure RabbitMQ users exist (1 retries left). failed: [Controller-01 -> Controller-01] (item=None) => {"attempts": 5, "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} fatal: [Controller-01 -> {{ service_rabbitmq_delegate_host }}]: FAILED! => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} Danny Webb Principal OpenStack Engineer Danny.Webb at thehutgroup.com [THG Ingenuity Logo] www.thg.com [https://i.imgur.com/wbpVRW6.png] [https://i.imgur.com/c3040tr.png] Danny Webb Principal OpenStack Engineer The Hut Group Tel: Email: Danny.Webb at thehutgroup.com For the purposes of this email, the "company" means The Hut Group Limited, a company registered in England and Wales (company number 6539496) whose registered office is at Fifth Floor, Voyager House, Chicago Avenue, Manchester Airport, M90 3DQ and/or any of its respective subsidiaries. Confidentiality Notice This e-mail is confidential and intended for the use of the named recipient only. If you are not the intended recipient please notify us by telephone immediately on +44(0)1606 811888 or return it to us by e-mail. Please then delete it from your system and note that any use, dissemination, forwarding, printing or copying is strictly prohibited. Any views or opinions are solely those of the author and do not necessarily represent those of the company. Encryptions and Viruses Please note that this e-mail and any attachments have not been encrypted. They may therefore be liable to be compromised. Please also note that it is your responsibility to scan this e-mail and any attachments for viruses. We do not, to the extent permitted by law, accept any liability (whether in contract, negligence or otherwise) for any virus infection and/or external compromise of security and/or confidentiality in relation to transmissions sent by e-mail. Monitoring Activity and use of the company's systems is monitored to secure its effective use and operation and for other lawful business purposes. Communications using these systems will also be monitored and may be recorded to secure effective use and operation and for other lawful business purposes. hgvyjuv -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Mon Dec 12 09:55:38 2022 From: zigo at debian.org (Thomas Goirand) Date: Mon, 12 Dec 2022 10:55:38 +0100 Subject: [all] many Python 3.11 failures in OpenStack components In-Reply-To: <46d96098-67ce-5ea1-6d7a-26e9103594f4@debian.org> References: <46d96098-67ce-5ea1-6d7a-26e9103594f4@debian.org> Message-ID: <686cd927-1efe-ea15-6dd8-be63b5ff13fb@debian.org> FYI, only these are remaining for me: On 12/5/22 16:41, Thomas Goirand wrote: > networking-mlnx: > https://bugs.debian.org/1021924 > FTBFS test failures: sqlalchemy.exc.InvalidRequestError: A transaction > is already begun on this Session. > > python-novaclient: > https://bugs.debian.org/1025110 > needs update for python3.11: conflicting subparser For novaclient, a similar fix as for cinderclient should be applied: https://review.opendev.org/c/openstack/python-cinderclient/+/851467 Can someone from the Nova team take care of it? As the mlnx driver isn't critical, I'd say Zed in Debian Bookworm is now Python 3.11 ready! :) Thanks for everyone that helped fixing the bugs. Cheers, Thomas Goirand (zigo) From fungi at yuggoth.org Mon Dec 12 12:46:15 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 12 Dec 2022 12:46:15 +0000 Subject: [Image Encryption] No meetings until next year In-Reply-To: References: Message-ID: <20221212124615.r2j77g7fx4t2dfo7@yuggoth.org> On 2022-12-12 10:07:42 +0100 (+0100), Josephine Seifert wrote: > due to an injury and upcoming vacation, the next meeting for the > image encrpyption team will be on Monday the 9th of Jannuary. Oh no, I hope the injury doesn't interfere with you enjoying your time off. Get well soon! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From jean-francois.taltavull at elca.ch Mon Dec 12 13:00:46 2022 From: jean-francois.taltavull at elca.ch (=?utf-8?B?VGFsdGF2dWxsIEplYW4tRnJhbsOnb2lz?=) Date: Mon, 12 Dec 2022 13:00:46 +0000 Subject: [openstack-ansible] Environment variable overflow on Ubuntu 20, 04 In-Reply-To: <093afa5e-3a14-ccbf-2e3c-2f46bf439450@rd.bbc.co.uk> References: <093afa5e-3a14-ccbf-2e3c-2f46bf439450@rd.bbc.co.uk> Message-ID: Hi Jonathan, I understand your opinion about /etc/profile.d. I will test "deployment_environment_variables". Thanks for helping ! JF > -----Original Message----- > From: Jonathan Rosser > Sent: mercredi, 7 d?cembre 2022 13:44 > To: openstack-discuss at lists.openstack.org > Subject: Re: [openstack-ansible] Environment variable overflow on Ubuntu 20, > 04 > > > > EXTERNAL MESSAGE - This email comes from outside ELCA companies. > > Hi JF, > > We have some clear documentation for this case here > https://docs.openstack.org/openstack-ansible/latest/user/limited- > connectivity/index.html > and a discussion of the pros and cons of making system-wide proxy > configuration. > > Once the deployment reaches a certain size it is necessary to use transient > "deployment_environment_variables" instead of persistent > "global_environment_variables" to manage no_proxy, both in terms of the > length of the no_proxy environment variable but also the need to manage the > contents of that variable across all hosts / containers and running services for > any changes made to the control plane. > > From experience I can say that it is much much easier to selectively enable the > use of a proxy during ansible tasks with deployment_environment_variables > than it is to prevent the use of global proxy configuration through no_proxy in > the very large number of cases that you do not want it (basically the whole > runtime of your openstack services, loadbalancer, .....). > > Regarding the addition of an extra script in /etc/profile.d, I would not be in favor > of adding this as we already have a very reliable way to deploy behind an http > proxy using deployment_environment_variables. > There is great benefit in not leaving any "residual" proxy configuration on the > hosts or containers. > > Hopefully this is helpful, > Jonathan. > > On 07/12/2022 11:44, Taltavull Jean-Fran?ois wrote: > > Hello, > > > > The 'lxc_container_create' role appends the content of the > 'global_environment_variables' OSA variable to '/etc/environment', which is > used by the library 'libpam'. > > > > On Ubuntu 20.04 and 'libpam-runtime' library version 1.3.1, this works fine > unless one of the environment variables content size exceeds 1024 char. In this > case, the variable content is truncated. In our OpenStack deployment, this is the > variable 'no_proxy' that overflows. > > > > This limit has been raised to 8192 with 'libpam-runtime' version 1.4 but there > is no backport available in Ubuntu 20.04 repositories. > > > > So what would you think of the following workaround: instead of using > '/etc/environment' to store the global variables, the role would create a shell > script in '/etc/profile.d' thus avoiding the variable truncation issue related to the > 'libpam' library ? > > > > This works on Ubuntu, Debian and CentOS hosts and containers. > > > > Cheers, > > JF > > > > From haleyb.dev at gmail.com Mon Dec 12 14:19:01 2022 From: haleyb.dev at gmail.com (Brian Haley) Date: Mon, 12 Dec 2022 09:19:01 -0500 Subject: [neutron][ironic] Distributed routers and SNAT In-Reply-To: References: Message-ID: Hi Arnaud, Are you using agent_mode=dvr_snat on computes? That is unsupported: https://review.opendev.org/c/openstack/neutron/+/801503 -Brian On 12/12/22 4:30 AM, Arnaud Morin wrote: > Hello, > > My subnet is: 192.168.43.0/24 > > My router is: 192.168.43.1 > > My ironic server is: 192.168.43.43 > > When I do a ping against router from server: > $ ping -c5 192.168.43.1 > PING 192.168.43.1 (192.168.43.1) 56(84) bytes of data. > 64 bytes from 192.168.43.1: icmp_seq=1 ttl=64 time=0.458 ms > 64 bytes from 192.168.43.1: icmp_seq=1 ttl=64 time=0.899 ms (DUP!) > 64 bytes from 192.168.43.1: icmp_seq=2 ttl=64 time=0.372 ms > 64 bytes from 192.168.43.1: icmp_seq=2 ttl=64 time=0.399 ms (DUP!) > 64 bytes from 192.168.43.1: icmp_seq=3 ttl=64 time=0.484 ms > 64 bytes from 192.168.43.1: icmp_seq=3 ttl=64 time=0.485 ms (DUP!) > 64 bytes from 192.168.43.1: icmp_seq=4 ttl=64 time=0.411 ms > 64 bytes from 192.168.43.1: icmp_seq=4 ttl=64 time=0.411 ms (DUP!) > 64 bytes from 192.168.43.1: icmp_seq=5 ttl=64 time=0.299 ms > > --- 192.168.43.1 ping statistics --- > 5 packets transmitted, 5 received, +4 duplicates, 0% packet loss, time > 4101ms > rtt min/avg/max/mdev = 0.299/0.468/0.899/0.161 ms > > > > > We can see the DUP! which are coming from the 2 SNAT nodes that I have > (I am using max_l3_agents_per_router=2). > > > > Cheers > > > On 12.12.22 - 10:11, Rodolfo Alonso Hernandez wrote: >> Hello Arnaud: >> >> You said "all distributed routers are answering to ARP and ICMP, thus >> creating duplicates in the network". To what IP addresses are the DVR >> routers replying? >> >> Regards. >> >> >> On Mon, Dec 12, 2022 at 10:01 AM Arnaud Morin >> wrote: >> >>> Hello team, >>> >>> When using router in DVR (+ HA), we end-up having the router on all >>> computes where needed. >>> >>> So far, this is nice. >>> >>> We want to introduce Ironic baremetal servers, with a private network >>> access. >>> DVR won't apply on such baremetal servers, and we know floating IP are >>> not going to work. >>> >>> Anyway, we were thinking that SNAT part would be OK. >>> After doing few tests, we noticed that all distributed routers are >>> answering to ARP and ICMP, thus creating duplicates in the network. >>> >>> $ arping -c1 192.168.43.1 >>> ARPING 192.168.43.1 >>> 60 bytes from fa:16:3f:67:97:6a (192.168.43.1): index=0 time=634.700 usec >>> 60 bytes from fa:16:3f:dc:67:91 (192.168.43.1): index=1 time=750.298 usec >>> >>> --- 192.168.43.1 statistics --- >>> 1 packets transmitted, 2 packets received, 0% unanswered (1 extra) >>> >>> >>> >>> Is there anything possible on neutron side to prevent this? >>> >>> >>> FYI, I did a comparison with routers in centralized mode (+ HA). >>> In that situation, keepalived is putting the qr-xxx interface down in >>> qrouter namespace. >>> In distributed mode, keepalives is running in snat- namespace and cannot >>> manage the router interface. >>> >>> Any help / tip would be appreciated. >>> >>> Thanks! >>> >>> Arnaud. >>> >>> > From arnaud.morin at gmail.com Mon Dec 12 14:33:08 2022 From: arnaud.morin at gmail.com (Arnaud Morin) Date: Mon, 12 Dec 2022 14:33:08 +0000 Subject: [neutron][ironic] Distributed routers and SNAT In-Reply-To: References: Message-ID: Hello, I am not, on computes I am using agent_mode=dvr on network nodes, I am using agent_mode=dvr_snat Note that the computes routers are also answering as soon an instance lives on it (or a dhcp agent hosting the network). Arnaud On 12.12.22 - 09:19, Brian Haley wrote: > Hi Arnaud, > > Are you using agent_mode=dvr_snat on computes? That is unsupported: > > https://review.opendev.org/c/openstack/neutron/+/801503 > > -Brian > > On 12/12/22 4:30 AM, Arnaud Morin wrote: > > Hello, > > > > My subnet is: 192.168.43.0/24 > > > > My router is: 192.168.43.1 > > > > My ironic server is: 192.168.43.43 > > > > When I do a ping against router from server: > > $ ping -c5 192.168.43.1 > > PING 192.168.43.1 (192.168.43.1) 56(84) bytes of data. > > 64 bytes from 192.168.43.1: icmp_seq=1 ttl=64 time=0.458 ms > > 64 bytes from 192.168.43.1: icmp_seq=1 ttl=64 time=0.899 ms (DUP!) > > 64 bytes from 192.168.43.1: icmp_seq=2 ttl=64 time=0.372 ms > > 64 bytes from 192.168.43.1: icmp_seq=2 ttl=64 time=0.399 ms (DUP!) > > 64 bytes from 192.168.43.1: icmp_seq=3 ttl=64 time=0.484 ms > > 64 bytes from 192.168.43.1: icmp_seq=3 ttl=64 time=0.485 ms (DUP!) > > 64 bytes from 192.168.43.1: icmp_seq=4 ttl=64 time=0.411 ms > > 64 bytes from 192.168.43.1: icmp_seq=4 ttl=64 time=0.411 ms (DUP!) > > 64 bytes from 192.168.43.1: icmp_seq=5 ttl=64 time=0.299 ms > > > > --- 192.168.43.1 ping statistics --- > > 5 packets transmitted, 5 received, +4 duplicates, 0% packet loss, time > > 4101ms > > rtt min/avg/max/mdev = 0.299/0.468/0.899/0.161 ms > > > > > > > > > > We can see the DUP! which are coming from the 2 SNAT nodes that I have > > (I am using max_l3_agents_per_router=2). > > > > > > > > Cheers > > > > > > On 12.12.22 - 10:11, Rodolfo Alonso Hernandez wrote: > > > Hello Arnaud: > > > > > > You said "all distributed routers are answering to ARP and ICMP, thus > > > creating duplicates in the network". To what IP addresses are the DVR > > > routers replying? > > > > > > Regards. > > > > > > > > > On Mon, Dec 12, 2022 at 10:01 AM Arnaud Morin > > > wrote: > > > > > > > Hello team, > > > > > > > > When using router in DVR (+ HA), we end-up having the router on all > > > > computes where needed. > > > > > > > > So far, this is nice. > > > > > > > > We want to introduce Ironic baremetal servers, with a private network > > > > access. > > > > DVR won't apply on such baremetal servers, and we know floating IP are > > > > not going to work. > > > > > > > > Anyway, we were thinking that SNAT part would be OK. > > > > After doing few tests, we noticed that all distributed routers are > > > > answering to ARP and ICMP, thus creating duplicates in the network. > > > > > > > > $ arping -c1 192.168.43.1 > > > > ARPING 192.168.43.1 > > > > 60 bytes from fa:16:3f:67:97:6a (192.168.43.1): index=0 time=634.700 usec > > > > 60 bytes from fa:16:3f:dc:67:91 (192.168.43.1): index=1 time=750.298 usec > > > > > > > > --- 192.168.43.1 statistics --- > > > > 1 packets transmitted, 2 packets received, 0% unanswered (1 extra) > > > > > > > > > > > > > > > > Is there anything possible on neutron side to prevent this? > > > > > > > > > > > > FYI, I did a comparison with routers in centralized mode (+ HA). > > > > In that situation, keepalived is putting the qr-xxx interface down in > > > > qrouter namespace. > > > > In distributed mode, keepalives is running in snat- namespace and cannot > > > > manage the router interface. > > > > > > > > Any help / tip would be appreciated. > > > > > > > > Thanks! > > > > > > > > Arnaud. > > > > > > > > > > From juliaashleykreger at gmail.com Mon Dec 12 14:46:57 2022 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 12 Dec 2022 06:46:57 -0800 Subject: [neutron][ironic] Distributed routers and SNAT In-Reply-To: References: Message-ID: So for clarification, just so we're all on the same page. You have dedicated network nodes, which are running the agent, and the bare metal nodes are obviously wired into them on the same logical network, https://bugs.launchpad.net/neutron/+bug/1934666 refers only to on compute nodes, which seems different from this configuration. On Mon, Dec 12, 2022 at 6:36 AM Arnaud Morin wrote: > Hello, > > I am not, on computes I am using agent_mode=dvr > on network nodes, I am using agent_mode=dvr_snat > > Note that the computes routers are also answering as soon an instance > lives on it (or a dhcp agent hosting the network). > > Arnaud > > On 12.12.22 - 09:19, Brian Haley wrote: > > Hi Arnaud, > > > > Are you using agent_mode=dvr_snat on computes? That is unsupported: > > > > https://review.opendev.org/c/openstack/neutron/+/801503 > > > > -Brian > > > > On 12/12/22 4:30 AM, Arnaud Morin wrote: > > > Hello, > > > > > > My subnet is: 192.168.43.0/24 > > > > > > My router is: 192.168.43.1 > > > > > > My ironic server is: 192.168.43.43 > > > > > > When I do a ping against router from server: > > > $ ping -c5 192.168.43.1 > > > PING 192.168.43.1 (192.168.43.1) 56(84) bytes of data. > > > 64 bytes from 192.168.43.1: icmp_seq=1 ttl=64 time=0.458 ms > > > 64 bytes from 192.168.43.1: icmp_seq=1 ttl=64 time=0.899 ms (DUP!) > > > 64 bytes from 192.168.43.1: icmp_seq=2 ttl=64 time=0.372 ms > > > 64 bytes from 192.168.43.1: icmp_seq=2 ttl=64 time=0.399 ms (DUP!) > > > 64 bytes from 192.168.43.1: icmp_seq=3 ttl=64 time=0.484 ms > > > 64 bytes from 192.168.43.1: icmp_seq=3 ttl=64 time=0.485 ms (DUP!) > > > 64 bytes from 192.168.43.1: icmp_seq=4 ttl=64 time=0.411 ms > > > 64 bytes from 192.168.43.1: icmp_seq=4 ttl=64 time=0.411 ms (DUP!) > > > 64 bytes from 192.168.43.1: icmp_seq=5 ttl=64 time=0.299 ms > > > > > > --- 192.168.43.1 ping statistics --- > > > 5 packets transmitted, 5 received, +4 duplicates, 0% packet loss, time > > > 4101ms > > > rtt min/avg/max/mdev = 0.299/0.468/0.899/0.161 ms > > > > > > > > > > > > > > > We can see the DUP! which are coming from the 2 SNAT nodes that I have > > > (I am using max_l3_agents_per_router=2). > > > > > > > > > > > > Cheers > > > > > > > > > On 12.12.22 - 10:11, Rodolfo Alonso Hernandez wrote: > > > > Hello Arnaud: > > > > > > > > You said "all distributed routers are answering to ARP and ICMP, thus > > > > creating duplicates in the network". To what IP addresses are the DVR > > > > routers replying? > > > > > > > > Regards. > > > > > > > > > > > > On Mon, Dec 12, 2022 at 10:01 AM Arnaud Morin < > arnaud.morin at gmail.com> > > > > wrote: > > > > > > > > > Hello team, > > > > > > > > > > When using router in DVR (+ HA), we end-up having the router on all > > > > > computes where needed. > > > > > > > > > > So far, this is nice. > > > > > > > > > > We want to introduce Ironic baremetal servers, with a private > network > > > > > access. > > > > > DVR won't apply on such baremetal servers, and we know floating IP > are > > > > > not going to work. > > > > > > > > > > Anyway, we were thinking that SNAT part would be OK. > > > > > After doing few tests, we noticed that all distributed routers are > > > > > answering to ARP and ICMP, thus creating duplicates in the network. > > > > > > > > > > $ arping -c1 192.168.43.1 > > > > > ARPING 192.168.43.1 > > > > > 60 bytes from fa:16:3f:67:97:6a (192.168.43.1): index=0 > time=634.700 usec > > > > > 60 bytes from fa:16:3f:dc:67:91 (192.168.43.1): index=1 > time=750.298 usec > > > > > > > > > > --- 192.168.43.1 statistics --- > > > > > 1 packets transmitted, 2 packets received, 0% unanswered (1 > extra) > > > > > > > > > > > > > > > > > > > > Is there anything possible on neutron side to prevent this? > > > > > > > > > > > > > > > FYI, I did a comparison with routers in centralized mode (+ HA). > > > > > In that situation, keepalived is putting the qr-xxx interface down > in > > > > > qrouter namespace. > > > > > In distributed mode, keepalives is running in snat- namespace and > cannot > > > > > manage the router interface. > > > > > > > > > > Any help / tip would be appreciated. > > > > > > > > > > Thanks! > > > > > > > > > > Arnaud. > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rishat.azizov at gmail.com Sat Dec 10 11:55:55 2022 From: rishat.azizov at gmail.com (=?UTF-8?B?0KDQuNGI0LDRgiDQkNC30LjQt9C+0LI=?=) Date: Sat, 10 Dec 2022 17:55:55 +0600 Subject: [cinder] Problem with restoring purestorage volume backup In-Reply-To: References: Message-ID: Hello, Thanks. We use cinder 18.2.0-1 Wallaby. Can we use this patch? Or does it need to be improved? ??, 10 ???. 2022 ?. ? 17:54, ????? ?????? : > Hello, > > Thanks. We use cinder 18.2.0-1 Wallaby. Can we use this patch? Or does it > need to be improved? > > ??, 26 ???. 2022 ?. ? 23:12, Sofia Enriquez : > >> Hello, >> This is a major bug in Cinder that has been reported. Please check >> https://launchpad.net/bugs/1895035. >> The bug has a fix proposed to master to haven't merged yet [2]. >> I'll mention it again in next week's Cinder meeting. >> Regards, >> Sofia >> [2] https://review.opendev.org/c/openstack/cinder/+/750782 >> >> On Wed, Oct 26, 2022 at 9:45 AM ????? ?????? >> wrote: >> >>> Hello! >>> >>> We have a problem with cinder-backup with cephbackup driver and volume >>> on purestorage. When backing up purestorage volume with cinder-backup with >>> cephbackup driver it creates in ceph pool and everything is ok. But >>> when we try to restore the backup, it is not restored with an error >>> "rbd.ImageNotFound" in the screenshot attached to this email. This happens >>> because the original image is not in rbd, it is in purestorage. It is >>> not clear why the cinder is trying to look for a disk in the ceph. Could >>> you please help with this? >>> >>> Thanks. Regards. >>> >> >> >> -- >> >> Sof?a Enriquez >> >> she/her >> >> Software Engineer >> >> Red Hat PnT >> >> IRC: @enriquetaso >> @RedHat Red Hat >> Red Hat >> >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From 2292613444 at qq.com Sun Dec 11 04:01:27 2022 From: 2292613444 at qq.com (=?gb18030?B?zt7K/bXE0MfH8g==?=) Date: Sun, 11 Dec 2022 12:01:27 +0800 Subject: the zun server (zun-api) in controller host error(s) Message-ID: iAfter I followed the official documentation and fixed a 'pip3' error, it still failed to run successfully. This is an error reported by the zun api. Other components work normally. If you need my profile, it is attached root at controller:~# systemctl status zun-api ? zun-api.service - OpenStack Container Service API      Loaded: loaded (/etc/systemd/system/zun-api.service; enabled; vendor preset: enabled)      Active: failed (Result: exit-code) since Sun 2022-12-11 11:43:04 CST; 1min 38s ago     Process: 1017 ExecStart=/usr/local/bin/zun-api (code=exited, status=1/FAILURE)    Main PID: 1017 (code=exited, status=1/FAILURE) Dec 11 11:43:04 controller zun-api[1017]:     from zun import objects Dec 11 11:43:04 controller zun-api[1017]:   File "/usr/local/lib/python3.8/dist-packages/zun/objects/__init__.py", line 13, in From arnaud.morin at gmail.com Mon Dec 12 15:33:03 2022 From: arnaud.morin at gmail.com (Arnaud Morin) Date: Mon, 12 Dec 2022 15:33:03 +0000 Subject: [neutron][ironic] Distributed routers and SNAT In-Reply-To: References: Message-ID: Yes, network nodes and baremetal nodes are on the same physical network. I want the baremetal to use the neutron routers as SNAT gateways, just like a regular instance. Within an hypervisor, the instance is having a small qrouter- namespace (DVR) which is acting as local router before forwarding the traffic to the network (SNAT) node (this is done in OVS with openflow rules). >From a baremetal perspective, I dont have this small DVR, so I am reaching the router on the SNAT nodes, which are both answering. On 12.12.22 - 06:46, Julia Kreger wrote: > So for clarification, just so we're all on the same page. You > have dedicated network nodes, which are running the agent, and the bare > metal nodes are obviously wired into them on the same logical network, > > https://bugs.launchpad.net/neutron/+bug/1934666 refers only to on compute > nodes, which seems different from this configuration. > > On Mon, Dec 12, 2022 at 6:36 AM Arnaud Morin wrote: > > > Hello, > > > > I am not, on computes I am using agent_mode=dvr > > on network nodes, I am using agent_mode=dvr_snat > > > > Note that the computes routers are also answering as soon an instance > > lives on it (or a dhcp agent hosting the network). > > > > Arnaud > > > > On 12.12.22 - 09:19, Brian Haley wrote: > > > Hi Arnaud, > > > > > > Are you using agent_mode=dvr_snat on computes? That is unsupported: > > > > > > https://review.opendev.org/c/openstack/neutron/+/801503 > > > > > > -Brian > > > > > > On 12/12/22 4:30 AM, Arnaud Morin wrote: > > > > Hello, > > > > > > > > My subnet is: 192.168.43.0/24 > > > > > > > > My router is: 192.168.43.1 > > > > > > > > My ironic server is: 192.168.43.43 > > > > > > > > When I do a ping against router from server: > > > > $ ping -c5 192.168.43.1 > > > > PING 192.168.43.1 (192.168.43.1) 56(84) bytes of data. > > > > 64 bytes from 192.168.43.1: icmp_seq=1 ttl=64 time=0.458 ms > > > > 64 bytes from 192.168.43.1: icmp_seq=1 ttl=64 time=0.899 ms (DUP!) > > > > 64 bytes from 192.168.43.1: icmp_seq=2 ttl=64 time=0.372 ms > > > > 64 bytes from 192.168.43.1: icmp_seq=2 ttl=64 time=0.399 ms (DUP!) > > > > 64 bytes from 192.168.43.1: icmp_seq=3 ttl=64 time=0.484 ms > > > > 64 bytes from 192.168.43.1: icmp_seq=3 ttl=64 time=0.485 ms (DUP!) > > > > 64 bytes from 192.168.43.1: icmp_seq=4 ttl=64 time=0.411 ms > > > > 64 bytes from 192.168.43.1: icmp_seq=4 ttl=64 time=0.411 ms (DUP!) > > > > 64 bytes from 192.168.43.1: icmp_seq=5 ttl=64 time=0.299 ms > > > > > > > > --- 192.168.43.1 ping statistics --- > > > > 5 packets transmitted, 5 received, +4 duplicates, 0% packet loss, time > > > > 4101ms > > > > rtt min/avg/max/mdev = 0.299/0.468/0.899/0.161 ms > > > > > > > > > > > > > > > > > > > > We can see the DUP! which are coming from the 2 SNAT nodes that I have > > > > (I am using max_l3_agents_per_router=2). > > > > > > > > > > > > > > > > Cheers > > > > > > > > > > > > On 12.12.22 - 10:11, Rodolfo Alonso Hernandez wrote: > > > > > Hello Arnaud: > > > > > > > > > > You said "all distributed routers are answering to ARP and ICMP, thus > > > > > creating duplicates in the network". To what IP addresses are the DVR > > > > > routers replying? > > > > > > > > > > Regards. > > > > > > > > > > > > > > > On Mon, Dec 12, 2022 at 10:01 AM Arnaud Morin < > > arnaud.morin at gmail.com> > > > > > wrote: > > > > > > > > > > > Hello team, > > > > > > > > > > > > When using router in DVR (+ HA), we end-up having the router on all > > > > > > computes where needed. > > > > > > > > > > > > So far, this is nice. > > > > > > > > > > > > We want to introduce Ironic baremetal servers, with a private > > network > > > > > > access. > > > > > > DVR won't apply on such baremetal servers, and we know floating IP > > are > > > > > > not going to work. > > > > > > > > > > > > Anyway, we were thinking that SNAT part would be OK. > > > > > > After doing few tests, we noticed that all distributed routers are > > > > > > answering to ARP and ICMP, thus creating duplicates in the network. > > > > > > > > > > > > $ arping -c1 192.168.43.1 > > > > > > ARPING 192.168.43.1 > > > > > > 60 bytes from fa:16:3f:67:97:6a (192.168.43.1): index=0 > > time=634.700 usec > > > > > > 60 bytes from fa:16:3f:dc:67:91 (192.168.43.1): index=1 > > time=750.298 usec > > > > > > > > > > > > --- 192.168.43.1 statistics --- > > > > > > 1 packets transmitted, 2 packets received, 0% unanswered (1 > > extra) > > > > > > > > > > > > > > > > > > > > > > > > Is there anything possible on neutron side to prevent this? > > > > > > > > > > > > > > > > > > FYI, I did a comparison with routers in centralized mode (+ HA). > > > > > > In that situation, keepalived is putting the qr-xxx interface down > > in > > > > > > qrouter namespace. > > > > > > In distributed mode, keepalives is running in snat- namespace and > > cannot > > > > > > manage the router interface. > > > > > > > > > > > > Any help / tip would be appreciated. > > > > > > > > > > > > Thanks! > > > > > > > > > > > > Arnaud. > > > > > > > > > > > > > > > > > > > > From haleyb.dev at gmail.com Mon Dec 12 15:35:05 2022 From: haleyb.dev at gmail.com (Brian Haley) Date: Mon, 12 Dec 2022 10:35:05 -0500 Subject: [neutron] Bug deputy report for week of December 5th Message-ID: Hi, I was Neutron bug deputy last week. Below is a short summary about the reported bugs. -Brian Critical bugs ------------- * https://bugs.launchpad.net/neutron/+bug/1999249 - neutron-tempest-plugin jobs timing out on nested-virt nodes - Needs owner Medium bugs ----------- * https://bugs.launchpad.net/neutron/+bug/1998749 - [L3][DVR][vlan] east-west traffic flooding on physical bridges - Needs owner * https://bugs.launchpad.net/neutron/+bug/1998751 - [L3][DVR][vlan] HA VRRP traffic flooding on physical bridges on compute nodes - Needs owner * https://bugs.launchpad.net/neutron/+bug/1998952 - The static route removes the local route from qrouter-namespace - Adding/removing route same as attached interface subnet causes issue - Possible user error, the neutron API will let you "break" things - https://bugzilla.redhat.com/show_bug.cgi?id=1836870 has link to note - Needs discussion/owner * https://bugs.launchpad.net/neutron/+bug/1999154 - ovs/ovn source deployment broken with ovs_branch=master - https://review.opendev.org/c/openstack/neutron/+/867004 Low bugs -------- * https://bugs.launchpad.net/neutron/+bug/1998833 - [OVN] "MetadataAgentOvnSbIdl" should implement "post_connect" method - https://review.opendev.org/c/openstack/neutron/+/866601 Misc bugs --------- * https://bugs.launchpad.net/neutron/+bug/1999170 - Package names missing in neutron openvswitch setup guide - Added note that this information is in the install guide, closed * https://bugs.launchpad.net/neutron/+bug/1999209 - [ovn]dnat_adn_snat will not changed if updated fixed_ips of internal port - Asked for more information * https://bugs.launchpad.net/neutron/+bug/1999238 * https://bugs.launchpad.net/neutron/+bug/1999239 * https://bugs.launchpad.net/neutron/+bug/1999240 - Functional test failures - Related to https://review.opendev.org/c/openstack/neutron/+/867075 From arnaud.morin at gmail.com Mon Dec 12 15:41:00 2022 From: arnaud.morin at gmail.com (Arnaud Morin) Date: Mon, 12 Dec 2022 15:41:00 +0000 Subject: [neutron][ironic] Distributed routers and SNAT In-Reply-To: References: Message-ID: Here is a small image that explain the issue. Both network nodes are hosting the same router, in HA. One if MASTER, other is BACKUP. BUT, both of them are still accessible/answering. Cheers On 12.12.22 - 15:33, Arnaud Morin wrote: > Yes, network nodes and baremetal nodes are on the same physical network. > I want the baremetal to use the neutron routers as SNAT gateways, > just like a regular instance. > > Within an hypervisor, the instance is having a small qrouter- namespace > (DVR) which is acting as local router before forwarding the traffic to > the network (SNAT) node (this is done in OVS with openflow rules). > > From a baremetal perspective, I dont have this small DVR, so I am > reaching the router on the SNAT nodes, which are both answering. > > > > On 12.12.22 - 06:46, Julia Kreger wrote: > > So for clarification, just so we're all on the same page. You > > have dedicated network nodes, which are running the agent, and the bare > > metal nodes are obviously wired into them on the same logical network, > > > > https://bugs.launchpad.net/neutron/+bug/1934666 refers only to on compute > > nodes, which seems different from this configuration. > > > > On Mon, Dec 12, 2022 at 6:36 AM Arnaud Morin wrote: > > > > > Hello, > > > > > > I am not, on computes I am using agent_mode=dvr > > > on network nodes, I am using agent_mode=dvr_snat > > > > > > Note that the computes routers are also answering as soon an instance > > > lives on it (or a dhcp agent hosting the network). > > > > > > Arnaud > > > > > > On 12.12.22 - 09:19, Brian Haley wrote: > > > > Hi Arnaud, > > > > > > > > Are you using agent_mode=dvr_snat on computes? That is unsupported: > > > > > > > > https://review.opendev.org/c/openstack/neutron/+/801503 > > > > > > > > -Brian > > > > > > > > On 12/12/22 4:30 AM, Arnaud Morin wrote: > > > > > Hello, > > > > > > > > > > My subnet is: 192.168.43.0/24 > > > > > > > > > > My router is: 192.168.43.1 > > > > > > > > > > My ironic server is: 192.168.43.43 > > > > > > > > > > When I do a ping against router from server: > > > > > $ ping -c5 192.168.43.1 > > > > > PING 192.168.43.1 (192.168.43.1) 56(84) bytes of data. > > > > > 64 bytes from 192.168.43.1: icmp_seq=1 ttl=64 time=0.458 ms > > > > > 64 bytes from 192.168.43.1: icmp_seq=1 ttl=64 time=0.899 ms (DUP!) > > > > > 64 bytes from 192.168.43.1: icmp_seq=2 ttl=64 time=0.372 ms > > > > > 64 bytes from 192.168.43.1: icmp_seq=2 ttl=64 time=0.399 ms (DUP!) > > > > > 64 bytes from 192.168.43.1: icmp_seq=3 ttl=64 time=0.484 ms > > > > > 64 bytes from 192.168.43.1: icmp_seq=3 ttl=64 time=0.485 ms (DUP!) > > > > > 64 bytes from 192.168.43.1: icmp_seq=4 ttl=64 time=0.411 ms > > > > > 64 bytes from 192.168.43.1: icmp_seq=4 ttl=64 time=0.411 ms (DUP!) > > > > > 64 bytes from 192.168.43.1: icmp_seq=5 ttl=64 time=0.299 ms > > > > > > > > > > --- 192.168.43.1 ping statistics --- > > > > > 5 packets transmitted, 5 received, +4 duplicates, 0% packet loss, time > > > > > 4101ms > > > > > rtt min/avg/max/mdev = 0.299/0.468/0.899/0.161 ms > > > > > > > > > > > > > > > > > > > > > > > > > We can see the DUP! which are coming from the 2 SNAT nodes that I have > > > > > (I am using max_l3_agents_per_router=2). > > > > > > > > > > > > > > > > > > > > Cheers > > > > > > > > > > > > > > > On 12.12.22 - 10:11, Rodolfo Alonso Hernandez wrote: > > > > > > Hello Arnaud: > > > > > > > > > > > > You said "all distributed routers are answering to ARP and ICMP, thus > > > > > > creating duplicates in the network". To what IP addresses are the DVR > > > > > > routers replying? > > > > > > > > > > > > Regards. > > > > > > > > > > > > > > > > > > On Mon, Dec 12, 2022 at 10:01 AM Arnaud Morin < > > > arnaud.morin at gmail.com> > > > > > > wrote: > > > > > > > > > > > > > Hello team, > > > > > > > > > > > > > > When using router in DVR (+ HA), we end-up having the router on all > > > > > > > computes where needed. > > > > > > > > > > > > > > So far, this is nice. > > > > > > > > > > > > > > We want to introduce Ironic baremetal servers, with a private > > > network > > > > > > > access. > > > > > > > DVR won't apply on such baremetal servers, and we know floating IP > > > are > > > > > > > not going to work. > > > > > > > > > > > > > > Anyway, we were thinking that SNAT part would be OK. > > > > > > > After doing few tests, we noticed that all distributed routers are > > > > > > > answering to ARP and ICMP, thus creating duplicates in the network. > > > > > > > > > > > > > > $ arping -c1 192.168.43.1 > > > > > > > ARPING 192.168.43.1 > > > > > > > 60 bytes from fa:16:3f:67:97:6a (192.168.43.1): index=0 > > > time=634.700 usec > > > > > > > 60 bytes from fa:16:3f:dc:67:91 (192.168.43.1): index=1 > > > time=750.298 usec > > > > > > > > > > > > > > --- 192.168.43.1 statistics --- > > > > > > > 1 packets transmitted, 2 packets received, 0% unanswered (1 > > > extra) > > > > > > > > > > > > > > > > > > > > > > > > > > > > Is there anything possible on neutron side to prevent this? > > > > > > > > > > > > > > > > > > > > > FYI, I did a comparison with routers in centralized mode (+ HA). > > > > > > > In that situation, keepalived is putting the qr-xxx interface down > > > in > > > > > > > qrouter namespace. > > > > > > > In distributed mode, keepalives is running in snat- namespace and > > > cannot > > > > > > > manage the router interface. > > > > > > > > > > > > > > Any help / tip would be appreciated. > > > > > > > > > > > > > > Thanks! > > > > > > > > > > > > > > Arnaud. > > > > > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- A non-text attachment was scrubbed... Name: s_1670859430.png Type: image/png Size: 123653 bytes Desc: not available URL: From rdhasman at redhat.com Mon Dec 12 16:11:06 2022 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Mon, 12 Dec 2022 21:41:06 +0530 Subject: [cinder] Festival of XS reviews? Message-ID: Hello Argonauts, I've come to know informally that people might not be around during this time due to the holiday season. Based on this, I would like to ask the cinder team if we would like to have the festival of XS reviews this friday or not? Date: 16 December, 2022 Time: 1400-1600 UTC Please reply with your feedback and if we've enough people joining, we can continue with the session. Thanks Rajat Dhasmana -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephenfin at redhat.com Mon Dec 12 17:34:18 2022 From: stephenfin at redhat.com (Stephen Finucane) Date: Mon, 12 Dec 2022 17:34:18 +0000 Subject: [all] many Python 3.11 failures in OpenStack components In-Reply-To: <686cd927-1efe-ea15-6dd8-be63b5ff13fb@debian.org> References: <46d96098-67ce-5ea1-6d7a-26e9103594f4@debian.org> <686cd927-1efe-ea15-6dd8-be63b5ff13fb@debian.org> Message-ID: <052909812ef1ffd18eed50f4da18b24ce97ba934.camel@redhat.com> On Mon, 2022-12-12 at 10:55 +0100, Thomas Goirand wrote: > FYI, only these are remaining for me: > > On 12/5/22 16:41, Thomas Goirand wrote: > > networking-mlnx: > > https://bugs.debian.org/1021924 > > FTBFS test failures: sqlalchemy.exc.InvalidRequestError: A transaction > > is already begun on this Session. > > > > python-novaclient: > > https://bugs.debian.org/1025110 > > needs update for python3.11: conflicting subparser > > For novaclient, a similar fix as for cinderclient should be applied: > https://review.opendev.org/c/openstack/python-cinderclient/+/851467 > > Can someone from the Nova team take care of it? https://review.opendev.org/c/openstack/python-novaclient/+/867270 > > As the mlnx driver isn't critical, I'd say Zed in Debian Bookworm is now > Python 3.11 ready! :) > > Thanks for everyone that helped fixing the bugs. > > Cheers, > > Thomas Goirand (zigo) > > From gmann at ghanshyammann.com Mon Dec 12 19:40:15 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 12 Dec 2022 11:40:15 -0800 Subject: [neutron][devstack][qa] Dropping lib/neutron module In-Reply-To: <18352719.yBdmKBht2i@p1> References: <18352719.yBdmKBht2i@p1> Message-ID: <18507da28c1.d4408c17747031.603333610850051259@ghanshyammann.com> ---- On Mon, 28 Nov 2022 03:23:53 -0800 Slawek Kaplonski wrote --- > Hi, > > As You maybe know (or not as this was very long time ago) there are 2 modules to deploy Neutron in devstack: > * old one called lib/neutron-legacy > * new one called lib/neutron > > The problem is that new module lib/neutron was really never finished and used widely and still everyone is using (and should use) old one lib/neutron-legacy. > We discussed that few times during PTGs and we finally decided to drop "new" module lib/neutron and have then rename old "lib/neutron-legacy" to be "lib/neutron" again. > Decision was made because old module works fine and do its job, and as there is nobody who would like to finish really new module. Also having 2 modules from which "legacy" one is the only one which really works can be confusing and we want to avoid that confusion. > > So I proposed patches [1] and [2] to drop this unfinished module. I also proposed DNM patches for Neutron [3] and Tempest [4] to test that all jobs will work fine. But if You are maybe relaying on the Neutron modules from Devstack, please propose some test patch in Your project too and check if everything works fine for You. > > In patch [2] I didn't really removed "lib/neutron-legacy" yet because there are some projects which are sourcing that file in their devstack module. As [1] and [2] will be merged I will be proposing patches to change that for those projects but any help with that is welcome :) Thanks, Slawek to clean up this. Instead of removing lib/neutron and renaming lib/neutorn-legacy in two separate steps/patches, should not we do it in a single patch so that 1. project using lib/neutron (like octavia[1]) does not need any change 2. project using lib/neutron-legacy and lib/neutron both[2] does not need to change it twice (if 865015 takes time to merge and they need to remove lib/neutron usage first)? [1] https://opendev.org/openstack/octavia/src/commit/8d27bdb5462474dafc934d164f3a299bfca8dd89/devstack/upgrade/shutdown.sh#L13 [2] https://opendev.org/openstack/networking-generic-switch/src/commit/0ecd02aaa036311c67671717562db956bbc810aa/devstack/upgrade/upgrade.sh#L45-L46 -gmann > > [1] https://review.opendev.org/c/openstack/devstack/+/865014 > [2] https://review.opendev.org/c/openstack/devstack/+/865015 > [3] https://review.opendev.org/c/openstack/neutron/+/865822 > [4] https://review.opendev.org/c/openstack/tempest/+/865821 > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat > From gmann at ghanshyammann.com Mon Dec 12 19:48:32 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 12 Dec 2022 11:48:32 -0800 Subject: [all] many Python 3.11 failures in OpenStack components In-Reply-To: <686cd927-1efe-ea15-6dd8-be63b5ff13fb@debian.org> References: <46d96098-67ce-5ea1-6d7a-26e9103594f4@debian.org> <686cd927-1efe-ea15-6dd8-be63b5ff13fb@debian.org> Message-ID: <18507e1bbc4.11e67eeb4747305.719600654594675902@ghanshyammann.com> ---- On Mon, 12 Dec 2022 01:55:38 -0800 Thomas Goirand wrote --- > FYI, only these are remaining for me: > > On 12/5/22 16:41, Thomas Goirand wrote: > > networking-mlnx: > > https://bugs.debian.org/1021924 > > FTBFS test failures: sqlalchemy.exc.InvalidRequestError: A transaction > > is already begun on this Session. > > > > python-novaclient: > > https://bugs.debian.org/1025110 > > needs update for python3.11: conflicting subparser > > For novaclient, a similar fix as for cinderclient should be applied: > https://review.opendev.org/c/openstack/python-cinderclient/+/851467 > > Can someone from the Nova team take care of it? > > As the mlnx driver isn't critical, I'd say Zed in Debian Bookworm is now > Python 3.11 ready! :) Thanks, zigo for testing, once it is available we can add py3.11 non voting unit test job also in this cycle. -gmann > > Thanks for everyone that helped fixing the bugs. > > Cheers, > > Thomas Goirand (zigo) > > > From gmann at ghanshyammann.com Mon Dec 12 22:15:30 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 12 Dec 2022 14:15:30 -0800 Subject: [all][tc] Technical Committee next weekly meeting on 2022 Dec 14 at 1600 UTC Message-ID: <18508684b10.b16bee35751533.7881950644328973637@ghanshyammann.com> Hello Everyone, The technical Committee's next weekly meeting is scheduled for 2022 Dec 14, at 1600 UTC. If you would like to add topics for discussion, please add them to the below wiki page by Tuesday, Dec 13 at 2100 UTC. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting -gmann From xek at redhat.com Tue Dec 13 10:23:53 2022 From: xek at redhat.com (Grzegorz Grasza) Date: Tue, 13 Dec 2022 11:23:53 +0100 Subject: [barbican] Meeting cancelled today Message-ID: Hi all, I won't be able to host the Barbican meeting today. As there are no outstanding topics to discuss, I'm cancelling it. / Greg -------------- next part -------------- An HTML attachment was scrubbed... URL: From hemant.sonawane at itera.io Tue Dec 13 13:43:33 2022 From: hemant.sonawane at itera.io (Hemant Sonawane) Date: Tue, 13 Dec 2022 14:43:33 +0100 Subject: [Keystone] member role with access rule can not list hypervisors details Message-ID: Hello, Does application credentials created as member role with access rule need any special permissions to list hypervisors via cli? Here is the access rule i created for my application credentials [ { "path": "/v2.1/**", "method": "GET", "service": "compute" } ] +--------------------------------+---------+--------+-------------------+ | ID | Service | Method | Path | +--------------------------------+---------+--------+-------------------+ | f2417b29f8394281aaa088d7c01059 | compute | POST | /v2.1/hypervisors | | 0cacd63becc04db086cd070db978c5 | compute | POST | /v2.1/servers | | 366abc32faf543b9ab047a83eafe83 | compute | GET | /v2.1/servers | | 4a4258b6a37c49f3816b708efa179f | compute | GET | /v2.1/** | +--------------------------------+---------+--------+-------------------+ I tried to use other access rule too. After sourcing the credentials if i would like to list any of the openstack resources it simply does not work *openstack volume list openstack The request you have made requires authentication. (HTTP 401)openstack hypervisor list The request you have made requires authentication. (HTTP 401) (Request-ID: req-c4532458-5606-4c85-8f93-20e61f57e2c9)* i think this could be a bug in keystone or it might need any special permissions with access rule. Please help will be really appreciated. -- Thanks and Regards, Hemant Sonawane -------------- next part -------------- An HTML attachment was scrubbed... URL: From jay at gr-oss.io Tue Dec 13 17:04:16 2022 From: jay at gr-oss.io (Jay Faulkner) Date: Tue, 13 Dec 2022 09:04:16 -0800 Subject: [ironic] Bugfix branches being EOL'd first week of Jan, 2023 Message-ID: OpenStack Community and Operators, As documented in https://specs.openstack.org/openstack/ironic-specs/specs/approved/new-release-model.html, Ironic performs bugfix releases in the middle of a cycle to permit downstream packagers to more rapidly deliver features to standalone Ironic users. However, we've neglected as a project to cleanup or EOL any of these branches -- until now. Please take notice that during the first week in January, we will be EOL-ing all old, unsupported Ironic bugfix branches. This will be handled similarly to an EOL of a stable branch; we will create a tag -- e.g. for bugfix/x.y branch, we would tag bugfix-x.y-eol -- then remove the branch. These branches have been out of support for months and should not be in use in your Ironic clusters. If you are using any branches slated for retirement, please immediately upgrade to a supported Ironic version. A full listing of projects and branches impacted: ironic branches being retired bugfix/15.1 bugfix/15.2 bugfix/16.1 bugfix/16.2 bugfix/18.0 bugfix/20.0 ironic-python-agent branches being retired bugfix/6.2 bugfix/6.3 bugfix/6.5 bugfix/6.6 bugfix/8.0 bugfix/8.4 ironic-inspector branches being retired bugfix/10.2 bugfix/10.3 bugfix/10.5 bugfix/10.10 Thank you, Jay Faulkner Ironic PTL -------------- next part -------------- An HTML attachment was scrubbed... URL: From hemant.sonawane at itera.io Tue Dec 13 13:25:14 2022 From: hemant.sonawane at itera.io (Hemant Sonawane) Date: Tue, 13 Dec 2022 14:25:14 +0100 Subject: [cinder] volume_attachement entries are not getting deleted from DB In-Reply-To: References: Message-ID: Hello Rajat and Sofia, Sorry for the late response. Thanks to both of you. The logs I shared before are from nova only; we do not see anything apart from that in nova logs. Here are the full logs for your ready reference. 2022-12-07 10:52:16.189 23213 ERROR nova.volume.cinder [req-59de985e-9db2-4eb5-a82b-31c64f9758a9 14fd679363724aaf9e9f0503820c0b1e a95811c12f3449cf8730331fd3be1c69 - default default] Update attachment failed for attachment cab57e8a-9585-4ca5-9e16-2a75bbf16994. Error: Unable to update attachment.(Invalid volume: duplicate connectors detected on volume 7c0aed26-3ad0-4a0e-8733-4fcda12d97e1). (HTTP 500) (Request-ID: req-f6eca4ff-3275-4255-bb92-c5bdf49adb8d) Code: 500: cinderclient.exceptions.ClientException: Unable to update attachment.(Invalid volume: duplicate connectors detected on volume 7c0aed26-3ad0-4a0e-8733-4fcda12d97e1). (HTTP 500) (Request-ID: req-f6eca4ff-3275-4255-bb92-c5bdf49adb8d) 2022-12-07 10:52:16.214 23213 ERROR nova.compute.manager [req-59de985e-9db2-4eb5-a82b-31c64f9758a9 14fd679363724aaf9e9f0503820c0b1e a95811c12f3449cf8730331fd3be1c69 - default default] [instance: 2f0e87a1-c8a3-48ca-b273-9a2d4206c340] Instance failed block device setup: cinderclient.exceptions.ClientException: Unable to update attachment.(Invalid volume: duplicate connectors detected on volume 7c0aed26-3ad0-4a0e-8733-4fcda12d97e1). (HTTP 500) (Request-ID: req-f6eca4ff-3275-4255-bb92-c5bdf49adb8d) 2022-12-07 10:52:16.214 23213 ERROR nova.compute.manager [instance: 2f0e87a1-c8a3-48ca-b273-9a2d4206c340] Traceback (most recent call last): 2022-12-07 10:52:16.214 23213 ERROR nova.compute.manager [instance: 2f0e87a1-c8a3-48ca-b273-9a2d4206c340] File "/var/lib/openstack/lib/python3.8/site-packages/nova/compute/manager.py", line 1976, in _prep_block_device 2022-12-07 10:52:16.214 23213 ERROR nova.compute.manager [instance: 2f0e87a1-c8a3-48ca-b273-9a2d4206c340] driver_block_device.attach_block_devices( 2022-12-07 10:52:16.214 23213 ERROR nova.compute.manager [instance: 2f0e87a1-c8a3-48ca-b273-9a2d4206c340] File "/var/lib/openstack/lib/python3.8/site-packages/nova/virt/block_device.py", line 874, in attach_block_devices 2022-12-07 10:52:16.214 23213 ERROR nova.compute.manager [instance: 2f0e87a1-c8a3-48ca-b273-9a2d4206c340] _log_and_attach(device) 2022-12-07 10:52:16.214 23213 ERROR nova.compute.manager [instance: 2f0e87a1-c8a3-48ca-b273-9a2d4206c340] File "/var/lib/openstack/lib/python3.8/site-packages/nova/virt/block_device.py", line 871, in _log_and_attach 2022-12-07 10:52:16.214 23213 ERROR nova.compute.manager [instance: 2f0e87a1-c8a3-48ca-b273-9a2d4206c340] bdm.attach(*attach_args, **attach_kwargs) 2022-12-07 10:52:16.214 23213 ERROR nova.compute.manager [instance: 2f0e87a1-c8a3-48ca-b273-9a2d4206c340] File "/var/lib/openstack/lib/python3.8/site-packages/nova/virt/block_device.py", line 46, in wrapped 2022-12-07 10:52:16.214 23213 ERROR nova.compute.manager [instance: 2f0e87a1-c8a3-48ca-b273-9a2d4206c340] ret_val = method(obj, context, *args, **kwargs) 2022-12-07 10:52:16.214 23213 ERROR nova.compute.manager [instance: 2f0e87a1-c8a3-48ca-b273-9a2d4206c340] File "/var/lib/openstack/lib/python3.8/site-packages/nova/virt/block_device.py", line 672, in attach 2022-12-07 10:52:16.214 23213 ERROR nova.compute.manager [instance: 2f0e87a1-c8a3-48ca-b273-9a2d4206c340] self._do_attach(context, instance, volume, volume_api, 2022-12-07 10:52:16.214 23213 ERROR nova.compute.manager [instance: 2f0e87a1-c8a3-48ca-b273-9a2d4206c340] File "/var/lib/openstack/lib/python3.8/site-packages/nova/virt/block_device.py", line 657, in _do_attach 2022-12-07 10:52:16.214 23213 ERROR nova.compute.manager [instance: 2f0e87a1-c8a3-48ca-b273-9a2d4206c340] self._volume_attach(context, volume, connector, instance, 2022-12-07 10:52:16.214 23213 ERROR nova.compute.manager [instance: 2f0e87a1-c8a3-48ca-b273-9a2d4206c340] File "/var/lib/openstack/lib/python3.8/site-packages/nova/virt/block_device.py", line 571, in _volume_attach 2022-12-07 10:52:16.214 23213 ERROR nova.compute.manager [instance: 2f0e87a1-c8a3-48ca-b273-9a2d4206c340] connection_info = volume_api.attachment_update( 2022-12-07 10:52:16.214 23213 ERROR nova.compute.manager [instance: 2f0e87a1-c8a3-48ca-b273-9a2d4206c340] File "/var/lib/openstack/lib/python3.8/site-packages/nova/volume/cinder.py", line 396, in wrapper 2022-12-07 10:52:16.214 23213 ERROR nova.compute.manager [instance: 2f0e87a1-c8a3-48ca-b273-9a2d4206c340] res = method(self, ctx, *args, **kwargs) 2022-12-07 10:52:16.214 23213 ERROR nova.compute.manager [instance: 2f0e87a1-c8a3-48ca-b273-9a2d4206c340] File "/var/lib/openstack/lib/python3.8/site-packages/nova/volume/cinder.py", line 447, in wrapper 2022-12-07 10:52:16.214 23213 ERROR nova.compute.manager [instance: 2f0e87a1-c8a3-48ca-b273-9a2d4206c340] res = method(self, ctx, attachment_id, *args, **kwargs) 2022-12-07 10:52:16.214 23213 ERROR nova.compute.manager [instance: 2f0e87a1-c8a3-48ca-b273-9a2d4206c340] File "/var/lib/openstack/lib/python3.8/site-packages/nova/volume/cinder.py", line 875, in attachment_update 2022-12-07 10:52:16.214 23213 ERROR nova.compute.manager [instance: 2f0e87a1-c8a3-48ca-b273-9a2d4206c340] LOG.error('Update attachment failed for attachment ' 2022-12-07 10:52:16.214 23213 ERROR nova.compute.manager [instance: 2f0e87a1-c8a3-48ca-b273-9a2d4206c340] File "/var/lib/openstack/lib/python3.8/site-packages/oslo_utils/excutils.py", line 227, in __exit__ 2022-12-07 10:52:16.214 23213 ERROR nova.compute.manager [instance: 2f0e87a1-c8a3-48ca-b273-9a2d4206c340] self.force_reraise() 2022-12-07 10:52:16.214 23213 ERROR nova.compute.manager [instance: 2f0e87a1-c8a3-48ca-b273-9a2d4206c340] File "/var/lib/openstack/lib/python3.8/site-packages/oslo_utils/excutils.py", line 200, in force_reraise 2022-12-07 10:52:16.214 23213 ERROR nova.compute.manager [instance: 2f0e87a1-c8a3-48ca-b273-9a2d4206c340] raise self.value 2022-12-07 10:52:16.214 23213 ERROR nova.compute.manager [instance: 2f0e87a1-c8a3-48ca-b273-9a2d4206c340] File "/var/lib/openstack/lib/python3.8/site-packages/nova/volume/cinder.py", line 867, in attachment_update 2022-12-07 10:52:16.214 23213 ERROR nova.compute.manager [instance: 2f0e87a1-c8a3-48ca-b273-9a2d4206c340] attachment_ref = cinderclient( 2022-12-07 10:52:16.214 23213 ERROR nova.compute.manager [instance: 2f0e87a1-c8a3-48ca-b273-9a2d4206c340] File "/var/lib/openstack/lib/python3.8/site-packages/cinderclient/api_versions.py", line 423, in substitution 2022-12-07 10:52:16.214 23213 ERROR nova.compute.manager [instance: 2f0e87a1-c8a3-48ca-b273-9a2d4206c340] return method.func(obj, *args, **kwargs) 2022-12-07 10:52:16.214 23213 ERROR nova.compute.manager [instance: 2f0e87a1-c8a3-48ca-b273-9a2d4206c340] File "/var/lib/openstack/lib/python3.8/site-packages/cinderclient/v3/attachments.py", line 75, in update 2022-12-07 10:52:16.214 23213 ERROR nova.compute.manager [instance: 2f0e87a1-c8a3-48ca-b273-9a2d4206c340] resp = self._update('/attachments/%s' % id, body) 2022-12-07 10:52:16.214 23213 ERROR nova.compute.manager [instance: 2f0e87a1-c8a3-48ca-b273-9a2d4206c340] File "/var/lib/openstack/lib/python3.8/site-packages/cinderclient/base.py", line 312, in _update 2022-12-07 10:52:16.214 23213 ERROR nova.compute.manager [instance: 2f0e87a1-c8a3-48ca-b273-9a2d4206c340] resp, body = self.api.client.put(url, body=body, **kwargs) 2022-12-07 10:52:16.214 23213 ERROR nova.compute.manager [instance: 2f0e87a1-c8a3-48ca-b273-9a2d4206c340] File "/var/lib/openstack/lib/python3.8/site-packages/cinderclient/client.py", line 220, in put 2022-12-07 10:52:16.214 23213 ERROR nova.compute.manager [instance: 2f0e87a1-c8a3-48ca-b273-9a2d4206c340] return self._cs_request(url, 'PUT', **kwargs) 2022-12-07 10:52:16.214 23213 ERROR nova.compute.manager [instance: 2f0e87a1-c8a3-48ca-b273-9a2d4206c340] File "/var/lib/openstack/lib/python3.8/site-packages/cinderclient/client.py", line 205, in _cs_request 2022-12-07 10:52:16.214 23213 ERROR nova.compute.manager [instance: 2f0e87a1-c8a3-48ca-b273-9a2d4206c340] return self.request(url, method, **kwargs) 2022-12-07 10:52:16.214 23213 ERROR nova.compute.manager [instance: 2f0e87a1-c8a3-48ca-b273-9a2d4206c340] File "/var/lib/openstack/lib/python3.8/site-packages/cinderclient/client.py", line 191, in request 2022-12-07 10:52:16.214 23213 ERROR nova.compute.manager [instance: 2f0e87a1-c8a3-48ca-b273-9a2d4206c340] raise exceptions.from_response(resp, body) 2022-12-07 10:52:16.214 23213 ERROR nova.compute.manager [instance: 2f0e87a1-c8a3-48ca-b273-9a2d4206c340] cinderclient.exceptions.ClientException: Unable to update attachment.(Invalid volume: duplicate connectors detected on volume 7c0aed26-3ad0-4a0e-8733-4fcda12d97e1). (HTTP 500) (Request-ID: req-f6eca4ff-3275-4255-bb92-c5bdf49adb8d) 2022-12-07 10:52:16.214 23213 ERROR nova.compute.manager [instance: 2f0e87a1-c8a3-48ca-b273-9a2d4206c340] On Wed, 7 Dec 2022 at 16:48, Rajat Dhasmana wrote: > Hi Hemant, > > Sorry for getting back to this so late. > The problem you've described for instance actions like delete, resize or > shelve not deleting the cinder attachment entries. > I'm not sure about resize and shelve but deleting an instance should > delete the volume attachment entry. > I suggest you check nova logs for the calls it makes to cinder for > attachment delete operation and see if there are any errors related to it. > Ideally it should work but due to some issue, there might be failures. > If you find a failure in nova logs related to attachment calls, you can go > ahead to check cinder logs (api and volume) to see possible > issues why the attachment wasn't getting deleted on the cinder side. > > - > Rajat Dhasmana > > On Tue, Nov 29, 2022 at 2:07 PM Hemant Sonawane > wrote: > >> Hello Sofia, >> >> Thank you for taking it into consideration. Do let me know if you >> have any questions and updates on the same. >> >> On Mon, 28 Nov 2022 at 18:12, Sofia Enriquez wrote: >> >>> Hi Hemant, >>> >>> Thanks for reporting this issue on the bug tracker >>> https://bugs.launchpad.net/cinder/+bug/1998083 >>> >>> I did a quick search and no problems with shelving operations have been >>> reported for at least the last two years.I'll bring this bug to the cinder >>> bug meeting this week. >>> >>> Thanks >>> Sofia >>> >>> On Fri, Nov 25, 2022 at 1:15 PM Hemant Sonawane < >>> hemant.sonawane at itera.io> wrote: >>> >>>> Hi Rajat, >>>> It's not about deleting attachments entries but the normal operations >>>> from horizon or via cli does not work because of that. So it really needs >>>> to be fixed to perform resize, shelve unshelve operations. >>>> >>>> Here are the detailed attachment entries you can see for the shelved >>>> instance. >>>> >>>> >>>> >>>> +--------------------------------------+--------------------------------------+--------------------------+---------------------------------- >>>> *----+---------------+-----------------------------------------**??* >>>> >>>> *--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>> ??* >>>> >>>> *| id | volume_id >>>> | attached_host | instance_uuid | >>>> attach_status | connector ??* >>>> >>>> * >>>> >>>> | ??* >>>> >>>> >>>> *+--------------------------------------+--------------------------------------+--------------------------+--------------------------------------+---------------+-----------------------------------------??* >>>> >>>> *--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>> ??* >>>> >>>> *| 8daddacc-8fc8-4d2b-a738-d05deb20049f | >>>> 67ea3a39-78b8-4d04-a280-166acdc90b8a | nfv1compute43.nfv1.o2.cz >>>> | 9266a2d7-9721-4994-a6b5-6b3290862dc6 | >>>> attached | {"platform": "x86_64", "os_type": "linux??* >>>> >>>> *", "ip": "10.42.168.87", "host": "nfv1compute43.nfv1.o2.cz >>>> ", "multipath": false, "do_local_attach": >>>> false, "system uuid": "65917e4f-c8c4-a2af-ec11-fe353e13f4dd", "mountpoint": >>>> "/dev/vda"} | ??* >>>> >>>> *| d3278543-4920-42b7-b217-0858e986fcce | >>>> 67ea3a39-78b8-4d04-a280-166acdc90b8a | NULL | >>>> 9266a2d7-9721-4994-a6b5-6b3290862dc6 | reserved | NULL >>>> ??* >>>> >>>> * >>>> >>>> | ??* >>>> >>>> >>>> *+--------------------------------------+--------------------------------------+--------------------------+--------------------------------------+---------------+-----------------------------------------??* >>>> >>>> *--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>> ??* >>>> >>>> *2 rows in set (0.00 sec) * >>>> >>>> >>>> for e.g if I would like to unshelve this instance it wont work as it >>>> has a duplicate entry in cinder db for the attachment. So i have to delete >>>> it manually from db or via cli >>>> >>>> *root at master01:/home/hemant# cinder --os-volume-api-version 3.27 >>>> attachment-list --all | grep 67ea3a39-78b8-4d04-a280-166acdc90b8a >>>> ??* >>>> >>>> *| 8daddacc-8fc8-4d2b-a738-d05deb20049f | >>>> 67ea3a39-78b8-4d04-a280-166acdc90b8a | attached | >>>> 9266a2d7-9721-4994-a6b5-6b3290862dc6 | >>>> ??* >>>> >>>> *| d3278543-4920-42b7-b217-0858e986fcce | >>>> 67ea3a39-78b8-4d04-a280-166acdc90b8a** | reserved | >>>> 9266a2d7-9721-4994-a6b5-6b3290862dc6 |* >>>> >>>> *cinder --os-volume-api-version 3.27 >>>> attachment-delete 8daddacc-8fc8-4d2b-a738-d05deb20049f* >>>> >>>> this is the only choice I have if I would like to unshelve vm. But this >>>> is not a good approach for production envs. I hope you understand me. >>>> Please feel free to ask me anything if you don't understand. >>>> >>>> >>>> >>>> On Fri, 25 Nov 2022 at 13:20, Rajat Dhasmana >>>> wrote: >>>> >>>>> Hi Hemant, >>>>> >>>>> If your final goal is to delete the attachment entries in the cinder >>>>> DB, we have attachment APIs to perform these tasks. The command useful for >>>>> you is attachment list[1] and attachment delete[2]. >>>>> Make sure you pass the right microversion i.e. 3.27 to be able to >>>>> execute these operations. >>>>> >>>>> Eg: >>>>> cinder --os-volume-api-version 3.27 attachment-list >>>>> >>>>> [1] >>>>> https://docs.openstack.org/python-cinderclient/latest/cli/details.html#cinder-attachment-list >>>>> [2] >>>>> https://docs.openstack.org/python-cinderclient/latest/cli/details.html#cinder-attachment-delete >>>>> >>>>> On Fri, Nov 25, 2022 at 5:44 PM Hemant Sonawane < >>>>> hemant.sonawane at itera.io> wrote: >>>>> >>>>>> Hello >>>>>> I am using wallaby release openstack and having issues with cinder >>>>>> volumes as once I try to delete, resize or unshelve the shelved vms the >>>>>> volume_attachement entries do not get deleted in cinder db and therefore >>>>>> the above mentioned operations fail every time. I have to delete these >>>>>> volume_attachement entries manually then it works. Is there any way to fix >>>>>> this issue ? >>>>>> >>>>>> nova-compute logs: >>>>>> >>>>>> cinderclient.exceptions.ClientException: Unable to update >>>>>> attachment.(Invalid volume: duplicate connectors detected on volume >>>>>> >>>>>> Help will be really appreciated Thanks ! >>>>>> -- >>>>>> Thanks and Regards, >>>>>> >>>>>> Hemant Sonawane >>>>>> >>>>>> >>>> >>>> -- >>>> Thanks and Regards, >>>> >>>> Hemant Sonawane >>>> >>>> >>> >>> -- >>> >>> Sof?a Enriquez >>> >>> she/her >>> >>> Software Engineer >>> >>> Red Hat PnT >>> >>> IRC: @enriquetaso >>> @RedHat Red Hat >>> Red Hat >>> >>> >>> >>> >> >> -- >> Thanks and Regards, >> >> Hemant Sonawane >> >> -- Thanks and Regards, Hemant Sonawane -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Tue Dec 13 23:53:06 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 13 Dec 2022 15:53:06 -0800 Subject: [all][tc] Technical Committee next weekly meeting on 2022 Dec 14 at 1600 UTC In-Reply-To: <18508684b10.b16bee35751533.7881950644328973637@ghanshyammann.com> References: <18508684b10.b16bee35751533.7881950644328973637@ghanshyammann.com> Message-ID: <1850de80156.bf490cd7843209.7156042519539462444@ghanshyammann.com> Hello Everyone, Below is the agenda for the TC meeting scheduled on Dec 14 at 1600 UTC. Location: IRC OFTC network in the #openstack-tc channel * Roll call * Follow up on past action items * Gate health check * 2023.1 TC tracker checks: ** https://etherpad.opendev.org/p/tc-2023.1-tracker * Mistral situation ** Release team proposing it to mark its release deprecated *** https://review.opendev.org/c/openstack/governance/+/866562 ** New volunteers from OVHCloud to help maintain it *** https://lists.openstack.org/pipermail/openstack-discuss/2022-December/031421.html * Recurring tasks check ** Bare 'recheck' state *** https://etherpad.opendev.org/p/recheck-weekly-summary * Open Reviews ** https://review.opendev.org/q/projects:openstack/governance+is:open -gmann ---- On Mon, 12 Dec 2022 14:15:30 -0800 Ghanshyam Mann wrote --- > Hello Everyone, > > The technical Committee's next weekly meeting is scheduled for 2022 Dec 14, at 1600 UTC. > > If you would like to add topics for discussion, please add them to the below wiki page by > Tuesday, Dec 13 at 2100 UTC. > > https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting > > -gmann > > From sbauza at redhat.com Wed Dec 14 08:37:16 2022 From: sbauza at redhat.com (Sylvain Bauza) Date: Wed, 14 Dec 2022 09:37:16 +0100 Subject: [nova][placement] Spec review day on Dec 14th In-Reply-To: References: Message-ID: Le jeu. 8 d?c. 2022 ? 17:47, Sylvain Bauza a ?crit : > Hey folks, > > As agreed on a previous meeting [1] we will have a second Spec review day > on Wed Dec 14th. > Now you're aware, prepare your specs in advance so they're reviewable and > ideally please be around on that day in order to reply to any comments and > eventually propose a new revision, ideally the same day. > > Just a quick reminder, this is today ;-) Happy spec reviews everyone. > As a side note, we'll have an Implementation review day on Jan 10th which > will focus on reviewing feature changes related to accepted specs and we > also plan to review a few implementation patches by Dec 15th. > > Thanks, > -Sylvain > > [1] > https://meetings.opendev.org/meetings/nova/2022/nova.2022-11-29-16.00.log.html#l-102 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralonsoh at redhat.com Wed Dec 14 11:26:09 2022 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Wed, 14 Dec 2022 12:26:09 +0100 Subject: [neutron] Neutron meetings cancelled last week of December Message-ID: Hello Neutrinos: The Neutron meetings (team, CI and drivers) will be cancelled the last week of December. Enjoy your holidays! -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralonsoh at redhat.com Wed Dec 14 11:29:19 2022 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Wed, 14 Dec 2022 12:29:19 +0100 Subject: [neutron] Neutron drivers meeting Friday Dec 16 Message-ID: Hello Neutrinos: Please check the Neutron drivers meeting agenda: https://wiki.openstack.org/wiki/Meetings/NeutronDrivers. So far we have two topics to discuss: * https://bugs.launchpad.net/neutron/+bug/1998609 * https://bugs.launchpad.net/neutron/+bug/1998608 See you next Friday. -------------- next part -------------- An HTML attachment was scrubbed... URL: From senrique at redhat.com Wed Dec 14 13:31:54 2022 From: senrique at redhat.com (Sofia Enriquez) Date: Wed, 14 Dec 2022 13:31:54 +0000 Subject: [cinder] Bug Report from 12-14-2022 Message-ID: This is a bug report from 12-07-2022 to 12-14-2022. Agenda: https://etherpad.opendev.org/p/cinder-bug-squad-meeting ----------------------------------------------------------------------------------------- High - https://bugs.launchpad.net/cinder/+bug/1999252 "[RBD -> SolidFire] Cache mode was not updated after volume retype." Unassigned. Medium - https://bugs.launchpad.net/nova/+bug/1999125 "cinder retype volume raise exception with shutoff vm." Unassigned. Cheers, Sofia -- Sof?a Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Wed Dec 14 14:22:07 2022 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 14 Dec 2022 08:22:07 -0600 Subject: 2023 PTGs + Virtual Team Signup Message-ID: Hello Everyone! The next PTG on the calendar will be our usual virtual format and it will take place March 27 -31 right after the 2023.1 release. The project signup survey is already ready to go! If your team is interested in participating, sign them up here[1]! Registration is also open now[2] We have also heard many requests from the contributor community to bring back in person events and also heard that with global travel reduction, it would be better travel wise for the community at large to combine it with the OpenInfra Summit[3]. So, an in-person PTG will take place Wednesday and Thursday! June 14 -15th. OpenInfra Summit registration will be required to participate, and contributor pricing and travel support[4] will be available. We will also be planning another virtual PTG in the latter half of the year. The exact date is still being finalized. This will give you three opportunities to get your team together in 2023: two virtual and one in-person. - Virtual: March 27-31 - In- Person: June 14-15 collocated with the OpenInfra Summit - Virtual: TBD, but second half of the year As any project in our community, you are free to take advantage of all or some of the PTGs this coming year. I, for one, hope to see you and your project at as many as make sense for your team! To make sure we are as inclusive as possible I will continue to encourage team moderators to write summaries of their discussions to gather and promote after the events as well. Stay tuned for more info as we get closer, but the team signup survey[1] has opened for the first virtual PTG and is ready for you! -Kendall (diablo_rojo) [1] https://openinfrafoundation.formstack.com/forms/march2023_vptg_survey [2] https://openinfra-ptg.eventbrite.com [3]https://openinfra.dev/summit/vancouver-2023 [4] https://openinfrafoundation.formstack.com/forms/openinfra_tsp -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Wed Dec 14 14:44:40 2022 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 14 Dec 2022 09:44:40 -0500 Subject: [cinder] Festival of XS reviews? In-Reply-To: References: Message-ID: On 12/12/22 11:11 AM, Rajat Dhasmana wrote: > Hello Argonauts, > > I've come to know informally that people might not be around during this > time due to the holiday season. Based on this, I would like to ask the > cinder team if we would like to have the festival of XS reviews this > friday or not? > > Date: 16 December, 2022 > Time: 1400-1600 UTC > > Please reply with your feedback and if we've enough people joining, we > can continue with the session. We discussed this at today's cinder weekly meeting, and the consensus was that we should hold the Festival on Friday as usual. So: Date: 16 December, 2022 Time: 1400-1600 UTC Etherpad: https://etherpad.opendev.org/p/cinder-festival-of-reviews See you all there! > > Thanks > Rajat Dhasmana From hanguangyu2 at gmail.com Wed Dec 14 15:01:13 2022 From: hanguangyu2 at gmail.com (=?UTF-8?B?6Z+p5YWJ5a6H?=) Date: Wed, 14 Dec 2022 15:01:13 +0000 Subject: stackalytics is stop update, Message-ID: Hi, all I see stackalytics[1] is stop update. The page show "The data was last updated on 09 Nov 2022 12:44:22 UTC". Does anyone know why? And If you want to check the contribution statistics, is there still a way? Thank you for any help. Cheers, Han [1]: www.stackalytics.io From fungi at yuggoth.org Wed Dec 14 15:36:12 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 14 Dec 2022 15:36:12 +0000 Subject: stackalytics is stop update, In-Reply-To: References: Message-ID: <20221214153612.mexqmdzt6t5msk7l@yuggoth.org> On 2022-12-14 15:01:13 +0000 (+0000), ??? wrote: > I see stackalytics[1] is stop update. The page show "The data was > last updated on 09 Nov 2022 12:44:22 UTC". > > Does anyone know why? Hopefully the Stackalytics site administrators follow this mailing list (it's not run within the OpenDev Collaboratory or by the OpenInfra Foundation), so we'll have to wait for them to chime in. > And If you want to check the contribution statistics, is there > still a way? [...] The OpenInfra Foundation is contracting with a company called Bitergia to develop a contribution activity tracker at https://openstack.biterg.io/ though its affiliation data isn't complete yet. You can follow the discussion about that on the foundation mailing list: https://lists.openinfra.dev/pipermail/foundation/2022-October/003099.html -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From smooney at redhat.com Wed Dec 14 18:30:57 2022 From: smooney at redhat.com (Sean Mooney) Date: Wed, 14 Dec 2022 18:30:57 +0000 Subject: stackalytics is stop update, In-Reply-To: <20221214153612.mexqmdzt6t5msk7l@yuggoth.org> References: <20221214153612.mexqmdzt6t5msk7l@yuggoth.org> Message-ID: <13a1fb6c122002d4ac515bc01f6c4c0a384455e8.camel@redhat.com> On Wed, 2022-12-14 at 15:36 +0000, Jeremy Stanley wrote: > On 2022-12-14 15:01:13 +0000 (+0000), ??? wrote: > > I see stackalytics[1] is stop update. The page show "The data was > > last updated on 09 Nov 2022 12:44:22 UTC". > > > > Does anyone know why? > > Hopefully the Stackalytics site administrators follow this mailing > list (it's not run within the OpenDev Collaboratory or by the > OpenInfra Foundation), so we'll have to wait for them to chime in. > > > And If you want to check the contribution statistics, is there > > still a way? > [...] > > The OpenInfra Foundation is contracting with a company called > Bitergia to develop a contribution activity tracker at > https://openstack.biterg.io/ though its affiliation data isn't > complete yet. You can follow the discussion about that on the > foundation mailing list: > > https://lists.openinfra.dev/pipermail/foundation/2022-October/003099.html that looks interesting but its missing the review data form gerrit it seams to have mailing list infro which is nice but its only looking at git commits and lines of code is not a proxy or messure for contibutions in general. my perspective the primary metric for staclaitcs was reviews https://www.stackalytics.io/?module=nova-group https://www.stackalytics.io/report/contribution?module=nova-group&project_type=openstack&days=100 and the rbeak down of +/- 1/2 and a/w as well as the +% are imporant i hope that is planned to be included the stats in the github mirror cover most of the commit/git based stats already > From ces.eduardo98 at gmail.com Wed Dec 14 18:52:56 2022 From: ces.eduardo98 at gmail.com (Carlos Silva) Date: Wed, 14 Dec 2022 15:52:56 -0300 Subject: [manila][release] Proposing to EOL Stein In-Reply-To: References: Message-ID: Hello! Patches were proposed [2]. Please take a look and provide feedback if needed. [2] https://review.opendev.org/q/project:openstack/releases+eol+manila+status:open Thanks, carloss Em seg., 28 de nov. de 2022 ?s 18:29, Carlos Silva escreveu: > Hello! > > Recently in a weekly meeting we chatted about EOLing stable/stein and all > the attendants were found to be in favor of this [0]. As the procedures go > [1], we need to make it formal through a post in this mailing list, and see > if there are objections. > > In case of concerns or objections, please reach out through email or the > #openstack-manila IRC channel. > > This will impact all branched manila repositories (manila, > python-manilaclient and manila-ui). If there aren't objections or strong > concerns, I will be proposing the patches to EOL stable/stein within one > week. > > [0] > https://meetings.opendev.org/meetings/manila/2022/manila.2022-11-03-15.00.log.html#l-20 > [1] > https://docs.openstack.org/project-team-guide/stable-branches.html#end-of-life > > Thanks, > carloss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at gmail.com Wed Dec 14 19:02:09 2022 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Wed, 14 Dec 2022 20:02:09 +0100 Subject: stackalytics is stop update, In-Reply-To: <13a1fb6c122002d4ac515bc01f6c4c0a384455e8.camel@redhat.com> References: <20221214153612.mexqmdzt6t5msk7l@yuggoth.org> <13a1fb6c122002d4ac515bc01f6c4c0a384455e8.camel@redhat.com> Message-ID: I wonder if it would make sense for us to follow Kubernetes in terms of way of providing contribution metrics. As for data representation they use just grafana dashboard, like [1]. And there's analysis plugin [2] for Gerrit (that I guess we use as of today?). Though I'm not sure if analytics will expand Gerrit metrics (I guess not) in order just to use Prometheus exporter plugin[3] to gather them. But even if not, leveraging analytics API to get some simple exporter for Prometheus should be doable. Yes, that would require some time and effort for setting things up and I bet infra team is quite busy and already lack ppl. But in long term it might take not that much time for maintenance and give us some freedom comparing to vendor locks. [1] https://k8s.devstats.cncf.io/d/9/companies-table [2] https://gerrit.googlesource.com/plugins/analytics/ [3] https://gerrit.googlesource.com/plugins/metrics-reporter-prometheus/ ??, 14 ???. 2022 ?., 19:33 Sean Mooney : > On Wed, 2022-12-14 at 15:36 +0000, Jeremy Stanley wrote: > > On 2022-12-14 15:01:13 +0000 (+0000), ??? wrote: > > > I see stackalytics[1] is stop update. The page show "The data was > > > last updated on 09 Nov 2022 12:44:22 UTC". > > > > > > Does anyone know why? > > > > Hopefully the Stackalytics site administrators follow this mailing > > list (it's not run within the OpenDev Collaboratory or by the > > OpenInfra Foundation), so we'll have to wait for them to chime in. > > > > > And If you want to check the contribution statistics, is there > > > still a way? > > [...] > > > > The OpenInfra Foundation is contracting with a company called > > Bitergia to develop a contribution activity tracker at > > https://openstack.biterg.io/ though its affiliation data isn't > > complete yet. You can follow the discussion about that on the > > foundation mailing list: > > > > > https://lists.openinfra.dev/pipermail/foundation/2022-October/003099.html > > that looks interesting but its missing the review data form gerrit > it seams to have mailing list infro which is nice but its only looking at > git commits > and lines of code is not a proxy or messure for contibutions in general. > > > my perspective the primary metric for staclaitcs was reviews > https://www.stackalytics.io/?module=nova-group > > https://www.stackalytics.io/report/contribution?module=nova-group&project_type=openstack&days=100 > > and the rbeak down of +/- 1/2 and a/w as well as the +% are imporant > > i hope that is planned to be included > > the stats in the github mirror cover most of the commit/git based stats > already > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Dec 14 19:22:39 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 14 Dec 2022 19:22:39 +0000 Subject: stackalytics is stop update, In-Reply-To: <13a1fb6c122002d4ac515bc01f6c4c0a384455e8.camel@redhat.com> References: <20221214153612.mexqmdzt6t5msk7l@yuggoth.org> <13a1fb6c122002d4ac515bc01f6c4c0a384455e8.camel@redhat.com> Message-ID: <20221214192238.7j3c44hdslrql4cj@yuggoth.org> On 2022-12-14 18:30:57 +0000 (+0000), Sean Mooney wrote: [...] > that looks interesting but its missing the review data form gerrit [...] Yes, the eventual goal is to include the same metrics as for the equivalent dashboards we already have for the other OpenInfra projects, for example https://zuul.biterg.io/ which shows "Gerrit" statistics like "Approvals" (i.e. votes). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From gmann at ghanshyammann.com Wed Dec 14 19:33:21 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 14 Dec 2022 11:33:21 -0800 Subject: stackalytics is stop update, In-Reply-To: References: <20221214153612.mexqmdzt6t5msk7l@yuggoth.org> <13a1fb6c122002d4ac515bc01f6c4c0a384455e8.camel@redhat.com> Message-ID: <18512208f8e.11375ce43923906.4026338105138393025@ghanshyammann.com> ---- On Wed, 14 Dec 2022 11:02:09 -0800 Dmitriy Rabotyagov wrote --- > I wonder if it would make sense for us to follow Kubernetes in terms of way of providing contribution metrics. As for data representation they use just grafana dashboard, like [1]. And there's analysis plugin [2] for Gerrit (that I guess we use as of today?). Though I'm not sure if analytics will expand Gerrit metrics (I guess not) in order just to use Prometheus exporter plugin[3] to gather them. But even if not, leveraging analytics API to get some simple exporter for Prometheus should be doable. > Yes, that would require some time and effort for setting things up and I bet infra team is quite busy and already lack ppl. But in long term it might take not that much time for maintenance and give us some freedom comparing to vendor locks. +1, if we can do that. -gmann > > [1] https://k8s.devstats.cncf.io/d/9/companies-table[2] https://gerrit.googlesource.com/plugins/analytics/[3] https://gerrit.googlesource.com/plugins/metrics-reporter-prometheus/ > ??, 14 ???. 2022 ?., 19:33 Sean Mooney smooney at redhat.com>: > On Wed, 2022-12-14 at 15:36 +0000, Jeremy Stanley wrote: > > On 2022-12-14 15:01:13 +0000 (+0000), ??? wrote: > > > I see stackalytics[1] is stop update. The page show "The data was > > > last updated on 09 Nov 2022 12:44:22 UTC". > > > > > > Does anyone know why? > > > > Hopefully the Stackalytics site administrators follow this mailing > > list (it's not run within the OpenDev Collaboratory or by the > > OpenInfra Foundation), so we'll have to wait for them to chime in. > > > > > And If you want to check the contribution statistics, is there > > > still a way? > > [...] > > > > The OpenInfra Foundation is contracting with a company called > > Bitergia to develop a contribution activity tracker at > > https://openstack.biterg.io/ though its affiliation data isn't > > complete yet. You can follow the discussion about that on the > > foundation mailing list: > > > > https://lists.openinfra.dev/pipermail/foundation/2022-October/003099.html > > that looks interesting but its missing the review data form gerrit > it seams to have mailing list infro which is nice but its only looking at git commits > and lines of code is not a proxy or messure for contibutions in general. > > > my perspective the primary metric for staclaitcs was reviews > https://www.stackalytics.io/?module=nova-group > https://www.stackalytics.io/report/contribution?module=nova-group&project_type=openstack&days=100 > > and the rbeak down of +/- 1/2 and a/w as well as the +% are imporant > > i hope that is planned to be included > > the stats in the github mirror cover most of the commit/git based stats already > > > > > > From fungi at yuggoth.org Wed Dec 14 19:55:46 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 14 Dec 2022 19:55:46 +0000 Subject: stackalytics is stop update, In-Reply-To: References: <20221214153612.mexqmdzt6t5msk7l@yuggoth.org> <13a1fb6c122002d4ac515bc01f6c4c0a384455e8.camel@redhat.com> Message-ID: <20221214195545.ek4ylerv3qiyajp6@yuggoth.org> On 2022-12-14 20:02:09 +0100 (+0100), Dmitriy Rabotyagov wrote: > I wonder if it would make sense for us to follow Kubernetes in > terms of way of providing contribution metrics. As for data > representation they use just grafana dashboard, like [1]. And > there's analysis plugin [2] for Gerrit (that I guess we use as of > today?). Though I'm not sure if analytics will expand Gerrit > metrics (I guess not) in order just to use Prometheus exporter > plugin[3] to gather them. But even if not, leveraging analytics > API to get some simple exporter for Prometheus should be doable. [...] The software for the new activity dashboards is GrimoireLab (part of the CHAOSS project at Linux Foundation), and the organization contracted to set it up and maintain it is the primary developer of Grimoire Lab (Bitergia). You can find details on what data sources it supports documented at https://chaoss.github.io/grimoirelab/ if interested. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From pradeep8985 at gmail.com Wed Dec 14 20:33:33 2022 From: pradeep8985 at gmail.com (pradeep) Date: Thu, 15 Dec 2022 02:03:33 +0530 Subject: reconfiguring the external interface to multiple interfaces doesn't work Message-ID: Hello All, I have deployed OpenStack Yoga using kolla ansible with a single external interface initially. Later we realized that we wanted to use multiple interfaces. I have followed the docs to modify the external interface in the globals.yml file as below. *Before:* *globals.yml* neutron_external_interface: "bond2" *After:* *globals.yml* neutron_external_interface: "bond2,enp6s0f1" neutron_bridge_name: "br-ex,br-ex2" *Command used:* kolla-ansible -i multinode reconfigure --tags "common,horizon,neutron" -vv br-ex2 doesn't show up in the neutron nodes. I also noticed that my neutron-openvswitch-agent is continuously restating in both the neutron nodes. neutron-openvswitch-agent.log says as below 2022-12-15 02:12:35.121 7 INFO neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_bridge [-] Bridge br-int has datapath-ID 0000be5c93241c4f 2022-12-15 02:12:36.040 7 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [-] Mapping physical network physnet1 to bridge br-ex 2022-12-15 02:12:36.041 7 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [-] Bridge br-ex datapath-id = 0x0000801844eaf971 2022-12-15 02:12:36.057 7 INFO neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_bridge [-] Bridge br-ex has datapath-ID 0000801844eaf971 2022-12-15 02:12:36.063 7 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [-] Mapping physical network physnet2 to bridge br-ex2 2022-12-15 02:12:36.063 7 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [-] Bridge br-ex2 for physical network physnet2 does not exist. Agent terminated! Here is the config file. cat /etc/kolla/neutron-openvswitch-agent/openvswitch_agent.ini [agent] tunnel_types = vxlan l2_population = true arp_responder = true extensions = qos,sfc [securitygroup] firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver [ovs] bridge_mappings = physnet1:br-ex,physnet2:br-ex2 datapath_type = system ovsdb_connection = tcp:127.0.0.1:6640 ovsdb_timeout = 10 local_ip = Also, i tried to change the bridge name from br-ex to br-ex1 but it still shows up br-ex (this can be perceived as my second issue). How can I resolve both issues? Kindly help. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jerome.becot at deveryware.com Thu Dec 15 09:52:43 2022 From: jerome.becot at deveryware.com (=?UTF-8?B?SsOpcsO0bWUgQkVDT1Q=?=) Date: Thu, 15 Dec 2022 10:52:43 +0100 Subject: Help overriding Nova policies (Ussuri) Message-ID: <05d3231f-ec52-bd94-9d59-41e9cac7c880@deveryware.com> Hello, I should miss something, and I need some help. I've updated some rules that I placed in the /etc/nova/policy.yaml file. I use all defaults for the olso policies (ie scope is not enabled). When I generate the fulle policy with oslopolicy-policy-generator, the rules are applied in the generated output. If I test the policy with oslopolicy-checker under the user's token, the results matches the permissions, but Nova API continue to refuse the operation Policy check for os_compute_api:os-flavor-manage:create failed with credentials I'm using the openstack cli to make the request (openstack flavor create) Thanks for your help J From zigo at debian.org Thu Dec 15 10:18:53 2022 From: zigo at debian.org (Thomas Goirand) Date: Thu, 15 Dec 2022 11:18:53 +0100 Subject: [designate] Implemeting PTR record restrictions Message-ID: Hi, We implemented this scenario for our public cloud: https://docs.openstack.org/neutron/latest/admin/config-dns-int-ext-serv.html#use-case-3b-the-dns-domain-ports-extension This is currently in production in beta-mode at Infomaniak's public cloud. We did that, because we want our customers to be able to set any domain name or PTR for the IPs they own. However, we discovered that there's no restriction on what zone customers can set. For example, if customer A owns the IP 203.0.113.9, customer B can do "openstack zone create 9.113.0.203.in-addr.arpa.", preventing customer A to set their PTR record. Is there currently a way to fix this? Or maybe a spec to implement the correct restrictions? What is the way to fix this problem in a public cloud env? Cheers, Thomas Goirand (zigo) From zigo at debian.org Thu Dec 15 10:24:33 2022 From: zigo at debian.org (Thomas Goirand) Date: Thu, 15 Dec 2022 11:24:33 +0100 Subject: [designate] New project: designate-tlds Message-ID: <03875da6-626c-cdd2-bca3-bda2a24fde1e@debian.org> Hi, We wrote this: https://salsa.debian.org/openstack-team/services/designate-tlds The interesting code bits are in: https://salsa.debian.org/openstack-team/services/designate-tlds/-/blob/debian/zed/designate_tlds/tlds.py What it does is download the TLD list from https://publicsuffix.org/list/public_suffix_list.dat using requests (with an optional proxy), compare it to the list of TLDs in Designate, and fix the difference. It's by default setup in a cron every week. Basically, it's just apt-get install designate-tlds, configure keystone_authtoken in /etc/designate-tlds/designate-tlds.conf and set dry_run=false, and you're done! Note I also wrote a patch for puppet-designate [1] to support it. Moving forward I see 2 solutions: 1- we continue to maintain this separately from Designate 2- our code gets integrated into Designate itself. Designate team: are you interested for option 2? Cheers, Thomas Goirand (zigo) [1] https://salsa.debian.org/openstack-team/puppet/puppet-module-designate/-/blob/debian/zed/debian/patches/add_designate_tlds_config.patch From amonster369 at gmail.com Thu Dec 15 11:00:13 2022 From: amonster369 at gmail.com (A Monster) Date: Thu, 15 Dec 2022 12:00:13 +0100 Subject: kolla-ansible deploy glance with file as a backend on multiple nodes Message-ID: I have made a deployment of openstack on multiple centos stream 8 nodes, and used files as a backend for glance service, but according to the documentation, when using file as a glance backend, It can only be deployed on a single node, what can I do to deploy it on multiple servers at once. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Thu Dec 15 14:18:19 2022 From: satish.txt at gmail.com (Satish Patel) Date: Thu, 15 Dec 2022 09:18:19 -0500 Subject: kolla-ansible deploy glance with file as a backend on multiple nodes In-Reply-To: References: Message-ID: To run glance on multiple nodes you need shared storage like Ceph/NFS etc. Otherwise you won't be able to replicate images between all glance nodes. In the past I have deployed glusterfs running on all 3 nodes and mounts that filesystem into a glance docker container for shared filesystem. On Thu, Dec 15, 2022 at 6:02 AM A Monster wrote: > I have made a deployment of openstack on multiple centos stream 8 nodes, > and used files as a backend for glance service, but according to the > documentation, when using file as a glance backend, It can only be deployed > on a single node, what can I do to deploy it on multiple servers at once. > Thank you. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Thu Dec 15 14:57:29 2022 From: smooney at redhat.com (Sean Mooney) Date: Thu, 15 Dec 2022 14:57:29 +0000 Subject: kolla-ansible deploy glance with file as a backend on multiple nodes In-Reply-To: References: Message-ID: On Thu, 2022-12-15 at 09:18 -0500, Satish Patel wrote: > To run glance on multiple nodes you need shared storage like Ceph/NFS etc. > Otherwise you won't be able to replicate images between all glance nodes. only if you dont use the glance multi store feature. you can have independet glance deployment with the file store without shared filesystem and automtic replciation in that case you can import image to speicic stores via the glance api. and you can copy data between them using the image copy import scheme. to do this in kolla you will proably need to do the glance configurtion per host vai the config overriede mechanium but kolla shoudl provide the flexablity to configure that if you want too. > > In the past I have deployed glusterfs running on all 3 nodes and mounts > that filesystem into a glance docker container for shared filesystem. > > On Thu, Dec 15, 2022 at 6:02 AM A Monster wrote: > > > I have made a deployment of openstack on multiple centos stream 8 nodes, > > and used files as a backend for glance service, but according to the > > documentation, when using file as a glance backend, It can only be deployed > > on a single node, what can I do to deploy it on multiple servers at once. > > Thank you. > > From mkopec at redhat.com Thu Dec 15 16:25:24 2022 From: mkopec at redhat.com (Martin Kopec) Date: Thu, 15 Dec 2022 17:25:24 +0100 Subject: [qa] Cancelling next office hours Message-ID: Hi everyone, we're cancelling office hours for the next 3 weeks due to the holidays. The next office hour is gonna be on January 10th. Thank you. Enjoy your holidays! -- Martin Kopec Senior Software Quality Engineer Red Hat EMEA IM: kopecmartin -------------- next part -------------- An HTML attachment was scrubbed... URL: From 1316587041 at qq.com Thu Dec 15 15:30:07 2022 From: 1316587041 at qq.com (=?gb18030?B?uqu54tPu?=) Date: Thu, 15 Dec 2022 23:30:07 +0800 Subject: Could I deploy openstack in openstack instance Message-ID: Hello, Could I deploy openstack in openstack instance? When I use a instance just with one network interface. After ip move to brq(using linuxbridge) or br-ex(uring openvswitch), the instance lost connection.  Does anyone know how to set it up? Or suggest me some documentation. I would appreciate any kind of guidance or help. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pradeep8985 at gmail.com Thu Dec 15 16:35:54 2022 From: pradeep8985 at gmail.com (pradeep) Date: Thu, 15 Dec 2022 22:05:54 +0530 Subject: auto switch of glance-api containers to other controllers In-Reply-To: References: Message-ID: Hello All Any help is highly appreciated. On Thu, 15 Dec 2022 at 01:41, pradeep wrote: > Hi Danny, > > Yes, i did, All of them are mounted in all the 3 controllers as well. > > # cat /etc/kolla/globals.yml | egrep > "glance_backend_file|glance_file_datadir_volume" > #glance_backend_file: "yes" > glance_file_datadir_volume: "/glanceimages" > > root at ctrl-1:~# df -h /glanceimages/ > Filesystem Size Used Avail Use% Mounted on > :/Openstack_fileshare1 150G 888M 150G 1% /glanceimages > > root at ctrl-2:~# df -h /glanceimages/ > Filesystem Size Used Avail Use% Mounted on > :/Openstack_fileshare1 150G 888M 150G 1% /glanceimages > > root at ctrl-3:~# df -h /glanceimages/ > Filesystem Size Used Avail Use% Mounted on > :/Openstack_fileshare1 150G 888M 150G 1% /glanceimages > > > On Wed, 14 Dec 2022 at 17:54, Danny Webb > wrote: > >> did you change the glance_file_datadir_volume variable? that is >> required for kolla to deploy multiples: >> >> >> https://github.com/openstack/kolla-ansible/blob/ae3de342e48bc4293564a3b532a66bfbf1326c0d/ansible/group_vars/all.yml#L903 >> >> ------------------------------ >> *From:* pradeep >> *Sent:* 14 December 2022 10:19 >> *To:* Danny Webb >> *Subject:* Re: auto switch of glance-api containers to other controllers >> >> >> * CAUTION: This email originates from outside THG * >> ------------------------------ >> Thanks Danny, All >> >> I have found the glance-api containers not deployed in control2 and >> control3 hosts although I have used NFS as my backend and share is enabled >> in all my controllers. >> >> I have tried to deploy the containers in control 2 and 3 as below but see >> some failures. I hope I am not using the wrong command. Am i missing >> something? Please let me know. >> >> *Command used* >> kolla-ansible -i multinode deploy --tags "glance" --limit >> control02.lab,control03.lab -vv >> >> *Errors in deployment* >> >> RUNNING HANDLER [glance : Restart glance-api container] >> ********************************************************************************* >> task path: >> /root/kolla-ansible/venv/share/kolla-ansible/ansible/roles/glance/handlers/main.yml:2 >> >> fatal: [control02.lab]: FAILED! => {"changed": true, "msg": "'Traceback >> (most recent call last):\\n File >> \"/usr/local/lib/python3.8/dist-packages/docker/api/client.py\", line 268, >> in _raise_for_status\\n response.raise_for_status()\\n File >> \"/usr/lib/python3/dist-packages/requests/models.py\", line 940, in >> raise_for_status\\n raise HTTPError(http_error_msg, >> response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal >> Server Error for url: >> http+docker://localhost/v1.41/images/create?tag=yoga&fromImage=quay.io%2Fopenstack.kolla%2Fubuntu-source-glance-api\\n\\nDuring >> handling of the above exception, another exception occurred:\\n\\nTraceback >> (most recent call last):\\n File >> \"/tmp/ansible_kolla_docker_payload_iltg71bi/ansible_kolla_docker_payload.zip/ansible/modules/kolla_docker.py\", >> line 381, in main\\n File >> \"/tmp/ansible_kolla_docker_payload_iltg71bi/ansible_kolla_docker_payload.zip/ansible/module_utils/kolla_docker_worker.py\", >> line 651, in recreate_or_restart_container\\n self.start_container()\\n >> File >> \"/tmp/ansible_kolla_docker_payload_iltg71bi/ansible_kolla_docker_payload.zip/ansible/module_utils/kolla_docker_worker.py\", >> line 669, in start_container\\n self.pull_image()\\n File >> \"/tmp/ansible_kolla_docker_payload_iltg71bi/ansible_kolla_docker_payload.zip/ansible/module_utils/kolla_docker_worker.py\", >> line 450, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) >> for line in self.dc.pull(\\n File >> \"/usr/local/lib/python3.8/dist-packages/docker/api/image.py\", line 430, >> in pull\\n self._raise_for_status(response)\\n File >> \"/usr/local/lib/python3.8/dist-packages/docker/api/client.py\", line 270, >> in _raise_for_status\\n raise create_api_error_from_http_exception(e)\\n >> File \"/usr/local/lib/python3.8/dist-packages/docker/errors.py\", line 31, >> in create_api_error_from_http_exception\\n raise cls(e, response=response, >> explanation=explanation)\\ndocker.errors.APIError: 500 Server Error for >> http+docker://localhost/v1.41/images/create?tag=yoga&fromImage=quay.io%2Fopenstack.kolla%2Fubuntu-source-glance-api: >> Internal Server Error (\"Get \"https://quay.io/v2/\ >> ": net/http: request canceled while waiting for >> connection (Client.Timeout exceeded while awaiting headers)\")\\n'"} >> fatal: [control03.lab]: FAILED! => {"changed": true, "msg": "'Traceback >> (most recent call last):\\n File >> \"/usr/local/lib/python3.8/dist-packages/docker/api/client.py\", line 268, >> in _raise_for_status\\n response.raise_for_status()\\n File >> \"/usr/lib/python3/dist-packages/requests/models.py\", line 940, in >> raise_for_status\\n raise HTTPError(http_error_msg, >> response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal >> Server Error for url: >> http+docker://localhost/v1.41/images/create?tag=yoga&fromImage=quay.io%2Fopenstack.kolla%2Fubuntu-source-glance-api\\n\\nDuring >> handling of the above exception, another exception occurred:\\n\\nTraceback >> (most recent call last):\\n File >> \"/tmp/ansible_kolla_docker_payload_kkez0k31/ansible_kolla_docker_payload.zip/ansible/modules/kolla_docker.py\", >> line 381, in main\\n File >> \"/tmp/ansible_kolla_docker_payload_kkez0k31/ansible_kolla_docker_payload.zip/ansible/module_utils/kolla_docker_worker.py\", >> line 651, in recreate_or_restart_container\\n self.start_container()\\n >> File >> \"/tmp/ansible_kolla_docker_payload_kkez0k31/ansible_kolla_docker_payload.zip/ansible/module_utils/kolla_docker_worker.py\", >> line 669, in start_container\\n self.pull_image()\\n File >> \"/tmp/ansible_kolla_docker_payload_kkez0k31/ansible_kolla_docker_payload.zip/ansible/module_utils/kolla_docker_worker.py\", >> line 450, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) >> for line in self.dc.pull(\\n File >> \"/usr/local/lib/python3.8/dist-packages/docker/api/image.py\", line 430, >> in pull\\n self._raise_for_status(response)\\n File >> \"/usr/local/lib/python3.8/dist-packages/docker/api/client.py\", line 270, >> in _raise_for_status\\n raise create_api_error_from_http_exception(e)\\n >> File \"/usr/local/lib/python3.8/dist-packages/docker/errors.py\", line 31, >> in create_api_error_from_http_exception\\n raise cls(e, response=response, >> explanation=explanation)\\ndocker.errors.APIError: 500 Server Error for >> http+docker://localhost/v1.41/images/create?tag=yoga&fromImage=quay.io%2Fopenstack.kolla%2Fubuntu-source-glance-api: >> Internal Server Error (\"Get \"https://quay.io/v2/\ >> ": net/http: request canceled while waiting for >> connection (Client.Timeout exceeded while awaiting headers)\")\\n'"} >> META: ran handlers >> >> Regards >> Pradeep >> >> >> On Mon, 12 Dec 2022 at 14:54, Danny Webb >> wrote: >> >> Kolla only uses a single controller if you use file based backend by >> default. If you want to continue using the file backend, you can use >> something like an NFS share to share images between controllers and then >> change the glance_file_datadir_volume parameter which will result in the >> api being deployed on all controllers >> ------------------------------ >> *From:* pradeep >> *Sent:* 12 December 2022 04:53 >> *To:* Dmitriy Rabotyagov >> *Cc:* openstack-discuss >> *Subject:* Re: auto switch of glance-api containers to other controllers >> >> >> * CAUTION: This email originates from outside THG * >> ------------------------------ >> Any help is highly appreciated. >> >> Thank you! >> >> On Fri, 9 Dec 2022 at 21:46, pradeep wrote: >> >> Hi Dmitriy, Thanks for the response. Sorry, I forgot to mention that i >> used kolla ansible for deployment. As per kolla docs, glance-api container >> runs only in one controller. So i am looking from kolla perspective if it >> can automatically switch it over other available controllers. >> >> Regards >> Pradeep >> >> On Fri, 9 Dec 2022 at 14:51, Dmitriy Rabotyagov >> wrote: >> >> Hi, >> >> I'm not sure if you used any deployment tool for your environment, but >> usually for such failover there's a load balancer in conjunction with VRRP >> or Anycast, that able to detect when controller1 is down and forward >> traffic to another controller. >> >> As example, OpenStack-Ansible as default option installs haproxy to each >> controller and keepalived to implement VRRP and Virtual IPs. So when glance >> is down on controller1, haproxy will detect that and forward traffic to >> other controllers. If controller1 is down as a whole, keepalived will >> detect that and raise VIP on controller2, so all client traffic will go >> there. Since controller2 also have haproxy, it will pass traffic to >> available backends based on the source IP. >> >> >> ??, 9 ???. 2022 ?., 09:06 pradeep : >> >> Hello All, >> >> I understand that glance-api container run only one controller always. I >> have tried to check if the glance-api container to switch over to the next >> available controller by means of rebooting and stopping the containers, but >> nothing happened. Is there a way to make sure that glance-api container >> switches to other controllers if controller 1 is not available. >> >> Thanks for you help >> >> Regards >> Pradeep >> >> >> >> >> -- >> ----------------------- >> Regards >> Pradeep Kumar >> >> >> >> -- >> ----------------------- >> Regards >> Pradeep Kumar >> >> *Danny Webb* >> Principal OpenStack Engineer >> Danny.Webb at thehutgroup.com >> [image: THG Ingenuity Logo] >> www.thg.com >> >> >> >> Danny Webb >> Principal OpenStack Engineer >> The Hut Group >> >> Tel: >> Email: Danny.Webb at thehutgroup.com >> >> >> For the purposes of this email, the "company" means The Hut Group >> Limited, a company registered in England and Wales (company number 6539496) >> whose registered office is at Fifth Floor, Voyager House, Chicago Avenue, >> Manchester Airport, M90 3DQ and/or any of its respective subsidiaries. >> >> *Confidentiality Notice* >> This e-mail is confidential and intended for the use of the named >> recipient only. If you are not the intended recipient please notify us by >> telephone immediately on +44(0)1606 811888 or return it to us by e-mail. >> Please then delete it from your system and note that any use, >> dissemination, forwarding, printing or copying is strictly prohibited. Any >> views or opinions are solely those of the author and do not necessarily >> represent those of the company. >> >> *Encryptions and Viruses* >> Please note that this e-mail and any attachments have not been encrypted. >> They may therefore be liable to be compromised. Please also note that it is >> your responsibility to scan this e-mail and any attachments for viruses. We >> do not, to the extent permitted by law, accept any liability (whether in >> contract, negligence or otherwise) for any virus infection and/or external >> compromise of security and/or confidentiality in relation to transmissions >> sent by e-mail. >> >> *Monitoring* >> Activity and use of the company's systems is monitored to secure its >> effective use and operation and for other lawful business purposes. >> Communications using these systems will also be monitored and may be >> recorded to secure effective use and operation and for other lawful >> business purposes. >> hgvyjuv >> >> >> >> -- >> ----------------------- >> Regards >> Pradeep Kumar >> >> *Danny Webb* >> Principal OpenStack Engineer >> Danny.Webb at thehutgroup.com >> [image: THG Ingenuity Logo] >> www.thg.com >> >> >> > > > -- > ----------------------- > Regards > Pradeep Kumar > -- ----------------------- Regards Pradeep Kumar -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at gmail.com Thu Dec 15 16:38:40 2022 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Thu, 15 Dec 2022 17:38:40 +0100 Subject: Could I deploy openstack in openstack instance In-Reply-To: References: Message-ID: Hi, In short - you can. Some deployment tools, like OpenStack-Ansible or kolla-ansible provide guides for all on one deployments. Devstack also is capable of that, since this principal is widely used in our CI, as tempest scenarios usually spawn real VMs and CI is run inside openstack VMs as well. This usually involve creation of dummy interfaces, that are used in bridges as tunnel ones and then instaces are connected to these bridges through neutron. So here you can find some docs and architecture involved: https://docs.openstack.org/openstack-ansible/latest/user/aio/quickstart.html https://docs.openstack.org/kolla-ansible/latest/user/quickstart.html ??, 15 ???. 2022 ?., 17:30 ??? <1316587041 at qq.com>: > Hello, > > Could I deploy openstack in openstack instance? > > When I use a instance just with one network interface. After ip move to > brq(using linuxbridge) or br-ex(uring openvswitch), the instance lost > connection. > > Does anyone know how to set it up? Or suggest me some documentation. > > I would appreciate any kind of guidance or help. > > Thank you. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliver.weinmann at me.com Fri Dec 16 12:08:37 2022 From: oliver.weinmann at me.com (Oliver Weinmann) Date: Fri, 16 Dec 2022 12:08:37 -0000 Subject: =?utf-8?B?TWFnbnVtIHByaXZhdGUgZG9ja2VyIHJlZ2lzdHJ5IChpbnNlY3VyZV9yZWdp?= =?utf-8?B?c3RyeSkgbm90IHdvcmtpbmc/?= Message-ID: <1a2eaf45-0023-4131-9fb7-dbcdc584b018@me.com> Hi,I can't seem to get magnum (k8s) to accept my private docker registry. I wanted to have a central registry so not all hosts pull the images during deployment.For this I configured a registry:v2 docker container, pulled the images and pushed them to the local registry and added the following label to my k8s template:container_infra_prefix=172.28.7.140:4000/At first this seems to be working fine and when deploying a new k8s cluster using magnum I can see that it pulls the heat-container-agent image from my local registry:[core at k8s-admin-test-local-reg-6c4hx7gxbdhr-master-0 ~]$ sudo podman ps -aCONTAINER ID? IMAGE??????????????????????????????????????????????????? COMMAND?????????????? CREATED?????? STATUS?????????? PORTS?????? NAMES2d08559b9cdc? 172.28.7.140:4000/heat-container-agent:wallaby-stable-1? /usr/bin/start-he...? 1 second ago? Up 1 second ago????????????? heat-container-agentBut then it fails to pull the next container:tail -f /var/log/heat-config/heat-config-script/64d35aad-5453-4da4-97c7-45abb640fc90-k8s-admin-test-local-reg-6c4hx7gxbdhr-kube_masters-h3wbcqgm6qv4-0-sfagopiu52se-master_config-2f5lhvr32z7j.logWARNING Attempt 8: Trying to install kubectl. Sleeping 5s+ ssh -F /srv/magnum/.ssh/config root at localhost '/usr/bin/podman run???? --entrypoint /bin/bash???? --name install-kubectl???? --net host???? --privileged???? --rm???? --user root???? --volume /srv/magnum/bin:/host/srv/magnum/bin???? 172.28.7.140:4000/hyperkube:v1.23.3-rancher1???? -c '\''cp /usr/local/bin/kubectl /host/srv/magnum/bin/kubectl'\'''Trying to pull 172.28.7.140:4000/hyperkube:v1.23.3-rancher1...Error: initializing source docker://172.28.7.140:4000/hyperkube:v1.23.3-rancher1: pinging container registry 172.28.7.140:4000: Get "https://172.28.7.140:4000/v2/": http: server gave HTTP response to HTTPS clientI don't know why but there is no /etc/docker/daemon.json and the /etc/sysconfig/docker also doesn'T contain the line for my insecure registry:root at k8s-admin-test-local-reg-6c4hx7gxbdhr-master-0 ~]# cat /etc/sysconfig/docker# /etc/sysconfig/docker# Modify these options if you want to change the way the docker daemon runsOPTIONS="--selinux-enabled \? --log-driver=journald \? --live-restore \? --default-ulimit nofile=1024:1024 \? --init-path /usr/libexec/docker/docker-init \? --userland-proxy-path /usr/libexec/docker/docker-proxy \"As soon as I manually add my insecure registry here it works just fine. I looked at the magnum code and there is indeed some lines that should actually handle this, but it doesn't seem to be working. What is also weird is that while there is the Option in the Horizon WebUI to set an insecure registry, the openstack coe command doesn't offer this.Best Regards,Oliver -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.Kieske at mittwald.de Fri Dec 16 12:53:06 2022 From: S.Kieske at mittwald.de (Sven Kieske) Date: Fri, 16 Dec 2022 12:53:06 +0000 Subject: [all][TC] Bare rechecks stats week of 25.07 In-Reply-To: <15225123.PIt3FUKRBJ@p1> References: <15225123.PIt3FUKRBJ@p1> Message-ID: <2bc12d56cb36e1e6cd90f895c6321cb7641ef71d.camel@mittwald.de> On Do, 2022-07-28 at 12:51 +0200, Slawek Kaplonski wrote: > Hi, > > Here are fresh data about bare rechecks. > > +--------------------+---------------+--------------+-------------------+ > | Team?????????????? | Bare rechecks | All Rechecks | Bare rechecks [%] | > +--------------------+---------------+--------------+-------------------+ > | requirements?????? | 2???????????? | 2??????????? | 100.0???????????? | > | keystone?????????? | 1???????????? | 1??????????? | 100.0???????????? | > | OpenStackSDK?????? | 3???????????? | 3??????????? | 100.0???????????? | > | tacker???????????? | 3???????????? | 3??????????? | 100.0???????????? | > | freezer??????????? | 1???????????? | 1??????????? | 100.0???????????? | > | rally????????????? | 1???????????? | 1??????????? | 100.0???????????? | > | OpenStack-Helm???? | 14??????????? | 15?????????? | 93.33???????????? | > | cinder???????????? | 6???????????? | 7??????????? | 85.71???????????? | > | kolla????????????? | 20??????????? | 25?????????? | 80.0????????????? | > | neutron??????????? | 4???????????? | 5??????????? | 80.0????????????? | > | Puppet OpenStack?? | 10??????????? | 13?????????? | 76.92???????????? | > | OpenStack Charms?? | 3???????????? | 6??????????? | 50.0????????????? | > | nova?????????????? | 11??????????? | 22?????????? | 50.0????????????? | > | tripleo??????????? | 8???????????? | 19?????????? | 42.11???????????? | > | ironic???????????? | 1???????????? | 3??????????? | 33.33???????????? | > | Quality Assurance? | 1???????????? | 3??????????? | 33.33???????????? | > | octavia??????????? | 0???????????? | 1??????????? | 0.0?????????????? | > | OpenStackAnsible?? | 0???????????? | 1??????????? | 0.0?????????????? | > | designate????????? | 0???????????? | 1??????????? | 0.0?????????????? | > | Release Management | 0???????????? | 1??????????? | 0.0?????????????? | > | horizon??????????? | 0???????????? | 1??????????? | 0.0?????????????? | > +--------------------+---------------+--------------+-------------------+ > > Those data should be more accurate as my script now counts only comments from last 7 days, not all comments from patches updated in last 7 days. > > Reminder: "bare rechecks" are recheck comments without any reason given. If You need to do recheck for patch due to failed job(s), please first check such failed job and try to identify what was the > issue there. Maybe there is already opened bug for that or maybe You can open new one and add it as explanation in the recheck comment. Or maybe it was some infra issue, in such case short > explanation in the comment would also be enough. Sorry to resurrect this old thread, but while this is all very interesting, is there any documentation on rechecks in the developer guide? We are unable to find anything beside https://docs.openstack.org/contributors/code-and-documentation/elastic-recheck.html which does not really say much about when and how to recheck and when and where to report bugs. any advice would be greatly appreciated. -- Mit freundlichen Gr??en / Regards Sven Kieske Systementwickler / systems engineer ? ? Mittwald CM Service GmbH & Co. KG K?nigsberger Stra?e 4-6 32339 Espelkamp ? Tel.: 05772 / 293-900 Fax: 05772 / 293-333 ? https://www.mittwald.de ? Gesch?ftsf?hrer: Robert Meyer, Florian J?rgens ? St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen Komplement?rin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen Informationen zur Datenverarbeitung im Rahmen unserer Gesch?ftst?tigkeit? gem?? Art. 13-14 DSGVO sind unter www.mittwald.de/ds abrufbar. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From fungi at yuggoth.org Fri Dec 16 12:59:36 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 16 Dec 2022 12:59:36 +0000 Subject: [all][TC] Bare rechecks stats week of 25.07 In-Reply-To: <2bc12d56cb36e1e6cd90f895c6321cb7641ef71d.camel@mittwald.de> References: <15225123.PIt3FUKRBJ@p1> <2bc12d56cb36e1e6cd90f895c6321cb7641ef71d.camel@mittwald.de> Message-ID: <20221216125936.ldgepy44vs7gggfi@yuggoth.org> On 2022-12-16 12:53:06 +0000 (+0000), Sven Kieske wrote: [... > Sorry to resurrect this old thread, but while this is all very interesting, is there any documentation > on rechecks in the developer guide? We are unable to find anything beside https://docs.openstack.org/contributors/code-and-documentation/elastic-recheck.html > > which does not really say much about when and how to recheck and when and where to report bugs. > > any advice would be greatly appreciated. It's covered in this section of the Project Team Guide: https://docs.openstack.org/project-team-guide/testing.html#how-to-handle-test-failures -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From smooney at redhat.com Fri Dec 16 13:38:48 2022 From: smooney at redhat.com (Sean Mooney) Date: Fri, 16 Dec 2022 13:38:48 +0000 Subject: [all][TC] Bare rechecks stats week of 25.07 In-Reply-To: <20221216125936.ldgepy44vs7gggfi@yuggoth.org> References: <15225123.PIt3FUKRBJ@p1> <2bc12d56cb36e1e6cd90f895c6321cb7641ef71d.camel@mittwald.de> <20221216125936.ldgepy44vs7gggfi@yuggoth.org> Message-ID: On Fri, 2022-12-16 at 12:59 +0000, Jeremy Stanley wrote: > On 2022-12-16 12:53:06 +0000 (+0000), Sven Kieske wrote: > [... > > Sorry to resurrect this old thread, but while this is all very interesting, is there any documentation > > on rechecks in the developer guide? We are unable to find anything beside https://docs.openstack.org/contributors/code-and-documentation/elastic-recheck.html > > > > which does not really say much about when and how to recheck and when and where to report bugs. > > > > any advice would be greatly appreciated. > > It's covered in this section of the Project Team Guide: > > https://docs.openstack.org/project-team-guide/testing.html#how-to-handle-test-failures there is not only documentatoin as noted above but the zuul message links that to your in its comment on every ci failure. """ Build failed (check pipeline). For information on how to proceed, see https://docs.opendev.org/opendev/infra-manual/latest/developers.html#automated-testing """ we might also want to add https://docs.openstack.org/project-team-guide/testing.html#how-to-handle-test-failures to that message here https://github.com/openstack/project-config/blob/master/zuul.d/pipelines.yaml#L10-L12 i kind fo feel tracking the barerehceck in team calls weekly or on the mailing list is not useful anymore. the people that pay attention to these already know why we dont want to do bare rehcecks and the cases that still happen are either because they dont care, where in a rush and left a comment seperatly that the script did not see ( i.e. hit send to soon and added a comment after explinging the recheck) or forgot. we have set the expecation that contibutors shoudl provide a reason before rechecking so other then updating the zuul failure message or adding a zuul job to trigger on bare rechecks and comment saying "you shoudl not do this and here is what you should do instead" im not sure that we are going to make more progress on this. > From fungi at yuggoth.org Fri Dec 16 13:54:38 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 16 Dec 2022 13:54:38 +0000 Subject: [all][TC] Bare rechecks stats week of 25.07 In-Reply-To: References: <15225123.PIt3FUKRBJ@p1> <2bc12d56cb36e1e6cd90f895c6321cb7641ef71d.camel@mittwald.de> <20221216125936.ldgepy44vs7gggfi@yuggoth.org> Message-ID: <20221216135438.ysacei3eh3lb6tri@yuggoth.org> On 2022-12-16 13:38:48 +0000 (+0000), Sean Mooney wrote: [...] > we might also want to add > https://docs.openstack.org/project-team-guide/testing.html#how-to-handle-test-failures > to that message here > https://github.com/openstack/project-config/blob/master/zuul.d/pipelines.yaml#L10-L12 [...] The challenge with that is that the "openstack" tenant in OpenDev's Zuul deployment is shared by a lot of other non-OpenStack projects who may have their own independent policies about such things, so we try to stay neutral (as well as brief) in those tenant-wide messages and not refer to project-specific documentation there. However, the OpenDev Developer's Guide section we already link to does say a couple of things like "if this is an OpenStack project then..." so I suppose we could expand that section to link to the OpenStack Project Team Guide in addition to the current OpenStack Contributor Guide link. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From dwilde at redhat.com Fri Dec 16 16:44:24 2022 From: dwilde at redhat.com (Dave Wilde) Date: Fri, 16 Dec 2022 11:44:24 -0500 Subject: [keystone] Holiday Meeting Schedule Message-ID: Hello, For the upcoming holidays I?m setting the Keystone meeting schedule as follows: 2022: - 20-Dec: Keystone weekly meeting on IRC - 23-Dec: Keystone Reviewathon on Google Meet - 27-Dec: Cancelled - Keystone weekly meeting - 30-Dec: Cancelled - Keystone Reviewathon on Google Meet 2023: - 03-Jan: Keystone weekly meeting on IRC as usual - 06-Jan: Keystone Reviewathon on Google Meet As always please let me know if you need anything. I hope you have a nice holiday and safe and happy new year! Thanks, /Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Sat Dec 17 00:41:49 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 16 Dec 2022 16:41:49 -0800 Subject: [all][TC] Bare rechecks stats week of 25.07 In-Reply-To: <20221216135438.ysacei3eh3lb6tri@yuggoth.org> References: <15225123.PIt3FUKRBJ@p1> <2bc12d56cb36e1e6cd90f895c6321cb7641ef71d.camel@mittwald.de> <20221216125936.ldgepy44vs7gggfi@yuggoth.org> <20221216135438.ysacei3eh3lb6tri@yuggoth.org> Message-ID: <1851d87af07.c66c472321649.9203814019537589687@ghanshyammann.com> ---- On Fri, 16 Dec 2022 05:54:38 -0800 Jeremy Stanley wrote --- > On 2022-12-16 13:38:48 +0000 (+0000), Sean Mooney wrote: > [...] > > we might also want to add > > https://docs.openstack.org/project-team-guide/testing.html#how-to-handle-test-failures > > to that message here > > https://github.com/openstack/project-config/blob/master/zuul.d/pipelines.yaml#L10-L12 > [...] > > The challenge with that is that the "openstack" tenant in OpenDev's > Zuul deployment is shared by a lot of other non-OpenStack projects I thought ..openstack/project-config/blob/master/zuul.d/pipelines.yaml is specifc to the OpenStack also top of file mentioned the same - "Shared zuul config specific to the OpenStack Project.." I also think adding that link to zuul message can be more useful and easy to learn about these bare rechecks. I have noticed in other place also where developers asked about any documentation on bare rechecks. -gmann > who may have their own independent policies about such things, so we > try to stay neutral (as well as brief) in those tenant-wide messages > and not refer to project-specific documentation there. However, the > OpenDev Developer's Guide section we already link to does say a > couple of things like "if this is an OpenStack project then..." so I > suppose we could expand that section to link to the OpenStack > Project Team Guide in addition to the current OpenStack Contributor > Guide link. > -- > Jeremy Stanley > From gmann at ghanshyammann.com Sat Dec 17 01:14:03 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 16 Dec 2022 17:14:03 -0800 Subject: [all][tc] What's happening in Technical Committee: summary 2022 Dec 16: Reading: 5 min Message-ID: <1851da5321a.d6e2934621974.648833965073704276@ghanshyammann.com> Hello Everyone, Here is this week's summary of the Technical Committee activities. 1. TC Meetings: ============ * We had this week's meeting on Dec 14. Most of the meeting discussions are summarized in this email. Meeting logs are available @ https://meetings.opendev.org/meetings/tc/2022/tc.2022-12-14-16.00.log.html * Next TC weekly meeting will be on Dec 21 Wed at 16:00 UTC, Feel free to add the topic to the agenda[1] by Dec 20. 2. What we completed this week: ========================= * The technical elections dates for next cycle is finalized[2], please plan your nominations for open leaderships roles accordingly. 3. Activities In progress: ================== TC Tracker for the 2023.1 cycle ------------------------------------- * Current cycle working items and their progress are present in the 2023.1 tracker etherpad[3]. Open Reviews ----------------- * Three open reviews for ongoing activities[4]. IMPORTANT: Tox 4 failure and time to fix them -------------------------------------------------------- You might know about Tox 4 failure and currently it is capped as <4 in ensure-tox role. This is going to be uncapped on Dec 21[5] and you might see tox based unit/functional tests job (or any job using ensure-role except devstack based) will start failing. Please plan the testing and fix accordingly. Calling out here as holiday seasons or fixing it in early cycle will help to plan the smooth development cycle and release. Change in Inactive project timeline ----------------------------------------- The proposal for this is under review[6] and positive response so far. Mistral release and more maintainers ------------------------------------------- avanzagh joined TC meeting and updates us that new maintainers from OVHCloud have been added as Core members to help maintaining Mistral. And we also discussed and agreed to continue Mistral in DPL model for this cycle. As next step, avanzagh will discuss the required work to release Mistral or what are the expectations from the release team to consider it as release item. This seems like good progress. We will keep monitor the situation until proposal for Mistral release deprecation is closed[7]. Renovate translation SIG i18 ---------------------------------- * No updates this week and we will track it in TC tracker and update on ML if there is any update. Project updates ------------------- * Add Cinder Huawei charm[8] 4. How to contact the TC: ==================== If you would like to discuss or give feedback to TC, you can reach out to us in multiple ways: 1. Email: you can send the email with tag [tc] on openstack-discuss ML[9]. 2. Weekly meeting: The Technical Committee conduct a weekly meeting every Thursday 15:00 UTC [10] 3. Ping us using 'tc-members' nickname on #openstack-tc IRC channel. [1] https://lists.openstack.org/pipermail/openstack-discuss/2022-November/031240.html [2] https://governance.openstack.org/election/ [3] https://etherpad.opendev.org/p/tc-2023.1-tracker [4] https://review.opendev.org/q/projects:openstack/governance+status:open [5] https://lists.openstack.org/pipermail/openstack-discuss/2022-December/031440.html [6] https://review.opendev.org/c/openstack/governance/+/867062 [7] https://review.opendev.org/c/openstack/governance/+/866562 [8] https://review.opendev.org/c/openstack/governance/+/867588 [9] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [10] http://eavesdrop.openstack.org/#Technical_Committee_Meeting -gmann From fungi at yuggoth.org Sat Dec 17 02:53:00 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 17 Dec 2022 02:53:00 +0000 Subject: [all][TC] Bare rechecks stats week of 25.07 In-Reply-To: <1851d87af07.c66c472321649.9203814019537589687@ghanshyammann.com> References: <15225123.PIt3FUKRBJ@p1> <2bc12d56cb36e1e6cd90f895c6321cb7641ef71d.camel@mittwald.de> <20221216125936.ldgepy44vs7gggfi@yuggoth.org> <20221216135438.ysacei3eh3lb6tri@yuggoth.org> <1851d87af07.c66c472321649.9203814019537589687@ghanshyammann.com> Message-ID: <20221217025259.6fhmwwtnnmb72vj7@yuggoth.org> On 2022-12-16 16:41:49 -0800 (-0800), Ghanshyam Mann wrote: [...] > I thought > ..openstack/project-config/blob/master/zuul.d/pipelines.yaml is > specifc to the OpenStack also top of file mentioned the same - > "Shared zuul config specific to the OpenStack Project.." That's definitely inaccurate and should get fixed. The "openstack" tenant was originally the only Zuul tenant we had (when Zuul first became multi-tenant capable), and only a few projects have moved out to or started in other tenants since then. For example, StarlingX and Airship repositories all gate in the "openstack" tenant. Alternatively, we could decide to kick all non-OpenStack projects out of that tenant so it can be customized more for OpenStack's use, but would be a significant undertaking if we went that route. > I also think adding that link to zuul message can be more useful > and easy to learn about these bare rechecks. I have noticed in > other place also where developers asked about any documentation on > bare rechecks. We need to keep the comments from Zuul as brief and to the point as possible. Overloading those comments with more documentation URLs is not good for usability, but making sure the documentation we link to gets people to the right information by having it link to more places is fine. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From rdhasman at redhat.com Sat Dec 17 06:42:55 2022 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Sat, 17 Dec 2022 12:12:55 +0530 Subject: [cinder] Extending specs deadline to 23rd December Message-ID: Hello Argonauts, As per the schedule[1], cinder spec deadline was 16th December, 2022. However, due to unavailability of some core members, less review bandwidth throughout the week (including me) and other factors like bringing cinderlib to a release-able state, we couldn't get the sufficient reviews done and hence will be extending the deadline for all the open specs proposed for 2023.1 (Antelope) release. The new deadline for cinder specs will be 23rd December, 2022. Please make sure your specs are updated and in a reviewable state. [1] https://releases.openstack.org/antelope/schedule.html#a-cinder-spec-freeze Thanks Rajat Dhasmana -------------- next part -------------- An HTML attachment was scrubbed... URL: From amonster369 at gmail.com Sun Dec 18 07:34:04 2022 From: amonster369 at gmail.com (A Monster) Date: Sun, 18 Dec 2022 08:34:04 +0100 Subject: Kolla-ansible openstack deployment with High Availability Message-ID: Do I need to deploy and configure PaceMaker and HAproxy or something similar in order to get a HA kolla-ansible openstack deployment? or is it something done automatically? If so, what are the steps I should be taking? -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliver.weinmann at me.com Sun Dec 18 08:02:54 2022 From: oliver.weinmann at me.com (Oliver Weinmann) Date: Sun, 18 Dec 2022 09:02:54 +0100 Subject: Kolla-ansible openstack deployment with High Availability In-Reply-To: References: Message-ID: <132F9FE0-A89D-4AD7-9F72-E6D371DA5826@me.com> Hi, It?s automatic. Configure at least 3 control nodes, configure the external vip and enable ha proxy in globals.yml. Make sure to configure an odd number of controllers. Cheers, Oliver Von meinem iPhone gesendet > Am 18.12.2022 um 08:38 schrieb A Monster : > > ? > Do I need to deploy and configure PaceMaker and HAproxy or something similar in order to get a HA kolla-ansible openstack deployment? or is it something done automatically? > If so, what are the steps I should be taking? From lokendrarathour at gmail.com Sun Dec 18 12:19:33 2022 From: lokendrarathour at gmail.com (Lokendra Rathour) Date: Sun, 18 Dec 2022 17:49:33 +0530 Subject: [Openstack Tacker] - Issue while creating NS Message-ID: Hi Team, Was trying to create NS using the document : https://docs.openstack.org/tacker/yoga/user/nsd_usage_guide.html But getting an error, the bug has been opened, requesting your kind assistance. https://bugs.launchpad.net/tacker/+bug/1999502 thanks once again. -- ~ Lokendra skype: lokendrarathour -------------- next part -------------- An HTML attachment was scrubbed... URL: From hanguangyu2 at gmail.com Sun Dec 18 13:32:23 2022 From: hanguangyu2 at gmail.com (=?UTF-8?B?6Z+p5YWJ5a6H?=) Date: Sun, 18 Dec 2022 21:32:23 +0800 Subject: Kolla-ansible openstack deployment with High Availability In-Reply-To: <132F9FE0-A89D-4AD7-9F72-E6D371DA5826@me.com> References: <132F9FE0-A89D-4AD7-9F72-E6D371DA5826@me.com> Message-ID: Hi, I'm trying to deploy hacluster(pacemaker, corosync) by kolla-ansible. I want to deploy a 2 controller and 3 compute(storage) env, compute nodes also are ceph storage nodes. Actually, the require of pacemaker is from I want to deploy masakari and masakari deploys hacloster(pacemaker, corosync). Could I deploy Pacemaker in this two controller nodes environment? And if not, is there any way I can implement masakri's deployment in this environment I would appreciate any kind of guidance or help. Han Oliver Weinmann ?2022?12?18??? 16:11??? > > Hi, > > It?s automatic. Configure at least 3 control nodes, configure the external vip and enable ha proxy in globals.yml. Make sure to configure an odd number of controllers. > > Cheers, > Oliver > > Von meinem iPhone gesendet > > > Am 18.12.2022 um 08:38 schrieb A Monster : > > > > ? > > Do I need to deploy and configure PaceMaker and HAproxy or something similar in order to get a HA kolla-ansible openstack deployment? or is it something done automatically? > > If so, what are the steps I should be taking? > From amonster369 at gmail.com Mon Dec 19 09:13:17 2022 From: amonster369 at gmail.com (A Monster) Date: Mon, 19 Dec 2022 10:13:17 +0100 Subject: Glance api deployed only on a single controller on multi-controller deployment [kolla] In-Reply-To: References: Message-ID: Thank you for the clarification. So in order to deploy glance on multiple nodes, I need to first set up an NFS storage then specify the shared path either by overriding the content of ansible/group_vars/all.yml or by adding glance_file_datadir_volume:NFS_PATH in globals.yml ? Which NFS tool do you think I should use? Thank you again. Regards On Thu, 21 Jul 2022 at 10:42, Pierre Riteau wrote: > With the default backend (file), Glance is deployed on a single > controller, because it uses a local Docker volume to store Glance images. > This is explained in the documentation [1]: "By default when using file > backend only one glance-api container can be running". See also the > definition of glance_api_hosts in ansible/group_vars/all.yml. > > If you set glance_file_datadir_volume to a non-default path, it is assumed > to be on shared storage and kolla-ansible will automatically use all > glance-api group members. > > You can also switch to another backend such as Ceph or Swift. > > [1] > https://docs.openstack.org/kolla-ansible/latest/reference/shared-services/glance-guide.html > > On Thu, 21 Jul 2022 at 10:57, A Monster wrote: > >> I've deployed openstack xena using kolla ansible on a centos 8 stream >> cluster, using two controller nodes, however I found out after the >> deployment that glance api is not available in one node, I tried >> redeploying but I got the same behavior, although the deployment finished >> without displaying any error. >> >> Thank you. Regards >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pradeep8985 at gmail.com Mon Dec 19 16:59:08 2022 From: pradeep8985 at gmail.com (pradeep) Date: Mon, 19 Dec 2022 22:29:08 +0530 Subject: Glance api deployed only on a single controller on multi-controller deployment [kolla] In-Reply-To: References: Message-ID: Hi, Sorry to hijack this thread. I have a similar issue. I have used netapp NAS share, mounted in all the 3 controllers as /glanceimages and modified glance_file_datadir_volume:/glanceimages in globals.yml. I have tried deploy and reconfigure commands but was unlucky. It doesn't deploy glance-api containers in controller2 and 3. Please let me know if you succeed in this. Regards Pradeep On Mon, 19 Dec 2022 at 14:49, A Monster wrote: > Thank you for the clarification. > So in order to deploy glance on multiple nodes, I need to first set up an > NFS storage then specify the shared path either by overriding the content > of ansible/group_vars/all.yml or by adding > glance_file_datadir_volume:NFS_PATH in globals.yml ? > Which NFS tool do you think I should use? > Thank you again. Regards > > On Thu, 21 Jul 2022 at 10:42, Pierre Riteau wrote: > >> With the default backend (file), Glance is deployed on a single >> controller, because it uses a local Docker volume to store Glance images. >> This is explained in the documentation [1]: "By default when using file >> backend only one glance-api container can be running". See also the >> definition of glance_api_hosts in ansible/group_vars/all.yml. >> >> If you set glance_file_datadir_volume to a non-default path, it >> is assumed to be on shared storage and kolla-ansible will automatically use >> all glance-api group members. >> >> You can also switch to another backend such as Ceph or Swift. >> >> [1] >> https://docs.openstack.org/kolla-ansible/latest/reference/shared-services/glance-guide.html >> >> On Thu, 21 Jul 2022 at 10:57, A Monster wrote: >> >>> I've deployed openstack xena using kolla ansible on a centos 8 stream >>> cluster, using two controller nodes, however I found out after the >>> deployment that glance api is not available in one node, I tried >>> redeploying but I got the same behavior, although the deployment finished >>> without displaying any error. >>> >>> Thank you. Regards >>> >> -- ----------------------- Regards Pradeep Kumar -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon Dec 19 20:10:12 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 19 Dec 2022 12:10:12 -0800 Subject: [all][tc] Technical Committee next weekly meeting on 2022 Dec 21 at 1600 UTC Message-ID: <1852c02170d.dfd584af81780.2139426519315863849@ghanshyammann.com> Hello Everyone, The technical Committee's next weekly meeting is scheduled for 2022 Dec 21, at 1600 UTC. If you would like to add topics for discussion, please add them to the below wiki page by Tuesday, Dec 20 at 2100 UTC. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting -gmann From ddorra at t-online.de Mon Dec 19 20:27:48 2022 From: ddorra at t-online.de (ddorra at t-online.de) Date: Mon, 19 Dec 2022 21:27:48 +0100 (CET) Subject: [Openstack Heat] - Problems to find right Heat syntax to declare static route in a router template Message-ID: <1671481668560.793355.9c89816dd60500dc0bcba8111c51f68ac55727d6@spica.telekom.de> Hi, I want extend my router HOT template by adding a static route. In the Openstack cli this would be openstack router add route --route destination='1.2.3.4/17',gateway='10.0.0.66' myrouter >From the OS::Neutron::Router resource description I guess this belongs into the value_specs section as a map, but all my attempts failed, e.g. router1: type: 'OS::Neutron::Router' properties: external_gateway_info: network: provider name: myrouter value_specs: route: destination: '0.0.0.0/0' gateway: '10.0.0.62' Can somebody help me with the correct syntax? Thanks & best regards, Dieter ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From erin at openstack.org Mon Dec 19 21:15:30 2022 From: erin at openstack.org (Erin Disney) Date: Mon, 19 Dec 2022 15:15:30 -0600 Subject: OpenStack Release Name Voting Now Open! Message-ID: Hey everyone- It?s that time again to vote for the next OpenStack release name! Here are the finalists: 1. Bandicoot 2. Basalt 3. Bobcat 4. Boomerang Please submit your vote by January 3rd at 11:59pm PT (January 4th at 7:59 UTC). https://civs1.civs.us/cgi-bin/vote.pl?id=E_9c7f7a20a8871b09&akey=5b974247f99dd478 Happy voting! Thanks, Erin -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Tue Dec 20 00:24:02 2022 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 20 Dec 2022 11:24:02 +1100 Subject: OpenStack Release Name Voting Now Open! In-Reply-To: References: Message-ID: Also, 100% my bad about saying B will be 2023.1... it's definitely 2023.2. Sorry for any confusion! -Kendall On Tue, Dec 20, 2022, 8:18 AM Erin Disney wrote: > Hey everyone- > > It?s that time again to vote for the next OpenStack release name! > > Here are the finalists: > 1. Bandicoot > 2. Basalt > 3. Bobcat > 4. Boomerang > > Please submit your vote by January 3rd at 11:59pm PT (January 4th at 7:59 > UTC). > > > https://civs1.civs.us/cgi-bin/vote.pl?id=E_9c7f7a20a8871b09&akey=5b974247f99dd478 > > Happy voting! > > Thanks, > Erin > -------------- next part -------------- An HTML attachment was scrubbed... URL: From katonalala at gmail.com Tue Dec 20 10:34:34 2022 From: katonalala at gmail.com (Lajos Katona) Date: Tue, 20 Dec 2022 11:34:34 +0100 Subject: [Openstack Heat] - Problems to find right Heat syntax to declare static route in a router template In-Reply-To: <1671481668560.793355.9c89816dd60500dc0bcba8111c51f68ac55727d6@spica.telekom.de> References: <1671481668560.793355.9c89816dd60500dc0bcba8111c51f68ac55727d6@spica.telekom.de> Message-ID: Hi, For adding extra/static routes for your routers you have 2 ways in Heat (actually Neutron API has it, and you can use them in hot also): - OS::Neutron::ExtraRoute - https://docs.openstack.org/api-ref/network/v2/index.html#update-router - OS::Neutron::ExtraRouteSet - https://docs.openstack.org/api-ref/network/v2/index.html#add-extra-routes-to-router The 1st one is the "old" set router attribute, and the 2nd is add_extraroutes & remove_extraroutes is optimized for concurrent updates (see the API ref's description for add_extraroutes). So in hot template it will look something like this: type: OS::Neutron::ExtraRouteSet properties: router: { get_resource: myrouter0 } routes: - destination: 179.24.2.0/24 nexthop: 192.168.222.221 Lajos ddorra at t-online.de ezt ?rta (id?pont: 2022. dec. 19., H, 21:43): > Hi, > > > > I want extend my router HOT template by adding a static route. > > > > In the Openstack cli this would be > > > > openstack router add route --route destination='1.2.3.4/17',gateway='10.0.0.66' > myrouter > > > > From the OS::Neutron::Router resource description I guess this belongs > into the value_specs > > section as a map, but all my attempts failed, e.g. > > > > router1: > type: 'OS::Neutron::Router' > properties: > external_gateway_info: > network: provider > name: myrouter > value_specs: > > route: > destination: '0.0.0.0/0' > gateway: '10.0.0.62' > > > > Can somebody help me with the correct syntax? > > > > > > Thanks & best regards, > > Dieter > > > ? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralonsoh at redhat.com Tue Dec 20 12:10:39 2022 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Tue, 20 Dec 2022 13:10:39 +0100 Subject: [neutron] Bug deputy report for week of December 12th Message-ID: Hello Neutrinos: This is the bug report from last week. * Critical: ** https://bugs.launchpad.net/neutron/+bug/2000150: "Fullstack dhcp test test_multiple_agents_for_network is failing intermittently". *Not assigned* * High: ** https://bugs.launchpad.net/neutron/+bug/1999558: "tox4 is breaking Neutron CI (most of the jobs)". To be discussed during the weekly Neutron meeting. ** https://bugs.launchpad.net/neutron/+bug/2000071: "[ovn-octavia-provider] Do not make the status of a newly HM conditional on the status of existing members". Patch: https://review.opendev.org/c/openstack/ovn-octavia-provider/+/868092 * Medium: ** https://bugs.launchpad.net/neutron/+bug/1999392: "Some events which are supposed to be run AFTER changes are committed are performed BEFORE commit". Assigned ** https://bugs.launchpad.net/neutron/+bug/1999678: "Static route can get stuck in the router snat namespace". Patch: https://review.opendev.org/c/openstack/neutron/+/867678 ** https://bugs.launchpad.net/neutron/+bug/1999774: "SDK: Neutron stadiums use python bindings from python-neutronclient which will be deprecated". This bug is a container for any patch on any project deprecating "python-neutronclient". ** https://bugs.launchpad.net/neutron/+bug/1999813: "[ovn-octavia-provider] when a HM is created/deleted the listener remains in PENDING_UPDATE". Patch: https://review.opendev.org/c/openstack/ovn-octavia-provider/+/867974. ** https://bugs.launchpad.net/neutron/+bug/2000078: "neutron-remove-duplicated-port-bindings doesn't remove binding_levels". Assigned/ * Low ** https://bugs.launchpad.net/neutron/+bug/1999390: "TypeError raised in neutron-dhcp-agent due to missing argument in clean_devices method". *Not assigned.* ** https://bugs.launchpad.net/neutron/+bug/1999391: "Attribute error in neutron-ovs-agent".* Not assigned.* * Incomplete: ** https://bugs.launchpad.net/neutron/+bug/1999400: "neutron-metadata-agent does not sometimes provide instance-id". The issue seems to be an error during the cloud-init route assignment. Must be confirmed first. ** https://bugs.launchpad.net/neutron/+bug/1999540: "Tempest test test_two_vms_fips failed due to port binding on instance failing". Missing logs (not accessible). ** https://bugs.launchpad.net/neutron/+bug/1999677: "Defunct nodes are reported as happy in network agent list". This issue is already solved in newer versions (latest Ussuri release 16.4.2). Also newer releases allow the removal of the OVN agents using the CLI. Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From manchandavishal143 at gmail.com Tue Dec 20 17:44:58 2022 From: manchandavishal143 at gmail.com (vishal manchanda) Date: Tue, 20 Dec 2022 23:14:58 +0530 Subject: [horizon] Cancelling next three weekly meetings Message-ID: Hello Team, As agreed, during the last weekly meeting, we are canceling our weekly meeting on 21st, 28th December and 4th January. The next weekly meeting will be on 11th January. Happy holidays and a happy new year in advance! Thanks & Regards, Vishal Manchanda -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed Dec 21 01:27:31 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 20 Dec 2022 17:27:31 -0800 Subject: [all][tc] Technical Committee next weekly meeting on 2022 Dec 21 at 1600 UTC In-Reply-To: <1852c02170d.dfd584af81780.2139426519315863849@ghanshyammann.com> References: <1852c02170d.dfd584af81780.2139426519315863849@ghanshyammann.com> Message-ID: <185324af768.e0ef577b168174.5733813745039652828@ghanshyammann.com> Hello Everyone, Below is the agenda for the TC meeting scheduled on Dec 21 at 1600 UTC. Location: IRC OFTC network in the #openstack-tc channel * Roll call * Follow up on past action items * Gate health check * 2023.1 TC tracker checks: ** https://etherpad.opendev.org/p/tc-2023.1-tracker * Mistral situation ** Release team proposing it to mark its release deprecated *** https://review.opendev.org/c/openstack/governance/+/866562 ** New volunteers from OVHCloud are added in Mistral core team. *** https://lists.openstack.org/pipermail/openstack-discuss/2022-December/031421.html * Recurring tasks check ** Bare 'recheck' state *** https://etherpad.opendev.org/p/recheck-weekly-summary * Open Reviews ** https://review.opendev.org/q/projects:openstack/governance+is:open -gmann ---- On Mon, 19 Dec 2022 12:10:12 -0800 Ghanshyam Mann wrote --- > Hello Everyone, > > The technical Committee's next weekly meeting is scheduled for 2022 Dec 21, at 1600 UTC. > > If you would like to add topics for discussion, please add them to the below wiki page by > Tuesday, Dec 20 at 2100 UTC. > > https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting > > -gmann > > From amonster369 at gmail.com Wed Dec 21 09:59:30 2022 From: amonster369 at gmail.com (A Monster) Date: Wed, 21 Dec 2022 10:59:30 +0100 Subject: Instance console works only if opened from one of the deployment nodes Message-ID: I've deployed openstack yoga using kolla ansible, in three controller nodes, a storage cluster and multiple compute nodes. The deployment included using haproxy, keepalived and hacluster. everything seems to work fine, I can launch instances and networking is doing great, however I found out a problem that I didn't get in all the deployment tests that i've run before. I've combien openstack networks under the same network 10.10.10.0/24 and used the same ip address for both : *kolla_internal_vip_address* and *kolla_external_vip_interface *which is 10.10.10.254, and used an ip route to access to the openstack cluster from an external pc > ip route add 10.10.10.0/24 via 192.168.129.29 dev eth0 where *192.168.129.29* is the address of one of my controllers, however when I try to login to the horizon dashboard using *10.10.10.254 *i don't get a response from the web page, and when I log to the dashboard using the ip address of one of my controller nodes, I cannot use the instance console since it is under the ip address of 10.10.10.254 which is assigned to both *kolla_internal_vip_address* and *kolla_external_vip_interface.* although i can ping 10.10.10.254 from the external network using ip route. -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.aminian.server at gmail.com Wed Dec 21 10:38:23 2022 From: p.aminian.server at gmail.com (Parsa Aminian) Date: Wed, 21 Dec 2022 14:08:23 +0330 Subject: horizon policy Message-ID: Hello is it possible to prevent access to delete instance on horizon for some users ? I want to access only admin user to delete instances from dashboard -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnasiadka at gmail.com Wed Dec 21 11:13:12 2022 From: mnasiadka at gmail.com (=?utf-8?Q?Micha=C5=82_Nasiadka?=) Date: Wed, 21 Dec 2022 12:13:12 +0100 Subject: [kolla] Cancelling meetings on 21 and 28 Dec Message-ID: <37DC6520-5A66-4CA4-A2BA-B50AE23D86F6@gmail.com> Hello Koalas, Cancelling this and next weekly meeting due to Christmas/New Year period. See you next year! Best regards, Michal From oliver.weinmann at me.com Wed Dec 21 11:50:58 2022 From: oliver.weinmann at me.com (Oliver Weinmann) Date: Wed, 21 Dec 2022 11:50:58 -0000 Subject: =?utf-8?B?UmU6IE1hZ251bSBwcml2YXRlIGRvY2tlciByZWdpc3RyeSAoaW5zZWN1cmVf?= =?utf-8?B?cmVnaXN0cnkpIG5vdCB3b3JraW5nPw==?= In-Reply-To: <1a2eaf45-0023-4131-9fb7-dbcdc584b018@me.com> References: <1a2eaf45-0023-4131-9fb7-dbcdc584b018@me.com> Message-ID: Hi all,Problem solved. I was not using the latest fedora Core is 35 image. It is kind of hard to find it since the last version on the page is 36 and there is no download archive. I was able to find a Reddit post (https://www.reddit.com/r/Fedora/comments/mmtv5c/is_there_an_archive_for_previous_fcos_releases/) on how to download older versions. Using the latest fedora core os 35 version, it works just fine. Still I have not found a way to set the insecure-registry via cmdline. I saw the option when using terraform.Cheers,OliverVon meinem iPhone gesendetAm 16.12.2022 um 13:08 schrieb Oliver Weinmann :?Hi,I can't seem to get magnum (k8s) to accept my private docker registry. I wanted to have a central registry so not all hosts pull the images during deployment.For this I configured a registry:v2 docker container, pulled the images and pushed them to the local registry and added the following label to my k8s template:container_infra_prefix=172.28.7.140:4000/At first this seems to be working fine and when deploying a new k8s cluster using magnum I can see that it pulls the heat-container-agent image from my local registry:[core at k8s-admin-test-local-reg-6c4hx7gxbdhr-master-0 ~]$ sudo podman ps -aCONTAINER ID? IMAGE??????????????????????????????????????????????????? COMMAND?????????????? CREATED?????? STATUS?????????? PORTS?????? NAMES2d08559b9cdc? 172.28.7.140:4000/heat-container-agent:wallaby-stable-1? /usr/bin/start-he...? 1 second ago? Up 1 second ago????????????? heat-container-agentBut then it fails to pull the next container:tail -f /var/log/heat-config/heat-config-script/64d35aad-5453-4da4-97c7-45abb640fc90-k8s-admin-test-local-reg-6c4hx7gxbdhr-kube_masters-h3wbcqgm6qv4-0-sfagopiu52se-master_config-2f5lhvr32z7j.logWARNING Attempt 8: Trying to install kubectl. Sleeping 5s+ ssh -F /srv/magnum/.ssh/config root at localhost '/usr/bin/podman run???? --entrypoint /bin/bash???? --name install-kubectl???? --net host???? --privileged???? --rm???? --user root???? --volume /srv/magnum/bin:/host/srv/magnum/bin???? 172.28.7.140:4000/hyperkube:v1.23.3-rancher1???? -c '\''cp /usr/local/bin/kubectl /host/srv/magnum/bin/kubectl'\'''Trying to pull 172.28.7.140:4000/hyperkube:v1.23.3-rancher1...Error: initializing source docker://172.28.7.140:4000/hyperkube:v1.23.3-rancher1: pinging container registry 172.28.7.140:4000: Get "https://172.28.7.140:4000/v2/": http: server gave HTTP response to HTTPS clientI don't know why but there is no /etc/docker/daemon.json and the /etc/sysconfig/docker also doesn'T contain the line for my insecure registry:root at k8s-admin-test-local-reg-6c4hx7gxbdhr-master-0 ~]# cat /etc/sysconfig/docker# /etc/sysconfig/docker# Modify these options if you want to change the way the docker daemon runsOPTIONS="--selinux-enabled \? --log-driver=journald \? --live-restore \? --default-ulimit nofile=1024:1024 \? --init-path /usr/libexec/docker/docker-init \? --userland-proxy-path /usr/libexec/docker/docker-proxy \"As soon as I manually add my insecure registry here it works just fine. I looked at the magnum code and there is indeed some lines that should actually handle this, but it doesn't seem to be working. What is also weird is that while there is the Option in the Horizon WebUI to set an insecure registry, the openstack coe command doesn't offer this.Best Regards,Oliver -------------- next part -------------- An HTML attachment was scrubbed... URL: From senrique at redhat.com Wed Dec 21 12:34:44 2022 From: senrique at redhat.com (Sofia Enriquez) Date: Wed, 21 Dec 2022 12:34:44 +0000 Subject: [cinder] Bug Report from 12-21-2022 Message-ID: This is a bug report from 12-14-2022 to 12-21-2022. Agenda: https://etherpad.opendev.org/p/cinder-bug-squad-meeting ----------------------------------------------------------------------------------------- High - https://bugs.launchpad.net/cinder/+bug/1999706 "Cinder fails to automatically map availability zones to volume types." Unassigned. Medium - https://bugs.launchpad.net/cinder/+bug/1999766 "Retype VM root Volume Cause VM failed to boot." Unassigned. Cheers, Sofia -- Sof?a Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From ddorra at t-online.de Wed Dec 21 13:50:29 2022 From: ddorra at t-online.de (ddorra at t-online.de) Date: Wed, 21 Dec 2022 14:50:29 +0100 (CET) Subject: [Openstack Heat] - Problems to find right Heat syntax to declare static route in a router template Message-ID: <1671630629712.888468.ca044d5fd5a3176fa8cea886d979e4a0090ff433@spica.telekom.de> Hi Lajos, many thanks for your hints! Dieter > Hi, > > For adding extra/static routes for your routers you have 2 ways in Heat > (actually Neutron API has it, and you can use them in hot also): > > OS::Neutron::ExtraRoute https://docs.openstack.org/api-ref/network/v2/index.html#update-router > > OS::Neutron::ExtraRouteSethttps://docs.openstack.org/api-ref/network/v2/index.html#add-extra-routes-to-router > > The 1st one is the "old" set router attribute, and the 2nd is add_extraroutes & remove_extraroutes is optimized for concurrent > updates (see the API ref's > description for add_extraroutes). > So in hot template it will look something like this: > type: OS::Neutron::ExtraRouteSet > properties: > router: { get_resource: myrouter0 } > routes: > - destination: 179.24.2.0/24 > nexthop: 192.168.222.221 > > Lajos > > > ddorra at t-online.de ezt ?rta (id?pont: 2022. dec. 19., H, 21:43): > > Hi, > > > > I want extend my router HOT template by adding a static route. > > > > In the Openstack cli this would be > > > > openstack router add route --route destination='1.2.3.4/17',gateway='10.0.0.66' myrouter > > > > From the OS::Neutron::Router resource description I guess this belongs into the value_specs > > section as a map, but all my attempts failed, e.g. > > > > router1: > > type: 'OS::Neutron::Router' > > properties: > > external_gateway_info: > > network: provider > > name: myrouter > > value_specs: > > route: > > destination: '0.0.0.0/0' > > gateway: '10.0.0.62' > > > > Can somebody help me with the correct syntax? > > > > > > Thanks & best regards, > > Dieter ? From amonster369 at gmail.com Wed Dec 21 14:46:43 2022 From: amonster369 at gmail.com (A Monster) Date: Wed, 21 Dec 2022 15:46:43 +0100 Subject: Kolla ansible, add a service to a running openstack deployment Message-ID: is there a way to add a service such as elasticsearch to an already done openstack deployment using kolla-ansible? -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdhasman at redhat.com Wed Dec 21 15:03:36 2022 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Wed, 21 Dec 2022 20:33:36 +0530 Subject: [cinder] Cancelling meeting on 28th December Message-ID: Hello Argonauts, As discussed in today's meeting[1], we will be cancelling the upstream cinder meeting on 28st December, 2022 and will meet in the meeting on 4th January, 2023. Wishing everyone Merry Christmas and a happy new year! [1] https://meetings.opendev.org/meetings/cinder/2022/cinder.2022-12-21-14.00.log.html#l-85 Thanks Rajat Dhasmana -------------- next part -------------- An HTML attachment was scrubbed... URL: From vince.mulhollon at springcitysolutions.com Wed Dec 21 15:36:25 2022 From: vince.mulhollon at springcitysolutions.com (Vince Mulhollon) Date: Wed, 21 Dec 2022 09:36:25 -0600 Subject: Kolla ansible, add a service to a running openstack deployment In-Reply-To: References: Message-ID: Sure, for example on a Ubuntu you'd edit /etc/kolla/globals.yml or more likely put in a file /etc/kolla/globals.d/something.yml enable_elasticsearch: "yes" Then run a deploy like you did initially. Probably off a guide like https://docs.openstack.org/kolla-ansible/latest/user/quickstart.html You're probably not going to like the results WRT Monasca or whatever you're doing pulling in a VERY old version of ES, but this strategy in general works. As always, try on the test or dev cluster Best of luck to you! On Wed, Dec 21, 2022 at 9:05 AM A Monster wrote: > is there a way to add a service such as elasticsearch to an already done > openstack deployment using kolla-ansible? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gthiemonge at redhat.com Thu Dec 22 13:37:24 2022 From: gthiemonge at redhat.com (Gregory Thiemonge) Date: Thu, 22 Dec 2022 14:37:24 +0100 Subject: [Octavia] cancelling meeting Dec 28 Message-ID: Hi, Next week's meeting (Dec 28th) is cancelled, The next weekly meeting will be on Jan 4th! Gregory -------------- next part -------------- An HTML attachment was scrubbed... URL: From pdeore at redhat.com Thu Dec 22 14:49:54 2022 From: pdeore at redhat.com (Pranali Deore) Date: Thu, 22 Dec 2022 20:19:54 +0530 Subject: [Glance] Cancelling Weekly meeting for Next 2 weeks Message-ID: Hello, Upstream glance weekly meeting for next 2 weeks is cancelled as most of the team members will be on New Year Holidays. The Next weekly meeting will be on Jan 12th ! Wishing everyone a Wonderful Holiday Season & a Joyful New Year. Thanks, Pranali -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Thu Dec 22 18:26:00 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 22 Dec 2022 10:26:00 -0800 Subject: [all][tc] Canceling next week TC meetings Message-ID: <1853b15c511.fa84e5e350267.6138627370379415648@ghanshyammann.com> Hello Everyone, Due to the holidays, we are cancelling next week's (Dec 28) TC meeting. -gmann From satish.txt at gmail.com Thu Dec 22 21:50:51 2022 From: satish.txt at gmail.com (Satish Patel) Date: Thu, 22 Dec 2022 16:50:51 -0500 Subject: [kolla-ansible] telemetry data Message-ID: Folks, I am deployed kolla-ansible and trying to set up telemetry for billing etc. But not able to get it working may be broken or something. In global.yml I turned on the following options and deployed kolla and it did install all the components but I am not seeing it deploy a ceilometer agent on compute nodes. How does it collect data from compute nodes if the ceilometer agent is not running on compute nodes? Did I miss something here? enable_ceilometer: "yes" enable_gnocchi: "yes" enable_cloudkitty: "yes" -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralonsoh at redhat.com Fri Dec 23 09:34:48 2022 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Fri, 23 Dec 2022 10:34:48 +0100 Subject: [neutron] Neutron drivers meeting cancelled Message-ID: Hello Neutrinos: Due to the lack of agenda, today's drivers meeting is cancelled. Please spend some time reviewing the active specs from the RFEs approved last week: https://review.opendev.org/q/project:openstack/neutron-specs+status:open+-age:1week . See you next year! -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.therond at bitswalk.com Fri Dec 23 11:28:01 2022 From: gael.therond at bitswalk.com (=?UTF-8?Q?Ga=C3=ABl_THEROND?=) Date: Fri, 23 Dec 2022 12:28:01 +0100 Subject: [ansible][ACO] - Contribution questions Message-ID: Hi ansible collections openstack team! I finally had time to list all my issues met with the project, created few bug reports and even contributed to a patch today (minor, mainly copy/paste) however I?ve few questions regarding the CI process! Overall, what?s the rule with the CI code testing? I?ve read the contributing guide and had an eye on previous patches to see how it?s used but I?m having a hard time to find a real unified method. For instance, it seems that some module miss CI tasks (such as compute_service_info) or did I missed something? Thanks a lot for all the good job! -------------- next part -------------- An HTML attachment was scrubbed... URL: From ces.eduardo98 at gmail.com Fri Dec 23 14:27:43 2022 From: ces.eduardo98 at gmail.com (Carlos Silva) Date: Fri, 23 Dec 2022 11:27:43 -0300 Subject: [manila] Cancelling Dec 29th weekly meeting Message-ID: Hello, Zorillas! As discussed in yesterday's weekly meeting, next week's IRC meeting (Dec 29th) is cancelled due to most of us being offline. The next weekly meeting is on January 5th. Happy holidays to those celebrating! Cheers, carloss -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Sat Dec 24 01:16:16 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 23 Dec 2022 17:16:16 -0800 Subject: [all][tc] What's happening in Technical Committee: summary 2022 Dec 23: Reading: 5 min Message-ID: <18541b3bccd.e230e8c9114001.4862433585680877870@ghanshyammann.com> Hello Everyone, Here is this week's summary of the Technical Committee activities. 1. TC Meetings: ============ * We had this week's meeting on Dec 21. Most of the meeting discussions are summarized in this email. Meeting logs are available @ https://meetings.opendev.org/meetings/tc/2022/tc.2022-12-21-16.00.log.html * The TC meeting on Dec 28 is cancelled [1]. The next TC weekly meeting will be on Jan 4 Wed at 16:00 UTC, Feel free to add the topic to the agenda[2] by Jan 3. 2. What we completed this week: ========================= * Made inactive projects timeline as release milestone-2 of the cycle[3] 3. Activities In progress: ================== TC Tracker for the 2023.1 cycle ------------------------------------- * Current cycle working items and their progress are present in the 2023.1 tracker etherpad[4]. Open Reviews ----------------- * Three open reviews for ongoing activities[5]. IMPORTANT: Tox 4 failure and time to fix them -------------------------------------------------------- tox is uncapped in ensure-tox [6]. With that tox4 is used in unit/functional tests job (or any job using ensure-role. If you are seeing failure in your project's jobs, please fix those on priority and if it needs time then you can cap tox it in your project zuul.yaml or for the particular job via 'ensure_tox_version' var. Mistral release and more maintainers ------------------------------------------- avanzagh who is the new maintainer of Mistral is in discussion with the release team for the required work for the Mistral release. We will keep monitoring the situation until the proposal for Mistral release deprecation is closed[7]. Project updates ------------------- * Add Cinder Huawei charm[8] 4. How to contact the TC: ==================== If you would like to discuss or give feedback to TC, you can reach out to us in multiple ways: 1. Email: you can send the email with tag [tc] on openstack-discuss ML[9]. 2. Weekly meeting: The Technical Committee conduct a weekly meeting every Thursday 15:00 UTC [10] 3. Ping us using 'tc-members' nickname on #openstack-tc IRC channel. [1] https://lists.openstack.org/pipermail/openstack-discuss/2022-December/031568.html [2] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Agenda_Suggestions [3] https://review.opendev.org/c/openstack/governance/+/867062 [4] https://etherpad.opendev.org/p/tc-2023.1-tracker [5] https://review.opendev.org/q/projects:openstack/governance+status:open [6] https://review.opendev.org/c/zuul/zuul-jobs/+/866943 [7] https://review.opendev.org/c/openstack/governance/+/866562 [8] https://review.opendev.org/c/openstack/governance/+/867588 [9] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [10] http://eavesdrop.openstack.org/#Technical_Committee_Meeting -gmann From hongbin034 at gmail.com Sat Dec 24 08:55:39 2022 From: hongbin034 at gmail.com (Hongbin Lu) Date: Sat, 24 Dec 2022 16:55:39 +0800 Subject: the zun server (zun-api) in controller host error(s) In-Reply-To: References: Message-ID: Hi, I submitted a fix for it: https://review.opendev.org/c/openstack/zun/+/868513 . Thanks for reporting. Best regards, Hongbin On Mon, Dec 12, 2022 at 11:11 PM ????? <2292613444 at qq.com> wrote: > iAfter I followed the official documentation and fixed a 'pip3' error, it > still failed to run successfully. This is an error reported by the zun api. > Other components work normally. If you need my profile, it is attached > > root at controller:~# systemctl status zun-api > ? zun-api.service - OpenStack Container Service API > Loaded: loaded (/etc/systemd/system/zun-api.service; enabled; vendor > preset: enabled) > Active: failed (Result: exit-code) since Sun 2022-12-11 11:43:04 CST; > 1min 38s ago > Process: 1017 ExecStart=/usr/local/bin/zun-api (code=exited, > status=1/FAILURE) > Main PID: 1017 (code=exited, status=1/FAILURE) > > Dec 11 11:43:04 controller zun-api[1017]: from zun import objects > Dec 11 11:43:04 controller zun-api[1017]: File > "/usr/local/lib/python3.8/dist-packages/zun/objects/__init__.py", line 13, > in > Dec 11 11:43:04 controller zun-api[1017]: from zun.objects import > compute_node > Dec 11 11:43:04 controller zun-api[1017]: File > "/usr/local/lib/python3.8/dist-packages/zun/objects/compute_node.py", line > 16, in > Dec 11 11:43:04 controller zun-api[1017]: from zun.db import api as > dbapi > Dec 11 11:43:04 controller zun-api[1017]: File > "/usr/local/lib/python3.8/dist-packages/zun/db/api.py", line 32, in > Dec 11 11:43:04 controller zun-api[1017]: @profiler.trace("db") > Dec 11 11:43:04 controller zun-api[1017]: AttributeError: partially > initialized module 'zun.common.profiler' has no attribute 'trace' (most > likely due to a circular import) > Dec 11 11:43:04 controller systemd[1]: zun-api.service: Main process > exited, code=exited, status=1/FAILURE > Dec 11 11:43:04 controller systemd[1]: zun-api.service: Failed with result > 'exit-code'. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vincentlee676 at gmail.com Fri Dec 23 03:32:55 2022 From: vincentlee676 at gmail.com (vincent lee) Date: Thu, 22 Dec 2022 21:32:55 -0600 Subject: Container access mounted device Message-ID: Hi, I have deployed a working OpenStack and would like to know if there are any possible options I can enable so that when creating a new container, files can be mounted to it, and a user from the container can access it. For instance, having a mounted device from the compute host and allowing the /dev/(mounted device name) file to be accessible in the container. Best regards, Vincent -------------- next part -------------- An HTML attachment was scrubbed... URL: From 2292613444 at qq.com Sat Dec 24 11:55:08 2022 From: 2292613444 at qq.com (=?gb18030?B?zt7K/bXE0MfH8g==?=) Date: Sat, 24 Dec 2022 19:55:08 +0800 Subject: the zun service run is OK , but at dashboard web no view zun serivce! Message-ID: After I successfully installed the zun service, I logged in to the dashboard, but I could not see the zun entry I've rebooted all the machines?but invalid. thanks you ! -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: C5DDA95B at 861AF22A.9CE8A663.jpg Type: image/jpeg Size: 31856 bytes Desc: not available URL: From pierre at stackhpc.com Sat Dec 24 14:44:54 2022 From: pierre at stackhpc.com (Pierre Riteau) Date: Sat, 24 Dec 2022 15:44:54 +0100 Subject: [kolla-ansible] telemetry data In-Reply-To: References: Message-ID: With these settings enabled, you should see a ceilometer_compute container on each hypervisor. Try deploying/reconfiguring without any limit? On Thu, 22 Dec 2022 at 23:21, Satish Patel wrote: > Folks, > > I am deployed kolla-ansible and trying to set up telemetry for billing > etc. But not able to get it working may be broken or something. > > In global.yml I turned on the following options and deployed kolla and it > did install all the components but I am not seeing it deploy a ceilometer > agent on compute nodes. How does it collect data from compute nodes if the > ceilometer agent is not running on compute nodes? Did I miss something > here? > > enable_ceilometer: "yes" > enable_gnocchi: "yes" > enable_cloudkitty: "yes" > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ddanelski at cloudferro.com Fri Dec 23 10:11:47 2022 From: ddanelski at cloudferro.com (Dominik Danelski) Date: Fri, 23 Dec 2022 10:11:47 +0000 Subject: [nova] Isolating scheduler Message-ID: Hello, I'd like to create and test some new filters and weights for the scheduler. Ideally, I would mimic an existing OpenStack deployment I have access to. I already installed placement component, to which the scheduler would connect and now I'd like to isolate it. I'm not sure how to approach it, as scheduler is part of Nova and not a stand-alone component as placement. My intention is to initially fill placement's database with information about the infrastructure, set up the scheduler and then query it for consecutive requests (linear, no need for locks if that changes anything) to log the decisions it would make with modified weights/filters and the state of the system in-between requests. What is, in your opinion, the best way to detach the scheduler as much as possible from the rest of Nova, in order to test it under configuration most resembling the real-world setup? Should I use the test framework and mimic how it is set up for the unit tests or is there a better way to go? Thank you for your time. Regards Dominik Danelski From miguel at mlavalle.com Fri Dec 23 13:39:03 2022 From: miguel at mlavalle.com (Miguel Lavalle) Date: Fri, 23 Dec 2022 07:39:03 -0600 Subject: [neutron] bug deputy report December 19th -23th Message-ID: Hi, Here's this week's bugs deputy report: Under investigation at the moment of this report ==================================== https://bugs.launchpad.net/neutron/+bug/2000378 [OVN] orphaned virtual parent ports break new ports. Under investigation by ralonsoh Medium ====== https://bugs.launchpad.net/neutron/+bug/2000163 [FT] Error in "test_get_datapath_id". Proposed fix: https://review.opendev.org/c/openstack/neutron/+/868311 https://bugs.launchpad.net/neutron/+bug/2000164 [FT] Error in "test_dvr_update_gateway_port_with_no_gw_port_in_namespace". Needs owner https://bugs.launchpad.net/neutron/+bug/2000252 [OVN] "DBInconsistenciesPeriodics.check_for_inconsistencies" failing during port deletion. Needs owner https://bugs.launchpad.net/neutron/+bug/2000314 [OVS] Call to "driver.unregister" failing. Needs owner Low === https://bugs.launchpad.net/neutron/+bug/2000238 [FT] Remove the "Running command:" messages in the logs. Proposed fix: https://review.opendev.org/c/openstack/neutron/+/868304 -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at gmail.com Sat Dec 24 15:18:12 2022 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Sat, 24 Dec 2022 16:18:12 +0100 Subject: Container access mounted device In-Reply-To: References: Message-ID: Hi, I think you should mention what deployment tooling you used to setup your openstack deployment to get more relevant answer ??, 24 ???. 2022 ?., 16:04 vincent lee : > Hi, I have deployed a working OpenStack and would like to know if there > are any possible options I can enable so that when creating a new > container, files can be mounted to it, and a user from the container can > access it. For instance, having a mounted device from the compute host and > allowing the /dev/(mounted device name) file to be accessible in the > container. > > Best regards, > Vincent > -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at gmail.com Sat Dec 24 15:41:46 2022 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Sat, 24 Dec 2022 16:41:46 +0100 Subject: the zun service run is OK , but at dashboard web no view zun serivce! In-Reply-To: References: Message-ID: Hi, Please, ensure that you have not forgot to install zun dashboard to the horizon and enable it accordingly, as it's provided by a separate package that should be installed to the same env where horizon runs. Please refer zun-ui docs for more details: https://docs.openstack.org/zun-ui/latest/ ??, 24 ???. 2022 ?., 16:30 ????? <2292613444 at qq.com>: > After I successfully installed the zun service, I logged in to the > dashboard, but I could not see the zun entry > > I've rebooted all the machines?but invalid. > > thanks you ! > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: C5DDA95B at 861AF22A.9CE8A663.jpg Type: image/jpeg Size: 31856 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: C5DDA95B at 861AF22A.9CE8A663.jpg Type: image/jpeg Size: 31856 bytes Desc: not available URL: From gmann at ghanshyammann.com Mon Dec 26 04:37:51 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sun, 25 Dec 2022 20:37:51 -0800 Subject: The CFP for the OpenInfra Summit 2023 is open! In-Reply-To: <4696B08E-D32F-46C5-9BCB-5C31A6FF637F@openstack.org> References: <4696B08E-D32F-46C5-9BCB-5C31A6FF637F@openstack.org> Message-ID: <1854cb90181.10c97acb4146942.9021588617600504980@ghanshyammann.com> ---- On Tue, 15 Nov 2022 07:00:57 -0800 Erin Disney wrote --- > Hi Everyone! > The CFP for the 2023 OpenInfra Summit (June 13-15, 2023) is NOW LIVE [1]! Check out the full list of tracks and submit a talk on your topic of expertise [2]. > The CFP closes January 10, 2023, at 11:59 p.m. PT > We are also now accepting submissions for Forum sessions [3]! Just to double check, the Forum session proposal deadline is April 21 as shown in the submission form? -gmann What should you submit to the forum vs. the traditional CFP? For the Forum, submit discussion-oriented sessions, including challenges around different software components, working group progress or best practices to tackle common issues.? > Looking for other resources? Registration [4], sponsorships [5], travel support [6] and visa requests [7] are also all open! > Find all the information on the OpenInfra Summit 2023 in one place [8]! > Cheers,Erin > [1] https://cfp.openinfra.dev/app/vancouver-2023/19/presentations[2] https://openinfra.dev/summit/vancouver-2023/summit-tracks/?[3] https://cfp.openinfra.dev/app/vancouver-2023/20/?[4] http://openinfra.dev/summit/registration?[5] http://openinfra.dev/summit/vancouver-2023/summit-sponsor/?[6] https://openinfrafoundation.formstack.com/forms/openinfra_tsp?[7] https://openinfrafoundation.formstack.com/forms/visa_yvrsummit2023?[8] https://openinfra.dev/summit/vancouver-2023 > From satish.txt at gmail.com Mon Dec 26 04:45:54 2022 From: satish.txt at gmail.com (Satish Patel) Date: Sun, 25 Dec 2022 23:45:54 -0500 Subject: [kolla-ansible] telemetry data In-Reply-To: References: Message-ID: HI Pierre, You are correct it was my mistake to run playbook without -i multinode so it just ran on the controller node. Now I can see the ceilometer has been deployed on the compute nodes. But i can't see data in cloudkitty so I'm trying to debug. My understanding is that a ceilometer pushes the data to gnocchi and cloudkitty consumes it using a rating engine. I am able to see data in gnocchi but cloudkitty showing all zero cost in GUI. I did configure rates based on flavor_id etc from openstack example. Not seeing any interesting error or log data which indicate problem anywhere. # openstack metric measures show e7913002-5f7b-4229-8cd8-61577a4fcf73 +---------------------------+-------------+--------------+ | timestamp | granularity | value | +---------------------------+-------------+--------------+ | 2022-12-23T03:55:00+01:00 | 300.0 | 891.00390625 | | 2022-12-23T04:00:00+01:00 | 300.0 | 890.97265625 | | 2022-12-23T04:05:00+01:00 | 300.0 | 891.00390625 | | 2022-12-23T04:10:00+01:00 | 300.0 | 890.7890625 | | 2022-12-23T04:15:00+01:00 | 300.0 | 891.00390625 | On Sat, Dec 24, 2022 at 9:45 AM Pierre Riteau wrote: > With these settings enabled, you should see a ceilometer_compute container > on each hypervisor. Try deploying/reconfiguring without any limit? > > On Thu, 22 Dec 2022 at 23:21, Satish Patel wrote: > >> Folks, >> >> I am deployed kolla-ansible and trying to set up telemetry for billing >> etc. But not able to get it working may be broken or something. >> >> In global.yml I turned on the following options and deployed kolla and it >> did install all the components but I am not seeing it deploy a ceilometer >> agent on compute nodes. How does it collect data from compute nodes if the >> ceilometer agent is not running on compute nodes? Did I miss something >> here? >> >> enable_ceilometer: "yes" >> enable_gnocchi: "yes" >> enable_cloudkitty: "yes" >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From kaurmanpreet2620 at gmail.com Mon Dec 26 05:07:04 2022 From: kaurmanpreet2620 at gmail.com (manpreet kaur) Date: Mon, 26 Dec 2022 10:37:04 +0530 Subject: [Tacker][SRBAC] Update regarding implementation of project personas in Tacker Message-ID: Hi Ogawa san and Tacker team, This mailer is regarding the SRBAC implementation happening in Tacker. In the Tacker release 2023.1 virtual PTG [1], it was decided by the Tacker community to partially implement the project personas (project-reader role) in the current release. And in upcoming releases, we will implement the remaining project-member role. To address the above requirement, I have prepared a specification [2] and pushed the same in Gerrit for community review. Ghanshyam san reviewed the specification and shared TC's opinion and suggestion to implement both the project-reader and project-member roles. The complete persona implementation will depreciate the 'owner' rule, and help in restricting any other role to accessing project-based resources. Additionally, intact legacy admin (current admin), works in the same way so that we do not break things and introduce the project personas which should be additional things to be available for operators to adopt. Current Status: Incorporated the new requirement and uploaded a new patch set to address the review comment. Note: The Tacker spec freeze date is 28th Dec 2022, there might be some delay in merging the specification in shared timelines. [1] https://etherpad.opendev.org/p/tacker-antelope-ptg#L186 [2] https://review.opendev.org/c/openstack/tacker-specs/+/866956 Thanks & Regards, Manpreet Kaur -------------- next part -------------- An HTML attachment was scrubbed... URL: From yasufum.o at gmail.com Mon Dec 26 07:32:26 2022 From: yasufum.o at gmail.com (Yasufumi Ogawa) Date: Mon, 26 Dec 2022 16:32:26 +0900 Subject: [tacker] Cancelling next irc meetings Message-ID: Hi team, Due to the holidays, we are cancelling next two meetings on 27th Dec and 3rd Jun. Cheers, Yasufumi From hanguangyu2 at gmail.com Mon Dec 26 10:04:00 2022 From: hanguangyu2 at gmail.com (=?UTF-8?B?6Z+p5YWJ5a6H?=) Date: Mon, 26 Dec 2022 10:04:00 +0000 Subject: [nova] Do openstack support USB passthrough Message-ID: Hi, all I want to ask if openstack support USB passthrough now? Or if I want the instance to recognize the USB flash drive on the host, do you have any suggestions? Thanks, Han From rdhasman at redhat.com Mon Dec 26 10:49:33 2022 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Mon, 26 Dec 2022 16:19:33 +0530 Subject: [openstacksdk][cinder] openstacksdk-functional-devstack job is failing Message-ID: Hi, Currently the *openstacksdk-functional-devstack* job is failing on the cinder gate[1] due to which gate is blocked. This seems to be due to the new tox 4 enforcement and could be an issue for other projects as well. We've a patch in OpenstackSDK that fixes this issue[2] but that is also failing on the* osc-tox-py38-tips* job. The patch has no update since 21st December which is concerning since it's blocking other projects (at least cinder). I've proposed a patch[3] to make the *openstacksdk-functional-devstack *job non-voting until the situation is resolved on the OpenStackSDK side. [1] https://zuul.opendev.org/t/openstack/build/29c0eb15abd144a8a4f167e70fe61530 [2] https://review.opendev.org/c/openstack/openstacksdk/+/867827 [3] https://review.opendev.org/c/openstack/cinder/+/868596 Thanks Rajat Dhasmana -------------- next part -------------- An HTML attachment was scrubbed... URL: From nguyenhuukhoinw at gmail.com Mon Dec 26 14:53:10 2022 From: nguyenhuukhoinw at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gSOG7r3UgS2jDtGk=?=) Date: Mon, 26 Dec 2022 21:53:10 +0700 Subject: [openstack][cinder] Cinder volume id wont change when retype volume Message-ID: Hello guys, I have a problem with cinder: I use cinder volume retype to move my volume to another backend. It worked but when I check on new backend, volume id doest match with id which on Horizon dash board.It expects to have same volume id on Horizon and new backend. I use DellSC and DellPowerstore. Thank you. Nguyen Huu Khoi -------------- next part -------------- An HTML attachment was scrubbed... URL: From jerome.becot at deveryware.com Mon Dec 26 13:15:15 2022 From: jerome.becot at deveryware.com (=?UTF-8?B?SsOpcsO0bWUgQkVDT1Q=?=) Date: Mon, 26 Dec 2022 14:15:15 +0100 Subject: [nova] Help overriding Nova policies (Ussuri) In-Reply-To: <05d3231f-ec52-bd94-9d59-41e9cac7c880@deveryware.com> References: <05d3231f-ec52-bd94-9d59-41e9cac7c880@deveryware.com> Message-ID: Hello, Someone please ? Le 15/12/2022 ? 10:52, J?r?me BECOT a ?crit?: > Hello, > > I should miss something, and I need some help. I've updated some rules > that I placed in the /etc/nova/policy.yaml file. I use all defaults > for the olso policies (ie scope is not enabled). > > When I generate the fulle policy with oslopolicy-policy-generator, the > rules are applied in the generated output. If I test the policy with > oslopolicy-checker under the user's token, the results matches the > permissions, but Nova API continue to refuse the operation > > Policy check for os_compute_api:os-flavor-manage:create failed with > credentials > > I'm using the openstack cli to make the request (openstack flavor create) > > Thanks for your help > > J > > -- *J?r?me BECOT* Ing?nieur DevOps Infrastructure T?l?phone fixe: 01 82 28 37 06 Mobile : +33 757 173 193 Deveryware - 43 rue Taitbout - 75009 PARIS https://www.deveryware.com Deveryware_Logo -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: baniere_signature_dw_2022.png Type: image/png Size: 471995 bytes Desc: not available URL: From jerome.becot at deveryware.com Mon Dec 26 13:16:03 2022 From: jerome.becot at deveryware.com (=?UTF-8?B?SsOpcsO0bWUgQkVDT1Q=?=) Date: Mon, 26 Dec 2022 14:16:03 +0100 Subject: [cinder] Issue resizing volumes attached to running volume backed instances In-Reply-To: References: Message-ID: <348289e3-9a62-5d17-3d92-d73dc85c4a6f@deveryware.com> Anyone please ? Le 08/12/2022 ? 23:13, J?r?me BECOT a ?crit?: > > Hello Openstack, > > We have Ussuri deployed on a few clouds, and they're all plugged to > PureStorage Arrays. We allow users to only use volumes for their > servers. It means that each server disk is a LUN attached over ISCSI > (with multipath) on the compute node hosting the server. Everything > works quite fine, but we have a weird issue when extending volumes > attached to running instances. The guests notice the new disk size .. > of the last extent. > > Say I have a server with a 10gb disk. I add 5gb. On the guest, still > 10gb. I add another 5gb, and on the guest I get 15, and so on. I've > turned the debug mode on and I could see no error in the log. Looking > closer at the log I could catch the culprit: > > 2022-12-08 17:35:13.998 46195 DEBUG os_brick.initiator.linuxscsi [] > Starting size: *76235669504* > 2022-12-08 17:35:14.028 46195 DEBUG os_brick.initiator.linuxscsi [] > volume size after scsi device rescan *80530636800* extend_volume > 2022-12-08 17:35:14.035 46195 DEBUG os_brick.initiator.linuxscsi [] > Volume device info = {'device': > '/dev/disk/by-path/ip-1...1:3260-iscsi-iqn.2010-06.com.purestorage:flasharray.x-lun-10', > 'host': '5', 'channel': '0', 'id': '0', 'lun': '10'} extend_volume > 2022-12-08 17:35:14.348 46195 INFO os_brick.initiator.linuxscsi [] > Find Multipath device file for volume WWN 3624... > 2022-12-08 17:35:14.349 46195 DEBUG os_brick.initiator.linuxscsi [] > Checking to see if /dev/disk/by-id/dm-uuid-mpath-3624.. exists yet. > wait_for_path > 2022-12-08 17:35:14.349 46195 DEBUG os_brick.initiator.linuxscsi [] > /dev/disk/by-id/dm-uuid-mpath-3624... has shown up. wait_for_path > 2022-12-08 17:35:14.382 46195 INFO os_brick.initiator.linuxscsi [] > mpath(/dev/disk/by-id/dm-uuid-mpath-3624) *current size 76235669504* > 2022-12-08 17:35:14.412 46195 INFO os_brick.initiator.linuxscsi [] > mpath(/dev/disk/by-id/dm-uuid-mpath-3624) *new size 76235669504* > 2022-12-08 17:35:14.413 46195 DEBUG oslo_concurrency.lockutils [] Lock > "extend_volume" released by > "os_brick.initiator.connectors.iscsi.ISCSIConnector.extend_volume" :: > held 2.062s inner 2022-12-08 17:35:14.459 46195 DEBUG > os_brick.initiator.connectors.iscsi [] <== extend_volume: return > (2217ms) *76235669504* trace_logging_wrapper > 2022-12-08 17:35:14.461 46195 DEBUG nova.virt.libvirt.volume.iscsi [] > Extend iSCSI Volume /dev/dm-28; new_size=*76235669504* extend_volume > 2022-12-08 17:35:14.462 46195 DEBUG nova.virt.libvirt.driver [] > Resizing target device /dev/dm-28 to *76235669504* > _resize_attached_volume > > The logs clearly shows that the rescan confirm the new size but when > interrogating multipath, it does not. But requesting multipath few > seconds after on the command line shows the new size as well. It > explains the behaviour. > > I'm running Ubuntu 18.04 with multipath 0.7.4-2ubuntu3.2. The os-brick > code for multipath is far more basic than the one in master branch. > Maybe the multipath version installed is too recent for os-brick. > > Thanks for the help > > Jerome > > -- *J?r?me BECOT* Ing?nieur DevOps Infrastructure T?l?phone fixe: 01 82 28 37 06 Mobile : +33 757 173 193 Deveryware - 43 rue Taitbout - 75009 PARIS https://www.deveryware.com Deveryware_Logo -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: baniere_signature_dw_2022.png Type: image/png Size: 471995 bytes Desc: not available URL: From gmann at ghanshyammann.com Mon Dec 26 19:44:54 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 26 Dec 2022 11:44:54 -0800 Subject: [openstacksdk][cinder] openstacksdk-functional-devstack job is failing In-Reply-To: References: Message-ID: <1854ff77045.b6fcddff173376.4549731092261044854@ghanshyammann.com> ---- On Mon, 26 Dec 2022 02:49:33 -0800 Rajat Dhasmana wrote --- > Hi, > Currently the openstacksdk-functional-devstack?job is failing on the cinder gate[1] due to which? gate is blocked. This seems to be due to the new tox 4 enforcement and could be an issue for other projects as well.We've a patch in OpenstackSDK that fixes this issue[2] but that is also failing on the?osc-tox-py38-tips job. The patch has no update since 21st December which is concerning since it's blocking other projects (at least cinder).I've proposed a patch[3] to make the?openstacksdk-functional-devstack job non-voting until the situation is resolved on the OpenStackSDK side. sdks job fix is merged now and we can recheck patches - https://review.opendev.org/c/openstack/openstacksdk/+/867827 -gmann > [1]?https://zuul.opendev.org/t/openstack/build/29c0eb15abd144a8a4f167e70fe61530[2]?https://review.opendev.org/c/openstack/openstacksdk/+/867827[3]?https://review.opendev.org/c/openstack/cinder/+/868596 > ThanksRajat Dhasmana From sshnaidm at redhat.com Mon Dec 26 20:06:03 2022 From: sshnaidm at redhat.com (Sagi Shnaidman) Date: Mon, 26 Dec 2022 22:06:03 +0200 Subject: [ansible][ACO] - Contribution questions In-Reply-To: References: Message-ID: Hi, Gael, Thanks for your contribution! Currently the tox-2.12 CI job always fails, it's because of tox version 4 changes. I add a workaround in the patch https://review.opendev.org/c/openstack/ansible-collections-openstack/+/868607 When it (or other solution) is merged, you're good to go with your patch. Sorry for the inconvenience . Thanks On Sat, Dec 24, 2022 at 4:22 PM Ga?l THEROND wrote: > Hi ansible collections openstack team! > > I finally had time to list all my issues met with the project, created few > bug reports and even contributed to a patch today (minor, mainly > copy/paste) however I?ve few questions regarding the CI process! > > Overall, what?s the rule with the CI code testing? > I?ve read the contributing guide and had an eye on previous patches to see > how it?s used but I?m having a hard time to find a real unified method. For > instance, it seems that some module miss CI tasks (such as > compute_service_info) or did I missed something? > > Thanks a lot for all the good job! > -- Best regards Sagi Shnaidman -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon Dec 26 22:08:34 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 26 Dec 2022 14:08:34 -0800 Subject: Help overriding Nova policies (Ussuri) In-Reply-To: <05d3231f-ec52-bd94-9d59-41e9cac7c880@deveryware.com> References: <05d3231f-ec52-bd94-9d59-41e9cac7c880@deveryware.com> Message-ID: <185507af9b1.108acea23174829.1901335233578562039@ghanshyammann.com> ---- On Thu, 15 Dec 2022 01:52:43 -0800 J?r?me BECOT wrote --- > Hello, > > I should miss something, and I need some help. I've updated some rules > that I placed in the /etc/nova/policy.yaml file. I use all defaults for > the olso policies (ie scope is not enabled). > > When I generate the fulle policy with oslopolicy-policy-generator, the > rules are applied in the generated output. If I test the policy with > oslopolicy-checker under the user's token, the results matches the > permissions, but Nova API continue to refuse the operation > > Policy check for os_compute_api:os-flavor-manage:create failed with > credentials Problem: ----------- We have two problem here. First is 'oslopolicy-policy-generator' tool and its usage. This has come up many times when deployers use this tool with the expectation that generated the file will work the same as the default way which is not the case. Generated file by this tool makes you move to the new defaults (remove old defaults and so does the old token might stop working). Generating the policy.yaml by oslopolicy-policy-generator tool actually enable the new defaults even if you have disabled them in the configuration file (via enforce_new_defaults=false). This tool generates the policy.yaml file by merging the overridden rules and defaults rules (if no overridden rule then it takes the default one). The newly generated policy.yaml does not contain the old deprecated rules in logical OR to new defaults (they are added in logical OR via policy-in-code when there is no policy.yaml). This means only new defaults will be enforced and depending on the difference between old and new defaults your existing user token might stop working as you are indirectly moving to the new defaults. I think I should clarify it in tool help messages or documents so that it is used in the right way. The second problem we have with the Nova new defaults in Ussuri-Xena is that rules new defaults contain the special string "system_scope:all" due to this disabling the scope via enforce_scope=false does not actually disable the scope checks. This is why you are getting errors. This has been fixed in the yoga release when we redesigned the new RBAC for Openstack. Recommendation on OpenStack policy: ---------------------------------------------- * Due to the problem where scope checks cannot be disabled (until you override the rule to remove the "system_scope:all" from rule check_str), we do not recommend enabling the new defaults and use defaults where old defaults are present and keep your deployment working. * Keep the overridden rules only in policy.yaml file and let oslo_policy to take the defaults from what is there in the code. This way things will work fine. Workaround for Nova policy for Ussuri-Xena: ----------------------------------------------------- If you enable the new defaults which is the case when you generate the file by 'oslopolicy-policy-generator' the tool then you can do: Option 1: Comment out all the rules in the file except overridden rules. Option 2: Override the below base rules to remove the problematic "system_scope:all" string: "system_admin_api": "'role:admin" "system_reader_api": "'role:reader" Please let me know if option 2 works fine for you. -gmann > > I'm using the openstack cli to make the request (openstack flavor create) > > Thanks for your help > > J > > > From abishop at redhat.com Mon Dec 26 23:04:28 2022 From: abishop at redhat.com (Alan Bishop) Date: Mon, 26 Dec 2022 15:04:28 -0800 Subject: [openstack][cinder] Cinder volume id wont change when retype volume In-Reply-To: References: Message-ID: On Mon, Dec 26, 2022 at 7:00 AM Nguy?n H?u Kh?i wrote: > Hello guys, I have a problem with cinder: > > I use cinder volume retype to move my volume to another backend. It worked > but when I check on new backend, volume id doest match with id which on > Horizon dash board.It expects to have same volume id on Horizon and new > backend. I use DellSC and DellPowerstore. > I'm guessing you are migrating volumes from the DellSC to the DellPowerstore, in which case the problem is that the DellPowerstore driver doesn't implement the update_migrated_volume method. Here is an extract from [1] on the purpose of that function: "This method can be used in a generally wide range, but the most common use case covered in this method is to rename the back-end name to the original volume id in your driver to make sure that the back-end still keeps the same id or name as it is before the volume migration" [1] https://docs.openstack.org/cinder/latest/contributor/migration.html You might file a bug against the Powerstore driver. This is something that Dell EMC would need to investigate. Alan > Thank you. > > Nguyen Huu Khoi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.therond at bitswalk.com Mon Dec 26 23:11:27 2022 From: gael.therond at bitswalk.com (=?UTF-8?Q?Ga=C3=ABl_THEROND?=) Date: Tue, 27 Dec 2022 00:11:27 +0100 Subject: [ansible][ACO] - Contribution questions In-Reply-To: References: Message-ID: Hi sadi, Thanks for this feedback! I?ll wait for this patch to be merged then, no biggies as it?s currently greetings season?s so no rush xD I?ll probably have few patches after that especially around unifying options (filtering especially) on few modules. Thanks for the answer! Le lun. 26 d?c. 2022 ? 21:06, Sagi Shnaidman a ?crit : > Hi, Gael, > > Thanks for your contribution! Currently the tox-2.12 CI job always fails, > it's because of tox version 4 changes. I add a workaround in the patch > https://review.opendev.org/c/openstack/ansible-collections-openstack/+/868607 > When it (or other solution) is merged, you're good to go with your patch. > Sorry for the inconvenience . > > Thanks > > > On Sat, Dec 24, 2022 at 4:22 PM Ga?l THEROND > wrote: > >> Hi ansible collections openstack team! >> >> I finally had time to list all my issues met with the project, created >> few bug reports and even contributed to a patch today (minor, mainly >> copy/paste) however I?ve few questions regarding the CI process! >> >> Overall, what?s the rule with the CI code testing? >> I?ve read the contributing guide and had an eye on previous patches to >> see how it?s used but I?m having a hard time to find a real unified method. >> For instance, it seems that some module miss CI tasks (such as >> compute_service_info) or did I missed something? >> >> Thanks a lot for all the good job! >> > > > -- > Best regards > Sagi Shnaidman > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nguyenhuukhoinw at gmail.com Mon Dec 26 23:43:07 2022 From: nguyenhuukhoinw at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gSOG7r3UgS2jDtGk=?=) Date: Tue, 27 Dec 2022 06:43:07 +0700 Subject: [openstack][cinder] Cinder volume id wont change when retype volume In-Reply-To: References: Message-ID: Hello. Thank you for your email, I also try to volumes from the DellPowerstore to DellSC then volume id match between Horizon and Dell SC but old volume won't clear on Powerstore, I just informed you. I will investigate and let you know. Thank you. Nguyen Huu Khoi On Tue, Dec 27, 2022 at 6:04 AM Alan Bishop wrote: > > > On Mon, Dec 26, 2022 at 7:00 AM Nguy?n H?u Kh?i > wrote: > >> Hello guys, I have a problem with cinder: >> >> I use cinder volume retype to move my volume to another backend. It >> worked but when I check on new backend, volume id doest match with id which >> on Horizon dash board.It expects to have same volume id on Horizon and new >> backend. I use DellSC and DellPowerstore. >> > > I'm guessing you are migrating volumes from the DellSC to the > DellPowerstore, in which case the problem is that the DellPowerstore driver > doesn't implement the update_migrated_volume method. Here is an extract > from [1] on the purpose of that function: > > "This method can be used in a generally wide range, but the most common > use case covered in this method is to rename the back-end name to the > original volume id in your driver to make sure that the back-end still > keeps the same id or name as it is before the volume migration" > > [1] https://docs.openstack.org/cinder/latest/contributor/migration.html > > You might file a bug against the Powerstore driver. This is something that > Dell EMC would need to investigate. > > Alan > > >> Thank you. >> >> Nguyen Huu Khoi >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Tue Dec 27 03:44:33 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 26 Dec 2022 19:44:33 -0800 Subject: [Tacker][SRBAC] Update regarding implementation of project personas in Tacker In-Reply-To: References: Message-ID: <18551ae91f1.ef9e2b0f177518.4346096321996572251@ghanshyammann.com> ---- On Sun, 25 Dec 2022 21:07:04 -0800 manpreet kaur wrote --- > Hi Ogawa san and Tacker team, > This mailer is regarding the SRBAC implementation happening in Tacker. > In the Tacker release 2023.1 virtual PTG [1], it was decided by the Tacker community to partially implement the project personas (project-reader role) in the current release.?And in upcoming releases, we will implement the remaining project-member role. > > To address the above requirement, I have prepared a specification [2] and pushed the same in Gerrit for community review. > > Ghanshyam san reviewed the specification and shared TC's opinion and suggestion to implement both the project-reader and project-member roles. > The complete persona implementation will depreciate the 'owner' rule, and?help in?restricting any other role to accessing project-based resources. Yeah, this was a problem in many projects where 'owner' rules were checking only prtoject_id and not the 'member' role. Due to that any role in the project (foo, reader etc) can behave as the 'owner' of the project and perform all the operations within the project scope. This behaviour was actually a bug in our policy. To make the project reader work as expected, we need to fix the existing 'owner' rule to add a 'member' role along with project_id so that 'owner' (project_members) will be different than project_reader. We have fixed it in nova, neutron and many other projects in same way. > Additionally, intact legacy admin (current admin), works?in?the same way so that we do not break things and introduce the project personas which should be additional things to be available for operators to adopt. +1, this was the case which came up during RBAC feedback from operators and NFV users. To make sure we do not break the NFV deployment, we are keeping legacy admin behavior/permission the same as it is today. -gmann > > Current Status: Incorporated the new?requirement and uploaded a new patch set to address the review comment. > > Note: The Tacker spec freeze date is 28th Dec 2022, there might be some delay in merging the specification in shared timelines. > > [1]?https://etherpad.opendev.org/p/tacker-antelope-ptg#L186[2]?https://review.opendev.org/c/openstack/tacker-specs/+/866956 > > Thanks & Regards,Manpreet Kaur From rdhasman at redhat.com Tue Dec 27 04:15:38 2022 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Tue, 27 Dec 2022 09:45:38 +0530 Subject: [openstacksdk][cinder] openstacksdk-functional-devstack job is failing In-Reply-To: <1854ff77045.b6fcddff173376.4549731092261044854@ghanshyammann.com> References: <1854ff77045.b6fcddff173376.4549731092261044854@ghanshyammann.com> Message-ID: openstacksdk-functional-devstack is passing now. Abandoning [1]. Thanks gmann! [1] https://review.opendev.org/c/openstack/cinder/+/868596 On Tue, Dec 27, 2022 at 1:15 AM Ghanshyam Mann wrote: > ---- On Mon, 26 Dec 2022 02:49:33 -0800 Rajat Dhasmana wrote --- > > Hi, > > Currently the openstacksdk-functional-devstack job is failing on the > cinder gate[1] due to which gate is blocked. This seems to be due to the > new tox 4 enforcement and could be an issue for other projects as > well.We've a patch in OpenstackSDK that fixes this issue[2] but that is > also failing on the osc-tox-py38-tips job. The patch has no update since > 21st December which is concerning since it's blocking other projects (at > least cinder).I've proposed a patch[3] to make > the openstacksdk-functional-devstack job non-voting until the situation is > resolved on the OpenStackSDK side. > > sdks job fix is merged now and we can recheck patches > - https://review.opendev.org/c/openstack/openstacksdk/+/867827 > > -gmann > > > [1] > https://zuul.opendev.org/t/openstack/build/29c0eb15abd144a8a4f167e70fe61530[2] > https://review.opendev.org/c/openstack/openstacksdk/+/867827[3] > > https://review.opendev.org/c/openstack/cinder/+/868596 > > ThanksRajat Dhasmana > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jmeng at redhat.com Tue Dec 27 10:40:00 2022 From: jmeng at redhat.com (Jakob Meng) Date: Tue, 27 Dec 2022 11:40:00 +0100 Subject: [ansible][ACO] - Contribution questions In-Reply-To: References: Message-ID: <93eb35c5-2f88-b119-a8d7-26d0b3d5a7b2@redhat.com> Hello Ga?l, thank you for giving us feedback on our Ansible modules and actually submitting a new module ?? We are currently lagging a bit in responses because we are trying to get release 2.0.0 of the Ansible OpenStack collection out of the door in January 2023. As part of this effort we also refactored our CI integration tests, it is more consistent nowadays but still not complete. With compute_service_info you picked our worst case, it is tested in role nova_services ? A relict from the past. Initially we planned to write one Ansible role per module, e.g. role project_info for openstack.cloud.project_info. But doing so produced a lot of redundant code. So during this year we changed our plan. Now we merge tests for *_info modules with their non-info equivalents. For example, tests for both modules federation_mapping and federation_mapping_info can be found in role federation_mapping. Integration tests for volume_service_info would be located in Ansible role volume_service. Instead of compute_service_info better take neutron_rbac_policies_info as an example of how to write and test *_info modules. Refactoring compute_service_info is still on my todo list ? Same goes for our docs on how to write modules etc. ? Best, Jakob On 27.12.22 00:11, Ga?l THEROND wrote: > Hi sadi, > > Thanks for this feedback! > I?ll wait for this patch to be merged then, no biggies as it?s > currently greetings season?s so no rush xD > > I?ll probably have few patches after that especially around unifying > options (filtering especially) on few modules. > > Thanks for the answer! > > Le?lun. 26 d?c. 2022 ? 21:06, Sagi Shnaidman a > ?crit?: > > Hi,? Gael, > > Thanks for your contribution! Currently the tox-2.12 CI job always > fails, it's because of tox version 4 changes. I add a workaround > in the patch > https://review.opendev.org/c/openstack/ansible-collections-openstack/+/868607 > > When it (or other solution) is merged, you're good to go with your > patch. Sorry for the inconvenience . > > Thanks > > > On Sat, Dec 24, 2022 at 4:22 PM Ga?l THEROND > wrote: > > Hi ansible collections openstack team! > > I finally had time to list all my issues met with the project, > created few bug reports and even contributed to a patch today > (minor, mainly copy/paste) however I?ve few questions > regarding the CI process! > > Overall, what?s the rule with the CI code testing? > I?ve read the contributing guide and had an eye on previous > patches to see how it?s used but I?m having a hard time to > find a real unified method. For instance, it seems that some > module miss CI tasks (such as compute_service_info) or did I > missed something? > > Thanks a lot for all the good job! > > > > -- > Best regards > Sagi Shnaidman > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vincentlee676 at gmail.com Tue Dec 27 11:01:21 2022 From: vincentlee676 at gmail.com (vincent lee) Date: Tue, 27 Dec 2022 19:01:21 +0800 Subject: Container access mounted device Message-ID: Hi, I have deployed a working OpenStack and would like to know if there are any possible options I can enable so that when creating a new container, files can be mounted to it, and a user from the container can access it. For instance, having a mounted device from the compute host and allowing the /dev/(mounted device name) file to be accessible in the container. Best regards, Vincent -------------- next part -------------- An HTML attachment was scrubbed... URL: From acozyurt at gmail.com Tue Dec 27 12:33:56 2022 From: acozyurt at gmail.com (=?UTF-8?Q?Can_=C3=96zyurt?=) Date: Tue, 27 Dec 2022 15:33:56 +0300 Subject: [ops][nova] RBD IOPS bottleneck on client-side Message-ID: Hi everyone, I hope you are all doing well. We are trying to pinpoint an IOPS problem with RBD and decided to ask you for your take on it. 1 control plane 1 compute node 5 storage with 8 SSD disks each Openstack Stein/Ceph Mimic deployed with kolla-ansible on ubuntu-1804 (kernel 5.4) isolcpus 4-127 on compute vcpu_pin_set 4-127 in nova.conf image_metadatas: hw_scsi_model: virtio-scsi hw_disk_bus: scsi flavor_metadatas: hw:cpu_policy: dedicated What we have tested: fio --directory=. --ioengine=libaio --direct=1 --name=benchmark_random_read_write --filename=test_rand --bs=4k --iodepth=32 --size=1G --readwrite=randrw --rwmixread=50 --time_based --runtime=300s --numjobs=16 1. First we run the fio test above on a guest VM, we see average 5K/5K read/write IOPS consistently. What we realize is that during the test, one single core on compute host is used at max, which is the first of the pinned cpus of the guest. 'top -Hp $qemupid' shows that some threads (notably tp_librbd) share the very same core throughout the test. (also emulatorpin set = vcpupin set as expected) 2. We remove isolcpus and every other configuration stays the same. Now fio tests now show 11K/11K read/write IOPS. No bottlenecked single cpu on the host, observed threads seem to visit all emulatorpins. 3. We bring isolcpus back and redeploy the cluster with Train/Nautilus on ubuntu-1804. Observations are identical to #1. 4. We tried replacing vcpu_pin_set to cpu_shared_set and cpu_dedicated_set to be able to pin emulator cpuset to 0-4 to no avail. Multiple guests on a host can easily deplete resources and IOPS drops. 5. Isolcpus are still in place and we deploy Ussuri with kolla-ansible and Train (to limit the moving parts) with ceph-ansible both on ubuntu-1804. Now we see 7K/7K read/write IOPS. 6. We destroy only the compute node and boot it with ubuntu-2004 with isolcpus set. Add it back to the existing cluster and fio shows slightly above 10K/10K read/write IOPS. What we think happens: 1. Since isolcpus disables scheduling between given cpus, qemu process and its threads are stuck at the same cpu which created the bottleneck. They should be runnable on any given emulatorpin cpus. 2. Ussuri is more performant despite isolcpus, with the improvements made over time. 3. Ubuntu-2004 is more performant despite isolcpus, with the improvements made over time in the kernel. Now the questions are: 1. What else are we missing here? 2. Are any of those assumptions false? 3. If all true, what can we do to solve this issue given that we cannot upgrade openstack nor ceph on production overnight? 4. Has anyone dealt with this issue before? We welcome any opinion and suggestions at this point as we need to make sure that we are on the right path regarding the problem and upgrade is not the only solution. Thanks in advance. From dwilde at redhat.com Tue Dec 27 15:00:30 2022 From: dwilde at redhat.com (Dave Wilde) Date: Tue, 27 Dec 2022 07:00:30 -0800 Subject: [keystone][Meeting] Reminder Keystone meeting is cancelled today Message-ID: Just a quick reminder that there won?t be the keystone weekly meeting today. We?ll resume our regulararly scheduled programming on 03-Jan-2023. Please update the agenda if you have anything you?d like to discuess. The reviewathon is also cancelled this week, to be resumed on 06-Jan-2023. /Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From zakhar at gmail.com Tue Dec 27 15:40:47 2022 From: zakhar at gmail.com (Zakhar Kirpichenko) Date: Tue, 27 Dec 2022 17:40:47 +0200 Subject: Nova libvirt/kvm sound device Message-ID: Hi! I'd like to have the following configuration added to every guest on a specific host managed by Nova and libvirt/kvm:
When I add the device manually to instance xml, it works as intended but the instance configuration gets overwritten on instance stop/start or hard reboot via Nova. What is the currently supported / proper way to add a virtual sound device without having to modify libvirt or Nova code? I would appreciate any advice. Best regards, Zakhar -------------- next part -------------- An HTML attachment was scrubbed... URL: From yasufum.o at gmail.com Tue Dec 27 19:42:16 2022 From: yasufum.o at gmail.com (Yasufumi Ogawa) Date: Wed, 28 Dec 2022 04:42:16 +0900 Subject: [Tacker][SRBAC] Update regarding implementation of project personas in Tacker In-Reply-To: References: Message-ID: <5e4a7010-ffc7-83d9-e74b-ed6aae76c15c@gmail.com> Hi Manpreet-san, Thanks for your notice. I've started to review and understood this change is considered backward compatibility from suggestions on the etherpad. Although it's LGTM, I'd like to ask if Yuta has any comment for the proposal because he has also propose a policy management feature in this release. For the deadline, let us discuss again to reschedule it if we cannot merge the deadline. Thanks, Yasufumi On 2022/12/26 14:07, manpreet kaur wrote: > Hi Ogawa san and Tacker team, > > This mailer is regarding the SRBAC implementation happening in Tacker. > > In the Tacker release 2023.1 virtual PTG [1], it was decided by the > Tacker community to partially implement the project personas > (project-reader role) in the current release. And in upcoming releases, > we will implement the remaining project-member role. > > To address the above requirement, I have prepared a specification [2] > and pushed the same in Gerrit for community review. > > Ghanshyam san reviewed the specification and shared TC's opinion and > suggestion to implement both the project-reader and project-member roles. > The complete persona implementation will depreciate the 'owner' rule, > and?help in?restricting any other role to accessing project-based resources. > Additionally, intact legacy admin (current admin), works?in?the same way > so that we do not break things and introduce the project personas which > should be additional things to be available for operators to adopt. > > Current Status: Incorporated the new?requirement and uploaded a new > patch set to address the review comment. > > Note: The Tacker spec freeze date is 28th Dec 2022, there might be some > delay in merging the specification in shared timelines. > > [1] https://etherpad.opendev.org/p/tacker-antelope-ptg#L186 > > [2] https://review.opendev.org/c/openstack/tacker-specs/+/866956 > > > Thanks & Regards, > Manpreet Kaur From kaurmanpreet2620 at gmail.com Wed Dec 28 05:05:32 2022 From: kaurmanpreet2620 at gmail.com (manpreet kaur) Date: Wed, 28 Dec 2022 10:35:32 +0530 Subject: [Tacker][SRBAC] Update regarding implementation of project personas in Tacker In-Reply-To: <5e4a7010-ffc7-83d9-e74b-ed6aae76c15c@gmail.com> References: <5e4a7010-ffc7-83d9-e74b-ed6aae76c15c@gmail.com> Message-ID: Hi Ogawa san, Thanks for accepting the new RBAC proposal, please find the latest patch-set 7 [1] as the final version. Would try to merge the specification within the proposed timelines. @Ghanshyam san, Thanks for adding clarity to the proposed changes and for a quick review. [1] https://review.opendev.org/c/openstack/tacker-specs/+/866956 Best Regards, Manpreet Kaur On Wed, Dec 28, 2022 at 1:12 AM Yasufumi Ogawa wrote: > Hi Manpreet-san, > > Thanks for your notice. I've started to review and understood this > change is considered backward compatibility from suggestions on the > etherpad. Although it's LGTM, I'd like to ask if Yuta has any comment > for the proposal because he has also propose a policy management feature > in this release. > > For the deadline, let us discuss again to reschedule it if we cannot > merge the deadline. > > Thanks, > Yasufumi > > On 2022/12/26 14:07, manpreet kaur wrote: > > Hi Ogawa san and Tacker team, > > > > This mailer is regarding the SRBAC implementation happening in Tacker. > > > > In the Tacker release 2023.1 virtual PTG [1], it was decided by the > > Tacker community to partially implement the project personas > > (project-reader role) in the current release. And in upcoming releases, > > we will implement the remaining project-member role. > > > > To address the above requirement, I have prepared a specification [2] > > and pushed the same in Gerrit for community review. > > > > Ghanshyam san reviewed the specification and shared TC's opinion and > > suggestion to implement both the project-reader and project-member roles. > > The complete persona implementation will depreciate the 'owner' rule, > > and help in restricting any other role to accessing project-based > resources. > > Additionally, intact legacy admin (current admin), works in the same way > > so that we do not break things and introduce the project personas which > > should be additional things to be available for operators to adopt. > > > > Current Status: Incorporated the new requirement and uploaded a new > > patch set to address the review comment. > > > > Note: The Tacker spec freeze date is 28th Dec 2022, there might be some > > delay in merging the specification in shared timelines. > > > > [1] https://etherpad.opendev.org/p/tacker-antelope-ptg#L186 > > > > [2] https://review.opendev.org/c/openstack/tacker-specs/+/866956 > > > > > > Thanks & Regards, > > Manpreet Kaur > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pdeore at redhat.com Wed Dec 28 09:16:20 2022 From: pdeore at redhat.com (Pranali Deore) Date: Wed, 28 Dec 2022 14:46:20 +0530 Subject: [Glance] Extending Specs Freeze Deadline by 2 weeks Message-ID: Hello All, As discussed in last glance meeting [1] we are extending the glance specs freeze deadline by 2 weeks due to unavailability of core members and less review bandwidth because of year end holidays. Current Spec Freeze date (for Glance) : 5th Jan 2023 New Spec Freeze date (for Glance) : 19th Jan 2023 Please make sure your specs are updated before the new deadline. [1]: https://meetings.opendev.org/meetings/glance/2022/glance.2022-12-22-14.00.log.html#l-58 Thanks, Pranali deore -------------- next part -------------- An HTML attachment was scrubbed... URL: From uemit.seren at gmail.com Wed Dec 28 12:59:29 2022 From: uemit.seren at gmail.com (=?Windows-1252?Q?=DCmit_Seren?=) Date: Wed, 28 Dec 2022 12:59:29 +0000 Subject: [ops][nova] RBD IOPS bottleneck on client-side Message-ID: We had a similar performance issue with networking (via openswitch) instead of I/O. Our hypervisor and VM configuration were like yours (VCPU pinning + isolcpus). We saw a 50% drop in virtualized networking throughput (measure via iperf). This was because the vhost_net kthreads which are responsible for the virtualized networking were pinned to 2 cores per socket and this quickly became the bottleneck. This was with OpenStack Queens and RHEL 7.6. We ended up keeping the VCPU pinning but removing the isolcpus kernel setting. This fixed the performance regression. Unfortunately, we didn?t further investigate this, so I don?t know why a newer kernel and/or newer Openstack release improves it. Hope this still helps Best ?mit On 27.12.22, 13:33, "Can ?zyurt" wrote: Hi everyone, I hope you are all doing well. We are trying to pinpoint an IOPS problem with RBD and decided to ask you for your take on it. 1 control plane 1 compute node 5 storage with 8 SSD disks each Openstack Stein/Ceph Mimic deployed with kolla-ansible on ubuntu-1804 (kernel 5.4) isolcpus 4-127 on compute vcpu_pin_set 4-127 in nova.conf image_metadatas: hw_scsi_model: virtio-scsi hw_disk_bus: scsi flavor_metadatas: hw:cpu_policy: dedicated What we have tested: fio --directory=. --ioengine=libaio --direct=1 --name=benchmark_random_read_write --filename=test_rand --bs=4k --iodepth=32 --size=1G --readwrite=randrw --rwmixread=50 --time_based --runtime=300s --numjobs=16 1. First we run the fio test above on a guest VM, we see average 5K/5K read/write IOPS consistently. What we realize is that during the test, one single core on compute host is used at max, which is the first of the pinned cpus of the guest. 'top -Hp $qemupid' shows that some threads (notably tp_librbd) share the very same core throughout the test. (also emulatorpin set = vcpupin set as expected) 2. We remove isolcpus and every other configuration stays the same. Now fio tests now show 11K/11K read/write IOPS. No bottlenecked single cpu on the host, observed threads seem to visit all emulatorpins. 3. We bring isolcpus back and redeploy the cluster with Train/Nautilus on ubuntu-1804. Observations are identical to #1. 4. We tried replacing vcpu_pin_set to cpu_shared_set and cpu_dedicated_set to be able to pin emulator cpuset to 0-4 to no avail. Multiple guests on a host can easily deplete resources and IOPS drops. 5. Isolcpus are still in place and we deploy Ussuri with kolla-ansible and Train (to limit the moving parts) with ceph-ansible both on ubuntu-1804. Now we see 7K/7K read/write IOPS. 6. We destroy only the compute node and boot it with ubuntu-2004 with isolcpus set. Add it back to the existing cluster and fio shows slightly above 10K/10K read/write IOPS. What we think happens: 1. Since isolcpus disables scheduling between given cpus, qemu process and its threads are stuck at the same cpu which created the bottleneck. They should be runnable on any given emulatorpin cpus. 2. Ussuri is more performant despite isolcpus, with the improvements made over time. 3. Ubuntu-2004 is more performant despite isolcpus, with the improvements made over time in the kernel. Now the questions are: 1. What else are we missing here? 2. Are any of those assumptions false? 3. If all true, what can we do to solve this issue given that we cannot upgrade openstack nor ceph on production overnight? 4. Has anyone dealt with this issue before? We welcome any opinion and suggestions at this point as we need to make sure that we are on the right path regarding the problem and upgrade is not the only solution. Thanks in advance. -------------- next part -------------- An HTML attachment was scrubbed... URL: From acozyurt at gmail.com Thu Dec 29 09:30:12 2022 From: acozyurt at gmail.com (=?UTF-8?Q?Can_=C3=96zyurt?=) Date: Thu, 29 Dec 2022 12:30:12 +0300 Subject: [ops][nova] RBD IOPS bottleneck on client-side In-Reply-To: References: Message-ID: Thanks for the fast reply and for sharing your experience. We have considered removing isolcpus as well but the idea of introducing noise into guest workload is somewhat concerning. Also restraining dockerized deployment without isolcpus will not be as easy. We definitely keep this option as a last resort. On Wed, 28 Dec 2022 at 15:59, ?mit Seren wrote: > > We had a similar performance issue with networking (via openswitch) instead of I/O. > > Our hypervisor and VM configuration were like yours (VCPU pinning + isolcpus). We saw a 50% drop in virtualized networking throughput (measure via iperf). > This was because the vhost_net kthreads which are responsible for the virtualized networking were pinned to 2 cores per socket and this quickly became the bottleneck. This was with OpenStack Queens and RHEL 7.6. > > We ended up keeping the VCPU pinning but removing the isolcpus kernel setting. This fixed the performance regression. > > > > Unfortunately, we didn?t further investigate this, so I don?t know why a newer kernel and/or newer Openstack release improves it. > > > > Hope this still helps > > > > Best > > ?mit > > > > On 27.12.22, 13:33, "Can ?zyurt" wrote: > > Hi everyone, > > > > I hope you are all doing well. We are trying to pinpoint an IOPS > > problem with RBD and decided to ask you for your take on it. > > > > 1 control plane > > 1 compute node > > 5 storage with 8 SSD disks each > > Openstack Stein/Ceph Mimic deployed with kolla-ansible on ubuntu-1804 > > (kernel 5.4) > > isolcpus 4-127 on compute > > vcpu_pin_set 4-127 in nova.conf > > > > image_metadatas: > > hw_scsi_model: virtio-scsi > > hw_disk_bus: scsi > > flavor_metadatas: > > hw:cpu_policy: dedicated > > > > What we have tested: > > fio --directory=. --ioengine=libaio --direct=1 > > --name=benchmark_random_read_write --filename=test_rand --bs=4k > > --iodepth=32 --size=1G --readwrite=randrw --rwmixread=50 --time_based > > --runtime=300s --numjobs=16 > > > > 1. First we run the fio test above on a guest VM, we see average 5K/5K > > read/write IOPS consistently. What we realize is that during the test, > > one single core on compute host is used at max, which is the first of > > the pinned cpus of the guest. 'top -Hp $qemupid' shows that some > > threads (notably tp_librbd) share the very same core throughout the > > test. (also emulatorpin set = vcpupin set as expected) > > 2. We remove isolcpus and every other configuration stays the same. > > Now fio tests now show 11K/11K read/write IOPS. No bottlenecked single > > cpu on the host, observed threads seem to visit all emulatorpins. > > 3. We bring isolcpus back and redeploy the cluster with Train/Nautilus > > on ubuntu-1804. Observations are identical to #1. > > 4. We tried replacing vcpu_pin_set to cpu_shared_set and > > cpu_dedicated_set to be able to pin emulator cpuset to 0-4 to no > > avail. Multiple guests on a host can easily deplete resources and IOPS > > drops. > > 5. Isolcpus are still in place and we deploy Ussuri with kolla-ansible > > and Train (to limit the moving parts) with ceph-ansible both on > > ubuntu-1804. Now we see 7K/7K read/write IOPS. > > 6. We destroy only the compute node and boot it with ubuntu-2004 with > > isolcpus set. Add it back to the existing cluster and fio shows > > slightly above 10K/10K read/write IOPS. > > > > > > What we think happens: > > > > 1. Since isolcpus disables scheduling between given cpus, qemu process > > and its threads are stuck at the same cpu which created the > > bottleneck. They should be runnable on any given emulatorpin cpus. > > 2. Ussuri is more performant despite isolcpus, with the improvements > > made over time. > > 3. Ubuntu-2004 is more performant despite isolcpus, with the > > improvements made over time in the kernel. > > > > Now the questions are: > > > > 1. What else are we missing here? > > 2. Are any of those assumptions false? > > 3. If all true, what can we do to solve this issue given that we > > cannot upgrade openstack nor ceph on production overnight? > > 4. Has anyone dealt with this issue before? > > > > We welcome any opinion and suggestions at this point as we need to > > make sure that we are on the right path regarding the problem and > > upgrade is not the only solution. Thanks in advance. > > > > > > From vincentlee676 at gmail.com Thu Dec 29 09:33:10 2022 From: vincentlee676 at gmail.com (vincent lee) Date: Thu, 29 Dec 2022 03:33:10 -0600 Subject: Container trying to access mounted device file Message-ID: Hi all, I am trying to access a mounted device file from my node to my newly created container. I am currently using Kolla-ansible for the deployment, and it is in the yoga version. For example, when I mount a device to my node, such as a radio receiver, the device file (/dev/mounted device name) is accessible on my node directly. However, I am looking for an option to enable the newly created container to access that device file. Best, Vincent -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnasiadka at gmail.com Thu Dec 29 09:58:57 2022 From: mnasiadka at gmail.com (=?utf-8?Q?Micha=C5=82_Nasiadka?=) Date: Thu, 29 Dec 2022 10:58:57 +0100 Subject: [kolla] Propose Bartosz Bezak for core reviewer Message-ID: Hello Koalas, I?d like to propose Bartosz Bezak as a core reviewer for Kolla, Kolla-Ansible, Kayobe and ansible-collection-kolla. Bartosz has recently went through release preparations and release process itself for all mentioned repositories, has been a great deal of help in meeting the cycle trailing projects deadline. In addition to that, he?s been the main author of Ubuntu Jammy and EL9 (Rocky Linux 9 to be precise) support in Kayobe for Zed release, as well as fixing various bugs amongst all four repositories. Bartosz also brings OVN knowledge, which will make the review process for those patches better (and improve our overall review velocity, which hasn?t been great recently). Kind regards, Michal Nasiadka From celiker.kerem at icloud.com Tue Dec 27 18:32:19 2022 From: celiker.kerem at icloud.com (=?utf-8?Q?Kerem_=C3=87eliker?=) Date: Tue, 27 Dec 2022 21:32:19 +0300 Subject: openstack-discuss Digest, Vol 50, Issue 62 Message-ID: <2774C816-05FC-43DE-A5DA-26A270D3899B@icloud.com> ? ?Hello Vincent, To allow a user from a container to access files on OpenStack, you can use the --volume or -v option when creating the container to mount a host file or directory as a volume inside the container. For example: Copy code docker run -it --volume /path/on/host:/path/in/container image_name This will mount the host file or directory located at /path/on/host as a volume inside the container at /path/in/container. The user inside the container will be able to access the files in the volume at /path/in/container. You can also use the --mount option to mount a volume. This option provides more control over the volume, such as specifying the volume driver and options. For example: Copy code docker run -it --mount type=bind,source=/path/on/host,target=/path/in/container image_name This will also mount the host file or directory located at /path/on/host as a volume inside the container at /path/in/container, and the user inside the container will be able to access the files in the volume at /path/in/container. Note that the /path/on/host must be an absolute path and must exist on the host system. The /path/in/container can be any path inside the container and does not need to exist prior to the mount. It's also important to note that the user inside the container must have the appropriate permissions to access the mounted files. You can use the --user option to specify the user that the container should run as, or you can set the ownership and permissions of the mounted files on the host system to allow the user inside the container to access them. Hope it works for you.. ? Best, Regards, Kerem ?eliker IBM | Red Hat Champion keremceliker.medium.com Sent from my iPhone > On 27 Dec 2022, at 15:07, openstack-discuss-request at lists.openstack.org wrote: > ?Send openstack-discuss mailing list submissions to > openstack-discuss at lists.openstack.org > > To subscribe or unsubscribe via the World Wide Web, visit > https://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss > > or, via email, send a message with subject or body 'help' to > openstack-discuss-request at lists.openstack.org > > You can reach the person managing the list at > openstack-discuss-owner at lists.openstack.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of openstack-discuss digest..." > > > Today's Topics: > > 1. Container access mounted device (vincent lee) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Tue, 27 Dec 2022 19:01:21 +0800 > From: vincent lee > To: openstack-discuss at lists.openstack.org > Subject: Container access mounted device > Message-ID: > > > Hi, I have deployed a working OpenStack and would like to know if there are > any possible options I can enable so that when creating a new container, > files can be mounted to it, and a user from the container can access it. > For instance, having a mounted device from the compute host and allowing > the /dev/(mounted device name) file to be accessible in the container. > > Best regards, > Vincent > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > > ------------------------------ > > Subject: Digest Footer > > _______________________________________________ > openstack-discuss mailing list > openstack-discuss at lists.openstack.org > > > ------------------------------ > > End of openstack-discuss Digest, Vol 50, Issue 62 > ************************************************* -------------- next part -------------- An HTML attachment was scrubbed... URL: From celiker.kerem at icloud.com Wed Dec 28 08:59:21 2022 From: celiker.kerem at icloud.com (=?utf-8?Q?Kerem_=C3=87eliker?=) Date: Wed, 28 Dec 2022 11:59:21 +0300 Subject: openstack-discuss Digest, Vol 50, Issue 63 In-Reply-To: References: Message-ID: <31BE6825-EA86-4E01-82B2-481D86591D56@icloud.com> Hello Can, To fix an RBD (Rados Block Device) IOPS bottleneck on the client side in OpenStack, you can try the following: Monitor the CPU and memory usage on the client machine to ensure that it has sufficient resources. You can use tools like top or htop to view real-time resource usage. Check the network bandwidth between the client and the storage system to ensure that it is not a bottleneck. You can use tools like iperf or tcpdump to measure network performance. Review the configuration of the storage system to ensure that it is optimized for the workload. This may include adjusting the number and type of disks used, as well as the RAID level and chunk size. Consider using a storage system with a higher IOPS rating to see if it can improve performance. This may involve upgrading to faster disks or using a storage solution with more disks or SSDs. Try using a different client machine with more resources (e.g., a machine with a faster CPU and more memory) to see if it can issue more I/O requests. Consider using a different network connection between the client and the storage system, such as a faster network card or a direct connection rather than a network switch. If you are using Ceph as the underlying storage system, you can try adjusting the Ceph configuration to improve performance. This may include adjusting the number of placement groups, the object size, or the number of OSDs (object storage devices). It's also worth noting that an IOPS bottleneck can also occur on the server side (i.e., within the storage system itself). In this case, you may need to adjust the configuration of the storage system or add more resources (e.g., disks or SSDs) to improve performance. BR, Kerem ?eliker keremceliker.medium.com IBM | Red Hat Champion Sent from my iPhone > On 28 Dec 2022, at 08:13, openstack-discuss-request at lists.openstack.org wrote: > > ?Send openstack-discuss mailing list submissions to > openstack-discuss at lists.openstack.org > > To subscribe or unsubscribe via the World Wide Web, visit > https://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss > > or, via email, send a message with subject or body 'help' to > openstack-discuss-request at lists.openstack.org > > You can reach the person managing the list at > openstack-discuss-owner at lists.openstack.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of openstack-discuss digest..." > > > Today's Topics: > > 1. [ops][nova] RBD IOPS bottleneck on client-side (Can ?zyurt) > 2. [keystone][Meeting] Reminder Keystone meeting is cancelled > today (Dave Wilde) > 3. Nova libvirt/kvm sound device (Zakhar Kirpichenko) > 4. Re: [Tacker][SRBAC] Update regarding implementation of > project personas in Tacker (Yasufumi Ogawa) > 5. Re: [Tacker][SRBAC] Update regarding implementation of > project personas in Tacker (manpreet kaur) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Tue, 27 Dec 2022 15:33:56 +0300 > From: Can ?zyurt > To: OpenStack Discuss > Subject: [ops][nova] RBD IOPS bottleneck on client-side > Message-ID: > > Content-Type: text/plain; charset="UTF-8" > > Hi everyone, > > I hope you are all doing well. We are trying to pinpoint an IOPS > problem with RBD and decided to ask you for your take on it. > > 1 control plane > 1 compute node > 5 storage with 8 SSD disks each > Openstack Stein/Ceph Mimic deployed with kolla-ansible on ubuntu-1804 > (kernel 5.4) > isolcpus 4-127 on compute > vcpu_pin_set 4-127 in nova.conf > > image_metadatas: > hw_scsi_model: virtio-scsi > hw_disk_bus: scsi > flavor_metadatas: > hw:cpu_policy: dedicated > > What we have tested: > fio --directory=. --ioengine=libaio --direct=1 > --name=benchmark_random_read_write --filename=test_rand --bs=4k > --iodepth=32 --size=1G --readwrite=randrw --rwmixread=50 --time_based > --runtime=300s --numjobs=16 > > 1. First we run the fio test above on a guest VM, we see average 5K/5K > read/write IOPS consistently. What we realize is that during the test, > one single core on compute host is used at max, which is the first of > the pinned cpus of the guest. 'top -Hp $qemupid' shows that some > threads (notably tp_librbd) share the very same core throughout the > test. (also emulatorpin set = vcpupin set as expected) > 2. We remove isolcpus and every other configuration stays the same. > Now fio tests now show 11K/11K read/write IOPS. No bottlenecked single > cpu on the host, observed threads seem to visit all emulatorpins. > 3. We bring isolcpus back and redeploy the cluster with Train/Nautilus > on ubuntu-1804. Observations are identical to #1. > 4. We tried replacing vcpu_pin_set to cpu_shared_set and > cpu_dedicated_set to be able to pin emulator cpuset to 0-4 to no > avail. Multiple guests on a host can easily deplete resources and IOPS > drops. > 5. Isolcpus are still in place and we deploy Ussuri with kolla-ansible > and Train (to limit the moving parts) with ceph-ansible both on > ubuntu-1804. Now we see 7K/7K read/write IOPS. > 6. We destroy only the compute node and boot it with ubuntu-2004 with > isolcpus set. Add it back to the existing cluster and fio shows > slightly above 10K/10K read/write IOPS. > > > What we think happens: > > 1. Since isolcpus disables scheduling between given cpus, qemu process > and its threads are stuck at the same cpu which created the > bottleneck. They should be runnable on any given emulatorpin cpus. > 2. Ussuri is more performant despite isolcpus, with the improvements > made over time. > 3. Ubuntu-2004 is more performant despite isolcpus, with the > improvements made over time in the kernel. > > Now the questions are: > > 1. What else are we missing here? > 2. Are any of those assumptions false? > 3. If all true, what can we do to solve this issue given that we > cannot upgrade openstack nor ceph on production overnight? > 4. Has anyone dealt with this issue before? > > We welcome any opinion and suggestions at this point as we need to > make sure that we are on the right path regarding the problem and > upgrade is not the only solution. Thanks in advance. > > > > ------------------------------ > > Message: 2 > Date: Tue, 27 Dec 2022 07:00:30 -0800 > From: Dave Wilde > To: openstack-discuss at lists.openstack.org > Subject: [keystone][Meeting] Reminder Keystone meeting is cancelled > today > Message-ID: > > Content-Type: text/plain; charset="utf-8" > > Just a quick reminder that there won?t be the keystone weekly meeting > today. We?ll resume our regulararly scheduled programming on 03-Jan-2023. > Please update the agenda if you have anything you?d like to discuess. The > reviewathon is also cancelled this week, to be resumed on 06-Jan-2023. > > /Dave > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > > ------------------------------ > > Message: 3 > Date: Tue, 27 Dec 2022 17:40:47 +0200 > From: Zakhar Kirpichenko > To: openstack-discuss at lists.openstack.org > Subject: Nova libvirt/kvm sound device > Message-ID: > > Content-Type: text/plain; charset="utf-8" > > Hi! > > I'd like to have the following configuration added to every guest on a > specific host managed by Nova and libvirt/kvm: > > >
function='0x0'/> > > > When I add the device manually to instance xml, it works as intended but > the instance configuration gets overwritten on instance stop/start or hard > reboot via Nova. > > What is the currently supported / proper way to add a virtual sound device > without having to modify libvirt or Nova code? I would appreciate any > advice. > > Best regards, > Zakhar > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > > ------------------------------ > > Message: 4 > Date: Wed, 28 Dec 2022 04:42:16 +0900 > From: Yasufumi Ogawa > To: manpreet kaur > Cc: openstack-discuss > Subject: Re: [Tacker][SRBAC] Update regarding implementation of > project personas in Tacker > Message-ID: <5e4a7010-ffc7-83d9-e74b-ed6aae76c15c at gmail.com> > Content-Type: text/plain; charset=UTF-8; format=flowed > > Hi Manpreet-san, > > Thanks for your notice. I've started to review and understood this > change is considered backward compatibility from suggestions on the > etherpad. Although it's LGTM, I'd like to ask if Yuta has any comment > for the proposal because he has also propose a policy management feature > in this release. > > For the deadline, let us discuss again to reschedule it if we cannot > merge the deadline. > > Thanks, > Yasufumi > >> On 2022/12/26 14:07, manpreet kaur wrote: >> Hi Ogawa san and Tacker team, >> >> This mailer is regarding the SRBAC implementation happening in Tacker. >> >> In the Tacker release 2023.1 virtual PTG [1], it was decided by the >> Tacker community to partially implement the project personas >> (project-reader role) in the current release. And in upcoming releases, >> we will implement the remaining project-member role. >> >> To address the above requirement, I have prepared a specification [2] >> and pushed the same in Gerrit for community review. >> >> Ghanshyam san reviewed the specification and shared TC's opinion and >> suggestion to implement both the project-reader and project-member roles. >> The complete persona implementation will depreciate the 'owner' rule, >> and?help in?restricting any other role to accessing project-based resources. >> Additionally, intact legacy admin (current admin), works?in?the same way >> so that we do not break things and introduce the project personas which >> should be additional things to be available for operators to adopt. >> >> Current Status: Incorporated the new?requirement and uploaded a new >> patch set to address the review comment. >> >> Note: The Tacker spec freeze date is 28th Dec 2022, there might be some >> delay in merging the specification in shared timelines. >> >> [1] https://etherpad.opendev.org/p/tacker-antelope-ptg#L186 >> >> [2] https://review.opendev.org/c/openstack/tacker-specs/+/866956 >> >> >> Thanks & Regards, >> Manpreet Kaur > > > > ------------------------------ > > Message: 5 > Date: Wed, 28 Dec 2022 10:35:32 +0530 > From: manpreet kaur > To: Yasufumi Ogawa > Cc: openstack-discuss > Subject: Re: [Tacker][SRBAC] Update regarding implementation of > project personas in Tacker > Message-ID: > > Content-Type: text/plain; charset="utf-8" > > Hi Ogawa san, > > Thanks for accepting the new RBAC proposal, please find the latest > patch-set 7 [1] as the final version. > Would try to merge the specification within the proposed timelines. > > @Ghanshyam san, > Thanks for adding clarity to the proposed changes and for a quick review. > > [1] https://review.opendev.org/c/openstack/tacker-specs/+/866956 > > Best Regards, > Manpreet Kaur > >> On Wed, Dec 28, 2022 at 1:12 AM Yasufumi Ogawa wrote: >> >> Hi Manpreet-san, >> >> Thanks for your notice. I've started to review and understood this >> change is considered backward compatibility from suggestions on the >> etherpad. Although it's LGTM, I'd like to ask if Yuta has any comment >> for the proposal because he has also propose a policy management feature >> in this release. >> >> For the deadline, let us discuss again to reschedule it if we cannot >> merge the deadline. >> >> Thanks, >> Yasufumi >> >>> On 2022/12/26 14:07, manpreet kaur wrote: >>> Hi Ogawa san and Tacker team, >>> >>> This mailer is regarding the SRBAC implementation happening in Tacker. >>> >>> In the Tacker release 2023.1 virtual PTG [1], it was decided by the >>> Tacker community to partially implement the project personas >>> (project-reader role) in the current release. And in upcoming releases, >>> we will implement the remaining project-member role. >>> >>> To address the above requirement, I have prepared a specification [2] >>> and pushed the same in Gerrit for community review. >>> >>> Ghanshyam san reviewed the specification and shared TC's opinion and >>> suggestion to implement both the project-reader and project-member roles. >>> The complete persona implementation will depreciate the 'owner' rule, >>> and help in restricting any other role to accessing project-based >> resources. >>> Additionally, intact legacy admin (current admin), works in the same way >>> so that we do not break things and introduce the project personas which >>> should be additional things to be available for operators to adopt. >>> >>> Current Status: Incorporated the new requirement and uploaded a new >>> patch set to address the review comment. >>> >>> Note: The Tacker spec freeze date is 28th Dec 2022, there might be some >>> delay in merging the specification in shared timelines. >>> >>> [1] https://etherpad.opendev.org/p/tacker-antelope-ptg#L186 >>> >>> [2] https://review.opendev.org/c/openstack/tacker-specs/+/866956 >>> >>> >>> Thanks & Regards, >>> Manpreet Kaur >> > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > > ------------------------------ > > Subject: Digest Footer > > _______________________________________________ > openstack-discuss mailing list > openstack-discuss at lists.openstack.org > > > ------------------------------ > > End of openstack-discuss Digest, Vol 50, Issue 63 > ************************************************* -------------- next part -------------- An HTML attachment was scrubbed... URL: From sandcruz666 at gmail.com Thu Dec 29 10:05:55 2022 From: sandcruz666 at gmail.com (K Santhosh) Date: Thu, 29 Dec 2022 15:35:55 +0530 Subject: kolla openstack the freezer end point is not reachable Message-ID: Hai , In kolla openstack the freezer end point is not reachable and the freezer api couldn't reach in web browser if any document is there make it to me -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: unnamed.png Type: image/png Size: 36461 bytes Desc: not available URL: From gmann at ghanshyammann.com Thu Dec 29 19:55:39 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 29 Dec 2022 11:55:39 -0800 Subject: [all][requirements][qa]: requirement-checks job broken for <=stable/yoga because it run on ubuntu-jammy on stable branch Message-ID: <1855f745c9c.dc037d01103221.8611563073122375054@ghanshyammann.com> Hello Everyone, While base jobs in opendev moved to run on ubuntu-jammy, requirement-checks job start running on ubuntu-jammy which is ok for the master but not for stable branches. As result, requirement-checks on stable branch running on ubuntu-jammy and started failing for stable/yoga and less: Error: No package matching 'python-dev' is available - https://zuul.opendev.org/t/openstack/build/811ca9311b894b97bfae4438070e5e22 Fixing it by setting the nodeset explicitly in that job so that it does not get changed in future when we move to ubuntu new version for testing. - https://review.opendev.org/q/Ie676079c0b798335ab2a82c03fe4715e0eee4d57 If you see the requirement-checks job failing (if there is a requirement change in the stable branch backport), please hold the recheck until the above fixes are merged. As tox4 fails the requirement gate, pushing the above fixes on top of tox4 fixes - https://review.opendev.org/q/I7bc7c98954395765f16cc943e5c826983db5dba0 -gmann From thierry at openstack.org Fri Dec 30 09:46:59 2022 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 30 Dec 2022 10:46:59 +0100 Subject: [release] Release countdown for week R-11, Jan 2 - 6 Message-ID: <27c8b2b6-edd2-552d-d657-9911bce7c5d3@openstack.org> Development Focus ----------------- The Antelope-2 milestone is next week, on January 5, 2023! Antelope-related specs should now be finalized so that teams can move to implementation ASAP. Some teams observe specific deadlines on the second milestone (mostly spec freezes): please refer to https://releases.openstack.org/antelope/schedule.html for details. General Information ------------------- Libraries need to be released at least once per milestone period. Next week, the release team will propose releases for any library that has not been otherwise released since milestone 1. PTL's and release liaisons, please watch for these and give a +1 to acknowledge them. If there is some reason to hold off on a release, let us know that as well. A +1 would be appreciated, but if we do not hear anything at all by the end of the week, we will assume things are OK to proceed. Remember that non-library deliverables that follow the cycle-with-intermediary release model should have an intermediary?release before milestone-2. Those who haven't will be proposed to switch to the cycle-with-rc model, which is more suited to deliverables that are released only once per cycle. Next week is also the deadline to freeze the contents of the final release. All new 'Antelope' deliverables need to have a deliverable file in https://opendev.org/openstack/releases/src/branch/master/deliverables and need to have done a release by milestone-2. Changes proposing those deliverables for inclusion in Antelope have been posted, please update them with an actual release request before the milestone-2 deadline if you plan on including that deliverable in Antelope, or -1 if you need one more cycle to be ready. Upcoming Deadlines & Dates -------------------------- Antelope-2 Milestone: January 5th, 2023 OpenInfra Summit Vancouver CFP deadline: January 10th, 2023 Final 2023.1 Antelope release: March 22nd, 2023 -- Thierry Carrez (ttx) From jmeng at redhat.com Fri Dec 30 16:05:34 2022 From: jmeng at redhat.com (Jakob Meng) Date: Fri, 30 Dec 2022 17:05:34 +0100 Subject: [ansible][ACO] - Contribution questions In-Reply-To: <93eb35c5-2f88-b119-a8d7-26d0b3d5a7b2@redhat.com> References: <93eb35c5-2f88-b119-a8d7-26d0b3d5a7b2@redhat.com> Message-ID: PS: tox issues have been fixed in master branch of Ansible OpenStack collection, please rebase your patches :) On 27.12.22 11:40, Jakob Meng wrote: > Hello Ga?l, > thank you for giving us feedback on our Ansible modules and actually > submitting a new module ?? We are currently lagging a bit in responses > because we are trying to get release 2.0.0 of the Ansible OpenStack > collection out of the door in January 2023. > > As part of this effort we also refactored our CI integration tests, it > is more consistent nowadays but still not complete. With > compute_service_info you picked our worst case, it is tested in role > nova_services ? A relict from the past. > > Initially we planned to write one Ansible role per module, e.g. role > project_info for openstack.cloud.project_info. But doing so produced a > lot of redundant code. So during this year we changed our plan. Now we > merge tests for *_info modules with their non-info equivalents. For > example, tests for both modules federation_mapping and > federation_mapping_info can be found in role federation_mapping. > > Integration tests for volume_service_info would be located in Ansible > role volume_service. > > Instead of compute_service_info better take neutron_rbac_policies_info > as an example of how to write and test *_info modules. Refactoring > compute_service_info is still on my todo list ? Same goes for our > docs on how to write modules etc. ? > > Best, > Jakob > > On 27.12.22 00:11, Ga?l THEROND wrote: >> Hi sadi, >> >> Thanks for this feedback! >> I?ll wait for this patch to be merged then, no biggies as it?s >> currently greetings season?s so no rush xD >> >> I?ll probably have few patches after that especially around unifying >> options (filtering especially) on few modules. >> >> Thanks for the answer! >> >> Le?lun. 26 d?c. 2022 ? 21:06, Sagi Shnaidman a >> ?crit?: >> >> Hi,? Gael, >> >> Thanks for your contribution! Currently the tox-2.12 CI job >> always fails, it's because of tox version 4 changes. I add a >> workaround in the patch >> https://review.opendev.org/c/openstack/ansible-collections-openstack/+/868607 >> >> When it (or other solution) is merged, you're good to go with >> your patch. Sorry for the inconvenience . >> >> Thanks >> >> >> On Sat, Dec 24, 2022 at 4:22 PM Ga?l THEROND >> wrote: >> >> Hi ansible collections openstack team! >> >> I finally had time to list all my issues met with the >> project, created few bug reports and even contributed to a >> patch today (minor, mainly copy/paste) however I?ve few >> questions regarding the CI process! >> >> Overall, what?s the rule with the CI code testing? >> I?ve read the contributing guide and had an eye on previous >> patches to see how it?s used but I?m having a hard time to >> find a real unified method. For instance, it seems that some >> module miss CI tasks (such as compute_service_info) or did I >> missed something? >> >> Thanks a lot for all the good job! >> >> >> >> -- >> Best regards >> Sagi Shnaidman >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Fri Dec 30 16:17:31 2022 From: smooney at redhat.com (Sean Mooney) Date: Fri, 30 Dec 2022 16:17:31 +0000 Subject: Glance api deployed only on a single controller on multi-controller deployment [kolla] In-Reply-To: References: Message-ID: On Mon, 2022-12-19 at 22:29 +0530, pradeep wrote: > Hi, > > Sorry to hijack this thread. I have a similar issue. I have used netapp NAS > share, mounted in all the 3 controllers as /glanceimages and modified > glance_file_datadir_volume:/glanceimages in globals.yml. I have tried > deploy and reconfigure commands but was unlucky. It doesn't deploy > glance-api containers in controller2 and 3. Please let me know if you > succeed in this. looking at the glance-api group it is defiend as the childeren of the glance group https://github.com/openstack/kolla-ansible/blob/master/ansible/inventory/multinode#L263-L264 [glance-api:children] glance which in trun is defiend as the childern of the controler group https://github.com/openstack/kolla-ansible/blob/master/ansible/inventory/multinode#L114-L115 [glance:children] control looking breifly at the glance role the api will be deployed if its enabled and the current host is in the relevent group https://github.com/openstack/kolla-ansible/blob/master/ansible/roles/glance/tasks/check-containers.yml#L15-L16 host_in_group is defiend here https://github.com/openstack/kolla-ansible/blob/master/ansible/roles/glance/defaults/main.yml#L6 host_in_groups: "{{ inventory_hostname in glance_api_hosts }}" glance_api_hosts is defined here https://github.com/openstack/kolla-ansible/blob/ae3de342e48bc4293564a3b532a66bfbf1326c0d/ansible/group_vars/all.yml#L903 glance_api_hosts: "{{ [groups['glance-api'] | first] if glance_backend_file | bool and glance_file_datadir_volume == 'glance' else groups['glance- api'] }}" so that is what is making it only work with the first host. with that said you have set glance_file_datadir_volume:/glanceimages in the globals.yaml so that should have also caused the else banch to be taken. i have not worked on kolla-ansible for 5+ years at this point but it looks like the ablity to use multiple api instance was modifed by https://github.com/openstack/kolla-ansible/commit/d57c7019a92f498f8dd876f4daea9051801f856c for https://bugs.launchpad.net/kolla-ansible/+bug/1722422 but that simply moved where that variable was defined the actual behavior change was intoduced by https://github.com/openstack/kolla-ansible/commit/4fde486dc8b29b8d087ab4bfff0e626b2479abd2 for https://bugs.launchpad.net/kolla-ansible/+bug/1681301 the intent of that change was not to prevent you runnign multipel glance api servers but just to ensure that by default only one instance was deployed in a multinode config. global.yaml is passed with -e to ansible by kolla-ansible so defining "glance_file_datadir_volume:/glanceimages" should have been enough to enable multiple glance instnaces to be deployed. so it sounds like you did everything correctly but perhaps the kolla-ansible team can point you in the right direction. at first glance you could try defining? glance_api_hosts: [groups['glance-api']] in the global.yaml but that should not be required if you have glance_file_datadir_volume defined to a non default value. > > Regards > Pradeep > > On Mon, 19 Dec 2022 at 14:49, A Monster wrote: > > > Thank you for the clarification. > > So in order to deploy glance on multiple nodes, I need to first set up an > > NFS storage then specify the shared path either by overriding the content > > of ansible/group_vars/all.yml or by adding > > glance_file_datadir_volume:NFS_PATH in globals.yml ? > > Which NFS tool do you think I should use? > > Thank you again. Regards > > > > On Thu, 21 Jul 2022 at 10:42, Pierre Riteau wrote: > > > > > With the default backend (file), Glance is deployed on a single > > > controller, because it uses a local Docker volume to store Glance images. > > > This is explained in the documentation [1]: "By default when using file > > > backend only one glance-api container can be running". See also the > > > definition of glance_api_hosts in ansible/group_vars/all.yml. > > > > > > If you set glance_file_datadir_volume to a non-default path, it > > > is assumed to be on shared storage and kolla-ansible will automatically use > > > all glance-api group members. > > > > > > You can also switch to another backend such as Ceph or Swift. > > > > > > [1] > > > https://docs.openstack.org/kolla-ansible/latest/reference/shared-services/glance-guide.html > > > > > > On Thu, 21 Jul 2022 at 10:57, A Monster wrote: > > > > > > > I've deployed openstack xena using kolla ansible on a centos 8 stream > > > > cluster, using two controller nodes, however I found out after the > > > > deployment that glance api is not available in one node, I tried > > > > redeploying but I got the same behavior, although the deployment finished > > > > without displaying any error. > > > > > > > > Thank you. Regards > > > > > > > > From smooney at redhat.com Fri Dec 30 16:42:15 2022 From: smooney at redhat.com (Sean Mooney) Date: Fri, 30 Dec 2022 16:42:15 +0000 Subject: [ops][nova] RBD IOPS bottleneck on client-side In-Reply-To: References: Message-ID: On Thu, 2022-12-29 at 12:30 +0300, Can ?zyurt wrote: > Thanks for the fast reply and for sharing your experience. > > We have considered removing isolcpus as well but the idea of > introducing noise into guest workload is somewhat concerning. Also > restraining dockerized deployment without isolcpus will not be as > easy. We definitely keep this option as a last resort. in our downstream product and also a a general upstream recomendation we discourage using isolcpus unless its a realtime host. when isolcpus is used you need to ensure that you run the qemu emulator thread on a core that does not overlap with the vm cpus. if you have a new enough nova that supports cpu_shared_set then you can define that in your nova.conf and use the shared emulator threads policy otherwise you will need to use the isolate policy for the emulator threads. The emulator threads policy feature was intoduced in pike https://specs.openstack.org/openstack/nova-specs/specs/pike/implemented/libvirt-emulator-threads-policy.html note that if you are using train or later we changed the meaning of cpu_shared_set https://specs.openstack.org/openstack/nova-specs/specs/train/implemented/cpu-resources.html from train on its primary use when combined with cpu_dedicated_Set is to define the cores that will be used for emulator threads and floating vms. where as before train it was only for emulator threads. the cpu resouces spec explains this change in more detail but i woudl recommend using the new behior if you cloud supprots it as vcpu_pin_set will be removed in a future release (hopefully B or C if we get time.) the performance hit is cause because the emulator thread and the vm CPU are compeating for execution on the same core and with isolcpus the emulator tread will not automatically float to one of the vms cores that is idle since the the kernel schduler is prevented form doing that via isolcpus. when using kvm to accclerate qemu kvm is offloading the device emulation fo the cpu to the kvm kernel module but the device emulation fo storage/network devices like the virtio-blk or virtio-scsi contoler is done on the emulator thread in the absence of iothreads(a qemu feature which nova does not support). as such when using isolcpus the deployer must ensure the emulator thread and vm cpus must not over lap using the hw:cpu_emulator_threads extra specs and config options. if you are not running a realtime kernel you should remove isolcpus if you are then you shoudl correctly configure nova-compute with a pool of cpus to use for the emulator thread and update the flavor accordingly to use hw:cpu_emulator_threads=isolate|share. > > On Wed, 28 Dec 2022 at 15:59, ?mit Seren wrote: > > > > We had a similar performance issue with networking (via openswitch) instead of I/O. > > > > Our hypervisor and VM configuration were like yours (VCPU pinning + isolcpus). We saw a 50% drop in virtualized networking throughput (measure via iperf). > > This was because the vhost_net kthreads which are responsible for the virtualized networking were pinned to 2 cores per socket and this quickly became the bottleneck. This was with OpenStack Queens and RHEL 7.6. > > > > We ended up keeping the VCPU pinning but removing the isolcpus kernel setting. This fixed the performance regression. > > > > > > > > Unfortunately, we didn?t further investigate this, so I don?t know why a newer kernel and/or newer Openstack release improves it. > > > > > > > > Hope this still helps > > > > > > > > Best > > > > ?mit > > > > > > > > On 27.12.22, 13:33, "Can ?zyurt" wrote: > > > > Hi everyone, > > > > > > > > I hope you are all doing well. We are trying to pinpoint an IOPS > > > > problem with RBD and decided to ask you for your take on it. > > > > > > > > 1 control plane > > > > 1 compute node > > > > 5 storage with 8 SSD disks each > > > > Openstack Stein/Ceph Mimic deployed with kolla-ansible on ubuntu-1804 > > > > (kernel 5.4) > > > > isolcpus 4-127 on compute > > > > vcpu_pin_set 4-127 in nova.conf > > > > > > > > image_metadatas: > > > > hw_scsi_model: virtio-scsi > > > > hw_disk_bus: scsi > > > > flavor_metadatas: > > > > hw:cpu_policy: dedicated > > > > > > > > What we have tested: > > > > fio --directory=. --ioengine=libaio --direct=1 > > > > --name=benchmark_random_read_write --filename=test_rand --bs=4k > > > > --iodepth=32 --size=1G --readwrite=randrw --rwmixread=50 --time_based > > > > --runtime=300s --numjobs=16 > > > > > > > > 1. First we run the fio test above on a guest VM, we see average 5K/5K > > > > read/write IOPS consistently. What we realize is that during the test, > > > > one single core on compute host is used at max, which is the first of > > > > the pinned cpus of the guest. 'top -Hp $qemupid' shows that some > > > > threads (notably tp_librbd) share the very same core throughout the > > > > test. (also emulatorpin set = vcpupin set as expected) > > > > 2. We remove isolcpus and every other configuration stays the same. > > > > Now fio tests now show 11K/11K read/write IOPS. No bottlenecked single > > > > cpu on the host, observed threads seem to visit all emulatorpins. > > > > 3. We bring isolcpus back and redeploy the cluster with Train/Nautilus > > > > on ubuntu-1804. Observations are identical to #1. > > > > 4. We tried replacing vcpu_pin_set to cpu_shared_set and > > > > cpu_dedicated_set to be able to pin emulator cpuset to 0-4 to no > > > > avail. Multiple guests on a host can easily deplete resources and IOPS > > > > drops. > > > > 5. Isolcpus are still in place and we deploy Ussuri with kolla-ansible > > > > and Train (to limit the moving parts) with ceph-ansible both on > > > > ubuntu-1804. Now we see 7K/7K read/write IOPS. > > > > 6. We destroy only the compute node and boot it with ubuntu-2004 with > > > > isolcpus set. Add it back to the existing cluster and fio shows > > > > slightly above 10K/10K read/write IOPS. > > > > > > > > > > > > What we think happens: > > > > > > > > 1. Since isolcpus disables scheduling between given cpus, qemu process > > > > and its threads are stuck at the same cpu which created the > > > > bottleneck. They should be runnable on any given emulatorpin cpus. > > > > 2. Ussuri is more performant despite isolcpus, with the improvements > > > > made over time. > > > > 3. Ubuntu-2004 is more performant despite isolcpus, with the > > > > improvements made over time in the kernel. > > > > > > > > Now the questions are: > > > > > > > > 1. What else are we missing here? > > > > 2. Are any of those assumptions false? > > > > 3. If all true, what can we do to solve this issue given that we > > > > cannot upgrade openstack nor ceph on production overnight? > > > > 4. Has anyone dealt with this issue before? > > > > > > > > We welcome any opinion and suggestions at this point as we need to > > > > make sure that we are on the right path regarding the problem and > > > > upgrade is not the only solution. Thanks in advance. > > > > > > > > > > > > > From gmann at ghanshyammann.com Fri Dec 30 17:55:32 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 30 Dec 2022 09:55:32 -0800 Subject: [release] Release countdown for week R-11, Jan 2 - 6 In-Reply-To: <27c8b2b6-edd2-552d-d657-9911bce7c5d3@openstack.org> References: <27c8b2b6-edd2-552d-d657-9911bce7c5d3@openstack.org> Message-ID: <185642cbf43.11973d0b9144130.569191593749934845@ghanshyammann.com> ---- On Fri, 30 Dec 2022 01:46:59 -0800 Thierry Carrez wrote --- > Development Focus > ----------------- > > The Antelope-2 milestone is next week, on January 5, 2023! > Antelope-related specs should now be finalized so that teams can move to > implementation ASAP. Some teams observe specific deadlines on the second > milestone (mostly spec freezes): please refer to > https://releases.openstack.org/antelope/schedule.html for details. > > General Information > ------------------- > > Libraries need to be released at least once per milestone period. Next > week, the release team will propose releases for any library that has > not been otherwise released since milestone 1. PTL's and release > liaisons, please watch for these and give a +1 to acknowledge them. If > there is some reason to hold off on a release, let us know that as well. > A +1 would be appreciated, but if we do not hear anything at all by the > end of the week, we will assume things are OK to proceed. > > Remember that non-library deliverables that follow the > cycle-with-intermediary release model should have an > intermediary?release before milestone-2. Those who haven't will be > proposed to switch to the cycle-with-rc model, which is more suited to > deliverables that are released only once per cycle. > > Next week is also the deadline to freeze the contents of the final > release. All new 'Antelope' deliverables need to have a deliverable file > in https://opendev.org/openstack/releases/src/branch/master/deliverables > and need to have done a release by milestone-2. > > Changes proposing those deliverables for inclusion in Antelope have been > posted, please update them with an actual release request before the > milestone-2 deadline if you plan on including that deliverable in > Antelope, or -1 if you need one more cycle to be ready. > I am sure tox4 failure will cause some delay in this milestone release especially who has fewer active contributors or they are on extended holidays. Many of us are fixing tox4 errors in many projects, requesting them to merge asap, or other projects can refer to the changes as they are very straight forwards. - https://review.opendev.org/q/topic:tox4 -gmann > Upcoming Deadlines & Dates > -------------------------- > > Antelope-2 Milestone: January 5th, 2023 > OpenInfra Summit Vancouver CFP deadline: January 10th, 2023 > Final 2023.1 Antelope release: March 22nd, 2023 > > -- > Thierry Carrez (ttx) > > From fungi at yuggoth.org Fri Dec 30 22:14:34 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 30 Dec 2022 22:14:34 +0000 Subject: [ansible][ACO] - Contribution questions In-Reply-To: References: <93eb35c5-2f88-b119-a8d7-26d0b3d5a7b2@redhat.com> Message-ID: <20221230221434.j4zck4p4arh2fcgo@yuggoth.org> On 2022-12-30 17:05:34 +0100 (+0100), Jakob Meng wrote: > PS: tox issues have been fixed in master branch of Ansible > OpenStack collection, please rebase your patches :) [...] Zuul always tests by merging proposed changes to the latest state of the targeted branch, so rebasing should only be necessary in order to address merge conflicts. If you want to get updated test results which take the tox fixes into account, leaving a review comment like "recheck now that tox configuration is fixed" (or anything starting with the word "recheck" really) should be sufficient. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From jmeng at redhat.com Sat Dec 31 17:34:02 2022 From: jmeng at redhat.com (Jakob Meng) Date: Sat, 31 Dec 2022 18:34:02 +0100 Subject: [ansible][ACO] - Contribution questions In-Reply-To: <20221230221434.j4zck4p4arh2fcgo@yuggoth.org> References: <93eb35c5-2f88-b119-a8d7-26d0b3d5a7b2@redhat.com> <20221230221434.j4zck4p4arh2fcgo@yuggoth.org> Message-ID: On 30.12.22 23:14, Jeremy Stanley wrote: > On 2022-12-30 17:05:34 +0100 (+0100), Jakob Meng wrote: >> PS: tox issues have been fixed in master branch of Ansible >> OpenStack collection, please rebase your patches :) > [...] > > Zuul always tests by merging proposed changes to the latest state of > the targeted branch, so rebasing should only be necessary in order > to address merge conflicts. If you want to get updated test results > which take the tox fixes into account, leaving a review comment like > "recheck now that tox configuration is fixed" (or anything starting > with the word "recheck" really) should be sufficient. Without a rebase, a new (additional) merge commit will be created. Merge commits vs rebases is a controversial topic, both have their pros and cons [1][2]. For Ansible OpenStack collection, I prefer to rebase each patch, esp. before +w'ing it, to get a flat and readable history. This helps me with keeping track of what has to be backported from master branch to stable/1.0.0 branch. YMMV. ? [1] https://www.atlassian.com/git/articles/git-team-workflows-merge-or-rebase [2] https://stackoverflow.com/questions/457927/git-workflow-and-rebase-vs-merge-questions/11219380 From fungi at yuggoth.org Sat Dec 31 21:02:30 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 31 Dec 2022 21:02:30 +0000 Subject: [ansible][ACO] - Contribution questions In-Reply-To: References: <93eb35c5-2f88-b119-a8d7-26d0b3d5a7b2@redhat.com> <20221230221434.j4zck4p4arh2fcgo@yuggoth.org> Message-ID: <20221231210229.bndwt6vqjtlxh6kj@yuggoth.org> On 2022-12-31 18:34:02 +0100 (+0100), Jakob Meng wrote: [...] > Without a rebase, a new (additional) merge commit will be created. > Merge commits vs rebases is a controversial topic, both have their > pros and cons [1][2]. For Ansible OpenStack collection, I prefer > to rebase each patch, esp. before +w'ing it, to get a flat and > readable history. This helps me with keeping track of what has to > be backported from master branch to stable/1.0.0 branch. YMMV. ? [...] I suppose my experience is clouded by working on projects like OpenStack where you have multiple reviewers approving changes in the same repositories and merge-time testing, so no guarantees that the resulting commits will be fast-forward merges unless you explicitly serialize them (effectively abandoning the benefits of Zuul's parallel gating). Outside OpenStack, I agree that some people get weird about merge commits and consider them "unclean" or somehow complicating bisection (they don't necessarily but that's another story). For OpenStack and other projects relying on OpenDev's Gerrit and Zuul deployments however, where the merge mechanism is merge-when-necessary, trying to avoid merge commits seems like a lot of pointless effort and tacking into the wind rather than going with the flow. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From satish.txt at gmail.com Sat Dec 31 21:40:57 2022 From: satish.txt at gmail.com (Satish Patel) Date: Sat, 31 Dec 2022 16:40:57 -0500 Subject: [cloudkitty][horizon] change service name list in GUI Message-ID: Folks, I am trying to configure rating for network data bytes in/out but what I have seen is that the ceilometer uses a different name than what cloudkitty GUI drop down-menu has. Ceilometer using "network.outgoing.bytes" and "network.incoming.bytes" Cloudkitty GUI drop-down list has "network.outgoing.bytes.rate" & "network.incoming.bytes.rate" I have tried to change it in cloudkitty metrics.yml file to match it with the ceilometer but the rating is not calculating until I change that in cloudkitty. Please advise how to change it in cloudkitty. -------------- next part -------------- An HTML attachment was scrubbed... URL: